Emotions have an important impact on people's daily communication, cognition and decision making. With the rapid development of sensor technology and human-computer interaction technology, emotion recognition based on physiological and non-physiological signals has become a hot research topic. Because emotional changes are closely related to brain activity and EEG signals are difficult to disguise, EEG signals are widely used in emotion recognition studies. Current research on EEG-based emotion recognition focuses on the normal-hearing population. In this paper, we study emotion recognition in hearing-impaired and normal-hearing people based on changes in EEG signals elicited by emotional face pictures. The main contents are as follows: (1) We designed and conducted an EEG emotion experiment based on emotional face picture stimulation in hearing-impaired and normal-hearing people. We selected Chinese facial affective pictures to elicit five emotions (happiness, neutral, sadness, fear and anger) in 20 hearing-impaired subjects and 20 normal-hearing subjects, and constructed EEG emotion datasets for both groups. (2) We explored EEG emotion recognition based on handcrafted features. First, five feature extraction methods, namely Differential Entropy (DE), Power Spectral Density (PSD), Wavelet Entropy (WE), Sample Entropy (SE) and C0 complexity, were compared to analyze how well different features characterize emotion. Six classifiers, linear SVM (SVM-Linear), K-Nearest Neighbor (KNN), polynomial-kernel SVM (SVM-Poly), Gaussian Naïve Bayes (GNB), Gaussian (RBF) kernel SVM (SVM-RBF), and Decision Tree (DT), were used to classify the handcrafted features. The results show that on both the hearing-impaired and the normal-hearing datasets, DE features combined with the SVM-Linear classifier achieve the highest emotion recognition accuracy. (3) An emotion classification model based on the Multi-axis Self-Attention (MaxSA) mechanism
is proposed to capture correlations between brain regions and between channels. Four feature matrices in total, the Subtract Symmetric Matrix (SSM) and Quotient Symmetric Matrix (QSM) of both the original signal and the DE features, are constructed. The fused feature matrix is classified by a multi-axis self-attention model that combines local attention and global attention with Inverted Mobile Bottleneck Convolution (MBConv). The proposed method achieves better classification results on both the hearing-impaired and the normal-hearing datasets. Brain topography maps of the two groups in emotion recognition reveal that the emotion-discriminative brain regions are distributed in the temporal and parietal lobes for hearing-impaired people, and in the temporal and occipital lobes for normal-hearing people.
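For context on the DE feature used above: differential entropy is commonly computed for band-passed EEG under a Gaussianity assumption, giving the closed form DE = ½·log(2πeσ²). A minimal sketch of that computation; the synthetic segment and variable names are illustrative, not from the thesis:

```python
import numpy as np

def differential_entropy(segment):
    """Differential entropy of an EEG segment, assuming the
    band-passed signal is approximately Gaussian:
    DE = 0.5 * log(2 * pi * e * sigma^2)."""
    var = np.var(segment)
    return 0.5 * np.log(2 * np.pi * np.e * var)

# Synthetic stand-in for one band-passed channel (sigma = 2)
rng = np.random.default_rng(0)
segment = rng.normal(0.0, 2.0, size=1000)
de = differential_entropy(segment)
```

In practice this would be applied per channel and per frequency band (e.g. delta through gamma) to build the feature vector fed to the classifiers.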
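The abstract does not spell out how the SSM and QSM are formed; a common construction takes element-wise differences and quotients of features over left/right hemisphere symmetric electrode pairs. A sketch under that assumption, with a hypothetical pair list standing in for the actual montage:

```python
import numpy as np

# Hypothetical (left, right) symmetric electrode index pairs,
# e.g. (F3,F4), (C3,C4), (P3,P4); the thesis montage may differ.
PAIRS = [(0, 1), (2, 3), (4, 5)]

def symmetric_matrices(features):
    """Given a (channels x features) matrix, build the Subtract
    Symmetric Matrix (SSM) and Quotient Symmetric Matrix (QSM)
    from left/right hemisphere electrode pairs."""
    left = features[[l for l, _ in PAIRS]]
    right = features[[r for _, r in PAIRS]]
    ssm = left - right
    qsm = left / (right + 1e-8)  # epsilon guards against division by zero
    return ssm, qsm

feats = np.arange(24, dtype=float).reshape(6, 4) + 1.0
ssm, qsm = symmetric_matrices(feats)
```

Applying this to both the raw signal and the DE features yields the four matrices mentioned above, which are then fused before classification.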
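The thesis model combines MaxSA with MBConv blocks and learned attention projections; the following is a minimal NumPy sketch of the multi-axis idea only, i.e. local attention inside non-overlapping blocks followed by global attention over a dilated grid, without learned Q/K/V weights. The window size `p` and function names are illustrative assumptions:

```python
import numpy as np

def attention(x):
    """Scaled dot-product self-attention over axis 1 of a
    (windows, tokens, dim) array; projections omitted."""
    d = x.shape[-1]
    scores = x @ x.transpose(0, 2, 1) / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numeric stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ x

def multi_axis_attention(fmap, p=2):
    """Sketch of multi-axis self-attention on an (H, W, C) feature
    map: local attention within p x p blocks, then global attention
    across a p x p grid of positions strided over the whole map."""
    H, W, C = fmap.shape
    # Local (block) attention: partition into (H//p)*(W//p) windows
    blocks = fmap.reshape(H // p, p, W // p, p, C)
    blocks = blocks.transpose(0, 2, 1, 3, 4).reshape(-1, p * p, C)
    local = attention(blocks)
    local = (local.reshape(H // p, W // p, p, p, C)
                  .transpose(0, 2, 1, 3, 4).reshape(H, W, C))
    # Global (grid) attention: each window gathers tokens spaced
    # H//p and W//p apart, giving every token a global receptive field
    grid = local.reshape(p, H // p, p, W // p, C)
    grid = grid.transpose(1, 3, 0, 2, 4).reshape(-1, p * p, C)
    out = attention(grid)
    out = (out.reshape(H // p, W // p, p, p, C)
              .transpose(2, 0, 3, 1, 4).reshape(H, W, C))
    return out

# Attention over identical tokens returns them unchanged
out = multi_axis_attention(np.ones((4, 4, 3)))
```

The block step captures correlations between nearby channels, while the grid step relates distant brain regions, which matches the stated goal of modelling both inter-channel and inter-region information.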