As a person's response to internal or external events, emotion is a comprehensive psychological and physiological state produced by the interweaving of various thoughts, behaviors, and feelings, and it is also an important means of interaction and communication between people. Due to the lack of auditory perception, hearing-impaired people may exhibit biases in emotional perception. The electroencephalogram (EEG) reflects the electrophysiological activity on the surface of the cerebral cortex and can effectively record emotion-related changes in the brain. Therefore, this paper uses emotional movie clips to induce different emotions in hearing-impaired subjects, collects their EEG signals, extracts microstate features, frequency-domain features, and functional brain network features from the EEG, and uses a one-dimensional Deep Residual Shrinkage Network (1D-DRSN) to recognize the emotions of hearing-impaired subjects. The main research content of this paper is as follows:

(1) An emotion-induction experimental paradigm for hearing-impaired subjects is designed and proposed. Emotions were elicited by video: EEG data were collected from 15 hearing-impaired subjects while they watched four types of emotional movie clips (happy, calm, sad, and fearful), and an emotional EEG dataset of hearing-impaired subjects was constructed. The happy, calm, and sad movie clips are the same as those used in the SEED dataset, while the fearful clips were selected by a vote of psychology students.

(2) An EEG microstate-based emotion recognition method for hearing-impaired subjects is proposed. In the microstate analysis, an improved K-means algorithm is used to extract ten microstate classes from the preprocessed EEG signals; these ten classes are then back-fitted to the data to construct a microstate sequence. For feature extraction, six microstate features are selected: Global Explained Variance (GEV), Global Explained Variance Total (GEVT), Global Field Power (GFP), coverage, duration (persistence time), and occurrence (incidence). The 1D-DRSN is used to suppress emotion-irrelevant noise in the microstate features and obtain emotion representations. Finally, the classification performance of different microstate features and microstate classes is explored, demonstrating the effectiveness of microstate features for emotion recognition. The results show that GEV, coverage, and occurrence are highly correlated with the emotions of hearing-impaired subjects.

(3) In the functional brain network analysis, an emotion recognition method based on the fusion of multi-domain EEG features is proposed. Differential Entropy (DE) and Power Spectral Density (PSD) features, which represent the frequency-domain information of emotional EEG signals, are fused with functional brain network features, which represent the spatial-domain information. Different coupling methods and binarization methods for constructing functional brain networks are explored to reveal the connectivity of different brain regions under different emotions, and different brain network attributes are combined to find representative spatial-domain features of emotional information. Experimental results show that constructing the functional brain network with the Phase Locking Value (PLV) at 20% sparsity outperforms the other methods, and that the combination of four network attributes, Clustering Coefficient (CC), Node Degree (ND), Node Betweenness Centrality (NBC), and Node Strength (NS), achieves higher classification accuracy than the other combinations.
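As an illustration of this construction, the following Python sketch computes a PLV connectivity matrix from a band-pass-filtered EEG epoch, binarizes it by keeping the strongest 20% of connections, and derives the four node-level attributes with NetworkX. The channel count, epoch length, and function names are illustrative assumptions rather than the exact pipeline used in this work.

```python
import numpy as np
import networkx as nx
from scipy.signal import hilbert

def plv_matrix(epoch):
    """Phase Locking Value between all channel pairs.
    epoch: array of shape (n_channels, n_samples), band-pass filtered EEG."""
    phase = np.angle(hilbert(epoch, axis=1))   # instantaneous phase per channel
    n = epoch.shape[0]
    plv = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dphi = phase[i] - phase[j]
            plv[i, j] = plv[j, i] = np.abs(np.mean(np.exp(1j * dphi)))
    return plv

def binarize_by_sparsity(w, sparsity=0.20):
    """Keep only the strongest `sparsity` fraction of off-diagonal edges."""
    iu = np.triu_indices_from(w, k=1)
    thresh = np.quantile(w[iu], 1.0 - sparsity)   # e.g. top 20% of PLV values
    adj = (w >= thresh).astype(int)
    np.fill_diagonal(adj, 0)
    return adj

def network_features(w, sparsity=0.20):
    """CC, ND, NBC per node from the binarized graph; NS from the weighted matrix."""
    g = nx.from_numpy_array(binarize_by_sparsity(w, sparsity))
    cc = np.array(list(nx.clustering(g).values()))               # Clustering Coefficient
    nd = np.array([d for _, d in g.degree()])                    # Node Degree
    nbc = np.array(list(nx.betweenness_centrality(g).values()))  # Node Betweenness Centrality
    ns = w.sum(axis=1)                                           # Node Strength (summed PLV)
    return np.concatenate([cc, nd, nbc, ns])

# Example: one 62-channel epoch, 1 s at 200 Hz (shapes are illustrative)
epoch = np.random.randn(62, 200)
features = network_features(plv_matrix(epoch))
```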
Finally, a combination of feature dimensionality reduction and deep learning is designed: Principal Component Analysis (PCA) is combined with the 1D-DRSN for emotion recognition on the fused features. The results show that functional brain network features outperform frequency-domain features for emotion recognition, and that recognition based on the fused features outperforms recognition based on any single feature. The deep learning algorithm designed in this paper identifies the different emotions of hearing-impaired subjects more effectively.
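The exact network configuration is not specified in this abstract; the PyTorch sketch below shows only the core idea of a residual shrinkage unit, channel-wise soft thresholding with thresholds produced by a small sub-network, applied to fused features after PCA reduction. Layer sizes, channel counts, and data shapes are placeholder assumptions.

```python
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

class ShrinkageBlock1d(nn.Module):
    """Residual shrinkage unit: 1-D convolutions plus channel-wise soft thresholding."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm1d(channels), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm1d(channels),
        )
        # Small sub-network that learns one soft threshold per channel
        self.threshold = nn.Sequential(
            nn.Linear(channels, channels), nn.ReLU(),
            nn.Linear(channels, channels), nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (batch, channels, length)
        r = self.body(x)
        absmean = r.abs().mean(dim=2)          # (batch, channels)
        tau = (absmean * self.threshold(absmean)).unsqueeze(2)
        # Soft thresholding: shrink small, noise-like activations toward zero
        r = torch.sign(r) * torch.clamp(r.abs() - tau, min=0.0)
        return torch.relu(x + r)               # residual connection

class DRSN1d(nn.Module):
    def __init__(self, n_classes=4, channels=16):
        super().__init__()
        self.stem = nn.Conv1d(1, channels, kernel_size=3, padding=1)
        self.blocks = nn.Sequential(ShrinkageBlock1d(channels),
                                    ShrinkageBlock1d(channels))
        self.head = nn.Linear(channels, n_classes)

    def forward(self, x):                      # x: (batch, n_features) after PCA
        x = self.stem(x.unsqueeze(1))          # treat the feature vector as a 1-D signal
        x = self.blocks(x)
        return self.head(x.mean(dim=2))        # global average pooling + classifier

# Example: reduce fused DE/PSD/brain-network features to 64 dims, then classify
X = np.random.randn(100, 300)                              # 100 trials, 300 fused features (illustrative)
X64 = PCA(n_components=64).fit_transform(X)
logits = DRSN1d()(torch.tensor(X64, dtype=torch.float32))  # (100, 4) scores for happy/calm/sad/fear
```

(The example above also assumes `import numpy as np`; the four output classes correspond to the happy, calm, sad, and fearful conditions described in the text.)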