With the development of information technology, computer technology is increasingly used in people's daily life and work. More intelligent human-computer interaction is a key factor in improving quality of life and work efficiency, and the automatic recognition of emotional states is one of the important ways to improve the performance of human-computer interaction. The electroencephalogram (EEG) signal, as the most direct response of cortical neurons, is closely related to human emotional states. In recent years, the theory and models of deep learning have been continuously improved, and their advantages in processing complex big data have gradually emerged. Therefore, emotion recognition based on deep learning and EEG signals has become a research hotspot in the field of affective computing. Although existing deep-learning-based EEG emotion recognition research has made some progress, several problems remain. First, the synchronization changes and spatial information of brain regions under different emotional states are easily ignored. Second, how to characterize both the unique features of a single EEG channel and the correlation features among channels, and how to make full use of these features to improve recognition performance, is a key problem that urgently needs to be solved. In view of the above problems, this paper carries out the corresponding research and exploration. The main work is as follows:

(1) To verify the ability of synchronization changes between EEG channels to represent emotional states, a feature extraction method combining synchronization measurement and deep learning is proposed. First, the maximal information coefficient (MIC) is used to measure the synchronization between all pairs of EEG channels, and a grey feature image is constructed based on the spatial positions of the EEG electrodes to achieve a richer representation of emotional features. Then, an unsupervised deep neural network, the principal component analysis network (PCANet), is employed to extract high-level emotional features from the grey feature image. Finally, a support vector machine (SVM) is used for emotion recognition. Experiments show that EEG features based on synchronization measurement can effectively reflect emotional states, and that PCANet can fully extract the synchronization and spatial-domain features to improve emotion recognition performance.

(2) To use the spatial information of multi-channel EEG signals and extract more effective emotion-distinguishing features, an emotion recognition method based on a multiband feature matrix and a deep capsule network (CapsNet) is proposed. The method first extracts the power spectral density (PSD) features of each channel, and then constructs a multiband feature matrix that combines frequency-domain features, spatial-domain information and frequency-band information according to the arrangement of the sensors over the cerebral cortex. Finally, a deep CapsNet is used for emotion recognition. Experimental results show that the spatial features of multi-channel EEG signals benefit emotion recognition, and that the CapsNet can effectively capture the spatial-domain information to improve emotion recognition performance.

(3) To make full use of the unique features of each EEG channel and the multi-channel correlation features of brain areas, an emotion recognition method using multi-channel three-dimensional (3D) features and a multivariable convolutional neural network (CNN) is proposed. First, the time-domain features of each EEG channel are extracted, and a three-dimensional feature matrix is constructed according to the electrode arrangement, so as to achieve a feature representation closer to the real activity of the brain. Then, the multivariable CNN is used to recognize emotional states. Experiments show that the 3D feature matrix can more effectively represent emotional changes in EEG, and that the multivariable CNN can make full use of single-channel features and the correlation features among channels to improve emotion recognition performance.

Finally, to verify the effectiveness of the proposed algorithms, an EEG-based emotion recognition system is designed and implemented. The system first plays stimulation videos to collect emotion-related EEG signals, then uses the algorithms proposed in this paper to preprocess the signals and extract features. The features are then fed into the deep learning models, and the system outputs the user's emotional state. The system achieves the expected results.
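The channel-pair synchronization feature of method (1) can be sketched roughly as follows. This is a minimal illustration, not the thesis implementation: it assumes a generic multichannel array, and it substitutes the absolute Pearson correlation for MIC (in practice MIC would be computed with a dedicated library such as minepy). The grey feature image here is simply the normalized pairwise matrix; the thesis additionally arranges values by electrode position.

```python
import numpy as np

def synchronization_matrix(eeg, measure=None):
    """Pairwise synchronization of EEG channels.

    eeg: array of shape (n_channels, n_samples).
    measure: callable(x, y) -> float; defaults to |Pearson r| as a
    simple stand-in for the maximal information coefficient (MIC).
    """
    if measure is None:
        measure = lambda x, y: abs(np.corrcoef(x, y)[0, 1])
    n = eeg.shape[0]
    sync = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):  # symmetric, so compute the upper triangle once
            s = measure(eeg[i], eeg[j])
            sync[i, j] = sync[j, i] = s
    return sync

def to_grey_image(sync):
    """Rescale a synchronization matrix to an 8-bit grey image."""
    lo, hi = sync.min(), sync.max()
    img = (sync - lo) / (hi - lo + 1e-12) * 255.0
    return img.astype(np.uint8)

# Toy example: 4 channels, 256 samples; channel 1 closely tracks channel 0.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((4, 256))
eeg[1] = eeg[0] + 0.1 * rng.standard_normal(256)
img = to_grey_image(synchronization_matrix(eeg))
```

In the full pipeline, an image like `img` would be the input to PCANet, whose output features are classified by an SVM.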
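The multiband feature matrix of method (2) can be sketched in the same spirit. This is a hedged sketch under assumed details: a hypothetical 3x3 scalp grid standing in for the real electrode montage, four illustrative frequency bands, and a plain FFT periodogram standing in for the thesis's PSD estimator.

```python
import numpy as np

# Hypothetical 3x3 grid for a 9-channel montage; the thesis maps the full
# montage onto the sensor arrangement over the cerebral cortex.
GRID = {0: (0, 0), 1: (0, 1), 2: (0, 2),
        3: (1, 0), 4: (1, 1), 5: (1, 2),
        6: (2, 0), 7: (2, 1), 8: (2, 2)}
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_power(x, fs):
    """Mean PSD per frequency band via a simple FFT periodogram."""
    freqs = np.fft.rfftfreq(x.size, 1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * x.size)
    return {b: psd[(freqs >= lo) & (freqs < hi)].mean()
            for b, (lo, hi) in BANDS.items()}

def multiband_matrix(eeg, fs):
    """Stack one spatial grid per band -> (n_bands, H, W) feature matrix."""
    out = np.zeros((len(BANDS), 3, 3))
    for ch, (r, c) in GRID.items():
        bp = band_power(eeg[ch], fs)
        for k, b in enumerate(BANDS):
            out[k, r, c] = bp[b]
    return out

# Toy example: a strong 10 Hz (alpha) rhythm at the central channel.
fs = 128
rng = np.random.default_rng(1)
t = np.arange(fs * 2) / fs
eeg = rng.standard_normal((9, fs * 2))
eeg[4] += 5 * np.sin(2 * np.pi * 10 * t)
feat = multiband_matrix(eeg, fs)
```

A matrix like `feat` preserves which band and which scalp location a feature came from, which is what lets the downstream CapsNet exploit spatial-domain information.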