As a product of the physiological activity of the brain, the EEG has great research value in the study of brain mechanisms, artificial intelligence, human-computer interaction, and other fields. Owing to the strong correlation between EEG signals and emotion, EEG-based emotion recognition has gradually become a research focus in recent years, and it also provides a good entry point for studying the working mechanism of the brain. As a non-stationary random signal, emotional EEG admits many feature extraction methods, but most of them focus on the frequency and time domains, neglecting the spatial domain and multi-modal feature fusion. In addition, existing emotional EEG classification models suffer from shortcomings such as poor global perception, poor parallelism, and poor interpretability. Accordingly, the main research contents and innovations of this paper are summarized as follows:

(1) To address the difficulty of extracting emotional EEG features and the poor effect of feature fusion, this paper first proposes a frequency-domain feature extraction method and, through experiments, selects differential entropy as the algorithm for summarizing time-series features. The spatial-domain features of the original signals are preserved by an equivalent matrix mapping, and frequency-domain and time-domain features are fused in a multi-modal manner. Finally, a classification model is matched to the characteristics of the multi-modal features and a comparative experiment is carried out. Experimental results show that binary classification accuracy improves by 1.7% and four-class accuracy by 7%; compared with the frequency-spatial-domain feature extraction method, binary classification accuracy improves by 2.2% and four-class accuracy by 1.1%, verifying the validity of the method.

(2) To address the poor global perception, parallelism, and interpretability of current models, this paper improves the
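The differential-entropy (DE) feature selected above can be sketched as follows. Under a Gaussian assumption, DE of a band-limited signal reduces to 0.5·ln(2πeσ²). This is a minimal illustration, not the paper's implementation: the sampling rate, band boundaries, the crude FFT-based band-pass, and the synthetic single-channel signal are all assumptions.

```python
import numpy as np

FS = 200  # sampling rate in Hz (assumed)
# Conventional EEG band boundaries in Hz (assumed, not from the paper)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
         "beta": (14, 31), "gamma": (31, 50)}

def bandpass_fft(x, low, high, fs=FS):
    # Crude FFT-based band-pass: zero all spectral bins outside [low, high).
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    spec[(freqs < low) | (freqs >= high)] = 0.0
    return np.fft.irfft(spec, n=len(x))

def differential_entropy(x):
    # Gaussian assumption: DE = 0.5 * ln(2 * pi * e * sigma^2)
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

rng = np.random.default_rng(0)
eeg = rng.standard_normal(FS * 10)  # 10 s of synthetic single-channel "EEG"
de = {band: differential_entropy(bandpass_fft(eeg, lo, hi))
      for band, (lo, hi) in BANDS.items()}
```

Computing one DE value per band and per electrode yields the per-band feature vectors that the paper fuses with time-domain features.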
CNN-LSTM emotion recognition model of Chapter 3 through the attention mechanism. First, a convolutional attention module is introduced into the CNN to give it global perception ability. Second, the Transformer model used in natural language processing is adapted to the emotional EEG classification task, which enhances parallelism and improves accuracy. Experimental results show that, compared with the baseline model in Chapter 3, the improved CNN raises binary classification accuracy by 1.1% and four-class accuracy by 1.5%, while the improved Transformer raises binary and four-class accuracy by 0.6% and 0.5%, respectively, confirming the superiority of the model in emotion recognition tasks. This paper also studies the interpretability of the emotion recognition model: the weight of each electrode is derived through the model's CBAM mechanism and visualized on a brain-region map for analysis. The experimental results are consistent with known physiological characteristics, which verifies the rationality of the model.

In summary, this paper proposes a frequency-domain feature extraction method for emotional EEG to overcome the difficulty traditional methods have in extracting EEG features; experiments show that differential entropy, computed over separate EEG frequency bands, effectively summarizes time-series features. Second, since current feature extraction algorithms concentrate on the frequency and time domains and seldom consider the spatial domain, a multi-modal feature extraction method fusing three-dimensional features is proposed along with a matching classification model; comparisons with frequency-spatial-domain and frequency-domain feature models verify the necessity and effectiveness of multi-modal feature fusion. Finally, to address the low accuracy, poor parallelism, and poor interpretability of the
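The per-electrode weights derived from CBAM can be sketched as a channel-attention computation over a feature map whose channel axis corresponds to electrodes. This is a simplified numpy illustration of the idea only: the shared MLP of CBAM is omitted, and the 62-electrode count, 9x9 grid, and random feature map are assumptions rather than details from the paper.

```python
import numpy as np

def channel_attention(feature_map):
    # feature_map: (channels, height, width); channels play the role of electrodes.
    avg_pool = feature_map.mean(axis=(1, 2))  # (C,) global average descriptor
    max_pool = feature_map.max(axis=(1, 2))   # (C,) global max descriptor
    # CBAM passes both descriptors through a shared MLP; here that MLP is
    # omitted and the summed descriptors go straight through a sigmoid,
    # yielding one importance weight in (0, 1) per channel (electrode).
    return 1.0 / (1.0 + np.exp(-(avg_pool + max_pool)))

rng = np.random.default_rng(1)
fmap = rng.standard_normal((62, 9, 9))  # 62 electrodes on a 9x9 grid (assumed)
w = channel_attention(fmap)
ranked = np.argsort(w)[::-1]  # electrode indices ordered by attention weight
```

Plotting the ranked weights at their scalp positions gives the brain-region visualization used for the interpretability analysis.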
classification model, an integrated model based on the attention mechanism is proposed. Experiments show that this model outperforms the baseline, and a visual analysis of the attention mechanism verifies its interpretability.
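The equivalent matrix mapping that preserves spatial-domain features can be sketched as placing each electrode's scalar feature (e.g. its DE value in one band) at its scalp position in a 2-D grid; stacking the grids of all bands then yields the three-dimensional fused feature. The toy 3x3 grid, the nine electrode names, and their positions below are illustrative assumptions.

```python
import numpy as np

# (row, col) scalp positions for a toy 9-electrode montage (assumed layout).
POSITIONS = {"Fp1": (0, 0), "Fpz": (0, 1), "Fp2": (0, 2),
             "C3": (1, 0), "Cz": (1, 1), "C4": (1, 2),
             "O1": (2, 0), "Oz": (2, 1), "O2": (2, 2)}

def to_matrix(features, shape=(3, 3)):
    # features: dict mapping electrode name -> scalar feature value.
    # Unoccupied grid cells stay zero, preserving relative scalp geometry.
    grid = np.zeros(shape)
    for name, value in features.items():
        r, c = POSITIONS[name]
        grid[r, c] = value
    return grid

feats = {name: i * 0.1 for i, name in enumerate(POSITIONS)}
matrix = to_matrix(feats)  # one 2-D map; stacking bands gives a 3-D tensor
```

Such a matrix keeps neighboring electrodes adjacent in the input, which is what lets a convolutional front end exploit spatial structure.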