Emotion plays a vital role in human life. Positive emotions can improve the efficiency of daily work, while negative emotions can affect decision-making, concentration, and even health. In recent years, emotion recognition has attracted more and more researchers and has become a hot topic in affective computing and pattern recognition. The electroencephalograph (EEG) signal, a typical neurophysiological signal, has been widely used for emotion recognition. EEG-based emotion recognition usually faces two major technical challenges: one is how to build a more effective emotion recognition model, and the other is how to extract discriminative emotion features from EEG signals. This thesis therefore explores EEG-based emotion recognition, and its content covers the following three aspects.

Firstly, emotional EEG signals were acquired and processed using emotional facial stimuli. With emotional faces as stimulus material, an emotion-induction experimental paradigm was designed and implemented, and the evoked EEG signals were recorded. After preprocessing, a relatively clean emotional EEG dataset was obtained for further research and analysis.

Secondly, the emotional EEG data were decoded with a Lightweight Convolutional Neural Network (LW-CNN) model, and the model was interpreted through visualization techniques. CNNs have been widely applied to EEG decoding tasks and have achieved excellent performance; however, most existing CNNs introduce a large number of parameters, which makes it difficult to understand the process of automatic feature learning. To address this problem, this research proposed the LW-CNN model and performed cross-subject classification on the emotional EEG data. The model greatly reduces the number of trainable parameters while maintaining good performance. The model's behavior and learning process were then explained through two visualization techniques: first, maximum activation was used to visualize the convolution kernels at each layer in order to understand how they work; second, saliency maps were used to identify the emotion-related temporal and spatial features that drive the model's decisions. The results showed that the temporo-occipital region is the key area for emotional face recognition, and that the critical time window lies between 140 ms and 240 ms after stimulus onset.

Thirdly, emotional EEG signals were classified based on three-dimensional EEG topology sequences. Although CNNs have been widely used to classify emotional EEG signals, the inputs of existing models do not consider the spatial correlation between EEG electrode positions, and many emotion recognition studies still use two-dimensional CNNs (2D CNN). The present research therefore proposed a new EEG representation, named the three-dimensional topological sequence, as the input to a CNN. Then, inspired by video analysis, two different types of three-dimensional CNN (3D CNN) models, Convolutional 3D (C3D) and ResNet with (2+1)D convolutions (R(2+1)D), were constructed and successfully recognized emotions across subjects from the 3D EEG topological sequences. Compared with other input forms, the 3D EEG topology sequence obtained the best results, which proved its effectiveness.
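The parameter reduction behind lightweight CNN designs can be illustrated with a small sketch. The abstract does not detail the LW-CNN architecture, so the example below assumes a common lightweight building block, replacing a standard convolution with a depthwise-separable one; the channel counts and kernel length are hypothetical.

```python
# Hypothetical illustration of how a lightweight CNN cuts trainable
# parameters: a depthwise-separable convolution in place of a standard
# convolution. This is a generic lightweight-CNN idea, not necessarily
# the thesis's exact LW-CNN design.

def standard_conv_params(c_in, c_out, k):
    """Parameters of a standard 1-D conv layer (bias omitted)."""
    return c_in * c_out * k

def separable_conv_params(c_in, c_out, k):
    """Depthwise conv (one k-tap filter per channel) + 1x1 pointwise conv."""
    return c_in * k + c_in * c_out

# Example: 16 -> 32 feature maps with a temporal kernel of length 25.
std = standard_conv_params(16, 32, 25)   # 12800
sep = separable_conv_params(16, 32, 25)  # 912
print(std, sep, round(std / sep, 1))     # 12800 912 14.0
```

With these (assumed) sizes the separable block needs roughly 14x fewer weights, which is the kind of reduction that makes the learned features easier to inspect.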
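The maximum-activation technique mentioned above can be sketched as gradient ascent on the input: for a single linear kernel, the norm-constrained input that maximizes the activation converges to the kernel's own pattern, which is why the method reveals what each kernel responds to. The kernel below is random toy data, not one of the thesis's learned filters.

```python
import numpy as np

# Maximum-activation sketch: ascend the input to maximally excite one
# convolution kernel. For a linear kernel, d(activation)/d(input) is the
# kernel itself, so the optimized input aligns with the kernel's pattern.
rng = np.random.default_rng(1)
kernel = rng.standard_normal(7)          # a toy 7-tap temporal kernel
x = np.zeros(7)                          # start from a blank input

for _ in range(50):
    grad = kernel                        # gradient of kernel @ x w.r.t. x
    x += 0.1 * grad                      # gradient ascent step
    x /= max(np.linalg.norm(x), 1e-12)   # keep the input on the unit sphere

act = float(kernel @ x)
# The maximal activation on the unit sphere equals the kernel's norm.
print(round(act, 4) == round(float(np.linalg.norm(kernel)), 4))  # True
```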
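A saliency map of the kind used above is the magnitude of the model's output gradient with respect to each input sample; large values mark the channels and time points that drive the decision. A minimal numerical sketch follows, where the scoring function is a hypothetical stand-in for a trained classifier's class score:

```python
import numpy as np

def saliency_map(score_fn, x, eps=1e-4):
    """Numerical saliency: |d score / d x[c, t]| via central differences."""
    sal = np.zeros_like(x)
    it = np.nditer(x, flags=['multi_index'])
    for _ in it:
        idx = it.multi_index
        xp = x.copy(); xp[idx] += eps
        xm = x.copy(); xm[idx] -= eps
        sal[idx] = abs(score_fn(xp) - score_fn(xm)) / (2 * eps)
    return sal

# Toy "model" on 4 channels x 10 time points: the score depends only on
# channel 1, samples 3-5 -- standing in for an emotion-sensitive region.
rng = np.random.default_rng(0)
w = np.zeros((4, 10)); w[1, 3:6] = 1.0
score = lambda x: float(np.tanh((w * x).sum()))
x = rng.standard_normal((4, 10))
sal = saliency_map(score, x)
print(sal[w > 0].min() > 0, sal[w == 0].max() == 0.0)  # True True
```

The saliency is nonzero exactly on the region the score depends on, which is the logic behind localizing emotion-related features to the temporo-occipital area and the 140-240 ms window.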
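The proposed 3D topological sequence can be sketched as follows: each time point's electrode values are placed on a 2D grid that preserves the scalp layout, and the grids are stacked along time, giving a (time, height, width) "video" that a 3D CNN such as C3D or R(2+1)D can consume. The 4-electrode layout below is hypothetical; a real montage (e.g. the 10-20 system) would define the mapping.

```python
import numpy as np

# Hypothetical electrode -> (row, col) map on a 3x3 scalp grid.
ELECTRODE_GRID = {
    0: (0, 1),   # e.g. a frontal electrode
    1: (1, 0),   # e.g. left central
    2: (1, 2),   # e.g. right central
    3: (2, 1),   # e.g. a parietal electrode
}

def to_topology_sequence(eeg, grid=ELECTRODE_GRID, shape=(3, 3)):
    """eeg: (n_electrodes, n_times) -> (n_times, H, W); unmapped cells stay 0."""
    n_elec, n_times = eeg.shape
    seq = np.zeros((n_times, *shape), dtype=eeg.dtype)
    for e, (r, c) in grid.items():
        seq[:, r, c] = eeg[e]
    return seq

eeg = np.arange(8, dtype=float).reshape(4, 2)  # 4 electrodes, 2 time points
seq = to_topology_sequence(eeg)
print(seq.shape)  # (2, 3, 3)
```

Unlike a flat (channels x time) matrix, this representation keeps neighboring electrodes adjacent in the grid, so 3D convolutions can exploit the spatial correlation that the abstract notes is lost in conventional 2D-CNN inputs.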