Emotion is a complex state that integrates feelings, thoughts, and behaviors; it is a person's psychophysiological response to internal or external stimuli and plays a crucial role in decision-making, perception, and communication. With the rapid development of machine learning and information fusion technologies, it has become possible for computers to understand, recognize, and analyze emotion. Since human-computer interaction (HCI) takes place in a wide variety of environments, more and more researchers in ergonomics and intelligent systems are working to improve the efficiency and flexibility of HCI. HCI systems require computers to adaptively and accurately understand how humans communicate and then give the correct feedback, and human intentions can be expressed through verbal and nonverbal behaviors carrying different emotions. Emotion recognition, as one of the fundamental tasks toward comprehensively intelligent computers, therefore occupies an important position in HCI and has attracted strong interest among researchers. Human emotion can be expressed in many ways; compared with facial expressions, speech, and other physiological signals, the electroencephalogram (EEG) signal has become the first choice for studying emotion recognition because of its high temporal resolution, non-invasiveness, objectivity, and reliability. In recent years, with the popularity of deep learning in various fields, more and more neural network models have been applied to EEG-based emotion recognition and have shown better performance than traditional machine learning, which also brings many challenges.

This dissertation focuses on the two most representative problems in EEG-based emotion recognition: (1) how to accurately and quickly extract more discriminative emotional features, and (2) how to design an efficient classification model so as to improve emotion recognition performance. It studies and discusses these problems in depth and proposes several emotion recognition methods based on graph neural network models. The main research content of this dissertation is as follows:

(1) To address the problem that traditional feature extraction methods tend to ignore the features of neighboring and asymmetric electrodes as well as spatial features, this dissertation proposes an EEG emotion recognition method based on a hierarchy graph convolution network, named ERHGCN. The method extracts six frequency- and spatial-domain features from five frequency bands of the EEG signal, namely power spectral density (PSD), differential entropy (DE), differential asymmetry (DASM), rational asymmetry (RASM), asymmetry (ASM), and differential causality (DCAU). A hierarchy graph convolution network (HGCN) model is then designed to train these features, processing vertically and horizontally neighboring electrode pairs to extract deeper spatial features. Finally, two fully connected layers integrate all trained features and softmax serves as the classifier for emotion recognition. The ERHGCN method is validated on the DEAP dataset, which confirms its effectiveness: by training features between neighboring and asymmetric electrodes, it improves recognition accuracy and achieves classification accuracies of 90.56% and 88.79% on the valence and arousal dimensions, respectively.
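To make the feature extraction step concrete, the following is a minimal sketch of how the DE feature and the asymmetry features derived from it could be computed per frequency band. The band boundaries, sampling rate, and symmetric electrode pairing used here are illustrative assumptions, not the exact configuration of the ERHGCN method.

```python
# Hedged sketch: differential entropy (DE) and asymmetry features per band.
# Band edges, sampling rate, and the left/right electrode pairing below are
# illustrative assumptions, not the dissertation's exact configuration.
import numpy as np
from scipy.signal import butter, filtfilt

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
         "beta": (14, 31), "gamma": (31, 50)}      # assumed band edges (Hz)
LEFT, RIGHT = [0, 2, 4], [1, 3, 5]                 # assumed symmetric electrode pairs

def bandpass(x, low, high, fs, order=4):
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=-1)

def de_features(eeg, fs=128):
    """eeg: (n_channels, n_samples) -> DE per channel and band."""
    feats = []
    for low, high in BANDS.values():
        banded = bandpass(eeg, low, high, fs)
        var = np.var(banded, axis=-1)                       # per-channel variance
        feats.append(0.5 * np.log(2 * np.pi * np.e * var))  # closed-form Gaussian DE
    return np.stack(feats, axis=-1)                         # (n_channels, n_bands)

def asymmetry_features(de):
    dasm = de[LEFT] - de[RIGHT]    # differential asymmetry (DASM)
    rasm = de[LEFT] / de[RIGHT]    # rational asymmetry (RASM)
    return dasm, rasm
```

In this sketch the per-band, per-channel DE values are the basic features, and DASM/RASM are obtained by combining the DE values of symmetric electrode pairs, which mirrors how the six feature types above are built from the band-filtered signal.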
(2) To address the problem that much existing research focuses only on extracting time- and frequency-domain features of the EEG signal while failing to exploit its dynamic temporal changes and the positional relationships between electrode channels, this dissertation proposes an EEG emotion recognition method based on dynamic differential entropy and a linear graph convolutional network, named DDELGCN. The DDELGCN method consists of the following three steps. First, the dynamic differential entropy (DDE) feature, which represents frequency-domain as well as time-domain information, is extracted on the basis of the traditional differential entropy feature. Second, brain connectivity matrices are constructed by calculating the Pearson correlation coefficient (PCC), phase locking value (PLV), and transfer entropy (TE), thereby taking the connectivity between brain regions into account. Finally, a customized linear graph convolutional network (LGCN) aggregates the features over all electrode combinations and then classifies the emotional states. Extensive experiments on the DEAP and SEED datasets demonstrate excellent recognition ability, and comparisons with existing methods show that DDELGCN effectively improves emotion recognition performance.

(3) To address the problem that most existing studies either work on one-dimensional EEG data and ignore the relationships between channels, or extract only time-frequency features without involving spatial features, this dissertation develops a spatial-temporal EEG emotion recognition method based on a graph convolution network (GCN) and long short-term memory (LSTM), named ERGL. The ERGL method consists of the following three steps. First, the one-dimensional EEG vector is converted into a two-dimensional mesh matrix whose layout corresponds to the distribution of the EEG electrode locations over the brain regions, which better represents the spatial correlation among multiple adjacent channels. Second, the GCN and LSTM are employed together to extract spatial-temporal features: the GCN extracts spatial features, while LSTM units extract temporal features. Finally, a fully connected layer integrates all the features and a softmax layer performs emotion classification. Extensive experiments on the DEAP and SEED datasets show that the proposed ERGL method is encouraging in comparison with state-of-the-art emotion recognition studies.
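To illustrate how spatial and temporal modeling can be combined in the way ERGL describes, the following is a minimal PyTorch sketch of a graph convolution layer followed by an LSTM and a fully connected classifier. The adjacency construction, layer sizes, electrode count, and feature dimensionality are assumptions made for illustration and are not the dissertation's exact architecture.

```python
# Hedged sketch of a GCN + LSTM pipeline in the spirit of ERGL (PyTorch).
# Adjacency construction, layer sizes, and tensor layout are illustrative
# assumptions, not the dissertation's exact model.
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """One graph convolution: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, in_dim, out_dim, adj):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        a = adj + torch.eye(adj.size(0))           # add self-loops
        d = a.sum(dim=1).pow(-0.5)                 # D^{-1/2}
        self.register_buffer("a_hat", d.unsqueeze(1) * a * d.unsqueeze(0))

    def forward(self, x):                          # x: (batch, time, channels, feat)
        return torch.relu(self.a_hat @ self.linear(x))

class ERGLSketch(nn.Module):
    def __init__(self, adj, n_channels=32, in_dim=5, gcn_dim=32,
                 lstm_dim=64, n_classes=2):
        super().__init__()
        self.gcn = GraphConv(in_dim, gcn_dim, adj)
        self.lstm = nn.LSTM(n_channels * gcn_dim, lstm_dim, batch_first=True)
        self.fc = nn.Linear(lstm_dim, n_classes)

    def forward(self, x):                          # x: (batch, time, channels, feat)
        h = self.gcn(x)                            # spatial features per time step
        h = h.flatten(start_dim=2)                 # (batch, time, channels * gcn_dim)
        _, (h_n, _) = self.lstm(h)                 # temporal features
        return self.fc(h_n[-1])                    # class logits (softmax in the loss)

# Toy usage: 32 electrodes, 5 features per electrode, 10 time steps.
adj = (torch.rand(32, 32) > 0.7).float()
adj = ((adj + adj.t()) > 0).float()                # make the graph undirected
model = ERGLSketch(adj)
logits = model(torch.randn(8, 10, 32, 5))          # -> (8, 2)
```

In this sketch the graph convolution is applied independently at each time step to capture inter-channel structure, and the LSTM then summarizes the resulting sequence, which reflects the division of labor between the GCN and LSTM described above.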
(4) To address the problem that the EEG data acquisition process is relatively complex and current EEG emotion databases are small, this dissertation proposes an EEG emotion recognition method based on a conditional Wasserstein generative adversarial network and an adaptive graph convolutional network, named CWAGCN, which consists of the following three main steps. First, differential entropy features are used as input in place of the raw EEG signal, and the feature dataset is augmented with a conditional Wasserstein generative adversarial network (CWGAN) model to obtain a rich EEG feature set for larger-scale model training. Then, high-quality feature sets are selected as input to the graph convolutional network model according to three evaluation metrics. Finally, an adaptive graph convolutional network (AGCN) model is designed to train the features through both specific and common convolution modules and to produce the classification results. Experimental results on the DEAP and SEED datasets demonstrate the effectiveness and rationality of the method: the classification accuracy and precision reach 94.28% and 95.67% on the valence dimension and 94.72% and 95.33% on the arousal dimension of the DEAP dataset, and 97.57% and 98.03% on the SEED dataset, respectively.
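To make the augmentation step more tangible, the following is a minimal sketch of a conditional Wasserstein GAN with a gradient penalty operating on DE feature vectors. The feature dimensionality (assumed 160 = 32 channels x 5 bands), network sizes, penalty weight, and the gradient-penalty formulation are illustrative assumptions rather than the dissertation's exact CWGAN configuration.

```python
# Hedged sketch: conditional Wasserstein GAN with gradient penalty for
# augmenting DE feature vectors. All sizes and constants are illustrative
# assumptions (e.g. 160 = 32 channels x 5 bands), not the dissertation's setup.
import torch
import torch.nn as nn

FEAT_DIM, NOISE_DIM, N_CLASSES = 160, 64, 2        # assumed dimensions

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + N_CLASSES, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, FEAT_DIM))
    def forward(self, z, y_onehot):                # condition on the emotion label
        return self.net(torch.cat([z, y_onehot], dim=1))

class Critic(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT_DIM + N_CLASSES, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1))                     # unbounded Wasserstein score
    def forward(self, x, y_onehot):
        return self.net(torch.cat([x, y_onehot], dim=1))

def gradient_penalty(critic, real, fake, y_onehot):
    # Penalize the critic's gradient norm on points interpolated between real and fake.
    eps = torch.rand(real.size(0), 1)
    mixed = (eps * real + (1 - eps) * fake).requires_grad_(True)
    score = critic(mixed, y_onehot).sum()
    grad = torch.autograd.grad(score, mixed, create_graph=True)[0]
    return ((grad.norm(2, dim=1) - 1) ** 2).mean()

# One critic update on a toy batch; labels are supplied as one-hot vectors.
g, c = Generator(), Critic()
real = torch.randn(16, FEAT_DIM)
y = nn.functional.one_hot(torch.randint(N_CLASSES, (16,)), N_CLASSES).float()
fake = g(torch.randn(16, NOISE_DIM), y).detach()
loss_c = (c(fake, y).mean() - c(real, y).mean()
          + 10.0 * gradient_penalty(c, real, fake, y))
loss_c.backward()    # followed by an optimizer step in a full training loop
```

Synthetic feature/label pairs sampled from such a generator could then be screened for quality before being added to the training set, in the spirit of the three evaluation metrics mentioned above.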