Emotion is a state that integrates a person's feelings, thoughts, and behaviors. It encompasses the psychological response to external or internal stimuli, as well as the physiological response that accompanies it. Emotion recognition is now widely applied in human-computer interaction, fatigue-driving detection, and other fields. In recent years, with the development of machine learning and a deeper understanding of emotion, emotion recognition has become a hot topic in artificial intelligence and has attracted extensive attention from researchers. The signals used for emotion recognition fall into two categories: non-physiological signals (facial expressions, body movements) and physiological signals such as the electroencephalogram (EEG) and electromyogram (EMG). Compared with non-physiological signals, physiological signals cannot be camouflaged and therefore genuinely reflect a person's emotional state. The EEG, an important physiological correlate of emotion, represents internal emotional states well and is not subject to deliberate human control. EEG data typically consist of signals collected from electrodes on the scalp, which reflect changes in the brain's electrical activity under different emotional states. EEG features can be divided into temporal and spatial features. Temporal (time-domain) features usually refer to the amplitude, frequency, and phase characteristics of the EEG signal, which capture its rapid changes. Spatial features refer to the distribution of EEG signals across the different electrodes, including their spatial layout, relative strength, and interrelationships. In EEG-based emotion recognition, extracting and analyzing spatial-temporal features helps us investigate how EEG signals change under different emotion
states, thereby effectively distinguishing between them. At the same time, the selection and extraction of spatial-temporal features directly affect the performance and generalization ability of the classifier. This thesis investigates the effectiveness of EEG-based emotion recognition methods built on graph convolutional networks (GCNs), with particular attention to spatial-temporal features. The main research contents of this thesis are as follows:

(1) Because EEG signals have a non-Euclidean structure, and existing methods learn information from only a single domain (temporal or spatial), this thesis proposes a spatial-temporal joint learning method for emotional EEG that combines a GCN and an LSTM. The GCN module learns the non-Euclidean relational features among channels, and the LSTM module learns the temporal characteristics of the EEG. Experiments on public datasets show average subject-dependent classification accuracies of 90.58% and 90.44% on the valence and arousal classification tasks, respectively.

(2) The work in (1) revealed that the edge features in GCN-based emotion classification methods are too limited, so a multi-branch graph convolution model (MGCNNL) is proposed to address this problem. The model considers both the physical connections and the correlation connections between channels simultaneously. First, an attention mechanism based on inter-channel correlation addresses the oversimplified adjacency matrix found in common GCN models. Second, a spatial-temporal attention mechanism is incorporated to obtain representations of genuine emotional characteristics, filtering out redundant data and using the result to assist emotion classification. Finally, the experimental results show that the graph convolution model is well suited to the emotion classification task. The multi-branch graph
convolution model achieves 93.61% accuracy on the three-class task on the SEED dataset.

(3) This thesis further proposes a new multi-branch graph convolution model incorporating transfer learning. Specifically, each subject is treated as a separate source domain, and the data in each source domain are used to extract that subject's emotional characteristics. Maximum mean discrepancy (MMD) is used to reduce the distribution differences between the source and target domains, while transfer learning is applied for domain generalization to extract domain-adaptive features from the source- and target-domain data. The experimental results show that this method not only effectively mitigates the impact of inter-subject differences in EEG signals on the classifier's generalization ability, but also improves that generalization ability. Compared with the multi-branch graph convolution model alone, emotion recognition accuracy after introducing transfer learning is higher, reaching 78.46% in the subject-independent setting on the SEED dataset.
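The spatial step of the GCN-plus-LSTM pipeline in (1) can be illustrated with a minimal sketch of one graph-convolution propagation, H' = ReLU(D^{-1/2}(A+I)D^{-1/2} H W), applied to the per-channel features of a single EEG window. All names, shapes, and the random data here are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def gcn_layer(x, adj, weight):
    """One GCN propagation: add self-loops, symmetrically normalise the
    adjacency, propagate, then apply ReLU.
    x: (n_channels, n_features), adj: (n_channels, n_channels)."""
    a_hat = adj + np.eye(adj.shape[0])           # A + I (self-loops)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt     # D^-1/2 (A+I) D^-1/2
    return np.maximum(0.0, a_norm @ x @ weight)  # ReLU activation

rng = np.random.default_rng(0)
n_channels, n_features, n_hidden = 32, 5, 8      # e.g. 32 electrodes, 5 band features
x = rng.standard_normal((n_channels, n_features))
adj = (rng.random((n_channels, n_channels)) > 0.7).astype(float)
adj = np.maximum(adj, adj.T)                     # make the graph undirected

w = rng.standard_normal((n_features, n_hidden))
h = gcn_layer(x, adj, w)
print(h.shape)  # (32, 8)
```

In the full method, this spatial feature map would be computed per time step and the resulting sequence fed to an LSTM to capture temporal dynamics.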
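The multi-branch idea in (2), combining a fixed physical-connection graph with a data-driven correlation graph, can be sketched as below. The ring-shaped "physical" graph and the averaging fusion are placeholder assumptions for illustration; a real montage and a learned attention-based fusion would replace them.

```python
import numpy as np

def correlation_adjacency(signals, threshold=0.1):
    """Data-driven branch: absolute Pearson correlation between channel
    time series, thresholded to keep only stronger connections.
    signals: (n_channels, n_samples)."""
    corr = np.abs(np.corrcoef(signals))
    np.fill_diagonal(corr, 0.0)            # no self-connections
    return np.where(corr >= threshold, corr, 0.0)

rng = np.random.default_rng(1)
n_channels, n_samples = 8, 256
signals = rng.standard_normal((n_channels, n_samples))

# Physical branch: a fixed 0/1 electrode-neighbourhood graph (a ring here,
# purely as a stand-in for the real electrode layout).
physical = np.zeros((n_channels, n_channels))
for i in range(n_channels):
    physical[i, (i + 1) % n_channels] = physical[(i + 1) % n_channels, i] = 1.0

corr_adj = correlation_adjacency(signals)
fused = 0.5 * (physical + corr_adj)        # naive two-branch fusion
print(fused.shape)  # (8, 8)
```

Each branch's adjacency can then drive its own graph-convolution stream before fusion, which is the essence of the multi-branch design.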
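The MMD term in (3) penalises the distance between source- and target-domain feature distributions. A common RBF-kernel estimator of the squared MMD is sketched below; the bandwidth, feature dimensions, and synthetic data are illustrative assumptions.

```python
import numpy as np

def mmd_rbf(xs, xt, gamma=1.0):
    """Squared MMD between two samples under an RBF kernel
    k(a, b) = exp(-gamma * ||a - b||^2)."""
    def kernel(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return kernel(xs, xs).mean() + kernel(xt, xt).mean() - 2 * kernel(xs, xt).mean()

rng = np.random.default_rng(2)
source = rng.standard_normal((100, 16))         # features from a source subject
target = rng.standard_normal((100, 16)) + 2.0   # shifted features, target subject

print(mmd_rbf(source, source))  # near 0: identical distributions
print(mmd_rbf(source, target))  # clearly positive: the gap to be minimised
```

During training, this quantity is added to the classification loss so that the feature extractor maps source- and target-subject EEG into a shared, domain-adaptive space.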