
Research On Cross-subject And Cross-session EEG Feature Analysis And Computational Model For Emotion Recognition

Posted on: 2024-04-12    Degree: Master    Type: Thesis
Country: China    Candidate: H G Liu    Full Text: PDF
GTID: 2530307103469954    Subject: Computer technology
Abstract/Summary:
Emotion is an important psychological state in interpersonal communication, and enabling machines to accurately and automatically recognize human emotional states is essential to realizing intelligent human-computer interaction. EEG signals, which are produced by neural activity in the central nervous system, are difficult to disguise and are therefore more reliable than traditional modalities such as facial expressions, text, and speech. However, because EEG signals are non-stationary, they exhibit significant individual differences. Transfer learning, which aligns the data distributions of different subjects, performs well in cross-subject EEG emotion recognition. Yet most existing models first learn domain-invariant features and only then estimate target-domain label information; such a staged strategy breaks the internal connection between the two processes and inevitably leads to sub-optimal solutions. On the other hand, EEG data are multi-rhythm and multi-channel, and multiple features can be extracted from them for further processing. In EEG-based emotion recognition, can we conduct more in-depth and detailed research at the feature level, that is, are there label-common features shared by different emotional states and label-specific features related to each emotional state? Most existing research ignores this question.

To address the sub-optimal problem caused by the staged strategy of first learning domain-invariant features and then estimating target-domain labels, this thesis proposes a joint feature transfer and semi-supervised emotion recognition model, in which the shared subspace projection matrices and the target labels are iterated jointly toward the optimum. Specifically, in the shared feature subspace the distribution difference between the source and target domains is effectively reduced compared with the original EEG data; at the same time, the gradually aligned EEG data yield better label prediction, and more accurate target-domain label estimation in turn allows better alignment of the EEG data in the shared feature subspace. Extensive experiments on the SEED-IV and SEED data sets show that: 1) the joint learning mode significantly enhances emotion recognition performance; 2) the learned shared subspace can be analyzed quantitatively to study the spatial-frequency activation patterns of critical EEG bands and brain regions in cross-subject emotion expression.
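The following is a minimal sketch, not the thesis model itself, of the alternating idea described above: learn a shared subspace that aligns source and target EEG features, estimate target labels in that subspace, and then reuse those pseudo-labels to refine the alignment. The subspace step here is plain PCA on the pooled data plus (class-wise) mean matching, and every name and parameter (run_joint_transfer, n_components, the kNN classifier) is an illustrative assumption rather than the author's method.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier


def run_joint_transfer(Xs, ys, Xt, n_components=32, n_iters=5):
    """Alternate between subspace alignment and target-label estimation."""
    yt_pseudo = None
    for _ in range(n_iters):
        # 1) Learn a shared low-dimensional subspace on pooled source/target data.
        pca = PCA(n_components=n_components)
        Z = pca.fit_transform(np.vstack([Xs, Xt]))
        Zs, Zt = Z[: len(Xs)], Z[len(Xs):]

        # 2) Reduce the marginal distribution gap by matching the global means.
        Zt = Zt - Zt.mean(axis=0) + Zs.mean(axis=0)

        # 3) Once pseudo-labels exist, also match class-conditional means, so that
        #    better label estimates yield better alignment in the next round.
        if yt_pseudo is not None:
            for c in np.unique(ys):
                src_c, tgt_c = Zs[ys == c], Zt[yt_pseudo == c]
                if len(tgt_c) > 0:
                    Zt[yt_pseudo == c] += src_c.mean(axis=0) - tgt_c.mean(axis=0)

        # 4) Estimate target labels in the (gradually aligned) shared subspace.
        clf = KNeighborsClassifier(n_neighbors=5).fit(Zs, ys)
        yt_pseudo = clf.predict(Zt)
    return yt_pseudo
```

Called as, for example, `yt = run_joint_transfer(Xs, ys, Xt)`, the loop lets label estimation and distribution alignment feed each other instead of being executed once in sequence, which is the point the abstract makes about joint iteration.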
To study EEG data in more depth and detail at the feature level, that is, whether there are label-common features shared by different emotional states and label-specific features related to each emotional state, this thesis further proposes a Joint label-Common and label-Specific Features Exploration (JCSFE) model for semi-supervised cross-session EEG emotion recognition. Specifically, JCSFE imposes a norm on the projection matrix to explore the label-common EEG features, while another norm is simultaneously used to explore the label-specific EEG features. In addition, a graph regularization term is introduced to enforce the local invariance property of the data, i.e., similar EEG samples are encouraged to have the same emotional state. Experimental results on the SEED-IV and SEED-V data sets demonstrate that JCSFE not only achieves superior emotion recognition performance compared with state-of-the-art models but also provides a quantitative method to identify the label-common and label-specific EEG features in emotion recognition.
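The sketch below illustrates, under stated assumptions, the kind of regularized objective the JCSFE paragraph describes. The abstract does not name the exact norms, so the row-sparsity (l2,1) penalty for features shared across labels and the element-wise l1 penalty for label-specific features are assumptions for illustration only, as are the helper names graph_laplacian and objective; only the graph-Laplacian local-invariance term follows directly from the text.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph


def graph_laplacian(X, n_neighbors=10):
    """Symmetric kNN graph Laplacian enforcing local invariance:
    similar EEG samples should receive similar predictions."""
    W = kneighbors_graph(X, n_neighbors, mode="connectivity").toarray()
    W = np.maximum(W, W.T)                 # symmetrize the affinity matrix
    return np.diag(W.sum(axis=1)) - W      # L = D - W


def objective(X, Y, P, L, alpha=1.0, beta=0.1, gamma=0.1):
    """Regularized least-squares objective with projection matrix P
    (features x labels): fitting loss + graph smoothness + two sparsity terms."""
    fit = np.linalg.norm(X @ P - Y) ** 2           # label fitting loss
    smooth = np.trace(P.T @ X.T @ L @ X @ P)       # graph regularization term
    l21 = np.sum(np.linalg.norm(P, axis=1))        # row sparsity: label-common (assumed)
    l1 = np.sum(np.abs(P))                         # element sparsity: label-specific (assumed)
    return fit + alpha * smooth + beta * l21 + gamma * l1
```

In such a formulation, rows of P that survive the row-sparsity penalty act on every label (label-common features), while nonzero entries kept only in individual columns act on a single label (label-specific features); inspecting P is then the quantitative identification step the abstract refers to.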
Keywords/Search Tags:EEG signals, emotion recognition, transfer learning, label-common features, label-specific features