
Research On Single-Modal And Multi-Modal Emotion Recognition Based On EEG Signals

Posted on: 2024-03-29
Degree: Master
Type: Thesis
Country: China
Candidate: Q Wu
Full Text: PDF
GTID: 2530307103975509
Subject: Computer technology
Abstract/Summary:
As an important branch of artificial intelligence, emotion recognition research is not only of great strategic importance but also has wide-ranging application value in improving quality of life and promoting social development. This thesis investigates emotion feature extraction from electroencephalogram (EEG) signals and, on that basis, studies both single-modal and multi-modal emotion recognition based on EEG.

Current EEG-based emotion recognition research can be broadly divided into two categories: single-modal emotion recognition based on EEG alone and multi-modal emotion recognition built around EEG. The former uses only EEG data, but a single-modality signal is insufficient to describe emotions comprehensively; combining multiple modalities allows a fuller depiction of human emotional states. With the development of emotion recognition based on other physiological signals, an increasing number of researchers have begun to explore EEG-based multi-modal emotion recognition.

In EEG-based single-modal emotion recognition, the effectiveness of Graph Convolutional Networks (GCN) has been confirmed, but GCN models have limitations: using GCN alone makes it difficult to extract meaningful features effectively, and existing work overlooks the implicit information among different sample classes. In EEG-based multi-modal emotion recognition, existing fusion frameworks capture neither the relationship between different modalities of the same emotion category nor the differences between emotions and modalities, and some studies also neglect to consider how much each modality contributes to emotion recognition. In addition, most existing research fuses only two modalities, without considering three or more, and therefore fails to demonstrate model scalability. This thesis addresses these issues with the following contributions:

1) The Siamese Graph Convolutional Attention Network (Siam-GCAN) is proposed for single-modal emotion recognition based on EEG signals. It addresses the limitations of GCN in emotion recognition and the insufficient exploitation of sample information in existing models. The model first uses graph convolution to extract spatial information, then applies deep attention modules to extract deeper and more valuable features, and at the same time uses a Siamese network module to fully exploit the relationships between samples of the same class and of different classes. Experiments show that on the SEED dataset, Siam-GCAN improves emotion recognition accuracy by 4.04% over the baseline model, reaching 94.78%.

2) The Adaptive Pseudo-Siamese Fusion Network (APSFN) is proposed for multi-modal emotion recognition based on EEG and eye-tracking signals. It addresses the insufficient exploitation of inter-modal information in existing models and the lack of consideration for each modality's contribution to emotion recognition. The model uses a pseudo-Siamese network module to transform the emotion information of each modality and applies similarity constraints to coordinate the features of different modalities into a similar hyperspace. An adaptive feature fusion module then learns the contribution of each modality to emotion recognition and uses it as that modality's weight in the model's classification decision. Experiments confirm that on the SEED-IV dataset, APSFN improves emotion recognition accuracy by 6.41% over a simple fusion network, reaching 82.75%.

3) The Adaptive Pseudo-Multiple Fusion Network (APMFN) is proposed for multi-modal emotion recognition based on EEG, EOG, and EMG signals. The model extends APSFN to the fusion of three modalities and provides a solution for fusing even more. It uses a pseudo-multi-Siamese network module to coordinate the features of the different modalities into a similar hyperspace, and then an adaptive feature fusion module to fuse the information of the multiple modalities adaptively. Because APMFN fixes the dimension of the fused features, it is readily extensible. Experiments show that even when signal quality varies greatly across modalities, the model still fuses them effectively: on the DEAP dataset, APMFN improves emotion recognition accuracy by 2.15% over a feature-concatenation network, reaching 96.64%.
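To make the Siamese idea in contribution 1) concrete, the following is a minimal sketch of a margin-based contrastive loss over pairs of sample embeddings, the standard way a Siamese branch exploits same-class versus different-class relationships. The class name, loss form, and margin value are illustrative assumptions; the abstract does not specify the thesis's actual formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContrastivePairLoss(nn.Module):
    """Margin-based contrastive loss over embedding pairs (illustrative).

    Pulls embeddings of same-class samples together and pushes embeddings
    of different-class samples at least `margin` apart.
    """
    def __init__(self, margin: float = 1.0):
        super().__init__()
        self.margin = margin

    def forward(self, emb_a: torch.Tensor, emb_b: torch.Tensor,
                same_class: torch.Tensor) -> torch.Tensor:
        # emb_a, emb_b: (batch, dim) embeddings from the two Siamese branches
        # same_class: (batch,) with 1.0 for same-class pairs, 0.0 otherwise
        dist = F.pairwise_distance(emb_a, emb_b)
        loss_same = same_class * dist.pow(2)
        loss_diff = (1.0 - same_class) * F.relu(self.margin - dist).pow(2)
        return (loss_same + loss_diff).mean()
```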
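Similarly, the adaptive feature fusion described for APSFN and APMFN in contributions 2) and 3) can be sketched as a learned softmax weighting over modality features that have already been projected into a shared space. All module and parameter names below are assumptions for illustration, not the thesis's implementation.

```python
import torch
import torch.nn as nn

class AdaptiveModalityFusion(nn.Module):
    """Weights and fuses equal-sized modality features (illustrative sketch).

    Each modality feature is assumed to be already mapped to `feat_dim`
    (e.g. by a pseudo-Siamese branch); a small gate produces one weight per
    modality, and the weighted sum keeps the fused dimension fixed.
    """
    def __init__(self, feat_dim: int, n_modalities: int, n_classes: int):
        super().__init__()
        self.gate = nn.Linear(feat_dim * n_modalities, n_modalities)
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, feats: list[torch.Tensor]):
        # feats: n_modalities tensors, each of shape (batch, feat_dim)
        stacked = torch.stack(feats, dim=1)                   # (batch, M, feat_dim)
        weights = torch.softmax(
            self.gate(stacked.flatten(1)), dim=-1)            # (batch, M)
        fused = (weights.unsqueeze(-1) * stacked).sum(dim=1)  # (batch, feat_dim)
        return self.classifier(fused), weights

# Hypothetical usage with three 128-dim modality features (e.g. EEG, EOG, EMG):
# logits, w = AdaptiveModalityFusion(128, 3, n_classes=4)([eeg, eog, emg])
```

Because fusion here is a weighted sum rather than concatenation, the classifier input stays `feat_dim` regardless of how many modalities are added, which matches the fixed fused-feature dimension and scalability property described for APMFN.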
Keywords/Search Tags: Emotion recognition, EEG signal, Graph convolutional neural network, Siamese network, Pseudo-Siamese network