
A Multimodal Emotion Recognition System Based On EEG Signals

Posted on: 2024-04-29    Degree: Master    Type: Thesis
Country: China    Candidate: H Wang    Full Text: PDF
GTID: 2530307115999069    Subject: Electronic Information (Computer Technology) (Professional Degree)
Abstract/Summary:
Emotion is an expression of inner human experience and plays a crucial role in daily life. Beyond its importance for understanding personal emotional states, assessing emotional health, and enhancing human-computer interaction, emotion recognition is widely applied in healthcare, defense and security, and business and finance. Traditional emotion recognition methods rely mainly on facial expressions; face-based recognition is an important non-physiological-signal approach because it is convenient and efficient. However, its results are easily distorted when a person deliberately hides their emotions. Existing studies have shown that using physiological signals (e.g., EEG signals, eye-movement signals) for emotion recognition can effectively avoid this problem and identify real human emotions more accurately. Multimodal emotion recognition methods based on EEG signals and face images therefore combine physiological and expression information, which can further improve the accuracy and robustness of emotion recognition and has broad application prospects in many fields. This thesis focuses on methods involving both the EEG and face-image modalities for emotion recognition; the main work is as follows:

(1) We propose fusing the experimental baseline signal (EEG recorded without stimulation), and experiments show that this helps improve recognition accuracy. We propose a three-dimensional input form for EEG segments that fuses two features, differential entropy and power spectral density, while preserving the spatial relationships among electrodes: the feature vectors are arranged into equivalent two-dimensional matrices and stacked into three-dimensional EEG cubes. We propose a continuous convolutional neural network model that takes the 3D EEG cube as input and discards the pooling layers of a conventional CNN. Extensive classification experiments on the DEAP EEG dataset show that the proposed method and model better explore the mechanism of emotional activity in the brain and effectively perform emotion classification.

(2) We propose a multimodal emotion recognition decision-fusion framework, optimize the CNN architecture for the face modality, and add a channel attention mechanism. The classification results of EEG signals and face images are fused at the decision level, and a multimodal emotion recognition system is designed and implemented. The system acquires raw EEG and face signals with a Zebo Vision Bluetooth dry-electrode acquisition headset and a local camera, respectively, processes them, and provides graphs for emotion visualization. The system also supports training neural network models, setting parameters, and automated dataset processing, with good interactivity and extensibility.
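The 3D EEG-cube construction described in (1) can be illustrated with a short sketch: per-band differential entropy (DE) and power spectral density (PSD) features are computed for each electrode, scattered onto a 2D scalp grid, and stacked into a cube. The 9x9 grid, the partial electrode mapping, and the band edges below are illustrative assumptions, not the thesis's exact configuration.

```python
# Minimal sketch of building a 3D EEG cube from DE and PSD features,
# assuming DEAP-style data (32 channels, 128 Hz).
import numpy as np
from scipy.signal import welch

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}
FS = 128  # DEAP sampling rate

# Hypothetical mapping from channel index to (row, col) on a 9x9 scalp grid;
# a real system would include one entry per electrode.
GRID = {0: (0, 3), 1: (1, 3), 2: (2, 2)}

def band_features(segment):
    """segment: (n_channels, n_samples) -> per-band DE and PSD features."""
    freqs, psd = welch(segment, fs=FS, nperseg=FS)
    de, pw = [], []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        band_power = psd[:, mask].mean(axis=1)                   # PSD feature
        pw.append(band_power)
        de.append(0.5 * np.log(2 * np.pi * np.e * band_power))   # DE under a Gaussian assumption
    return np.stack(de), np.stack(pw)                            # each (n_bands, n_channels)

def to_cube(segment):
    """Scatter per-band DE/PSD values onto the 2D grid and stack into a cube."""
    de, pw = band_features(segment)
    cube = np.zeros((2 * len(BANDS), 9, 9))                      # DE planes, then PSD planes
    for ch, (r, c) in GRID.items():
        cube[: len(BANDS), r, c] = de[:, ch]
        cube[len(BANDS):, r, c] = pw[:, ch]
    return cube
```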
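The continuous CNN in (1) takes the EEG cube as input and, as stated above, discards pooling layers. The sketch below follows that idea; the channel counts, kernel sizes, and classifier head are illustrative assumptions rather than the thesis's exact architecture.

```python
# Hedged sketch of a continuous CNN over the 3D EEG cube (no pooling layers).
import torch
import torch.nn as nn

class ContinuousCNN(nn.Module):
    def __init__(self, in_planes=8, n_classes=2):   # 4 bands x 2 features = 8 input planes
        super().__init__()
        self.features = nn.Sequential(               # stacked convolutions, no pooling
            nn.Conv2d(in_planes, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(128, 256, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(256, 64, kernel_size=1), nn.ReLU(),   # 1x1 conv reduces channels instead of pooling
        )
        self.classifier = nn.Linear(64 * 9 * 9, n_classes)

    def forward(self, x):                            # x: (batch, 8, 9, 9) EEG cubes
        x = self.features(x)
        return self.classifier(x.flatten(1))

# usage: logits = ContinuousCNN()(torch.randn(4, 8, 9, 9))
```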
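For (2), the channel attention added to the face CNN and the decision-level fusion of the two modalities can be sketched as follows. The squeeze-and-excitation form of the attention and the fixed fusion weight are assumptions made for illustration; the abstract does not specify either.

```python
# Hedged sketch of channel attention for the face branch and decision-level fusion.
import torch
import torch.nn.functional as F

class ChannelAttention(torch.nn.Module):
    """Squeeze-and-excitation style channel attention (one common realisation)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = torch.nn.Sequential(
            torch.nn.Linear(channels, channels // reduction), torch.nn.ReLU(),
            torch.nn.Linear(channels // reduction, channels), torch.nn.Sigmoid())

    def forward(self, x):                          # x: (batch, C, H, W) face feature map
        w = self.fc(x.mean(dim=(2, 3)))            # squeeze: global average pooling per channel
        return x * w.unsqueeze(-1).unsqueeze(-1)   # excite: rescale each channel

def decision_fusion(eeg_logits, face_logits, w_eeg=0.6):
    """Weighted sum of per-modality softmax probabilities; returns fused class labels."""
    p_eeg = F.softmax(eeg_logits, dim=1)
    p_face = F.softmax(face_logits, dim=1)
    fused = w_eeg * p_eeg + (1.0 - w_eeg) * p_face
    return fused.argmax(dim=1)
```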
Keywords/Search Tags:multimodal emotion recognition, EEG signal, face image, feature fusion, continuous convolutional neural network