
Research Of The Fusion Of EEG Signals And Facial Expressions Based Emotion Recognition

Posted on: 2021-04-10  Degree: Master  Type: Thesis
Country: China  Candidate: Z Wang  Full Text: PDF
GTID: 2518306464477444  Subject: Control Science and Engineering
Abstract/Summary:
An emotion recognition system aims to establish harmonious human-computer interaction (HCI) by endowing computers with the ability to recognize, understand, and adapt to human emotions. Recently, advances in noninvasive sensor technologies, machine learning algorithms, and computing capability have driven progress in cognitive science, and emotion recognition, as a frontier of cognitive science, has received growing attention. Multimodal emotion recognition outperforms single-modality approaches because it introduces complementary emotional information that a single modality cannot provide. Hence, this thesis proposes an emotion recognition framework based on the fusion of EEG signals and facial expressions. The main work of this thesis is as follows:

(1) A visual-interface-based emotion annotation mode was designed to improve the reliability of annotations. The performance of emotion recognition is significantly influenced by the quality of emotion annotations. To address this problem, the Self-Assessment Manikin (SAM) system was adopted as the annotation standard. A visual annotation interface was designed so that annotators could continuously annotate the valence of the subjects' facial expression responses by joystick operation. Experimental results show that the proposed mode has advantages in the consistency, real-time performance, and operability of annotations.

(2) t-SNE-based EEG feature reduction was studied to improve the performance of emotion recognition. Because EEG signals are nonlinear and nonstationary, manifold learning algorithms have advantages over traditional linear feature reduction algorithms in feature mapping. Therefore, the manifold learning algorithm t-SNE was applied for EEG feature reduction. Experimental results show that t-SNE obtained more representative EEG features; the average concordance correlation coefficient (CCC) between annotations and predictions reached 0.534 ± 0.028.

(3) Facial-landmark-localization-based feature extraction was applied for the implementation of facial expression based emotion recognition. In the SAM system, valence is evaluated from changes in the eyebrows, eyes, and mouth. In line with this evaluation scheme, a feature extraction method based on a facial landmark localization model was proposed for emotion recognition. The selected facial geometric features are intuitive, explainable, and highly relevant, which benefits performance; the average CCC between annotations and predictions reached 0.568 ± 0.031.

(4) A long short-term memory (LSTM) network based decision-level fusion was proposed to achieve the fusion of EEG signals and facial expressions for emotion recognition. Emotions have an inherent contextual relationship in the time domain, and representing this temporal context benefits recognition performance. Hence, an LSTM was applied to accomplish decision-level fusion of the EEG and facial expression modalities and to capture the temporal context of emotions. According to the experimental results, the proposed decision-level fusion outperformed either single modality, and the average CCC between annotations and predictions reached 0.625 ± 0.029.
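The t-SNE feature reduction described in contribution (2) can be sketched with scikit-learn; the thesis does not specify parameters, so the feature dimensionality, perplexity, and the random toy data below are illustrative assumptions, not the author's settings:

```python
import numpy as np
from sklearn.manifold import TSNE

# Toy stand-in for extracted EEG feature vectors
# (200 windows x 64 features; real features would come from EEG preprocessing).
rng = np.random.default_rng(0)
eeg_features = rng.normal(size=(200, 64))

# Nonlinear manifold embedding into a low-dimensional space.
embedded = TSNE(n_components=2, perplexity=30.0,
                init="pca", random_state=0).fit_transform(eeg_features)

print(embedded.shape)  # each EEG window is now a 2-D point
```

Note that t-SNE has no `transform` for unseen samples, so in a recognition pipeline it is typically fit on the pooled feature set before classifier training.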
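The CCC figures quoted above (0.534, 0.568, 0.625) are Lin's concordance correlation coefficient between continuous annotations and model outputs; a minimal NumPy sketch (the function name is mine, not from the thesis):

```python
import numpy as np

def concordance_correlation_coefficient(y_true, y_pred):
    """Lin's CCC: agreement between two continuous sequences, in [-1, 1]."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mean_t, mean_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()
    cov = np.mean((y_true - mean_t) * (y_pred - mean_p))
    # Penalizes both low correlation and mean/scale mismatch.
    return 2.0 * cov / (var_t + var_p + (mean_t - mean_p) ** 2)
```

Unlike Pearson correlation, CCC drops when predictions track the annotation shape but are biased or rescaled, which is why it is the standard metric for continuous valence estimation.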
Keywords/Search Tags:Multimodal Emotion Recognition, Emotion Annotation, EEG Signal Processing, Facial Expression Recognition, Decision Level Fusion