
Research On Emotion Recognition Based On Time-Frequency Feature Fusion Network

Posted on: 2022-03-25
Degree: Master
Type: Thesis
Country: China
Candidate: Y Zheng
Full Text: PDF
GTID: 2518306527955239
Subject: Master of Engineering
Abstract/Summary:
As a research hotspot in brain-computer interfaces, emotion recognition plays an increasingly important role in fields such as mental health, biomedicine, and art evaluation. Compared with earlier emotion recognition based on facial expressions, emotion research based on EEG signals not only avoids the misrecognition caused by subtle facial expressions or deliberately concealed emotions expressed through misleading facial expressions, but also allows deep learning methods to mine deep features and recognize complex emotions with high accuracy. However, EEG signals are characterized by complex and diverse frequency bands, strong noise interference, and class imbalance. When deep learning methods are applied to EEG-based emotion recognition, classifying with a single EEG feature, such as time-domain or frequency-domain features alone, severely limits recognition accuracy. How to describe EEG signals with more features and combine the advantages of multiple features to improve emotion recognition accuracy has therefore long been an active research topic. On this basis, this thesis focuses on the extraction and fusion of time-domain and frequency-domain features and on the construction of fusion networks, and proposes an emotion recognition method based on a time-frequency feature fusion network. The specific contributions are as follows:

1. An emotion recognition method based on multi-feature fusion is proposed so that the EEG signal can be described more fully. The method extracts several types of features (energy entropy, differential entropy, wavelet mean, and wavelet standard deviation) and, in contrast to existing multi-dimensional features, combines them in pairs; mathematical fusion calculations are then applied to obtain the fused features. The fused features are fed into a purpose-built CNN for classification, the emotion recognition accuracy is computed, and the optimal feature combination strategy is identified by comparison (see the first sketch below). The method is evaluated on the DEAP dataset by comparing single-feature accuracy with multi-feature fusion accuracy. The results show that multi-feature fusion achieves higher emotion classification accuracy than single-feature recognition, with an improvement of up to 5.1%, verifying the feasibility of the method for emotion recognition.

2. A fusion neural network based on CNN&LSTM is proposed to fuse frequency-domain and time-domain features for emotion recognition. The frequency-domain and time-domain features of the EEG signal are computed and fed into the CNN branch and the LSTM branch of the CNN&LSTM fusion network, respectively; after feature extraction, the two feature types are concatenated into a time-frequency fusion feature vector to complete the fusion, and emotion recognition is then performed (see the second sketch below). Comparative experiments against single-network, single-feature recognition were also carried out. The experimental results show that, on the public DEAP dataset, the classification accuracy obtained with the CNN&LSTM fusion network exceeds that of CNN frequency-domain features alone or LSTM time-domain features alone by 7.3%, which verifies the feasibility of the method for emotion recognition.
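The following is a minimal sketch, not the thesis code, of how the pairwise multi-feature extraction and fusion described in contribution 1 could be implemented. The feature names (energy entropy, differential entropy, wavelet mean, wavelet standard deviation) come from the abstract; the DEAP-like segment shape, the db4 wavelet, and simple stacking as the "fusion calculation" are assumptions, since the abstract does not specify them.

# Hypothetical sketch (not the thesis code): pairwise feature fusion for EEG segments.
# Assumes 32-channel segments sampled at 128 Hz; the actual fusion calculation used in
# the thesis is not specified, so stacking a feature pair is used as a placeholder.
import numpy as np
import pywt


def differential_entropy(x):
    # DE of a (near-)Gaussian signal: 0.5 * log(2 * pi * e * variance)
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x) + 1e-12)


def wavelet_features(x, wavelet="db4", level=4):
    # Decompose one channel and derive energy entropy, wavelet mean, wavelet std.
    coeffs = pywt.wavedec(x, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    p = energies / (energies.sum() + 1e-12)
    energy_entropy = -np.sum(p * np.log(p + 1e-12))
    flat = np.concatenate(coeffs)
    return energy_entropy, flat.mean(), flat.std()


def channel_features(segment):
    # segment: (n_channels, n_samples) -> per-channel feature vectors
    feats = {"energy_entropy": [], "diff_entropy": [], "wavelet_mean": [], "wavelet_std": []}
    for ch in segment:
        ee, wm, ws = wavelet_features(ch)
        feats["energy_entropy"].append(ee)
        feats["diff_entropy"].append(differential_entropy(ch))
        feats["wavelet_mean"].append(wm)
        feats["wavelet_std"].append(ws)
    return {k: np.asarray(v) for k, v in feats.items()}


def fuse_pair(feats, name_a, name_b):
    # Placeholder "fusion calculation": stack the chosen feature pair into a
    # (n_channels, 2) map that a small CNN can consume.
    return np.stack([feats[name_a], feats[name_b]], axis=-1)


if __name__ == "__main__":
    segment = np.random.randn(32, 128 * 3)           # fake 3 s, 32-channel EEG segment
    fused = fuse_pair(channel_features(segment), "diff_entropy", "energy_entropy")
    print(fused.shape)                                # (32, 2), ready as CNN input

Similarly, a minimal sketch of a CNN&LSTM fusion network in the spirit of contribution 2: a CNN branch for frequency-domain feature maps, an LSTM branch for time-domain sequences, concatenation into a time-frequency fusion vector, then classification. All layer sizes, input shapes, and the two-class output are illustrative assumptions rather than the architecture actually used in the thesis.

# Hypothetical sketch (not the thesis architecture): a CNN&LSTM fusion network.
import torch
import torch.nn as nn


class CNNLSTMFusion(nn.Module):
    def __init__(self, n_channels=32, n_bands=4, n_classes=2):
        super().__init__()
        # CNN branch for frequency-domain features, shaped (batch, 1, n_channels, n_bands)
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 2)), nn.Flatten(),        # -> (batch, 32*4*2)
        )
        # LSTM branch for time-domain features, shaped (batch, seq_len, n_channels)
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=64, batch_first=True)
        # Classifier on the concatenated time-frequency fusion vector
        self.classifier = nn.Sequential(
            nn.Linear(32 * 4 * 2 + 64, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, freq_feats, time_feats):
        f = self.cnn(freq_feats)                  # frequency-domain representation
        _, (h_n, _) = self.lstm(time_feats)       # last hidden state of the LSTM
        fused = torch.cat([f, h_n[-1]], dim=1)    # time-frequency fusion vector
        return self.classifier(fused)


if __name__ == "__main__":
    model = CNNLSTMFusion()
    freq = torch.randn(8, 1, 32, 4)      # fake frequency-domain feature maps
    time = torch.randn(8, 128, 32)       # fake time-domain sequences
    print(model(freq, time).shape)       # torch.Size([8, 2])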
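The concatenation of the two branch outputs mirrors the abstract's description of joining the extracted features into a single time-frequency fusion vector before classification; any other late-fusion scheme (e.g. weighted summation or attention) would be a further assumption not stated in the abstract.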
Keywords/Search Tags: Brain-computer interface, emotion recognition, feature fusion, fusion network