Research Of Multimodal Emotion Recognition Based On Feature Fusion

Posted on: 2022-08-27  Degree: Master  Type: Thesis
Country: China  Candidate: S Wang  Full Text: PDF
GTID: 2518306350480684  Subject: Master of Engineering
Abstract/Summary:
As an important part of human-computer interaction, human emotion recognition has received growing attention. Emotion recognition is closely related to daily life: if a machine can reliably identify a user's emotional state, it can not only improve the quality of human-computer interaction but also help avoid accidents and dangerous situations. Single-modal emotion recognition extracts facial expression features from expression sequences, or acoustic features from speech signals, and classifies each modality separately; multimodal recognition combines the two modalities for emotion recognition.

For facial expression features, this thesis first obtains key frames from the video, detects the face in each key frame with the AdaBoost algorithm, preprocesses the detected region, extracts dense SIFT features, and finally performs emotion recognition with a multiple-kernel support vector machine (SVM) classifier. For speech-based emotion recognition, MFCC and LPCC features are extracted on the basis of time-domain and frequency-domain analysis; the MFCC and LPCC features are then fused and classified with a multiple-kernel SVM.

For multimodal fusion, classical methods such as serial (concatenation) fusion, PCA fusion, and CCA fusion are studied first. On this basis, a deep-learning feature-fusion method is used to fuse the features of the two modalities. Because this method simulates the cognitive process of the human brain, its fusion results better match the actual situation. Two restricted Boltzmann machines are trained on the expression features and the speech features respectively; the outputs of the two models are then fused into a new feature vector, which is fed to the multiple-kernel SVM classifier for emotion recognition.

Finally, a single-modal and multimodal emotion recognition system is designed and implemented on the SAVEE dataset. The expression in video is recognized in real time, generalization performance is demonstrated, the training process is recorded in log form, and good recognition results are achieved.
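The serial and PCA fusion methods mentioned above can be sketched in a few lines of NumPy. This is a generic illustration, not the thesis's implementation: the feature dimensions, sample count, and component count are arbitrary stand-ins.

```python
import numpy as np

def serial_fusion(face_feats, speech_feats):
    # Serial fusion: concatenate the two modal feature vectors per sample.
    return np.concatenate([face_feats, speech_feats], axis=1)

def pca_fusion(fused, n_components):
    # PCA fusion: project the concatenated features onto the top
    # principal components to reduce redundancy between modalities.
    centered = fused - fused.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    # eigh returns eigenvalues in ascending order; keep the largest ones.
    top = eigvecs[:, ::-1][:, :n_components]
    return centered @ top

rng = np.random.default_rng(0)
face = rng.normal(size=(50, 128))   # stand-in for dense-SIFT features
speech = rng.normal(size=(50, 24))  # stand-in for MFCC+LPCC features
fused = serial_fusion(face, speech)
reduced = pca_fusion(fused, 32)
print(fused.shape, reduced.shape)   # (50, 152) (50, 32)
```

The fused (or PCA-reduced) vectors would then be passed to the classifier in place of either single-modal feature set.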
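The RBM-based fusion step can be illustrated with a minimal Bernoulli restricted Boltzmann machine trained with one step of contrastive divergence (CD-1). This is a sketch of the idea only; the layer sizes, learning rate, and random inputs are assumptions, not the thesis's actual configuration.

```python
import numpy as np

class RBM:
    """Minimal Bernoulli RBM trained with CD-1 (illustrative sketch)."""
    def __init__(self, n_visible, n_hidden, seed=0):
        self.rng = np.random.default_rng(seed)
        self.W = self.rng.normal(0, 0.01, size=(n_visible, n_hidden))
        self.b_h = np.zeros(n_hidden)
        self.b_v = np.zeros(n_visible)

    @staticmethod
    def _sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def hidden_probs(self, v):
        # Probability of each hidden unit being on, given visible units.
        return self._sigmoid(v @ self.W + self.b_h)

    def train(self, data, lr=0.05, epochs=20):
        for _ in range(epochs):
            # Positive phase: hidden activations driven by the data.
            h_prob = self.hidden_probs(data)
            h_sample = (self.rng.random(h_prob.shape) < h_prob).astype(float)
            # Negative phase: one Gibbs step (CD-1) reconstruction.
            v_recon = self._sigmoid(h_sample @ self.W.T + self.b_v)
            h_recon = self.hidden_probs(v_recon)
            self.W += lr * (data.T @ h_prob - v_recon.T @ h_recon) / len(data)
            self.b_h += lr * (h_prob - h_recon).mean(axis=0)
            self.b_v += lr * (data - v_recon).mean(axis=0)

rng = np.random.default_rng(1)
face = rng.random((40, 64))    # stand-in facial features scaled to [0, 1]
speech = rng.random((40, 16))  # stand-in speech features scaled to [0, 1]

rbm_face = RBM(64, 32)
rbm_speech = RBM(16, 8)
rbm_face.train(face)
rbm_speech.train(speech)

# Fuse the two hidden representations into one feature vector per sample;
# this fused vector is what would go to the multiple-kernel SVM classifier.
fused = np.hstack([rbm_face.hidden_probs(face),
                   rbm_speech.hidden_probs(speech)])
print(fused.shape)  # (40, 40)
```

Training one RBM per modality and concatenating their hidden activations mirrors the fusion scheme described above, with the classifier operating on the joint learned representation.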
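The multiple-kernel SVM used as the classifier can be approximated in its simplest form: a fixed convex combination of base kernels fed to an SVM with a precomputed kernel. The kernel choices, widths, and weights below are illustrative assumptions, not the thesis's learned combination.

```python
import numpy as np
from sklearn.svm import SVC

def rbf_kernel(X, Y, gamma):
    # Pairwise squared distances, then the Gaussian (RBF) kernel matrix.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # toy binary labels

# Two base kernels at different scales; the combined kernel is a fixed
# convex combination -- the simplest multiple-kernel SVM variant.
K1 = rbf_kernel(X, X, gamma=0.1)
K2 = rbf_kernel(X, X, gamma=1.0)
w1, w2 = 0.6, 0.4  # assumed fixed weights for illustration
K = w1 * K1 + w2 * K2

clf = SVC(kernel="precomputed").fit(K, y)
acc = (clf.predict(K) == y).mean()
print(round(acc, 2))
```

Full multiple-kernel learning would optimize the weights w1 and w2 jointly with the SVM rather than fixing them by hand, but the precomputed-kernel mechanism is the same.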
Keywords/Search Tags: emotion recognition, feature extraction, multimodality, feature fusion, multiple-kernel support vector machine