
Research On Affect Analysis Method Based On Facial Image In Continuous Space

Posted on: 2020-12-14
Degree: Master
Type: Thesis
Country: China
Candidate: L Liu
Full Text: PDF
GTID: 2428330596979244
Subject: Circuits and Systems

Abstract/Summary:
Accurate recognition of human affect by computers is the basis of intelligent interaction. Since facial images are the main carrier of emotional information, the recognition of facial emotion has great research value. However, the low accuracy of affect analysis limits the application of affect analysis technology. This paper proposes two solutions targeting the two factors that limit this accuracy: facial affect analysis is susceptible to non-affective factors, and deep-learning-based facial affect analysis usually ignores the correlation between the two dimensions of emotion (arousal and valence).

To address the susceptibility to non-affective factors, we extract features from active facial patches and reduce their dimensionality, obtaining discriminative emotional features. Facial emotions are usually conveyed by the movement of only a few facial muscles, yet existing algorithms mostly extract emotional features from the entire face. This not only yields high-dimensional features but also makes them susceptible to non-affective factors such as head pose, illumination, and differences in appearance, which ultimately degrade recognition accuracy. Therefore, this paper first locates the active facial patches and facial landmarks and extracts their local appearance features; the two types of features are then combined and fed into an improved salient denoising stacked autoencoder, which reduces the dimensionality of the emotional features and selects discriminative ones. Finally, the selected features are used for affect analysis. Experiments show that the proposed method effectively improves the accuracy of emotion recognition.

To address the neglected correlation between the two emotion dimensions, we use a multi-output mean-square loss function to model them jointly. Samples used in affect analysis are collected under unconstrained conditions, with unfixed head poses and uneven illumination. Facial landmarks are emotion descriptors that are robust to illumination, and their positions accurately reflect the state of the emotional information. In this paper, a fully connected layer fuses the face features extracted by the network with the locations of the facial landmarks. Because the two emotion labels are positively correlated, the multi-output loss function used here contains both the overall loss and the difference between the two labels. Experiments on the AffectNet dataset show that the proposed method achieves better prediction performance than a traditional single-output convolutional network.
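The abstract does not give the exact architecture of the improved salient denoising stacked autoencoder, but its core idea, corrupting the concatenated appearance and landmark features and learning a low-dimensional code that reconstructs the clean input, can be sketched as a single denoising autoencoder layer. All dimensions, the noise level, and the learning rate below are illustrative assumptions, not values from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the concatenated patch-appearance + landmark features
# (the real inputs are local appearance features of active facial patches
# and landmarks; 32 dimensions here is purely illustrative).
X = rng.normal(size=(200, 32))

n_in, n_hid = 32, 8              # compress 32-D features into 8-D codes
W = rng.normal(scale=0.1, size=(n_in, n_hid))
b, c = np.zeros(n_hid), np.zeros(n_in)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr, noise = 0.05, 0.2
for _ in range(300):
    # Denoising: corrupt the input, then reconstruct the *clean* version
    X_noisy = X + rng.normal(scale=noise, size=X.shape)
    H = sigmoid(X_noisy @ W + b)       # low-dimensional emotional code
    X_hat = H @ W.T + c                # tied-weight linear decoder
    err = X_hat - X                    # reconstruction error
    # Backprop for the squared-error loss (tied encoder/decoder weights)
    dH = err @ W * H * (1 - H)
    gW = X_noisy.T @ dH + err.T @ H    # gradient w.r.t. the shared W
    W -= lr * gW / len(X)
    b -= lr * dH.sum(0) / len(X)
    c -= lr * err.sum(0) / len(X)

codes = sigmoid(X @ W + b)             # discriminative low-D features
print(codes.shape)                     # (200, 8)
```

A *stacked* version would train a second such layer on `codes`, and so on; the "salient" feature-selection step of the thesis's variant is not specified in the abstract and is omitted here.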
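The multi-output loss, combining an overall term with a term on the difference between the two labels, is described only at a high level. One plausible reading, with an assumed weighting factor `alpha` that the abstract does not specify, is:

```python
import numpy as np

def multi_output_loss(pred, target, alpha=0.5):
    """Loss over (valence, arousal) pairs: overall mean squared error
    plus a penalty on mispredicting the valence-arousal difference,
    which couples the two outputs. `alpha` is an illustrative weight."""
    overall = np.mean((pred - target) ** 2)
    diff_pred = pred[:, 0] - pred[:, 1]     # predicted valence - arousal
    diff_true = target[:, 0] - target[:, 1]
    coupling = np.mean((diff_pred - diff_true) ** 2)
    return overall + alpha * coupling

# Two toy samples of (valence, arousal) in [-1, 1]
pred = np.array([[0.3, 0.4], [0.1, -0.2]])
target = np.array([[0.5, 0.4], [0.0, -0.1]])
loss = multi_output_loss(pred, target)
print(round(loss, 4))
```

Because the coupling term shares gradients between the two outputs, errors that break the expected relation between valence and arousal are penalized more than errors that preserve it, which is one way to exploit their positive correlation.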
Keywords/Search Tags: Affect analysis, Autoencoder, Feature selection, Arousal and valence, Deep learning