Research on Facial Expression Recognition Based on a Double-Layer Convolutional Neural Network

Posted on: 2022-10-10    Degree: Master    Type: Thesis
Country: China    Candidate: F F Zu    Full Text: PDF
GTID: 2518306530473494    Subject: Software engineering
Abstract/Summary:
Facial expressions are the external mapping of a person's inner state and an important channel for analyzing emotion. By studying facial expressions, we can understand changes in emotion and guide it appropriately. Facial expression is a complex research object involving biology, sociology, psychology, behavioral and cognitive science, and other fields. With significant progress in several key disciplines, facial expression recognition has become a hot research area and has been applied in many scenarios, such as telemedicine, safe driving, and emotion analysis.

The current mainstream approach to facial expression recognition is deep learning based on convolutional neural networks. By convolving facial expression images, this approach extracts expression features automatically and achieves good recognition results. However, most deep learning methods have to stack many network layers to extract abstract features with strong representational power, which gives the architecture a large number of parameters and makes it difficult to train. To address this problem, this thesis modifies the traditional convolutional neural network architecture and designs three new architectures for facial expression recognition.

First, a network with only nine layers of neural units is designed. Its convolutional layers are stacked in pairs, which keeps feature extraction robust to interference and yields expressive feature vectors. While preserving accuracy, the more compact structure and smaller parameter count also make the network easy to train.

Second, a branch is added to the double-layer convolutional architecture to form a two-way network. The branch is structurally simpler than the main path: its double-layer convolution blocks are replaced with single-layer ones. The feature vectors extracted by the main path and the branch are fused by element-wise addition to improve recognition accuracy. The two paths also take different inputs: the main path uses the original images, while the branch uses expression images cropped by a face detection algorithm.

Third, a center loss function is introduced on top of the double-layer convolutional network. Specifically, a center loss module is added whose two inputs are the center vector of the correct class and the extracted expression feature vector. After unifying the dimensions of the two vectors, the module computes the gap between them and narrows it at each training step. The center loss works together with the original Softmax loss to enlarge between-class differences and reduce within-class differences, thereby improving recognition accuracy.

To validate the three proposed architectures, three facial expression datasets are selected: FER2013, JAFFE, and CK+. Each architecture is evaluated on all three datasets to demonstrate the generalization ability of the models, yielding nine sets of experimental results in total. Comparison with other methods confirms the advantages of the three architectures.
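To make the designs concrete, the following is a minimal PyTorch-style sketch of the two-way network with element-wise feature fusion and the joint Softmax-plus-center-loss objective described above. It is an illustration, not the thesis implementation: the block structure, channel widths, single-channel grayscale input, 128-dimensional feature vector, 7 expression classes, and the weighting factor lambda_center are all assumptions.

import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    # Two stacked 3x3 convolutions, as in the double-layer block described above.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )

def single_conv(in_ch, out_ch):
    # Single 3x3 convolution used by the simpler branch.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )

class TwoWayExpressionNet(nn.Module):
    # Main path (double-conv blocks) on the original image, a lighter branch
    # (single-conv blocks) on the face-cropped image; features fused by
    # element-wise addition. Channel sizes and feature dimension are assumptions.
    def __init__(self, num_classes=7, feat_dim=128):
        super().__init__()
        self.main_path = nn.Sequential(
            double_conv(1, 32), double_conv(32, 64), double_conv(64, 128),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, feat_dim),
        )
        self.branch = nn.Sequential(
            single_conv(1, 32), single_conv(32, 64), single_conv(64, 128),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, feat_dim),
        )
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, full_image, cropped_face):
        # Element-wise addition fuses the two feature vectors.
        fused = self.main_path(full_image) + self.branch(cropped_face)
        return self.classifier(fused), fused  # logits and features (for center loss)

class CenterLoss(nn.Module):
    # Penalizes the distance between each feature and its class center;
    # used alongside cross-entropy (the Softmax loss).
    def __init__(self, num_classes=7, feat_dim=128):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features, labels):
        return ((features - self.centers[labels]) ** 2).sum(dim=1).mean()

# Joint objective; lambda_center is a hypothetical weighting factor.
model = TwoWayExpressionNet()
ce_loss, center_loss, lambda_center = nn.CrossEntropyLoss(), CenterLoss(), 0.1

def training_loss(full_image, cropped_face, labels):
    logits, features = model(full_image, cropped_face)
    return ce_loss(logits, labels) + lambda_center * center_loss(features, labels)

Fusing by element-wise addition keeps the fused vector the same size as each path's output, so a single classifier and a single center loss can operate on the shared feature dimension.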
Keywords/Search Tags: Deep Learning, Center Loss Function, Convolutional Neural Networks, Facial Expression Recognition