
Research On Micro-expression Recognition Based On Representative AU Regions

Posted on: 2022-09-06 | Degree: Master | Type: Thesis
Country: China | Candidate: W H Wei | Full Text: PDF
GTID: 2518306311961589 | Subject: Electronics and Communications Engineering
Abstract/Summary:
Micro-expressions are facial expressions of short duration and small movement amplitude. They are produced unconsciously and reveal the true emotions that people try to hide. Micro-expression recognition plays an important role in lie detection, psychological diagnosis, business negotiation, national security, and other fields. Most existing micro-expression recognition methods extract features directly from micro-expression image sequences, mainly using optical flow, texture description, or deep learning. However, these methods ignore the emotional information carried by facial action units (AUs), so the extracted features may contain redundant information unrelated to expression, such as appearance and identity. In addition, existing micro-expression databases are generally small, which makes models prone to overfitting and seriously limits recognition accuracy. To address these problems, this thesis proposes a representative AU region extraction method based on multi-task learning and a micro-expression recognition method based on a cross-modal quadruplet loss. The main contributions of this thesis are as follows:

First, a representative AU region extraction method based on multi-task learning is proposed. The method builds a two-stream network for AU mask feature extraction: one stream learns to extract all AU regions of a micro-expression sequence, while the other learns to extract the representative AU regions, assisted by spatial-attention and temporal-attention modules. The extracted representative AU region mask is multiplied with the original micro-expression images to obtain a weighted micro-expression image sequence. Reflecting the different characteristics of the two tasks, cross-entropy loss is used when extracting all AU regions, since they cover roughly half of the image, whereas Dice loss is used when extracting the representative AU regions, since they occupy only a small fraction of the image. Experiments on the MMEW, SAMM, and CASME II databases show that this method effectively extracts the representative AU regions of micro-expressions.

Second, a micro-expression recognition method based on a cross-modal quadruplet loss is proposed. A micro-expression sample is selected as the anchor; a micro-expression sample with the same emotion as the anchor serves as the positive micro-expression sample; a macro-expression sample with the same emotion serves as the positive macro-expression sample; and, based on the influence of stimulus intensity on emotion, a macro-expression whose emotion is merely "similar" to the anchor's serves as the negative macro-expression sample. These samples form a cross-modal quadruplet, which greatly enlarges the number of training samples and speeds up network convergence. In this method, the original micro-expression sequence is first fed into the AU mask feature extraction network to extract its representative AU region. The weighted micro-expression image sequence is then passed to a 3D-ResNet backbone containing non-local modules, which enlarge the receptive field of the higher layers and capture long-range dependencies. During training, the loss function is the sum of the cross-modal quadruplet loss and the cross-entropy loss; only the cross-entropy loss is used at test time. Using subject-independent five-fold cross-validation, experiments on the MMEW, SAMM, and CASME II databases show that the proposed method achieves better micro-expression recognition results.
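The abstract does not give the exact loss formulas used by the AU mask network; as a minimal NumPy sketch of the two segmentation objectives, assuming soft (probabilistic) masks in [0, 1] (function names and the toy masks are illustrative, not from the thesis):

```python
import numpy as np

def bce_loss(pred, target, eps=1e-7):
    """Per-pixel binary cross-entropy, averaged over the mask.

    Suited to the 'all AU regions' task, where foreground and
    background each cover roughly half of the image."""
    pred = np.clip(pred, eps, 1.0 - eps)
    return float(-np.mean(target * np.log(pred)
                          + (1 - target) * np.log(1 - pred)))

def dice_loss(pred, target, eps=1e-7):
    """Soft Dice loss: 1 - 2|P*G| / (|P| + |G|).

    Suited to the 'representative AU regions' task, where the
    foreground occupies only a small fraction of the image, so a
    pixel-averaged loss would be dominated by the background."""
    inter = np.sum(pred * target)
    return float(1.0 - 2.0 * inter / (np.sum(pred) + np.sum(target) + eps))

# Toy 4x4 masks: one small foreground pixel, predicted with high overlap.
target = np.zeros((4, 4)); target[0, 0] = 1.0
pred = np.zeros((4, 4)); pred[0, 0] = 0.9
print(dice_loss(pred, target))  # small value: prediction overlaps well
```

Given such a mask, the weighted sequence described above is then simply `mask * frame` applied per frame before the sequence is fed to the recognition network.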
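The abstract does not state the exact form of the cross-modal quadruplet loss; one plausible margin-based sketch, assuming embedding vectors for the anchor micro-expression `a`, positive micro-expression `p_mi`, positive macro-expression `p_ma`, and negative macro-expression `n_ma` (the margins `m1`, `m2` and all names are illustrative assumptions, not taken from the thesis):

```python
import numpy as np

def quadruplet_loss(a, p_mi, p_ma, n_ma, m1=0.5, m2=0.3):
    """Margin-based cross-modal quadruplet loss (illustrative form).

    Pulls the anchor micro-expression toward the same-emotion micro-
    and macro-expression samples, and pushes it away from the
    macro-expression whose emotion is merely 'similar'."""
    d = lambda x, y: float(np.sum((x - y) ** 2))  # squared Euclidean distance
    intra = max(0.0, d(a, p_mi) - d(a, n_ma) + m1)  # micro positive vs. negative
    cross = max(0.0, d(a, p_ma) - d(a, n_ma) + m2)  # macro positive vs. negative
    return intra + cross

# Toy 2-D embeddings: both positives near the anchor, negative far away.
a    = np.array([0.0, 0.0])
p_mi = np.array([0.1, 0.0])
p_ma = np.array([0.0, 0.2])
n_ma = np.array([2.0, 2.0])
print(quadruplet_loss(a, p_mi, p_ma, n_ma))  # 0.0: both margins satisfied
```

In training this term would be added to the cross-entropy classification loss, as the abstract describes; at test time only the classifier is needed, so the quadruplet term is dropped.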
Keywords/Search Tags:micro-expression recognition, action unit, attention mechanism, cross-modal quadruplet loss