
Research on Facial Action Unit Detection Algorithm Constrained by Co-occurrence Relationship

Posted on: 2023-06-29    Degree: Master    Type: Thesis
Country: China    Candidate: S Z Shi    Full Text: PDF
GTID: 2558306845498994    Subject: Signal and Information Processing
Abstract/Summary:
Facial expression is one of the most important ways in which individuals communicate emotion. As a key component of facial expression, the facial Action Unit (AU) can be used to interpret human emotion, so facial AU detection has been widely applied in fields such as human-computer interaction and affective computing, and research on it has important theoretical significance and application value. Most previous work has focused on designing or learning complex regional feature representations while ignoring the modeling of co-occurrence relationships between AUs. This thesis focuses on the spatial and temporal co-occurrence relationships between AUs, studies facial AU detection from different dimensions, and proposes three facial action unit detection models. The main research and contributions are summarized as follows:

(1) A facial action unit detection model based on describing regional co-occurrence relationships is proposed. The model consists of a multi-scale region learning module and a regional co-occurrence relationship learning module. The multi-scale region learning module divides the face into regions of different scales, which addresses the uneven distribution of AU region sizes without depending on facial landmark locations. The regional co-occurrence relationship learning module models the co-occurrence relationships between regions with an LSTM, exploiting rich local information to learn local facial co-occurrence patterns. Compared with LP-Net, which relies on landmark information, the F1 scores of the proposed model on the BP4D and DISFA datasets are improved by 1.0% and 2.1%, respectively. These results verify that multi-scale regional features and their co-occurrence relationships can effectively improve AU detection performance.
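As an illustration of the region-level co-occurrence modeling in (1), the following is a minimal sketch assuming a PyTorch implementation; the module name, feature dimensions, and number of AUs are illustrative assumptions, not the thesis' actual code.

import torch
import torch.nn as nn

class RegionCooccurrenceLSTM(nn.Module):
    # Hypothetical sketch (not the thesis code): per-region features are
    # treated as a sequence so a bidirectional LSTM can pass information
    # between regions and capture their co-occurrence.
    def __init__(self, feat_dim=512, hidden_dim=256, num_aus=12):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_aus)

    def forward(self, region_feats):
        # region_feats: (batch, num_regions, feat_dim), one vector per
        # multi-scale face region
        out, _ = self.lstm(region_feats)               # contextualized region features
        pooled = out.mean(dim=1)                       # aggregate over regions
        return torch.sigmoid(self.classifier(pooled))  # multi-label AU scores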
(2) A dual attention-guided facial action unit detection model is proposed. The model uses predefined explicit attention and learnable implicit attention to guide facial AU feature extraction along the spatial and channel dimensions (a brief sketch of this attention scheme is given after item (3)). In addition, since global features and local features describe different face attributes and complement each other, the model further learns feature representations from both global and local perspectives and fuses them through a dedicated fusion module to achieve better generalization. The proposed model performs facial AU detection end to end. On the BP4D and DISFA datasets, its F1 scores are improved by 1.1% and 6.7% over the SRERL model, which shows that the explicit and implicit attention introduced in this chapter are helpful for facial AU detection.

(3) A facial action unit detection model constrained by spatial-temporal co-occurrence is proposed. In this model, facial images are decoupled into independent AU features by an attention-based feature decoupling module so that the relationships between AUs can be modeled by subsequent modules. On this basis, the thesis proposes a group of heterogeneous spatial-temporal relationship modeling modules: the co-occurrence knowledge graph module guides the learning of spatial-temporal relations by introducing prior relations, while the spatial-temporal Transformer module adaptively extracts spatial-temporal relations through the interaction of spatial-temporal features. To further model the temporal relationship between AUs, the thesis uses a self-attention mechanism to fuse AU features along the temporal dimension. Compared with the mainstream UGN-B and AU-Transformer models, the F1 scores of the proposed model on the BP4D and DISFA datasets are improved by 1.3%, 0.4%, 2.6%, and 1.1%, respectively, confirming the important role of spatial-temporal relations in facial AU detection.
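To make the dual attention idea in (2) concrete, here is a minimal sketch, again assuming PyTorch; the squeeze-and-excitation-style channel gate stands in for the learnable implicit attention and the predefined mask for the explicit attention, both assumptions rather than the thesis' exact design.

import torch
import torch.nn as nn

class DualAttention(nn.Module):
    # Hypothetical sketch: an explicit (predefined) spatial mask and an
    # implicit (learned) channel gate jointly re-weight AU features.
    def __init__(self, channels=256, reduction=16):
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid())

    def forward(self, feat, explicit_map):
        # feat: (B, C, H, W); explicit_map: (B, 1, H, W) prior AU-region mask
        feat = feat * explicit_map             # explicit spatial attention
        feat = feat * self.channel_gate(feat)  # implicit channel attention
        return feat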
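Finally, a minimal sketch of the temporal fusion step in (3), assuming PyTorch and standard multi-head self-attention; the thesis' co-occurrence knowledge graph and spatial-temporal Transformer branches are not reproduced here, and all names and dimensions are illustrative.

import torch
import torch.nn as nn

class TemporalAUFusion(nn.Module):
    # Hypothetical sketch: decoupled per-frame AU features are fused along
    # the temporal axis with self-attention before classification.
    def __init__(self, feat_dim=256, num_heads=4, num_aus=12):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.classifier = nn.Linear(feat_dim, num_aus)

    def forward(self, au_feats):
        # au_feats: (batch, num_frames, feat_dim), one AU feature per frame
        fused, _ = self.attn(au_feats, au_feats, au_feats)   # temporal self-attention
        return torch.sigmoid(self.classifier(fused[:, -1]))  # AU scores for the last frame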
Keywords/Search Tags:Facial Action Unit, Co-occurrence Relationship, Multi-scale Feature, Attention Mechanism, Spatial-Temporal Interaction