
Animation Facial Expression Transformation Based On Improved Generative Adversarial Network

Posted on: 2023-01-25    Degree: Master    Type: Thesis
Country: China    Candidate: Y W Wu    Full Text: PDF
GTID: 2568306800466674    Subject: Software engineering
Abstract/Summary:
The rapid development of computer vision and graphics has driven related research in the field of animation. Animation facial expression transformation can be regarded as a multi-domain image-to-image translation task on unpaired data. This technology can reduce the labor investment of art creators, spark creators' inspiration, and help amateurs realize artistic creation. However, the following problems remain: (1) existing methods rely on datasets annotated with discrete expression labels and on multi-domain image-to-image translation models, so they cannot provide fine-grained expression control and their control strength is weak; (2) existing methods use a reconstruction loss to preserve the identity of the animated face, but for some images it is difficult to preserve identity while transforming the expression, which constrains the quality of the transformation.

To solve problem (1), this thesis proposes an animation facial expression transformation method based on decoder control. Taking the generative adversarial network as the basic framework, it designs AFET-GAN, a multi-domain image-to-image translation model for animation facial expression transformation. The generator consists of two subnetworks. The first is a control-information mapping network, which maps discrete expression labels to high-dimensional latent control information through affine and non-linear transformations. The second is an expression transformation network, which achieves multi-level feature control by injecting the control information into the decoder multiple times in the form of AdaIN. Qualitative and quantitative evaluation of the experimental results shows that AFET-GAN strengthens expression control, addresses both the inability of discrete expression labels to support fine-grained control and the weak control strength of existing multi-domain image-to-image translation models, and outperforms other methods in animation facial expression transformation.

To solve problem (2), this thesis proposes an animation facial expression transformation method for maintaining identity information. Building on AFET-GAN, it designs the AFET-GAN V2 model by integrating an attention masking mechanism, which focuses the model's expression control on expression-related regions and avoids interference from features of unrelated regions. During training, an identity-preservation method based on a context loss is designed: the feature similarity between the model's output and input images is computed with a pre-trained loss network, replacing the reconstruction loss for maintaining facial identity. Qualitative and quantitative evaluation shows that AFET-GAN V2 resolves the cases in which reconstruction-loss-based methods fail to preserve facial identity, produces more natural expression transformations, and outperforms other methods at maintaining identity information in animation facial expression transformation tasks.
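The generator described above pairs a mapping network (affine plus non-linear transforms over a discrete label) with AdaIN-style injection into the decoder. A minimal sketch of these two pieces, assuming ReLU activations, illustrative layer sizes, and the standard AdaIN formulation (the abstract does not specify the thesis's exact architecture):

```python
import numpy as np

def mapping_network(label_onehot, layers):
    """Map a discrete expression label to high-dimensional latent control
    information through stacked affine + non-linear (ReLU) transforms."""
    x = label_onehot
    for W, b in layers:
        x = np.maximum(W @ x + b, 0.0)  # affine transform, then ReLU
    return x

def adain(content, gamma, beta, eps=1e-5):
    """Adaptive Instance Normalization: normalize each channel of a
    decoder feature map, then scale/shift it with parameters derived
    from the control information.
    content: (C, H, W) feature map; gamma, beta: (C,) control vectors."""
    mu = content.mean(axis=(1, 2), keepdims=True)
    sigma = content.std(axis=(1, 2), keepdims=True)
    normalized = (content - mu) / (sigma + eps)
    return gamma[:, None, None] * normalized + beta[:, None, None]
```

Injecting (gamma, beta) pairs at several decoder levels is what gives the multi-level feature control the abstract attributes to the expression transformation network.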
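The two additions in AFET-GAN V2 can be sketched in the same spirit. The soft-mask blending formula and the per-channel cosine comparison below are plausible stand-ins, not the thesis's actual mask architecture or loss network (the abstract names neither):

```python
import numpy as np

def attention_blend(source, generated, mask):
    """Soft attention masking: expression-related regions (mask near 1)
    come from the generator; unrelated regions (mask near 0) are copied
    unchanged from the input, so they cannot disturb identity."""
    return mask * generated + (1.0 - mask) * source

def feature_identity_loss(feat_out, feat_in, eps=1e-8):
    """Stand-in for the context-style identity loss: compare feature maps
    of output and input (as extracted by a pre-trained loss network) by
    per-channel cosine distance, instead of pixel-wise reconstruction."""
    a = feat_out.reshape(feat_out.shape[0], -1)
    b = feat_in.reshape(feat_in.shape[0], -1)
    cos = (a * b).sum(axis=1) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + eps)
    return float((1.0 - cos).mean())
```

Comparing features rather than pixels is what lets the model change expression-related pixels freely while still being penalized for drifting away from the input face's identity.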
Keywords/Search Tags: Animation content generation, Expression transformation, Generative adversarial networks, Image-to-image translation