
Facial Image Domain Transfer Via Generative Adversarial Networks

Posted on: 2020-07-23    Degree: Master    Type: Thesis
Country: China    Candidate: S J Shi    Full Text: PDF
GTID: 2428330605967987    Subject: Computer Science and Technology
Abstract/Summary:
Face image domain transfer aims at converting facial images between different modalities, such as photos and sketches, and has wide applications in digital entertainment and law enforcement. Precisely depicting face photos/sketches remains challenging because of the requirements of structural realism and textural consistency. Although existing methods achieve compelling results, they mostly produce blurring and severe deformation over various facial components, which makes the synthesized images look unrealistic. In view of the excellent performance of generative adversarial networks in multi-domain transfer, we conduct research on face image domain transfer based on generative adversarial networks. Specifically, we take face photo-sketch synthesis as an example and study how to generate a corresponding sketch/photo given a photo/sketch. The research in this thesis covers the following two aspects.

First, we propose a composition-aided generative adversarial network that uses facial composition information to assist the generation of face sketches/photos. In this method, we use paired inputs consisting of a face photo/sketch and the corresponding pixel-wise face labels to generate a sketch/photo, propose a compositional reconstruction loss that focuses training on hard-to-generate components and delicate facial structures (a minimal sketch of such a loss is given below), and employ a stacked composition-aided generative adversarial network to further rectify defects and add compelling details. Experimental results on multiple standard datasets show that the photos/sketches generated by this method have good visual quality and effectively preserve facial identity information.

Second, we propose a deformable generative adversarial network with multiple distribution constraints to synthesize face photos/sketches. In this work, we introduce deformable convolution into the baseline generative adversarial network to overcome the geometric distortion between photos and sketches (also sketched below), improve the U-Net skip-connection structure for more accurate information transmission, add unconditional discrimination, identity-loss, and structural-loss constraints to improve the authenticity of the generated results, and finally adopt a progressive growing training strategy to improve the robustness of training and the quality of the generated results. Experimental results on multiple standard datasets show that the photos/sketches generated by this method have good visual quality, especially in textured areas such as hair, and effectively preserve facial identity information.

In conclusion, the two proposed methods address the problems of poor visual quality and geometric distortion in face image domain transfer from different directions. Our results are visually clearer, more natural and realistic in texture, and more robust, and they are better than or comparable to other methods in recognition accuracy and quality evaluation. Moreover, our models can be extended to other face image domain transfer tasks, which is of value for both theoretical research and practical application in this field.
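The abstract does not give the exact form of the compositional reconstruction loss. The following is a minimal sketch, assuming it is a per-pixel L1 reconstruction loss reweighted by the pixel-wise face-parsing labels so that hard-to-generate components (e.g. eyes, nose, mouth) receive larger weights; the function name, the component_weights mapping, and the label values are illustrative assumptions rather than the thesis's exact formulation.

```python
import torch

def compositional_reconstruction_loss(fake, real, parsing, component_weights):
    """Hypothetical component-weighted L1 reconstruction loss.

    fake, real        : generated / ground-truth images, shape (B, C, H, W)
    parsing           : integer face-parsing map, shape (B, H, W)
    component_weights : dict {parsing label -> scalar weight}; labels not
                        listed keep weight 1.0
    """
    l1 = (fake - real).abs().mean(dim=1)               # per-pixel L1, shape (B, H, W)
    weights = torch.ones_like(l1)
    for label, w in component_weights.items():
        weights = torch.where(parsing == label,
                              torch.full_like(weights, w), weights)
    return (weights * l1).sum() / weights.sum()        # weighted mean over all pixels

# Example: emphasize hypothetical labels for eyes (4), nose (10), and mouth (12)
# loss = compositional_reconstruction_loss(fake, real, parsing, {4: 5.0, 10: 5.0, 12: 5.0})
```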
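Similarly, the abstract only names deformable convolution as the mechanism for handling geometric distortion between photos and sketches. The block below is a hedged sketch of how a deformable convolution could replace a standard convolution inside a generator, using torchvision.ops.DeformConv2d with a small regular convolution predicting the sampling offsets; the class name, channel sizes, and normalization choice are assumptions for illustration, not the thesis's architecture.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableGenBlock(nn.Module):
    """Illustrative generator block: a regular conv predicts per-location
    sampling offsets, and a deformable conv aggregates features at the
    offset positions, letting the block adapt to geometric deformation."""

    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        # 2 offsets (x, y) for each of the kernel_size * kernel_size taps
        self.offset_conv = nn.Conv2d(in_ch, 2 * kernel_size * kernel_size,
                                     kernel_size, padding=padding)
        self.deform_conv = DeformConv2d(in_ch, out_ch, kernel_size, padding=padding)
        self.norm = nn.InstanceNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        offset = self.offset_conv(x)        # learned geometric offsets
        out = self.deform_conv(x, offset)   # sample features at offset locations
        return self.act(self.norm(out))

# Example usage with hypothetical feature maps:
# block = DeformableGenBlock(64, 64)
# y = block(torch.randn(1, 64, 128, 128))   # -> (1, 64, 128, 128)
```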
Keywords/Search Tags:Domain Transfer, Face Photo-sketch Synthesis, Generative Adversarial Network, Deep Learning, Image-to-image Translation