Face generation is one of the most popular research directions in computer vision. Researchers widely apply it to face editing, makeup editing, face inpainting, face reconstruction, and face translation. In recent years, with the popularity of live streaming, beauty applications, and similar services, and with the continuous progress of deep learning, especially the introduction of generative adversarial networks (GANs), makeup editing has become one of the most actively studied topics. The goal of makeup editing is to apply or remove makeup on a face while preserving the identity of that face. Such single-attribute editing problems are more challenging than conventional domain-to-domain translation tasks, especially when training data cannot be paired. Although research based on generative adversarial networks has advanced rapidly and more and more instance-level generation tasks are being realized, there is still insufficient understanding of how the mapping from random codes to image space works. Based on these observations, this paper focuses on facial makeup editing. The main work and innovations are as follows:

(1) In view of the poor interpretability of generative adversarial networks and the difficulty of extracting and expressing the semantic features of makeup attributes, this paper first studies the semantics of face attributes in latent space and proposes a reversible encoder for disentanglement, learning the latent code composition that best represents makeup attributes in the latent space. Attribute labels are then attached to each image and its corresponding latent code, and binary classification of the latent space is achieved through supervision and constraints. Combined with latent code editing methods, makeup editing is realized.

(2) Makeup features include both global makeup styles and local attribute features. In view of this characteristic, the generative network proposed in this paper adopts progressive training and applies instance normalization to perform two style edits at each layer: aligning the mean and variance of the content image with those of the style image, once for a global makeup style edit and once for a local makeup style edit. For the instance-level makeup editing task, loss functions and constraints are designed for the color and position features of makeup, so that the proposed method better matches makeup editing on real faces. Extensive experimental results show that makeup editing based on global and local styles better preserves identity information while editing facial makeup.

(3) Because the network is trained progressively, its computation and parameter counts are high, which is unfavorable for model transfer or for deployment on lightweight devices such as smartphones. Drawing on the theory of knowledge distillation, this paper designs a compact student generation network that replaces the ordinary convolutions in the generator with depthwise separable convolutions using kernels of different sizes, thereby enlarging the receptive field, and applies channel shuffle to fuse information across features and form a wider channel map. Meanwhile, the generative network designed in this paper serves as the teacher network for knowledge distillation, making the network lightweight while preserving the performance of the generator.
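As an illustration of the latent-space attribute classification and latent code editing described in (1), the sketch below fits a linear decision boundary between makeup and no-makeup latent codes and moves a code along its normal. The classifier choice, the `strength` parameter, and the `generator.synthesize` call are assumptions for illustration, not the exact method used in this work.

```python
import numpy as np
from sklearn.svm import LinearSVC

def fit_makeup_direction(latent_codes, makeup_labels):
    """Fit a linear boundary separating makeup / no-makeup latent codes
    and return its unit normal, used as the editing direction."""
    clf = LinearSVC(C=1.0).fit(latent_codes, makeup_labels)
    normal = clf.coef_.ravel()
    return normal / np.linalg.norm(normal)

def edit_makeup(latent_code, direction, strength=2.0):
    """Move a latent code along the makeup direction.
    Positive strength applies makeup, negative strength removes it."""
    return latent_code + strength * direction

# Usage (hypothetical generator):
# direction = fit_makeup_direction(W_train, y_train)
# edited = edit_makeup(w, direction, strength=2.0)
# image = generator.synthesize(edited)
```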
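The instance normalization step in (2) amounts to aligning per-channel statistics of a content representation with those of a style representation. A minimal sketch follows; applying it to intermediate feature maps rather than raw images, and the use of PyTorch, are assumptions.

```python
import torch

def adain(content_feat, style_feat, eps=1e-5):
    """Adaptive instance normalization: normalize the content feature map
    per channel, then rescale it to the style feature map's statistics.

    content_feat, style_feat: tensors of shape (N, C, H, W)
    """
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True) + eps
    return (content_feat - c_mean) / c_std * s_std + s_mean

# In a progressive generator, each layer could call adain twice:
# once with a style feature computed from the whole reference face
# (global makeup style) and once with a style feature from a local
# region such as the lips or eyes (local makeup style).
```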
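For the compact student generator in (3), one way to combine depthwise separable convolutions with several kernel sizes and channel shuffle is sketched below; the block layout, the kernel sizes (3, 5, 7), and the normalization and activation choices are assumptions rather than the exact architecture used here.

```python
import torch
import torch.nn as nn

def channel_shuffle(x, groups):
    """Interleave channels across groups so information from the
    different kernel-size branches is mixed."""
    n, c, h, w = x.size()
    x = x.view(n, groups, c // groups, h, w).transpose(1, 2).contiguous()
    return x.view(n, c, h, w)

class MixedDepthwiseSeparableBlock(nn.Module):
    """Parallel depthwise separable convolutions with different kernel
    sizes, followed by channel shuffle (hypothetical student block)."""

    def __init__(self, channels, kernel_sizes=(3, 5, 7)):
        super().__init__()
        assert channels % len(kernel_sizes) == 0
        split = channels // len(kernel_sizes)
        self.groups = len(kernel_sizes)
        self.branches = nn.ModuleList([
            nn.Sequential(
                # depthwise conv: each kernel size gives a different receptive field
                nn.Conv2d(split, split, k, padding=k // 2, groups=split, bias=False),
                # pointwise conv completes the depthwise separable convolution
                nn.Conv2d(split, split, 1, bias=False),
                nn.InstanceNorm2d(split),
                nn.ReLU(inplace=True),
            )
            for k in kernel_sizes
        ])

    def forward(self, x):
        chunks = torch.chunk(x, self.groups, dim=1)
        out = torch.cat([branch(c) for branch, c in zip(self.branches, chunks)], dim=1)
        return channel_shuffle(out, self.groups)
```

In distillation, a student built from such blocks would be trained so that its outputs (and possibly intermediate features) match those of the teacher generator, for example with an L1 or perceptual loss; the specific distillation losses here are left as described in the thesis body.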