Facial attribute editing is a challenging topic within style transfer. It aims to transfer specific attributes of a facial image so as to generate an image with the same facial identity but different attributes. This paper proposes three facial attribute editing models that simultaneously capture the local and global features of a facial image, and that support editing of single or multiple facial attributes as well as control over attribute strength.

First, this paper proposes a facial attribute editing method based on feature masks. The method adds a transformer module to the generator for facial attribute transfer, and introduces an HDC attention module to capture the global information needed to generate the facial feature mask. The feature mask and the transformer together realize single- and multi-attribute transfer and attribute-strength control, making the generated images more realistic. The method is evaluated quantitatively and qualitatively on the CelebA dataset, demonstrating its effectiveness.

Second, this paper proposes a facial attribute editing method based on a self-attention mechanism. Its generator combines a Convolutional Neural Network and a Vision Transformer, which extract local and global features respectively. A U-Net structure is adopted, whose skip connections fuse shallow and deep facial features, and a CBAM attention module is added to prevent the loss of feature information. The generator also produces a semantic attention mask and a content mask, which help separate the foreground and background of the image. Training and experiments on the CelebA dataset demonstrate the effectiveness of the method.

Finally, this paper proposes a facial attribute editing method based on global perception. Its encoder consists of residual blocks and global perception modules, giving the image representation both local
and global perception. The transformer consists of a multi-scale attribute feature fusion module and a multi-layer perceptron transformer. The fusion module compensates for the attribute features lost during the encoder's down-sampling, while the multi-layer perceptron transformer performs attribute decoupling: the facial attribute vector is divided into related and unrelated attribute vectors, the unrelated vectors are kept unchanged, and the related vectors are transferred. This method obtains good results on the CelebA-HQ dataset.
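The attribute-decoupling step of the third method can be illustrated with a minimal sketch: only the user-requested (related) attribute entries are changed, while unrelated entries keep their source values. The attribute names, the binary encoding, and the difference-vector formulation below are illustrative assumptions, not the paper's exact interface:

```python
import numpy as np

# Hypothetical subset of CelebA attribute names (assumption for illustration).
ATTRS = ["Bangs", "Blond_Hair", "Eyeglasses", "Male", "Young"]

def edit_attribute_vector(src, edits):
    """Return a new attribute vector in which only the requested (related)
    attributes are set to their target values; all unrelated attributes
    keep the source values, i.e. they are left untouched."""
    out = src.copy()
    for name, value in edits.items():
        out[ATTRS.index(name)] = value
    return out

# Source face: blond hair, young, no bangs/glasses, not male.
src = np.array([0, 1, 0, 0, 1], dtype=np.float32)

# Related attributes to transfer: add eyeglasses, remove blond hair.
tgt = edit_attribute_vector(src, {"Eyeglasses": 1, "Blond_Hair": 0})

# Difference vector that could condition the generator; it is zero on
# every unrelated attribute, so those are guaranteed to stay unchanged.
diff = tgt - src
```

Because unrelated entries of `diff` are exactly zero, a generator conditioned on it has no signal to alter those attributes, which is the intent of keeping the unrelated attribute vectors unchanged.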