Character dressing-effect generation decomposes the key attributes of several source images and synthesizes them into a target character image. It has attracted growing attention in virtual fitting, clothing design, image editing, identity recognition, and related fields. How to decompose the components of a character image and recombine them, in a controllable way, into a target character image remains an important challenge, so an effective and controllable method for generating and displaying character dressing effects has clear research significance and application value. Existing work on dressing-effect generation has actively addressed changes in character pose, clothing, and style transfer, and has achieved notable results. However, most of these studies build a specific network around a single, fixed control factor and depend heavily on training samples; during pose transformation, clothing texture details and the background often change, causing clothing distortion and other artifacts. The central challenge addressed in this paper is how to decouple the control factors of character images and flexibly set generation targets, so as to achieve zero-shot controllable synthesis of character images. To this end, this paper proposes a controllable multi-style image generation network based on group-supervised zero-shot learning for the controllable synthesis of character images. The work consists of the following three parts.

(1) A controllable image synthesis network based on group-supervised zero-shot learning (C-GZS) is proposed for controllable character image generation. The network consists of two independent paths: one extracts control factors from the input images, and the other synthesizes character images in a zero-shot manner. The extraction path maps samples into a decoupled latent representation space, yielding a global representation of the image's control factors; the synthesis path reconstructs the desired character image from a weighted combination of those control factors (a minimal code sketch of this factor-combination idea follows part (2) below). C-GZS is designed specifically for mining data with controllable attributes: the added control factors not only impose comprehensive disentanglement constraints but also give users more flexible and continuous control over character attributes. In addition, using an encoder-decoder for pose transfer instead of manual semantic segmentation greatly improves the efficiency of attribute decomposition.

(2) A controllable multi-style character synthesis model based on group-supervised learning is proposed. The model adds style information as an additional control factor on top of the C-GZS network and outputs character images with controllable style. It first improves the style-based generator architecture of StyleGAN and uses the improved StyleGAN to generate style images; then, to address the loss of detail when style images are passed through the C-GZS network, a self-attention mechanism is added to C-GZS so that style features are matched more accurately; finally, the generated results are evaluated both qualitatively and quantitatively. The results show that the proposed model produces clearer stylized images and performs better on evaluation criteria such as the structural similarity index.
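To make the two-path idea in parts (1) and (2) concrete, the following is a minimal sketch, assuming a PyTorch implementation, of how an encoder might map an image into one latent chunk per control factor (identity, pose, clothing, style) and how a decoder might reconstruct a target image from a weighted combination of factors drawn from different reference images. All module names, factor names, and dimensions are illustrative assumptions, not the actual C-GZS implementation, which additionally applies group-supervised disentanglement constraints and a self-attention block for style matching.

```python
# Illustrative sketch of the factor-swap / weighted-combination idea (assumed
# PyTorch); names and sizes are hypothetical, not the thesis implementation.
import torch
import torch.nn as nn

FACTORS = ("identity", "pose", "clothing", "style")
LATENT_DIM = 64  # per-factor latent size (assumed)

class FactorEncoder(nn.Module):
    """Encode an RGB image into one latent vector per control factor."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # One linear head per control factor -> decoupled latent chunks.
        self.heads = nn.ModuleDict(
            {name: nn.Linear(64, LATENT_DIM) for name in FACTORS}
        )

    def forward(self, x):
        h = self.backbone(x)
        return {name: head(h) for name, head in self.heads.items()}

class FactorDecoder(nn.Module):
    """Decode concatenated factor latents back into a 64x64 image."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(LATENT_DIM * len(FACTORS), 64 * 8 * 8)
        self.up = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, factors):
        z = torch.cat([factors[name] for name in FACTORS], dim=1)
        h = self.fc(z).view(-1, 64, 8, 8)
        return self.up(h)

def synthesize(encoder, decoder, refs, weights):
    """Blend each control factor across the reference images with user-set
    weights, then decode the combination into a new character image."""
    encoded = {src: encoder(img) for src, img in refs.items()}
    mixed = {}
    for name in FACTORS:
        # Weighted combination of the same factor taken from every reference.
        mixed[name] = sum(weights[name][src] * encoded[src][name]
                          for src in refs)
    return decoder(mixed)

# Example: identity and pose from image A, clothing from image B,
# and a continuous blend of the two style latents.
enc, dec = FactorEncoder(), FactorDecoder()
refs = {"A": torch.rand(1, 3, 64, 64), "B": torch.rand(1, 3, 64, 64)}
weights = {
    "identity": {"A": 1.0, "B": 0.0},
    "pose":     {"A": 1.0, "B": 0.0},
    "clothing": {"A": 0.0, "B": 1.0},
    "style":    {"A": 0.2, "B": 0.8},
}
out = synthesize(enc, dec, refs, weights)  # shape (1, 3, 64, 64)
```

The per-factor weights are what give the user continuous control: setting a factor's weight to 1.0 for a single reference swaps that attribute wholesale, while intermediate weights interpolate it between references.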
(3) Considering the specific requirements that group-supervised learning places on the training dataset, and the need to display character dressing effects, this paper designs and implements a character dressing-effect display system. The system classifies and organizes the control factors of the dataset through data preprocessing and embeds the proposed group-supervised zero-shot controllable multi-style character image generation model, so that it can generate controllable character dressing images according to user needs.
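As an illustration of how the display system in part (3) might invoke the embedded generation model, the sketch below lets a user pick one reference image and a weight per control factor and returns the synthesized dressing image. It reuses the hypothetical `FACTORS`, `FactorEncoder`, `FactorDecoder`, and `synthesize` from the sketch above; the request format and function names are assumptions, not the system's actual interface.

```python
# Hypothetical inference entry point for the display system (assumed
# torchvision/PIL preprocessing); not the system's real API.
from PIL import Image
import torch
from torchvision import transforms

# Preprocessing applied to each user-selected reference image (size assumed).
to_tensor = transforms.Compose([
    transforms.Resize((64, 64)),
    transforms.ToTensor(),
])

def load_reference(path):
    """Load a user-selected reference image and convert it to a batch tensor."""
    return to_tensor(Image.open(path).convert("RGB")).unsqueeze(0)

def generate_dressing_image(encoder, decoder, request):
    """request must map every control factor in FACTORS to a
    (reference image path, weight) pair, e.g.
    {"identity": ("user.jpg", 1.0), "pose": ("pose_ref.jpg", 1.0),
     "clothing": ("coat.jpg", 1.0), "style": ("sketch_style.jpg", 0.8)}."""
    refs, weights = {}, {}
    for factor, (path, w) in request.items():
        refs[factor] = load_reference(path)
        # Take this factor only from its own reference, scaled by the weight.
        weights[factor] = {src: (w if src == factor else 0.0) for src in request}
    with torch.no_grad():
        return synthesize(encoder, decoder, refs, weights)
```

In this sketch the data-preprocessing step of the system corresponds to organizing the dataset so that each control factor has a pool of candidate reference images from which the user's request is assembled.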