
Fashion Image Attribute Editing Based On Generative Adversarial Networks

Posted on: 2022-08-14
Degree: Master
Type: Thesis
Country: China
Candidate: Q H Wang
Full Text: PDF
GTID: 2531307070452234
Subject: Pattern Recognition and Intelligent Systems

Abstract/Summary:
Artificial intelligence is now widely applied in the fashion domain, in tasks such as virtual try-on, fashion retrieval, and fashion attribute editing. Among these, fashion attribute editing not only lets consumers retrieve clothes with specific attributes, but also helps fashion designers quickly and conveniently produce variants of a garment with different attributes. Although deep-learning-based attribute editing has achieved some results, the complex textures and diverse attributes of fashion images expose two shortcomings of current methods. First, editing the target attribute often disturbs non-target attributes. Second, the textures of the generated images differ from those of the original images, lowering image quality. To address these problems, this thesis conducts in-depth research on fashion attribute editing based on generative adversarial networks. The main contributions are as follows:

(1) A coarse-to-fine attribute editing method for fashion images is proposed. In the coarse stage, an attention mechanism based on clothing landmarks generates an attention map focused on the target attribute, so that non-target attributes are left unchanged while the target attribute is edited. In the refinement stage, the inpainting model DeepFill v2 refines the coarse result and generates textures consistent with the original image. Qualitative and quantitative experiments on the fashion datasets OUTFIT and Shopping100k verify the effectiveness of the method.

(2) A fashion attribute editing method based on triplet loss is proposed. The triplet loss first constrains the discriminator to extract the non-target attribute features of the input image, and then constrains the non-target attribute features of the generated image to remain as close as possible to those of the original image, keeping the non-target attributes unchanged. In addition, a high-frequency skip connection is introduced into the generative adversarial network: the high-frequency signal lost during down-sampling is forwarded to the up-sampling path, so the generated fashion images retain high-frequency details such as patterns and achieve higher image quality. Qualitative and quantitative experiments on OUTFIT and Shopping100k show that the method not only modifies the target attribute well but also keeps the non-target attributes unchanged.

(3) A fashion attribute editing method based on a fashion discrimination loss is proposed. The method treats the non-target attributes of a fashion image as discriminative information, extracts features of the edited image and the original image with a pre-trained convolutional neural network, and computes the fashion discrimination loss between them. This loss ensures that only the target attribute changes during editing. The method also provides a selection strategy for the pre-trained convolutional network, so that the extraction of non-target attribute features is emphasized. Qualitative and quantitative experiments on OUTFIT and Shopping100k show that the method outperforms current mainstream methods.
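As an illustrative sketch only (the thesis does not spell out its exact formulation in this abstract), the attention idea in contribution (1) amounts to blending the edited image with the original through an attention map, so pixels outside the target-attribute region keep their original values. The array shapes and blending rule below are assumptions for demonstration.

```python
import numpy as np

def blend_with_attention(original, edited, attention):
    """Blend an edited image with the original using an attention map.

    The attention map is assumed to lie in [0, 1] and to be high where
    the target attribute is located; non-target regions keep the
    original pixels, so they are not affected by the edit.
    """
    attention = np.clip(attention, 0.0, 1.0)
    return attention * edited + (1.0 - attention) * original

# Toy 2x2 single-channel example: attention covers only the left column,
# so only the left column is replaced by the edited values.
orig = np.array([[1.0, 1.0], [1.0, 1.0]])
edit = np.array([[0.0, 0.0], [0.0, 0.0]])
attn = np.array([[1.0, 0.0], [1.0, 0.0]])
out = blend_with_attention(orig, edit, attn)
```

In the method itself the attention map is predicted from clothing landmarks rather than given by hand; the blending step is what keeps non-target attributes untouched.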
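The triplet loss in contribution (2) is the standard margin-based form; a minimal numpy sketch on feature vectors (the specific distance, margin value, and example vectors below are illustrative assumptions) is:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Margin-based triplet loss on feature vectors.

    Pulls the anchor towards the positive (e.g. a generated image that
    shares the original's non-target attributes) and pushes it away
    from the negative (an image with different non-target attributes).
    """
    d_pos = np.sum((anchor - positive) ** 2)   # squared distance to positive
    d_neg = np.sum((anchor - negative) ** 2)   # squared distance to negative
    return max(0.0, d_pos - d_neg + margin)

# Example: the positive is farther than the negative, so the loss is > 0
# and training would push the features apart/together accordingly.
a = np.array([1.0, 0.0])
p = np.array([0.5, 0.5])
n = np.array([1.0, 0.2])
loss = triplet_loss(a, p, n)
```

In the thesis this constraint is applied to features produced by the discriminator, so that non-target attribute features of the generated image stay close to those of the original.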
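The high-frequency skip connection in contribution (2) can be demonstrated with a toy encoder-decoder: the detail removed by down-sampling is saved as a residual and added back after up-sampling. The pooling scheme below is an illustrative stand-in for the network's actual layers.

```python
import numpy as np

def downsample(x):
    """2x2 average pooling (stand-in for a down-sampling layer)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(x):
    """Nearest-neighbour 2x up-sampling (stand-in for an up-sampling layer)."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def encode_decode_with_skip(x):
    """High-frequency skip connection: the residual lost by pooling is
    forwarded past the bottleneck and added back after up-sampling,
    so fine detail (e.g. clothing patterns) survives."""
    low = downsample(x)
    high = x - upsample(low)   # high-frequency residual lost by pooling
    y = upsample(low)          # stand-in for the decoder path
    return y + high            # skip connection restores the detail

x = np.arange(16, dtype=float).reshape(4, 4)
recon = encode_decode_with_skip(x)
```

In this toy setting the skip connection recovers the input exactly; in the real generator it supplies the up-sampling path with high-frequency signal the bottleneck would otherwise discard.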
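The fashion discrimination loss in contribution (3) compares edited and original images in the feature space of a frozen pre-trained network. The sketch below replaces the pre-trained CNN with a fixed linear-plus-ReLU stand-in; the extractor, weights, and distance are assumptions, not the thesis's exact network.

```python
import numpy as np

def extract_features(image, weights):
    """Stand-in for a pre-trained CNN feature extractor:
    one frozen linear layer followed by ReLU."""
    return np.maximum(0.0, weights @ image.ravel())

def fashion_discrimination_loss(edited, original, weights):
    """Mean squared distance between the feature representations of the
    edited and original images; a small value means the non-target
    attribute features were preserved by the edit."""
    f_edit = extract_features(edited, weights)
    f_orig = extract_features(original, weights)
    return float(np.mean((f_edit - f_orig) ** 2))

# Toy 2x2 image and a fixed (frozen) 2x4 weight matrix.
w = np.array([[1.0, 0.5, 0.5, 1.0],
              [0.5, 1.0, 1.0, 0.5]])
img = np.array([[1.0, 0.0], [0.0, 1.0]])
```

An unchanged image incurs zero loss, while any edit that shifts these features is penalised; the thesis additionally describes a strategy for choosing the pre-trained network so the features emphasise non-target attributes.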
Keywords/Search Tags:Generative Adversarial Networks, Fashion Attribute Editing, Convolutional Neural Network, Feature Representation, Contrastive Learning