
Research On Image Style Transfer Based On Generative Adversarial Networks

Posted on: 2020-04-28    Degree: Master    Type: Thesis
Country: China    Candidate: L R Kong    Full Text: PDF
GTID: 2428330596495131    Subject: Software engineering
Abstract/Summary:
Image style transfer based on deep learning is a new research hotspot in digital image processing. The mathematical modeling of traditional style transfer methods is overly complicated, and their synthesis results are often unsatisfactory. In contrast, deep-learning-based image style transfer is powerful, effective, and flexible in its modes of use, and has achieved widely noted breakthrough results. By constructing a high-level abstract feature space for an image, deep learning can effectively separate and reconstruct specific image features and thereby accomplish the style transfer task.

Gatys et al. first proposed an image style transfer method that uses a convolutional neural network as a feature extractor. The basic idea is feature fitting of image data: a pre-trained VGG-19 convolutional neural network serves as the feature extractor; the content representation and the style representation of the images are separated and reconstructed; and the two are finally merged into a new stylized image. However, the method of Gatys et al. depends heavily on the feature extractor, its stylized images are not sufficiently realistic, and its way of acquiring features is relatively fixed. This thesis therefore adopts generative adversarial networks (GANs) to achieve image style transfer. The basic idea of GANs is to fit the distribution of image data, aiming at high-quality, high-fidelity visual results.

Building on an analysis of representative convolutional neural network and generative adversarial network models, this thesis addresses the shortcomings of existing image style transfer methods and fully considers how the adversarial training process and related constraints affect image generation. To further improve generation quality and to optimize the modeling of style transfer, it proposes a supervised and an unsupervised image style transfer method based on current mainstream deep learning techniques. The main work comprises two parts.

First, in the supervised method, regular-script Chinese font data is taken as the research object. Wasserstein GAN provides the adversarial training framework, residual network blocks improve the performance of the generative model, and mean squared error constrains the final result, achieving high-quality one-to-one and many-to-many style transfer between Chinese fonts.

Second, in the unsupervised method, the CelebFaces Attributes Dataset (CelebA) is taken as the research object. The unsupervised adversarial training scheme of CycleGAN is adopted, and the generative model is further optimized. To separate and reconstruct the image background and abstract features effectively, an image mask is built into the generative model; this fully dynamically generated mask alleviates the regional-interference problem that easily arises in unsupervised learning, yielding excellent visual results.
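The loss terms referred to above can be sketched compactly. The following is a minimal NumPy illustration, not the thesis's implementation: all function names, array shapes, and weights (`alpha`, `beta`) are assumptions for clarity, and the real methods compute these losses on VGG-19 feature maps or critic/generator network outputs.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels, height*width) feature map; its
    channel co-activation statistics act as a style representation."""
    c, hw = features.shape
    return features @ features.T / (c * hw)

def content_style_loss(f_content, f_style, f_gen, alpha=1.0, beta=1e3):
    """Gatys-style objective: match raw features for content and Gram
    matrices for style, weighted by alpha and beta (illustrative values)."""
    l_content = np.mean((f_content - f_gen) ** 2)
    l_style = np.mean((gram_matrix(f_style) - gram_matrix(f_gen)) ** 2)
    return alpha * l_content + beta * l_style

def wgan_losses(d_real, d_fake):
    """Wasserstein GAN objectives on critic scores: the critic widens
    the gap between real and fake scores; the generator narrows it."""
    critic_loss = np.mean(d_fake) - np.mean(d_real)
    gen_loss = -np.mean(d_fake)
    return critic_loss, gen_loss

def mse_constraint(target, generated):
    """Pixel-level mean-squared-error constraint, as used in the
    supervised (paired Chinese-font) setting."""
    return np.mean((target - generated) ** 2)

def blend_with_mask(source, generated, mask):
    """Unsupervised setting: a mask in [0, 1], produced by the generator,
    keeps the background from the source image and takes the stylized
    foreground from the generator output."""
    return mask * generated + (1.0 - mask) * source

def cycle_consistency_loss(original, reconstructed):
    """CycleGAN constraint: mapping A -> B -> A should reproduce A."""
    return np.mean(np.abs(original - reconstructed))
```

In this sketch, the supervised font method would combine `wgan_losses` with `mse_constraint`, while the unsupervised face method would combine `wgan_losses` with `cycle_consistency_loss` and apply `blend_with_mask` to suppress background interference.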
Keywords/Search Tags: image style transfer, generative adversarial networks, image mask, convolutional neural network, deep learning