
Research On Image Style Transfer Method Based On Generative Adversarial Network

Posted on: 2021-01-23  Degree: Master  Type: Thesis
Country: China  Candidate: Z H Huang  Full Text: PDF
GTID: 2428330602978119  Subject: Computer technology
Abstract/Summary:
With the rise of deep learning, image style transfer models based on deep learning have developed rapidly and found a wide range of applications. Driven by the evolution of generative models into generative adversarial networks (GANs), the capability of GAN-based style transfer models has been further improved. As an auxiliary tool for artistic creation, however, these models still fall short of requirements: GAN-based image style transfer suffers from unsatisfactory quality of the converted images and unstable network training. To address these problems, this paper improves upon CycleGAN. CycleGAN is an unsupervised image-to-image translation model; although it can be trained without paired datasets and can stylize images quickly, it still inherits the drawbacks of traditional GANs, such as mode collapse, insufficient training stability, and poor stylization quality.

This paper first makes two improvements to the model. First, spectral normalization is introduced into the discriminator. Spectral normalization normalizes the spectral norm of each convolutional weight matrix, constraining the network so that the discriminator satisfies the 1-Lipschitz condition and its function becomes smoother. Second, the generator is improved by adding a new type of residual structure, which optimizes signal propagation through a reasonable combination of activation functions, convolutional layers, and normalization operations, thereby reducing training error; identity mappings are then used to keep gradients smooth and the network stable. Finally, a set of ablation experiments and comparisons across four different artistic styles verify the improved conversion quality of the improved model. The conversion results not only inherit the color and structural information of the source-domain images, but are also the most stable among the compared models, free of mechanized textures and more creative.

The improved algorithm is then applied to an anime-style dataset; however, the converted anime-style images lack the clear lines and bright colors that the style should have. Inspired by the attention mechanism, an attention network suited to the anime style is designed. The model learns by capturing the main feature maps of the source-domain images and is trained on a landscape dataset. The images converted by the pre-trained model not only have clear lines and vivid colors, but their global content is also more anime-like. Finally, the model is verified by a set of ablation experiments and a set of comparative experiments.
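To illustrate the spectral-normalization step described above, the following is a minimal PyTorch sketch of a PatchGAN-style discriminator whose convolutions are wrapped in spectral normalization. The class name, channel widths, and layer count are illustrative assumptions, not the thesis's exact discriminator.

```python
import torch.nn as nn
from torch.nn.utils import spectral_norm


class SNPatchDiscriminator(nn.Module):
    """PatchGAN-style discriminator with spectral normalization applied to
    every convolution, constraining each layer's spectral norm so that the
    discriminator approximately satisfies the 1-Lipschitz condition."""

    def __init__(self, in_channels=3, base_channels=64):
        super().__init__()

        def block(c_in, c_out, stride):
            return nn.Sequential(
                spectral_norm(nn.Conv2d(c_in, c_out, 4, stride, 1)),
                nn.LeakyReLU(0.2, inplace=True),
            )

        self.net = nn.Sequential(
            block(in_channels, base_channels, 2),
            block(base_channels, base_channels * 2, 2),
            block(base_channels * 2, base_channels * 4, 2),
            block(base_channels * 4, base_channels * 8, 1),
            # final layer outputs a grid of patch-wise real/fake scores
            spectral_norm(nn.Conv2d(base_channels * 8, 1, 4, 1, 1)),
        )

    def forward(self, x):
        return self.net(x)
```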
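The new residual structure in the generator is described as combining activation functions, convolutional layers, and normalization operations with an identity mapping. A plausible sketch is a pre-activation residual block with instance normalization, shown below; the exact ordering and normalization used in the thesis may differ.

```python
import torch.nn as nn


class PreActResidualBlock(nn.Module):
    """Residual block in a 'normalization -> activation -> convolution'
    ordering. The identity shortcut is left untouched so gradients can
    flow through the skip connection unchanged, stabilizing training."""

    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        # identity mapping: output = x + F(x)
        return x + self.body(x)
```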
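The abstract does not specify the architecture of the attention network designed for the anime style, so the sketch below is a generic self-attention block over spatial positions, offered only as an assumption of how attention to the main features of the source-domain image might be wired into the generator.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SelfAttention2d(nn.Module):
    """Self-attention over spatial positions: every location attends to
    every other location, helping the generator keep globally consistent
    lines and colors rather than only local textures."""

    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned blend weight, starts at 0

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (b, hw, c//8)
        k = self.key(x).flatten(2)                      # (b, c//8, hw)
        attn = F.softmax(torch.bmm(q, k), dim=-1)       # (b, hw, hw)
        v = self.value(x).flatten(2)                    # (b, c, hw)
        out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                     # residual connection
```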
Keywords/Search Tags: generative adversarial network, image style transfer, spectral normalization, new residual structure, attention mechanism