
Research On Cartoon Style Transfer Method Based On Deep Learning

Posted on: 2020-07-03
Degree: Master
Type: Thesis
Country: China
Candidate: X Q Wu
Full Text: PDF
GTID: 2428330596479564
Subject: Signal and Information Processing

Abstract/Summary:
In the field of image processing, researchers have used deep learning to achieve good results in image stylization, the process of rendering an image in another style without changing the content of the photo. The development of cartoons can be traced back to the period of archaic Homo sapiens, and today comics remain popular among people of all walks of life as a form of entertainment. Current research shows that the style images used by most style-transfer algorithms are strongly textured, such as oil paintings, while the black-and-white, dot-and-line style of cartoon images is rarely addressed. This thesis therefore studies style transfer for comic images. The details are as follows:

(1) After reviewing the related literature on style transfer, four methods were implemented and compared on cartoon style transfer: fast patch-based style transfer of arbitrary style, style transfer based on adaptive normalization, style transfer based on a perceptual loss function, and a cartoon generative adversarial network. The quality of the generated images was analyzed, and the model based on the perceptual loss function was selected as the basis for the cartoon style-transfer model.

(2) For real-time style transfer based on the perceptual loss function, the normalization method of the original network was changed first: since the result of image generation depends mainly on a single image instance, instance normalization is used in place of batch normalization. Second, when neural networks generate images, they typically construct them from low-resolution, high-level descriptions, first depicting a rough image and then filling in details, which often produces checkerboard patterns. Therefore, the image is upsampled by nearest-neighbor interpolation and then convolved; replacing deconvolution in this way improves the quality of the generated image and avoids the checkerboard effect.

(3) The feature maps extracted from different layers of the neural network were analyzed. A large number of experiments show that the features extracted by shallow layers are biased toward points and lines, while deep layers extract more holistic features. Therefore, the weights assigned to the different convolutional layers used to extract features from the comic image are chosen to make the result more suitable for comic style transfer.

(4) Most previous style-transfer models were trained on the widely used COCO dataset, whose images are photographs. To obtain a model better suited to the comic style, this thesis builds a comic-style dataset containing more than 1,000 standard black-and-white cartoon images, and the model is trained on it. Comparing the results of different datasets under the same parameters shows that training on the comic dataset yields the best style-transfer results.

(5) Under the Ubuntu operating system, an interface was developed with PyQt based on Python. The system can take any image as input; different parameters can be adjusted in the interface to compare their effects, and different images can be used to train different cartoon-style models.
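The substitution of instance normalization for batch normalization described in point (2) can be illustrated with a minimal NumPy sketch. This is not the thesis's actual network code; it only shows the defining property of instance normalization: each (image, channel) plane is normalized with its own statistics, independent of the rest of the batch.

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    """x: (N, C, H, W). Normalize every (image, channel) plane
    with its own mean and variance, unlike batch normalization,
    which pools statistics across the whole batch."""
    mean = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

# Demo: after normalization each plane has ~zero mean, unit variance,
# so the stylization of one image cannot depend on its batch neighbors.
x = np.random.rand(2, 3, 8, 8)
y = instance_norm(x)
```

In a framework such as PyTorch this corresponds to swapping `nn.BatchNorm2d` for `nn.InstanceNorm2d` in the transformation network.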
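The nearest-neighbor-then-convolve upsampling of point (2) can also be sketched. The snippet below, a simplified stand-in for the thesis's implementation, shows only the nearest-neighbor resize step: each pixel becomes a uniform block, so a subsequent stride-1 convolution receives evenly overlapping inputs instead of the unevenly overlapping ones produced by deconvolution, which is what causes checkerboard artifacts.

```python
import numpy as np

def nearest_neighbor_upsample(img, scale=2):
    # Repeat every pixel 'scale' times along both spatial axes:
    # a parameter-free nearest-neighbor resize. A learned stride-1
    # convolution would then be applied to the result.
    return np.repeat(np.repeat(img, scale, axis=0), scale, axis=1)

img = np.array([[1.0, 2.0],
                [3.0, 4.0]])
up = nearest_neighbor_upsample(img)
# Each source pixel now occupies a uniform 2x2 block of `up`.
```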
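The layer weighting of point (3) can be made concrete with a hedged sketch of a weighted Gram-matrix style loss, the standard formulation for perceptual style losses. The weights and shapes below are illustrative, not the thesis's tuned values; the idea is that larger weights on shallow layers emphasize the dot-and-line textures of black-and-white comics.

```python
import numpy as np

def gram_matrix(feat):
    """feat: (C, H, W) feature map from one network layer.
    Returns the normalized C x C Gram matrix of channel correlations."""
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def weighted_style_loss(feats_gen, feats_style, weights):
    # Sum of per-layer Gram losses; raising the weight of a shallow
    # layer biases the result toward point/line texture (point 3).
    return sum(w * np.mean((gram_matrix(a) - gram_matrix(b)) ** 2)
               for w, a, b in zip(weights, feats_gen, feats_style))

# Demo with random stand-in feature maps for two layers.
feats = [np.random.rand(8, 16, 16), np.random.rand(16, 8, 8)]
zero_loss = weighted_style_loss(feats, feats, [1.0, 0.5])
```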
Keywords/Search Tags: Cartoon Style, Style Transfer, Deep Learning, PyQt, Neural Network