Animation-style video games have become increasingly popular. However, high-quality background rendering requires considerable resources: in the development of such games, variant images usually have to be drawn for different time-of-day conditions, and background pictures must be richer in texture and color, and sharper in edge, than ordinary animation images. Existing animation style transfer methods support only a one-way mapping from the photo domain to the animation domain and cannot deliver high-quality rendering. To solve this problem, a method for generating animation-style video game background pictures from photos, based on conditional generative adversarial networks, is proposed from the perspectives of loss function, network model, and architecture optimization.

i. Traditional anime stylization algorithms are usually based on convolutional operations and have difficulty generating high-quality anime-style images. GAN-based anime stylization algorithms train on unpaired datasets and can generate high-quality anime images with distinct anime features, but the colors of the generated images tend to be distorted, content features of the input image are lost, and artifacts appear. To solve these problems, a generative adversarial network algorithm based on an improved image segmentation algorithm is proposed (a loss sketch follows below); experiments demonstrate that the algorithm converts photos into anime images with less color distortion and smoother textures.

ii. GAN-based anime stylization algorithms accept only images as input and provide no control over the generated output, which limits their generality, while image backgrounds in games usually need to be drawn at different time-condition nodes. To solve this problem, an algorithm combining edge detection with conditional generative adversarial networks is proposed (an input-construction sketch follows below). Experimental results show that the generated images reflect the features of the input condition vector while the network stably generates high-quality stylized anime images.

iii. A network with a single generator and discriminator suffers from color distortion in its output: the generator is overly sensitive to the edge-line features of the images and paints wrong colors in specific regions. To give the generated images more stable colors while preserving the content of the input photo, a paired generator-and-discriminator network structure is proposed, and the model is deployed on a TensorFlow.js-based web platform (an inference sketch follows below). Experimental results show that this structural improvement further raises the quality of the generated images, reduces the chance of color distortion, and improves the stability of the proposed method.
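As an illustration of the segmentation-guided idea in point i, the following is a minimal sketch of a region color-consistency loss, written in TypeScript against the TensorFlow.js API (matching the platform named in point iii). The function name `regionColorLoss`, the precomputed label map `segIds`, and the L1 penalty toward each region's mean color are illustrative assumptions, not the exact loss used in the proposed method.

```ts
import * as tf from '@tensorflow/tfjs';

// Hypothetical region color-consistency loss: penalize each output pixel's
// deviation from the mean color of its segmentation region, encouraging the
// flat color fills typical of anime backgrounds. `segIds` is assumed to be a
// precomputed [H*W] int32 label map produced by the segmentation step.
function regionColorLoss(output: tf.Tensor3D, segIds: tf.Tensor1D,
                         numSegments: number): tf.Tensor {
  return tf.tidy(() => {
    const [h, w, c] = output.shape;
    const pixels = output.reshape([h * w, c]) as tf.Tensor2D;        // [N, C]
    const sums = tf.unsortedSegmentSum(pixels, segIds, numSegments); // [S, C]
    const counts = tf.unsortedSegmentSum(
      tf.ones([h * w, 1]) as tf.Tensor2D, segIds, numSegments);      // [S, 1]
    const means = sums.div(counts.add(1e-6));      // per-region mean color
    const perPixelMean = means.gather(segIds);     // map back to [N, C]
    return pixels.sub(perPixelMean).abs().mean();  // L1 distance to the mean
  });
}
```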
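For point ii, the sketch below shows one way the conditional input could be assembled: a Sobel edge map is stacked onto the photo, and the time-condition vector is broadcast into constant channel planes. The helper `buildGeneratorInput`, the Sobel kernels, and the one-hot time encoding are assumptions for illustration; the exact edge detector and conditioning scheme of the proposed algorithm may differ.

```ts
import * as tf from '@tensorflow/tfjs';

// Hypothetical input construction for the conditional generator: a Sobel edge
// map is stacked onto the RGB photo, and each entry of the time-condition
// vector `cond` (e.g. a one-hot dawn/day/dusk/night encoding) is broadcast to
// a constant [H, W, 1] plane, giving a [H, W, 3 + 1 + cond.length] tensor.
function buildGeneratorInput(photo: tf.Tensor3D, cond: number[]): tf.Tensor3D {
  return tf.tidy(() => {
    const [h, w] = photo.shape;
    const gray = photo.mean(2, true) as tf.Tensor3D;               // [H, W, 1]
    const sobelX = tf.tensor4d([-1, 0, 1, -2, 0, 2, -1, 0, 1], [3, 3, 1, 1]);
    const sobelY = tf.tensor4d([-1, -2, -1, 0, 0, 0, 1, 2, 1], [3, 3, 1, 1]);
    const gx = tf.conv2d(gray, sobelX, 1, 'same');
    const gy = tf.conv2d(gray, sobelY, 1, 'same');
    const edges = gx.square().add(gy.square()).sqrt();             // [H, W, 1]
    const condPlanes = cond.map(v => tf.fill([h, w, 1], v));
    return tf.concat([photo, edges, ...condPlanes], 2) as tf.Tensor3D;
  });
}
```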
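For the web deployment in point iii, a minimal browser-side inference sketch using the public TensorFlow.js API. The model path 'generator/model.json' is a placeholder for the converted generator, and the [-1, 1] input normalization is an assumed (though common) GAN convention.

```ts
import * as tf from '@tensorflow/tfjs';

// Minimal sketch of running the converted generator in the browser.
// In a real application the model would be loaded once and cached.
async function stylize(img: HTMLImageElement, canvas: HTMLCanvasElement) {
  const model = await tf.loadGraphModel('generator/model.json');
  const output = tf.tidy(() => {
    const input = tf.browser.fromPixels(img)
      .toFloat()
      .div(127.5).sub(1)   // scale to [-1, 1], a common GAN input convention
      .expandDims(0);      // add batch dimension
    const out = model.predict(input) as tf.Tensor4D;
    // Map the generator output back to [0, 1] for rendering.
    return out.squeeze([0]).add(1).div(2).clipByValue(0, 1) as tf.Tensor3D;
  });
  await tf.browser.toPixels(output, canvas);
  output.dispose();
}
```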