For a long time, artistic images have required substantial human effort and time to polish. With the development of computing, realizing fast artistic painting with the help of computers has become a new direction in the field of painting research. Style transfer emerged in response, and to date it is divided mainly into neural style transfer and non-neural methods based on hand-designed features. With the powerful computing capability of modern hardware, ordinary users can generate a new stylized image with just a few lines of code. Currently designed transfer models use feature-statistics alignment or attention to achieve style fusion, but often suffer from information loss or slow inference: methods based on iterative forward inference generate high quality but are slow, while arbitrary style transfer is fast but the quality and texture can be unsatisfactory.

Drawing on recent style transfer algorithms from the domestic and international literature, this paper studies the diversity of style transfer on the basis of generative adversarial networks. Methods of this type train the model through adversarial learning and achieve good generation quality. On this basis, this paper applies current advanced feature-fusion algorithms to enhance the transfer effect under the guidance of a style image. The specific contributions are as follows:

(1) On the basis of CycleGAN, a multi-scale attention fusion mechanism is added to achieve arbitrary style transfer, and a contrastive learning module is used to enhance the transfer effect. CycleGAN solves the problem of unpaired training data through cycle training and a cycle-consistency loss. However, it attends only to the image as a whole during transfer, so the generated image lacks detail or violates human visual cognition. This paper adds a multi-scale attention mechanism so that content features and style features are fused at corresponding scales, while earlier shallow features are fully exploited to supplement the lost texture information and achieve high-quality transfer. Contrastive learning is then used to strengthen the differences between style categories and improve the diversity of the images the model generates. In addition, considering the model's computational cost and the possibility of deployment, the model is further lightweighted: depthwise separable convolutions replace ordinary convolutions in feature extraction and upsampling, reducing the amount of computation.

(2) A review of current style-fusion methods finds that feature matching based on the Gram matrix incurs high computational cost, while the hand-designed statistics-alignment method based on Adaptive Instance Normalization (AdaIN) cannot fully exploit the model's learning capacity. In contrast to these methods, this paper designs a new style transfer model that learns style features automatically and generates a series of guide weights; the content features then undergo stylized transformation under this guidance. A hybrid attention mechanism further enhances the fusion effect during progressive upsampling. After extensive experiments, metrics such as Fréchet Inception Distance (FID) and Structural Similarity (SSIM) are used to assess the quality of the generated images. Compared with current state-of-the-art methods, the proposed model balances generated image quality against inference efficiency, with inference times of a dozen or so milliseconds.

By exploring several fusion methods on top of current advanced models, this paper improves both the quality and the efficiency of style transfer. The experimental results demonstrate the feasibility of the improved models proposed in this paper.
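As background for the CycleGAN baseline used in contribution (1), the cycle-consistency loss mentioned there can be sketched as follows. The toy generators and flat feature vectors below are illustrative assumptions for exposition, not the thesis's actual networks:

```python
def l1_distance(a, b):
    """Mean absolute error between two flat feature vectors."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def cycle_consistency_loss(x, y, g_ab, g_ba):
    """CycleGAN cycle-consistency term:
    ||G_BA(G_AB(x)) - x||_1 + ||G_AB(G_BA(y)) - y||_1,
    penalizing translations that cannot be undone by the reverse generator."""
    return l1_distance(g_ba(g_ab(x)), x) + l1_distance(g_ab(g_ba(y)), y)

# Toy generators that are exact inverses reconstruct the input perfectly:
g_ab = lambda v: [2.0 * e for e in v]  # hypothetical A -> B mapping
g_ba = lambda v: [e / 2.0 for e in v]  # hypothetical B -> A mapping
loss = cycle_consistency_loss([1.0, 2.0], [4.0, 6.0], g_ab, g_ba)
# loss == 0.0 here, because each cycle returns exactly to its input
```

In the full model this term is added to the adversarial losses, which is what lets CycleGAN train on unpaired content and style images.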
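The contrastive module in contribution (1) strengthens the separation between style categories by pulling an embedding toward a same-style positive and pushing it away from other-style negatives. A minimal InfoNCE-style loss is one common formulation; the thesis's exact loss may differ, and the vectors below are illustrative:

```python
import math

def cosine(a, b):
    """Cosine similarity between two flat embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE contrastive loss: low when the anchor matches the positive
    (same style) and is dissimilar to every negative (other styles)."""
    logits = [cosine(anchor, positive) / tau]
    logits += [cosine(anchor, n) / tau for n in negatives]
    m = max(logits)  # shift logits for numerical stability
    log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_denom - logits[0]

# Anchor aligned with the positive and orthogonal to the negative:
# the loss is close to zero, as desired.
loss = info_nce([1.0, 0.0], [1.0, 0.0], [[0.0, 1.0]])
```

Minimizing this across style categories encourages distinct styles to occupy distinct regions of the embedding space, which is the diversity effect the abstract describes.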
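The lightweighting step in contribution (1) swaps standard convolutions for depthwise separable ones. The saving can be checked arithmetically; the layer sizes below are illustrative, not the thesis's actual architecture:

```python
def standard_conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution (bias omitted)."""
    return k * k * c_in * c_out

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k convolution (one filter per input channel)
    followed by a 1x1 pointwise convolution across channels."""
    return k * k * c_in + c_in * c_out

# Example: a 3x3 layer with 256 input and 256 output channels.
std = standard_conv_params(256, 256, 3)        # 589824 weights
sep = depthwise_separable_params(256, 256, 3)  # 67840 weights
ratio = std / sep                              # roughly 8.7x fewer weights
```

The multiply-accumulate count shrinks by the same factor, which is why the substitution reduces inference cost with only a modest change to the network's capacity.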
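Contribution (2) positions the learned guide weights against AdaIN, which stylizes by pure statistics alignment: each content channel is re-normalized to the style channel's mean and standard deviation. A minimal per-channel sketch on flat feature lists (a reference for the technique, not the thesis's implementation):

```python
import math

def mean_std(xs, eps=1e-5):
    """Mean and (eps-stabilized) standard deviation of one channel."""
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / len(xs)
    return m, math.sqrt(v) + eps

def adain(content, style):
    """Adaptive Instance Normalization: per channel, shift and scale the
    content activations to match the style channel's statistics."""
    out = []
    for c_ch, s_ch in zip(content, style):
        c_mean, c_std = mean_std(c_ch)
        s_mean, s_std = mean_std(s_ch)
        out.append([s_std * (x - c_mean) / c_std + s_mean for x in c_ch])
    return out

# One channel: content statistics are mapped onto the style statistics
# (mean 20 here), while the relative ordering of activations is preserved.
stylized = adain([[1.0, 2.0, 3.0]], [[10.0, 20.0, 30.0]])
```

Because AdaIN has no learnable parameters of its own, it cannot adapt the fusion to the content, which is the limitation the proposed guide-weight model is designed to address.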