Fonts are an important tool of expression and communication in daily and social life. With the change of the times and the evolution of culture, the role of font style in reflecting cultural inheritance, emotional appeal, visual aesthetics, brand image, product information, industry characteristics, and so on has received more and more attention. In recent years, with the vigorous development of digital media, font design has gradually become a daily demand of modern society. Chinese fonts have different style requirements in different scenarios, and the demand for new, personalized font styles will continue to increase. However, Chinese font design requires professional designers and their teams to design each character one by one, which is a heavy, time-consuming, and labor-intensive task. Research on Chinese font style transfer is therefore of great significance for the design of new fonts.

Chinese font style transfer converts a source font into a target font through a specific model, but traditional methods are inefficient, generalize poorly, and produce unsatisfactory results. With the successful application of deep learning in image style transfer, Chinese font style transfer methods based on convolutional neural networks have attracted great attention, especially the recently proposed algorithms based on Generative Adversarial Networks (GANs), which take the more flexible residual network structure as the core, construct a generative model for adversarial training, and achieve effective transfer between Chinese fonts of different styles.

In this paper, building on existing deep learning methods, GAN-based Chinese font style transfer is studied from the following aspects; brief illustrative sketches of the main components are given after the abstract:
(1) An end-to-end generation model based on residual-dense connections is proposed. The model adds a Hybrid Dilated Convolution (HDC) structure to enhance feature transfer at different scales and to reduce the feature loss caused by down-sampling and up-sampling. Experiments show that this structure improves the quality of the generated fonts.
(2) In addition to the MSE content loss that constrains the generated fonts against the real target fonts in the generation model, a pre-trained network is introduced to compute a deep feature loss between the generated font images and the real target font images, namely the perceptual loss. Experimental results show that adding the perceptual loss improves the visual realism of the generated fonts.
(3) The WGAN-GP algorithm is used as the optimization strategy of the network model, which makes network training more stable.
(4) By combining the conditional GAN and adding condition information to the generation model, a many-to-many font style transfer algorithm is proposed, which improves the generalization of the model.
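The following is a minimal sketch of a residual-dense block with a Hybrid Dilated Convolution branch, written in PyTorch. The class name, channel widths, and dilation rates (1, 2, 5) are illustrative assumptions, not the thesis's actual implementation; they only show how dense connections and dilated convolutions can be combined with a residual path.

```python
import torch
import torch.nn as nn

class HDCResidualDenseBlock(nn.Module):
    """Illustrative residual-dense block with hybrid dilated convolutions."""
    def __init__(self, channels: int = 64, growth: int = 32):
        super().__init__()
        self.convs = nn.ModuleList()
        in_ch = channels
        # Dilated convolutions with rates 1, 2, 5 enlarge the receptive field
        # at different scales without extra down-sampling.
        for dilation in (1, 2, 5):
            self.convs.append(nn.Sequential(
                nn.Conv2d(in_ch, growth, kernel_size=3,
                          padding=dilation, dilation=dilation),
                nn.LeakyReLU(0.2, inplace=True),
            ))
            in_ch += growth  # dense connection: each conv sees all earlier features
        # 1x1 fusion back to the block's input width
        self.fuse = nn.Conv2d(in_ch, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for conv in self.convs:
            features.append(conv(torch.cat(features, dim=1)))
        # Residual connection preserves low-level stroke information.
        return x + self.fuse(torch.cat(features, dim=1))
```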
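A minimal sketch of the combined pixel (MSE) and perceptual loss follows, assuming PyTorch. The abstract only states that a pre-trained network is used, so the choice of VGG16, the feature layer, the loss weight, and the omission of input normalization are all assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights

class ContentAndPerceptualLoss(nn.Module):
    """Pixel-level MSE plus feature-level (perceptual) MSE on a frozen extractor."""
    def __init__(self, feature_layer: int = 16, perceptual_weight: float = 0.1):
        super().__init__()
        extractor = vgg16(weights=VGG16_Weights.DEFAULT).features[:feature_layer].eval()
        for p in extractor.parameters():
            p.requires_grad_(False)  # the feature extractor stays frozen
        self.extractor = extractor
        self.mse = nn.MSELoss()
        self.w = perceptual_weight

    def _features(self, x: torch.Tensor) -> torch.Tensor:
        if x.size(1) == 1:            # font images are often single-channel;
            x = x.repeat(1, 3, 1, 1)  # replicate to match the 3-channel input
        return self.extractor(x)

    def forward(self, generated: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        pixel_loss = self.mse(generated, target)                      # glyph structure
        perceptual_loss = self.mse(self._features(generated),
                                   self._features(target))            # deep features
        return pixel_loss + self.w * perceptual_loss
```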
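The critic update under WGAN-GP can be sketched as below, again assuming PyTorch; the function names, the penalty weight of 10, and the single-step structure are illustrative, while the gradient-penalty term itself follows the standard WGAN-GP formulation.

```python
import torch

def gradient_penalty(critic, real, fake, device):
    # Interpolate between real and generated font images.
    alpha = torch.rand(real.size(0), 1, 1, 1, device=device)
    mixed = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    scores = critic(mixed)
    grads = torch.autograd.grad(outputs=scores.sum(), inputs=mixed,
                                create_graph=True)[0]
    # Penalise deviation of the gradient norm from 1 (soft Lipschitz constraint).
    return ((grads.flatten(1).norm(2, dim=1) - 1) ** 2).mean()

def critic_step(critic, generator, real, source, opt_c, gp_weight=10.0):
    fake = generator(source).detach()
    loss = critic(fake).mean() - critic(real).mean() \
           + gp_weight * gradient_penalty(critic, real, fake, real.device)
    opt_c.zero_grad()
    loss.backward()
    opt_c.step()
    return loss.item()
```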
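Finally, one way to inject condition information for many-to-many transfer is to embed the target style label and fuse it with the generator's bottleneck features, as in the sketch below. The embedding size and the broadcast-and-concatenate fusion are assumptions made only for illustration.

```python
import torch
import torch.nn as nn

class ConditionalBottleneck(nn.Module):
    """Fuses a target-style embedding with the encoder's bottleneck features."""
    def __init__(self, num_styles: int, feat_channels: int, embed_dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(num_styles, embed_dim)
        self.fuse = nn.Conv2d(feat_channels + embed_dim, feat_channels, kernel_size=1)

    def forward(self, features: torch.Tensor, style_id: torch.Tensor) -> torch.Tensor:
        b, _, h, w = features.shape
        # Broadcast the style embedding over the spatial dimensions, then fuse.
        style = self.embed(style_id).view(b, -1, 1, 1).expand(b, -1, h, w)
        return self.fuse(torch.cat([features, style], dim=1))
```

With such a conditioning module, a single generator can map one source font to any of the learned target styles by switching `style_id`, which is what makes the transfer many-to-many.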