
Automated Chinese Fonts Synthesis Based On Image Transformation

Posted on: 2020-04-06
Degree: Master
Type: Thesis
Country: China
Candidate: J Chang
Full Text: PDF
GTID: 2428330623463712
Subject: Electronics and Communications Engineering

Abstract/Summary:
Font design is an important part of media content creation. However, Chinese font design is a gruelling, time-consuming, and costly task, so automated Chinese font generation is expected to significantly reduce its cost. Existing research mostly focuses on methods based on Chinese character synthesis; however, such methods are extremely cumbersome and sometimes require manual intervention. In recent years, inspired by the success of applying deep convolutional neural networks (CNNs) to image-to-image translation tasks, automated font generation methods based on image-to-image translation have gradually been explored. These methods leverage CNNs to transfer an original input font to a target font. They have achieved good performance for Chinese printed fonts, but several problems remain: first, how to effectively generate the more difficult Chinese handwritten fonts, with their irregular structures and cursive, thin strokes; second, how to further improve the sharpness and fidelity of the detail strokes reconstructed by the model; and third, small-dataset learning, namely how to reduce the scale of the training set so that the cost of manual font design can be decreased further. Addressing these three shortcomings, this thesis explores two novel directions.

For the synthesis of Chinese handwritten fonts, this thesis proposes a font style transfer model based on hierarchical adversarial networks (HAN). Considering that Chinese characters are graphic symbols with subtle structure, quite different from natural rasterized images, HAN introduces a staged decoder that leverages multiple feature maps to reconstruct the target font, capturing the global character structure as well as the local stroke details. In addition, HAN introduces a hierarchical discriminator: it utilizes feature maps extracted from multiple layers of the discriminator so that the discriminator can dynamically evaluate the distribution discrepancy between the target domain and the generated domain at different levels of abstraction. This improvement of the discriminator in turn strengthens the generator's ability to fit the true distribution. Experiments demonstrate that, compared with state-of-the-art image transformation models and Chinese font synthesis models, HAN achieves impressive improvements on the RMSE metric and in Turing tests, both for Chinese printed fonts and for Chinese handwritten fonts.

To improve the fidelity of the detail strokes in generated characters, this thesis proposes a collaborative stroke refinement (CSR) strategy, which refines the strokes produced by the original font transfer task by constructing a highly related auxiliary task. To address the limitation of training data, this thesis proposes an online zoom-augmentation (OZA) strategy, which enables the CNN model to learn the location diversity and deformation diversity of the "basic units" in Chinese characters, and to implicitly model the common structure of Chinese characters. Experiments demonstrate that, compared with previous Chinese font synthesis models, CSR-OZA achieves the highest fidelity of generated characters under the same training-set scale, and further decreases the number of required training samples to 750, which approaches the requirements of commercial use.
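To make the hierarchical-discriminator idea described above concrete, the following is a minimal PyTorch-style sketch, not code from the thesis: it assumes a conventional convolutional discriminator in which each depth attaches its own patch-level real/fake head, so adversarial feedback covers both local stroke detail (shallow maps) and global character structure (deep maps). Layer counts, channel sizes, and names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class HierarchicalDiscriminator(nn.Module):
    """Scores an input glyph image at several feature depths (illustrative sketch)."""
    def __init__(self, in_channels=1, base=64):
        super().__init__()
        # Shared convolutional trunk; each block halves spatial resolution.
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv2d(in_channels, base, 4, 2, 1), nn.LeakyReLU(0.2)),
            nn.Sequential(nn.Conv2d(base, base * 2, 4, 2, 1), nn.LeakyReLU(0.2)),
            nn.Sequential(nn.Conv2d(base * 2, base * 4, 4, 2, 1), nn.LeakyReLU(0.2)),
        ])
        # One real/fake head per abstraction level (shallow = strokes, deep = structure).
        self.heads = nn.ModuleList([
            nn.Conv2d(base, 1, 3, 1, 1),
            nn.Conv2d(base * 2, 1, 3, 1, 1),
            nn.Conv2d(base * 4, 1, 3, 1, 1),
        ])

    def forward(self, x):
        scores = []
        h = x
        for block, head in zip(self.blocks, self.heads):
            h = block(h)
            scores.append(head(h))  # real/fake score map at this depth
        return scores  # an adversarial loss would average over all levels
```

In such a setup the generator receives gradients from every level at once, which is one plausible way to realize the "representation with different abstraction" evaluation the abstract describes.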
Keywords/Search Tags:font synthesis, adversarial learning, style transfer, collaborative learning