
Research And Implementation Of Wideband High Resolution Frequency Synthesizer

Posted on: 2021-05-29    Degree: Master    Type: Thesis
Country: China    Candidate: M H Hu    Full Text: PDF
GTID: 2428330623968576    Subject: Engineering
Abstract/Summary:
In natural language processing, text generation is an important field: it enables computers to write high-quality text much as humans do, with applications in abstractive summarization, text style transfer, password decoding, and so on. The Generative Adversarial Network (GAN) is a useful framework for text generation. However, because current language models are discrete, back-propagation cannot be used directly to update the generator's parameters when a GAN is trained for text. Moreover, the original idea of a GAN is to learn a generative model that maps a noise distribution to the prior distribution of real text, while most current text generation tasks learn at the character level; this learning approach tends to produce text with low novelty and is prone to mode collapse. To address these problems, this thesis proposes text generation models based on generative adversarial networks. The work of this thesis is as follows:

(1) Based on the idea of optimal transport and GANs, this thesis proposes a new unsupervised text generation model. It quantifies the distance between two text feature distributions, proposes a new divergence, and takes minimizing this distance as the training objective for the generative model. To extend the GAN from continuous space to discrete space, a differentiable function based on a softmax transformation is used to approximate the original non-differentiable function. To adapt to different sample sets, the "cost function" in the original optimal transport formulation is trained as a neural network, and the unsupervised text generation task is trained on MS COCO captions. Experimental results show that, compared with the baseline models, the text distribution produced by the proposed unsupervised model has higher similarity to the real text distribution and higher diversity.

(2) For the sentiment style transfer task in conditional text generation, this thesis uses an autoencoder as the generative model. Building on the above work, the distance between distributions provided by the optimal-transport-based discriminator is used as an additional loss term for the generator, and a sentiment classifier is used to implicitly separate text content from text style, so that the generated text carries the target sentiment style while preserving the content of the original text as much as possible. Compared with the baseline models, the proposed model achieves higher scores in sentiment accuracy and content preservation. These experimental results show that optimal transport theory is applicable to the field of text generation and provide a reference for further work.
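To make the two mechanisms described in contribution (1) more concrete, the following is a minimal, self-contained PyTorch sketch, not the thesis code: it illustrates (a) a softmax-based (Gumbel-softmax) relaxation that keeps token choices differentiable, and (b) an entropic-regularized optimal-transport (Sinkhorn) distance between batches of generated and real text features, with the pairwise cost produced by a small neural network standing in for the learned "cost function". All names (CostNet, sinkhorn_distance, soft_tokens) and the choice of a Sinkhorn solver are illustrative assumptions; the thesis's actual divergence and training objective may differ.

```python
# Illustrative sketch only: OT-style distance with a learned cost network,
# plus a Gumbel-softmax relaxation for discrete token generation.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class CostNet(nn.Module):
    """Learned 'cost function' c(x, y) between a generated and a real feature vector."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, x, y):
        # x: (n, dim) generated features, y: (m, dim) real features -> (n, m) cost matrix
        n, m = x.size(0), y.size(0)
        pairs = torch.cat([x.unsqueeze(1).expand(n, m, -1),
                           y.unsqueeze(0).expand(n, m, -1)], dim=-1)
        return F.softplus(self.net(pairs)).squeeze(-1)  # keep costs non-negative

def sinkhorn_distance(cost, eps=0.1, iters=50):
    """Entropic-regularized OT distance between two uniform empirical distributions."""
    n, m = cost.shape
    log_a = torch.full((n,), -math.log(n))
    log_b = torch.full((m,), -math.log(m))
    f, g = torch.zeros(n), torch.zeros(m)
    for _ in range(iters):  # log-domain Sinkhorn updates of the dual potentials
        f = -eps * torch.logsumexp((g[None, :] - cost) / eps + log_b[None, :], dim=1)
        g = -eps * torch.logsumexp((f[:, None] - cost) / eps + log_a[:, None], dim=0)
    plan = torch.exp((f[:, None] + g[None, :] - cost) / eps + log_a[:, None] + log_b[None, :])
    return (plan * cost).sum()

def soft_tokens(logits, tau=1.0):
    """Differentiable 'soft' one-hot tokens via Gumbel-softmax, instead of hard sampling."""
    return F.gumbel_softmax(logits, tau=tau, hard=False)

# Toy usage: treat random vectors as sentence features from the generator and the corpus.
dim, vocab = 32, 100
gen_feats = torch.randn(8, dim, requires_grad=True)   # stand-in for generator features
real_feats = torch.randn(8, dim)                      # stand-in for real-text features
cost_net = CostNet(dim)
loss = sinkhorn_distance(cost_net(gen_feats, real_feats))
loss.backward()                                       # gradients reach gen_feats through the OT distance
print(float(loss))

logits = torch.randn(8, 10, vocab)                    # (batch, seq_len, vocab) generator logits
print(soft_tokens(logits).shape)                      # soft one-hot tokens keep the graph differentiable
```

In this sketch the OT distance serves the role the abstract assigns to the discriminator's "distance between distributions": minimizing it pulls the generated feature distribution toward the real one, and because the cost is itself a trainable network it can be adapted to different sample sets.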
Keywords/Search Tags: deep learning, generative adversarial network, optimal transport, conditional text generation