
Research and Implementation of a Question Generation Method Based on Deep Learning

Posted on: 2021-01-30
Degree: Master
Type: Thesis
Country: China
Candidate: Y Zhao
Full Text: PDF
GTID: 2428330620964210
Subject: Engineering
Abstract/Summary:
The subject of this thesis is automatic question generation from unstructured text based on deep learning, a task in the field of natural language processing. It is a more challenging generative task than those of other branches of natural language processing. Automatic question generation aims to ask questions about a context sentence and involves two basic aspects: what to ask and how to ask. This thesis is chiefly concerned with how to ask, since well-established baselines already exist for deciding what to ask. In recent years, neural approaches have adopted sequence-to-sequence models that take the answer and the context sentence as input and predict a related question as output. Such models have two problems. First, the match between the sentence from which a question is generated and the type of the generated question is unsatisfactory. Second, if the model copies context words that lie too far from the answer, the generated sentence suffers semantic defects. To address these problems, this thesis studies a series of sequence-to-sequence automatic question generation models. A multi-feature input is incorporated into the model and, on that basis, ensemble learning is exploited to improve performance; a better generation effect is obtained through the ensembling method.

Research in question generation is growing, integrating ever more input features with the goal of generating more complex, higher-level questions. These trends indicate that question generation is maturing. Based on an analysis of the state of research on this task at home and abroad, the main contents of this thesis are as follows:

(1) A multi-model fusion method based on deep learning is proposed to address ungrammatical generation. Multiple improved sequence-to-sequence question generation models are trained separately, and a fusion step then scores their outputs to select the best result among the models. In this multi-model fusion method, the encoder is implemented in two variants, one based on the Gated Recurrent Unit (GRU) and one based on self-attention. The two models are trained in parallel, and a model-fusion module finally optimizes over the outputs of both.

(2) A question generation method based on a GRU network is proposed. First, a classifier is trained on sentences labeled with different question types. During feature extraction, syntactic information is incorporated, and multi-dimensional features such as answer position and question type are combined for training. After the encoder, a network predicts the question category, which is then passed through the decoder. The experimental results show a corresponding improvement in the accuracy of the generated question types.

(3) Drawing on the application of sequence-to-sequence translation to question generation, multiple models are trained by means of fine-tuning, model parameter averaging, and similar techniques. With adaptive optimization of the ensemble parameters, the multi-model ensemble achieves a better effect than any single model on the data sets. Corresponding experiments were carried out to verify the feasibility of the method, and the results show that it outperforms the classical sequence model.
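The score-based fusion step described above can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the two scorer dictionaries stand in for per-model log-likelihoods, and all names are hypothetical.

```python
# Hypothetical sketch of score-based model fusion: each model proposes
# candidate questions, every candidate is rescored by all models, and the
# candidate with the best average score is selected.

def fuse(candidates, scorers):
    """Return the candidate with the highest mean score across all models."""
    def mean_score(q):
        return sum(score(q) for score in scorers) / len(scorers)
    return max(candidates, key=mean_score)

# Toy per-model scores standing in for log-likelihoods from the
# GRU-based and self-attention-based models.
gru_scores = {"what is GRU ?": -1.2, "how does GRU work ?": -0.8}
attn_scores = {"what is GRU ?": -0.5, "how does GRU work ?": -1.5}

best = fuse(list(gru_scores), [gru_scores.get, attn_scores.get])
print(best)  # -> "what is GRU ?"
```

The selection criterion here is a plain average; the thesis's adaptive optimization of fusion parameters would instead learn per-model weights.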
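The answer-position feature mentioned in the multi-feature input can be sketched as a simple B/I/O tagging of the context, a common scheme in feature-rich question-generation encoders. Function and variable names below are illustrative, not taken from the thesis.

```python
# Hypothetical sketch: mark each context token as B (begin), I (inside),
# or O (outside) relative to the answer span, so the tag can be embedded
# and concatenated with the word embedding as an encoder input feature.

def answer_position_tags(context_tokens, answer_tokens):
    """Label context tokens B/I if inside the answer span, O otherwise."""
    tags = ["O"] * len(context_tokens)
    n = len(answer_tokens)
    for start in range(len(context_tokens) - n + 1):
        if context_tokens[start:start + n] == answer_tokens:
            tags[start] = "B"
            for i in range(start + 1, start + n):
                tags[i] = "I"
            break  # tag only the first occurrence
    return tags

context = "the model was trained on SQuAD data".split()
answer = "SQuAD data".split()
print(answer_position_tags(context, answer))
# -> ['O', 'O', 'O', 'O', 'O', 'B', 'I']
```

In a real encoder the tag and the question-type label would each be mapped to a learned embedding and concatenated with the token embedding.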
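Model parameter averaging, one of the ensembling techniques mentioned in part (3), can be sketched as an element-wise average over identically shaped checkpoints. The flat scalar "parameters" below are stand-ins for real weight tensors.

```python
# Hypothetical sketch of checkpoint parameter averaging: given several
# checkpoints of the same architecture, average each parameter element-wise
# to produce a single, often smoother, set of weights.

def average_parameters(checkpoints):
    """Element-wise average of identically keyed parameter dicts."""
    keys = checkpoints[0].keys()
    return {k: sum(c[k] for c in checkpoints) / len(checkpoints) for k in keys}

ckpt_a = {"encoder.w": 0.25, "decoder.w": 1.0}
ckpt_b = {"encoder.w": 0.75, "decoder.w": 0.0}
print(average_parameters([ckpt_a, ckpt_b]))
# -> {'encoder.w': 0.5, 'decoder.w': 0.5}
```

With tensor-valued parameters the same loop applies, summing arrays instead of scalars; averaging is typically done over the last few checkpoints of one training run.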
Keywords/Search Tags: deep learning, automatic question generation, encoder-decoder, ensemble learning, sequence model