
Answer Assessment Method For Open Questions

Posted on: 2021-04-15    Degree: Master    Type: Thesis
Country: China    Candidate: Y F Yu    Full Text: PDF
GTID: 2428330611999433    Subject: Computer Science and Technology
Abstract/Summary:
Text assessment is one of the tasks in natural language processing research. Faced with massive volumes of text data, end-to-end automatic text processing and assessment can save substantial labor and material costs and greatly improve work efficiency. At the same time, mining textual features from new perspectives through the deep learning ability of computers, and realizing intelligent analysis, understanding, and assessment of unstructured text, help advance decision support and high-level human-computer interaction.

For answer assessment on open-domain questions, the openness of the task means that valid answers are diverse and inexhaustible. Faced with open answers, the traditional manual assessment process relies heavily on the expertise, experience, and accumulated knowledge of the evaluators, so the results cannot be guaranteed to be entirely objective and fair. Meanwhile, existing systems that reach expert-level assessment depend on a feature knowledge base or standard reference answers, and thus are not open in the true sense. Therefore, this thesis focuses on natural language understanding and studies intelligent answer assessment for open-domain questions based on deep learning models. The proposed approach completely removes the dependence on standard reference answers, concentrating on exploring the intrinsic connection between a question and its answer and improving the model's judgment of the quality of different answers to a question.

This thesis abstracts the task as building a system that can automatically evaluate textual answers to questions: given a question-answer text pair as input, the system judges whether the answer is relevant to the question, whether its sentences are fluent, whether its content is logical, and so on, and then outputs a comprehensive assessment of the answer quality.

For the relevance requirement, this thesis proposes a classification-based assessment method that uses an attention mechanism to let the question and answer texts fully interact and enhance each other before judgment (sketched in the first code example following this abstract). On a Chinese-domain dataset, this method improves accuracy over ordinary classification models by 1.22% to 2.64%.

For the openness requirement, this thesis proposes a ranking-based evaluation method (second sketch below). It uses a pre-trained language model built on large-scale data to improve the basic stability and generalization of the system, and it applies contrastive learning over positive and negative examples together with an optimized loss function to sharpen the system's ability to discriminate among answers to a question. Its performance on multiple datasets surpasses recent advanced models, and its accuracy is 0.6% higher than that of the classification-based assessment method.

To address the limited amount and low quality of annotated corpora in practical applications, this thesis further proposes a data augmentation method that expands negative examples from the original corpus (third sketch below). The proposed methods were tested and evaluated on corpora from real application scenarios. The experimental results show that they can effectively assess answers to open questions, and the model has been deployed in an online application.
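To make the classification-based method concrete, here is a minimal PyTorch sketch of attention-driven question-answer interaction followed by relevance classification. The architecture, layer sizes, and names are illustrative assumptions, not the exact model from the thesis.

```python
# Hedged sketch of the classification-based method: encode question and
# answer, let the question attend over the answer, then classify the pair.
# Layer sizes and names are illustrative assumptions, not the thesis model.
import torch
import torch.nn as nn

class QAInteractionClassifier(nn.Module):
    def __init__(self, vocab_size=30000, dim=128, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        # Cross-attention: each question token gathers the answer content
        # most relevant to it, making the two texts fully interactive.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.classifier = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, num_classes)
        )

    def forward(self, question_ids, answer_ids):
        q = self.embed(question_ids)             # (B, Lq, D)
        a = self.embed(answer_ids)               # (B, La, D)
        attended, _ = self.cross_attn(q, a, a)   # question enriched by answer
        # Pool the raw and attended views, then judge relevance.
        fused = torch.cat([q.mean(dim=1), attended.mean(dim=1)], dim=-1)
        return self.classifier(fused)

model = QAInteractionClassifier()
logits = model(torch.randint(0, 30000, (2, 12)), torch.randint(0, 30000, (2, 40)))
print(logits.shape)  # torch.Size([2, 2]): relevant / not relevant
```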
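The ranking-based method with positive/negative contrastive learning might be sketched as follows. The tiny mean-pooling scorer stands in for the thesis's pre-trained language model, and the margin ranking loss for its optimized loss function; both are assumptions for illustration.

```python
# Hedged sketch of the ranking-based method: a scorer rates (question, answer)
# pairs and a margin ranking loss pushes genuine answers above negatives.
import torch
import torch.nn as nn

class PairScorer(nn.Module):
    def __init__(self, vocab_size=30000, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.score = nn.Linear(dim, 1)

    def forward(self, pair_ids):
        # Mean-pool the joint question+answer token sequence into one score.
        return self.score(self.embed(pair_ids).mean(dim=1)).squeeze(-1)

scorer = PairScorer()
loss_fn = nn.MarginRankingLoss(margin=1.0)

pos = torch.randint(0, 30000, (4, 50))  # question paired with its real answer
neg = torch.randint(0, 30000, (4, 50))  # question paired with a mismatched answer
s_pos, s_neg = scorer(pos), scorer(neg)
# target = 1 tells the loss to rank s_pos above s_neg by at least the margin.
loss = loss_fn(s_pos, s_neg, torch.ones_like(s_pos))
loss.backward()
```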
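Finally, the negative-example expansion could look like the hypothetical helper below, assuming negatives are formed by re-pairing each question with answers from other questions; the function name and strategy are illustrative, not the thesis's exact procedure.

```python
# Hypothetical negative-expansion helper: build extra negative training pairs
# by re-pairing each question with answers drawn from other questions.
import random

def expand_negatives(pairs, k=2, seed=0):
    """pairs: list of (question, answer) strings; returns k negatives per pair."""
    rng = random.Random(seed)
    all_answers = [a for _, a in pairs]
    negatives = []
    for q, a in pairs:
        others = [x for x in all_answers if x != a]  # exclude the true answer
        negatives.extend((q, rng.choice(others)) for _ in range(k))
    return negatives

corpus = [("What causes tides?", "The moon's gravity."),
          ("Why is the sky blue?", "Rayleigh scattering of sunlight.")]
print(expand_negatives(corpus, k=1))
```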
Keywords/Search Tags: open text assessment, attention mechanism, pre-trained language model, data enhancement