Automatic Essay Scoring (AES) aims to use computers to evaluate essay quality automatically. With the development of deep learning, more and more researchers hope to build more intelligent algorithms that help teachers evaluate essays more efficiently. Meanwhile, they also expect these algorithms to assist students in writing, thereby further enhancing the fairness of education. This dissertation conducts an in-depth study of automatic Chinese essay scoring, and its main contents are as follows.

(1) Automatic Chinese essay scoring based on multi-perspective modeling. Most previous studies consider only the semantics or the organization of an essay from a single perspective and ignore higher-level factors such as logic. This dissertation therefore proposes a Multi-Perspective Evaluation framework (MPE) that evaluates an essay more objectively and reliably from the perspectives of semantics, organization, and logic. MPE first uses a pre-trained model to encode sentences and obtain semantic information at three levels, with which it evaluates the essay's semantic expression. It then combines sentence function identification and paragraph function identification to evaluate the essay's organization, and it evaluates the essay's logic by measuring the coherence between paragraphs. Finally, the framework scores the essay by integrating these three evaluation perspectives. The experimental results show that MPE can effectively score essays of various qualities, outperforming all the baselines.

(2) Automatic Chinese essay scoring from multiple criteria. Previous studies focus on predicting either an overall score or a single criterion score, and therefore cannot provide the overall score together with feedback on all aspects at the same time. This dissertation first annotates a dataset, ACEA (Automated Chinese Essay Assessment), which contains an overall score for each essay along with scores on four criteria: organization, topic, logic, and language. It then designs a Hierarchical Multi-task Criteria Scorer (HMCS) that evaluates writing quality by modeling these four criteria jointly. Moreover, it proposes an inter-sequence attention mechanism to strengthen the information interaction between the different tasks and designs a topic-specific feature for AES. The experimental results on ACEA show that HMCS can effectively score essays on multiple criteria, outperforming several strong baselines.

(3) Automatic Chinese essay scoring based on beautiful sentence evaluation. Given learners' weak language ability in writing, this dissertation proposes an automatic Chinese essay scoring method based on beautiful sentence evaluation. It approaches the problem from two directions, essay scoring and essay writing, using beautiful sentence evaluation and beautiful sentence generation to assist scoring and writing, respectively. First, a dataset of 17,040 beautiful sentences is constructed; a beautiful sentence evaluation model is trained on this dataset and applied to automatic Chinese essay scoring. The experimental results show that beautiful sentence features are effective for automatic Chinese essay scoring. Second, the dissertation fine-tunes the pre-trained model GPT-2 on this dataset to obtain a beautiful sentence generator that helps improve learners' writing ability. The experimental results show that the beautiful sentence generator can effectively assist writing.
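As an illustration of the beautiful sentence generator described in (3), the following is a minimal sketch of fine-tuning a Chinese GPT-2 checkpoint on a one-sentence-per-line corpus with the Hugging Face transformers and datasets libraries. The checkpoint name, file path, and hyperparameters are assumptions for illustration only, not the dissertation's actual setup.

    from datasets import load_dataset
    from transformers import (
        BertTokenizerFast,
        DataCollatorForLanguageModeling,
        GPT2LMHeadModel,
        Trainer,
        TrainingArguments,
    )

    # Assumed public Chinese GPT-2 checkpoint; many Chinese GPT-2 models
    # ship a BERT-style tokenizer.
    MODEL_NAME = "uer/gpt2-chinese-cluecorpussmall"
    tokenizer = BertTokenizerFast.from_pretrained(MODEL_NAME)
    model = GPT2LMHeadModel.from_pretrained(MODEL_NAME)

    # Hypothetical corpus: one beautiful sentence per line in a plain-text file.
    dataset = load_dataset("text", data_files={"train": "beautiful_sentences.txt"})

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=64)

    tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

    # Causal language modeling (mlm=False): the model learns to continue
    # beautiful sentences.
    collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

    args = TrainingArguments(
        output_dir="beautiful-sentence-generator",
        num_train_epochs=3,
        per_device_train_batch_size=16,
        learning_rate=5e-5,
    )
    trainer = Trainer(
        model=model,
        args=args,
        train_dataset=tokenized["train"],
        data_collator=collator,
    )
    trainer.train()

    # Sample a candidate beautiful sentence from a short prompt.
    prompt = tokenizer("春天的风", return_tensors="pt")
    outputs = model.generate(**prompt, max_new_tokens=40, do_sample=True, top_p=0.9)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

In practice the generated candidates would still need to be filtered, for example by the beautiful sentence evaluation model trained on the same dataset, before being suggested to a writer.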
Through the above three methods, this dissertation addresses several shortcomings of the automatic Chinese essay scoring task and improves its performance. It provides a reference for further research on automatic Chinese essay scoring.
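Both the multi-perspective framework in (1) and the multi-criteria scorer in (2) rest on the same basic idea of producing several partial scores and integrating them into an overall score. The following is a minimal sketch of that integration step only; the module name, feature dimensions, per-perspective heads, and learned fusion layer are illustrative assumptions and do not reproduce the MPE or HMCS architectures.

    import torch
    import torch.nn as nn

    class PerspectiveFusionScorer(nn.Module):
        """Fuse semantic, organization, and logic features into an overall score."""

        def __init__(self, dim: int = 768):
            super().__init__()
            # One scoring head per perspective (hypothetical design choice).
            self.heads = nn.ModuleDict({
                name: nn.Linear(dim, 1)
                for name in ("semantics", "organization", "logic")
            })
            # Learned weights for integrating the per-perspective scores.
            self.fusion = nn.Linear(3, 1)

        def forward(self, feats: dict) -> tuple:
            # Score each perspective from its (pre-computed) essay-level features.
            per_perspective = {
                name: head(feats[name]).squeeze(-1)
                for name, head in self.heads.items()
            }
            stacked = torch.stack(
                [per_perspective[n] for n in ("semantics", "organization", "logic")],
                dim=-1,
            )
            overall = self.fusion(stacked).squeeze(-1)
            return overall, per_perspective

    # Toy usage: a batch of 2 essays with random 768-d features per perspective.
    feats = {name: torch.randn(2, 768) for name in ("semantics", "organization", "logic")}
    overall, per_perspective = PerspectiveFusionScorer()(feats)
    print(overall.shape, {k: v.shape for k, v in per_perspective.items()})

Returning the per-perspective scores alongside the overall score mirrors the goal stated above: giving an overall judgment while still providing criterion-level feedback.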