
Design And Implementation Of Automated Scoring System For English Non-essay Writing Questions

Posted on: 2021-08-10
Degree: Master
Type: Thesis
Country: China
Candidate: Q Q Ye
Full Text: PDF
GTID: 2518306107468984
Subject: Computer technology
Abstract/Summary:
As part of the process of selecting talent through examinations, grading bears the important responsibility of ensuring fair and equitable results. Although the demand for objectivity and fairness in grading is steadily rising, the current mainstream online assessment methods still show subjective variation, because subjective questions must be graded manually. Automated essay scoring for English has seen wide practical use, while automated scoring of writing questions other than essays has drawn little attention. Existing automated scoring methods for non-essay writing questions typically suffer from incomplete coverage of question types and excessive manual intervention. Machine learning, now widely applied, can learn patterns from data and then make predictions and judgments in the corresponding context. Because score-related features of English non-essay writing questions can be computed, automated scoring via machine learning becomes feasible. We therefore design and implement a machine-learning-based automated scoring system for English non-essay writing questions, with the goal of improving the fairness, accuracy, and efficiency of English test scoring.

First, we survey the English non-essay writing question types used in various school entrance examinations and analyze the manual scoring standards for four of them: filling in blanks, error correction, English-to-Chinese translation, and short-answer questions. Filling in blanks and error correction have objective, fixed answers, so they can be scored by comparison against standard answers. English-to-Chinese translation and short-answer questions are subjective: answers are relatively free-form and no absolute standard answer exists. For these two types we design scoring features along three dimensions (basic features, formal features, and semantic features), forming a scoring feature set; we give an extraction algorithm for each feature and demonstrate the validity of the feature set.

Second, based on the validated feature set, we collect data sets and evaluate translation scoring and short-answer scoring on four representative machine learning models: multiple linear regression, support vector regression, extremely randomized trees, and the multi-layer perceptron. Considering both test results and scoring efficiency, we select multiple linear regression as the scoring model for English-to-Chinese translation and extremely randomized trees for short-answer questions; experimental results verify the usability of these scoring models. Finally, following the actual assessment process and requirements, we design and implement a complete automated scoring system for English non-essay writing questions built on the constructed scoring models. System tests show that the system provides the basic scoring capability needed for English non-essay writing questions and meets the grading needs of various English exams.
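To make the feature-based scoring pipeline concrete, the following is a minimal illustrative sketch (not the thesis's actual code): it extracts two hypothetical features of a candidate answer against a reference answer — a length ratio as a "basic" feature and a word-overlap ratio as a "formal" feature — and combines them with a multiple-linear-regression-style weighted sum. The feature names and the weights are invented for illustration; in the system the weights would be learned from graded training data.

```python
def extract_features(answer: str, reference: str) -> dict:
    """Compute two simple score-related features for an answer.

    Both features are hypothetical examples of the "basic" and
    "formal" feature categories described in the abstract.
    """
    ans_tokens = answer.lower().split()
    ref_tokens = reference.lower().split()
    # Basic feature: length of the answer relative to the reference
    length_ratio = len(ans_tokens) / max(len(ref_tokens), 1)
    # Formal feature: fraction of distinct reference words the answer covers
    overlap = len(set(ans_tokens) & set(ref_tokens)) / max(len(set(ref_tokens)), 1)
    return {"length_ratio": length_ratio, "word_overlap": overlap}


def linear_score(features: dict, weights: dict, bias: float = 0.0) -> float:
    """Score an answer as a weighted sum of its features,
    mimicking how a fitted multiple linear regression model predicts."""
    return bias + sum(weights[name] * value for name, value in features.items())


# Hypothetical learned weights for demonstration only
weights = {"length_ratio": 1.0, "word_overlap": 4.0}
feats = extract_features("the cat sat on the mat", "a cat sat on a mat")
score = linear_score(feats, weights)  # → 4.2
```

In the actual system, semantic features (which this sketch omits) would be added alongside these, and the regression coefficients would be fit on a corpus of manually graded answers rather than set by hand.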
Keywords/Search Tags: subjective question scoring, English non-essay writing questions, machine learning, automatic scoring model