
Development And Validation Of A Scoring Rubric In The Final Examination For Chinese-English Translation Course

Posted on: 2023-09-14    Degree: Master    Type: Thesis
Country: China    Candidate: S X Zhou    Full Text: PDF
GTID: 2545306617968169    Subject: English translation
Abstract/Summary:
Few studies to date have examined the rating of Chinese-to-English translation tests in final examinations at colleges and universities; most examination papers are rated on the basis of teachers' impressions and intuition. Drawing on theory and methods from language testing, this study develops a rating scale for the Chinese-to-English translation test in the school's final examination. The rating scale is the key component of a translation test: it comprises rating categories (dimensions), rating criteria, score levels, and related elements. A rating scale should reflect the translation competence it is intended to measure; at the same time, raters need to attend to the discourse features described by the scale descriptors, and the score levels should function as intended.

Accordingly, the thesis addresses three research questions: (1) Can the rating scale for Chinese-to-English translation in a final examination be justified through a review of the literature? (2) Do raters attend to the discourse features corresponding to the rubric? (3) Does the scale function as intended? The three questions are logically connected. Because the scale was developed by adapting existing scales and by discourse analysis, its theoretical basis must be justified and its descriptors validated against other authoritative rating scales; a literature review serves this purpose and is the focus of research question 1. Once the scale has been developed, the raters' rating process must be examined to determine whether the discourse features they attend to are consistent with the scale descriptors; a verbal protocol (think-aloud) analysis addresses this and is the focus of research question 2. Since the ratings reflect the rating process, the scale is further validated through analysis of the ratings themselves using many-facet Rasch analysis, which is the focus of research question 3.

The results are as follows. (1) The scale is supported by a theoretical model, namely the translation competence model constructed by the PACTE group, and its descriptors are also supported by other scales. (2) The think-aloud experiment shows that raters attend to the discourse features corresponding to the scale descriptors, which further supports the validity of the descriptors. (3) The many-facet Rasch analysis shows that the scale properties are as intended and that raters rate reliably at the task level.

Together these findings form a chain of reasoning: because the scale descriptors reflect the translation competence model, raters attend to the discourse features corresponding to the descriptors, and the functioning of the scale (e.g., its category and level settings) is validated, examinees' translation ability can be inferred from the ratings. The validity argument for the rating scale is therefore justified. The thesis also points out possible improvements to the scale, which are expected to lay a foundation for future studies.
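As a rough illustration (not drawn from the thesis itself), a many-facet Rasch analysis of such ratings models the probability of each score category from examinee ability, rater severity, criterion difficulty, and category thresholds. The minimal Python sketch below computes these category probabilities under the rating-scale form of the many-facet Rasch model; all parameter values and the criterion name are hypothetical and chosen only for illustration.

```python
import numpy as np

def mfrm_category_probs(ability, rater_severity, criterion_difficulty, thresholds):
    """Return P(score = k) for k = 0..K under the rating-scale MFRM:
    log(P_k / P_{k-1}) = ability - rater_severity - criterion_difficulty - thresholds[k-1]."""
    # Adjacent-category log-odds for steps 1..K.
    steps = ability - rater_severity - criterion_difficulty - np.asarray(thresholds)
    # Cumulative sums give the unnormalised logit of each category (category 0 has logit 0).
    logits = np.concatenate(([0.0], np.cumsum(steps)))
    probs = np.exp(logits - logits.max())  # subtract max for numerical stability
    return probs / probs.sum()

# Hypothetical example: an examinee of ability 1.2 logits, a rater of severity 0.5,
# an "accuracy" criterion of difficulty 0.3, and a five-category scale (scores 0-4)
# with four Andrich thresholds.
probs = mfrm_category_probs(
    ability=1.2,
    rater_severity=0.5,
    criterion_difficulty=0.3,
    thresholds=[-1.5, -0.5, 0.5, 1.5],
)
print("P(score = k):", np.round(probs, 3))
print("Expected score:", round(float(np.dot(np.arange(len(probs)), probs)), 2))
```

In such an analysis, fitting this model to the full set of ratings yields estimates of rater severity, criterion difficulty, and threshold ordering, which is how claims such as "the scale properties are as intended" and "raters rate reliably at the task level" are typically examined.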
Keywords/Search Tags:Validity Argument, Scoring Rubric, Many-facet Rasch Analysis, Translation Testing