
Measuring student writing abilities in a large-scale writing assessment

Posted on: 1996-03-24
Degree: Ph.D.
Type: Dissertation
University: The University of Chicago
Candidate: Du, Yi
GTID: 1467390014484746
Subject: Education
Abstract/Summary:
In a large-scale direct writing assessment (DWA), it is impossible to have every rater score every student's essay or to have every student respond to every writing task, and a dichotomous scale is inadequate for scoring essays. Three factors therefore contribute to variability in students' writing scores: rater severity, topic difficulty, and the rating scale.

This study identified rater, topic, and scale variations and adjusted student measures for them using the many-faceted Rasch model. The data came from the 1993 Illinois Goal Assessment Program writing assessment. First, the study found significant differences in rater severity even after raters had been extensively trained. Second, it found significant differences in topic difficulty even after topics had been carefully pre-equated. Third, it identified a better-functioning scoring scale for these data. These results suggest that when no adjustments are made for rater severity and topic difficulty, students' writing abilities are over- or underestimated depending on who happened to rate them and which topics they chose. They also suggest that objective, accurate, and fair measures cannot be obtained unless the multiple sources of variation have been identified and adjusted for.

The study also examined rater and student characteristics that may affect the observed ratings of students. Results show that raters' gender and ethnic backgrounds did not affect their ratings. Differential facet functioning (DFF) statistics were computed to examine whether raters or writing topics were biased for or against particular student groups. Biased raters were identified, and the magnitude of their biases on student writing measures was checked. The impact of different topics on student gender, ethnic, and grade groups was also identified.

Finally, the data analyses show the feasibility and applicability of the many-faceted Rasch model in DWA. Comparing results from the many-faceted Rasch model with those of the conventional assessment approach demonstrates three important advantages of the Rasch model. First, the model provides sufficient estimates to identify and adjust for the multiple sources of variation and can construct rater-free, scale-free, task-free, student-free, and error-free measurement. Second, the many-faceted Rasch model is flexible enough to be used in either large-scale assessments or small-sample tests. Third, DFF statistics provide an important tool for removing rater bias, topic bias, item bias, and other biases from student measures. In general, no other method solves as many problems in DWA or provides as many important functions in measuring writing abilities as the many-faceted Rasch model.

Any kind of performance assessment must address the measurement problems described above. Because of its ability to identify and adjust for multiple sources of variation, its flexibility across many settings, and its support for DFF analysis, the many-faceted Rasch model offers a potent approach to examining variation in a wide variety of performance assessments. (Abstract shortened by UMI.)
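The abstract does not reproduce the model itself, but the many-faceted Rasch model it describes is conventionally written as an adjacent-category logit model. For a student n responding to topic i, rated by rater j on a scale whose step from category k-1 to category k has difficulty F_k:

    \log \frac{P_{nijk}}{P_{nij(k-1)}} = B_n - D_i - C_j - F_k

where B_n is the student's writing ability, D_i the topic difficulty, and C_j the rater's severity. Because topic difficulty and rater severity enter as explicit parameters, the estimated ability B_n is adjusted for which topic a student wrote on and who happened to rate the essay, which is the adjustment the abstract refers to.

As a minimal sketch of how these parameters combine, the Python function below computes the probability of each rating category under this model; the function name and all numeric values are hypothetical illustrations, not taken from the study:

    import math

    def category_probabilities(ability, topic_difficulty, rater_severity, thresholds):
        """Category probabilities under a many-faceted Rasch (rating scale) model:
        log(P_k / P_{k-1}) = B_n - D_i - C_j - F_k for adjacent categories."""
        # Cumulative adjacent-category logits: the logit for category k is the
        # sum of (B - D - C - F_m) over m = 1..k, with category 0 fixed at 0.
        logits = [0.0]
        for f_k in thresholds:
            logits.append(logits[-1] + (ability - topic_difficulty - rater_severity - f_k))
        exps = [math.exp(x) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    # Hypothetical values (in logits): a fairly able student, a slightly hard
    # topic, a somewhat severe rater, and a four-category scale (3 thresholds).
    probs = category_probabilities(ability=1.0, topic_difficulty=0.2,
                                   rater_severity=0.5, thresholds=[-1.0, 0.0, 1.0])
    print([round(p, 3) for p in probs])

Under these hypothetical values, most of the probability mass falls in the upper two categories; raising rater_severity shifts the mass toward lower categories, which is exactly the rater effect the model removes from the ability estimates.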
Keywords/Search Tags: Assessment, Writing, Student, Scale, Many-faceted Rasch model, Rater, DWA, Variations