A comparison of subscore reporting methods for a state assessment of English language proficiency

Posted on: 2016-10-13
Degree: Ph.D.
Type: Dissertation
University: University of Kansas
Candidate: Longabach, Tanya
Full Text: PDF
GTID: 1475390017485585
Subject: Educational tests & measurements
Abstract/Summary:
Educational tests that assess multiple content domains, which may be related to varying degrees, often have subsections based on these domains; the scores assigned to these subsections are commonly known as subscores. In today's accountability-oriented educational environment, testing programs face increasing demand to report subscores in addition to total test scores. While subscores can provide much-needed information to teachers, administrators, and students about proficiency in the test domains, a major drawback of reporting them is their lower reliability compared with the test as a whole. This dissertation explored several methods of assigning subscores to the four domains of an English language proficiency test (listening, reading, writing, and speaking): classical test theory (CTT)-based number-correct scoring, unidimensional item response theory (UIRT), augmented item response theory (A-IRT), and multidimensional item response theory (MIRT). The reliability and precision of these methods were compared across language domains and grade bands. The CTT and UIRT methods were found to have similar reliability and precision, both lower than those of the A-IRT and MIRT methods; the reliability of A-IRT and MIRT was comparable for most domains and grade bands. Policy implications and limitations of the study, as well as directions for further research, are discussed.
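As a concrete illustration of the CTT-based number-correct approach mentioned in the abstract, the sketch below computes number-correct subscores for one domain and their Cronbach's alpha reliability (the standard CTT internal-consistency estimate). The domain name and the scored response data are hypothetical, not taken from the dissertation.

```python
def cronbach_alpha(responses):
    """Cronbach's alpha for a matrix of scored responses
    (rows = examinees, columns = items), using population variances."""
    n = len(responses)
    k = len(responses[0])
    # Per-item variances
    item_vars = []
    for j in range(k):
        col = [r[j] for r in responses]
        m = sum(col) / n
        item_vars.append(sum((x - m) ** 2 for x in col) / n)
    # Variance of the total (number-correct) scores
    totals = [sum(r) for r in responses]
    mt = sum(totals) / n
    total_var = sum((t - mt) ** 2 for t in totals) / n
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical 0/1 scored responses for one domain (5 examinees, 4 items)
listening = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [1, 1, 0, 0],
]

subscores = [sum(r) for r in listening]  # number-correct subscores
alpha = cronbach_alpha(listening)
print(subscores, round(alpha, 3))  # → [3, 2, 0, 4, 2] 0.727
```

With only four items, alpha is modest, which illustrates the abstract's point that short subsections yield subscores less reliable than the full test; augmented and multidimensional IRT methods address this by borrowing information across domains.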
Keywords/Search Tags:Methods, Domains, Reporting, Item response theory, Test, Language