
A Validation Study of Reading-Writing Integrated Assessment in the College English Classroom

Posted on: 2012-08-27    Degree: Doctor    Type: Dissertation
Country: China    Candidate: L Jiang    Full Text: PDF
GTID: 1115330368475803    Subject: English Language and Literature
Abstract/Summary:
Where there is language instruction, there is language testing. Effective language testing measures how far teaching goals have been achieved and how much students have learned, and also exerts positive washback effects on instruction. Writing is an important form of language output. The writing test is regarded as one of the most scientific and effective ways to evaluate learners' English proficiency and has therefore become a necessary component of many large-scale English tests. Both the TOEFL iBT and academic writing assessment in American universities include integrated writing tasks, which offers inspiration for writing assessment in Chinese universities.

This study pursues two goals. On the one hand, the dissertation reviews theories of writing and writing assessment and then proposes a framework for designing reading-writing integrated assessment (RWIA) tasks for the college English classroom in China. The theoretical foundations of RWIA include an exploration of the nature of writing and the writing process, an explanation of the constructs of writing ability and their measurable variables, a classification of writing tasks with their merits and demerits, an examination of the Teaching Requirements and the CET testing syllabuses, and an interpretation of Bachman and Palmer's notions of authenticity and interactiveness. The essence of classroom assessment and test specification is described, on the basis of which a detailed RWIA framework is presented. On the other hand, experimental tests are carried out to verify the practicality, reliability and validity of the RWIA framework. The validation study consists of identifying possible problems of RWIA, testing its reliability and validity when applied to different levels of students and on a large scale, and identifying differences in students' performance on the integrated task versus the independent task through discourse comparison.
Quantitative data are collected and analyzed to support the interpretation.

By nature, writing is a linguistic, socio-cultural and cognitive activity. In contrast to speech, written language is characterized by formality, accuracy and complexity. Writing is a means of communication and is influenced by social and cultural conventions. Writing is also a cognitive process in the writer's mind, recursive rather than linear: expert writers tend to engage in knowledge transforming, while novice writers are likely to generate texts by knowledge telling. The writing process is mainly an interaction between the individual writer and the task environment, influenced by the writer's working memory, motivation and affect, cognitive processes, and long-term memory. Hymes (1972), Canale and Swain (1980), and Bachman (1990) divide language knowledge into three types: linguistic knowledge, discourse knowledge and sociolinguistic knowledge. Later, Bachman and Palmer (1996) introduce strategic competence into communicative language ability. Building on their work, the author summarizes writing ability as linguistic knowledge, discourse knowledge and strategic competence. Writing ability is multi-componential, and its dimensions can be measured by the complexity, accuracy and fluency of the language (Skehan, 1998; Ellis, 2008). On this basis, the author adopts ten quantitative measures: average word length, type/token ratio of words, average number of words per T-unit, ratio of clauses to T-units, percentage of dependent clauses among total clauses, percentage of error-free T-units, percentage of error-free clauses, average number of words per text, average number of T-units per text, and average number of clauses per text. A review of large-scale writing assessment at home and abroad finds that writing tasks are generally divided into independent tasks and integrated tasks.
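Most of the ten measures above are simple ratios once an essay has been segmented. A minimal sketch of the per-essay subset, assuming tokenization and the T-unit/clause segmentation and error marking have already been done by hand (as is usual in such studies); the function and parameter names are illustrative, not taken from the dissertation:

```python
def text_measures(words, t_units, clauses, dep_clauses, ef_t_units, ef_clauses):
    """Compute seven of the ten quantitative measures for a single essay.

    words       : list of word tokens (tokenization assumed done elsewhere)
    t_units     : number of T-units in the text
    clauses     : number of clauses
    dep_clauses : number of dependent clauses
    ef_t_units  : number of error-free T-units
    ef_clauses  : number of error-free clauses
    """
    n = len(words)
    return {
        "avg_word_length": sum(len(w) for w in words) / n,
        "type_token_ratio": len({w.lower() for w in words}) / n,
        "words_per_t_unit": n / t_units,
        "clauses_per_t_unit": clauses / t_units,
        "pct_dependent_clauses": dep_clauses / clauses,
        "pct_error_free_t_units": ef_t_units / t_units,
        "pct_error_free_clauses": ef_clauses / clauses,
    }
```

The remaining three measures (average words, T-units and clauses per text) are means of these counts across a set of essays rather than properties of one essay.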
Integrated writing tasks are further broken down into reading-writing integrated tasks and listening-reading-writing integrated tasks. The merits and demerits of each type are discussed, and RWIA is proposed for English classroom assessment in Chinese universities, since the reading-writing integrated task is more feasible for classroom teachers than the listening-reading-writing integrated task.

The RWIA framework aims to provide theoretical guidance and practical reference for college teachers designing writing tasks for classroom assessment, and should therefore be driven by the Teaching Requirements and the CET testing syllabuses. The Teaching Requirements describe writing ability at three levels, while the testing syllabuses define topic, genre, length, time and marking criteria. Bachman and Palmer's (1996) authenticity and interactiveness lay the theoretical foundation for RWIA: the integrated writing task mirrors academic and professional tasks in real life and interacts with students' interest, topic knowledge, linguistic competence and strategic competence, thereby facilitating writing performance at an optimal level.

Designing the RWIA framework involves a discussion of the essence of classroom writing assessment and a description of the test specification. RWIA is both an achievement test and a criterion-referenced test, evaluating the effectiveness of teaching and learning. Holistic scoring is widely used and quick, while analytic scoring provides detailed reports but is time-consuming.
On the basis of Weigle's (2007) summary of writing dimensions and Bachman and Palmer's (1996) account of the characteristics of input, the author sets forth a detailed RWIA framework, according to which sample tasks are designed. The framework then undergoes two stages of experimental testing of its practicality, reliability and validity.

The pilot study concerns possible problems in applying RWIA, together with its reliability and validity. Thirty candidates at Northeastern University participated. They took a reading-writing integrated test and completed a candidate questionnaire immediately afterwards. Their essays were marked analytically by two experienced raters and scored holistically by a third experienced rater using the same scoring rubrics, and the raters completed a rater questionnaire immediately after rating. The average of the two analytic ratings served as the core parameter of the study, while the candidates' CET4 scores (total score and writing score) and their scores on in-class writing exercises served as external measures. Reliability coefficients (Cronbach's alpha), M-estimators and correlation coefficients were computed, with satisfactory results. First, the alpha of RWIA is 0.704, above the level typically required of a writing test. Second, the sub-total correlation coefficients are 0.779-0.859 and the sub-sub correlations 0.473-0.639, which demonstrate acceptable construct validity. Third, the correlations of RWIA with the CET4 writing score (RAT-CET4W, 0.723) and with the in-class writing-exercise scores (RAT-SWE, 0.712) indicate desirable criterion-related validity. Moreover, the questionnaire results show that most students and all teachers hold a positive attitude towards RWIA, believing that it can evaluate Chinese college students' English writing ability.
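The two statistics that carry the pilot-study argument, Cronbach's alpha for reliability and Pearson correlations for construct and criterion-related validity, can be sketched in a few lines. This is a generic illustration of the standard formulas, not a reproduction of the dissertation's computations or data:

```python
from statistics import variance  # sample variance

def cronbach_alpha(items):
    """Cronbach's alpha for a set of scored dimensions.

    items: one list per scoring dimension, each holding the score every
    candidate received on that dimension (all lists the same length).
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # each candidate's total
    return k / (k - 1) * (1 - sum(variance(i) for i in items) / variance(totals))

def pearson_r(x, y):
    """Pearson correlation, as used for sub-total, sub-sub and
    criterion-related (e.g. RWIA vs. CET4 writing) coefficients."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (sum((a - mx) ** 2 for a in x) ** 0.5
                  * sum((b - my) ** 2 for b in y) ** 0.5)
```

With the pilot data, `items` would be the analytic sub-scores of the 30 candidates, and alpha above 0.70 (here, 0.704) is the conventional acceptability threshold the abstract appeals to.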
It can therefore be concluded that RWIA exhibits reasonable reliability and validity and is applicable for classroom use.

The field study focuses on the reliability and validity of RWIA when applied to different levels of students and on a large scale, together with a comparison against traditional independent writing assessment (IWA). Ninety candidates took part: 30 freshmen, 30 sophomores and 30 graduate students. This stratified random selection was intended to represent the levels of students taking English as a compulsory course in most Chinese universities. All candidates took an IWA task first, an RWIA task on the same topic one month later, and a questionnaire immediately after the RWIA task. Their essays for both tasks were marked analytically by two experienced raters and scored holistically by a third experienced rater on the same rating criteria, and the average of the two analytic ratings was taken as the core measure. As in the pilot study, tests of reliability, M-estimators and correlation were carried out, yielding the following findings. On the one hand, when applied to the three levels of candidates, RWIA exhibits reasonable alpha coefficients (0.718, 0.753 and 0.701 respectively) and acceptable construct validity (sub-total correlation coefficients of 0.736-0.764, 0.841-0.906 and 0.778-0.873 respectively, and generally acceptable sub-sub correlation coefficients of 0.278-0.560, 0.535-0.817 and 0.467-0.677 respectively). On the other hand, when applied on a large scale, RWIA also displays a satisfactory alpha (0.713) and acceptable construct validity (sub-total correlation coefficients of 0.790-0.843 and sub-sub correlation coefficients of 0.445-0.676).
It can thus be concluded that RWIA can be applied in classroom assessment to students at different levels, as well as on a large scale, to evaluate Chinese college students' English writing ability with acceptable reliability and validity.

Furthermore, discourse-based comparative analyses of the 180 essays by the 90 candidates are made to find the similarities and differences in candidates' performance on the two writing tasks. The subjective analytic measurement was analyzed in terms of content, organization, language accuracy and language complexity; the objective discourse analysis was examined in terms of language complexity and fluency. Both stratified-sample and whole-sample comparisons were conducted. The results show that candidates tend to perform better on the RWIA. On the one hand, both the stratified samples and the whole sample generally obtain higher scores (total score and sub-category scores) on the RWIA task. On the other hand, they are likely to use longer words, write longer sentences (except the sophomore candidates) and produce more complex sentences (CS) and compound-complex sentences (CCS); the proportions of passive voice and nominalization both double. On the whole, fluency is enhanced through more words, more sentences, more CSs and more CCSs. However, both the stratified samples and the whole sample show a lower type/token ratio on the RWIA task, indicating that candidates deploy roughly the same range of vocabulary in the two tasks. To some extent, the findings of the comparative study conform to those of the questionnaire: students mostly hold that the reading material in the RWIA task helps them produce better writing. For the freshman candidates, facilitation appears in all four aspects, namely content, organization, language accuracy and language complexity.
For candidates at higher levels, ideas for content and organization may be especially promoted. It can therefore be concluded that RWIA may well help college students produce better essays.

In conclusion, this dissertation proposes an RWIA framework for classroom assessment on the basis of a theoretical review and verifies its reliability and validity through experimental tests. This empirical study attempts to promote a change in writing assessment in the college English classroom and is intended to provide reference and implications for English teachers. It is hoped that the framework will have positive washback effects on English writing instruction in Chinese universities.
Keywords/Search Tags: Reading-Writing Integrated Assessment, Validation Study, Reliability, Construct Validity, Criterion-Related Validity, Authenticity, Interactiveness