
Examining the construct validity of a web-based academic listening test: An investigation of the effects of response formats in a web-based listening test

Posted on: 2008-01-26
Degree: Ph.D
Type: Dissertation
University: University of California, Los Angeles
Candidate: Shin, Sunyoung
GTID: 1445390005466475
Subject: Language
Abstract/Summary:
As computer-based and web-based assessments evolve into the standard mode of delivery in second/foreign language testing, it is becoming increasingly common to present listening texts in a video format, potentially enhancing the situational authenticity of the test by replicating the features of real-world listening. Furthermore, automatic response analysis in web-based listening tests may improve scoring efficiency in large-scale testing and make the extended response format feasible, increasing interactional authenticity over the selected response format, since the extended format requires greater breadth and depth of language knowledge and more use of strategic competence.

However, compared to the extensive research on the effect of visual images on listening comprehension, little research has investigated the validity of constructed responses in a web-based listening test. In addition, we lack research on how different response formats in web-based academic listening tests might measure different aspects of the academic listening comprehension construct and affect test takers' listening performance differently. This dissertation therefore investigates the construct validity of a web-based listening test by examining the effects of task types on test takers' performance in a web-based academic listening test context.

Specifically, this research addresses two questions: (1) To what extent does response format affect test takers' performance on a web-based academic listening test? (2) To what extent do different response formats measure the same aspects of academic listening comprehension?

The results of this study show no task order effect. However, the results of repeated-measures ANOVA suggest that listeners were better able to extract the main and major ideas in the summary task than in the open-ended question and incomplete outline tasks, whereas they were better at identifying supporting details in the incomplete outline task than in the open-ended question and summary tasks. Thus, it can be concluded that the different response formats lead to significant differences in how test takers extract idea units in a hierarchical manner.

Nevertheless, the CFA-MTMM analyses indicate that the three web-based response formats (summary, open-ended questions, and incomplete outline) do measure test takers' academic listening comprehension in terms of their understanding of hierarchical lecture structure, defined as main ideas, major ideas, and supporting details.
Keywords/Search Tags: Test, Web-based, Listening, Response, Validity