Estimates of Type I error and power for indices of differential bundle and test functioning

Posted on: 2006-02-19
Degree: Ph.D.
Type: Dissertation
University: Bowling Green State University
Candidate: Russell, Steven S
GTID: 1450390008967094
Subject: Psychology
Abstract/Summary:
Analyzing single items for differential item functioning (DIF) provides valuable psychometric information, especially during the early stages of test validation. There are also numerous advantages to conducting DIF analyses at aggregate levels, such as comparing multiple items simultaneously, allowing for DIF amplification or cancellation effects, and exploring sources of test bias. This study evaluated the aggregate-level DIF detection capabilities of two techniques, DFIT and SIBTEST, using Monte Carlo performance criteria. Estimates of Type I error and power rates were computed across several manipulated variables (e.g., sample size, test dimensionality) for differential test functioning (using DFIT) and differential bundle functioning (using SIBTEST). The overall Type I error rates for both techniques were inflated beyond the nominal rejection rate and varied substantially across the manipulated variables. The overall power rate for DFIT was fairly low, whereas SIBTEST demonstrated satisfactory power. In follow-up analyses, the two techniques were applied jointly to archival datasets. Implications for organizational research are discussed, including future research directions.
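The Monte Carlo logic behind the abstract's Type I error and power estimates can be illustrated with a simplified sketch. The snippet below does not implement DFIT or SIBTEST (both require item-level modeling beyond an abstract's scope); it is a hypothetical stand-in that uses a two-proportion z-test comparing a reference and a focal group on a single dichotomous item. The rejection rate over many simulated replications estimates Type I error when the groups truly have equal item proportions, and power when they differ. All parameter values (sample sizes, proportions, replication counts) are illustrative assumptions, not values from the study.

```python
import math
import random
from statistics import NormalDist


def two_prop_z(p1, p2, n1, n2):
    """Pooled two-proportion z statistic (illustrative DIF-style test)."""
    p = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se if se > 0 else 0.0


def simulate_rejection_rate(p_ref, p_foc, n=500, reps=2000, alpha=0.05, seed=1):
    """Monte Carlo estimate of the test's rejection rate.

    When p_ref == p_foc the true null holds, so the returned rate
    estimates Type I error; when p_ref != p_foc it estimates power.
    """
    rng = random.Random(seed)
    crit = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    rejections = 0
    for _ in range(reps):
        # Simulate correct/incorrect responses for each group
        ref_correct = sum(rng.random() < p_ref for _ in range(n))
        foc_correct = sum(rng.random() < p_foc for _ in range(n))
        z = two_prop_z(ref_correct / n, foc_correct / n, n, n)
        if abs(z) > crit:
            rejections += 1
    return rejections / reps
```

Running `simulate_rejection_rate(0.6, 0.6)` should return a value near the nominal 0.05, while `simulate_rejection_rate(0.6, 0.5)` returns a much higher rate, mirroring how the study compared observed rejection rates against the nominal level under null and DIF conditions.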
Keywords/Search Tags:DIF, Test, Differential, Functioning, Power, Type, Error