
Using planned comparisons for training evaluation: An application and assessment for the 2 x 2 mixed factorial design

Posted on: 1999-01-28
Degree: Ph.D.
Type: Thesis
University: State University of New York at Albany
Candidate: Katzman, Steven
Full Text: PDF
GTID: 2469390014471538
Subject: Occupational psychology
Abstract/Summary:
Statistically evaluating the results of a training intervention is an important method for determining whether training is successful. The 2 x 2 mixed factorial design, consisting of a pretest and posttest for both an experimental and a control group, is frequently used by researchers interested in statistically evaluating a training intervention. Traditional statistical methods for analyzing this design include posttest-only, gain-scores, and analysis of covariance (ANCOVA). Arvey et al. (1985; 1989) examined the statistical power of the three traditional methods, recommending ANCOVA for its power advantages in most cases. However, the three traditional methods do not explicitly test the theory-driven predicted effect of a training intervention on group means in the 2 x 2 design: an ordinal interaction, in which the experimental group improves from pretest to posttest while the control group remains essentially unchanged. The current study examines the statistical properties of two planned comparison strategies, a two-step and a three-step strategy, for evaluating the 2 x 2 mixed factorial design. Using planned comparisons to evaluate the 2 x 2 mixed factorial training evaluation design is consistent with statisticians' calls (e.g., Bobko, 1986) for examining data for the theory-driven predicted pattern of results.

The current study uses a Monte Carlo strategy to simulate training evaluation data. Parameters examined include sample size, effect size, correlation between pretest and posttest, and change in experimental group variability. Manipulations simulating differential group mean change and non-equivalent pretest means were also studied. Power and Type I error rates were examined for the traditional statistical strategies and for the two-step and three-step planned comparison strategies.

Results showed that the two-step and three-step strategies kept Type I error rates at or below nominal levels. The choice of analysis strategy made little difference when effect size was large or small: power was either sufficiently high or sufficiently low in most instances for all statistical strategies. It is with moderate effect sizes that the planned comparison strategies showed promising results. The two-step strategy tended to be more powerful than the traditional methods at low to moderate pretest-posttest correlations, but less powerful at higher correlations. The three-step strategy was often more powerful than all other methods under many parameter combinations, except when parameters were most favorable to high power (i.e., large sample sizes and effect sizes).

Differential group mean change reduced the power of all statistical methods. Non-equivalence at pretest made the posttest-only strategy most powerful, although all statistical procedures may yield misleading results under non-equivalence. Results are discussed in the more general context of null hypothesis significance testing, training evaluation, and practical significance.
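To make the simulation approach concrete, the following is a minimal Python sketch of a Monte Carlo power comparison; it is hypothetical, not the dissertation's actual procedures. It simulates correlated pretest-posttest scores for an experimental and a control group, then estimates power for a gain-score t-test and for a single generic planned contrast aimed at the predicted ordinal interaction. The sample size, effect sizes, correlation, contrast weights (-1, 3, -1, -1), and the large-sample z approximation are all illustrative assumptions; the two-step and three-step strategies studied in the thesis involve additional decision steps not shown here.

```python
# Hypothetical Monte Carlo sketch: power of a gain-score t-test vs. one
# planned contrast in a 2 x 2 mixed (pretest/posttest x experimental/control)
# training evaluation design. Illustrative only, not the thesis's procedures.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulate_group(n, rho, post_shift=0.0):
    """Draw n (pretest, posttest) pairs with correlation rho; shift the
    posttest mean by post_shift standard deviations (the training effect)."""
    cov = [[1.0, rho], [rho, 1.0]]
    scores = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    scores[:, 1] += post_shift
    return scores

def power(n=30, rho=0.5, d=0.5, alpha=0.05, reps=5000):
    """Estimate rejection rates for two analysis strategies over reps
    simulated data sets. Parameter defaults are illustrative assumptions."""
    hits_gain = hits_contrast = 0
    for _ in range(reps):
        exp = simulate_group(n, rho, post_shift=d)   # experimental group
        ctl = simulate_group(n, rho, post_shift=0.0) # control group

        # Gain-score analysis: one-tailed t-test on posttest - pretest.
        _, p = stats.ttest_ind(exp[:, 1] - exp[:, 0], ctl[:, 1] - ctl[:, 0],
                               alternative="greater")
        hits_gain += p < alpha

        # One planned contrast on the four cell means with weights
        # (-1, 3, -1, -1), pointing at the predicted ordinal interaction
        # (experimental posttest highest). Per-subject composites keep the
        # pretest-posttest dependence in the standard error.
        comp_exp = 3.0 * exp[:, 1] - exp[:, 0]
        comp_ctl = -(ctl[:, 0] + ctl[:, 1])
        est = comp_exp.mean() + comp_ctl.mean()
        se = np.sqrt(comp_exp.var(ddof=1) / n + comp_ctl.var(ddof=1) / n)
        # Large-sample z approximation, for brevity.
        hits_contrast += est / se > stats.norm.ppf(1 - alpha)
    return hits_gain / reps, hits_contrast / reps

if __name__ == "__main__":
    for d in (0.2, 0.5, 0.8):
        g, c = power(d=d)
        print(f"d={d}: gain-score power {g:.2f}, contrast power {c:.2f}")
```

Sweeping such a loop over the parameter grid (sample size, effect size, pretest-posttest correlation, and so on) is the general shape of the study's simulation, though the specific analysis steps compared in the thesis differ from this sketch.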
Keywords/Search Tags: Training, Statistical, Results, Mixed factorial, Planned comparison