Creative Commons License
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.
Date of Award
Doctor of Philosophy (PhD)
Department of Graduate Psychology
Sara J. Finney
Donna L. Sundre
To ensure program quality and meet accountability mandates, it is becoming increasingly important for educational institutions to show “value-added” for attending students. Value-added is often evidenced by some form of pre-post assessment, where a change in scores on a construct of interest is considered indicative of student growth. Although missing data are a common problem for these pre-post designs, missingness is rarely addressed and cases with missing data are often listwise deleted. The current study examined the mechanism underlying, and the bias resulting from, missingness due to posttest nonattendance in a higher-education accountability testing context. Although data were missing for some students due to posttest nonattendance, these initially missing data were subsequently collected via makeup testing sessions, allowing an empirical examination of both the mechanism underlying the missingness and its biasing effects. Parameter estimates and standard errors were compared between the “complete” (i.e., including makeup) data and a number of different missing data techniques. These comparisons were conducted across varying percentages of missingness and across noncognitive (i.e., developmental) and cognitive (i.e., knowledge-based) measures. For both noncognitive and cognitive measures, posttest data were found to be missing not at random (MNAR), indicating that bias should occur when utilizing any missing data handling technique. As expected, the inclusion of auxiliary variables (i.e., variables related to missingness, to the variable with missing values, or to both) decreased the conditional relationship between posttest noncognitive measure scores and posttest attendance (i.e., missingness); however, it increased the conditional relationship between posttest cognitive measure scores and posttest attendance.
Thus, utilizing advanced missing data handling with auxiliary variables resulted in reduced parameter bias and reduced standard error inflation for the noncognitive measure, but increased parameter bias for some parameters (the posttest mean and pre-post mean change) for the cognitive measure. These effects became more pronounced as the percentage of missingness increased. With respect to future research, additional examination of the bias-inducing effects of employing missing data techniques is needed. With respect to testing practice, assessment practitioners are advised to avoid missingness where possible through well-designed assessment methods, and, when missingness is unavoidable, to attempt to thoroughly understand the missingness mechanism.
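The biasing effect of MNAR posttest nonattendance described in the abstract can be illustrated with a small simulation (a minimal sketch with hypothetical numbers, not the study's actual data or analysis): when students with lower posttest scores are less likely to attend the posttest session, the complete-case (listwise-deleted) posttest mean overestimates the true posttest mean.

```python
import math
import random

random.seed(0)

# Hypothetical simulation: 10,000 students with pre/post scores.
n = 10_000
pre = [random.gauss(50, 10) for _ in range(n)]
# True average pre-post growth is 5 points.
post = [p + 5 + random.gauss(0, 5) for p in pre]

# MNAR nonattendance: the lower a student's would-be posttest score,
# the less likely that student is to attend the posttest session.
def attends(score):
    return random.random() < 1 / (1 + math.exp(-(score - 55) / 5))

observed_post = [y for y in post if attends(y)]

full_mean = sum(post) / len(post)                  # "complete" data mean
cc_mean = sum(observed_post) / len(observed_post)  # listwise-deleted mean

print(f"full-data posttest mean:     {full_mean:.1f}")
print(f"complete-case posttest mean: {cc_mean:.1f}")
```

Because attendance here depends on the posttest score itself, even techniques that condition only on observed data cannot fully remove the bias, which is the practical consequence of an MNAR mechanism.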
Kopp, Jason P., "The treatment of missing data when estimating student growth with pre-post educational accountability data" (2014). Dissertations. 79.