Creative Commons License
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.
Date of Award
2012
Degree Name
Doctor of Philosophy (PhD)
Department of Graduate Psychology
Advisor(s)
Donna L. Sundre
Christine E. DeMars
Meta-assessment, or the assessment of assessment, can provide meaningful information about the trustworthiness of an academic program’s assessment results (Bresciani, Gardner, & Hickmott, 2009; Palomba & Banta, 1999; Suskie, 2009). Many institutions conduct meta-assessments for their academic programs (Fulcher, Swain, & Orem, 2012), but no research exists to validate the uses of these processes’ results. This study developed the validity argument for the uses of a meta-assessment instrument at one mid-sized university in the mid-Atlantic region. The meta-assessment instrument is a fourteen-element rubric aligned with a general outcomes assessment model. Trained raters apply the rubric to the annual assessment reports submitted by all academic programs at the institution, and feedback based on these ratings is provided to programs about the effectiveness of their assessment processes. Prior research had used generalizability theory (G theory) to estimate the dependability of ratings provided by graduate students with advanced training in assessment and measurement techniques. This research focused on the dependability of the ratings provided to programs by faculty raters. To extend the generalizability of the meta-assessment ratings, a new fully crossed G-study was conducted with eight faculty raters so that the dependability of their ratings could be compared to that found in the previous graduate student study. Results showed that the relative and absolute dependability of two-rater faculty teams (ρ² = .90, Φ = .88) were comparable to the dependability estimates for two-rater teams of graduate students. Faculty raters were somewhat less precise than graduate students in their ratings of individual elements, but not substantially so. Based on these results, the generalizability of the meta-assessment ratings was expanded to a larger universe of raters. Rater inconsistencies on individual elements highlighted potential weaknesses in rater training.
Additional evidence should be gathered to support several assumptions of the validity argument. The current research provides a roadmap for stakeholders to conduct meta-assessments and outlines the importance of validating meta-assessment uses at the program, institutional, and national levels.
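The relative and absolute dependability coefficients reported above come from a fully crossed reports × raters G-study. As an illustrative sketch only (this is not the dissertation's analysis code; the toy score matrix and function name are hypothetical), the two coefficients for two-rater teams can be computed from the ANOVA variance components of such a design:

```python
def g_coefficients(scores, n_raters_team):
    """Relative (Erho^2) and absolute (Phi) dependability for a fully
    crossed reports x raters G-study, projected to rating teams of
    n_raters_team raters.

    scores: one row per assessment report, one column per rater
    (a hypothetical layout, not the study's actual data).
    """
    n_p = len(scores)       # objects of measurement (assessment reports)
    n_r = len(scores[0])    # raters in the G-study
    grand = sum(sum(row) for row in scores) / (n_p * n_r)
    p_means = [sum(row) / n_r for row in scores]
    r_means = [sum(row[j] for row in scores) / n_p for j in range(n_r)]

    # Sums of squares for the two main effects and the residual
    ss_p = n_r * sum((m - grand) ** 2 for m in p_means)
    ss_r = n_p * sum((m - grand) ** 2 for m in r_means)
    ss_tot = sum((x - grand) ** 2 for row in scores for x in row)
    ss_res = ss_tot - ss_p - ss_r    # report-x-rater interaction + error

    # Solve the expected mean squares for the variance components
    ms_res = ss_res / ((n_p - 1) * (n_r - 1))
    var_res = ms_res                                      # interaction + error
    var_p = max((ss_p / (n_p - 1) - ms_res) / n_r, 0.0)   # universe-score variance
    var_r = max((ss_r / (n_r - 1) - ms_res) / n_p, 0.0)   # rater severity effect

    # D-study projection: only error variances shrink with team size.
    # Relative (Erho^2) ignores the rater main effect; absolute (Phi) keeps it.
    rel = var_p / (var_p + var_res / n_raters_team)
    absolute = var_p / (var_p + (var_r + var_res) / n_raters_team)
    return rel, absolute

# Toy data: five reports scored by two raters on a single rubric element
toy = [[3, 4], [2, 2], [4, 5], [1, 2], [5, 5]]
rel, absolute = g_coefficients(toy, n_raters_team=2)
```

Because the rater main effect (severity) counts as error only for absolute decisions, Φ can never exceed ρ² in this design, which matches the pattern of the reported estimates (.88 ≤ .90).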
Orem, Chris D., "Demonstrating Validity Evidence of Meta-Assessment Scores Using Generalizability Theory" (2012). Dissertations. 65.