Creative Commons License

This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.

ORCID

http://orcid.org/0000-0002-3593-4685

Date of Graduation

Spring 2015

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Department of Graduate Psychology

Advisor(s)

Dena A. Pastor

Abstract

This study attempts to distinguish two classes of examinees – random responders and valid responders – on non-cognitive assessments in low-stakes testing. Most of the existing literature on detecting random responders in low-stakes settings concerns cognitive tests that are dichotomously scored. However, evidence suggests that random responding also occurs on non-cognitive assessments, and as with cognitive measures, the data derived from such instruments are used to inform practice. A threat to test score validity therefore exists if examinees’ response selections do not accurately reflect their underlying level on the construct being assessed, and using data from measures on which students did not give their best effort could have negative implications for future decisions. Thus, there is a need for a method of detecting random responders on non-cognitive assessments that are polytomously scored.

This dissertation provides an overview of existing techniques for identifying low-motivated or amotivated examinees in low-stakes cognitive testing contexts, including motivation filtering, response time effort, and item response theory (IRT) mixture modeling, with particular attention paid to an IRT mixture model referred to here as the Random Responders model – Graded Response model (RRM-GRM). Two studies, a simulation and an applied study, were conducted to explore the utility of the RRM-GRM for detecting and accounting for random responders on non-cognitive instruments in low-stakes settings. The simulation study shows considerable bias and RMSE in item parameter estimates, and bias in theta estimates, when random responders are ignored and their proportion exceeds 5%. Fitting the RRM-GRM to the same data sets yields item parameter estimates with minimal bias and RMSE and theta estimates that are essentially bias free. In the applied study, fitting the RRM-GRM to authentic data identified 5.6% of respondents as random responders. Respondents classified as random responders had higher odds of being male, of reporting lower importance of the test, and of having lower average total scores on the UMUM-15 measure used in the study. Limitations of the RRM-GRM technique are discussed.
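For readers unfamiliar with this model class, the following is a minimal sketch of how such a mixture can be written; the notation is illustrative and assumed, not quoted from the dissertation. Under the graded response model, a valid responder's probability of selecting category k on item j depends on a latent trait \theta_i through an item discrimination a_j and category thresholds b_{jk}:

P(X_{ij} = k \mid \theta_i, \text{valid}) = P^{*}_{jk}(\theta_i) - P^{*}_{j,k+1}(\theta_i), \qquad P^{*}_{jk}(\theta_i) = \frac{1}{1 + \exp[-a_j(\theta_i - b_{jk})]},

with P^{*}_{j1}(\theta_i) = 1 and P^{*}_{j,K_j+1}(\theta_i) = 0. A random responder is instead assumed to select among the K_j response options of item j uniformly at random, so with a mixing proportion \pi of random responders the marginal category probability becomes

P(X_{ij} = k \mid \theta_i) = \pi \cdot \frac{1}{K_j} + (1 - \pi)\left[P^{*}_{jk}(\theta_i) - P^{*}_{j,k+1}(\theta_i)\right].

Estimating \pi and each examinee's posterior class probability is what allows a model of this kind both to flag likely random responders and to keep their responses from biasing the item and theta estimates.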
