Date of Award

Spring 2013

Document Type


Degree Name

Doctor of Philosophy (PhD)


Department of Graduate Psychology


Dena A. Pastor

Deborah Bandalos

Christine DeMars


As the demand for accountability and transparency in higher education has increased, so too has the call for direct assessment of student learning outcomes. Accompanying this increase in knowledge-based, cognitive assessments administered in higher education contexts is a growing emphasis on assessing noncognitive aspects of student growth and development over the course of the college career. Noncognitive outcomes are most often evaluated via self-report instruments with Likert-type response scales, posing unique challenges for researchers and assessment practitioners hoping to draw valid conclusions from these data. One long-debated characteristic of such instruments is the midpoint response option. More specifically, prior research suggests that respondents may be more or less likely to endorse the midpoint response option under different measurement and respondent dispositional conditions, thus introducing construct-irrelevant variance into respondent scores. The current study expanded upon previous work to examine the effects of various item and respondent characteristics on endorsement and conceptualization of the midpoint response option in a noncognitive assessment context. A mixed-methods approach was employed to fully address the research questions associated with two studies, one quantitative and one qualitative in nature. Study 1 employed hierarchical generalized linear modeling to simultaneously examine the effects of respondent characteristics and experimentally manipulated item characteristics on the probability of midpoint response option endorsement. Respondent characteristics included self-reported effort expended on the assessments administered and verbal aptitude (SAT verbal scores).
Respondents were randomly assigned different forms of the instrument, which varied in item set location (scales administered earlier versus later in the instrument) and midpoint response option label (unlabeled, neutral, undecided, neither agree nor disagree). Experimental manipulation of these variables allowed a stronger examination of their influence, and of how they might interact with respondent characteristics (i.e., effort, verbal aptitude), than previous studies investigating the issue. Study 2 employed a think-aloud protocol to further examine and understand respondent use and conceptualization of the midpoint response option under each manipulation of its label (unlabeled, neutral, undecided, neither agree nor disagree). Four female and four male participants were randomly selected for the think-aloud process, which used a subset of the items administered in Study 1. Findings from both studies suggest that the midpoint response option is prone to misuse in practice. Results of Study 1 indicate that respondent characteristics, the experimentally manipulated item characteristics, and their interactions can significantly affect the probability of midpoint response option endorsement. Results of Study 2 reveal that the justifications respondents provided for midpoint endorsement are mostly construct-irrelevant, and that differences in conceptualization of the midpoint response option across label variations appear idiosyncratic. These findings have significant implications for the validity of inferences drawn from noncognitive assessment scores and for the improvement of assessment practice.
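The structure of the Study 1 analysis — a two-level logistic (hierarchical generalized linear) model in which respondent-level predictors, item-level manipulations, and their cross-level interactions shift the log-odds of midpoint endorsement — can be sketched as below. All coefficient values and predictor names here are illustrative assumptions for demonstration only; they are not the study's estimates.

```python
import math

# Made-up coefficients for an illustrative HGLM; not the dissertation's results.
COEF = {
    "intercept": -1.0,       # baseline log-odds of midpoint endorsement
    "effort": -0.40,         # level-2: self-reported effort (z-scored)
    "sat_verbal": -0.25,     # level-2: verbal aptitude, SAT verbal (z-scored)
    "label_neutral": 0.30,   # level-1: midpoint labeled "neutral" vs. unlabeled
    "late_position": 0.20,   # level-1: scale administered later in the instrument
    "effort_x_late": -0.15,  # cross-level interaction: effort x item position
}

def p_midpoint(effort, sat_verbal, label_neutral, late_position, coef=COEF):
    """Probability of midpoint endorsement under the illustrative model."""
    logit = (coef["intercept"]
             + coef["effort"] * effort
             + coef["sat_verbal"] * sat_verbal
             + coef["label_neutral"] * label_neutral
             + coef["late_position"] * late_position
             + coef["effort_x_late"] * effort * late_position)
    return 1.0 / (1.0 + math.exp(-logit))
```

Under these hypothetical coefficients, a low-effort respondent answering a late-positioned, "neutral"-labeled item has a higher endorsement probability than a high-effort respondent on an early, unlabeled item, mirroring the kind of cross-level effect the model is designed to detect.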

Included in

Psychology Commons


