The Question-Behavior Effect: Increasing Test-Taking Motivation at a Low Cost

Faculty Advisor Name: Sara Finney

Department: Department of Graduate Psychology

Description

Questions concerning the quality of instruction, student learning, and the affordability of education have prompted calls for learning outcomes assessment data and for comparability studies across states and institutions (US Department of Education, 2006) and across countries (e.g., the Progress in International Reading Literacy Study; the Trends in International Mathematics and Science Study). Performance on these assessments may have no personal consequences for students (i.e., low-stakes tests), yet the results are highly relevant for accountability mandates and cross-country comparisons. Lack of motivation to perform well on such tests threatens the validity of inferences from the test scores and, in turn, the decisions based on those inferences (Wise & Smith, 2011). Given this concern about making inaccurate inferences from low-stakes assessments, practitioners have evaluated strategies that may increase students’ test-taking effort and, in turn, improve the accuracy of test scores. Four broad strategies have been proposed: (a) offering external incentives; (b) increasing test relevance; (c) modifying assessment design; and (d) promising feedback. Offering external incentives and increasing test relevance have had the largest impact on test-taking effort; the other two strategies have proven ineffective. At our institution, monetary incentives are not practically feasible. Likewise, modifying our informational messaging or test instructions to inflate the tests’ personal relevance would be untruthful and therefore unethical.

Our study investigates the utility of an intervention based on the question-behavior effect: research has shown that questioning individuals about a future behavior influences their subsequent performance of that behavior (Spangenberg et al., 2016). Moreover, this question-behavior effect is enhanced when the questions include positive self-identity prompts; individuals engage in the behavior to assume the positive self-identity. To evaluate the effectiveness of this intervention in a testing context, we randomly assigned students to one of three conditions prior to completing a low-stakes test: answering five questions about their intended effort (verb prompts), answering five questions about their intended effort that referenced a positive self-identity (noun prompts), or a control condition (no questions). We then administered a multiple-choice test and collected self-reported effort and perceived test importance. As predicted, answering the questions about effort before completing the test resulted in statistically and practically significantly higher self-reported effort (d = .30) and perceived test importance (d = .23). Moreover, significantly fewer students in the question conditions were flagged to be filtered from the dataset due to low effort: 15.3% of students in the no-prompt condition would be filtered, compared with only 11.8% and 10.3% in the verb- and noun-prompt conditions, respectively. Test performance did not differ across conditions; however, in the question conditions test performance contained significantly less construct-irrelevant variance due to effort. Thus, the low-cost, easily implemented strategy of simply asking students to report their intended effort before completing a low-stakes test appears to mitigate the effects of low test-taking effort. In addition to informing testing practice, this study extends the utility of the question-behavior effect beyond its typical domains of consumer, health, and prosocial behaviors.
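
The reported effect sizes are standardized mean differences (Cohen’s d). As a minimal sketch, assuming the standard pooled-variance formulation (the abstract does not specify which variant was computed):

\[
d = \frac{\bar{X}_{\text{question}} - \bar{X}_{\text{control}}}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)\,s_1^{2} + (n_2 - 1)\,s_2^{2}}{n_1 + n_2 - 2}}.
\]

Under this reading, d = .30 indicates that students who answered the pre-test questions reported effort roughly three-tenths of a pooled standard deviation higher than students in the control condition.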
