Rating scale issues: Investigating the stability of extreme and midpoint response style
Faculty Advisor Name
Brian Leventhal
Department
Department of Graduate Psychology
Description
A response style (RS) is a systematic tendency to respond to rating scale questions, independent of the content of the rating scale (Paulhus, 1991). For example, respondents who exhibit an extreme response style (ERS) tend to endorse response options that are more extreme than their true attitude, whereas respondents who exhibit a midpoint response style (MRS) tend to endorse options more neutral than their true attitude. By confounding attitudinal information, ERS and MRS make it difficult to accurately interpret information from attitudinal scales (Leventhal & Stone, 2018).
One method for combating and investigating RS is the IRTree model, which parses attitudinal information from RS information to determine attitude trait scores (Plieninger & Meiser, 2014). We extend the use of IRTrees to investigate the stability of RS across constructs. Some research indicates that RSs are stable across constructs (e.g., He & Van de Vijver, 2013), but other research indicates that RSs are unstable across constructs (e.g., Cabooter, Weijters, De Beuckelaer, & Davidov, 2016). Gregg and Leventhal (2019) hypothesize that the homogeneity of items across subscales may affect the stability of ERS due to the ill-defined dimensionality of the subscales. Thus, we investigated the stability of ERS and MRS across heterogeneous scales administered in the same testing session.
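To make the IRTree idea concrete, the sketch below shows one common way a 5-point Likert response can be decomposed into binary pseudo-items for a three-node tree (a midpoint node for MRS, a direction node for the attitude, and an extremity node for ERS). This is a minimal illustration of the general IRTree approach, not the specific models compared in this study; the function name and coding scheme are assumptions for the example.

```python
# Minimal sketch (assumed 5-point scale, common three-node IRTree decomposition):
# Node 1 (MRS): midpoint vs. non-midpoint response.
# Node 2 (direction): agree side vs. disagree side (undefined at the midpoint).
# Node 3 (ERS): extreme vs. moderate category (undefined at the midpoint).
# Nodes that are not reached are coded None, as is typical for pseudo-items.

def decompose_response(x: int):
    """Map a response x in {1, ..., 5} to (mrs, direction, ers) pseudo-items."""
    if x not in {1, 2, 3, 4, 5}:
        raise ValueError("expected a 5-point response coded 1-5")
    mrs = 1 if x == 3 else 0                     # 1 = midpoint endorsed
    direction = None if x == 3 else int(x > 3)   # 1 = agree side, 0 = disagree side
    ers = None if x == 3 else int(x in (1, 5))   # 1 = extreme category chosen
    return mrs, direction, ers

if __name__ == "__main__":
    for response in (1, 2, 3, 4, 5):
        print(response, decompose_response(response))
```

Each set of pseudo-items is then modeled with its own item response model, which is what allows attitude, MRS, and ERS traits to be estimated separately.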
Five hundred undergraduate students responded to the Willingness (6 items) and Self-Efficacy (9 items; SE) scales. We compared four IRTree models to evaluate the construct dependency of ERS and MRS across the Willingness and SE constructs. Using Bayesian estimation, we compared model fit using the Deviance Information Criterion (DIC; Spiegelhalter et al., 2002).
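For reference, the DIC is typically computed from the posterior as follows (a standard formulation from Spiegelhalter et al., 2002, not a study-specific quantity); lower values indicate better fit after penalizing model complexity.

```latex
% D(\theta): deviance; \bar{D}: posterior mean deviance;
% D(\bar{\theta}): deviance at the posterior mean; p_D: effective number of parameters.
\begin{align}
  D(\theta)    &= -2 \log p(y \mid \theta) \\
  p_D          &= \bar{D} - D(\bar{\theta}) \\
  \mathrm{DIC} &= \bar{D} + p_D
\end{align}
```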
We found that both ERS and MRS are dependent on the content of the rating scale. Reconsider the definition of a response style: a systematic tendency to respond to rating scale questions, independent of the content of the rating scale (Paulhus, 1991). Research on the stability of response styles across content domains calls into question whether response styles can continue to be defined as independent of rating scale content. Furthermore, the results of the current study provide context regarding the complexity of response styles and point to IRTree methods as a promising way to further investigate multiple response styles.