The (non)impact of misfitting items in computerized adaptive testing

Publication Date

2022

Document Type

Article

Abstract

To assess the potential impact of misfitting items in computerized adaptive testing, simulated examinees were administered tests containing varying percentages of such items. Item fit was manipulated to be poor near what would otherwise be each item's point of maximum information. With 30% misfitting items, the absolute value of the bias of the ability estimates tended to be larger than with 0% or 10% misfitting items, but the magnitude of this effect was small. For most ability levels and test lengths, the empirical standard error did not vary greatly with the percentage of misfitting items. The standard error estimated from the information function tended to underestimate the empirical standard error when there were 30% misfitting items, but only at higher ability levels. Overall, the misfit had little practical impact.
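The design described above can be sketched in miniature: a maximum-information adaptive test in which some items' response curves are distorted near their point of maximum information, with bias computed across replications. This is a minimal illustration under assumed conditions (a 2PL response model, EAP scoring, a flattening misfit manipulation, and invented pool parameters), not the article's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_2pl(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def info_2pl(theta, a, b):
    """Fisher information of a 2PL item; maximal at theta == b."""
    p = p_2pl(theta, a, b)
    return a ** 2 * p * (1.0 - p)

NODES = np.linspace(-4, 4, 81)

def eap(responses, a, b):
    """EAP ability estimate under a standard-normal prior."""
    w = np.exp(-0.5 * NODES ** 2)
    for u, ai, bi in zip(responses, a, b):
        p = p_2pl(NODES, ai, bi)
        w = w * (p if u else 1.0 - p)
    return float(np.sum(NODES * w) / np.sum(w))

def simulate_cat(theta_true, a, b, misfit, test_len=20):
    """Administer a maximum-information CAT to one simulated examinee."""
    used, responses = [], []
    theta_hat = 0.0
    for _ in range(test_len):
        info = info_2pl(theta_hat, a, b)
        info[used] = -np.inf  # do not readminister items
        j = int(np.argmax(info))
        used.append(j)
        p = p_2pl(theta_true, a[j], b[j])
        if misfit[j]:
            # Hypothetical misfit manipulation: flatten the response
            # curve, so fit is poorest near the item's point of
            # maximum information (theta near b).
            p = 0.5 * p + 0.25
        responses.append(rng.random() < p)
        theta_hat = eap(responses, a[used], b[used])
    return theta_hat

def mean_bias(misfit_rate, n_examinees=200, pool_size=60, theta_true=0.0):
    """Mean bias of the ability estimates across simulated examinees."""
    a = rng.uniform(0.8, 2.0, pool_size)        # discriminations (assumed)
    b = rng.uniform(-2.5, 2.5, pool_size)       # difficulties (assumed)
    misfit = rng.random(pool_size) < misfit_rate
    est = [simulate_cat(theta_true, a, b, misfit) for _ in range(n_examinees)]
    return float(np.mean(est) - theta_true)

bias_clean = mean_bias(0.0)
bias_misfit = mean_bias(0.3)
print(f"bias, 0% misfit:  {bias_clean:+.3f}")
print(f"bias, 30% misfit: {bias_misfit:+.3f}")
```

Increasing `n_examinees` tightens the empirical bias estimates; the abstract's finding is that even at 30% misfit the resulting bias remains small in magnitude.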

Creative Commons License

This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.
