Distractor Analysis: A Comparison of CTT and IRT

Presenter Information

Kathryn Thompson

Faculty Advisor Name

Brian Leventhal

Department

Department of Graduate Psychology

Description

Assessments are frequently used in higher education to examine students’ knowledge. They come in many forms, ranging from course final exams to high-stakes general education tests that a program needs for accreditation purposes. Although these tests vary in purpose at the surface level, practitioners use them all as tools toward the same long-term goal: providing students with a better education. Without reliable and valid item scores, we cannot generalize about students’ abilities, so inadequate items could hinder future students’ growth. One way to produce better items is to examine how the items are functioning.

A typical multiple-choice item has four options: one correct option and three distractors, the term for an item’s incorrect options (Gierl, Bulut, Guo, & Zhang, 2017). Ideally, higher-ability examinees select the correct option while lower-ability examinees choose a distractor, so well-written distractors help an item discriminate between low- and high-ability students. Performing a distractor analysis lets us examine whether the distractors are functioning this way. If no one chooses a distractor, it is not worth keeping on the test, since it provides no information about students’ abilities (Haladyna, 2016). Distractors that are chosen frequently may reflect common misconceptions, and identifying those misconceptions gives us the potential to improve students’ knowledge (Gierl, Bulut, Guo, & Zhang, 2017).
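As a minimal sketch of that first check (the responses, answer key, and 5% threshold below are hypothetical illustrations, not data or code from the study), the following Python snippet tallies the proportion of examinees selecting each option of one item and flags distractors that almost no one chooses:

```python
from collections import Counter

# Hypothetical responses from 20 examinees to one four-option item.
responses = ["B", "B", "A", "B", "C", "B", "A", "B", "B", "C",
             "B", "A", "B", "B", "B", "C", "B", "B", "A", "B"]
key = "B"          # assumed correct option
threshold = 0.05   # assumed cut-off for a "rarely chosen" distractor

counts = Counter(responses)
n = len(responses)

for option in "ABCD":
    p = counts.get(option, 0) / n
    role = "key" if option == key else "distractor"
    flag = ("  <- rarely chosen; candidate for revision"
            if role == "distractor" and p < threshold else "")
    print(f"Option {option} ({role}): p = {p:.2f}{flag}")
```

In this toy example, option D would be flagged: with no one selecting it, it contributes nothing to the item’s measurement of ability.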

There are two common frameworks for performing a distractor analysis: classical test theory (CTT) and item response theory (IRT). CTT is a psychometric theory that focuses on test takers’ observed scores, typically in the form of the total score (de Ayala, 2009, p. 5). To analyze distractors under CTT, we examine simple descriptive statistics, such as the proportion of examinees selecting each response option and how well the item discriminates between high- and low-ability students. IRT, on the other hand, lets us investigate how well distractors are working across the entire student ability continuum by comparing each student’s responses to other students’ responses on the items (Gierl, Bulut, Guo, & Zhang, 2017).
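The two perspectives can be sketched side by side on simulated data (the simulation, the upper/lower 27% grouping rule, and the score-quintile binning below are illustrative assumptions, not the study’s actual method). The CTT portion computes each option’s selection rate in high- and low-scoring groups; the IRT-flavored portion traces empirical option curves across the ability continuum, proxied here by total-score bins:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulate ability, a stand-in total score, and responses (0=A, 1=B, 2=C, 3=D)
# to one four-option item whose key is B; low-ability examinees favor option C.
theta = rng.normal(size=n)
total = (theta * 10 + 50).round()
p_correct = 1.0 / (1.0 + np.exp(-(theta - 0.2)))
correct = rng.random(n) < p_correct
wrong_pick = np.where(theta < 0,
                      rng.choice([0, 2, 2, 3], size=n),  # C twice as likely
                      rng.choice([0, 2, 3], size=n))
resp = np.where(correct, 1, wrong_pick)

# CTT view: option selection rates in the lower and upper 27% score groups.
lo_cut, hi_cut = np.quantile(total, [0.27, 0.73])
lo, hi = resp[total <= lo_cut], resp[total >= hi_cut]
print("option  lower  upper   diff")
for opt, label in enumerate("ABCD"):
    p_lo, p_hi = np.mean(lo == opt), np.mean(hi == opt)
    print(f"  {label}     {p_lo:.2f}   {p_hi:.2f}   {p_hi - p_lo:+.2f}")

# IRT-style view: empirical option curves across five ability (score) bins.
edges = np.quantile(total, np.linspace(0, 1, 6))
bin_idx = np.clip(np.digitize(total, edges[1:-1]), 0, 4)
print("\noption  selection proportion per ability bin (low -> high)")
for opt, label in enumerate("ABCD"):
    curve = [np.mean(resp[bin_idx == b] == opt) for b in range(5)]
    print(f"  {label}    ", np.round(curve, 2))
```

Under this simulation, the key (B) shows a positive upper-minus-lower difference, while distractor C is chosen most often in the lowest ability bins and fades as ability increases, the pattern a well-functioning distractor should show.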

In the current study, we investigated the distractors on an 82-item information literacy examination built by faculty at James Madison University (JMU) to assess whether students were competent in information literacy. IRT was the primary perspective used to identify and evaluate distractors, but CTT difficulty and discrimination indices were also calculated so the two theories could be compared. The findings will be used to improve items on the examination: removing low-functioning distractors and studying high-functioning ones shows where students hold misconceptions and, ultimately, helps improve their competency in information literacy.
