Date of Award

Spring 2010

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Department of Graduate Psychology

Advisor(s)

Christine E. DeMars

Robin Anderson

Josh Goodman

Abstract

To fairly compare scores derived from different forms of the same test within the Item Response Theory framework, all individual item parameters must be on the same scale. A new approach, the RPA method, which is based on transformations of predicted score distributions, was evaluated here. In a simulation study, it was shown to produce results comparable to the widely used Stocking-Lord (SL) method under varying conditions of test length, number of common items, and differing ability distributions. The new method was also examined using actual student data and a resampling analysis. Both the simulation study and the actual student data study yielded very similar transformation constants for the RPA and SL methods when 15 or 10 common items were used. However, the RPA method produced greater variance than the SL method, especially when only 5 common items were used in the actual student data analysis. The simulated and actual data findings demonstrate that the RPA method is a viable option for producing the transformation constants necessary for transforming separately calibrated item parameter estimates prior to equating.
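The abstract refers to linking constants that place separately calibrated item parameters on a common scale. As a minimal sketch only (not the dissertation's RPA procedure), the standard linear transformation used by methods such as Stocking-Lord can be expressed as follows; the constants and parameter values below are purely illustrative:

```python
def transform_item_params(a, b, A, B):
    """Rescale 2PL item parameter estimates from a 'new' form onto a
    'base' form's scale using linking constants A and B.

    Under the linear scale transformation theta* = A*theta + B, the
    rescaled discrimination is a* = a / A and the rescaled difficulty
    is b* = A*b + B.
    """
    return a / A, A * b + B

# Hypothetical constants A = 1.1, B = -0.2 and one illustrative item
a_star, b_star = transform_item_params(a=1.5, b=0.8, A=1.1, B=-0.2)
```

Once every common item's parameters are rescaled this way, the two calibrations share a metric and equating can proceed.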

Included in

Psychology Commons
