Weighting procedures for robust ability estimation in item response theory
Methods of ability parameter estimation in educational testing are subject to the biases inherent in the estimation procedures themselves. This is especially true for tests whose properties do not meet the asymptotic assumptions of procedures such as maximum likelihood estimation. The item weighting procedures in this study were developed to improve the robustness of such ability estimates. A series of procedures for weighting the contribution of items to examinees' scores is described and empirically tested in a simulation study under a variety of realistic conditions. Item weights are chosen to minimize the contribution of some items while maximizing the contribution of others. These procedures differentially weight items' contributions to examinees' scores by accounting for either (1) the amount of information each item provides for trait estimation or (2) the relative precision of its item parameter estimates. Results indicate that weighting by item information produced ability estimates that were moderately less biased at the tails of the ability distribution and had substantially lower standard errors than scores derived from a traditional item response theory framework. Areas for future research using this scoring method are suggested.
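The abstract does not give the weighting scheme in detail, but the information-weighting idea in approach (1) can be illustrated with a minimal sketch. The Python example below assumes a 2PL item response model and hypothetical helper names (`item_information`, `weighted_theta_estimate`): each item's contribution to the score equation is weighted by its Fisher information at a provisional ability estimate. This is one plausible reading of the idea, not the dissertation's exact procedure.

```python
import numpy as np

def p_2pl(theta, a, b):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = p_2pl(theta, a, b)
    return a**2 * p * (1.0 - p)

def weighted_theta_estimate(responses, a, b, n_iter=20):
    """Ability estimate from an information-weighted score equation.

    Weights are the item informations at the current provisional theta,
    normalized to sum to the number of items (a hypothetical scheme,
    assumed here for illustration only).
    """
    theta = 0.0
    for _ in range(n_iter):
        info = item_information(theta, a, b)
        w = info * len(a) / info.sum()           # information-based weights
        p = p_2pl(theta, a, b)
        grad = np.sum(w * a * (responses - p))   # weighted score function
        hess = -np.sum(w * a**2 * p * (1 - p))   # weighted observed information
        theta -= grad / hess                     # Newton-Raphson update
    return theta

# Usage with made-up item parameters and a single response vector
a = np.array([1.2, 0.8, 1.5, 1.0])   # discriminations
b = np.array([-0.5, 0.0, 0.5, 1.0])  # difficulties
x = np.array([1, 1, 0, 0])           # scored responses (1 = correct)
print(weighted_theta_estimate(x, a, b))
```

Because the weights depend on theta, they are recomputed inside each Newton-Raphson iteration; a precision-based variant (approach (2)) would instead fix the weights from the standard errors of the item parameter estimates.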
Year of publication: | 2004-01-01
Authors: | Skorupski, William P
Publisher: | UMass Amherst
Subject: | Educational evaluation | Psychological tests
Similar items by subject
- Yoo, Jin Eun (2006)
- The effect of test characteristics on aberrant response patterns in computer adaptive testing / Rizavi, Saba M. (2001)
- Accuracy of parameter estimation on polytomous IRT models / Park, Chung (1997)
Similar items by person