Using item features to calibrate educational test items: Comparing artificial intelligence and classical approaches
DOI: https://doi.org/10.55284/ajel.v10i2.1543

Keywords: Artificial intelligence, Educational testing, Item response theory, Large language models, Test development, Validity

Abstract
Educational test items are typically calibrated onto a score scale using item response theory (IRT). This approach requires administering the items to hundreds of test takers to characterize their difficulty. For educational tests designed for criterion-referenced purposes, characterizing item difficulty in this way presents two problems: one theoretical, the other practical. Theoretically, tests designed to provide criterion-referenced information should report test takers’ performance with respect to the knowledge and skills they have mastered, rather than how well they performed relative to others. The traditional IRT calibration approach expresses item difficulty on a scale determined solely by test takers’ performance on the items. Practically, the traditional IRT approach requires large numbers of test takers, who are not always available and not always motivated to do well. In this study, we use the construct-relevant features of test items to characterize their difficulty. In one approach, we code the item features; the other two approaches are based on artificial intelligence (chain-of-thought prompting and large language model (LLM) finetuning). The results indicate that the coding and LLM-finetuning approaches reflect the difficulty parameters calibrated using IRT, accounting for approximately 60% of the variation. These results suggest that educational test items can be calibrated using construct-relevant features of the items, rather than only by administering them to samples of test takers. Implications for future research and practice in this area are discussed.
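
As a concrete illustration of the feature-based calibration idea summarized above, the minimal sketch below regresses coded item features onto previously calibrated IRT difficulty (b) parameters and reports the proportion of variance explained. The feature names, data values, and use of scikit-learn’s LinearRegression are illustrative assumptions only, not the authors’ method, features, or data.

```python
# Illustrative sketch only (not the authors' code): predicting IRT difficulty
# (b) parameters from coded, construct-relevant item features with a simple
# linear model. All feature names and values below are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical coded features for 8 items:
# [reading_load, solution_steps, abstractness]
X = np.array([
    [1, 1, 0],
    [2, 1, 1],
    [1, 2, 1],
    [3, 2, 2],
    [2, 3, 1],
    [3, 3, 2],
    [1, 1, 1],
    [2, 2, 2],
])

# Hypothetical IRT b-parameters from a prior large-sample calibration
b = np.array([-1.2, -0.4, -0.1, 0.9, 0.6, 1.4, -0.8, 0.5])

# Fit the feature-based model and report variance in difficulty explained
model = LinearRegression().fit(X, b)
r_squared = model.score(X, b)
print(f"Proportion of difficulty variance explained by coded features: {r_squared:.2f}")
```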