Educational Sciences: Theory & Practice

ISSN: 2630-5984

Examining Content Control in Adaptive Tests: Computerized Adaptive Testing vs. Computerized Adaptive Multistage Testing

Halil İbrahim Sari
Muallim Rifat Faculty of Education, Kilis 7 Aralik University, Kilis 79100, Turkey.
Anne Corinne Huggins-Manley
College of Education, University of Florida, 119A Norman Hall, PO Box 117049, Gainesville, FL, 32611, USA.


We conducted a simulation study to explore the precision of test outcomes in computerized adaptive testing (CAT) and computerized adaptive multistage testing (ca-MST) when the number of distinct content areas was varied across a variety of test lengths. We compared one CAT design and two ca-MST designs (1-3 and 1-3-3 panel designs) across several manipulated conditions, including total test length (24 and 48 items) and number of controlled content areas. The five levels of the content area condition were zero (no content control), two, four, six, and eight content areas. We fully crossed all manipulated conditions within CAT and ca-MST, and generated 4,000 examinees from N(0, 1). We fixed all other conditions, such as the IRT model and the item exposure rate, across the CAT and ca-MST designs. Results indicated that test length and the type of test administration model affected the outcomes more than the number of content areas. The main finding was that, regardless of study condition, CAT outperformed the two ca-MSTs, and the two ca-MSTs were comparable to each other. We discuss the results in connection with control over test design, test content, cost effectiveness, and item pool usage; we also provide recommendations for practitioners and list limitations for further research.
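The simulation setup described above can be sketched in a few lines: the fully crossed design yields 3 (administration models) × 2 (test lengths) × 5 (content area levels) = 30 cells, and examinee abilities are drawn from the standard normal. The abstract fixes "the IRT model" without naming it, so the 2PL response model, the pool size, and all item parameters below are illustrative assumptions, not the authors' actual generating conditions.

```python
import itertools
import numpy as np

rng = np.random.default_rng(42)

# Fully crossed manipulated conditions from the study design.
designs = ["CAT", "ca-MST 1-3", "ca-MST 1-3-3"]
test_lengths = [24, 48]
content_areas = [0, 2, 4, 6, 8]  # 0 = no content control
conditions = list(itertools.product(designs, test_lengths, content_areas))
print(len(conditions))  # 30 crossed cells

# 4,000 examinee abilities drawn from N(0, 1), as in the abstract.
theta = rng.normal(loc=0.0, scale=1.0, size=4000)

def simulate_responses(theta, a, b):
    """Simulate dichotomous responses under a 2PL IRT model (assumed here)."""
    # P(correct) for every examinee-item pair, shape (examinees, items).
    p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))
    return (rng.uniform(size=p.shape) < p).astype(int)

# Hypothetical 24-item pool: lognormal discriminations, normal difficulties.
a = rng.lognormal(mean=0.0, sigma=0.2, size=24)
b = rng.normal(size=24)
responses = simulate_responses(theta, a, b)
print(responses.shape)  # (4000, 24)
```

A full replication would additionally implement the adaptive item/module selection for each design and a content-balancing constraint per content area level; the sketch covers only the shared data-generating step.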

Keywords: Computerized adaptive testing, computerized adaptive multistage testing, content balancing.