Examining Content Control in Adaptive Tests: Computerized Adaptive Testing vs. Computerized Adaptive Multistage Testing


Halil İbrahim Sari
Anne Corinne Huggins-Manley


Keywords: Computerized adaptive testing, Computerized adaptive multistage testing, Content balancing


We conducted a simulation study to explore the precision of test outcomes across computerized adaptive testing (CAT) and computerized adaptive multistage testing (ca-MST) when the number of distinct content areas was varied across a variety of test lengths. We compared one CAT design and two ca-MST designs (1-3 and 1-3-3 panel designs) across several manipulated conditions, including total test length (24-item and 48-item tests) and number of controlled content areas. The five levels of the content area condition were zero (no content control), two, four, six, and eight content areas. We fully crossed all manipulated conditions within CAT and ca-MST with one another and generated 4,000 examinees from N(0,1). We fixed all other conditions, such as the IRT model and exposure rate, across the CAT and ca-MST designs. Results indicated that test length and the type of test administration model impacted the outcomes more than the number of content areas. The main finding was that, regardless of study condition, CAT outperformed the two ca-MSTs, and the two ca-MSTs were comparable to each other. We discussed the results in connection to control over test design, test content, cost effectiveness, and item pool usage, provided recommendations for practitioners, and listed limitations for further research.
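The simulation design described above can be sketched as follows. This is a minimal illustrative example, not the authors' actual code: it assumes a Rasch (1PL) IRT model, a hypothetical pool of 480 items evenly tagged across four content areas, a 24-item CAT with maximum-information item selection, content balancing via simple spiraling over areas, and EAP ability estimation on a grid. All pool sizes, counts, and helper names are placeholders chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical item pool: Rasch difficulties and content-area tags (assumed values)
N_ITEMS, N_AREAS, TEST_LEN = 480, 4, 24
b = rng.normal(0.0, 1.0, N_ITEMS)        # item difficulty parameters
area = np.arange(N_ITEMS) % N_AREAS      # content area tag for each item

GRID = np.linspace(-4.0, 4.0, 81)        # theta grid for EAP estimation
PRIOR = np.exp(-0.5 * GRID ** 2)         # N(0,1) prior (unnormalized)

def p_correct(theta, bj):
    """Rasch probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-(theta - bj)))

def fisher_info(theta, bj):
    """Rasch item information: p * (1 - p)."""
    p = p_correct(theta, bj)
    return p * (1.0 - p)

def simulate_cat(theta_true):
    """Run one content-balanced CAT and return the final EAP ability estimate."""
    post = PRIOR.copy()
    used = set()
    theta_hat = 0.0
    for step in range(TEST_LEN):
        target = step % N_AREAS          # spiral through content areas for balance
        cands = [j for j in range(N_ITEMS) if area[j] == target and j not in used]
        j = max(cands, key=lambda k: fisher_info(theta_hat, b[k]))
        used.add(j)
        x = rng.random() < p_correct(theta_true, b[j])   # simulated response
        like = p_correct(GRID, b[j])
        post = post * (like if x else 1.0 - like)        # Bayesian update
        post = post / post.sum()
        theta_hat = float((GRID * post).sum())           # EAP estimate
    return theta_hat

# Small examinee sample drawn from N(0,1), as in the study design
thetas = rng.normal(0.0, 1.0, 200)
est = np.array([simulate_cat(t) for t in thetas])
rmse = float(np.sqrt(np.mean((est - thetas) ** 2)))
print(f"RMSE of ability recovery: {rmse:.2f}")
```

A full replication of the study would additionally implement the 1-3 and 1-3-3 ca-MST panel designs (preassembled modules routed by provisional ability), vary the content-area condition from zero to eight, and apply exposure control, which this sketch omits.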

