
Feasibility of computerized adaptive testing evaluated by Monte-Carlo and post-hoc simulations

Publication at First Faculty of Medicine, Faculty of Education | 2020

Abstract

Computerized adaptive testing (CAT) is a modern alternative to classical paper-and-pencil testing. CAT is based on the automated selection of the optimal item corresponding to the current estimate of the test-taker's ability, in contrast to the fixed, predefined items administered in a linear test.
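To make the item-selection idea concrete, the following minimal Python sketch (not taken from the study) assumes a hypothetical two-parameter logistic (2PL) item bank and picks the unadministered item with maximum Fisher information at the current ability estimate; the parameters `a`, `b` and the bank size are illustrative only.

```python
import numpy as np

def p_correct(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def fisher_information(theta, a, b):
    """Item information I(theta) = a^2 * p * (1 - p) under the 2PL model."""
    p = p_correct(theta, a, b)
    return a ** 2 * p * (1.0 - p)

def select_next_item(theta_hat, a, b, administered):
    """Index of the unadministered item most informative at theta_hat."""
    info = fisher_information(theta_hat, a, b)
    info[list(administered)] = -np.inf   # exclude items already given
    return int(np.argmax(info))

# Illustrative item bank of 100 items (parameters are made up)
rng = np.random.default_rng(0)
a = rng.uniform(0.8, 2.0, size=100)   # discriminations
b = rng.normal(0.0, 1.0, size=100)    # difficulties
print(select_next_item(0.0, a, b, administered={3, 17}))
```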

Advantages of CAT include lowered test anxiety, shortened test length, increased precision of the estimates of test-takers' abilities, and a lower level of item exposure, and thus better test security. The challenges are the high technical demands on the whole testing workflow and the need for large item banks.

In this study, we analyze the feasibility and advantages of computerized adaptive testing using a Monte-Carlo simulation and a post-hoc analysis based on a real linear admission test administered at a medical college. We compare various settings of the adaptive test in terms of the precision of the ability estimates and the test length.
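A post-hoc analysis of this kind can be sketched as follows; this is a hedged illustration under assumed 2PL item parameters and an EAP ability update, not the exact procedure used in the study. Recorded 0/1 responses from the full linear test are replayed through an adaptive selection loop that stops once a standard-error target or a maximum test length is reached (the values `se_target=0.30` and `max_len=40` are placeholders).

```python
import numpy as np

def eap_update(items, responses, a, b, grid=np.linspace(-4, 4, 81)):
    """EAP ability estimate and posterior SD under a standard-normal prior."""
    posterior = np.exp(-0.5 * grid ** 2)          # prior weights on the grid
    for item, u in zip(items, responses):
        p = 1.0 / (1.0 + np.exp(-a[item] * (grid - b[item])))
        posterior *= p if u == 1 else (1.0 - p)   # multiply in the item likelihood
    posterior /= posterior.sum()
    theta = float((grid * posterior).sum())
    se = float(np.sqrt(((grid - theta) ** 2 * posterior).sum()))
    return theta, se

def posthoc_cat(recorded, a, b, se_target=0.30, max_len=40):
    """Replay one examinee's recorded linear-test responses as an adaptive test."""
    administered, responses = [], []
    theta, se = 0.0, np.inf
    while len(administered) < max_len and se > se_target:
        # maximum-information selection at the current theta (2PL)
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
        info = a ** 2 * p * (1.0 - p)
        info[administered] = -np.inf              # skip items already replayed
        item = int(np.argmax(info))
        administered.append(item)
        responses.append(recorded[item])          # answer the examinee actually gave
        theta, se = eap_update(administered, responses, a, b)
    return theta, se, administered
```

The post-hoc ability estimates produced this way could then be compared with the estimates from the complete linear test, e.g. via `np.corrcoef`, which is the kind of comparison behind the correlation reported below.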

We find that with adaptive item selection, the test length can be reduced to 40 out of 100 items while keeping the precision of the ability estimates within the prescribed range and obtaining ability estimates highly correlated with those based on the complete linear test (Pearson's ρ = 0.96). We also demonstrate the positive effect of content balancing and item exposure rate control on item composition.
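The content balancing and exposure control mentioned above can be illustrated with a simple scheme (again an assumption-laden sketch, not the study's specific method): the next item is drawn from the content area lagging furthest behind its target proportion, and a randomesque rule picks one of the `k` most informative eligible items rather than always the single best one, which spreads exposure across the bank.

```python
import numpy as np

def select_balanced(theta, a, b, area, targets, administered, k=5, rng=None):
    """Content-balanced, randomesque item selection (illustrative only)."""
    rng = rng or np.random.default_rng()
    given = list(administered)
    total = max(len(given), 1)
    # share of the test so far coming from each content area
    share = {c: sum(area[i] == c for i in given) / total for c in targets}
    # area with the largest shortfall relative to its target proportion
    next_area = max(targets, key=lambda c: targets[c] - share[c])
    eligible = np.array([i for i in range(len(a))
                         if area[i] == next_area and i not in administered])
    p = 1.0 / (1.0 + np.exp(-a[eligible] * (theta - b[eligible])))
    info = a[eligible] ** 2 * p * (1.0 - p)
    top_k = np.argsort(info)[-k:]                 # k most informative eligible items
    return int(eligible[rng.choice(top_k)])

# Illustrative use: 100 items tagged with three hypothetical content areas
rng = np.random.default_rng(1)
a = rng.uniform(0.8, 2.0, size=100)
b = rng.normal(0.0, 1.0, size=100)
area = rng.choice(["biology", "chemistry", "physics"], size=100)
targets = {"biology": 0.4, "chemistry": 0.3, "physics": 0.3}
print(select_balanced(0.0, a, b, area, targets, administered={5, 12}))
```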