Educational Sciences: Theory & Practice

ISSN: 2630-5984

Can Computerized Adaptive Testing Work in Students’ Admission to Higher Education Programs in Turkey?

Ilker Kalender
Graduate School of Education, Ihsan Dogramaci Bilkent University, Ankara, Turkey
Giray Berberoglu
Department of Education Sciences, Baskent University, Ankara, Turkey

Abstract

Admission into university in Turkey is highly competitive and involves a number of practical problems concerning not only the test administration process itself but also the psychometric properties of test scores. Computerized adaptive testing (CAT) is seen as a possible alternative approach for solving these problems. In the first phase of the study, a series of CAT simulations based on real students' responses to science items was conducted to determine which test termination rule produced results most comparable with scores obtained on the paper-and-pencil version of the test. An average of 17 items was sufficient to terminate the CAT administration at a reasonable reliability level, as opposed to the 45 items of the full test. Moreover, CAT-based science scores not only produced similar correlations with mathematics subtest scores used as an external criterion but also ranked the students similarly to the paper-and-pencil version. In the second phase, a live CAT administration was implemented using an item bank of 242 items with a group of students who had previously taken the paper-and-pencil version of the test. A correlation of .76 was found between the CAT and paper-and-pencil scores for this group. The results seem to support the CAT versions of the subtests as a feasible alternative for Turkey's university admission system.

Keywords
Computerized adaptive testing, Item response theory, University admission examinations, Validity, Classification.
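
To make the procedure summarized in the abstract concrete, the sketch below shows a minimal CAT loop of the kind used in such simulation studies: items are selected by maximum Fisher information under a three-parameter logistic (3PL) item response model, ability is re-estimated after each response, and the test terminates once the standard error of the ability estimate falls below a threshold or a maximum test length is reached. The item parameters, the stopping threshold of 0.30, and all function names are illustrative assumptions; the study's actual item bank, calibration, and termination rules are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 3PL item bank of 242 items (the study's real bank is not available here).
N_ITEMS = 242
a = rng.uniform(0.8, 2.0, N_ITEMS)    # discrimination
b = rng.normal(0.0, 1.0, N_ITEMS)     # difficulty
c = rng.uniform(0.10, 0.25, N_ITEMS)  # pseudo-guessing

def p_correct(theta, i):
    """3PL probability of a correct response to item i at ability theta."""
    return c[i] + (1.0 - c[i]) / (1.0 + np.exp(-a[i] * (theta - b[i])))

def item_information(theta, i):
    """Fisher information of item i at theta (standard 3PL form)."""
    p = p_correct(theta, i)
    return a[i] ** 2 * ((1.0 - p) / p) * ((p - c[i]) / (1.0 - c[i])) ** 2

def eap_estimate(items, responses, grid=np.linspace(-4, 4, 121)):
    """EAP ability estimate and posterior SD under a standard-normal prior."""
    posterior = np.exp(-0.5 * grid ** 2)  # prior, up to a constant
    for i, u in zip(items, responses):
        p = p_correct(grid, i)
        posterior *= p if u else 1.0 - p
    posterior /= posterior.sum()
    theta_hat = np.sum(grid * posterior)
    se = np.sqrt(np.sum((grid - theta_hat) ** 2 * posterior))
    return theta_hat, se

def run_cat(true_theta, se_stop=0.30, max_items=45):
    """Administer items adaptively until SE(theta) <= se_stop or max_items."""
    items, responses = [], []
    theta_hat, se = 0.0, float("inf")
    while len(items) < max_items and se > se_stop:
        remaining = (i for i in range(N_ITEMS) if i not in items)
        # Maximum-information selection at the current ability estimate.
        nxt = max(remaining, key=lambda i: item_information(theta_hat, i))
        u = int(rng.random() < p_correct(true_theta, nxt))  # simulated response
        items.append(nxt)
        responses.append(u)
        theta_hat, se = eap_estimate(items, responses)
    return theta_hat, se, len(items)

theta_hat, se, n_used = run_cat(true_theta=0.5)
print(f"theta_hat = {theta_hat:.2f}, SE = {se:.2f}, items used = {n_used}")
```

Under assumptions like these, a variable-length termination rule typically stops well short of the 45-item fixed form, which is consistent with the average of 17 items reported in the abstract; the exact savings depend on the bank's information and the chosen threshold.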