A Comparison of Anchor-Item Designs for the Concurrent Calibration of Large Banks of Likert-Type Items




García Pérez, Miguel Ángel and Alcalá Quintana, Rocío and García Cueto, Eduardo (2010) A Comparison of Anchor-Item Designs for the Concurrent Calibration of Large Banks of Likert-Type Items. Applied Psychological Measurement, 34 (8). pp. 580-599. ISSN 0146-6216

PDF (Restricted to Repository staff only)


Official URL: http://dx.doi.org/10.1177/0146621609351259


Current interest in measuring quality of life is spurring the construction of computerized adaptive tests (CATs) with Likert-type items. Calibration of an item bank for use in CAT requires collecting responses to a large number of candidate items. However, that number is usually too large to administer to each subject in the calibration sample. The concurrent anchor-item design solves this problem by splitting the items into separate subtests, with some common items across subtests; administering each subtest to a different sample; and finally running estimation algorithms once on the aggregated data array, from which a substantial number of responses are then missing by design. Although the use of anchor-item designs is widespread, the consequences of several configuration decisions on the accuracy of parameter estimates have never been studied in the polytomous case. The present study addresses this question by simulation, comparing the outcomes of several alternative configurations of the anchor-item design. The factors defining variants of the anchor-item design are (a) subtest size, (b) balance of common and unique items per subtest, (c) characteristics of the common items, and (d) criteria for the distribution of unique items across subtests. The results indicate that maximizing accuracy in item parameter recovery requires subtests with the largest possible number of items and the smallest possible number of common items; the characteristics of the common items and the criterion for distribution of unique items do not affect accuracy.
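The aggregated data array described in the abstract can be sketched in Python. All design numbers below (3 subtests, 6 anchor items, 8 unique items per subtest, 100 respondents per sample) are illustrative assumptions, not the paper's actual simulation conditions, and the responses are random placeholders rather than draws from a graded response model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical anchor-item design (illustrative numbers, not the paper's):
n_common = 6                     # anchor items shared by every subtest
n_unique = 8                     # items administered in only one subtest
n_subtests = 3
n_bank = n_common + n_unique * n_subtests   # 30 items in the full bank
n_per_sample = 100               # respondents per calibration sample

# Aggregated array: rows = all respondents, columns = all bank items.
# Cells for items a respondent never saw are structurally missing (NaN).
data = np.full((n_per_sample * n_subtests, n_bank), np.nan)

for s in range(n_subtests):
    rows = slice(s * n_per_sample, (s + 1) * n_per_sample)
    # Columns 0..n_common-1 hold the anchors; each subtest's unique
    # items occupy a disjoint block of columns after them.
    uniq = slice(n_common + s * n_unique, n_common + (s + 1) * n_unique)
    # Placeholder Likert responses on a 1-5 scale.
    data[rows, :n_common] = rng.integers(1, 6, (n_per_sample, n_common))
    data[rows, uniq] = rng.integers(1, 6, (n_per_sample, n_unique))

# Each respondent has observed responses only to the 14 items of their
# own subtest; the other 16 cells in their row are missing by design.
observed_per_row = np.sum(~np.isnan(data), axis=1)
```

Concurrent calibration then runs the estimation algorithm once on `data`; the shared anchor columns are what place all subtests' item parameters on a common scale despite the missing blocks.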

Item Type: Article
Uncontrolled Keywords: Computerized adaptive testing, Item calibration, Graded response model, Linking, Questionnaires, Simulation, Anchor-item designs, Health status, Attitudes
Subjects: Medical sciences > Psychology > Experimental psychology
ID Code: 35707
Deposited On: 23 Sep 2016 09:39
Last Modified: 23 Sep 2016 09:39

