Examining item compromise: an introduction to the deterministic gated item response theory model (DGIRT)

Authors

  • Yusuf Olayinka Shogbesan

DOI:

https://doi.org/10.71291/jocatia.v2i.25

Keywords:

test compromise, score inflation, cheating, deterministic gated item response theory model (DGIRT)

Abstract

The calibration of item parameters (difficulty, discrimination and guessing) may reflect only the true abilities of examinees and not their cheating abilities. To detect cheating that arises from the compromise of multiple-choice items, the Deterministic Gated Item Response Theory Model (DGIRT) was developed to provide information about the cheating effectiveness of examinees, measure the extent of item fit for compromised items, and assess both the sensitivity and the specificity of detecting cheating due to item compromise. The model was thus meant to indicate the extent to which item response theory psychometric estimates are sensitive to item compromise when cheating occurs in large-scale examinations. This paper examines the concepts of cheating and item compromise and provides a brief overview of the DGIRT. It was recommended, among others, that psychometricians consider validating the Deterministic Gated IRT model and other new IRT models that account for the cheating ability of examinees, unlike the “normal” IRT model, which produces the probability of an item response only for varying values of θ (ability).
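The gating idea described above can be sketched in a few lines: the response is governed by the cheating ability only when an item is compromised and the examinee is flagged as a cheater, and by the true ability otherwise. The sketch below is illustrative, assuming a Rasch-form response function; the parameter names (`theta_true`, `theta_cheat`, `b`) and the helper functions are hypothetical, not the author's implementation.

```python
import math

def rasch_prob(theta, b):
    """Rasch (1PL) probability of a correct response for ability theta and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def dgirt_prob(theta_true, theta_cheat, b, item_compromised, examinee_cheats):
    """Deterministic gated response probability (illustrative sketch).

    The gate routes the response through the cheating ability only when the
    item is compromised AND the examinee is flagged as a cheater; otherwise
    the true ability governs the response.
    """
    gate = 1 if (item_compromised and examinee_cheats) else 0
    theta = gate * theta_cheat + (1 - gate) * theta_true
    return rasch_prob(theta, b)

# A low-ability examinee flagged as a cheater, facing an average-difficulty item:
p_compromised = dgirt_prob(theta_true=-1.0, theta_cheat=2.0, b=0.0,
                           item_compromised=True, examinee_cheats=True)
p_secure = dgirt_prob(theta_true=-1.0, theta_cheat=2.0, b=0.0,
                      item_compromised=False, examinee_cheats=True)
```

On a compromised item the gated probability reflects the (higher) cheating ability, while on a secure item the same examinee's probability reflects only the true ability; it is this contrast that makes the model sensitive to item compromise.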

Published

2023-06-18

Issue

Section

Computer Adaptive Testing Research