EXAMINING ITEM COMPROMISE
AN INTRODUCTION TO DETERMINISTIC GATED ITEM RESPONSE THEORY MODEL (DGIRT)
Keywords:
Test Compromise, Score Inflation, Cheating, Deterministic Gated Item Response Theory Model (DGIRT)
Abstract
The calibration of item parameters (difficulty, discrimination, and guessing) may reflect only the true abilities of examinees and not their cheating abilities. To detect test cheating arising from compromised multiple-choice items, the Deterministic Gated Item Response Theory Model was developed to provide information about the cheating effectiveness of examinees, measure the extent of item fit for compromised items, and assess the sensitivity and specificity with which cheating due to item compromise can be detected. The model is therefore intended to show the extent to which item response theory psychometric estimates are sensitive to item compromise when cheating occurs in large-scale examinations. This paper examines the concepts of cheating and item compromise and provides a brief overview of the Deterministic Gated Item Response Theory Model (DGIRT). It was recommended, among other things, that psychometricians consider validating the Deterministic Gated IRT model and other new IRT models that account for the cheating ability of examinees, unlike the "normal" IRT model, which produces the probability of an item response only for varying values of θ (ability).
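To make the gating idea concrete, the sketch below contrasts a standard item response function, which depends on a single ability θ, with a gated form in which responses to compromised items are governed by a separate cheating ability. The three-parameter logistic (3PL) form and the symbols C_j, θ_t, and θ_c are illustrative assumptions introduced here for exposition; they are not a restatement of the exact parameterization published for the DGIRT model.

Under a 3PL model, the probability of a correct response to item j by examinee i with ability \theta_i is

P(X_{ij}=1 \mid \theta_i) = c_j + \frac{1 - c_j}{1 + \exp\!\bigl[-a_j(\theta_i - b_j)\bigr]},

where a_j, b_j, and c_j are the discrimination, difficulty, and guessing parameters. A gated version introduces an item compromise indicator C_j \in \{0,1\} together with a true ability \theta_{ti} and a cheating ability \theta_{ci} for each examinee:

P(X_{ij}=1 \mid \theta_{ti}, \theta_{ci}) = \bigl[P(X_{ij}=1 \mid \theta_{ti})\bigr]^{\,1 - C_j}\,\bigl[P(X_{ij}=1 \mid \theta_{ci})\bigr]^{\,C_j}.

Secure items (C_j = 0) are thus answered with the true ability and compromised items (C_j = 1) with the cheating ability, and the difference \theta_{ci} - \theta_{ti} offers one way to summarize an examinee's cheating effectiveness.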
License
Copyright (c) 2024 Yusuf Shogbesan
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.