Comparative Psychometric Properties of Electronically-Tested and Conventionally-Tested Items of a Selected University Course in Nigeria

Authors

  • Olumide Dorcas Olagundoye Obafemi Awolowo University
  • Eyitayo Rufus Ifedayo Afolabi Obafemi Awolowo University

DOI:

https://doi.org/10.71291/jocatia.v2i.35

Keywords:

psychometric properties, electronically-tested, conventionally-tested

Abstract

The study assessed the comparative psychometric properties of electronically and conventionally tested items of a selected university course in Nigeria using classical test theory (CTT) and item response theory (IRT) models. This was with a view to providing information on the overall quality of the electronic mode of testing (E-testing). The results showed that the electronic mode of testing was moderately reliable, with a KR-20 value of 0.73 and an acceptable level of content validity. The conventional mode of testing was similarly reliable, with a KR-20 value of 0.70, indicating that the two modes of testing were moderately reliable, with a slight difference of 0.03. Item analysis of the testing modes showed that both theories produced similar results on the difficulty and discrimination strengths of the items. Comparison of the assessment modes revealed that they were significantly equivalent and homogeneous at Z² = 0.75. The standard deviations of 4.72 and 4.44 for the testing modes indicated that, for C-testing, the majority of examinees scored between 15 and 25 marks, while for E-testing some examinees scored between 14 and 23 marks. Item analysis showed that the models consistently supported the acceptance or rejection of 17 and 18 items for the two testing modes respectively, in terms of the difficulty and discrimination strengths of the items. In conclusion, the psychometric properties of the assessment modes were comparable in terms of relationship, reliability, difficulty, and discrimination strengths of the items, with E-testing being more effective in terms of the reliability of test-mode effects.
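The KR-20 reliability coefficient reported in the abstract is computed from dichotomous (right/wrong) item scores as KR-20 = (k/(k-1))·(1 - Σpᵢqᵢ/σ²), where k is the number of items, pᵢ the proportion of examinees answering item i correctly, qᵢ = 1 - pᵢ, and σ² the variance of total scores. A minimal sketch of this computation follows; the response matrix is made up for illustration and is not the study's data.

```python
# KR-20 reliability for dichotomous (0/1) item scores.
# Illustrative sketch only; the response matrix below is hypothetical.

def kr20(responses):
    """responses: list of lists of 0/1 item scores, one row per examinee."""
    n_examinees = len(responses)
    n_items = len(responses[0])
    # Total score for each examinee and the (population) variance of totals
    totals = [sum(row) for row in responses]
    mean_total = sum(totals) / n_examinees
    var_total = sum((t - mean_total) ** 2 for t in totals) / n_examinees
    # Sum of p*q across items (p = proportion correct, q = 1 - p)
    sum_pq = 0.0
    for j in range(n_items):
        p = sum(row[j] for row in responses) / n_examinees
        sum_pq += p * (1 - p)
    return (n_items / (n_items - 1)) * (1 - sum_pq / var_total)

# Hypothetical 5-examinee, 4-item response matrix
data = [
    [1, 1, 1, 0],
    [1, 0, 1, 1],
    [0, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]
print(round(kr20(data), 3))
```

Values around 0.70-0.73, as reported for both testing modes in the study, are conventionally read as moderate reliability for classroom tests.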

Downloads

Download data is not yet available.

References

Magno, C. (2009). Demonstrating the Difference Between Classical Test Theory and Item Response Theory Using Derived Test Data. The International Journal of Educational and Psychological Assessment, 1(1). De La Salle University, Manila.

Eleje, I., Chidiebere, C. A., & Fredrick, E. O. (2018). Comparative Study of Classical Test Theory and Item Response Theory Using Diagnostic Quantitative Economics Skill Test Item Analysis Results. European Journal of Educational and Social Sciences, 1(3), 27-45.

Folk, V. G., & Smith, R. L. (2002). Models for delivery of CBTs. In C. N. Mills, M. T. Potenza, J. J. Fremer, & W. C. Ward (Eds.), Computer-Based Testing: Building the Foundation for Future Assessments. Mahwah, NJ: Lawrence Erlbaum Associates.

Furr, R. M., & Bacharach, V. R. (2008). Psychometrics: An Introduction. Thousand Oaks, CA: Sage Publications, Inc.

Leighton, J. P., Gierl, M. J., & Hunka, S. M. (2004). The Attribute Hierarchy Method for Cognitive Assessment: A Variation on Tatsuoka's Rule-Space Approach. Journal of Educational Measurement, 41(3).

Jennifer, P., & Chery, D. C. (2015). Using CTT, IRT and Rasch Measurement Theory to Evaluate Patient-Reported Outcome Measures: A Comparison of Worked Examples. Value in Health Bulletin, 8(1), 11-45.

Kilber, M. E. (1999). Classroom Assessment and Learning. New York: Longman, an imprint of Addison Wesley Longman, Inc.

Ojerinde, D., & Wiegers, J. (2015). Strategic Planning and Policy Implementation in the Introduction of Large-Scale Computer-Based Tests: Perspectives on Cito and JAMB Experiences. 33rd AEAA Conference, Accra, Ghana. JAMB 2016 Publication.

Yu, W., & Iwashita, N. (2021). Comparison of Test Performance on Paper-Based Testing (PBT) and Computer-Based Testing (CBT) by English-Majored Undergraduate Students in China.


Published

2023-12-23

How to Cite

Olagundoye, O. D., & Afolabi, E. R. I. (2023). Comparative Psychometric Properties of Electronically-Tested and Conventionally-Tested Items of a Selected University Course in Nigeria. Journal of Computer Adaptive Testing in Africa, 2, 38–48. https://doi.org/10.71291/jocatia.v2i.35

Issue

Section

2023 ACATA CONFERENCE PAPERS