Automated Essay Grading Software Sustainability in Assessment: A Critical Review for Quality Feedback and Stakeholders' Involvement

Authors

  • Damilola D. Olaoye

DOI:

https://doi.org/10.71291/jocatia.v2i.33

Keywords:

automated essay grading, assessment, software development, artificial intelligence, stakeholders' involvement

Abstract

This paper presents a critical review of the literature on automated essay grading (AES) software and its development procedures within the terminology of technology in assessment. Various techniques and methodologies used in essay grading software are identified, along with software that is valid and reliable for scoring both short and extended essay test items, which stakeholders can leverage for cost-effectiveness, scoring consistency, objectivity, timely result delivery, and quick feedback. The software development stages required to build an automated scoring system are discussed. The state of the art regarding AES software that requires training on manually marked essays, and software that does not, is covered in this review, together with the advantages automated essay scoring holds over human scoring and the criticisms levelled against it. The evaluation metrics for validating an automated essay grading system against human raters are also identified. The review concludes that, with developments in artificial intelligence, reliable and valid scoring of short-answer and extended essays is viable and realisable, offering prompt feedback, reduced cost and time, and a degree of objectivity and fairness to learners that human expert scoring may not achieve. Finally, it is recommended that more automated essay grading software that does not require training on manually marked essays and that can mark different subjects be developed and explored.
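The abstract refers to evaluation metrics for validating an automated grader against human raters. One widely used agreement metric in this area is quadratic weighted kappa (QWK). The sketch below is an illustrative implementation, not taken from the reviewed paper; the 0–4 score scale and the sample score lists are assumptions for demonstration.

```python
# Hedged sketch: quadratic weighted kappa (QWK), a common metric for
# agreement between automated and human essay scores. The score range
# and the example data below are illustrative assumptions.

def quadratic_weighted_kappa(rater_a, rater_b, min_score, max_score):
    """Return QWK between two equal-length lists of integer scores."""
    n = max_score - min_score + 1
    # Observed agreement matrix O[i][j]: counts of (a, b) score pairs.
    O = [[0] * n for _ in range(n)]
    for a, b in zip(rater_a, rater_b):
        O[a - min_score][b - min_score] += 1
    total = len(rater_a)
    # Marginal score histograms for each rater.
    hist_a = [sum(row) for row in O]
    hist_b = [sum(O[i][j] for i in range(n)) for j in range(n)]
    num = 0.0  # weighted observed disagreement
    den = 0.0  # weighted disagreement expected by chance
    for i in range(n):
        for j in range(n):
            w = ((i - j) ** 2) / ((n - 1) ** 2)  # quadratic weight
            expected = hist_a[i] * hist_b[j] / total
            num += w * O[i][j]
            den += w * expected
    return 1.0 - num / den

# Illustrative example: human vs. automated scores on a 0-4 scale.
human = [0, 1, 2, 3, 4, 2, 3, 1]
auto  = [0, 1, 2, 3, 4, 2, 3, 1]
print(quadratic_weighted_kappa(human, auto, 0, 4))  # perfect agreement -> 1.0
```

QWK ranges from 1.0 (perfect agreement) down through 0 (chance-level agreement) to negative values for systematic disagreement, and it penalises large score discrepancies more heavily than small ones, which is why it is favoured over raw accuracy for ordinal essay scores.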

Published

2023-06-18

Issue

Section

Computer Adaptive Testing Research