Abstract
This paper introduces a new area of study in educational Natural Language Processing (NLP): Automated Long Answer Grading (ALAG). Distinguishing itself from traditional Automated Short Answer Grading (ASAG) and open-ended Automated Essay Grading (AEG), ALAG presents unique challenges due to the complexity and multifaceted nature of fact-based long answers. To facilitate the study of ALAG, we introduce RiceChem, a specialized dataset derived from a college-level chemistry course, featuring real student responses to long-answer questions with an average word count notably higher than that of typical ASAG datasets. We propose a novel approach to ALAG by formulating it as a rubric entailment problem, employing natural language inference (NLI) models to verify whether each criterion, represented by a rubric item, is addressed in the student's response. This formulation enables the effective use of large-scale datasets such as MNLI for transfer learning, significantly improving model performance on RiceChem. We demonstrate the importance of the rubric-based formulation in ALAG, showcasing its superiority over traditional score-based approaches in capturing the nuances and multiple facets of student responses. Furthermore, we investigate model performance in cold-start scenarios, providing valuable insights into data efficiency and practical deployment considerations in educational settings. Lastly, we benchmark state-of-the-art open-source Large Language Models (LLMs) on RiceChem and compare their results to those of GPT models, highlighting the increased complexity of ALAG relative to ASAG. Even with the benefits of a rubric-based approach and transfer learning from MNLI, the comparatively low performance of LLMs on RiceChem underscores the significant difficulty posed by the ALAG task. With this work, we offer a fresh perspective on grading long, fact-based answers and introduce a new dataset to stimulate further research in this important area. The code and dataset are available at https://github.com/luffycodes/Automated-Long-Answer-Grading.
S. Sonkar and K. Ni contributed equally.
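To make the rubric-entailment formulation concrete, the sketch below shows how an MNLI-fine-tuned NLI model can check each rubric item against a student response. This is a minimal illustration and not the paper's implementation: the off-the-shelf roberta-large-mnli checkpoint, the example answer and rubric items, and the 0.5 decision threshold are all assumptions chosen for demonstration.

# Minimal sketch of the rubric-entailment formulation: each rubric item is
# verified independently against the student's answer by an NLI model.
# Assumptions (not from the paper): the roberta-large-mnli checkpoint stands
# in for the paper's MNLI-transfer models; the example text, rubric items,
# and 0.5 threshold are illustrative.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def rubric_item_addressed(student_answer: str, rubric_item: str) -> float:
    """Return the model's probability that the answer entails the rubric item."""
    # NLI convention: premise = student answer, hypothesis = rubric item.
    inputs = tokenizer(student_answer, rubric_item,
                       return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1).squeeze(0)
    # roberta-large-mnli label order: 0 = contradiction, 1 = neutral, 2 = entailment.
    return probs[2].item()

# A response's grade then follows from which rubric items are judged entailed,
# e.g. the count (or a weighted sum) of satisfied items.
answer = "The boiling point rises because hydrogen bonding strengthens the intermolecular forces ..."
rubric = [
    "The response states that stronger intermolecular forces raise the boiling point.",
    "The response identifies hydrogen bonding as the dominant intermolecular force.",
]
satisfied = [rubric_item_addressed(answer, item) > 0.5 for item in rubric]
print(satisfied)

Because each rubric item is scored separately, this formulation yields facet-by-facet feedback rather than a single opaque score, which is what the abstract contrasts with traditional score-based approaches.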
Acknowledgements
This work was supported by NSF grant 1842378, ONR grant N00014-20-1-2534, AFOSR grant FA9550-22-1-0060, a Vannevar Bush Faculty Fellowship, and ONR grant N00014-18-1-2047.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Sonkar, S., Ni, K., Tran Lu, L., Kincaid, K., Hutchinson, J.S., Baraniuk, R.G. (2024). Automated Long Answer Grading with RiceChem Dataset. In: Olney, A.M., Chounta, I.A., Liu, Z., Santos, O.C., Bittencourt, I.I. (eds.) Artificial Intelligence in Education. AIED 2024. Lecture Notes in Computer Science, vol. 14829. Springer, Cham. https://doi.org/10.1007/978-3-031-64302-6_12
DOI: https://doi.org/10.1007/978-3-031-64302-6_12
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-64301-9
Online ISBN: 978-3-031-64302-6
eBook Packages: Computer Science (R0)