
Automated Long Answer Grading with RiceChem Dataset

Conference paper
Artificial Intelligence in Education (AIED 2024)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14829)


Abstract

This research paper introduces a new area of study in the field of educational Natural Language Processing (NLP): Automated Long Answer Grading (ALAG). Distinguishing itself from traditional Automated Short Answer Grading (ASAG) and open-ended Automated Essay Grading (AEG), ALAG presents unique challenges due to the complexity and multifaceted nature of fact-based long answers. To facilitate the study of ALAG, we introduce RiceChem, a specialized dataset derived from a college-level chemistry course, featuring real student responses to long-answer questions with an average word count notably higher than typical ASAG datasets. We propose a novel approach to ALAG by formulating it as a rubric entailment problem, employing natural language inference models to verify whether each criterion, represented by a rubric item, is addressed in the student's response. This formulation enables the effective use of large-scale datasets like MNLI for transfer learning, significantly improving the performance of models on the RiceChem dataset. We demonstrate the importance of rubric-based formulation in ALAG, showcasing its superiority over traditional score-based approaches in capturing the nuances and multiple facets of student responses. Furthermore, we investigate the performance of models in cold start scenarios, providing valuable insights into data efficiency and practical deployment considerations in educational settings. Lastly, we benchmark state-of-the-art open-source Large Language Models (LLMs) on RiceChem and compare their results to GPT models, highlighting the increased complexity of ALAG compared to ASAG. Despite leveraging the benefits of a rubric-based approach and transfer learning from MNLI, the lower performance of LLMs on RiceChem underscores the significant difficulty posed by the ALAG task. With this work, we offer a fresh perspective on grading long, fact-based answers and introduce a new dataset to stimulate further research in this important area.
The code and dataset can be found at https://github.com/luffycodes/Automated-Long-Answer-Grading.
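The rubric-entailment formulation described in the abstract can be sketched in code: each rubric item serves as an NLI hypothesis, the student response as the premise, and credit is awarded for items the response entails. The sketch below stubs out the NLI model with a trivial word-overlap score so it is self-contained; in practice one would substitute an MNLI-finetuned checkpoint (e.g. a RoBERTa model fine-tuned on MNLI). The rubric items and threshold shown are illustrative assumptions, not taken from RiceChem.

```python
from dataclasses import dataclass


@dataclass
class RubricItem:
    text: str      # the criterion, used as the NLI hypothesis
    points: float  # credit awarded if the response entails it


def entailment_score(premise: str, hypothesis: str) -> float:
    """Stub for an MNLI-finetuned NLI model.

    Returns an entailment score in [0, 1] for (premise, hypothesis).
    Here a simple word-overlap ratio stands in as a placeholder so the
    sketch runs without a model download.
    """
    hyp_words = {w.strip(".,") for w in hypothesis.lower().split()}
    prem_words = {w.strip(".,") for w in premise.lower().split()}
    return len(hyp_words & prem_words) / max(len(hyp_words), 1)


def grade_response(response: str, rubric: list[RubricItem],
                   threshold: float = 0.5) -> float:
    """Sum the points of every rubric item the response entails."""
    return sum(item.points for item in rubric
               if entailment_score(response, item.text) >= threshold)


# Hypothetical rubric for a single long-answer question
rubric = [
    RubricItem("the reaction is exothermic", 2.0),
    RubricItem("bond formation releases energy", 2.0),
]
answer = "Energy is released because the reaction is exothermic."
print(grade_response(answer, rubric))
```

Because each rubric item is checked independently, partial credit falls out of the formulation naturally, which is the key advantage the paper claims over score-based grading of the full response.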

S. Sonkar and K. Ni contributed equally.



Acknowledgements

This work was supported by NSF grant 1842378, ONR grant N0014-20-1-2534, AFOSR grant FA9550-22-1-0060, a Vannevar Bush Faculty Fellowship, and ONR grant N00014-18-1-2047.

Author information

Corresponding author: Shashank Sonkar.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Sonkar, S., Ni, K., Tran Lu, L., Kincaid, K., Hutchinson, J.S., Baraniuk, R.G. (2024). Automated Long Answer Grading with RiceChem Dataset. In: Olney, A.M., Chounta, I.-A., Liu, Z., Santos, O.C., Bittencourt, I.I. (eds) Artificial Intelligence in Education. AIED 2024. Lecture Notes in Computer Science, vol. 14829. Springer, Cham. https://doi.org/10.1007/978-3-031-64302-6_12


  • DOI: https://doi.org/10.1007/978-3-031-64302-6_12

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-64301-9

  • Online ISBN: 978-3-031-64302-6

  • eBook Packages: Computer Science (R0)
