Improving LLM-based Automatic Essay Scoring with Linguistic Features
Proceedings of the Innovation and Responsibility in AI-Supported Education Workshop, PMLR 273:41-65, 2025.
Abstract
Automatic Essay Scoring (AES) assigns scores to student essays, reducing the grading workload for instructors. Developing a scoring system capable of handling essays across diverse prompts is challenging due to the flexible and diverse nature of the writing task. Previous work has shown promising results in AES by prompting large language models (LLMs). While prompting LLMs is data-efficient, it does not surpass supervised methods trained on extracted linguistic features (Li and Ng, 2024). In this paper, we combine both approaches by incorporating linguistic features into LLM-based scoring. Experiments show promising results from this hybrid method on both in-domain and out-of-domain essay prompts.
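The paper's exact feature set and prompt template are not reproduced here; the following is a minimal Python sketch of the general idea described in the abstract: computing a few shallow linguistic features and injecting them into an LLM scoring prompt. The function names, feature choices, score scale, and prompt wording are all illustrative assumptions, not the authors' implementation.

```python
import re


def extract_linguistic_features(essay: str) -> dict:
    """Compute a few shallow linguistic features of the kind used in feature-based AES.

    These particular features (length, average sentence length, type-token ratio)
    are placeholders; a real system would use a richer, validated feature set.
    """
    words = re.findall(r"[A-Za-z']+", essay)
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    return {
        "word_count": len(words),
        "sentence_count": len(sentences),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "type_token_ratio": len({w.lower() for w in words}) / max(len(words), 1),
    }


def build_scoring_prompt(prompt_text: str, essay: str, features: dict) -> str:
    """Insert the extracted features into the instruction given to the LLM."""
    feature_lines = "\n".join(
        f"- {name}: {value:.2f}" if isinstance(value, float) else f"- {name}: {value}"
        for name, value in features.items()
    )
    return (
        "You are an essay rater. Score the essay below on a 1-6 scale.\n\n"
        f"Essay prompt:\n{prompt_text}\n\n"
        f"Essay:\n{essay}\n\n"
        "Linguistic features extracted from the essay:\n"
        f"{feature_lines}\n\n"
        "Return only the integer score."
    )


if __name__ == "__main__":
    essay = (
        "Computers help students learn. They give quick access to information. "
        "However, too much screen time can be harmful."
    )
    features = extract_linguistic_features(essay)
    prompt = build_scoring_prompt(
        "Discuss the effects of computers on education.", essay, features
    )
    print(prompt)
    # The assembled prompt would then be sent to an LLM through whatever API
    # client is in use, and the score parsed from the model's reply.
```

In this sketch, the hybrid step is simply that the feature values appear in the prompt text alongside the essay, so the LLM can condition its judgment on them; how the features are formatted and which ones are included is a design choice the paper itself would specify.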