Can LLMs Teach Human Learners to Understand Concepts Through Analogies?
Proceedings of the Innovation and Responsibility in AI-Supported Education Workshop, PMLR 273:191-201, 2025.
Abstract
Large Language Models (LLMs) hold significant potential to revolutionize education by enabling personalized and effective learning experiences. As cognitive learning principles are gradually applied to the design of educational LLMs, our research focuses on a crucial question: can LLMs enhance student comprehension of complex concepts through analogy-based tutoring, a pedagogical method proven useful in learning science? To address this, we propose a two-stage experimental framework. First, LLM tutors generate analogies for teaching specific target concepts, leveraging prompting techniques to adapt to simulated or real student profiles. Second, these learners engage with the analogies and subsequently complete multiple-choice questions to evaluate their conceptual understanding. Our initial findings reveal that analogy-based tutoring enhances student engagement and conceptual mastery, achieving a notable improvement in comprehension. These results underscore the effectiveness of LLM-driven analogy-based tutoring in advancing educational outcomes and pave the way for future research in this domain.
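As a rough illustration of the two-stage framework the abstract outlines, the minimal Python sketch below shows stage 1 (prompting an LLM tutor for an analogy adapted to a student profile) and stage 2 (scoring multiple-choice answers). The prompt wording, the `llm` callable, and the function names are assumptions for illustration only; they are not the authors' implementation or evaluation setup.

```python
# Illustrative sketch of the two-stage framework (not the authors' code).
# Stage 1: an LLM tutor generates a profile-adapted analogy for a target concept.
# Stage 2: the learner's multiple-choice answers are scored for comprehension.
# `llm` is a placeholder for any text-generation backend (an assumption).
from typing import Callable, List

def generate_analogy(llm: Callable[[str], str], concept: str, profile: str) -> str:
    """Stage 1: ask the LLM tutor for an analogy tailored to a student profile."""
    prompt = (
        f"You are a tutor. Explain the concept '{concept}' using one vivid analogy "
        f"suited to this student profile: {profile}. "
        "Map each part of the analogy back to the concept."
    )
    return llm(prompt)

def score_mcq(answers: List[str], key: List[str]) -> float:
    """Stage 2: fraction of multiple-choice questions answered correctly."""
    return sum(a == k for a, k in zip(answers, key)) / len(key)

# Hypothetical usage: comprehension gain = post-test score minus pre-test score.
# analogy = generate_analogy(llm, "electric current",
#                            "middle-school student who likes plumbing metaphors")
# gain = score_mcq(post_answers, answer_key) - score_mcq(pre_answers, answer_key)
```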