MultiTutor: Collaborative LLM Agents for Multimodal Student Support
Proceedings of the Innovation and Responsibility in AI-Supported Education Workshop, PMLR 273:174-190, 2025.
Abstract
The advent of Large Language Models (LLMs) has transformed education, introducing AI tools that enhance both teaching and learning. Once purely natural language processors, LLMs have evolved into autonomous agents capable of complex tasks, from software development to high-level trading decisions. However, most educational applications focus only on classroom simulations or single-agent automation, leaving the potential of multi-agent systems for personalized support underexplored. To address this gap, we propose MultiTutor, a multi-agent tutoring framework tailored to individual student needs. MultiTutor uses internet search and code generation to produce multimodal outputs such as images and animations, while expert agents synthesize information to deliver explanatory text, create visualizations, suggest resources, design practice problems, and develop interactive simulations. By identifying knowledge gaps and scaffolding learning, MultiTutor offers a transformative, accessible approach to education. Evaluation against baseline models on metrics including cognitive complexity, readability, depth, and diversity shows that MultiTutor consistently outperforms them in quality and relevance. Case studies further highlight its potential as an innovative solution for automated tutoring and student support.