MultiTutor: Collaborative LLM Agents for Multimodal Student Support

Edward Sun, LeAnn Tai
Proceedings of the Innovation and Responsibility in AI-Supported Education Workshop, PMLR 273:174-190, 2025.

Abstract

The advent of Large Language Models (LLMs) has revolutionized education, introducing AI tools that enhance teaching and learning. Once purely natural language processors, LLMs have evolved into autonomous agents capable of complex tasks, from software development to high-level trading decisions. However, most educational applications only focus on classroom simulations or single-agent automation, leaving the potential of multi-agent systems for personalized support underexplored. To address this, we propose MultiTutor, a multi-agent tutoring framework tailored to individual student needs. MultiTutor uses internet searches and code generation to produce multimodal outputs like images and animations while expert agents synthesize information to deliver explanatory text, create visualizations, suggest resources, design practice problems, and develop interactive simulations. By identifying knowledge gaps and scaffolding learning, MultiTutor offers a transformative, accessible approach to education. Evaluation against baseline models across metrics like cognitive complexity, readability, depth, and diversity shows MultiTutor consistently outperforms in quality and relevance. Case studies further highlight its potential as an innovative solution for automated tutoring and student support.
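The abstract gives only a high-level view of the workflow: a coordinator diagnoses a student's knowledge gaps and routes the request to expert agents that each produce one modality (explanatory text, visualizations, resource suggestions, practice problems, interactive simulations). The sketch below is a minimal, hypothetical illustration of that orchestration pattern, not the authors' implementation; the call_llm stub, the ExpertAgent class, and all agent names and prompts are assumptions made for illustration only.

from dataclasses import dataclass
from typing import Dict


def call_llm(system_prompt: str, user_prompt: str) -> str:
    # Placeholder for a real LLM API call (assumed interface, for illustration only).
    return f"[{system_prompt.split(':')[0]} response to: {user_prompt}]"


@dataclass
class ExpertAgent:
    """One specialist agent responsible for a single output modality."""
    name: str
    system_prompt: str

    def run(self, question: str, context: str) -> str:
        # Each expert sees the student's question plus the shared gap diagnosis.
        return call_llm(self.system_prompt, f"Context: {context}\nQuestion: {question}")


# Hypothetical roster of experts, one per modality described in the abstract.
EXPERTS: Dict[str, ExpertAgent] = {
    "explainer":  ExpertAgent("explainer",  "Explainer: write a scaffolded text explanation"),
    "visualizer": ExpertAgent("visualizer", "Visualizer: produce plotting or animation code"),
    "resources":  ExpertAgent("resources",  "Resource finder: suggest external readings"),
    "problems":   ExpertAgent("problems",   "Problem designer: write practice problems"),
    "simulator":  ExpertAgent("simulator",  "Simulation builder: outline an interactive demo"),
}


def tutor(question: str, student_profile: str) -> Dict[str, str]:
    # Step 1: a planner agent identifies likely knowledge gaps from the question and profile.
    gaps = call_llm(
        "Planner: list the student's likely knowledge gaps",
        f"Profile: {student_profile}\nQuestion: {question}",
    )
    # Step 2: every expert produces its modality, conditioned on the shared diagnosis.
    return {name: agent.run(question, gaps) for name, agent in EXPERTS.items()}


if __name__ == "__main__":
    outputs = tutor("Why does the sky appear blue?", "middle-school student")
    for modality, text in outputs.items():
        print(f"--- {modality} ---\n{text}\n")

In a real system the planner and experts would call an actual model, the visualizer and simulator would execute the generated code, and a search tool would back the resource agent; here those pieces are stubbed out so the control flow stays visible.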

Cite this Paper

BibTeX
@InProceedings{pmlr-v273-sun25a,
  title     = {MultiTutor: Collaborative LLM Agents for Multimodal Student Support},
  author    = {Sun, Edward and Tai, LeAnn},
  booktitle = {Proceedings of the Innovation and Responsibility in AI-Supported Education Workshop},
  pages     = {174--190},
  year      = {2025},
  editor    = {Wang, Zichao and Woodhead, Simon and Ananda, Muktha and Mallick, Debshila Basu and Sharpnack, James and Burstein, Jill},
  volume    = {273},
  series    = {Proceedings of Machine Learning Research},
  month     = {03 Mar},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v273/main/assets/sun25a/sun25a.pdf},
  url       = {https://proceedings.mlr.press/v273/sun25a.html},
  abstract  = {The advent of Large Language Models (LLMs) has revolutionized education, introducing AI tools that enhance teaching and learning. Once purely natural language processors, LLMs have evolved into autonomous agents capable of complex tasks, from software development to high-level trading decisions. However, most educational applications only focus on classroom simulations or single-agent automation, leaving the potential of multi-agent systems for personalized support underexplored. To address this, we propose MultiTutor, a multi-agent tutoring framework tailored to individual student needs. MultiTutor uses internet searches and code generation to produce multimodal outputs like images and animations while expert agents synthesize information to deliver explanatory text, create visualizations, suggest resources, design practice problems, and develop interactive simulations. By identifying knowledge gaps and scaffolding learning, MultiTutor offers a transformative, accessible approach to education. Evaluation against baseline models across metrics like cognitive complexity, readability, depth, and diversity shows MultiTutor consistently outperforms in quality and relevance. Case studies further highlight its potential as an innovative solution for automated tutoring and student support.}
}
Endnote
%0 Conference Paper
%T MultiTutor: Collaborative LLM Agents for Multimodal Student Support
%A Edward Sun
%A LeAnn Tai
%B Proceedings of the Innovation and Responsibility in AI-Supported Education Workshop
%C Proceedings of Machine Learning Research
%D 2025
%E Zichao Wang
%E Simon Woodhead
%E Muktha Ananda
%E Debshila Basu Mallick
%E James Sharpnack
%E Jill Burstein
%F pmlr-v273-sun25a
%I PMLR
%P 174--190
%U https://proceedings.mlr.press/v273/sun25a.html
%V 273
%X The advent of Large Language Models (LLMs) has revolutionized education, introducing AI tools that enhance teaching and learning. Once purely natural language processors, LLMs have evolved into autonomous agents capable of complex tasks, from software development to high-level trading decisions. However, most educational applications only focus on classroom simulations or single-agent automation, leaving the potential of multi-agent systems for personalized support underexplored. To address this, we propose MultiTutor, a multi-agent tutoring framework tailored to individual student needs. MultiTutor uses internet searches and code generation to produce multimodal outputs like images and animations while expert agents synthesize information to deliver explanatory text, create visualizations, suggest resources, design practice problems, and develop interactive simulations. By identifying knowledge gaps and scaffolding learning, MultiTutor offers a transformative, accessible approach to education. Evaluation against baseline models across metrics like cognitive complexity, readability, depth, and diversity shows MultiTutor consistently outperforms in quality and relevance. Case studies further highlight its potential as an innovative solution for automated tutoring and student support.
APA
Sun, E. & Tai, L. (2025). MultiTutor: Collaborative LLM Agents for Multimodal Student Support. Proceedings of the Innovation and Responsibility in AI-Supported Education Workshop, in Proceedings of Machine Learning Research 273:174-190. Available from https://proceedings.mlr.press/v273/sun25a.html.
