Leveraging Grounded Large Language Models to Automate Educational Presentation Generation

Eric Xie, Guangzhi Xiong, Haolin Yang, Olivia Coleman, Michael Kennedy, Aidong Zhang
Proceedings of Large Foundation Models for Educational Assessment, PMLR 264:207-220, 2025.

Abstract

Large Language Models (LLMs) have shown great potential in education and may significantly facilitate course preparation, from writing quiz questions to automatically evaluating student answers. By helping educators quickly generate high-quality educational content, LLMs enable an increased focus on student engagement, lesson planning, and personalized instruction, ultimately enhancing the overall learning experience. Although slide preparation is a crucial step in education, helping instructors present course material in an organized way, there have been few attempts to use LLMs for slide generation. Because of the hallucination problem of LLMs and the need for accurate knowledge in education, there is a distinct lack of LLM tools that generate presentations tailored for education, especially in specific domains such as biomedicine. To address this gap, we design a new framework to accelerate and automate the slide preparation step in biomedical education using knowledge-enhanced LLMs. Specifically, we leverage the code generation capabilities of LLMs to bridge the gap between the text and slide modalities of a presentation. Retrieval-augmented generation (RAG) is also incorporated into our framework to enhance slide generation with external knowledge bases and to ground the generated content in traceable sources. Our experiments demonstrate the utility of our framework in terms of relevance and depth, reflecting the potential of LLMs to facilitate slide preparation for education.
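The pipeline the abstract describes, retrieving passages from an external knowledge base and then generating slide source text grounded in those passages, can be caricatured in a few lines. The sketch below is purely illustrative: it assumes a toy word-overlap retriever and a fixed template standing in for the LLM code-generation step, and none of the function names, the sample knowledge base, or the Markdown slide format correspond to the authors' actual implementation.

```python
# Illustrative sketch of retrieval-augmented slide generation.
# All names and data here are hypothetical, not the paper's framework.

def retrieve(query, knowledge_base, k=2):
    """Toy lexical retriever: rank passages by word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        knowledge_base,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_slide_markup(topic, evidence):
    """Stand-in for the LLM code-generation step: emit slide source text
    (plain Markdown here) with each bullet tied to a retrieved source."""
    lines = [f"# {topic}", ""]
    for i, passage in enumerate(evidence, 1):
        lines.append(f"- {passage}  [source {i}]")
    return "\n".join(lines)

knowledge_base = [
    "CRISPR-Cas9 enables targeted genome editing in living cells.",
    "Retrieval-augmented generation grounds model outputs in external sources.",
    "Photosynthesis converts light energy into chemical energy.",
]

evidence = retrieve("genome editing with CRISPR", knowledge_base)
slide = build_slide_markup("Genome Editing with CRISPR", evidence)
print(slide)
```

In the full framework, the generation step would instead be an LLM prompted with the retrieved passages to emit presentation code, which is what lets the output cite traceable sources rather than rely on parametric knowledge alone.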

Cite this Paper


BibTeX
@InProceedings{pmlr-v264-xie25a,
  title     = {Leveraging Grounded Large Language Models to Automate Educational Presentation Generation},
  author    = {Xie, Eric and Xiong, Guangzhi and Yang, Haolin and Coleman, Olivia and Kennedy, Michael and Zhang, Aidong},
  booktitle = {Proceedings of Large Foundation Models for Educational Assessment},
  pages     = {207--220},
  year      = {2025},
  editor    = {Li, Sheng and Cui, Zhongmin and Lu, Jiasen and Harris, Deborah and Jing, Shumin},
  volume    = {264},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--16 Dec},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v264/main/assets/xie25a/xie25a.pdf},
  url       = {https://proceedings.mlr.press/v264/xie25a.html},
  abstract  = {Large Language Models (LLMs) have shown great potential in education, which may significantly facilitate course preparation from making quiz questions to automatically evaluating student answers. By helping educators quickly generate high-quality educational content, LLMs enable an increased focus on student engagement, lesson planning, and personalized instruction, ultimately enhancing the overall learning experience. While slide preparation is a crucial step in education, which helps instructors present the course in an organized way, there have been few attempts at using LLMs for slide generation. Due to the hallucination problem of LLMs and the requirement of accurate knowledge in education, there is a distinct lack of LLM tools that generate presentations tailored for education, especially in specific domains such as biomedicine. To address this gap, we design a new framework to accelerate and automate the slide preparation step in biomedical education using knowledge-enhanced LLMs. Specifically, we leverage the code generation capabilities of LLMs to bridge the gap between modalities of texts and slides in presentation. The retrieval-augmented generation (RAG) is also incorporated into our framework to enhance the slide generation with external knowledge bases and ground the generated content with traceable sources. Our experiments demonstrate the utility of our framework in terms of relevance and depth, which reflect the potential of LLMs in facilitating slide preparation for education.}
}
Endnote
%0 Conference Paper
%T Leveraging Grounded Large Language Models to Automate Educational Presentation Generation
%A Eric Xie
%A Guangzhi Xiong
%A Haolin Yang
%A Olivia Coleman
%A Michael Kennedy
%A Aidong Zhang
%B Proceedings of Large Foundation Models for Educational Assessment
%C Proceedings of Machine Learning Research
%D 2025
%E Sheng Li
%E Zhongmin Cui
%E Jiasen Lu
%E Deborah Harris
%E Shumin Jing
%F pmlr-v264-xie25a
%I PMLR
%P 207--220
%U https://proceedings.mlr.press/v264/xie25a.html
%V 264
%X Large Language Models (LLMs) have shown great potential in education, which may significantly facilitate course preparation from making quiz questions to automatically evaluating student answers. By helping educators quickly generate high-quality educational content, LLMs enable an increased focus on student engagement, lesson planning, and personalized instruction, ultimately enhancing the overall learning experience. While slide preparation is a crucial step in education, which helps instructors present the course in an organized way, there have been few attempts at using LLMs for slide generation. Due to the hallucination problem of LLMs and the requirement of accurate knowledge in education, there is a distinct lack of LLM tools that generate presentations tailored for education, especially in specific domains such as biomedicine. To address this gap, we design a new framework to accelerate and automate the slide preparation step in biomedical education using knowledge-enhanced LLMs. Specifically, we leverage the code generation capabilities of LLMs to bridge the gap between modalities of texts and slides in presentation. The retrieval-augmented generation (RAG) is also incorporated into our framework to enhance the slide generation with external knowledge bases and ground the generated content with traceable sources. Our experiments demonstrate the utility of our framework in terms of relevance and depth, which reflect the potential of LLMs in facilitating slide preparation for education.
APA
Xie, E., Xiong, G., Yang, H., Coleman, O., Kennedy, M. & Zhang, A. (2025). Leveraging Grounded Large Language Models to Automate Educational Presentation Generation. Proceedings of Large Foundation Models for Educational Assessment, in Proceedings of Machine Learning Research 264:207-220. Available from https://proceedings.mlr.press/v264/xie25a.html.
