Challenges and Opportunities of Moderating Usage of Large Language Models in Education

Lars Krupp, Steffen Steinert, Maximilian Kiefer-Emmanouilidis, Karina E Avila, Paul Lukowicz, Jochen Kuhn, Stefan Küchemann, Jakob Karolus
Proceedings of the 2024 AAAI Conference on Artificial Intelligence, PMLR 257:9-18, 2024.

Abstract

The increased presence of large language models (LLMs) in educational settings has ignited debates concerning negative repercussions, including overreliance and inadequate task reflection. Our work advocates moderated usage of such models, designed in a way that supports students and encourages critical thinking. We developed two moderated interaction methods with ChatGPT: hint-based assistance and presenting multiple answer choices. In a study with students (N=40) answering physics questions, we compared the effects of our moderated models against two baseline settings: unmoderated ChatGPT access and internet searches. We analyzed the interaction strategies and found that the moderated versions exhibited less unreflected usage (e.g., copy & paste) compared to the unmoderated condition. However, neither ChatGPT-supported condition could match the ratio of reflected usage present in internet searches. Our research highlights the potential benefits of moderating language models, showing a research direction toward designing effective AI-supported educational strategies.
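The paper does not publish its implementation, but the two moderated interaction modes named in the abstract (hint-based assistance and multiple answer choices) can be pictured as thin prompting layers placed in front of a chat model. The sketch below is purely illustrative: the function names, prompt wording, and the placeholder call_llm client are assumptions for exposition, not the authors' actual setup.

    # Illustrative sketch only; not the implementation used in the paper.
    # `call_llm` is a stand-in for any chat-model API client.

    def call_llm(prompt: str) -> str:
        """Placeholder for a chat-model call (e.g., an HTTP request to an LLM API)."""
        raise NotImplementedError("wire up an actual LLM client here")

    def hint_based_assistance(question: str) -> str:
        """Moderated mode 1 (assumed form): return a hint that points at the
        relevant concept without revealing the final answer."""
        prompt = (
            "A student is working on the following physics question:\n"
            f"{question}\n"
            "Give a short hint that names the relevant concept or first step, "
            "but do NOT state the final answer or a full solution."
        )
        return call_llm(prompt)

    def multiple_answer_choices(question: str, n: int = 3) -> list[str]:
        """Moderated mode 2 (assumed form): return several candidate answers the
        student must weigh and choose between, prompting reflection."""
        prompt = (
            f"For the physics question below, propose {n} distinct candidate "
            "answers, each with a one-sentence justification. Do not indicate "
            "which one you consider correct.\n"
            f"{question}"
        )
        reply = call_llm(prompt)
        # Naive line-based split; a real system would parse the reply more robustly.
        return [c.strip() for c in reply.split("\n") if c.strip()][:n]

In both modes the moderation lives entirely in the prompt and in what is shown to the student; the unmoderated baseline in the study simply exposed the chat model's full answers directly.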

Cite this Paper


BibTeX
@InProceedings{pmlr-v257-krupp24a,
  title     = {Challenges and Opportunities of Moderating Usage of Large Language Models in Education},
  author    = {Krupp, Lars and Steinert, Steffen and Kiefer-Emmanouilidis, Maximilian and Avila, Karina E and Lukowicz, Paul and Kuhn, Jochen and K{\"u}chemann, Stefan and Karolus, Jakob},
  booktitle = {Proceedings of the 2024 AAAI Conference on Artificial Intelligence},
  pages     = {9--18},
  year      = {2024},
  editor    = {Ananda, Muktha and Malick, Debshila Basu and Burstein, Jill and Liu, Lydia T. and Liu, Zitao and Sharpnack, James and Wang, Zichao and Wang, Serena},
  volume    = {257},
  series    = {Proceedings of Machine Learning Research},
  month     = {26--27 Feb},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v257/main/assets/krupp24a/krupp24a.pdf},
  url       = {https://proceedings.mlr.press/v257/krupp24a.html},
  abstract  = {The increased presence of large language models (LLMs) in educational settings has ignited debates concerning negative repercussions, including overreliance and inadequate task reflection. Our work advocates moderated usage of such models, designed in a way that supports students and encourages critical thinking. We developed two moderated interaction methods with ChatGPT: hint-based assistance and presenting multiple answer choices. In a study with students (N=40) answering physics questions, we compared the effects of our moderated models against two baseline settings: unmoderated ChatGPT access and internet searches. We analyzed the interaction strategies and found that the moderated versions exhibited less unreflected usage (e.g., copy & paste) compared to the unmoderated condition. However, neither ChatGPT-supported condition could match the ratio of reflected usage present in internet searches. Our research highlights the potential benefits of moderating language models, showing a research direction toward designing effective AI-supported educational strategies.}
}
Endnote
%0 Conference Paper
%T Challenges and Opportunities of Moderating Usage of Large Language Models in Education
%A Lars Krupp
%A Steffen Steinert
%A Maximilian Kiefer-Emmanouilidis
%A Karina E Avila
%A Paul Lukowicz
%A Jochen Kuhn
%A Stefan Küchemann
%A Jakob Karolus
%B Proceedings of the 2024 AAAI Conference on Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2024
%E Muktha Ananda
%E Debshila Basu Malick
%E Jill Burstein
%E Lydia T. Liu
%E Zitao Liu
%E James Sharpnack
%E Zichao Wang
%E Serena Wang
%F pmlr-v257-krupp24a
%I PMLR
%P 9--18
%U https://proceedings.mlr.press/v257/krupp24a.html
%V 257
%X The increased presence of large language models (LLMs) in educational settings has ignited debates concerning negative repercussions, including overreliance and inadequate task reflection. Our work advocates moderated usage of such models, designed in a way that supports students and encourages critical thinking. We developed two moderated interaction methods with ChatGPT: hint-based assistance and presenting multiple answer choices. In a study with students (N=40) answering physics questions, we compared the effects of our moderated models against two baseline settings: unmoderated ChatGPT access and internet searches. We analyzed the interaction strategies and found that the moderated versions exhibited less unreflected usage (e.g., copy & paste) compared to the unmoderated condition. However, neither ChatGPT-supported condition could match the ratio of reflected usage present in internet searches. Our research highlights the potential benefits of moderating language models, showing a research direction toward designing effective AI-supported educational strategies.
APA
Krupp, L., Steinert, S., Kiefer-Emmanouilidis, M., Avila, K.E., Lukowicz, P., Kuhn, J., Küchemann, S. & Karolus, J. (2024). Challenges and Opportunities of Moderating Usage of Large Language Models in Education. Proceedings of the 2024 AAAI Conference on Artificial Intelligence, in Proceedings of Machine Learning Research 257:9-18. Available from https://proceedings.mlr.press/v257/krupp24a.html.

Related Material

Download PDF: https://raw.githubusercontent.com/mlresearch/v257/main/assets/krupp24a/krupp24a.pdf