Using Large Language Models to Assess Tutors’ Performance in Reacting to Students Making Math Errors

Sanjit Kakarla, Danielle R Thomas, Jionghao Lin, Shivang Gupta, Kenneth R Koedinger
Proceedings of the 2024 AAAI Conference on Artificial Intelligence, PMLR 257:77-84, 2024.

Abstract

Research suggests that tutors should adopt a strategic approach when addressing math errors made by low-efficacy students. Rather than drawing direct attention to the error, tutors should guide the students to identify and correct their mistakes on their own. While tutor lessons have introduced this pedagogical skill, human evaluation of tutors applying this strategy is arduous and time-consuming. Large language models (LLMs) show promise in providing real-time assessment to tutors during their actual tutoring sessions, yet little is known regarding their accuracy in this context. In this study, we investigate the capacity of generative AI to evaluate real-life tutors’ performance in responding to students making math errors. By analyzing 50 real-life tutoring dialogues, we find both GPT-3.5-Turbo and GPT-4 demonstrate proficiency in assessing the criteria related to reacting to students making errors. However, both models exhibit limitations in recognizing instances where the student made an error. Notably, GPT-4 tends to overidentify instances of students making errors, often attributing student uncertainty or inferring potential errors where human evaluators did not. Future work will focus on enhancing generalizability by assessing a larger dataset of dialogues and evaluating learning transfer. Specifically, we will analyze the performance of tutors in real-life scenarios when responding to students’ math errors before and after lesson completion on this crucial tutoring skill.
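The abstract describes the grading setup only at a high level. Below is a minimal sketch of how such LLM-based assessment of a tutor's reaction might be wired up with the OpenAI chat API; the rubric wording, prompt structure, and sample dialogue are illustrative assumptions, not the authors' actual prompts or pipeline.

```python
# Minimal sketch: asking an LLM to grade a tutor's reaction to a student's
# math error. Rubric text, prompt, and dialogue are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

RUBRIC = (
    "You are grading a tutor's response to a student in a math tutoring "
    "dialogue. Answer two questions with YES or NO, one per line:\n"
    "1. Did the student make a math error in this exchange?\n"
    "2. If so, did the tutor guide the student to find and fix the error "
    "on their own rather than pointing it out directly?"
)

dialogue = (
    "Student: 3/4 + 1/2 = 4/6\n"
    "Tutor: Interesting! Can you walk me through how you added those fractions?"
)

response = client.chat.completions.create(
    model="gpt-4",   # the paper compares gpt-3.5-turbo and gpt-4
    temperature=0,   # deterministic output for grading consistency
    messages=[
        {"role": "system", "content": RUBRIC},
        {"role": "user", "content": dialogue},
    ],
)
print(response.choices[0].message.content)
```

Splitting the task into two yes/no questions mirrors the paper's finding: models can score the tutor's reaction fairly well, but first deciding whether the student actually erred is where GPT-4 tends to over-trigger.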

Cite this Paper

BibTeX
@InProceedings{pmlr-v257-kakarla24a,
  title     = {Using Large Language Models to Assess Tutors’ Performance in Reacting to Students Making Math Errors},
  author    = {Kakarla, Sanjit and Thomas, Danielle R and Lin, Jionghao and Gupta, Shivang and Koedinger, Kenneth R},
  booktitle = {Proceedings of the 2024 AAAI Conference on Artificial Intelligence},
  pages     = {77--84},
  year      = {2024},
  editor    = {Ananda, Muktha and Malick, Debshila Basu and Burstein, Jill and Liu, Lydia T. and Liu, Zitao and Sharpnack, James and Wang, Zichao and Wang, Serena},
  volume    = {257},
  series    = {Proceedings of Machine Learning Research},
  month     = {26--27 Feb},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v257/main/assets/kakarla24a/kakarla24a.pdf},
  url       = {https://proceedings.mlr.press/v257/kakarla24a.html},
  abstract  = {Research suggests that tutors should adopt a strategic approach when addressing math errors made by low-efficacy students. Rather than drawing direct attention to the error, tutors should guide the students to identify and correct their mistakes on their own. While tutor lessons have introduced this pedagogical skill, human evaluation of tutors applying this strategy is arduous and time-consuming. Large language models (LLMs) show promise in providing real-time assessment to tutors during their actual tutoring sessions, yet little is known regarding their accuracy in this context. In this study, we investigate the capacity of generative AI to evaluate real-life tutors’ performance in responding to students making math errors. By analyzing 50 real-life tutoring dialogues, we find both GPT-3.5-Turbo and GPT-4 demonstrate proficiency in assessing the criteria related to reacting to students making errors. However, both models exhibit limitations in recognizing instances where the student made an error. Notably, GPT-4 tends to overidentify instances of students making errors, often attributing student uncertainty or inferring potential errors where human evaluators did not. Future work will focus on enhancing generalizability by assessing a larger dataset of dialogues and evaluating learning transfer. Specifically, we will analyze the performance of tutors in real-life scenarios when responding to students’ math errors before and after lesson completion on this crucial tutoring skill.}
}
Endnote
%0 Conference Paper
%T Using Large Language Models to Assess Tutors’ Performance in Reacting to Students Making Math Errors
%A Sanjit Kakarla
%A Danielle R Thomas
%A Jionghao Lin
%A Shivang Gupta
%A Kenneth R Koedinger
%B Proceedings of the 2024 AAAI Conference on Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2024
%E Muktha Ananda
%E Debshila Basu Malick
%E Jill Burstein
%E Lydia T. Liu
%E Zitao Liu
%E James Sharpnack
%E Zichao Wang
%E Serena Wang
%F pmlr-v257-kakarla24a
%I PMLR
%P 77--84
%U https://proceedings.mlr.press/v257/kakarla24a.html
%V 257
%X Research suggests that tutors should adopt a strategic approach when addressing math errors made by low-efficacy students. Rather than drawing direct attention to the error, tutors should guide the students to identify and correct their mistakes on their own. While tutor lessons have introduced this pedagogical skill, human evaluation of tutors applying this strategy is arduous and time-consuming. Large language models (LLMs) show promise in providing real-time assessment to tutors during their actual tutoring sessions, yet little is known regarding their accuracy in this context. In this study, we investigate the capacity of generative AI to evaluate real-life tutors’ performance in responding to students making math errors. By analyzing 50 real-life tutoring dialogues, we find both GPT-3.5-Turbo and GPT-4 demonstrate proficiency in assessing the criteria related to reacting to students making errors. However, both models exhibit limitations in recognizing instances where the student made an error. Notably, GPT-4 tends to overidentify instances of students making errors, often attributing student uncertainty or inferring potential errors where human evaluators did not. Future work will focus on enhancing generalizability by assessing a larger dataset of dialogues and evaluating learning transfer. Specifically, we will analyze the performance of tutors in real-life scenarios when responding to students’ math errors before and after lesson completion on this crucial tutoring skill.
APA
Kakarla, S., Thomas, D.R., Lin, J., Gupta, S. & Koedinger, K.R. (2024). Using Large Language Models to Assess Tutors’ Performance in Reacting to Students Making Math Errors. Proceedings of the 2024 AAAI Conference on Artificial Intelligence, in Proceedings of Machine Learning Research 257:77-84. Available from https://proceedings.mlr.press/v257/kakarla24a.html.