The LEVI training Hub: Evidence-Based Evaluation for AI in Education

J. M. Alexandra L. Andres, John Whitmer
Proceedings of the Innovation and Responsibility in AI-Supported Education Workshop, PMLR 273:202-211, 2025.

Abstract

The rapid growth of education technology (ed tech) tools, including AI-powered applications, has highlighted the need for robust evaluation frameworks, particularly at early development stages. Current evaluation models, such as the Every Student Succeeds Act (ESSA) evidence tiers created by the U.S. Department of Education, may be appropriate for many education research activities but miss critical stages in the development of emerging AI-driven interventions. To support the Learning Engineering Virtual Institute (LEVI), a research collaboratory with the goal of doubling math learning rates among middle school students, we have developed a new evidence matrix to bridge this gap. The matrix takes a two-dimensional approach that evaluates research methods alongside outcome variables, enabling nuanced assessment of interventions along an ordered process. By categorizing research methods into five levels, ranging from randomized controlled trials to qualitative studies and modeling efforts, the matrix ensures comprehensive evaluation. Complementary outcome measures, emphasizing math learning gains, engagement, and model performance, contextualize these findings. This framework fosters alignment between research rigor and practical application, offering valuable insights into scaling educational innovations responsibly.
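To make the matrix's two-dimensional structure concrete, the following is a minimal Python sketch of how studies might be classified into its cells. The abstract names only the endpoints of the method scale (randomized controlled trials at one end; qualitative studies and modeling efforts at the other) and the three outcome families; the intermediate method tiers, the level ordering, and every identifier below are illustrative assumptions, not the authors' implementation.

    # Illustrative sketch of the two-dimensional evidence matrix.
    # Levels RCT and MODELING follow the abstract; the intermediate
    # tiers and their ordering are placeholders, not from the paper.
    from collections import defaultdict
    from dataclasses import dataclass
    from enum import IntEnum

    class MethodLevel(IntEnum):
        RCT = 1                 # randomized controlled trials
        QUASI_EXPERIMENTAL = 2  # assumed intermediate tier
        CORRELATIONAL = 3       # assumed intermediate tier
        QUALITATIVE = 4         # qualitative studies
        MODELING = 5            # modeling efforts

    # Outcome families emphasized in the abstract.
    OUTCOMES = {"math_learning_gains", "engagement", "model_performance"}

    @dataclass(frozen=True)
    class EvidenceEntry:
        study_id: str
        method: MethodLevel
        outcome: str

        def __post_init__(self) -> None:
            if self.outcome not in OUTCOMES:
                raise ValueError(f"unknown outcome: {self.outcome}")

    def build_matrix(entries):
        """Group studies into (method, outcome) cells of the matrix."""
        matrix = defaultdict(list)
        for e in entries:
            matrix[(e.method, e.outcome)].append(e.study_id)
        return matrix

    # Hypothetical usage: one RCT on learning gains, one modeling study.
    cells = build_matrix([
        EvidenceEntry("study-01", MethodLevel.RCT, "math_learning_gains"),
        EvidenceEntry("study-02", MethodLevel.MODELING, "model_performance"),
    ])
    for (method, outcome), studies in sorted(cells.items()):
        print(method.name, outcome, studies)

Under this reading, a cell's position encodes both the rigor of the method and the outcome it measures, so a portfolio of studies can be audited for coverage across the ordered process the paper describes.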

Cite this Paper


BibTeX
@InProceedings{pmlr-v273-andres25a,
  title     = {The LEVI training Hub: Evidence-Based Evaluation for AI in Education},
  author    = {Andres, J. M. Alexandra L. and Whitmer, John},
  booktitle = {Proceedings of the Innovation and Responsibility in AI-Supported Education Workshop},
  pages     = {202--211},
  year      = {2025},
  editor    = {Wang, Zichao and Woodhead, Simon and Ananda, Muktha and Mallick, Debshila Basu and Sharpnack, James and Burstein, Jill},
  volume    = {273},
  series    = {Proceedings of Machine Learning Research},
  month     = {03 Mar},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v273/main/assets/andres25a/andres25a.pdf},
  url       = {https://proceedings.mlr.press/v273/andres25a.html}
}
APA
Andres, J. M. A. L., & Whitmer, J. (2025). The LEVI training Hub: Evidence-Based Evaluation for AI in Education. Proceedings of the Innovation and Responsibility in AI-Supported Education Workshop, in Proceedings of Machine Learning Research 273:202-211. Available from https://proceedings.mlr.press/v273/andres25a.html.