Adaptive Knowledge Assessment In Simulated Coding Interviews

Michael Ion, Sumit Ashana, Fengquan Jiao, Tianyi Wang, Kevyn Collins-Thompson
Proceedings of the Innovation and Responsibility in AI-Supported Education Workshop, PMLR 273:260-262, 2025.

Abstract

We present a system for simulating student coding interview responses to sequential interview questions, with the goal of accurately inferring student expertise levels. With these simulated students, we explored fixed and adaptive question selection policies, where the adaptive policy exploits a knowledge component dependency graph to maximize information gain. Our results show that adaptive questioning policies yield increasing benefits over a fixed policy as student expertise rises, achieving expert-assessment F1-scores of 0.4-0.8 for student expertise prediction compared to 0.25-0.35 for fixed strategies.
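The abstract's adaptive policy selects each next question to maximize information gain about the student's expertise. The paper's actual implementation is not reproduced here; as a minimal illustrative sketch, the snippet below maintains a belief over discrete expertise levels and greedily picks the question whose answer is expected to reduce entropy the most. All names, levels, questions, and probabilities are hypothetical, and the knowledge component dependency graph the paper uses is omitted for brevity.

```python
import math

# Hypothetical expertise levels; the paper's actual levels may differ.
EXPERTISE_LEVELS = ["novice", "intermediate", "expert"]

def entropy(belief):
    """Shannon entropy (bits) of a belief over expertise levels."""
    return -sum(p * math.log2(p) for p in belief.values() if p > 0)

def update(belief, p_correct, correct):
    """Bayesian update of the belief given a correct/incorrect answer."""
    posterior = {}
    for level, prior in belief.items():
        likelihood = p_correct[level] if correct else 1.0 - p_correct[level]
        posterior[level] = prior * likelihood
    z = sum(posterior.values())
    return {level: p / z for level, p in posterior.items()}

def expected_information_gain(belief, p_correct):
    """Expected entropy reduction from asking a question with the
    given per-level probabilities of a correct answer."""
    p_yes = sum(belief[level] * p_correct[level] for level in belief)
    gain = entropy(belief)
    for correct, p_outcome in ((True, p_yes), (False, 1.0 - p_yes)):
        if p_outcome > 0:
            gain -= p_outcome * entropy(update(belief, p_correct, correct))
    return gain

# Made-up question bank: per-level probability of answering correctly.
questions = {
    "easy_loop": {"novice": 0.70, "intermediate": 0.90, "expert": 0.95},
    "recursion": {"novice": 0.20, "intermediate": 0.60, "expert": 0.90},
    "dp_hard":   {"novice": 0.05, "intermediate": 0.30, "expert": 0.80},
}

# Start from a uniform belief and pick the most informative question.
belief = {level: 1 / 3 for level in EXPERTISE_LEVELS}
best = max(questions, key=lambda q: expected_information_gain(belief, questions[q]))
print(best)
```

The greedy selection favors questions whose outcome discriminates most between levels; a fixed policy would ask a predetermined sequence regardless of the evolving belief, which is one plausible reading of why the gap the abstract reports widens at higher expertise levels.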

Cite this Paper


BibTeX
@InProceedings{pmlr-v273-ion25a,
  title     = {Adaptive Knowledge Assessment In Simulated Coding Interviews},
  author    = {Ion, Michael and Ashana, Sumit and Jiao, Fengquan and Wang, Tianyi and Collins-Thompson, Kevyn},
  booktitle = {Proceedings of the Innovation and Responsibility in AI-Supported Education Workshop},
  pages     = {260--262},
  year      = {2025},
  editor    = {Wang, Zichao and Woodhead, Simon and Ananda, Muktha and Mallick, Debshila Basu and Sharpnack, James and Burstein, Jill},
  volume    = {273},
  series    = {Proceedings of Machine Learning Research},
  month     = {03 Mar},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v273/main/assets/ion25a/ion25a.pdf},
  url       = {https://proceedings.mlr.press/v273/ion25a.html},
  abstract  = {We present a system for simulating student coding interview responses to sequential interview questions, with the goal of accurately inferring student expertise levels. With these simulated students, we explored fixed and adaptive question selection policies, where the adaptive policy exploits a knowledge component dependency graph to maximize information gain. Our results show that adaptive questioning policies yield increasing benefits over a fixed policy as student expertise rises, achieving expert-assessment F1-scores of 0.4-0.8 for student expertise prediction compared to 0.25-0.35 for fixed strategies.}
}
Endnote
%0 Conference Paper
%T Adaptive Knowledge Assessment In Simulated Coding Interviews
%A Michael Ion
%A Sumit Ashana
%A Fengquan Jiao
%A Tianyi Wang
%A Kevyn Collins-Thompson
%B Proceedings of the Innovation and Responsibility in AI-Supported Education Workshop
%C Proceedings of Machine Learning Research
%D 2025
%E Zichao Wang
%E Simon Woodhead
%E Muktha Ananda
%E Debshila Basu Mallick
%E James Sharpnack
%E Jill Burstein
%F pmlr-v273-ion25a
%I PMLR
%P 260--262
%U https://proceedings.mlr.press/v273/ion25a.html
%V 273
%X We present a system for simulating student coding interview responses to sequential interview questions, with the goal of accurately inferring student expertise levels. With these simulated students, we explored fixed and adaptive question selection policies, where the adaptive policy exploits a knowledge component dependency graph to maximize information gain. Our results show that adaptive questioning policies yield increasing benefits over a fixed policy as student expertise rises, achieving expert-assessment F1-scores of 0.4-0.8 for student expertise prediction compared to 0.25-0.35 for fixed strategies.
APA
Ion, M., Ashana, S., Jiao, F., Wang, T., & Collins-Thompson, K. (2025). Adaptive Knowledge Assessment In Simulated Coding Interviews. Proceedings of the Innovation and Responsibility in AI-Supported Education Workshop, in Proceedings of Machine Learning Research 273:260-262. Available from https://proceedings.mlr.press/v273/ion25a.html.