Adaptive Knowledge Assessment In Simulated Coding Interviews
Proceedings of the Innovation and Responsibility in AI-Supported Education Workshop, PMLR 273:260-262, 2025.
Abstract
We present a system for simulating student coding interview responses to sequential interview questions, with the goal of accurately inferring student expertise levels. Using these simulated students, we explored fixed and adaptive question selection policies, where the adaptive policy exploits a knowledge component dependency graph to maximize information gain. Our results show that the adaptive questioning policy yields increasing benefits over the fixed policy as student expertise rises, achieving F1-scores of 0.4-0.8 for student expertise prediction compared to 0.25-0.35 for the fixed strategy.
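
As a rough illustration of the adaptive policy described in the abstract, the sketch below selects each next question by expected information gain over a discrete posterior of student expertise. This is not the authors' implementation: the logistic response model, expertise levels, and question difficulties are hypothetical placeholders, and the knowledge component dependency graph from the paper is not modeled here.

```python
# Minimal sketch (assumptions, not the paper's system): adaptive question
# selection by expected information gain over a posterior of student expertise.
import math
import random

EXPERTISE_LEVELS = [0.2, 0.5, 0.8]                  # hypothetical latent expertise values
QUESTION_DIFFICULTIES = [0.1, 0.3, 0.5, 0.7, 0.9]   # hypothetical question pool

def p_correct(expertise, difficulty):
    """Assumed logistic response model: P(correct | expertise, difficulty)."""
    return 1.0 / (1.0 + math.exp(-8.0 * (expertise - difficulty)))

def entropy(posterior):
    return -sum(p * math.log(p) for p in posterior if p > 0)

def update(posterior, difficulty, correct):
    """Bayesian update of the expertise posterior after one observed response."""
    likelihoods = [p_correct(e, difficulty) if correct else 1 - p_correct(e, difficulty)
                   for e in EXPERTISE_LEVELS]
    unnorm = [p * l for p, l in zip(posterior, likelihoods)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

def expected_information_gain(posterior, difficulty):
    """Expected entropy reduction from asking a question of this difficulty."""
    h0 = entropy(posterior)
    p_c = sum(p * p_correct(e, difficulty) for p, e in zip(posterior, EXPERTISE_LEVELS))
    gain = 0.0
    for correct, p_outcome in ((True, p_c), (False, 1 - p_c)):
        if p_outcome > 0:
            gain += p_outcome * (h0 - entropy(update(posterior, difficulty, correct)))
    return gain

def select_question(posterior, remaining):
    """Adaptive policy: ask the remaining question with maximal expected gain."""
    return max(remaining, key=lambda d: expected_information_gain(posterior, d))

if __name__ == "__main__":
    true_expertise = 0.8                             # simulated student's hidden skill
    posterior = [1 / len(EXPERTISE_LEVELS)] * len(EXPERTISE_LEVELS)
    remaining = list(QUESTION_DIFFICULTIES)
    for _ in range(3):
        q = select_question(posterior, remaining)
        remaining.remove(q)
        correct = random.random() < p_correct(true_expertise, q)
        posterior = update(posterior, q, correct)
        print(f"asked difficulty {q:.1f}, correct={correct}, "
              f"posterior={['%.2f' % p for p in posterior]}")
```

A fixed policy, by contrast, would ask questions in a predetermined difficulty order regardless of the evolving posterior, which is what the adaptive strategy is compared against in the abstract.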