Results and Insights from Diagnostic Questions: The NeurIPS 2020 Education Challenge

Zichao Wang, Angus Lamb, Evgeny Saveliev, Pashmina Cameron, Jordan Zaykov, Jose Miguel Hernandez-Lobato, Richard E. Turner, Richard G. Baraniuk, Craig Barton, Simon Peyton Jones, Simon Woodhead, Cheng Zhang
Proceedings of the NeurIPS 2020 Competition and Demonstration Track, PMLR 133:191-205, 2021.

Abstract

This competition concerns educational diagnostic questions, which are pedagogically effective, multiple-choice questions (MCQs) whose distractors embody misconceptions. With a large and ever-increasing number of such questions, it becomes overwhelming for teachers to know which questions are the best ones to use for their students. We thus seek to answer the following question: how can we use data on hundreds of millions of answers to MCQs to drive automatic personalized learning in large-scale learning scenarios where manual personalization is infeasible? Success in using MCQ data at scale helps build more intelligent, personalized learning platforms that ultimately improve the quality of education en masse. To this end, we introduce a new, large-scale, real-world dataset and formulate 4 data mining tasks on MCQs that mimic real learning scenarios and target various aspects of the above question in a competition setting at NeurIPS 2020. We report on our NeurIPS competition in which nearly 400 teams submitted approximately 4000 submissions, with encouragingly diverse and effective approaches to each of our tasks.

Cite this Paper


BibTeX
@InProceedings{pmlr-v133-wang21a,
  title = {Results and Insights from Diagnostic Questions: The NeurIPS 2020 Education Challenge},
  author = {Wang, Zichao and Lamb, Angus and Saveliev, Evgeny and Cameron, Pashmina and Zaykov, Jordan and Hernandez-Lobato, Jose Miguel and Turner, Richard E. and Baraniuk, Richard G. and Barton, Craig and Peyton Jones, Simon and Woodhead, Simon and Zhang, Cheng},
  booktitle = {Proceedings of the NeurIPS 2020 Competition and Demonstration Track},
  pages = {191--205},
  year = {2021},
  editor = {Escalante, Hugo Jair and Hofmann, Katja},
  volume = {133},
  series = {Proceedings of Machine Learning Research},
  month = {06--12 Dec},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v133/wang21a/wang21a.pdf},
  url = {https://proceedings.mlr.press/v133/wang21a.html},
  abstract = {This competition concerns educational diagnostic questions, which are pedagogically effective, multiple-choice questions (MCQs) whose distractors embody misconceptions. With a large and ever-increasing number of such questions, it becomes overwhelming for teachers to know which questions are the best ones to use for their students. We thus seek to answer the following question: how can we use data on hundreds of millions of answers to MCQs to drive automatic personalized learning in large-scale learning scenarios where manual personalization is infeasible? Success in using MCQ data at scale helps build more intelligent, personalized learning platforms that ultimately improve the quality of education en masse. To this end, we introduce a new, large-scale, real-world dataset and formulate 4 data mining tasks on MCQs that mimic real learning scenarios and target various aspects of the above question in a competition setting at NeurIPS 2020. We report on our NeurIPS competition in which nearly 400 teams submitted approximately 4000 submissions, with encouragingly diverse and effective approaches to each of our tasks.}
}
Endnote
%0 Conference Paper %T Results and Insights from Diagnostic Questions: The NeurIPS 2020 Education Challenge %A Zichao Wang %A Angus Lamb %A Evgeny Saveliev %A Pashmina Cameron %A Jordan Zaykov %A Jose Miguel Hernandez-Lobato %A Richard E. Turner %A Richard G. Baraniuk %A Craig Barton %A Simon Peyton Jones %A Simon Woodhead %A Cheng Zhang %B Proceedings of the NeurIPS 2020 Competition and Demonstration Track %C Proceedings of Machine Learning Research %D 2021 %E Hugo Jair Escalante %E Katja Hofmann %F pmlr-v133-wang21a %I PMLR %P 191--205 %U https://proceedings.mlr.press/v133/wang21a.html %V 133 %X This competition concerns educational diagnostic questions, which are pedagogically effective, multiple-choice questions (MCQs) whose distractors embody misconceptions. With a large and ever-increasing number of such questions, it becomes overwhelming for teachers to know which questions are the best ones to use for their students. We thus seek to answer the following question: how can we use data on hundreds of millions of answers to MCQs to drive automatic personalized learning in large-scale learning scenarios where manual personalization is infeasible? Success in using MCQ data at scale helps build more intelligent, personalized learning platforms that ultimately improve the quality of education en masse. To this end, we introduce a new, large-scale, real-world dataset and formulate 4 data mining tasks on MCQs that mimic real learning scenarios and target various aspects of the above question in a competition setting at NeurIPS 2020. We report on our NeurIPS competition in which nearly 400 teams submitted approximately 4000 submissions, with encouragingly diverse and effective approaches to each of our tasks.
APA
Wang, Z., Lamb, A., Saveliev, E., Cameron, P., Zaykov, J., Hernandez-Lobato, J.M., Turner, R.E., Baraniuk, R.G., Barton, C., Peyton Jones, S., Woodhead, S. &amp; Zhang, C. (2021). Results and Insights from Diagnostic Questions: The NeurIPS 2020 Education Challenge. Proceedings of the NeurIPS 2020 Competition and Demonstration Track, in Proceedings of Machine Learning Research 133:191-205. Available from https://proceedings.mlr.press/v133/wang21a.html.