Teaching Uncertainty Quantification in Machine Learning through Use Cases

Matias Valdenegro-Toro
Proceedings of the Second Teaching Machine Learning and Artificial Intelligence Workshop, PMLR 170:57-61, 2022.

Abstract

Uncertainty in machine learning is generally not taught in machine learning course curricula. In this paper we propose a short curriculum for a course about uncertainty in machine learning, and complement the course with a selection of use cases aimed at triggering discussion and letting students experiment with the concepts of uncertainty in a programming setting. Our use cases cover the concept of output uncertainty, Bayesian neural networks and weight distributions, sources of uncertainty, and out-of-distribution detection. We expect that this curriculum and set of use cases will motivate the community to adopt these important concepts into courses on safety in AI.
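
To give a flavour of the kind of programming exercise these use cases suggest, the following is a minimal sketch (not taken from the paper; the library choice, network architecture, and the use of Monte Carlo dropout are assumptions) of how students could explore output uncertainty on a toy regression problem, where dropout is kept active at prediction time and several stochastic forward passes are aggregated:

# Minimal sketch (assumed, not from the paper): output uncertainty on a toy
# 1D regression task via Monte Carlo dropout.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
# Training data with a gap around x = 0, so epistemic uncertainty is visible.
x = np.concatenate([rng.uniform(-3, -1, 100), rng.uniform(1, 3, 100)])[:, None]
y = np.sin(x) + 0.1 * rng.normal(size=x.shape)

inputs = tf.keras.Input(shape=(1,))
h = tf.keras.layers.Dense(64, activation="relu")(inputs)
h = tf.keras.layers.Dropout(0.2)(h, training=True)  # dropout stays on at test time
h = tf.keras.layers.Dense(64, activation="relu")(h)
h = tf.keras.layers.Dropout(0.2)(h, training=True)
outputs = tf.keras.layers.Dense(1)(h)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=200, verbose=0)

# 50 stochastic forward passes give a predictive mean and standard deviation.
x_test = np.linspace(-4, 4, 200)[:, None]
preds = np.stack([model(x_test, training=True).numpy().squeeze() for _ in range(50)])
mean, std = preds.mean(axis=0), preds.std(axis=0)
# The standard deviation should grow inside the data gap and outside the
# training range, which makes a natural discussion point for students.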

Cite this Paper


BibTeX
@InProceedings{pmlr-v170-valdenegro-toro22a,
  title     = {Teaching Uncertainty Quantification in Machine Learning through Use Cases},
  author    = {Valdenegro-Toro, Matias},
  booktitle = {Proceedings of the Second Teaching Machine Learning and Artificial Intelligence Workshop},
  pages     = {57--61},
  year      = {2022},
  editor    = {Kinnaird, Katherine M. and Steinbach, Peter and Guhr, Oliver},
  volume    = {170},
  series    = {Proceedings of Machine Learning Research},
  month     = {08--13 Sep},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v170/valdenegro-toro22a/valdenegro-toro22a.pdf},
  url       = {https://proceedings.mlr.press/v170/valdenegro-toro22a.html}
}
APA
Valdenegro-Toro, M. (2022). Teaching Uncertainty Quantification in Machine Learning through Use Cases. In Proceedings of the Second Teaching Machine Learning and Artificial Intelligence Workshop, Proceedings of Machine Learning Research 170:57-61. Available from https://proceedings.mlr.press/v170/valdenegro-toro22a.html.