Is Epistemic Uncertainty Faithfully Represented by Evidential Deep Learning Methods?

Mira Juergens, Nis Meinert, Viktor Bengs, Eyke Hüllermeier, Willem Waegeman
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:22624-22642, 2024.

Abstract

Trustworthy ML systems should not only return accurate predictions, but also a reliable representation of their uncertainty. Bayesian methods are commonly used to quantify both aleatoric and epistemic uncertainty, but alternative approaches, such as evidential deep learning methods, have become popular in recent years. The latter group of methods in essence extends empirical risk minimization (ERM) to predict second-order probability distributions over outcomes, from which measures of epistemic (and aleatoric) uncertainty can be extracted. This paper presents novel theoretical insights into evidential deep learning, highlighting the difficulties in optimizing second-order loss functions and interpreting the resulting epistemic uncertainty measures. With a systematic setup that covers a wide range of approaches for classification, regression and counts, it provides novel insights into issues of identifiability and convergence in second-order loss minimization, and the relative (rather than absolute) nature of epistemic uncertainty measures.

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-juergens24a,
  title     = {Is Epistemic Uncertainty Faithfully Represented by Evidential Deep Learning Methods?},
  author    = {Juergens, Mira and Meinert, Nis and Bengs, Viktor and H\"{u}llermeier, Eyke and Waegeman, Willem},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {22624--22642},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/juergens24a/juergens24a.pdf},
  url       = {https://proceedings.mlr.press/v235/juergens24a.html},
  abstract  = {Trustworthy ML systems should not only return accurate predictions, but also a reliable representation of their uncertainty. Bayesian methods are commonly used to quantify both aleatoric and epistemic uncertainty, but alternative approaches, such as evidential deep learning methods, have become popular in recent years. The latter group of methods in essence extends empirical risk minimization (ERM) for predicting second-order probability distributions over outcomes, from which measures of epistemic (and aleatoric) uncertainty can be extracted. This paper presents novel theoretical insights of evidential deep learning, highlighting the difficulties in optimizing second-order loss functions and interpreting the resulting epistemic uncertainty measures. With a systematic setup that covers a wide range of approaches for classification, regression and counts, it provides novel insights into issues of identifiability and convergence in second-order loss minimization, and the relative (rather than absolute) nature of epistemic uncertainty measures.}
}
Endnote
%0 Conference Paper
%T Is Epistemic Uncertainty Faithfully Represented by Evidential Deep Learning Methods?
%A Mira Juergens
%A Nis Meinert
%A Viktor Bengs
%A Eyke Hüllermeier
%A Willem Waegeman
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-juergens24a
%I PMLR
%P 22624--22642
%U https://proceedings.mlr.press/v235/juergens24a.html
%V 235
%X Trustworthy ML systems should not only return accurate predictions, but also a reliable representation of their uncertainty. Bayesian methods are commonly used to quantify both aleatoric and epistemic uncertainty, but alternative approaches, such as evidential deep learning methods, have become popular in recent years. The latter group of methods in essence extends empirical risk minimization (ERM) for predicting second-order probability distributions over outcomes, from which measures of epistemic (and aleatoric) uncertainty can be extracted. This paper presents novel theoretical insights of evidential deep learning, highlighting the difficulties in optimizing second-order loss functions and interpreting the resulting epistemic uncertainty measures. With a systematic setup that covers a wide range of approaches for classification, regression and counts, it provides novel insights into issues of identifiability and convergence in second-order loss minimization, and the relative (rather than absolute) nature of epistemic uncertainty measures.
APA
Juergens, M., Meinert, N., Bengs, V., Hüllermeier, E. & Waegeman, W. (2024). Is Epistemic Uncertainty Faithfully Represented by Evidential Deep Learning Methods? Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:22624-22642. Available from https://proceedings.mlr.press/v235/juergens24a.html.