Identifying regions of trusted predictions

Nivasini Ananthakrishnan, Shai Ben-David, Tosca Lechner, Ruth Urner
Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence, PMLR 161:2125-2134, 2021.

Abstract

Quantifying the probability of a label prediction being correct on a given test point or a given sub-population enables users to better decide how to use and when to trust machine learning derived predictors. In this work, combining aspects of prior work on conformal predictions and selective classification, we provide a unifying framework for confidence requirements that allows for distinguishing between various sources of uncertainty in the learning process as well as various region specifications. We then consider a set of common prior assumptions on the data generating process and show how these allow learning justifiably trusted predictors.

Cite this Paper


BibTeX
@InProceedings{pmlr-v161-ananthakrishnan21a,
  title     = {Identifying regions of trusted predictions},
  author    = {Ananthakrishnan, Nivasini and Ben-David, Shai and Lechner, Tosca and Urner, Ruth},
  booktitle = {Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence},
  pages     = {2125--2134},
  year      = {2021},
  editor    = {de Campos, Cassio and Maathuis, Marloes H.},
  volume    = {161},
  series    = {Proceedings of Machine Learning Research},
  month     = {27--30 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v161/ananthakrishnan21a/ananthakrishnan21a.pdf},
  url       = {https://proceedings.mlr.press/v161/ananthakrishnan21a.html},
  abstract  = {Quantifying the probability of a label prediction being correct on a given test point or a given sub-population enables users to better decide how to use and when to trust machine learning derived predictors. In this work, combining aspects of prior work on conformal predictions and selective classification, we provide a unifying framework for confidence requirements that allows for distinguishing between various sources of uncertainty in the learning process as well as various region specifications. We then consider a set of common prior assumptions on the data generating process and show how these allow learning justifiably trusted predictors.}
}
Endnote
%0 Conference Paper
%T Identifying regions of trusted predictions
%A Nivasini Ananthakrishnan
%A Shai Ben-David
%A Tosca Lechner
%A Ruth Urner
%B Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2021
%E Cassio de Campos
%E Marloes H. Maathuis
%F pmlr-v161-ananthakrishnan21a
%I PMLR
%P 2125--2134
%U https://proceedings.mlr.press/v161/ananthakrishnan21a.html
%V 161
%X Quantifying the probability of a label prediction being correct on a given test point or a given sub-population enables users to better decide how to use and when to trust machine learning derived predictors. In this work, combining aspects of prior work on conformal predictions and selective classification, we provide a unifying framework for confidence requirements that allows for distinguishing between various sources of uncertainty in the learning process as well as various region specifications. We then consider a set of common prior assumptions on the data generating process and show how these allow learning justifiably trusted predictors.
APA
Ananthakrishnan, N., Ben-David, S., Lechner, T. & Urner, R. (2021). Identifying regions of trusted predictions. Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 161:2125-2134. Available from https://proceedings.mlr.press/v161/ananthakrishnan21a.html.