On the Interpretability of Conditional Probability Estimates in the Agnostic Setting

Yihan Gao, Aditya Parameswaran, Jian Peng
Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, PMLR 54:1367-1374, 2017.

Abstract

We study the interpretability of conditional probability estimates for binary classification under the agnostic setting. In this setting, conditional probability estimates do not necessarily reflect the true conditional probabilities. Instead, they satisfy a certain calibration property: among all data points for which the classifier predicts $P(Y = 1|X) = p$, a fraction $p$ actually have label $Y = 1$. For cost-sensitive decision problems, this calibration property provides adequate support for applying Bayes Decision Theory. In this paper, we define a novel measure of the calibration property together with its empirical counterpart, and prove a uniform convergence result between them. This new measure enables us to formally justify the calibration property of conditional probability estimates, and provides new insights into the problem of estimating and calibrating conditional probabilities.
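The calibration property described above can be checked empirically. The sketch below (an illustration only, not the measure defined in the paper) bins a classifier's predicted probabilities and compares each bin's mean prediction to the empirical fraction of positive labels; for a calibrated classifier the two should agree:

```python
def calibration_gaps(probs, labels, n_bins=10):
    """For each non-empty bin of predicted probabilities, return
    (mean predicted p, empirical fraction with Y = 1, bin count)."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, labels):
        i = min(int(p * n_bins), n_bins - 1)  # clamp p = 1.0 into the last bin
        bins[i].append((p, y))
    rows = []
    for b in bins:
        if b:
            mean_p = sum(p for p, _ in b) / len(b)
            frac_pos = sum(y for _, y in b) / len(b)
            rows.append((mean_p, frac_pos, len(b)))
    return rows

# A calibrated (though not necessarily accurate) predictor: half of the
# points predicted 0.5 are positive, and 9 of 10 predicted 0.9 are positive.
probs  = [0.5, 0.5, 0.5, 0.5] + [0.9] * 10
labels = [1, 0, 1, 0] + [1] * 9 + [0]
for mean_p, frac_pos, n in calibration_gaps(probs, labels):
    print(f"mean prediction {mean_p:.2f}  empirical positive rate {frac_pos:.2f}  n={n}")
```

Note that calibration alone says nothing about accuracy: a predictor that outputs the base rate for every point is perfectly calibrated but uninformative, which is precisely why the agnostic setting distinguishes calibration from recovering the true conditional probabilities.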

Cite this Paper


BibTeX
@InProceedings{pmlr-v54-gao17a,
  title     = {{On the Interpretability of Conditional Probability Estimates in the Agnostic Setting}},
  author    = {Gao, Yihan and Parameswaran, Aditya and Peng, Jian},
  booktitle = {Proceedings of the 20th International Conference on Artificial Intelligence and Statistics},
  pages     = {1367--1374},
  year      = {2017},
  editor    = {Singh, Aarti and Zhu, Jerry},
  volume    = {54},
  series    = {Proceedings of Machine Learning Research},
  month     = {20--22 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v54/gao17a/gao17a.pdf},
  url       = {https://proceedings.mlr.press/v54/gao17a.html},
  abstract  = {We study the interpretability of conditional probability estimates for binary classification under the agnostic setting. In this setting, conditional probability estimates do not necessarily reflect the true conditional probabilities. Instead, they satisfy a certain calibration property: among all data points for which the classifier predicts $P(Y = 1|X) = p$, a fraction $p$ actually have label $Y = 1$. For cost-sensitive decision problems, this calibration property provides adequate support for applying Bayes Decision Theory. In this paper, we define a novel measure of the calibration property together with its empirical counterpart, and prove a uniform convergence result between them. This new measure enables us to formally justify the calibration property of conditional probability estimates, and provides new insights into the problem of estimating and calibrating conditional probabilities.}
}
Endnote
%0 Conference Paper
%T On the Interpretability of Conditional Probability Estimates in the Agnostic Setting
%A Yihan Gao
%A Aditya Parameswaran
%A Jian Peng
%B Proceedings of the 20th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2017
%E Aarti Singh
%E Jerry Zhu
%F pmlr-v54-gao17a
%I PMLR
%P 1367--1374
%U https://proceedings.mlr.press/v54/gao17a.html
%V 54
%X We study the interpretability of conditional probability estimates for binary classification under the agnostic setting. In this setting, conditional probability estimates do not necessarily reflect the true conditional probabilities. Instead, they satisfy a certain calibration property: among all data points for which the classifier predicts $P(Y = 1|X) = p$, a fraction $p$ actually have label $Y = 1$. For cost-sensitive decision problems, this calibration property provides adequate support for applying Bayes Decision Theory. In this paper, we define a novel measure of the calibration property together with its empirical counterpart, and prove a uniform convergence result between them. This new measure enables us to formally justify the calibration property of conditional probability estimates, and provides new insights into the problem of estimating and calibrating conditional probabilities.
APA
Gao, Y., Parameswaran, A. & Peng, J. (2017). On the Interpretability of Conditional Probability Estimates in the Agnostic Setting. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 54:1367-1374. Available from https://proceedings.mlr.press/v54/gao17a.html.