Coverage vs Acceptance-Error Curves for Conformal Classification Models

Evgueni Smirnov
Proceedings of the Twelfth Symposium on Conformal and Probabilistic Prediction with Applications, PMLR 204:534-545, 2023.

Abstract

In this paper, we introduce coverage vs acceptance-error graphs as a visualization tool for comparing the performance of conformal predictors at a given significance level $\epsilon$ for any $k$-class classification task with $k \geq 2$. We show that by plotting the performance of each predictor over significance levels $\epsilon \in [0, 1]$, we obtain a coverage vs acceptance-error curve for that predictor. The area under this curve represents the probability that the p-value of a randomly chosen true class-label of any test instance is greater than the p-value of any false class-label for the same or any other test instance. This area can be used as a metric of the predictive efficiency of a conformal predictor once its validity has been established. The new metric is unique in that it is related to the empirical coverage rate, and extensive experiments confirm both its utility and its difference from existing predictive-efficiency criteria.
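For readers who want to experiment with the construction, the following is a minimal sketch based on one reading of the abstract: conformal p-values are assumed given as an n × k matrix, coverage is taken as the fraction of true class-labels accepted at level $\epsilon$ (p-value above $\epsilon$), and acceptance error as the fraction of false class-labels accepted. The function names and this exact acceptance-error definition are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def coverage_acceptance_curve(p_values, y_true, epsilons):
    """Coverage and acceptance error at each significance level epsilon.

    p_values : (n, k) array of conformal p-values, one per test instance
               and class-label (assumed precomputed by some conformal predictor).
    y_true   : (n,) array of true class-label indices.
    """
    n, k = p_values.shape
    true_p = p_values[np.arange(n), y_true]          # p-values of true labels
    false_mask = np.ones((n, k), dtype=bool)
    false_mask[np.arange(n), y_true] = False
    false_p = p_values[false_mask]                   # p-values of all false labels
    coverage = np.array([(true_p > eps).mean() for eps in epsilons])
    acc_error = np.array([(false_p > eps).mean() for eps in epsilons])
    return acc_error, coverage

def auc_true_vs_false(p_values, y_true):
    """Probability that the p-value of a randomly chosen true class-label
    exceeds that of a randomly chosen false class-label, ties counted as 1/2
    (a Mann-Whitney-U-style statistic over the two pools of p-values)."""
    n, k = p_values.shape
    true_p = p_values[np.arange(n), y_true]
    false_mask = np.ones((n, k), dtype=bool)
    false_mask[np.arange(n), y_true] = False
    false_p = p_values[false_mask]
    diff = true_p[:, None] - false_p[None, :]        # all true-vs-false pairs
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()
```

Under these assumptions, sweeping $\epsilon$ over [0, 1] and plotting coverage against acceptance error traces the curve, and auc_true_vs_false gives its area directly, analogous to the way the area under an ROC curve equals the probability that a positive instance outranks a negative one.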

Cite this Paper


BibTeX
@InProceedings{pmlr-v204-smirnov23a,
  title     = {Coverage vs Acceptance-Error Curves for Conformal Classification Models},
  author    = {Smirnov, Evgueni},
  booktitle = {Proceedings of the Twelfth Symposium on Conformal and Probabilistic Prediction with Applications},
  pages     = {534--545},
  year      = {2023},
  editor    = {Papadopoulos, Harris and Nguyen, Khuong An and Boström, Henrik and Carlsson, Lars},
  volume    = {204},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--15 Sep},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v204/smirnov23a/smirnov23a.pdf},
  url       = {https://proceedings.mlr.press/v204/smirnov23a.html}
}
APA
Smirnov, E. (2023). Coverage vs Acceptance-Error Curves for Conformal Classification Models. Proceedings of the Twelfth Symposium on Conformal and Probabilistic Prediction with Applications, in Proceedings of Machine Learning Research 204:534-545. Available from https://proceedings.mlr.press/v204/smirnov23a.html.
