Coverage vs Acceptance-Error Curves for Conformal Classification Models
Proceedings of the Twelfth Symposium on Conformal
and Probabilistic Prediction with Applications, PMLR 204:534-545, 2023.
Abstract
In this paper, we introduce coverage vs
acceptance-error graphs as a visualization tool for
comparing the performance of conformal predictors at
a given significance level $\epsilon$ for any
$k$-class classification task with $k \geq 2$. We show
that by plotting the performance of each predictor
across significance levels $\epsilon \in [0, 1]$, we
obtain a coverage vs acceptance-error curve for that
predictor. The area
under this curve represents the probability that the
p-value of a randomly chosen true class-label of any
test instance is greater than the p-value of any
false class-label for the same or any other test
instance. This area can be used as a metric of the
predictive efficiency of a conformal predictor once
its validity has been established. The new metric is
unique in that it is related to the empirical
coverage rate, and extensive experiments confirmed
both its utility and its difference from existing
predictive-efficiency criteria.
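
To make the construction concrete, the following is a minimal sketch of how such a curve and its area might be computed. It assumes the conformal p-values of a test set are arranged in an n-by-k NumPy array (one column per class) with the true class indices in a separate vector; the function names, the grid of significance levels, and the reading of "acceptance error" as the fraction of false class-labels whose p-value exceeds the significance level are all assumptions of this sketch, not definitions taken from the paper.

```python
import numpy as np


def split_p_values(p_values, true_labels):
    """Split an (n, k) array of conformal p-values into the p-values of the
    true class-labels and the p-values of all false class-labels."""
    n, k = p_values.shape
    mask_true = np.zeros((n, k), dtype=bool)
    mask_true[np.arange(n), true_labels] = True
    return p_values[mask_true], p_values[~mask_true]


def coverage_acceptance_error_curve(true_p, false_p, epsilons=None):
    """Coverage and acceptance-error rates swept over significance levels.

    A class-label is accepted into the prediction set at level eps when its
    p-value exceeds eps; coverage is the fraction of accepted true labels,
    acceptance error the fraction of accepted false labels (this reading of
    the abstract is an assumption of the sketch)."""
    if epsilons is None:
        epsilons = np.linspace(0.0, 1.0, 201)
    coverage = np.array([(true_p > eps).mean() for eps in epsilons])
    accept_err = np.array([(false_p > eps).mean() for eps in epsilons])
    return accept_err, coverage


def area_under_curve(true_p, false_p):
    """Mann-Whitney-style estimate of the area: the probability that the
    p-value of a randomly chosen true class-label exceeds the p-value of a
    randomly chosen false class-label, with ties counted as one half."""
    diff = true_p[:, None] - false_p[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()


# Purely illustrative usage with synthetic p-values.
rng = np.random.default_rng(0)
n, k = 200, 5
p_values = rng.uniform(size=(n, k))
true_labels = rng.integers(0, k, size=n)

true_p, false_p = split_p_values(p_values, true_labels)
accept_err, coverage = coverage_acceptance_error_curve(true_p, false_p)
print("area under coverage vs acceptance-error curve:",
      area_under_curve(true_p, false_p))
```

The pairwise estimate in area_under_curve is used instead of numerically integrating the swept curve because, under the assumptions above, it directly matches the probabilistic interpretation of the area stated in the abstract.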