On the Calibration of Aggregated Conformal Predictors
Proceedings of the Sixth Workshop on Conformal and Probabilistic Prediction and Applications, PMLR 60:154-173, 2017.
Abstract
Conformal prediction is a learning framework that produces models that attach to each of their predictions a statistically valid measure of confidence.
These models are typically constructed on top of traditional machine learning algorithms.
An important result of conformal prediction theory is that the models produced are provably valid under relatively weak assumptions—in particular,
their validity is independent of the specific underlying learning algorithm on which they are based.
Since validity is automatic, much of the research on conformal predictors has focused on improving their informational and computational efficiency.
As part of the efforts in constructing efficient conformal predictors, aggregated conformal predictors were developed,
drawing inspiration from the field of classification and regression ensembles.
Unlike early definitions of conformal prediction procedures, the validity of aggregated conformal predictors is not fully understood: while it has been shown
that they can attain exact validity empirically under certain circumstances,
their theoretical validity is conditional on additional assumptions that require further clarification.
In this paper, we show why validity is not automatic for aggregated conformal predictors,
and provide a revised definition of aggregated conformal predictors that gains approximate validity
conditional on properties of the underlying learning algorithm.
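
To make the constructions referenced above concrete, the sketch below outlines an inductive conformal classifier and an aggregated variant that averages p-values over several random train/calibration splits. The choice of scikit-learn's RandomForestClassifier as the underlying model, the nonconformity score 1 - P_hat(y | x), and p-value averaging as the aggregation rule are illustrative assumptions for this sketch, not the exact procedure analysed in the paper.

```python
# A minimal sketch, assuming scikit-learn, a RandomForestClassifier as the
# underlying model, and 1 - predicted class probability as the nonconformity
# score; these choices are illustrative, not the paper's prescribed setup.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split


def icp_p_values(X_train, y_train, X_test, classes, calib_frac=0.3, seed=0):
    """One inductive conformal classifier built on a single random split."""
    # Split into a proper training set and a calibration set (stratified so
    # that every class is represented in both parts).
    X_prop, X_cal, y_prop, y_cal = train_test_split(
        X_train, y_train, test_size=calib_frac,
        stratify=y_train, random_state=seed)
    model = RandomForestClassifier(n_estimators=100, random_state=seed)
    model.fit(X_prop, y_prop)
    col = {c: j for j, c in enumerate(model.classes_)}

    # Calibration nonconformity scores: 1 - P_hat(true label | x).
    cal_proba = model.predict_proba(X_cal)
    alpha_cal = 1.0 - cal_proba[np.arange(len(y_cal)),
                                [col[y] for y in y_cal]]

    # Test p-values: p(y) = (#{alpha_cal >= alpha(x, y)} + 1) / (n_cal + 1).
    test_proba = model.predict_proba(X_test)
    p = np.empty((len(X_test), len(classes)))
    for j, c in enumerate(classes):
        alpha_test = 1.0 - test_proba[:, col[c]]
        p[:, j] = (np.sum(alpha_cal[None, :] >= alpha_test[:, None], axis=1)
                   + 1) / (len(alpha_cal) + 1)
    return p


def acp_p_values(X_train, y_train, X_test, classes, n_models=10):
    """Aggregated conformal predictor: average p-values over several ICPs
    trained on different random train/calibration splits."""
    return np.mean([icp_p_values(X_train, y_train, X_test, classes, seed=k)
                    for k in range(n_models)], axis=0)


def predict_sets(p, classes, epsilon=0.1):
    """Prediction sets at significance level epsilon: every label whose
    p-value exceeds epsilon is included."""
    return [[c for j, c in enumerate(classes) if row[j] > epsilon]
            for row in p]
```

In this sketch the aggregation step simply averages the per-split p-values; as the paper argues, it is precisely this aggregation that prevents the automatic validity guarantee of a single conformal predictor from carrying over unchanged.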