Interpretable and specialized conformal predictors
Proceedings of the Eighth Symposium on Conformal and Probabilistic Prediction and Applications, PMLR 105:322, 2019.
Abstract
In real-world scenarios, interpretable models are often required to explain predictions,
and to allow for inspection and analysis of the model.
The overall purpose of oracle coaching is to produce highly accurate, but interpretable, models
optimized for a specific test set.
Oracle coaching is applicable to the very common scenario
where explanations and insights are needed for a specific batch of predictions,
and the input vectors for this test set are available when building the predictive model.
In this paper, oracle coaching is used for generating underlying classifiers for conformal prediction.
The resulting conformal classifiers output valid label sets,
i.e., the error rate on the test data is bounded by a preset significance level,
as long as the labeled data used for calibration is exchangeable with the test set.
Since validity is guaranteed for all conformal predictors, the key performance metric is efficiency,
i.e., the size of the label sets, where smaller sets are more informative.
The main contribution of this paper is the design of setups
ensuring that when oracle-coached decision trees,
which by definition utilize knowledge about the test data,
are used as underlying models for conformal classifiers,
the exchangeability between calibration and test data is maintained.
Consequently, the resulting conformal classifiers retain the validity guarantees.
In the experimentation, using a large number of publicly available data sets,
the validity of the suggested setups is empirically demonstrated.
Furthermore, the results show that the more accurate underlying models produced by oracle coaching
also improve the efficiency of the corresponding conformal classifiers.
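To make the mechanism described in the abstract concrete, the following is a minimal sketch of an inductive (split) conformal classifier: nonconformity scores are computed on a held-out calibration set, and a test object's label set contains every label whose p-value exceeds the chosen significance level. The function names, the particular nonconformity measure (one minus the model's class probability), and the toy numbers are illustrative assumptions, not taken from the paper.

```python
def nonconformity(prob_for_label):
    # Higher score means "stranger": 1 minus the underlying model's
    # probability for the candidate label (an assumed, common choice).
    return 1.0 - prob_for_label

def conformal_label_set(cal_scores, test_probs, significance):
    """Return the set of labels whose p-value exceeds the significance level.

    cal_scores: nonconformity scores of the calibration examples
                (computed with their true labels).
    test_probs: dict mapping each candidate label to the underlying
                model's probability for the test object.
    """
    n = len(cal_scores)
    label_set = set()
    for label, p in test_probs.items():
        s = nonconformity(p)
        # p-value: fraction of calibration scores at least as nonconforming
        # as the test score; the +1 terms count the test example itself.
        p_value = (sum(c >= s for c in cal_scores) + 1) / (n + 1)
        if p_value > significance:
            label_set.add(label)
    return label_set

# Toy usage: 9 calibration scores and one test object with 3 candidate labels.
cal = [0.05, 0.10, 0.15, 0.20, 0.30, 0.40, 0.55, 0.70, 0.90]
probs = {"A": 0.85, "B": 0.10, "C": 0.05}
print(conformal_label_set(cal, probs, significance=0.20))  # prints {'A'}
```

Under exchangeability of calibration and test data, label sets built this way miss the true label with probability at most the significance level, which is the validity guarantee the paper relies on; efficiency is then measured by how small these sets are.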