Explaining Set-Valued Predictions: SHAP Analysis for Conformal Classification

Ulf Johansson, Cecilia Sönströd, Aicha Maalej
Proceedings of the Fourteenth Symposium on Conformal and Probabilistic Prediction with Applications, PMLR 266:359-378, 2025.

Abstract

Conformal prediction offers a principled framework for uncertainty quantification in classification tasks by outputting prediction sets with guaranteed error control. However, the interpretability of these set-valued predictions, and consequently their practical usefulness, remains underexplored. In this paper, we introduce a method for explaining conformal classification outputs using SHAP (SHapley Additive exPlanations), enabling model-agnostic local and global feature attributions for the p-values associated with individual class labels. This approach allows for rich, class-specific explanations in which feature effects need not be symmetrically distributed across classes. The resulting flexibility supports the detection of ambiguous predictions and potential out-of-distribution instances in a transparent and structured way. While our primary focus is on explaining p-values, we also outline how the same framework can be applied to related targets, including label inclusion, set predictions, and the derived confidence and credibility measures. We demonstrate the method on several benchmark datasets and show that SHAP-enhanced conformal predictors offer improved interpretability by revealing the drivers behind set predictions, thereby providing actionable insights in high-stakes decision-making contexts.
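The pipeline the abstract describes, computing conformal p-values per class label and then attributing them to input features, can be sketched in a minimal, self-contained form. This is not the paper's implementation: the authors use the SHAP framework with model-agnostic explainers, whereas this sketch substitutes a toy nearest-centroid nonconformity score and exact Shapley-value enumeration (feasible only for a handful of features). All data, variable names, and the masking-by-background-mean choice are illustrative assumptions.

```python
import itertools
import math

import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian classes in 3 dimensions (illustrative only; the
# paper's experiments use benchmark datasets and arbitrary underlying models).
d = 3
X_tr = np.vstack([rng.normal(0, 1, (50, d)), rng.normal(3, 1, (50, d))])
y_tr = np.array([0] * 50 + [1] * 50)
X_cal = np.vstack([rng.normal(0, 1, (30, d)), rng.normal(3, 1, (30, d))])
y_cal = np.array([0] * 30 + [1] * 30)

# Nonconformity score: distance to the class centroid (a stand-in for any
# model-derived score).
centroids = np.array([X_tr[y_tr == c].mean(axis=0) for c in (0, 1)])

def score(X, c):
    return np.linalg.norm(np.atleast_2d(X) - centroids[c], axis=-1)

def p_value(x, c):
    """Split-conformal p-value of label c for instance x: rank of the test
    score among class-c calibration scores (higher score = less conforming)."""
    cal = score(X_cal[y_cal == c], c)
    s = score(x, c)[0]
    return (np.sum(cal >= s) + 1) / (len(cal) + 1)

def shapley(f, x, background):
    """Exact Shapley attributions of f at x; 'absent' features are masked
    with the background vector (here the calibration-set mean)."""
    n = len(x)
    phi = np.zeros(n)
    for j in range(n):
        others = [k for k in range(n) if k != j]
        for r in range(n):
            for S in itertools.combinations(others, r):
                w = math.factorial(r) * math.factorial(n - r - 1) / math.factorial(n)
                z = background.copy()
                z[list(S)] = x[list(S)]   # features in S take their true values
                z_j = z.copy()
                z_j[j] = x[j]             # ... plus feature j
                phi[j] += w * (f(z_j) - f(z))
    return phi

x_test = np.array([2.5, 2.8, 3.1])
bg = X_cal.mean(axis=0)

def f0(z):
    return p_value(z, 0)  # the explanation target: p-value for class 0

phi = shapley(f0, x_test, bg)
print("p-value for class 0:", f0(x_test))
print("feature attributions:", phi)
```

By the Shapley efficiency property, the attributions sum to `f0(x_test) - f0(bg)`, so a per-class explanation accounts exactly for how far that label's p-value moves from the background. The same wrapper pattern would apply to the other targets the paper mentions (label inclusion at a given significance level, set size, confidence, credibility) by changing the function handed to the explainer.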

Cite this Paper


BibTeX
@InProceedings{pmlr-v266-johansson25a,
  title     = {Explaining Set-Valued Predictions: SHAP Analysis for Conformal Classification},
  author    = {Johansson, Ulf and S\"{o}nstr\"{o}d, Cecilia and Maalej, Aicha},
  booktitle = {Proceedings of the Fourteenth Symposium on Conformal and Probabilistic Prediction with Applications},
  pages     = {359--378},
  year      = {2025},
  editor    = {Nguyen, Khuong An and Luo, Zhiyuan and Papadopoulos, Harris and L\"{o}fstr\"{o}m, Tuwe and Carlsson, Lars and Bostr\"{o}m, Henrik},
  volume    = {266},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--12 Sep},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v266/main/assets/johansson25a/johansson25a.pdf},
  url       = {https://proceedings.mlr.press/v266/johansson25a.html},
  abstract  = {Conformal prediction offers a principled framework for uncertainty quantification in classification tasks by outputting prediction sets with guaranteed error control. However, the interpretability of these set-valued predictions, and consequently their practical usefulness, remains underexplored. In this paper, we introduce a method for explaining conformal classification outputs using SHAP (SHapley Additive exPlanations), enabling model-agnostic local and global feature attributions for the p-values associated with individual class labels. This approach allows for rich, class-specific explanations in which feature effects need not be symmetrically distributed across classes. The resulting flexibility supports the detection of ambiguous predictions and potential out-of-distribution instances in a transparent and structured way. While our primary focus is on explaining p-values, we also outline how the same framework can be applied to related targets, including label inclusion, set predictions, and the derived confidence and credibility measures. We demonstrate the method on several benchmark datasets and show that SHAP-enhanced conformal predictors offer improved interpretability by revealing the drivers behind set predictions, thereby providing actionable insights in high-stakes decision-making contexts.}
}
Endnote
%0 Conference Paper
%T Explaining Set-Valued Predictions: SHAP Analysis for Conformal Classification
%A Ulf Johansson
%A Cecilia Sönströd
%A Aicha Maalej
%B Proceedings of the Fourteenth Symposium on Conformal and Probabilistic Prediction with Applications
%C Proceedings of Machine Learning Research
%D 2025
%E Khuong An Nguyen
%E Zhiyuan Luo
%E Harris Papadopoulos
%E Tuwe Löfström
%E Lars Carlsson
%E Henrik Boström
%F pmlr-v266-johansson25a
%I PMLR
%P 359--378
%U https://proceedings.mlr.press/v266/johansson25a.html
%V 266
%X Conformal prediction offers a principled framework for uncertainty quantification in classification tasks by outputting prediction sets with guaranteed error control. However, the interpretability of these set-valued predictions, and consequently their practical usefulness, remains underexplored. In this paper, we introduce a method for explaining conformal classification outputs using SHAP (SHapley Additive exPlanations), enabling model-agnostic local and global feature attributions for the p-values associated with individual class labels. This approach allows for rich, class-specific explanations in which feature effects need not be symmetrically distributed across classes. The resulting flexibility supports the detection of ambiguous predictions and potential out-of-distribution instances in a transparent and structured way. While our primary focus is on explaining p-values, we also outline how the same framework can be applied to related targets, including label inclusion, set predictions, and the derived confidence and credibility measures. We demonstrate the method on several benchmark datasets and show that SHAP-enhanced conformal predictors offer improved interpretability by revealing the drivers behind set predictions, thereby providing actionable insights in high-stakes decision-making contexts.
APA
Johansson, U., Sönströd, C. & Maalej, A. (2025). Explaining Set-Valued Predictions: SHAP Analysis for Conformal Classification. Proceedings of the Fourteenth Symposium on Conformal and Probabilistic Prediction with Applications, in Proceedings of Machine Learning Research 266:359-378. Available from https://proceedings.mlr.press/v266/johansson25a.html.
