Sparse Activations as Conformal Predictors

Margarida M Campos, João Cálem, Sophia Sklaviadis, Mario A. T. Figueiredo, Andre Martins
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:2674-2682, 2025.

Abstract

Conformal prediction is a distribution-free framework for uncertainty quantification that replaces point predictions with sets, offering marginal coverage guarantees (i.e., ensuring that the sets contain the true label with a specified probability, in expectation). In this paper, we uncover a novel connection between conformal prediction and sparse "softmax-like" transformations, such as sparsemax and $\gamma$-entmax (with $\gamma> 1$), which assign nonzero probability only to some labels. We introduce new non-conformity scores for classification that make the calibration process correspond to the widely used temperature scaling method. At test time, applying these sparse transformations with the calibrated temperature leads to a support set (i.e., the set of labels with nonzero probability) that automatically inherits the coverage guarantees of conformal prediction. Through experiments on computer vision and text classification benchmarks, we demonstrate that the proposed method achieves competitive results in terms of coverage, efficiency, and adaptiveness compared to standard non-conformity scores based on softmax.
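The recipe described in the abstract can be illustrated with a small, self-contained sketch. The Python/NumPy code below is not the authors' implementation and does not reproduce their exact non-conformity scores: it assumes the sparsemax case (gamma-entmax with gamma = 2), approximates the conformal calibration of the temperature with an illustrative grid search, and uses hypothetical function names.

import numpy as np

def sparsemax(z):
    """Sparsemax of a 1-D logit vector: Euclidean projection onto the simplex."""
    z_sorted = np.sort(z)[::-1]                  # logits in descending order
    cumsum = np.cumsum(z_sorted)
    k = np.arange(1, len(z) + 1)
    keep = 1 + k * z_sorted > cumsum             # labels retained by the projection
    k_star = int(keep.sum())
    tau = (cumsum[k_star - 1] - 1) / k_star      # threshold of the projection
    return np.maximum(z - tau, 0.0)

def calibrate_temperature(cal_logits, cal_labels, alpha=0.1,
                          temperatures=np.logspace(-2, 2, 200)):
    """Smallest grid temperature whose sparsemax support sets cover the true
    calibration labels at the conformal level ceil((n+1)(1-alpha))/n.
    Grid search is a simplification of the exact conformal quantile."""
    n = len(cal_labels)
    needed = int(np.ceil((n + 1) * (1 - alpha)))
    for t in temperatures:                       # support grows as t increases
        covered = sum(sparsemax(z / t)[y] > 0
                      for z, y in zip(cal_logits, cal_labels))
        if covered >= needed:
            return t
    return temperatures[-1]

def prediction_set(logits, t):
    """Prediction set = support of sparsemax at the calibrated temperature."""
    return np.nonzero(sparsemax(logits / t) > 0)[0]

At test time, prediction_set(test_logits, t_hat) with the calibrated t_hat returns the indices of the labels given nonzero probability, which is the conformal prediction set in this sketch.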

Cite this Paper


BibTeX
@InProceedings{pmlr-v258-campos25a,
  title     = {Sparse Activations as Conformal Predictors},
  author    = {Campos, Margarida M and C{\'a}lem, Jo{\~a}o and Sklaviadis, Sophia and Figueiredo, Mario A. T. and Martins, Andre},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages     = {2674--2682},
  year      = {2025},
  editor    = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume    = {258},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--05 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/campos25a/campos25a.pdf},
  url       = {https://proceedings.mlr.press/v258/campos25a.html}
}
Endnote
%0 Conference Paper
%T Sparse Activations as Conformal Predictors
%A Margarida M Campos
%A João Cálem
%A Sophia Sklaviadis
%A Mario A. T. Figueiredo
%A Andre Martins
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-campos25a
%I PMLR
%P 2674--2682
%U https://proceedings.mlr.press/v258/campos25a.html
%V 258
APA
Campos, M.M., Cálem, J., Sklaviadis, S., Figueiredo, M.A.T. & Martins, A. (2025). Sparse Activations as Conformal Predictors. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:2674-2682. Available from https://proceedings.mlr.press/v258/campos25a.html.