Identifying Homogeneous and Interpretable Groups for Conformal Prediction

Natalia Martinez Gil, Dhaval Patel, Chandra Reddy, Giri Ganapavarapu, Roman Vaculin, Jayant Kalagnanam
Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence, PMLR 244:2471-2485, 2024.

Abstract

Conformal prediction methods are a tool for uncertainty quantification of a model's prediction, providing a model-agnostic and distribution-free statistical wrapper that generates prediction intervals/sets for a given model with finite-sample generalization guarantees. However, these guarantees hold only on average, or conditioned on the output values of the predictor or on a set of predefined groups that a priori may not relate to the prediction task at hand. We propose a method to learn a generalizable partition function of the input space (or representation mapping) into interpretable groups of varying sizes in which the non-conformity scores, a measure of discrepancy between prediction and target, are as homogeneous as possible when conditioned on the group. The learned partition can be integrated with any of the group-conditional conformal approaches to produce conformal sets with group-conditional guarantees on the discovered regions. Since these learned groups are expressed strictly as a function of the input, they can be used for downstream tasks such as data collection or model selection. We show the effectiveness of our method in reducing worst-case group coverage outcomes on a variety of datasets.
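For orientation, the group-conditional conformal step the abstract builds on can be sketched as follows. This is an illustrative example of standard group-conditional split conformal prediction with absolute-residual scores, assuming the groups are already given; it is not the paper's learned-partition method, and all names (`group_conditional_conformal`, the synthetic data) are hypothetical.

```python
import numpy as np

def group_conditional_conformal(scores_cal, groups_cal, alpha=0.1):
    """Compute one conformal quantile per group from calibration
    non-conformity scores (illustrative sketch, not the paper's algorithm)."""
    qhat = {}
    for g in np.unique(groups_cal):
        s = np.sort(scores_cal[groups_cal == g])
        n = len(s)
        # finite-sample-corrected quantile index: ceil((n+1)(1-alpha))
        k = int(np.ceil((n + 1) * (1 - alpha)))
        qhat[g] = s[min(k, n) - 1]
    return qhat

# Synthetic calibration data: group 1 is noisier, so homogeneous
# per-group calibration should give it a wider interval.
rng = np.random.default_rng(0)
groups = rng.integers(0, 2, size=200)
scores = np.abs(rng.normal(scale=np.where(groups == 0, 1.0, 3.0)))
q = group_conditional_conformal(scores, groups, alpha=0.1)
# Prediction interval for a new point x with prediction f(x) in group g:
#   [f(x) - q[g], f(x) + q[g]]
```

Calibrating a separate quantile per group is what yields group-conditional (rather than only marginal) coverage; the paper's contribution is learning the partition that defines these groups so that scores within each group are homogeneous.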

Cite this Paper


BibTeX
@InProceedings{pmlr-v244-martinez-gil24a,
  title     = {Identifying Homogeneous and Interpretable Groups for Conformal Prediction},
  author    = {Martinez Gil, Natalia and Patel, Dhaval and Reddy, Chandra and Ganapavarapu, Giri and Vaculin, Roman and Kalagnanam, Jayant},
  booktitle = {Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence},
  pages     = {2471--2485},
  year      = {2024},
  editor    = {Kiyavash, Negar and Mooij, Joris M.},
  volume    = {244},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v244/main/assets/martinez-gil24a/martinez-gil24a.pdf},
  url       = {https://proceedings.mlr.press/v244/martinez-gil24a.html},
  abstract  = {Conformal prediction methods are a tool for uncertainty quantification of a model’s prediction, providing a model-agnostic and distribution-free statistical wrapper that generates prediction intervals/sets for a given model with finite sample generalization guarantees. However, these guarantees hold only on average, or conditioned on the output values of the predictor or on a set of predefined groups, which a-priori may not relate to the prediction task at hand. We propose a method to learn a generalizable partition function of the input space (or representation mapping) into interpretable groups of varying sizes where the non-conformity scores - a measure of discrepancy between prediction and target - are as homogeneous as possible when conditioned to the group. The learned partition can be integrated with any of the group conditional conformal approaches to produce conformal sets with group conditional guarantees on the discovered regions. Since these learned groups are expressed as strictly a function of the input, they can be used for downstream tasks such as data collection or model selection. We show the effectiveness of our method in reducing worst case group coverage outcomes in a variety of datasets.}
}
Endnote
%0 Conference Paper
%T Identifying Homogeneous and Interpretable Groups for Conformal Prediction
%A Natalia Martinez Gil
%A Dhaval Patel
%A Chandra Reddy
%A Giri Ganapavarapu
%A Roman Vaculin
%A Jayant Kalagnanam
%B Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2024
%E Negar Kiyavash
%E Joris M. Mooij
%F pmlr-v244-martinez-gil24a
%I PMLR
%P 2471--2485
%U https://proceedings.mlr.press/v244/martinez-gil24a.html
%V 244
%X Conformal prediction methods are a tool for uncertainty quantification of a model’s prediction, providing a model-agnostic and distribution-free statistical wrapper that generates prediction intervals/sets for a given model with finite sample generalization guarantees. However, these guarantees hold only on average, or conditioned on the output values of the predictor or on a set of predefined groups, which a-priori may not relate to the prediction task at hand. We propose a method to learn a generalizable partition function of the input space (or representation mapping) into interpretable groups of varying sizes where the non-conformity scores - a measure of discrepancy between prediction and target - are as homogeneous as possible when conditioned to the group. The learned partition can be integrated with any of the group conditional conformal approaches to produce conformal sets with group conditional guarantees on the discovered regions. Since these learned groups are expressed as strictly a function of the input, they can be used for downstream tasks such as data collection or model selection. We show the effectiveness of our method in reducing worst case group coverage outcomes in a variety of datasets.
APA
Martinez Gil, N., Patel, D., Reddy, C., Ganapavarapu, G., Vaculin, R., & Kalagnanam, J. (2024). Identifying Homogeneous and Interpretable Groups for Conformal Prediction. Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 244:2471-2485. Available from https://proceedings.mlr.press/v244/martinez-gil24a.html.