Probabilistic Explanations for Regression Models

Frédéric Koriche, Jean-Marie Lagniez, Chi Tran
Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, PMLR 286:2345-2362, 2025.

Abstract

Formal explainability is an emerging field that aims to provide mathematically guaranteed explanations for the predictions made by machine learning models. Recent work in this area focuses on computing “probabilistic explanations” for the predictions that classifiers make on specific data instances. The goal of this paper is to extend the concept of probabilistic explanations to the regression setting, treating the target regressor as a black-box function. The class of probabilistic explanations consists of linear functions that meet a sparsity constraint, alongside a hyperplane constraint defined for the data instance being explained. While minimizing the precision error of such explanations is generally $\text{NP}^{\text{PP}}$-hard, we demonstrate that it can be approximated by substituting the precision measure with a fidelity measure. Optimal explanations based on this fidelity objective can be effectively approached using Mixed Integer Programming (MIP). Moreover, we show that for certain distributions used to define the precision measure, explanations with approximation guarantees can be computed in polynomial time using a variant of Iterative Hard Thresholding (IHT). Experiments conducted on various datasets indicate that both the MIP and IHT approaches outperform the state-of-the-art LIME and MAPLE explainers.
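
The IHT route lends itself to a compact illustration. The Python sketch below fits a k-sparse linear explanation to a black-box regressor f around an instance x0 using plain Iterative Hard Thresholding: a least-squares gradient step followed by keeping the k largest-magnitude coefficients. This is a generic sketch under stated assumptions, not the authors' algorithm: the sampling distribution (`sampler`), the sample and iteration budgets, and the trick of enforcing the hyperplane constraint w·x0 + b = f(x0) by centering the data at the instance are illustrative choices; the paper's IHT variant and its approximation guarantees may rest on different details.

```python
import numpy as np

def iht_explanation(f, x0, sampler, k, n_samples=2000, n_iters=200):
    """Sketch: k-sparse linear explanation of a black-box regressor f near x0.

    Assumption (not from the paper): the hyperplane constraint
    w @ x0 + b = f(x0) is enforced by centering samples at (x0, f(x0)).
    """
    X = np.array([sampler() for _ in range(n_samples)])  # perturbations of x0
    y = np.array([f(x) for x in X])                      # black-box queries
    Xc, yc = X - x0, y - f(x0)                           # center at the instance

    # Step size 1/L, where L is the Lipschitz constant of the gradient
    lr = n_samples / np.linalg.norm(Xc, 2) ** 2

    w = np.zeros(X.shape[1])
    for _ in range(n_iters):
        grad = Xc.T @ (Xc @ w - yc) / n_samples          # least-squares gradient
        w = w - lr * grad
        # Hard thresholding: keep the k largest-magnitude coordinates
        support = np.argsort(np.abs(w))[-k:]
        mask = np.zeros_like(w)
        mask[support] = 1.0
        w = w * mask
    b = f(x0) - w @ x0                                   # intercept from the constraint
    return w, b

# Toy usage: a 3-sparse explanation of a nonlinear black box at x0 = 0
rng = np.random.default_rng(0)
f = lambda x: np.sin(x[0]) + 2.0 * x[1] - 0.5 * x[3]
x0 = np.zeros(10)
sampler = lambda: x0 + 0.1 * rng.standard_normal(10)
w, b = iht_explanation(f, x0, sampler, k=3)
```

On the toy example, the recovered support concentrates on coordinates 0, 1, and 3, matching the local gradient of f at x0; the MIP formulation mentioned in the abstract would instead optimize the fidelity objective exactly over the same sparse linear class.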

Cite this Paper


BibTeX
@InProceedings{pmlr-v286-koriche25a,
  title     = {Probabilistic Explanations for Regression Models},
  author    = {Koriche, Fr\'{e}d\'{e}ric and Lagniez, Jean-Marie and Tran, Chi},
  booktitle = {Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence},
  pages     = {2345--2362},
  year      = {2025},
  editor    = {Chiappa, Silvia and Magliacane, Sara},
  volume    = {286},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--25 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v286/main/assets/koriche25a/koriche25a.pdf},
  url       = {https://proceedings.mlr.press/v286/koriche25a.html}
}
Endnote
%0 Conference Paper
%T Probabilistic Explanations for Regression Models
%A Frédéric Koriche
%A Jean-Marie Lagniez
%A Chi Tran
%B Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2025
%E Silvia Chiappa
%E Sara Magliacane
%F pmlr-v286-koriche25a
%I PMLR
%P 2345--2362
%U https://proceedings.mlr.press/v286/koriche25a.html
%V 286
APA
Koriche, F., Lagniez, J.-M., & Tran, C. (2025). Probabilistic Explanations for Regression Models. Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 286:2345-2362. Available from https://proceedings.mlr.press/v286/koriche25a.html.