Approximating probabilistic explanations via supermodular minimization

Louenas Bounia, Frederic Koriche
Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence, PMLR 216:216-225, 2023.

Abstract

Explaining in accurate and intelligible terms the predictions made by classifiers is a key challenge of eXplainable Artificial Intelligence (XAI). To this end, an abductive explanation for the predicted label of some data instance is a subset-minimal collection of features such that the restriction of the instance to these features is sufficient to determine the prediction. However, due to cognitive limitations, abductive explanations are often too large to be interpretable. In those cases, we need to reduce the size of abductive explanations, while still determining the predicted label with high probability. In this paper, we show that finding such probabilistic explanations is NP-hard, even for decision trees. In order to circumvent this issue, we investigate the approximability of probabilistic explanations through the lens of supermodularity. We examine both greedy descent and greedy ascent approaches for supermodular minimization, whose approximation guarantees depend on the curvature of the “unnormalized” error function that evaluates the precision of the explanation. Based on various experiments for explaining decision tree predictions, we show that our greedy algorithms provide an efficient alternative to the state-of-the-art constraint optimization method.
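The greedy ascent approach mentioned in the abstract can be illustrated with a generic sketch: starting from the empty set, repeatedly add the feature that most reduces a non-increasing supermodular error function until the error falls below a tolerance. The function names (`greedy_ascent`, `err`) and the toy coverage-style error below are illustrative assumptions for the sketch, not the paper's actual algorithm or error measure.

```python
def greedy_ascent(universe, err, eps):
    """Greedy ascent sketch: grow S one element at a time, always picking
    the element whose addition yields the largest error drop, and stop
    once err(S) <= eps or no single addition helps.  Assumes err is
    non-increasing and supermodular (illustrative, not the paper's code)."""
    S = set()
    while err(S) > eps:
        best, best_err = None, err(S)
        for x in universe - S:
            e = err(S | {x})
            if e < best_err:
                best, best_err = x, e
        if best is None:
            break  # no single element improves the error further
        S.add(best)
    return S


# Toy non-increasing supermodular error: number of items left uncovered
# by the chosen features (covers is a made-up example, not from the paper).
covers = {'a': {1, 2}, 'b': {2, 3}, 'c': {3}}

def err(S):
    covered = set().union(*(covers[x] for x in S)) if S else set()
    return 3 - len(covered)
```

On this toy instance, `greedy_ascent({'a', 'b', 'c'}, err, 0)` returns a two-feature set (e.g. `{'a', 'b'}`) that drives the error to zero, whereas no single feature suffices.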

Cite this Paper


BibTeX
@InProceedings{pmlr-v216-bounia23a,
  title     = {Approximating probabilistic explanations via supermodular minimization},
  author    = {Bounia, Louenas and Koriche, Frederic},
  booktitle = {Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence},
  pages     = {216--225},
  year      = {2023},
  editor    = {Evans, Robin J. and Shpitser, Ilya},
  volume    = {216},
  series    = {Proceedings of Machine Learning Research},
  month     = {31 Jul--04 Aug},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v216/bounia23a/bounia23a.pdf},
  url       = {https://proceedings.mlr.press/v216/bounia23a.html},
  abstract  = {Explaining in accurate and intelligible terms the predictions made by classifiers is a key challenge of eXplainable Artificial Intelligence (XAI). To this end, an abductive explanation for the predicted label of some data instance is a subset-minimal collection of features such that the restriction of the instance to these features is sufficient to determine the prediction. However, due to cognitive limitations, abductive explanations are often too large to be interpretable. In those cases, we need to reduce the size of abductive explanations, while still determining the predicted label with high probability. In this paper, we show that finding such probabilistic explanations is NP-hard, even for decision trees. In order to circumvent this issue, we investigate the approximability of probabilistic explanations through the lens of supermodularity. We examine both greedy descent and greedy ascent approaches for supermodular minimization, whose approximation guarantees depend on the curvature of the “unnormalized” error function that evaluates the precision of the explanation. Based on various experiments for explaining decision tree predictions, we show that our greedy algorithms provide an efficient alternative to the state-of-the-art constraint optimization method.}
}
Endnote
%0 Conference Paper
%T Approximating probabilistic explanations via supermodular minimization
%A Louenas Bounia
%A Frederic Koriche
%B Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2023
%E Robin J. Evans
%E Ilya Shpitser
%F pmlr-v216-bounia23a
%I PMLR
%P 216--225
%U https://proceedings.mlr.press/v216/bounia23a.html
%V 216
%X Explaining in accurate and intelligible terms the predictions made by classifiers is a key challenge of eXplainable Artificial Intelligence (XAI). To this end, an abductive explanation for the predicted label of some data instance is a subset-minimal collection of features such that the restriction of the instance to these features is sufficient to determine the prediction. However, due to cognitive limitations, abductive explanations are often too large to be interpretable. In those cases, we need to reduce the size of abductive explanations, while still determining the predicted label with high probability. In this paper, we show that finding such probabilistic explanations is NP-hard, even for decision trees. In order to circumvent this issue, we investigate the approximability of probabilistic explanations through the lens of supermodularity. We examine both greedy descent and greedy ascent approaches for supermodular minimization, whose approximation guarantees depend on the curvature of the “unnormalized” error function that evaluates the precision of the explanation. Based on various experiments for explaining decision tree predictions, we show that our greedy algorithms provide an efficient alternative to the state-of-the-art constraint optimization method.
APA
Bounia, L. & Koriche, F. (2023). Approximating probabilistic explanations via supermodular minimization. Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 216:216-225. Available from https://proceedings.mlr.press/v216/bounia23a.html.
