Tackling the XAI Disagreement Problem with Regional Explanations

Gabriel Laberge, Yann Batiste Pequignot, Mario Marchand, Foutse Khomh
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:2017-2025, 2024.

Abstract

The XAI Disagreement Problem concerns the fact that various explainability methods yield different local/global insights on model behavior. Thus, given the lack of ground truth in explainability, practitioners are left wondering “Which explanation should I believe?”. In this work, we approach the Disagreement Problem from the point of view of Functional Decomposition (FD). First, we demonstrate that many XAI techniques disagree because they handle feature interactions differently. Secondly, we reduce interactions locally by fitting a so-called FD-Tree, which partitions the input space into regions where the model is approximately additive. Thus instead of providing global explanations aggregated over the whole dataset, we advocate reporting the FD-Tree structure as well as the regional explanations extracted from its leaves. The beneficial effects of FD-Trees on the Disagreement Problem are demonstrated on toy and real datasets.

Cite this Paper
BibTeX
@InProceedings{pmlr-v238-laberge24a,
  title = {Tackling the {XAI} Disagreement Problem with Regional Explanations},
  author = {Laberge, Gabriel and Batiste Pequignot, Yann and Marchand, Mario and Khomh, Foutse},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages = {2017--2025},
  year = {2024},
  editor = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume = {238},
  series = {Proceedings of Machine Learning Research},
  month = {02--04 May},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v238/laberge24a/laberge24a.pdf},
  url = {https://proceedings.mlr.press/v238/laberge24a.html},
  abstract = {The XAI Disagreement Problem concerns the fact that various explainability methods yield different local/global insights on model behavior. Thus, given the lack of ground truth in explainability, practitioners are left wondering “Which explanation should I believe?”. In this work, we approach the Disagreement Problem from the point of view of Functional Decomposition (FD). First, we demonstrate that many XAI techniques disagree because they handle feature interactions differently. Secondly, we reduce interactions locally by fitting a so-called FD-Tree, which partitions the input space into regions where the model is approximately additive. Thus instead of providing global explanations aggregated over the whole dataset, we advocate reporting the FD-Tree structure as well as the regional explanations extracted from its leaves. The beneficial effects of FD-Trees on the Disagreement Problem are demonstrated on toy and real datasets.}
}
Endnote
%0 Conference Paper
%T Tackling the XAI Disagreement Problem with Regional Explanations
%A Gabriel Laberge
%A Yann Batiste Pequignot
%A Mario Marchand
%A Foutse Khomh
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-laberge24a
%I PMLR
%P 2017--2025
%U https://proceedings.mlr.press/v238/laberge24a.html
%V 238
%X The XAI Disagreement Problem concerns the fact that various explainability methods yield different local/global insights on model behavior. Thus, given the lack of ground truth in explainability, practitioners are left wondering “Which explanation should I believe?”. In this work, we approach the Disagreement Problem from the point of view of Functional Decomposition (FD). First, we demonstrate that many XAI techniques disagree because they handle feature interactions differently. Secondly, we reduce interactions locally by fitting a so-called FD-Tree, which partitions the input space into regions where the model is approximately additive. Thus instead of providing global explanations aggregated over the whole dataset, we advocate reporting the FD-Tree structure as well as the regional explanations extracted from its leaves. The beneficial effects of FD-Trees on the Disagreement Problem are demonstrated on toy and real datasets.
APA
Laberge, G., Batiste Pequignot, Y., Marchand, M. &amp; Khomh, F. (2024). Tackling the XAI Disagreement Problem with Regional Explanations. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:2017-2025. Available from https://proceedings.mlr.press/v238/laberge24a.html.