Provably Better Explanations with Optimized Aggregation of Feature Attributions

Thomas Decker, Ananta R. Bhattarai, Jindong Gu, Volker Tresp, Florian Buettner
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:10267-10286, 2024.

Abstract

Using feature attributions for post-hoc explanations is a common practice to understand and verify the predictions of opaque machine learning models. Despite the numerous techniques available, individual methods often produce inconsistent and unstable results, putting their overall reliability into question. In this work, we aim to systematically improve the quality of feature attributions by combining multiple explanations across distinct methods or their variations. For this purpose, we propose a novel approach to derive optimal convex combinations of feature attributions that yield provable improvements of desired quality criteria such as robustness or faithfulness to the model behavior. Through extensive experiments involving various model architectures and popular feature attribution techniques, we demonstrate that our combination strategy consistently outperforms individual methods and existing baselines.
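The aggregation idea described above can be sketched in code. The following is a minimal illustrative toy, not the authors' implementation: it assumes an infidelity-style faithfulness objective (squared error between the attribution mass of masked features and the induced output change), a linear model whose exact attributions are known, and simple projected gradient descent over the probability simplex; all names and parameters are hypothetical.

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection onto the probability simplex (Duchi et al.-style)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def infidelity(phi, Z, deltas):
    """Squared error between attribution mass of masked features and the
    actual output change caused by masking them (a faithfulness proxy)."""
    return np.mean((Z @ phi - deltas) ** 2)

def optimize_weights(A, Z, deltas, n_steps=20000):
    """Projected gradient descent for convex weights w on the simplex,
    minimizing the infidelity of the combined attribution A.T @ w."""
    n = len(deltas)
    H = 2.0 * A @ Z.T @ Z @ A.T / n          # Hessian of the quadratic objective
    lr = 1.0 / np.linalg.eigvalsh(H).max()   # step size at 1/L for stable descent
    w = np.full(A.shape[0], 1.0 / A.shape[0])
    for _ in range(n_steps):
        resid = Z @ (A.T @ w) - deltas
        grad = 2.0 * A @ (Z.T @ resid) / n
        w = project_to_simplex(w - lr * grad)
    return w

# Toy setup (assumed for illustration): a linear model with known exact
# attributions, and three hypothetical attribution "methods" that corrupt
# them with different amounts of noise.
rng = np.random.default_rng(0)
d = 8
coef, x = rng.normal(size=d), rng.normal(size=d)
true_attr = coef * x                              # exact attribution for f(x) = coef @ x
A = np.stack([true_attr + rng.normal(scale=s, size=d) for s in (0.1, 0.5, 1.0)])
Z = rng.integers(0, 2, size=(500, d)).astype(float)   # random feature masks
deltas = Z @ true_attr                            # f(x) - f(x with masked features zeroed)

w = optimize_weights(A, Z, deltas)
combined = A.T @ w
```

Because every individual method corresponds to a vertex of the simplex, the optimized convex combination can score no worse on the chosen criterion than the best single method, which mirrors the flavor of the "provable improvement" guarantee in the paper.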

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-decker24a,
  title     = {Provably Better Explanations with Optimized Aggregation of Feature Attributions},
  author    = {Decker, Thomas and Bhattarai, Ananta R. and Gu, Jindong and Tresp, Volker and Buettner, Florian},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {10267--10286},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/decker24a/decker24a.pdf},
  url       = {https://proceedings.mlr.press/v235/decker24a.html},
  abstract  = {Using feature attributions for post-hoc explanations is a common practice to understand and verify the predictions of opaque machine learning models. Despite the numerous techniques available, individual methods often produce inconsistent and unstable results, putting their overall reliability into question. In this work, we aim to systematically improve the quality of feature attributions by combining multiple explanations across distinct methods or their variations. For this purpose, we propose a novel approach to derive optimal convex combinations of feature attributions that yield provable improvements of desired quality criteria such as robustness or faithfulness to the model behavior. Through extensive experiments involving various model architectures and popular feature attribution techniques, we demonstrate that our combination strategy consistently outperforms individual methods and existing baselines.}
}
Endnote
%0 Conference Paper
%T Provably Better Explanations with Optimized Aggregation of Feature Attributions
%A Thomas Decker
%A Ananta R. Bhattarai
%A Jindong Gu
%A Volker Tresp
%A Florian Buettner
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-decker24a
%I PMLR
%P 10267--10286
%U https://proceedings.mlr.press/v235/decker24a.html
%V 235
%X Using feature attributions for post-hoc explanations is a common practice to understand and verify the predictions of opaque machine learning models. Despite the numerous techniques available, individual methods often produce inconsistent and unstable results, putting their overall reliability into question. In this work, we aim to systematically improve the quality of feature attributions by combining multiple explanations across distinct methods or their variations. For this purpose, we propose a novel approach to derive optimal convex combinations of feature attributions that yield provable improvements of desired quality criteria such as robustness or faithfulness to the model behavior. Through extensive experiments involving various model architectures and popular feature attribution techniques, we demonstrate that our combination strategy consistently outperforms individual methods and existing baselines.
APA
Decker, T., Bhattarai, A. R., Gu, J., Tresp, V. &amp; Buettner, F. (2024). Provably Better Explanations with Optimized Aggregation of Feature Attributions. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:10267-10286. Available from https://proceedings.mlr.press/v235/decker24a.html.
