Trust Regions for Explanations via Black-Box Probabilistic Certification

Amit Dhurandhar, Swagatam Haldar, Dennis Wei, Karthikeyan Natesan Ramamurthy
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:10736-10764, 2024.

Abstract

Given the black box nature of machine learning models, a plethora of explainability methods have been developed to decipher the factors behind individual decisions. In this paper, we introduce a novel problem of black box (probabilistic) explanation certification. We ask the question: Given a black box model with only query access, an explanation for an example and a quality metric (viz. fidelity, stability), can we find the largest hypercube (i.e., $\ell_{\infty}$ ball) centered at the example such that when the explanation is applied to all examples within the hypercube, (with high probability) a quality criterion is met (viz. fidelity greater than some value)? Being able to efficiently find such a trust region has multiple benefits: i) insight into model behavior in a region, with a guarantee; ii) ascertained stability of the explanation; iii) explanation reuse, which can save time, energy and money by not having to find explanations for every example; and iv) a possible meta-metric to compare explanation methods. Our contributions include formalizing this problem, proposing solutions, providing theoretical guarantees for these solutions that are computable, and experimentally showing their efficacy on synthetic and real data.
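To make the certification question concrete, here is a minimal illustrative sketch (not the paper's method, which provides formal probabilistic guarantees): it samples points from an $\ell_{\infty}$ ball around an example, measures how often a linear surrogate explanation agrees with a toy black-box model (fidelity), and binary-searches for the largest half-width whose sampled fidelity stays above a threshold. The model, the surrogate weights, and the threshold are all hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box model with query access only (a toy nonlinear classifier).
def black_box(X):
    return (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0.5).astype(float)

# Illustrative local explanation for x0: a linear surrogate with made-up weights.
x0 = np.array([0.4, 0.2])
w, b = np.array([1.0, 0.2]), -0.5

def explain(X):
    return (X @ w + b > 0).astype(float)

def fidelity(half_width, n=2000):
    """Fraction of sampled points in the l_inf ball of the given half-width
    around x0 where the surrogate's prediction matches the black box."""
    X = x0 + rng.uniform(-half_width, half_width, size=(n, 2))
    return np.mean(explain(X) == black_box(X))

def largest_certified_width(theta=0.95, lo=0.0, hi=1.0, iters=20):
    """Binary search for the largest half-width whose *sampled* fidelity
    meets the quality threshold theta. This gives no formal guarantee;
    the paper's contribution is the probabilistic certificate itself."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if fidelity(mid) >= theta:
            lo = mid
        else:
            hi = mid
    return lo

width = largest_certified_width()
print(round(width, 3))
```

The returned half-width is a heuristic estimate of a trust region: within it, the explanation can be reused with (empirically) high fidelity, which is the reuse benefit described in the abstract.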

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-dhurandhar24a,
  title     = {Trust Regions for Explanations via Black-Box Probabilistic Certification},
  author    = {Dhurandhar, Amit and Haldar, Swagatam and Wei, Dennis and Natesan Ramamurthy, Karthikeyan},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {10736--10764},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/dhurandhar24a/dhurandhar24a.pdf},
  url       = {https://proceedings.mlr.press/v235/dhurandhar24a.html},
  abstract  = {Given the black box nature of machine learning models, a plethora of explainability methods have been developed to decipher the factors behind individual decisions. In this paper, we introduce a novel problem of black box (probabilistic) explanation certification. We ask the question: Given a black box model with only query access, an explanation for an example and a quality metric (viz. fidelity, stability), can we find the largest hypercube (i.e., $\ell_{\infty}$ ball) centered at the example such that when the explanation is applied to all examples within the hypercube, (with high probability) a quality criterion is met (viz. fidelity greater than some value)? Being able to efficiently find such a trust region has multiple benefits: i) insight into model behavior in a region, with a guarantee; ii) ascertained stability of the explanation; iii) explanation reuse, which can save time, energy and money by not having to find explanations for every example; and iv) a possible meta-metric to compare explanation methods. Our contributions include formalizing this problem, proposing solutions, providing theoretical guarantees for these solutions that are computable, and experimentally showing their efficacy on synthetic and real data.}
}
Endnote
%0 Conference Paper
%T Trust Regions for Explanations via Black-Box Probabilistic Certification
%A Amit Dhurandhar
%A Swagatam Haldar
%A Dennis Wei
%A Karthikeyan Natesan Ramamurthy
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-dhurandhar24a
%I PMLR
%P 10736--10764
%U https://proceedings.mlr.press/v235/dhurandhar24a.html
%V 235
%X Given the black box nature of machine learning models, a plethora of explainability methods have been developed to decipher the factors behind individual decisions. In this paper, we introduce a novel problem of black box (probabilistic) explanation certification. We ask the question: Given a black box model with only query access, an explanation for an example and a quality metric (viz. fidelity, stability), can we find the largest hypercube (i.e., $\ell_{\infty}$ ball) centered at the example such that when the explanation is applied to all examples within the hypercube, (with high probability) a quality criterion is met (viz. fidelity greater than some value)? Being able to efficiently find such a trust region has multiple benefits: i) insight into model behavior in a region, with a guarantee; ii) ascertained stability of the explanation; iii) explanation reuse, which can save time, energy and money by not having to find explanations for every example; and iv) a possible meta-metric to compare explanation methods. Our contributions include formalizing this problem, proposing solutions, providing theoretical guarantees for these solutions that are computable, and experimentally showing their efficacy on synthetic and real data.
APA
Dhurandhar, A., Haldar, S., Wei, D. & Natesan Ramamurthy, K. (2024). Trust Regions for Explanations via Black-Box Probabilistic Certification. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:10736-10764. Available from https://proceedings.mlr.press/v235/dhurandhar24a.html.
