Analyzing Explainer Robustness via Probabilistic Lipschitzness of Prediction Functions

Zulqarnain Q Khan, Davin Hill, Aria Masoomi, Joshua T Bone, Jennifer Dy
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:1378-1386, 2024.

Abstract

Machine learning methods have significantly improved in their predictive capabilities, but at the same time they are becoming more complex and less transparent. As a result, explainers are often relied on to provide interpretability to these black-box prediction models. As crucial diagnostic tools, it is important that these explainers themselves are robust. In this paper we focus on one particular aspect of robustness, namely that an explainer should give similar explanations for similar data inputs. We formalize this notion by introducing and defining explainer astuteness, analogous to astuteness of prediction functions. Our formalism allows us to connect explainer robustness to the predictor’s probabilistic Lipschitzness, which captures the probability of local smoothness of a function. We provide lower bound guarantees on the astuteness of a variety of explainers (e.g., SHAP, RISE, CXPlain) given the Lipschitzness of the prediction function. These theoretical results imply that locally smooth prediction functions lend themselves to locally robust explanations. We evaluate these results empirically on simulated as well as real datasets.
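The probabilistic Lipschitzness referred to above captures how likely a prediction function is to be locally smooth around inputs drawn from the data distribution: roughly, the probability that |f(x) - f(x')| ≤ L·‖x - x'‖ for nearby pairs (x, x'). A minimal Monte Carlo sketch of estimating that probability is below; the function name, sampling scheme, and parameters are illustrative assumptions, not the paper's method.

```python
import numpy as np

def estimate_prob_lipschitz(f, X, L, r, n_pairs=1000, rng=None):
    """Monte Carlo estimate of P[|f(x) - f(x')| <= L * ||x - x'||],
    where x is drawn from the data X and x' is a uniform perturbation
    of x within an L2 ball of radius r. (Hypothetical helper: an
    illustrative estimator, not the paper's definition or code.)"""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    x = X[rng.integers(0, n, size=n_pairs)]
    # Sample perturbations uniformly from the L2 ball of radius r:
    # random direction on the sphere, scaled by r * U^(1/d).
    noise = rng.normal(size=(n_pairs, d))
    noise /= np.linalg.norm(noise, axis=1, keepdims=True)
    radii = r * rng.uniform(size=(n_pairs, 1)) ** (1.0 / d)
    xp = x + radii * noise
    dist = np.linalg.norm(xp - x, axis=1)
    # Fraction of sampled pairs on which the local Lipschitz bound holds.
    holds = np.abs(f(x) - f(xp)) <= L * dist
    return holds.mean()
```

For a linear predictor f(x) = w·x, the bound holds deterministically once L reaches ‖w‖ (by Cauchy-Schwarz), so the estimate approaches 1; for a nonsmooth predictor the estimate reflects how often local smoothness holds near the data.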

Cite this Paper


BibTeX
@InProceedings{pmlr-v238-q-khan24a,
  title     = {Analyzing Explainer Robustness via Probabilistic {L}ipschitzness of Prediction Functions},
  author    = {Q Khan, Zulqarnain and Hill, Davin and Masoomi, Aria and T Bone, Joshua and Dy, Jennifer},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages     = {1378--1386},
  year      = {2024},
  editor    = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume    = {238},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--04 May},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v238/q-khan24a/q-khan24a.pdf},
  url       = {https://proceedings.mlr.press/v238/q-khan24a.html},
  abstract  = {Machine learning methods have significantly improved in their predictive capabilities, but at the same time they are becoming more complex and less transparent. As a result, explainers are often relied on to provide interpretability to these black-box prediction models. As crucial diagnostic tools, it is important that these explainers themselves are robust. In this paper we focus on one particular aspect of robustness, namely that an explainer should give similar explanations for similar data inputs. We formalize this notion by introducing and defining explainer astuteness, analogous to astuteness of prediction functions. Our formalism allows us to connect explainer robustness to the predictor’s probabilistic Lipschitzness, which captures the probability of local smoothness of a function. We provide lower bound guarantees on the astuteness of a variety of explainers (e.g., SHAP, RISE, CXPlain) given the Lipschitzness of the prediction function. These theoretical results imply that locally smooth prediction functions lend themselves to locally robust explanations. We evaluate these results empirically on simulated as well as real datasets.}
}
Endnote
%0 Conference Paper
%T Analyzing Explainer Robustness via Probabilistic Lipschitzness of Prediction Functions
%A Zulqarnain Q Khan
%A Davin Hill
%A Aria Masoomi
%A Joshua T Bone
%A Jennifer Dy
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-q-khan24a
%I PMLR
%P 1378--1386
%U https://proceedings.mlr.press/v238/q-khan24a.html
%V 238
%X Machine learning methods have significantly improved in their predictive capabilities, but at the same time they are becoming more complex and less transparent. As a result, explainers are often relied on to provide interpretability to these black-box prediction models. As crucial diagnostic tools, it is important that these explainers themselves are robust. In this paper we focus on one particular aspect of robustness, namely that an explainer should give similar explanations for similar data inputs. We formalize this notion by introducing and defining explainer astuteness, analogous to astuteness of prediction functions. Our formalism allows us to connect explainer robustness to the predictor’s probabilistic Lipschitzness, which captures the probability of local smoothness of a function. We provide lower bound guarantees on the astuteness of a variety of explainers (e.g., SHAP, RISE, CXPlain) given the Lipschitzness of the prediction function. These theoretical results imply that locally smooth prediction functions lend themselves to locally robust explanations. We evaluate these results empirically on simulated as well as real datasets.
APA
Q Khan, Z., Hill, D., Masoomi, A., T Bone, J. & Dy, J. (2024). Analyzing Explainer Robustness via Probabilistic Lipschitzness of Prediction Functions. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:1378-1386. Available from https://proceedings.mlr.press/v238/q-khan24a.html.
