Robust and Stable Black Box Explanations

Himabindu Lakkaraju, Nino Arsov, Osbert Bastani
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:5628-5638, 2020.

Abstract

As machine learning black boxes are increasingly being deployed in real-world applications, there has been a growing interest in developing post hoc explanations that summarize the behaviors of these black boxes. However, existing algorithms for generating such explanations have been shown to lack stability and robustness to distribution shifts. We propose a novel framework for generating robust and stable explanations of black box models based on adversarial training. Our framework optimizes a minimax objective that aims to construct the highest fidelity explanation with respect to the worst-case over a set of adversarial perturbations. We instantiate this algorithm for explanations in the form of linear models and decision sets by devising the required optimization procedures. To the best of our knowledge, this work makes the first attempt at generating post hoc explanations that are robust to a general class of adversarial perturbations that are of practical interest. Experimental evaluation with real-world and synthetic datasets demonstrates that our approach substantially improves robustness of explanations without sacrificing their fidelity on the original data distribution.
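The minimax idea described above can be sketched in a few lines of numpy: alternate between an inner step that perturbs the data to maximize the explanation's fidelity loss, and an outer step that updates a linear explanation to minimize that worst-case loss. This is a minimal illustration, not the paper's algorithm; the black-box function, the L-infinity perturbation set of radius `eps`, and the single signed-gradient inner step (FGSM-style) are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy black box to be explained (assumption for this sketch).
def black_box(X):
    return np.tanh(2.0 * X[:, 0] - X[:, 1])

n, d = 200, 2
X = rng.normal(size=(n, d))
y = black_box(X)

eps = 0.5   # radius of the allowed perturbation set (assumption)
lr = 0.1
w = np.zeros(d)
b = 0.0

for _ in range(500):
    # Inner max (approximate): one signed-gradient step on the inputs
    # to increase the explanation's squared fidelity loss.
    resid = (X @ w + b) - y
    grad_x = resid[:, None] * w[None, :]   # d(loss_i)/dx_i with labels fixed
    X_adv = X + eps * np.sign(grad_x)
    y_adv = black_box(X_adv)               # re-query the black box at shifted points
    # Outer min: gradient step on the explanation under the worst-case data.
    resid_adv = (X_adv @ w + b) - y_adv
    w -= lr * (X_adv.T @ resid_adv) / n
    b -= lr * resid_adv.mean()

# The linear explanation should roughly track the black box's
# dominant direction (increasing in x0, decreasing in x1).
print(w, b)
```

Because the explanation is fit against adversarially shifted inputs rather than the original sample alone, its coefficients remain faithful under small distribution shifts, which is the robustness property the abstract refers to.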

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-lakkaraju20a,
  title     = {Robust and Stable Black Box Explanations},
  author    = {Lakkaraju, Himabindu and Arsov, Nino and Bastani, Osbert},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {5628--5638},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/lakkaraju20a/lakkaraju20a.pdf},
  url       = {https://proceedings.mlr.press/v119/lakkaraju20a.html},
  abstract  = {As machine learning black boxes are increasingly being deployed in real-world applications, there has been a growing interest in developing post hoc explanations that summarize the behaviors of these black boxes. However, existing algorithms for generating such explanations have been shown to lack stability and robustness to distribution shifts. We propose a novel framework for generating robust and stable explanations of black box models based on adversarial training. Our framework optimizes a minimax objective that aims to construct the highest fidelity explanation with respect to the worst-case over a set of adversarial perturbations. We instantiate this algorithm for explanations in the form of linear models and decision sets by devising the required optimization procedures. To the best of our knowledge, this work makes the first attempt at generating post hoc explanations that are robust to a general class of adversarial perturbations that are of practical interest. Experimental evaluation with real-world and synthetic datasets demonstrates that our approach substantially improves robustness of explanations without sacrificing their fidelity on the original data distribution.}
}
Endnote
%0 Conference Paper
%T Robust and Stable Black Box Explanations
%A Himabindu Lakkaraju
%A Nino Arsov
%A Osbert Bastani
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-lakkaraju20a
%I PMLR
%P 5628--5638
%U https://proceedings.mlr.press/v119/lakkaraju20a.html
%V 119
%X As machine learning black boxes are increasingly being deployed in real-world applications, there has been a growing interest in developing post hoc explanations that summarize the behaviors of these black boxes. However, existing algorithms for generating such explanations have been shown to lack stability and robustness to distribution shifts. We propose a novel framework for generating robust and stable explanations of black box models based on adversarial training. Our framework optimizes a minimax objective that aims to construct the highest fidelity explanation with respect to the worst-case over a set of adversarial perturbations. We instantiate this algorithm for explanations in the form of linear models and decision sets by devising the required optimization procedures. To the best of our knowledge, this work makes the first attempt at generating post hoc explanations that are robust to a general class of adversarial perturbations that are of practical interest. Experimental evaluation with real-world and synthetic datasets demonstrates that our approach substantially improves robustness of explanations without sacrificing their fidelity on the original data distribution.
APA
Lakkaraju, H., Arsov, N. & Bastani, O. (2020). Robust and Stable Black Box Explanations. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:5628-5638. Available from https://proceedings.mlr.press/v119/lakkaraju20a.html.