Towards the Unification and Robustness of Perturbation and Gradient Based Explanations

Sushant Agarwal, Shahin Jabbari, Chirag Agarwal, Sohini Upadhyay, Steven Wu, Himabindu Lakkaraju
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:110-119, 2021.

Abstract

As machine learning black boxes are increasingly being deployed in critical domains such as healthcare and criminal justice, there has been a growing emphasis on developing techniques for explaining these black boxes in a post hoc manner. In this work, we analyze two popular post hoc interpretation techniques: SmoothGrad, which is a gradient-based method, and a variant of LIME, which is a perturbation-based method. More specifically, we derive explicit closed-form expressions for the explanations output by these two methods and show that they both converge to the same explanation in expectation, i.e., when the number of perturbed samples used by these methods is large. We then leverage this connection to establish other desirable properties, such as robustness, for these techniques. We also derive finite-sample complexity bounds on the number of perturbations required for these methods to converge to their expected explanation. Finally, we empirically validate our theory through extensive experiments on both synthetic and real-world datasets.
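
The sketch below illustrates the kind of convergence the abstract describes, under simplifying assumptions: a logistic-regression black box implemented in NumPy, Gaussian perturbations around the point being explained, and an unweighted least-squares local linear fit standing in for the LIME variant analyzed in the paper. The model, parameter values, and sample size here are illustrative assumptions, not taken from the paper, and the paper's exact formulation and constants may differ.

```python
# Minimal sketch (assumptions above): compare SmoothGrad with a LIME-style
# local linear fit computed on the same Gaussian perturbations around x0.
import numpy as np

rng = np.random.default_rng(0)

# A simple differentiable "black box": f(x) = sigmoid(w.x + b).
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def f(X):
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

def grad_f(X):
    p = f(X)
    return (p * (1.0 - p))[:, None] * w  # d/dx sigmoid(w.x + b) = p(1-p) w

x0 = np.array([0.2, -0.1, 0.4])   # point being explained
sigma = 0.5                        # perturbation scale
n = 50_000                         # number of perturbed samples

# SmoothGrad: average the gradient over Gaussian perturbations of x0.
X_pert = x0 + sigma * rng.standard_normal((n, 3))
smoothgrad = grad_f(X_pert).mean(axis=0)

# LIME-style variant: fit a linear surrogate to f on the same perturbations
# (ordinary least squares with an intercept, uniform sample weights).
A = np.hstack([X_pert - x0, np.ones((n, 1))])
coef, *_ = np.linalg.lstsq(A, f(X_pert), rcond=None)
lime_like = coef[:3]

print("SmoothGrad:", np.round(smoothgrad, 3))
print("LIME-like :", np.round(lime_like, 3))
```

With a large number of perturbed samples the two printed attribution vectors should be close, mirroring the convergence-in-expectation claim; the finite-sample complexity bounds in the paper quantify how many perturbations are needed for this to happen.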

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-agarwal21c,
  title     = {Towards the Unification and Robustness of Perturbation and Gradient Based Explanations},
  author    = {Agarwal, Sushant and Jabbari, Shahin and Agarwal, Chirag and Upadhyay, Sohini and Wu, Steven and Lakkaraju, Himabindu},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {110--119},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/agarwal21c/agarwal21c.pdf},
  url       = {https://proceedings.mlr.press/v139/agarwal21c.html},
  abstract  = {As machine learning black boxes are increasingly being deployed in critical domains such as healthcare and criminal justice, there has been a growing emphasis on developing techniques for explaining these black boxes in a post hoc manner. In this work, we analyze two popular post hoc interpretation techniques: SmoothGrad which is a gradient based method, and a variant of LIME which is a perturbation based method. More specifically, we derive explicit closed form expressions for the explanations output by these two methods and show that they both converge to the same explanation in expectation, i.e., when the number of perturbed samples used by these methods is large. We then leverage this connection to establish other desirable properties, such as robustness, for these techniques. We also derive finite sample complexity bounds for the number of perturbations required for these methods to converge to their expected explanation. Finally, we empirically validate our theory using extensive experimentation on both synthetic and real-world datasets.}
}
Endnote
%0 Conference Paper
%T Towards the Unification and Robustness of Perturbation and Gradient Based Explanations
%A Sushant Agarwal
%A Shahin Jabbari
%A Chirag Agarwal
%A Sohini Upadhyay
%A Steven Wu
%A Himabindu Lakkaraju
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-agarwal21c
%I PMLR
%P 110--119
%U https://proceedings.mlr.press/v139/agarwal21c.html
%V 139
%X As machine learning black boxes are increasingly being deployed in critical domains such as healthcare and criminal justice, there has been a growing emphasis on developing techniques for explaining these black boxes in a post hoc manner. In this work, we analyze two popular post hoc interpretation techniques: SmoothGrad which is a gradient based method, and a variant of LIME which is a perturbation based method. More specifically, we derive explicit closed form expressions for the explanations output by these two methods and show that they both converge to the same explanation in expectation, i.e., when the number of perturbed samples used by these methods is large. We then leverage this connection to establish other desirable properties, such as robustness, for these techniques. We also derive finite sample complexity bounds for the number of perturbations required for these methods to converge to their expected explanation. Finally, we empirically validate our theory using extensive experimentation on both synthetic and real-world datasets.
APA
Agarwal, S., Jabbari, S., Agarwal, C., Upadhyay, S., Wu, S. & Lakkaraju, H. (2021). Towards the Unification and Robustness of Perturbation and Gradient Based Explanations. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:110-119. Available from https://proceedings.mlr.press/v139/agarwal21c.html.
