A Rigorous Study of Integrated Gradients Method and Extensions to Internal Neuron Attributions

Daniel D Lundstrom, Tianjian Huang, Meisam Razaviyayn
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:14485-14508, 2022.

Abstract

As deep learning (DL) efficacy grows, concerns about poor model explainability grow as well. Attribution methods address the issue of explainability by quantifying the importance of an input feature for a model prediction. Among various methods, Integrated Gradients (IG) sets itself apart by claiming that other methods fail to satisfy desirable axioms, while IG and methods like it uniquely satisfy said axioms. This paper comments on fundamental aspects of IG and its applications/extensions: 1) We identify key differences between IG function spaces and the supporting literature’s function spaces, which problematize previous claims of IG uniqueness. We show that, with the introduction of an additional axiom, non-decreasing positivity, the uniqueness claims can be established. 2) We address the question of input sensitivity by identifying function classes where IG is/is not Lipschitz in the attributed input. 3) We show that axioms for single-baseline methods have analogous properties for methods with probability distribution baselines. 4) We introduce a computationally efficient method of identifying internal neurons that contribute to specified regions of an IG attribution map. Finally, we present experimental results validating this method.
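For readers unfamiliar with the attribution method the paper analyzes, the sketch below illustrates the standard Integrated Gradients definition of Sundararajan et al. (2017), IG_i(x, x') = (x_i - x'_i) ∫₀¹ ∂F/∂x_i(x' + α(x − x')) dα, via a simple Riemann-sum approximation of the path integral. The function names (`integrated_gradients`, `grad_fn`), the midpoint rule, and the step count are illustrative assumptions, not code or notation from this paper.

```python
# Minimal sketch of Integrated Gradients (IG) with a Riemann-sum approximation
# of the path integral from a baseline x' to the input x. Assumes access to a
# callable `grad_fn` returning the gradient of the model output F at a point.
import numpy as np


def integrated_gradients(grad_fn, x, baseline, steps=50):
    """Approximate IG attributions of input `x` relative to `baseline`."""
    x = np.asarray(x, dtype=float)
    baseline = np.asarray(baseline, dtype=float)
    # Midpoint rule along the straight-line path from baseline to x.
    alphas = (np.arange(steps) + 0.5) / steps
    avg_grad = np.zeros_like(x)
    for a in alphas:
        avg_grad += grad_fn(baseline + a * (x - baseline))
    avg_grad /= steps
    # Completeness (approximately): attributions sum to F(x) - F(baseline).
    return (x - baseline) * avg_grad


if __name__ == "__main__":
    # Toy example: F(x) = x0^2 + 3*x1, so dF/dx = [2*x0, 3].
    grad_fn = lambda z: np.array([2.0 * z[0], 3.0])
    print(integrated_gradients(grad_fn, x=[1.0, 2.0], baseline=[0.0, 0.0]))
    # Prints approximately [1., 6.], which sums to F(x) - F(baseline) = 7.
```

The toy example illustrates the completeness axiom discussed in the attribution literature: the per-feature attributions sum to the change in the model output between the baseline and the input.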

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-lundstrom22a,
  title     = {A Rigorous Study of Integrated Gradients Method and Extensions to Internal Neuron Attributions},
  author    = {Lundstrom, Daniel D and Huang, Tianjian and Razaviyayn, Meisam},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {14485--14508},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/lundstrom22a/lundstrom22a.pdf},
  url       = {https://proceedings.mlr.press/v162/lundstrom22a.html},
  abstract  = {As deep learning (DL) efficacy grows, concerns for poor model explainability grow also. Attribution methods address the issue of explainability by quantifying the importance of an input feature for a model prediction. Among various methods, Integrated Gradients (IG) sets itself apart by claiming other methods failed to satisfy desirable axioms, while IG and methods like it uniquely satisfy said axioms. This paper comments on fundamental aspects of IG and its applications/extensions: 1) We identify key differences between IG function spaces and the supporting literature’s function spaces which problematize previous claims of IG uniqueness. We show that with the introduction of an additional axiom, non-decreasing positivity, the uniqueness claims can be established. 2) We address the question of input sensitivity by identifying function classes where IG is/is not Lipschitz in the attributed input. 3) We show that axioms for single-baseline methods have analogous properties for methods with probability distribution baselines. 4) We introduce a computationally efficient method of identifying internal neurons that contribute to specified regions of an IG attribution map. Finally, we present experimental results validating this method.}
}
Endnote
%0 Conference Paper
%T A Rigorous Study of Integrated Gradients Method and Extensions to Internal Neuron Attributions
%A Daniel D Lundstrom
%A Tianjian Huang
%A Meisam Razaviyayn
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-lundstrom22a
%I PMLR
%P 14485--14508
%U https://proceedings.mlr.press/v162/lundstrom22a.html
%V 162
%X As deep learning (DL) efficacy grows, concerns for poor model explainability grow also. Attribution methods address the issue of explainability by quantifying the importance of an input feature for a model prediction. Among various methods, Integrated Gradients (IG) sets itself apart by claiming other methods failed to satisfy desirable axioms, while IG and methods like it uniquely satisfy said axioms. This paper comments on fundamental aspects of IG and its applications/extensions: 1) We identify key differences between IG function spaces and the supporting literature’s function spaces which problematize previous claims of IG uniqueness. We show that with the introduction of an additional axiom, non-decreasing positivity, the uniqueness claims can be established. 2) We address the question of input sensitivity by identifying function classes where IG is/is not Lipschitz in the attributed input. 3) We show that axioms for single-baseline methods have analogous properties for methods with probability distribution baselines. 4) We introduce a computationally efficient method of identifying internal neurons that contribute to specified regions of an IG attribution map. Finally, we present experimental results validating this method.
APA
Lundstrom, D.D., Huang, T. & Razaviyayn, M. (2022). A Rigorous Study of Integrated Gradients Method and Extensions to Internal Neuron Attributions. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:14485-14508. Available from https://proceedings.mlr.press/v162/lundstrom22a.html.
