XAI for Transformers: Better Explanations through Conservative Propagation

Ameen Ali, Thomas Schnake, Oliver Eberle, Grégoire Montavon, Klaus-Robert Müller, Lior Wolf
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:435-451, 2022.

Abstract

Transformers have become an important workhorse of machine learning, with numerous applications. This necessitates the development of reliable methods for increasing their transparency. Multiple interpretability methods, often based on gradient information, have been proposed. We show that the gradient in a Transformer reflects the function only locally, and thus fails to reliably identify the contribution of input features to the prediction. We identify Attention Heads and LayerNorm as main reasons for such unreliable explanations and propose a more stable way for propagation through these layers. Our proposal, which can be seen as a proper extension of the well-established LRP method to Transformers, is shown both theoretically and empirically to overcome the deficiency of a simple gradient-based approach, and achieves state-of-the-art explanation performance on a broad range of Transformer models and datasets.
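To make the proposed propagation concrete, below is a minimal PyTorch sketch of the idea, not the authors' released code: the conservative rules for attention heads (AH) and LayerNorm (LN) can be implemented by detaching the softmax attention weights and the normalization factor from the computational graph, after which gradient x input yields the relevance scores. All tensor names, shapes, and the toy linear readout are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    torch.manual_seed(0)

    def attention_detached(q, k, v):
        # Single attention head in which the softmax gating is detached
        # from the graph: relevance then flows only through the value
        # path, i.e. the conservative AH rule the abstract refers to.
        d_k = q.shape[-1]
        scores = q @ k.transpose(-2, -1) / d_k ** 0.5
        p = F.softmax(scores, dim=-1).detach()  # attention weights as constants
        return p @ v

    def layer_norm_detached(x, eps=1e-5):
        # LayerNorm in which the normalization statistics are detached,
        # so the 1/std factor acts as a constant during backpropagation,
        # i.e. the conservative LN rule the abstract refers to.
        mean = x.mean(dim=-1, keepdim=True)
        std = x.std(dim=-1, keepdim=True, unbiased=False)
        return (x - mean) / (std + eps).detach()

    # Toy forward pass (illustrative shapes): embeddings -> attention
    # head -> LayerNorm -> linear readout on the first token position.
    tokens, dim = 5, 16
    x = torch.randn(tokens, dim, requires_grad=True)
    wq, wk, wv = (0.1 * torch.randn(dim, dim) for _ in range(3))
    readout = torch.randn(dim)

    h = layer_norm_detached(attention_detached(x @ wq, x @ wk, x @ wv))
    score = h[0] @ readout          # scalar prediction to be explained
    score.backward()

    relevance = (x * x.grad).sum(dim=-1)  # gradient x input, one score per token
    print(relevance)

A standard LayerNorm also carries learnable affine parameters; they are omitted here because they do not affect the detaching logic. With the detached variants in place, the ordinary backward pass computes the conservative relevance propagation, so no custom backward hooks are required.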

Cite this Paper

BibTeX
@InProceedings{pmlr-v162-ali22a,
  title     = {{XAI} for Transformers: Better Explanations through Conservative Propagation},
  author    = {Ali, Ameen and Schnake, Thomas and Eberle, Oliver and Montavon, Gr{\'e}goire and M{\"u}ller, Klaus-Robert and Wolf, Lior},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {435--451},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/ali22a/ali22a.pdf},
  url       = {https://proceedings.mlr.press/v162/ali22a.html}
}
Endnote
%0 Conference Paper
%T XAI for Transformers: Better Explanations through Conservative Propagation
%A Ameen Ali
%A Thomas Schnake
%A Oliver Eberle
%A Grégoire Montavon
%A Klaus-Robert Müller
%A Lior Wolf
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-ali22a
%I PMLR
%P 435--451
%U https://proceedings.mlr.press/v162/ali22a.html
%V 162
APA
Ali, A., Schnake, T., Eberle, O., Montavon, G., Müller, K.-R., & Wolf, L. (2022). XAI for Transformers: Better Explanations through Conservative Propagation. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:435-451. Available from https://proceedings.mlr.press/v162/ali22a.html.
