Attention Meets Post-hoc Interpretability: A Mathematical Perspective

Gianluigi Lopardo, Frederic Precioso, Damien Garreau
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:32781-32800, 2024.

Abstract

Attention-based architectures, in particular transformers, are at the heart of a technological revolution. Interestingly, in addition to helping obtain state-of-the-art results on a wide range of applications, the attention mechanism intrinsically provides meaningful insights on the internal behavior of the model. Can these insights be used as explanations? Debate rages on. In this paper, we mathematically study a simple attention-based architecture and pinpoint the differences between post-hoc and attention-based explanations. We show that they provide quite different results, and that, despite their limitations, post-hoc methods are capable of capturing more useful insights than merely examining the attention weights.
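
The paper's analysis is theoretical, but the contrast it draws can be made concrete with a toy example. The sketch below is an illustration only, not the paper's actual model or experiments: the single-head attention pooling, the random weights, and the choice of occlusion as the post-hoc method are all assumptions made here. It compares the attention weights of a small attention-pooled linear scorer with a simple post-hoc occlusion score, showing that the two need not rank tokens the same way.

    # Toy illustration (assumptions, not the paper's setup): attention weights
    # vs. a post-hoc occlusion explanation on an attention-pooled linear scorer.
    import numpy as np

    rng = np.random.default_rng(0)
    d = 8                               # embedding dimension (illustrative)
    tokens = rng.normal(size=(5, d))    # 5 token embeddings
    q = rng.normal(size=d)              # fixed query vector
    w_out = rng.normal(size=d)          # linear readout

    def softmax(x):
        z = np.exp(x - x.max())
        return z / z.sum()

    def score(X):
        alpha = softmax(X @ q)          # attention weights over the tokens
        pooled = alpha @ X              # attention-weighted average of embeddings
        return float(w_out @ pooled), alpha

    f_full, attn = score(tokens)

    # Post-hoc (occlusion) importance: drop in the score when a token is zeroed out.
    occlusion = []
    for i in range(len(tokens)):
        masked = tokens.copy()
        masked[i] = 0.0
        occlusion.append(f_full - score(masked)[0])

    print("attention weights :", np.round(attn, 3))
    print("occlusion scores  :", np.round(occlusion, 3))

The two vectors generally induce different token rankings; the gap between such attention-based and post-hoc explanations is what the paper studies mathematically.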

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-lopardo24a,
  title     = {Attention Meets Post-hoc Interpretability: A Mathematical Perspective},
  author    = {Lopardo, Gianluigi and Precioso, Frederic and Garreau, Damien},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {32781--32800},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/lopardo24a/lopardo24a.pdf},
  url       = {https://proceedings.mlr.press/v235/lopardo24a.html},
  abstract  = {Attention-based architectures, in particular transformers, are at the heart of a technological revolution. Interestingly, in addition to helping obtain state-of-the-art results on a wide range of applications, the attention mechanism intrinsically provides meaningful insights on the internal behavior of the model. Can these insights be used as explanations? Debate rages on. In this paper, we mathematically study a simple attention-based architecture and pinpoint the differences between post-hoc and attention-based explanations. We show that they provide quite different results, and that, despite their limitations, post-hoc methods are capable of capturing more useful insights than merely examining the attention weights.}
}
Endnote
%0 Conference Paper
%T Attention Meets Post-hoc Interpretability: A Mathematical Perspective
%A Gianluigi Lopardo
%A Frederic Precioso
%A Damien Garreau
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-lopardo24a
%I PMLR
%P 32781--32800
%U https://proceedings.mlr.press/v235/lopardo24a.html
%V 235
%X Attention-based architectures, in particular transformers, are at the heart of a technological revolution. Interestingly, in addition to helping obtain state-of-the-art results on a wide range of applications, the attention mechanism intrinsically provides meaningful insights on the internal behavior of the model. Can these insights be used as explanations? Debate rages on. In this paper, we mathematically study a simple attention-based architecture and pinpoint the differences between post-hoc and attention-based explanations. We show that they provide quite different results, and that, despite their limitations, post-hoc methods are capable of capturing more useful insights than merely examining the attention weights.
APA
Lopardo, G., Precioso, F., & Garreau, D. (2024). Attention Meets Post-hoc Interpretability: A Mathematical Perspective. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:32781-32800. Available from https://proceedings.mlr.press/v235/lopardo24a.html.
