Statistical Test for Attention Maps in Vision Transformers

Tomohiro Shiraishi, Daiki Miwa, Teruyuki Katsuoka, Vo Nguyen Le Duy, Kouichi Taji, Ichiro Takeuchi
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:45079-45096, 2024.

Abstract

The Vision Transformer (ViT) demonstrates exceptional performance in various computer vision tasks. Attention is crucial for ViT to capture complex, wide-ranging relationships among image patches, allowing the model to weigh the importance of each patch and aiding our understanding of its decision-making process. However, when the attention of a ViT is used as evidence in high-stakes decision-making tasks such as medical diagnostics, a challenge arises: the attention mechanism may erroneously focus on irrelevant regions. In this study, we propose a statistical test for ViT’s attentions, enabling us to use them as reliable quantitative evidence for ViT’s decision-making with a rigorously controlled error rate. Using the framework of selective inference, we quantify the statistical significance of attentions in the form of p-values, which enables a theoretically grounded quantification of the false positive detection probability of attentions. We demonstrate the validity and effectiveness of the proposed method through numerical experiments and applications to brain image diagnoses.
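
For intuition, the sketch below is a minimal, hypothetical illustration of the kind of quantity such a test examines: a naive two-sided z-test on the difference in mean intensity between the pixels in a ViT's high-attention region and the remaining pixels, assuming i.i.d. Gaussian noise with known standard deviation. The function name, parameters, and thresholding step are illustrative assumptions, not the paper's implementation; the paper's selective-inference procedure additionally conditions on how the attention region was selected from the same image, which is what restores valid error-rate control and which this naive test does not do.

    # Naive (non-selective) sketch: is the mean signal inside a ViT's
    # high-attention region different from the mean signal outside it?
    # Assumes i.i.d. Gaussian pixel noise with known standard deviation sigma.
    # NOTE: because the region is chosen from the same image, this naive
    # p-value is generally invalid; selective inference corrects for that.
    import numpy as np
    from scipy.stats import norm

    def naive_attention_pvalue(image, attention_map, sigma=1.0, threshold=0.5):
        """image, attention_map: 2D arrays of the same shape (hypothetical names)."""
        region = attention_map >= threshold           # high-attention pixels
        inside, outside = image[region], image[~region]
        if inside.size == 0 or outside.size == 0:
            return 1.0
        # Difference-in-means statistic and its standard error under the null
        diff = inside.mean() - outside.mean()
        se = sigma * np.sqrt(1.0 / inside.size + 1.0 / outside.size)
        z = diff / se
        return 2.0 * norm.sf(abs(z))                  # two-sided p-value

    # Example usage with synthetic data
    rng = np.random.default_rng(0)
    img = rng.normal(size=(16, 16))
    attn = rng.random(size=(16, 16))
    print(naive_attention_pvalue(img, attn, sigma=1.0))
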

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-shiraishi24a,
  title     = {Statistical Test for Attention Maps in Vision Transformers},
  author    = {Shiraishi, Tomohiro and Miwa, Daiki and Katsuoka, Teruyuki and Duy, Vo Nguyen Le and Taji, Kouichi and Takeuchi, Ichiro},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {45079--45096},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/shiraishi24a/shiraishi24a.pdf},
  url       = {https://proceedings.mlr.press/v235/shiraishi24a.html},
  abstract  = {The Vision Transformer (ViT) demonstrates exceptional performance in various computer vision tasks. Attention is crucial for ViT to capture complex wide-ranging relationships among image patches, allowing the model to weigh the importance of image patches and aiding our understanding of the decision-making process. However, when utilizing the attention of ViT as evidence in high-stakes decision-making tasks such as medical diagnostics, a challenge arises due to the potential of attention mechanisms erroneously focusing on irrelevant regions. In this study, we propose a statistical test for ViT’s attentions, enabling us to use the attentions as reliable quantitative evidence indicators for ViT’s decision-making with a rigorously controlled error rate. Using the framework called selective inference, we quantify the statistical significance of attentions in the form of p-values, which enables the theoretically grounded quantification of the false positive detection probability of attentions. We demonstrate the validity and the effectiveness of the proposed method through numerical experiments and applications to brain image diagnoses.}
}
Endnote
%0 Conference Paper
%T Statistical Test for Attention Maps in Vision Transformers
%A Tomohiro Shiraishi
%A Daiki Miwa
%A Teruyuki Katsuoka
%A Vo Nguyen Le Duy
%A Kouichi Taji
%A Ichiro Takeuchi
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-shiraishi24a
%I PMLR
%P 45079--45096
%U https://proceedings.mlr.press/v235/shiraishi24a.html
%V 235
%X The Vision Transformer (ViT) demonstrates exceptional performance in various computer vision tasks. Attention is crucial for ViT to capture complex wide-ranging relationships among image patches, allowing the model to weigh the importance of image patches and aiding our understanding of the decision-making process. However, when utilizing the attention of ViT as evidence in high-stakes decision-making tasks such as medical diagnostics, a challenge arises due to the potential of attention mechanisms erroneously focusing on irrelevant regions. In this study, we propose a statistical test for ViT’s attentions, enabling us to use the attentions as reliable quantitative evidence indicators for ViT’s decision-making with a rigorously controlled error rate. Using the framework called selective inference, we quantify the statistical significance of attentions in the form of p-values, which enables the theoretically grounded quantification of the false positive detection probability of attentions. We demonstrate the validity and the effectiveness of the proposed method through numerical experiments and applications to brain image diagnoses.
APA
Shiraishi, T., Miwa, D., Katsuoka, T., Duy, V.N.L., Taji, K. & Takeuchi, I. (2024). Statistical Test for Attention Maps in Vision Transformers. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:45079-45096. Available from https://proceedings.mlr.press/v235/shiraishi24a.html.

Related Material

Download PDF: https://raw.githubusercontent.com/mlresearch/v235/main/assets/shiraishi24a/shiraishi24a.pdf