Rethinking Attention-Model Explainability through Faithfulness Violation Test

Yibing Liu, Haoliang Li, Yangyang Guo, Chenqi Kong, Jing Li, Shiqi Wang
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:13807-13824, 2022.

Abstract

Attention mechanisms dominate the explainability of deep models. They produce probability distributions over the input, which are widely regarded as feature-importance indicators. However, in this paper, we identify a critical limitation of attention explanations: a weakness in identifying the polarity of feature impact. This can be misleading: features with higher attention weights may not faithfully contribute to model predictions; instead, they can impose suppression effects. With this finding, we reflect on the explainability of current attention-based techniques, such as Attention $\odot$ Gradient and LRP-based attention explanations. We first propose an actionable diagnostic methodology (henceforth the faithfulness violation test) to measure the consistency between explanation weights and the impact polarity. Through extensive experiments, we then show that most tested explanation methods are unexpectedly hindered by the faithfulness violation issue, especially raw attention. Empirical analyses of the factors affecting the violation issue further provide useful observations for adopting explanation methods in attention models.
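
As a concrete illustration of the idea behind the faithfulness violation test, the sketch below checks whether explanation weights agree with the polarity of feature impact. It is a minimal sketch, not the authors' implementation: it assumes impact polarity is estimated by zeroing out one feature at a time and measuring the change in the predicted-class probability, and the names `violation_rate`, `model`, and the masking scheme are hypothetical placeholders.

import numpy as np

def violation_rate(model, x, explanation, eps=1e-6):
    # Hypothetical sketch: `model` is assumed to map a 1-D feature vector to the
    # predicted-class probability, and `explanation` holds one non-negative
    # weight per feature (e.g. raw attention).
    base = model(x)
    violated = []
    for i, weight in enumerate(explanation):
        x_masked = np.array(x, dtype=float, copy=True)
        x_masked[i] = 0.0                # assumed perturbation: zero out feature i
        impact = base - model(x_masked)  # > 0 means the feature supports the prediction
        # Violation: the explanation assigns positive weight to a feature whose
        # removal actually raises the prediction, i.e. a suppression effect.
        violated.append(weight > eps and impact < -eps)
    return float(np.mean(violated))      # fraction of features violating faithfulness

A high violation rate under this kind of check indicates that the explanation's weights cannot be read as (positive) feature importance, which is the failure mode the paper reports for raw attention in particular.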

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-liu22i,
  title     = {Rethinking Attention-Model Explainability through Faithfulness Violation Test},
  author    = {Liu, Yibing and Li, Haoliang and Guo, Yangyang and Kong, Chenqi and Li, Jing and Wang, Shiqi},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {13807--13824},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/liu22i/liu22i.pdf},
  url       = {https://proceedings.mlr.press/v162/liu22i.html}
}
Endnote
%0 Conference Paper
%T Rethinking Attention-Model Explainability through Faithfulness Violation Test
%A Yibing Liu
%A Haoliang Li
%A Yangyang Guo
%A Chenqi Kong
%A Jing Li
%A Shiqi Wang
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-liu22i
%I PMLR
%P 13807--13824
%U https://proceedings.mlr.press/v162/liu22i.html
%V 162
APA
Liu, Y., Li, H., Guo, Y., Kong, C., Li, J., & Wang, S. (2022). Rethinking Attention-Model Explainability through Faithfulness Violation Test. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:13807-13824. Available from https://proceedings.mlr.press/v162/liu22i.html.
