Unraveling Attention via Convex Duality: Analysis and Interpretations of Vision Transformers

Arda Sahiner, Tolga Ergen, Batu Ozturkler, John Pauly, Morteza Mardani, Mert Pilanci
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:19050-19088, 2022.

Abstract

Vision transformers using self-attention or its proposed alternatives have demonstrated promising results in many image-related tasks. However, the underlying inductive bias of attention is not well understood. To address this issue, this paper analyzes attention through the lens of convex duality. For non-linear dot-product self-attention, as well as alternative mechanisms such as the MLP-Mixer and the Fourier Neural Operator (FNO), we derive equivalent finite-dimensional convex problems that are interpretable and solvable to global optimality. The convex programs lead to block nuclear-norm regularization that promotes low rank in the latent feature and token dimensions. In particular, we show how self-attention networks implicitly cluster the tokens based on their latent similarity. We conduct experiments transferring a pre-trained transformer backbone to CIFAR-100 classification by fine-tuning a variety of convex attention heads. The results indicate the merits of the inductive bias induced by attention compared with existing MLP or linear heads.
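The abstract's central technical object is a convex program regularized by a block nuclear norm, which promotes low-rank solutions. The snippet below is a rough illustrative sketch only, not the paper's actual convex reformulation of attention: it shows a generic nuclear-norm-regularized convex objective in NumPy. The linear model, the array shapes, and the penalty weight are assumptions made purely for illustration.

```python
# Illustrative sketch (assumed setup, not the paper's formulation): a convex
# least-squares fit plus a nuclear-norm penalty, the kind of regularizer that
# encourages low-rank parameter matrices.
import numpy as np


def nuclear_norm(Z):
    # Sum of singular values of Z -- the convex surrogate for rank.
    return np.linalg.norm(Z, ord="nuc")


def objective(Z, X, Y, beta=0.1):
    # Convex data-fit term plus nuclear-norm regularization on Z.
    # X: (n_tokens, d) token features, Y: (n_tokens, c) targets (hypothetical shapes).
    residual = X @ Z - Y
    return 0.5 * np.sum(residual ** 2) + beta * nuclear_norm(Z)


# Example usage with random data.
rng = np.random.default_rng(0)
X = rng.standard_normal((16, 8))
Y = rng.standard_normal((16, 4))
Z = rng.standard_normal((8, 4))
print(objective(Z, X, Y))
```

Because the penalty is convex, objectives of this form can be minimized to global optimality, which is the property the paper exploits when analyzing attention heads.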

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-sahiner22a,
  title     = {Unraveling Attention via Convex Duality: Analysis and Interpretations of Vision Transformers},
  author    = {Sahiner, Arda and Ergen, Tolga and Ozturkler, Batu and Pauly, John and Mardani, Morteza and Pilanci, Mert},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {19050--19088},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/sahiner22a/sahiner22a.pdf},
  url       = {https://proceedings.mlr.press/v162/sahiner22a.html},
  abstract  = {Vision transformers using self-attention or its proposed alternatives have demonstrated promising results in many image related tasks. However, the underpinning inductive bias of attention is not well understood. To address this issue, this paper analyzes attention through the lens of convex duality. For the non-linear dot-product self-attention, and alternative mechanisms such as MLP-mixer and Fourier Neural Operator (FNO), we derive equivalent finite-dimensional convex problems that are interpretable and solvable to global optimality. The convex programs lead to block nuclear-norm regularization that promotes low rank in the latent feature and token dimensions. In particular, we show how self-attention networks implicitly clusters the tokens, based on their latent similarity. We conduct experiments for transferring a pre-trained transformer backbone for CIFAR-100 classification by fine-tuning a variety of convex attention heads. The results indicate the merits of the bias induced by attention compared with the existing MLP or linear heads.}
}
Endnote
%0 Conference Paper
%T Unraveling Attention via Convex Duality: Analysis and Interpretations of Vision Transformers
%A Arda Sahiner
%A Tolga Ergen
%A Batu Ozturkler
%A John Pauly
%A Morteza Mardani
%A Mert Pilanci
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-sahiner22a
%I PMLR
%P 19050--19088
%U https://proceedings.mlr.press/v162/sahiner22a.html
%V 162
%X Vision transformers using self-attention or its proposed alternatives have demonstrated promising results in many image related tasks. However, the underpinning inductive bias of attention is not well understood. To address this issue, this paper analyzes attention through the lens of convex duality. For the non-linear dot-product self-attention, and alternative mechanisms such as MLP-mixer and Fourier Neural Operator (FNO), we derive equivalent finite-dimensional convex problems that are interpretable and solvable to global optimality. The convex programs lead to block nuclear-norm regularization that promotes low rank in the latent feature and token dimensions. In particular, we show how self-attention networks implicitly clusters the tokens, based on their latent similarity. We conduct experiments for transferring a pre-trained transformer backbone for CIFAR-100 classification by fine-tuning a variety of convex attention heads. The results indicate the merits of the bias induced by attention compared with the existing MLP or linear heads.
APA
Sahiner, A., Ergen, T., Ozturkler, B., Pauly, J., Mardani, M., & Pilanci, M. (2022). Unraveling Attention via Convex Duality: Analysis and Interpretations of Vision Transformers. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research, 162:19050-19088. Available from https://proceedings.mlr.press/v162/sahiner22a.html.