Visual Attention Emerges from Recurrent Sparse Reconstruction

Baifeng Shi, Yale Song, Neel Joshi, Trevor Darrell, Xin Wang
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:20041-20056, 2022.

Abstract

In human vision, visual attention supports robust perception under noise, corruption, and distribution shifts, conditions under which modern neural networks still fall short. We present VARS (Visual Attention from Recurrent Sparse reconstruction), a new attention formulation built on two prominent features of the human visual attention mechanism: recurrence and sparsity. Related features are grouped together through recurrent connections between neurons, and salient objects emerge through sparse regularization. VARS adopts an attractor network with recurrent connections that converges toward a stable pattern over time. Network layers are represented as ordinary differential equations (ODEs), and attention is formulated as a recurrent attractor network that equivalently optimizes a sparse reconstruction of the input using a dictionary of "templates" encoding the underlying patterns in the data. We show that self-attention is a special case of VARS with a single optimization step and no sparsity constraint. VARS can be used as a drop-in replacement for self-attention in popular vision transformers, consistently improving their robustness across a range of benchmarks.
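To make the formulation concrete, below is a minimal sketch of attention as recurrent sparse reconstruction. It is not the authors' released implementation: the function name, the ISTA-style solver, and the hyperparameters (n_steps, lam, step) are illustrative assumptions chosen to match the abstract's description.

import torch
import torch.nn.functional as F

def vars_attention_sketch(x, templates, n_steps=8, lam=0.1, step=0.5):
    # Hypothetical sketch: attention as recurrent sparse reconstruction.
    # x: (n, d) input tokens; templates: (k, d) dictionary of learned patterns.
    # Iterates a soft-thresholded gradient step (ISTA) on
    #   min_z 0.5 * ||x - z @ templates||^2 + lam * ||z||_1
    # and returns the reconstruction z @ templates as the attention output.
    z = x.new_zeros(x.shape[0], templates.shape[0])
    for _ in range(n_steps):                   # recurrent refinement toward an attractor
        residual = x - z @ templates           # current reconstruction error
        z = z + step * residual @ templates.T  # gradient step on the data term
        z = F.softshrink(z, lam * step)        # sparsity via soft-thresholding
    return z @ templates

With n_steps = 1 and lam = 0, the loop collapses to a single unconstrained dot-product update, which is the sense in which the abstract casts self-attention as a special case of VARS.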

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-shi22e,
  title     = {Visual Attention Emerges from Recurrent Sparse Reconstruction},
  author    = {Shi, Baifeng and Song, Yale and Joshi, Neel and Darrell, Trevor and Wang, Xin},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {20041--20056},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/shi22e/shi22e.pdf},
  url       = {https://proceedings.mlr.press/v162/shi22e.html}
}
Endnote
%0 Conference Paper
%T Visual Attention Emerges from Recurrent Sparse Reconstruction
%A Baifeng Shi
%A Yale Song
%A Neel Joshi
%A Trevor Darrell
%A Xin Wang
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-shi22e
%I PMLR
%P 20041--20056
%U https://proceedings.mlr.press/v162/shi22e.html
%V 162
APA
Shi, B., Song, Y., Joshi, N., Darrell, T. & Wang, X. (2022). Visual Attention Emerges from Recurrent Sparse Reconstruction. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:20041-20056. Available from https://proceedings.mlr.press/v162/shi22e.html.
