Learning a Fourier Transform for Linear Relative Positional Encodings in Transformers

Krzysztof Choromanski, Shanda Li, Valerii Likhosherstov, Kumar Avinava Dubey, Shengjie Luo, Di He, Yiming Yang, Tamas Sarlos, Thomas Weingarten, Adrian Weller
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:2278-2286, 2024.

Abstract

We propose a new class of linear Transformers called FourierLearner-Transformers (FLTs), which incorporate a wide range of relative positional encoding mechanisms (RPEs). These include regular RPE techniques applied to sequential data, as well as novel RPEs operating on geometric data embedded in higher-dimensional Euclidean spaces. FLTs construct the optimal RPE mechanism implicitly by learning its spectral representation. As opposed to other architectures combining efficient low-rank linear attention with RPEs, FLTs remain practical in terms of their memory usage and do not require additional assumptions about the structure of the RPE mask. Moreover, FLTs allow certain structural inductive-bias techniques to be applied when specifying masking strategies: for example, they provide a way to learn the local RPEs introduced in this paper, which yield accuracy gains compared with several other linear Transformers for language modeling. We also thoroughly test FLTs on other data modalities and tasks, such as image classification, 3D molecular modeling, and learnable optimizers. To the best of our knowledge, for 3D molecular data, FLTs are the first Transformer architectures providing linear attention and incorporating RPE masking.
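The core idea the abstract describes, representing an RPE mask through its spectral (Fourier) representation so that it factorizes into low-rank terms compatible with linear attention, can be illustrated with a minimal sketch. This is not the authors' code: the frequencies here are fitted by least squares on a fixed toy mask rather than learned end-to-end, and all names and parameters are hypothetical.

```python
# Hedged sketch: a relative positional bias f(i - j) admits a spectral
# representation f(r) ~ sum_m g_m * exp(2*pi*i * xi_m * r). Because each
# exponential splits as exp(2*pi*i*xi_m*i) * exp(-2*pi*i*xi_m*j), the full
# L x L mask factorizes into rank-M factors and never has to be
# materialized, which is what makes it compatible with linear attention.
import numpy as np

rng = np.random.default_rng(0)
L, M = 64, 32                                   # sequence length, # frequencies

# Toy target RPE mask: depends only on the relative position i - j.
f = lambda r: np.exp(-np.abs(r) / 8.0)          # e.g. a decaying positional bias
rel = np.arange(L)[:, None] - np.arange(L)[None, :]
mask = f(rel)                                   # L x L Toeplitz matrix

# "Learn" the spectral representation: fit weights g for random frequencies
# xi by least squares over all relative offsets (a stand-in for training).
xi = rng.uniform(-0.5, 0.5, M)                  # frequencies
r = np.arange(-(L - 1), L)                      # all 2L-1 relative offsets
A = np.exp(2j * np.pi * np.outer(r, xi))        # (2L-1) x M Fourier design
g, *_ = np.linalg.lstsq(A, f(r), rcond=None)    # fitted spectral weights

# Low-rank factorization of the mask: mask_hat = (P * g) @ Q^H, rank <= M.
P = np.exp(2j * np.pi * np.outer(np.arange(L), xi))   # L x M "query" factor
Q = np.exp(2j * np.pi * np.outer(np.arange(L), xi))   # L x M "key" factor
mask_hat = (P * g) @ Q.conj().T                       # L x L, built from rank-M parts

print(np.abs(mask - mask_hat.real).max())       # approximation error of the fit
```

The factorization on the last lines is exact for the fitted spectral representation; in FLT-style architectures the per-frequency factors would be absorbed into the query/key feature maps, so attention stays linear in the sequence length.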

Cite this Paper


BibTeX
@InProceedings{pmlr-v238-choromanski24a,
  title     = {Learning a {F}ourier Transform for Linear Relative Positional Encodings in Transformers},
  author    = {Choromanski, Krzysztof and Li, Shanda and Likhosherstov, Valerii and Avinava Dubey, Kumar and Luo, Shengjie and He, Di and Yang, Yiming and Sarlos, Tamas and Weingarten, Thomas and Weller, Adrian},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages     = {2278--2286},
  year      = {2024},
  editor    = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume    = {238},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--04 May},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v238/choromanski24a/choromanski24a.pdf},
  url       = {https://proceedings.mlr.press/v238/choromanski24a.html},
  abstract  = {We propose a new class of linear Transformers called FourierLearner-Transformers (FLTs), which incorporate a wide range of relative positional encoding mechanisms (RPEs). These include regular RPE techniques applied for sequential data, as well as novel RPEs operating on geometric data embedded in higher-dimensional Euclidean spaces. FLTs construct the optimal RPE mechanism implicitly by learning its spectral representation. As opposed to other architectures combining efficient low-rank linear attention with RPEs, FLTs remain practical in terms of their memory usage and do not require additional assumptions about the structure of the RPE mask. Besides, FLTs allow for applying certain structural inductive bias techniques to specify masking strategies, e.g. they provide a way to learn the so-called local RPEs introduced in this paper and give accuracy gains as compared with several other linear Transformers for language modeling. We also thoroughly test FLTs on other data modalities and tasks, such as image classification, 3D molecular modeling, and learnable optimizers. To the best of our knowledge, for 3D molecular data, FLTs are the first Transformer architectures providing linear attention and incorporating RPE masking.}
}
Endnote
%0 Conference Paper
%T Learning a Fourier Transform for Linear Relative Positional Encodings in Transformers
%A Krzysztof Choromanski
%A Shanda Li
%A Valerii Likhosherstov
%A Kumar Avinava Dubey
%A Shengjie Luo
%A Di He
%A Yiming Yang
%A Tamas Sarlos
%A Thomas Weingarten
%A Adrian Weller
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-choromanski24a
%I PMLR
%P 2278--2286
%U https://proceedings.mlr.press/v238/choromanski24a.html
%V 238
%X We propose a new class of linear Transformers called FourierLearner-Transformers (FLTs), which incorporate a wide range of relative positional encoding mechanisms (RPEs). These include regular RPE techniques applied for sequential data, as well as novel RPEs operating on geometric data embedded in higher-dimensional Euclidean spaces. FLTs construct the optimal RPE mechanism implicitly by learning its spectral representation. As opposed to other architectures combining efficient low-rank linear attention with RPEs, FLTs remain practical in terms of their memory usage and do not require additional assumptions about the structure of the RPE mask. Besides, FLTs allow for applying certain structural inductive bias techniques to specify masking strategies, e.g. they provide a way to learn the so-called local RPEs introduced in this paper and give accuracy gains as compared with several other linear Transformers for language modeling. We also thoroughly test FLTs on other data modalities and tasks, such as image classification, 3D molecular modeling, and learnable optimizers. To the best of our knowledge, for 3D molecular data, FLTs are the first Transformer architectures providing linear attention and incorporating RPE masking.
APA
Choromanski, K., Li, S., Likhosherstov, V., Avinava Dubey, K., Luo, S., He, D., Yang, Y., Sarlos, T., Weingarten, T. & Weller, A. (2024). Learning a Fourier Transform for Linear Relative Positional Encodings in Transformers. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:2278-2286. Available from https://proceedings.mlr.press/v238/choromanski24a.html.