Positional Attention: Expressivity and Learnability of Algorithmic Computation

Artur Back De Luca, George Giapitzakis, Shenghao Yang, Petar Veličković, Kimon Fountoulakis
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:2335-2390, 2025.

Abstract

There is a growing interest in the ability of neural networks to execute algorithmic tasks (e.g., arithmetic, summary statistics, and sorting). The goal of this work is to better understand the role of attention in Transformers for algorithmic execution. Its importance for algorithmic execution has been studied theoretically and empirically using parallel computational models. Notably, many parallel algorithms communicate between processors solely using positional information. Inspired by this observation, we investigate how Transformers can execute algorithms using positional attention, where attention weights depend exclusively on positional encodings. We prove that Transformers with positional attention (positional Transformers) maintain the same expressivity as parallel computational models, incurring a logarithmic depth cost relative to the input length. We analyze their in-distribution learnability and explore how parameter norms in positional attention affect sample complexity. Our results show that positional Transformers introduce a learning trade-off: while they exhibit better theoretical dependence on parameter norms, certain tasks may require more layers, which can, in turn, increase sample complexity. Finally, we empirically explore the out-of-distribution performance of positional Transformers and find that they perform well in tasks where their underlying algorithmic solution relies on positional information.
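The core mechanism described in the abstract — attention weights computed exclusively from positional encodings, with token values entering only through the value projection — can be illustrated with a minimal single-head sketch. This is an illustrative reconstruction, not the paper's implementation; the matrix names (`Wq`, `Wk`, `Wv`) and the single-head numpy formulation are assumptions for exposition.

```python
import numpy as np

def softmax(z):
    # Row-wise softmax with max-subtraction for numerical stability.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def positional_attention(X, P, Wq, Wk, Wv):
    """Single-head attention whose weights depend only on the positional
    encodings P, never on the input values X (an illustrative sketch).

    X  : (n, d_in)  input values (e.g., the numbers to be sorted or summed)
    P  : (n, d_pos) positional encodings
    Wq : (d_pos, d_k) query projection, applied to P rather than X
    Wk : (d_pos, d_k) key projection, applied to P rather than X
    Wv : (d_in, d_v)  value projection, applied to X
    """
    Q = P @ Wq  # queries come from positions only
    K = P @ Wk  # keys come from positions only
    A = softmax(Q @ K.T / np.sqrt(Wq.shape[1]))  # input-independent weights
    return A @ (X @ Wv)  # the input data flows in through the values
```

Because the attention matrix `A` is fixed once the positions are fixed, the layer is linear in `X`: the same communication pattern between positions is applied regardless of the input values, mirroring how parallel algorithms route data between processors using positional information alone.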

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-back-de-luca25a,
  title     = {Positional Attention: Expressivity and Learnability of Algorithmic Computation},
  author    = {Back De Luca, Artur and Giapitzakis, George and Yang, Shenghao and Veli\v{c}kovi\'{c}, Petar and Fountoulakis, Kimon},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {2335--2390},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/back-de-luca25a/back-de-luca25a.pdf},
  url       = {https://proceedings.mlr.press/v267/back-de-luca25a.html},
  abstract  = {There is a growing interest in the ability of neural networks to execute algorithmic tasks (e.g., arithmetic, summary statistics, and sorting). The goal of this work is to better understand the role of attention in Transformers for algorithmic execution. Its importance for algorithmic execution has been studied theoretically and empirically using parallel computational models. Notably, many parallel algorithms communicate between processors solely using positional information. Inspired by this observation, we investigate how Transformers can execute algorithms using positional attention, where attention weights depend exclusively on positional encodings. We prove that Transformers with positional attention (positional Transformers) maintain the same expressivity of parallel computational models, incurring a logarithmic depth cost relative to the input length. We analyze their in-distribution learnability and explore how parameter norms in positional attention affect sample complexity. Our results show that positional Transformers introduce a learning trade-off: while they exhibit better theoretical dependence on parameter norms, certain tasks may require more layers, which can, in turn, increase sample complexity. Finally, we empirically explore the out-of-distribution performance of positional Transformers and find that they perform well in tasks where their underlying algorithmic solution relies on positional information.}
}
Endnote
%0 Conference Paper
%T Positional Attention: Expressivity and Learnability of Algorithmic Computation
%A Artur Back De Luca
%A George Giapitzakis
%A Shenghao Yang
%A Petar Veličković
%A Kimon Fountoulakis
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-back-de-luca25a
%I PMLR
%P 2335--2390
%U https://proceedings.mlr.press/v267/back-de-luca25a.html
%V 267
%X There is a growing interest in the ability of neural networks to execute algorithmic tasks (e.g., arithmetic, summary statistics, and sorting). The goal of this work is to better understand the role of attention in Transformers for algorithmic execution. Its importance for algorithmic execution has been studied theoretically and empirically using parallel computational models. Notably, many parallel algorithms communicate between processors solely using positional information. Inspired by this observation, we investigate how Transformers can execute algorithms using positional attention, where attention weights depend exclusively on positional encodings. We prove that Transformers with positional attention (positional Transformers) maintain the same expressivity of parallel computational models, incurring a logarithmic depth cost relative to the input length. We analyze their in-distribution learnability and explore how parameter norms in positional attention affect sample complexity. Our results show that positional Transformers introduce a learning trade-off: while they exhibit better theoretical dependence on parameter norms, certain tasks may require more layers, which can, in turn, increase sample complexity. Finally, we empirically explore the out-of-distribution performance of positional Transformers and find that they perform well in tasks where their underlying algorithmic solution relies on positional information.
APA
Back De Luca, A., Giapitzakis, G., Yang, S., Veličković, P. & Fountoulakis, K. (2025). Positional Attention: Expressivity and Learnability of Algorithmic Computation. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:2335-2390. Available from https://proceedings.mlr.press/v267/back-de-luca25a.html.