How do Transformers Perform In-Context Autoregressive Learning?

Michael Eli Sander, Raja Giryes, Taiji Suzuki, Mathieu Blondel, Gabriel Peyré
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:43235-43254, 2024.

Abstract

Transformers have achieved state-of-the-art performance in language modeling tasks. However, the reasons behind their tremendous success are still unclear. In this paper, towards a better understanding, we train a Transformer model on a simple next token prediction task, where sequences are generated as a first-order autoregressive process $s_{t+1} = W s_t$. We show how a trained Transformer predicts the next token by first learning $W$ in-context, then applying a prediction mapping. We call the resulting procedure in-context autoregressive learning. More precisely, focusing on commuting orthogonal matrices $W$, we first show that a trained one-layer linear Transformer implements one step of gradient descent for the minimization of an inner objective function, when considering augmented tokens. When the tokens are not augmented, we characterize the global minima of a one-layer diagonal linear multi-head Transformer. Importantly, we exhibit orthogonality between heads and show that positional encoding captures trigonometric relations in the data. On the experimental side, we consider the general case of non-commuting orthogonal matrices and generalize our theoretical findings.
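For a concrete picture of the setup, the following minimal Python sketch (not code from the paper; the dimensions, sequence length, QR-based sampling of W, and the linear-attention parametrization are all illustrative assumptions) generates sequences from the autoregressive process $s_{t+1} = W s_t$ with a random orthogonal $W$ and shows the shape of a one-layer linear self-attention prediction.

import numpy as np

def random_orthogonal(d, rng):
    # Sample a d x d orthogonal matrix from the QR decomposition of a Gaussian matrix.
    q, r = np.linalg.qr(rng.standard_normal((d, d)))
    return q * np.sign(np.diag(r))  # sign fix so the columns are uniformly (Haar) distributed

def generate_sequence(W, T, rng):
    # Unroll the first-order autoregression s_{t+1} = W s_t from a random unit-norm start.
    s = rng.standard_normal(W.shape[0])
    s /= np.linalg.norm(s)
    seq = [s]
    for _ in range(T - 1):
        seq.append(W @ seq[-1])
    return np.stack(seq)  # shape (T, d); orthogonal W keeps every token on the unit sphere

def linear_attention_prediction(seq, A, B):
    # Sketch of a one-layer *linear* self-attention readout: the last token is the query,
    # earlier tokens serve as keys and values, and no softmax is applied to the scores.
    # (Schematic parametrization only, not the exact model analyzed in the paper.)
    context, query = seq[:-1], seq[-1]
    scores = context @ A @ query                             # unnormalized scores, shape (T-1,)
    return (scores[:, None] * (context @ B.T)).sum(axis=0)   # predicted next token, shape (d,)

rng = np.random.default_rng(0)
d, T = 8, 32                              # illustrative sizes, not the paper's settings
W = random_orthogonal(d, rng)
seq = generate_sequence(W, T, rng)
A, B = rng.standard_normal((d, d)), rng.standard_normal((d, d))  # untrained parameters
prediction = linear_attention_prediction(seq, A, B)
target = W @ seq[-1]                      # the true next token s_{T+1}

Training parameters such as A and B (and, in the augmented-token setting, the positional encodings) on many sequences of this kind is what allows the model to recover W in-context, which is the regime analyzed in the paper.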

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-sander24a,
  title     = {How do Transformers Perform In-Context Autoregressive Learning?},
  author    = {Sander, Michael Eli and Giryes, Raja and Suzuki, Taiji and Blondel, Mathieu and Peyr\'{e}, Gabriel},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {43235--43254},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/sander24a/sander24a.pdf},
  url       = {https://proceedings.mlr.press/v235/sander24a.html}
}
APA
Sander, M.E., Giryes, R., Suzuki, T., Blondel, M. & Peyré, G. (2024). How do Transformers Perform In-Context Autoregressive Learning?. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:43235-43254. Available from https://proceedings.mlr.press/v235/sander24a.html.