The underlying structures of self-attention: symmetry, directionality, and emergent dynamics in Transformer training

Matteo Saponati, Pascal Josef Sager, Pau Vilimelis Aceituno, Thilo Stadelmann, Benjamin F Grewe
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:52958-52994, 2025.

Abstract

Self-attention is essential to Transformer architectures, yet how information is embedded in the self-attention matrices and how different objective functions impact this process remains unclear. We present a mathematical framework to analyze self-attention matrices by deriving the structures governing their weight updates. Using this framework, we demonstrate that bidirectional training induces symmetry in the weight matrices, while autoregressive training results in directionality and column dominance. Our theoretical findings are validated across multiple Transformer models — including ModernBERT, GPT, LLaMA3, and Mistral — and input modalities like text, vision, and audio. Finally, we apply these insights by showing that symmetric initialization improves the performance of encoder-only models on language tasks. This mathematical analysis offers a novel theoretical perspective on how information is embedded through self-attention, thereby improving the interpretability of Transformer models.
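To make the symmetry result and the symmetric-initialization experiment mentioned in the abstract concrete, here is a minimal sketch (not the authors' code; the function names and the particular symmetry score are illustrative assumptions). It measures how symmetric the combined query-key product W_Q W_K^T is, and constructs a tied initialization W_Q = W_K that makes this product symmetric before training begins.

import torch

def symmetry_score(W_q, W_k):
    # Score in [-1, 1]: +1 if M = W_q W_k^T is exactly symmetric,
    # -1 if exactly antisymmetric, close to 0 for a generic random matrix.
    M = W_q @ W_k.T
    sym = 0.5 * (M + M.T)       # symmetric part of M
    anti = 0.5 * (M - M.T)      # antisymmetric part of M
    num = sym.norm() ** 2 - anti.norm() ** 2
    den = sym.norm() ** 2 + anti.norm() ** 2
    return (num / den).item()

def symmetric_init(d_model, d_head):
    # Tie W_q and W_k at initialization so that W_q W_k^T is symmetric
    # (and positive semidefinite) at step 0. Hypothetical helper, for illustration.
    W = torch.randn(d_model, d_head) / d_model ** 0.5
    return W.clone(), W.clone()

# Independent random initialization: score near 0.
W_q, W_k = torch.randn(512, 64), torch.randn(512, 64)
print(f"independent init: {symmetry_score(W_q, W_k):+.3f}")

# Tied initialization: score exactly +1.
W_q, W_k = symmetric_init(512, 64)
print(f"tied init:        {symmetry_score(W_q, W_k):+.3f}")

With tied matrices the product W W^T is symmetric by construction, so the score is +1, whereas two independently drawn matrices give a product whose symmetric and antisymmetric parts have comparable norm and the score stays near 0. How such a score evolves during bidirectional versus autoregressive training is the subject of the paper itself.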

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-saponati25a,
  title     = {The underlying structures of self-attention: symmetry, directionality, and emergent dynamics in Transformer training},
  author    = {Saponati, Matteo and Sager, Pascal Josef and Vilimelis Aceituno, Pau and Stadelmann, Thilo and Grewe, Benjamin F},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {52958--52994},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/saponati25a/saponati25a.pdf},
  url       = {https://proceedings.mlr.press/v267/saponati25a.html},
  abstract  = {Self-attention is essential to Transformer architectures, yet how information is embedded in the self-attention matrices and how different objective functions impact this process remains unclear. We present a mathematical framework to analyze self-attention matrices by deriving the structures governing their weight updates. Using this framework, we demonstrate that bidirectional training induces symmetry in the weight matrices, while autoregressive training results in directionality and column dominance. Our theoretical findings are validated across multiple Transformer models — including ModernBERT, GPT, LLaMA3, and Mistral — and input modalities like text, vision, and audio. Finally, we apply these insights by showing that symmetric initialization improves the performance of encoder-only models on language tasks. This mathematical analysis offers a novel theoretical perspective on how information is embedded through self-attention, thereby improving the interpretability of Transformer models.}
}
Endnote
%0 Conference Paper
%T The underlying structures of self-attention: symmetry, directionality, and emergent dynamics in Transformer training
%A Matteo Saponati
%A Pascal Josef Sager
%A Pau Vilimelis Aceituno
%A Thilo Stadelmann
%A Benjamin F Grewe
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-saponati25a
%I PMLR
%P 52958--52994
%U https://proceedings.mlr.press/v267/saponati25a.html
%V 267
%X Self-attention is essential to Transformer architectures, yet how information is embedded in the self-attention matrices and how different objective functions impact this process remains unclear. We present a mathematical framework to analyze self-attention matrices by deriving the structures governing their weight updates. Using this framework, we demonstrate that bidirectional training induces symmetry in the weight matrices, while autoregressive training results in directionality and column dominance. Our theoretical findings are validated across multiple Transformer models — including ModernBERT, GPT, LLaMA3, and Mistral — and input modalities like text, vision, and audio. Finally, we apply these insights by showing that symmetric initialization improves the performance of encoder-only models on language tasks. This mathematical analysis offers a novel theoretical perspective on how information is embedded through self-attention, thereby improving the interpretability of Transformer models.
APA
Saponati, M., Sager, P.J., Vilimelis Aceituno, P., Stadelmann, T. & Grewe, B.F. (2025). The underlying structures of self-attention: symmetry, directionality, and emergent dynamics in Transformer training. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:52958-52994. Available from https://proceedings.mlr.press/v267/saponati25a.html.