Memorization in Attention-only Transformers

Léo Dana, Muni Sreenivas Pydi, Yann Chevaleyre
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:3133-3141, 2025.

Abstract

Recent research has explored the memorization capacity of multi-head attention, but these findings are constrained by unrealistic limitations on the context size. We present a novel proof for language-based Transformers that extends the current hypothesis to any context size. Our approach improves upon the state-of-the-art by achieving more effective exact memorization with an attention layer, while also introducing the concept of approximate memorization of distributions. Through experimental validation, we demonstrate that our proposed bounds more accurately reflect the true memorization capacity of language models, and provide a precise comparison with prior work.
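For orientation, the following LaTeX sketch gives one standard way the two notions mentioned above (exact memorization and approximate memorization of distributions) are formalized for a next-token predictor. The symbols f_\theta, x_i, y_i, p_i, d, and \varepsilon are illustrative placeholders, not taken from the paper; its precise definitions may differ.

% Hedged sketch only: standard notions of memorization for a next-token model,
% not necessarily the paper's exact formalization.
% f_\theta : model over vocabulary V;  (x_i, y_i), i = 1, ..., N : (context, next-token) pairs.
\[
\text{Exact memorization:}\qquad
\operatorname*{arg\,max}_{v \in V} f_\theta(v \mid x_i) = y_i
\quad \text{for all } i = 1, \dots, N.
\]
\[
\text{Approximate memorization of target distributions } p_i:\qquad
d\big(f_\theta(\cdot \mid x_i),\, p_i\big) \le \varepsilon
\quad \text{for all } i,
\]
% where d is a divergence between distributions (e.g. total variation) and \varepsilon > 0 is a tolerance.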

Cite this Paper


BibTeX
@InProceedings{pmlr-v258-dana25a,
  title     = {Memorization in Attention-only Transformers},
  author    = {Dana, L{\'e}o and Pydi, Muni Sreenivas and Chevaleyre, Yann},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages     = {3133--3141},
  year      = {2025},
  editor    = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume    = {258},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--05 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/dana25a/dana25a.pdf},
  url       = {https://proceedings.mlr.press/v258/dana25a.html},
  abstract  = {Recent research has explored the memorization capacity of multi-head attention, but these findings are constrained by unrealistic limitations on the context size. We present a novel proof for language-based Transformers that extends the current hypothesis to any context size. Our approach improves upon the state-of-the-art by achieving more effective exact memorization with an attention layer, while also introducing the concept of approximate memorization of distributions. Through experimental validation, we demonstrate that our proposed bounds more accurately reflect the true memorization capacity of language models, and provide a precise comparison with prior work.}
}
Endnote
%0 Conference Paper
%T Memorization in Attention-only Transformers
%A Léo Dana
%A Muni Sreenivas Pydi
%A Yann Chevaleyre
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-dana25a
%I PMLR
%P 3133--3141
%U https://proceedings.mlr.press/v258/dana25a.html
%V 258
%X Recent research has explored the memorization capacity of multi-head attention, but these findings are constrained by unrealistic limitations on the context size. We present a novel proof for language-based Transformers that extends the current hypothesis to any context size. Our approach improves upon the state-of-the-art by achieving more effective exact memorization with an attention layer, while also introducing the concept of approximate memorization of distributions. Through experimental validation, we demonstrate that our proposed bounds more accurately reflect the true memorization capacity of language models, and provide a precise comparison with prior work.
APA
Dana, L., Pydi, M.S. & Chevaleyre, Y. (2025). Memorization in Attention-only Transformers. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:3133-3141. Available from https://proceedings.mlr.press/v258/dana25a.html.