How Transformers Learn Structured Data: Insights From Hierarchical Filtering

Jerome Garnier-Brun, Marc Mezard, Emanuele Moscato, Luca Saglietti
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:18831-18847, 2025.

Abstract

Understanding the learning process and the embedded computation in transformers is becoming a central goal for the development of interpretable AI. In the present study, we introduce a hierarchical filtering procedure for data models of sequences on trees, allowing us to hand-tune the range of positional correlations in the data. Leveraging this controlled setting, we provide evidence that vanilla encoder-only transformers can approximate the exact inference algorithm when trained on root classification and masked language modeling tasks, and study how this computation is discovered and implemented. We find that correlations at larger distances, corresponding to increasing layers of the hierarchy, are sequentially included by the network during training. By comparing attention maps from models trained with varying degrees of filtering and by probing the different encoder levels, we find clear evidence of a reconstruction of correlations on successive length scales corresponding to the various levels of the hierarchy, which we relate to a plausible implementation of the exact inference algorithm within the same architecture.
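To give a concrete picture of the kind of controlled data model the abstract refers to, below is a minimal, hypothetical Python sketch of sequences generated on a binary tree with a tunable "filtering" level. The transition tensor T, the depth L, and the parameter k_filter are illustrative assumptions for this sketch only, not the paper's actual construction; the intent is simply to show how a single knob can limit the range of positional correlations in tree-structured data.

import numpy as np

# Minimal sketch (not the authors' code): leaves of a depth-L binary tree form a
# sequence of length 2**L. Levels near the leaves use a correlated parent->children
# rule T; the remaining upper levels draw children independently of the parent,
# so correlations only extend over blocks of 2**k_filter adjacent positions.

rng = np.random.default_rng(0)
q, L = 4, 4                                                  # vocabulary size, tree depth
T = rng.dirichlet(np.ones(q * q), size=q).reshape(q, q, q)   # hypothetical p(children | parent)
prior = np.ones(q) / q                                       # uniform root distribution

def sample_sequence(k_filter):
    """Sample a sequence; k_filter = number of unfiltered (correlated) bottom levels."""
    level = [rng.choice(q, p=prior)]                 # root symbol
    for depth in range(L):
        nxt = []
        for parent in level:
            if depth < L - k_filter:                 # filtered level: children independent of parent
                c1, c2 = rng.choice(q, size=2)
            else:                                    # unfiltered level: children drawn jointly from T
                idx = rng.choice(q * q, p=T[parent].ravel())
                c1, c2 = divmod(idx, q)
            nxt += [c1, c2]
        level = nxt
    return np.array(level)                           # sequence of length 2**L

print(sample_sequence(k_filter=2))

With k_filter = 0 all symbols are independent, while k_filter = L recovers the fully hierarchical model; intermediate values hand-tune the correlation range, which is the role the filtering procedure plays in the paper's experiments.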

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-garnier-brun25a,
  title     = {How Transformers Learn Structured Data: Insights From Hierarchical Filtering},
  author    = {Garnier-Brun, Jerome and Mezard, Marc and Moscato, Emanuele and Saglietti, Luca},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {18831--18847},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/garnier-brun25a/garnier-brun25a.pdf},
  url       = {https://proceedings.mlr.press/v267/garnier-brun25a.html},
  abstract  = {Understanding the learning process and the embedded computation in transformers is becoming a central goal for the development of interpretable AI. In the present study, we introduce a hierarchical filtering procedure for data models of sequences on trees, allowing us to hand-tune the range of positional correlations in the data. Leveraging this controlled setting, we provide evidence that vanilla encoder-only transformers can approximate the exact inference algorithm when trained on root classification and masked language modeling tasks, and study how this computation is discovered and implemented. We find that correlations at larger distances, corresponding to increasing layers of the hierarchy, are sequentially included by the network during training. By comparing attention maps from models trained with varying degrees of filtering and by probing the different encoder levels, we find clear evidence of a reconstruction of correlations on successive length scales corresponding to the various levels of the hierarchy, which we relate to a plausible implementation of the exact inference algorithm within the same architecture.}
}
Endnote
%0 Conference Paper
%T How Transformers Learn Structured Data: Insights From Hierarchical Filtering
%A Jerome Garnier-Brun
%A Marc Mezard
%A Emanuele Moscato
%A Luca Saglietti
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-garnier-brun25a
%I PMLR
%P 18831--18847
%U https://proceedings.mlr.press/v267/garnier-brun25a.html
%V 267
%X Understanding the learning process and the embedded computation in transformers is becoming a central goal for the development of interpretable AI. In the present study, we introduce a hierarchical filtering procedure for data models of sequences on trees, allowing us to hand-tune the range of positional correlations in the data. Leveraging this controlled setting, we provide evidence that vanilla encoder-only transformers can approximate the exact inference algorithm when trained on root classification and masked language modeling tasks, and study how this computation is discovered and implemented. We find that correlations at larger distances, corresponding to increasing layers of the hierarchy, are sequentially included by the network during training. By comparing attention maps from models trained with varying degrees of filtering and by probing the different encoder levels, we find clear evidence of a reconstruction of correlations on successive length scales corresponding to the various levels of the hierarchy, which we relate to a plausible implementation of the exact inference algorithm within the same architecture.
APA
Garnier-Brun, J., Mezard, M., Moscato, E. & Saglietti, L. (2025). How Transformers Learn Structured Data: Insights From Hierarchical Filtering. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:18831-18847. Available from https://proceedings.mlr.press/v267/garnier-brun25a.html.
