All or None: Identifiable Linear Properties of Next-Token Predictors in Language Modeling

Emanuele Marconato, Sebastien Lachapelle, Sebastian Weichwald, Luigi Gresele
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:4123-4131, 2025.

Abstract

We analyze identifiability as a possible explanation for the ubiquity of linear properties across language models, such as the vector difference between the representations of “easy” and “easiest” being parallel to that between “lucky” and “luckiest”. For this, we ask whether finding a linear property in one model implies that any model that induces the same distribution has that property, too. To answer that, we first prove an identifiability result to characterize distribution-equivalent next-token predictors, lifting a diversity requirement of previous results. Second, based on a refinement of relational linearity [Paccanaro and Hinton, 2001; Hernandez et al., 2024], we show how many notions of linearity are amenable to our analysis. Finally, we show that under suitable conditions, these linear properties either hold in all or none of the distribution-equivalent next-token predictors.
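
The sketch below is an illustrative example, not code from the paper: it checks the kind of linear property the abstract describes, namely whether the difference vector between the representations of “easy” and “easiest” is roughly parallel to that between “lucky” and “luckiest”, using cosine similarity as the parallelism measure. The embedding values and the emb dictionary are hypothetical placeholders; in practice the vectors would come from a trained language model's token representations.

import numpy as np

def cosine(u, v):
    # Cosine similarity between two vectors; values near 1.0 mean nearly parallel.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical 4-dimensional token representations (placeholder values).
emb = {
    "easy":     np.array([0.2, 1.0, -0.3, 0.5]),
    "easiest":  np.array([0.2, 1.0, -0.3, 0.5]) + np.array([0.70, -0.10, 0.40, 0.00]),
    "lucky":    np.array([-0.4, 0.6, 0.9, -0.2]),
    "luckiest": np.array([-0.4, 0.6, 0.9, -0.2]) + np.array([0.65, -0.05, 0.45, 0.05]),
}

# Difference vectors associated with the superlative relation.
d_easy = emb["easiest"] - emb["easy"]
d_lucky = emb["luckiest"] - emb["lucky"]

# A value close to 1.0 indicates the two difference vectors point in nearly the same direction.
print("cosine similarity of difference vectors:", cosine(d_easy, d_lucky))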

Cite this Paper

BibTeX
@InProceedings{pmlr-v258-marconato25a,
  title     = {All or None: Identifiable Linear Properties of Next-Token Predictors in Language Modeling},
  author    = {Marconato, Emanuele and Lachapelle, Sebastien and Weichwald, Sebastian and Gresele, Luigi},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages     = {4123--4131},
  year      = {2025},
  editor    = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume    = {258},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--05 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/marconato25a/marconato25a.pdf},
  url       = {https://proceedings.mlr.press/v258/marconato25a.html},
  abstract  = {We analyze identifiability as a possible explanation for the ubiquity of linear properties across language models, such as the vector difference between the representations of “easy” and “easiest” being parallel to that between “lucky” and “luckiest”. For this, we ask whether finding a linear property in one model implies that any model that induces the same distribution has that property, too. To answer that, we first prove an identifiability result to characterize distribution-equivalent next-token predictors, lifting a diversity requirement of previous results. Second, based on a refinement of relational linearity [Paccanaro and Hinton, 2001; Hernandez et al., 2024], we show how many notions of linearity are amenable to our analysis. Finally, we show that under suitable conditions, these linear properties either hold in all or none distribution equivalent next-token predictors.}
}
Endnote
%0 Conference Paper
%T All or None: Identifiable Linear Properties of Next-Token Predictors in Language Modeling
%A Emanuele Marconato
%A Sebastien Lachapelle
%A Sebastian Weichwald
%A Luigi Gresele
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-marconato25a
%I PMLR
%P 4123--4131
%U https://proceedings.mlr.press/v258/marconato25a.html
%V 258
%X We analyze identifiability as a possible explanation for the ubiquity of linear properties across language models, such as the vector difference between the representations of “easy” and “easiest” being parallel to that between “lucky” and “luckiest”. For this, we ask whether finding a linear property in one model implies that any model that induces the same distribution has that property, too. To answer that, we first prove an identifiability result to characterize distribution-equivalent next-token predictors, lifting a diversity requirement of previous results. Second, based on a refinement of relational linearity [Paccanaro and Hinton, 2001; Hernandez et al., 2024], we show how many notions of linearity are amenable to our analysis. Finally, we show that under suitable conditions, these linear properties either hold in all or none distribution equivalent next-token predictors.
APA
Marconato, E., Lachapelle, S., Weichwald, S. & Gresele, L. (2025). All or None: Identifiable Linear Properties of Next-Token Predictors in Language Modeling. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:4123-4131. Available from https://proceedings.mlr.press/v258/marconato25a.html.
