Test-Time Training Provably Improves Transformers as In-context Learners

Halil Alperen Gozeten, Muhammed Emrullah Ildiz, Xuechen Zhang, Mahdi Soltanolkotabi, Marco Mondelli, Samet Oymak
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:20266-20295, 2025.

Abstract

Test-time training (TTT) methods explicitly update the weights of a model to adapt to the specific test instance, and they have found success in a variety of settings, including, most recently, language modeling and reasoning. To demystify this success, we investigate a gradient-based TTT algorithm for in-context learning, where we train a transformer model on the in-context demonstrations provided in the test prompt. Specifically, we provide a comprehensive theoretical characterization of linear transformers when the update rule is a single gradient step. Our theory (i) delineates the role of alignment between the pretraining distribution and the target task, (ii) demystifies how TTT can alleviate distribution shift, and (iii) quantifies the sample complexity of TTT, including how it can significantly reduce the eventual sample size required for in-context learning. As our empirical contribution, we study the benefits of TTT for TabPFN, a tabular foundation model. In line with our theory, we demonstrate that TTT significantly reduces the sample size required for tabular classification (3 to 5 times fewer samples), unlocking substantial inference efficiency with negligible training cost.
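The abstract describes the mechanism at a high level: before answering a test prompt, a copy of the model takes a single gradient step on a loss computed from that prompt's in-context demonstrations, and the adapted weights are then used to predict the query. The PyTorch snippet below is a minimal sketch of that idea, assuming a squared-error loss on the demonstrations and a plain SGD step; the function and argument names are illustrative and are not the authors' implementation.

# Hypothetical sketch of one-step test-time training for in-context learning.
import copy
import torch
import torch.nn.functional as F

def ttt_predict(model, demo_x, demo_y, query_x, lr=1e-2):
    """Adapt a copy of `model` with a single gradient step on the prompt's
    in-context demonstrations, then predict the query.

    `model` is assumed to map a batch of inputs to predictions; the squared
    loss and the SGD step size are illustrative choices, not the paper's
    exact recipe.
    """
    adapted = copy.deepcopy(model)                 # leave the pretrained weights untouched
    adapted.train()
    loss = F.mse_loss(adapted(demo_x), demo_y)     # fit the in-context (x, y) pairs
    grads = torch.autograd.grad(loss, list(adapted.parameters()))
    with torch.no_grad():                          # the single-step update rule
        for p, g in zip(adapted.parameters(), grads):
            p -= lr * g
    adapted.eval()
    with torch.no_grad():
        return adapted(query_x)                    # prediction from the adapted model

Under this reading, the pretrained model supplies the initialization and the test prompt supplies the (small) training set, which is why the alignment between the pretraining distribution and the target task, and the number of demonstrations, govern how much the single step helps.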

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-gozeten25a,
  title     = {Test-Time Training Provably Improves Transformers as In-context Learners},
  author    = {Gozeten, Halil Alperen and Ildiz, Muhammed Emrullah and Zhang, Xuechen and Soltanolkotabi, Mahdi and Mondelli, Marco and Oymak, Samet},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {20266--20295},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/gozeten25a/gozeten25a.pdf},
  url       = {https://proceedings.mlr.press/v267/gozeten25a.html}
}
APA
Gozeten, H.A., Ildiz, M.E., Zhang, X., Soltanolkotabi, M., Mondelli, M., & Oymak, S. (2025). Test-Time Training Provably Improves Transformers as In-context Learners. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:20266-20295. Available from https://proceedings.mlr.press/v267/gozeten25a.html.

Related Material