When can in-context learning generalize out of task distribution?

Chase Goddard, Lindsay M. Smith, Vudtiwat Ngampruetikorn, David J. Schwab
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:19585-19599, 2025.

Abstract

In-context learning (ICL) is a remarkable capability of pretrained transformers that allows models to generalize to unseen tasks after seeing only a few examples. We empirically investigate the conditions on the pretraining distribution that are necessary for ICL to emerge and generalize out of distribution. Previous work has focused on the number of distinct tasks necessary in the pretraining dataset. Here, we use a different notion of task diversity to study the emergence of ICL in transformers trained on linear functions. We find that as task diversity increases, transformers undergo a transition from a specialized solution, which exhibits ICL only within the pretraining task distribution, to a solution that generalizes out of distribution to the entire task space. We also investigate the nature of the solutions learned by the transformer on both sides of the transition, and observe similar transitions in nonlinear regression problems. We construct a phase diagram to characterize how our concept of task diversity interacts with the number of pretraining tasks. In addition, we explore how factors such as the depth of the model and the dimensionality of the regression problem influence the transition.
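
As a rough illustration of the setup described in the abstract (not the authors' code), the sketch below generates in-context linear-regression sequences from a finite pool of pretraining tasks and contrasts them with out-of-distribution probe tasks drawn from the full task space. The input dimension d, context length n_ctx, pool size n_tasks, and the pool construction are illustrative assumptions; the paper's own notion of task diversity is distinct from simply counting tasks.

    import numpy as np

    rng = np.random.default_rng(0)

    d = 8         # input dimension (illustrative)
    n_ctx = 16    # number of (x, y) pairs per context (illustrative)
    n_tasks = 64  # size of the finite pretraining task pool (illustrative)

    # Each pretraining task is a weight vector w defining a linear function y = w . x.
    task_pool = rng.standard_normal((n_tasks, d))

    def sample_pretraining_sequence():
        # Draw one task from the pool and generate a context of (x, y) pairs;
        # the transformer is trained to predict y for a query x given this context.
        w = task_pool[rng.integers(n_tasks)]
        x = rng.standard_normal((n_ctx, d))
        y = x @ w
        return x, y

    def sample_ood_sequence():
        # Out-of-distribution probe: a fresh weight vector drawn from the full
        # task space rather than from the pretraining pool.
        w = rng.standard_normal(d)
        x = rng.standard_normal((n_ctx, d))
        return x, x @ w

    x, y = sample_pretraining_sequence()
    print(x.shape, y.shape)  # (16, 8) (16,)

In a setup of this kind, how broadly the pretraining tasks cover the task space is what separates the specialized solution, which performs ICL only near the pretraining distribution, from the solution that generalizes to the entire task space.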

Cite this Paper

BibTeX
@InProceedings{pmlr-v267-goddard25a,
  title     = {When can in-context learning generalize out of task distribution?},
  author    = {Goddard, Chase and Smith, Lindsay M. and Ngampruetikorn, Vudtiwat and Schwab, David J.},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {19585--19599},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/goddard25a/goddard25a.pdf},
  url       = {https://proceedings.mlr.press/v267/goddard25a.html}
}
Endnote
%0 Conference Paper
%T When can in-context learning generalize out of task distribution?
%A Chase Goddard
%A Lindsay M. Smith
%A Vudtiwat Ngampruetikorn
%A David J. Schwab
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-goddard25a
%I PMLR
%P 19585--19599
%U https://proceedings.mlr.press/v267/goddard25a.html
%V 267
APA
Goddard, C., Smith, L.M., Ngampruetikorn, V. & Schwab, D.J. (2025). When can in-context learning generalize out of task distribution? Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:19585-19599. Available from https://proceedings.mlr.press/v267/goddard25a.html.