On the Power of Context-Enhanced Learning in LLMs

Xingyu Zhu, Abhishek Panigrahi, Sanjeev Arora
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:80013-80089, 2025.

Abstract

We formalize a new concept for LLMs: context-enhanced learning. It is standard gradient-based learning on text, except that the context is enhanced with additional data on which no auto-regressive gradients are computed. This setting is a gradient-based analog of the usual in-context learning (ICL) and appears in some recent works. Using a multi-step reasoning task, we prove, in a simplified setting, that context-enhanced learning can be exponentially more sample-efficient than standard learning when the model is capable of ICL. At a mechanistic level, we find that the benefit of context enhancement arises from a more accurate gradient learning signal. We also demonstrate experimentally that it appears hard to detect or recover the learning materials that were used in the context during training. This may have implications for data security as well as copyright.
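To make the training setup concrete, here is a minimal sketch of context-enhanced learning as the abstract describes it, written in PyTorch with Hugging Face transformers. The model choice ("gpt2") and the helper function are illustrative assumptions, not details from the paper: the key point is that the added context is visible to the model through attention, but its tokens are masked out of the auto-regressive loss (label -100), so no gradients are computed on them.

    # Sketch of context-enhanced learning (illustrative; not the paper's code).
    # Extra material is placed in the context, but the cross-entropy loss --
    # and hence the gradient signal -- covers only the target continuation.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")          # any causal LM works
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    def context_enhanced_loss(context: str, target: str) -> torch.Tensor:
        ctx_ids = tok(context, return_tensors="pt").input_ids
        tgt_ids = tok(target, return_tensors="pt").input_ids
        input_ids = torch.cat([ctx_ids, tgt_ids], dim=1)
        # Mask the context positions with -100 so Hugging Face's built-in
        # loss ignores them: no loss term, no gradient, on context tokens.
        labels = input_ids.clone()
        labels[:, : ctx_ids.shape[1]] = -100
        return model(input_ids=input_ids, labels=labels).loss

    loss = context_enhanced_loss(
        context="Hints the model may read but is never trained to emit ...",
        target="The multi-step answer the model is trained to produce.",
    )
    loss.backward()  # gradients flow through attention over the context,
                     # but no context token contributes a loss term

Under this masking, the context shapes the gradient of the target loss only through the forward pass, which is the sense in which the setting is a gradient-based analog of ICL.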

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-zhu25p,
  title     = {On the Power of Context-Enhanced Learning in {LLM}s},
  author    = {Zhu, Xingyu and Panigrahi, Abhishek and Arora, Sanjeev},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {80013--80089},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/zhu25p/zhu25p.pdf},
  url       = {https://proceedings.mlr.press/v267/zhu25p.html}
}
Endnote
%0 Conference Paper
%T On the Power of Context-Enhanced Learning in LLMs
%A Xingyu Zhu
%A Abhishek Panigrahi
%A Sanjeev Arora
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-zhu25p
%I PMLR
%P 80013--80089
%U https://proceedings.mlr.press/v267/zhu25p.html
%V 267
APA
Zhu, X., Panigrahi, A., & Arora, S. (2025). On the Power of Context-Enhanced Learning in LLMs. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:80013-80089. Available from https://proceedings.mlr.press/v267/zhu25p.html.