Breaking through the learning plateaus of in-context learning in Transformer

Jingwen Fu, Tao Yang, Yuwang Wang, Yan Lu, Nanning Zheng
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:14207-14227, 2024.

Abstract

In-context learning, i.e., learning from context examples, is an impressive ability of Transformer. Training Transformers to possess this in-context learning skill is computationally intensive due to the occurrence of learning plateaus, which are periods within the training process where there is minimal or no enhancement in the model’s in-context learning capability. To study the mechanism behind the learning plateaus, we conceptually separate a component within the model’s internal representation that is exclusively affected by the model’s weights. We call this the “weights component”, and the remainder is identified as the “context component”. By conducting meticulous and controlled experiments on synthetic tasks, we note that the persistence of learning plateaus correlates with compromised functionality of the weights component. Recognizing the impaired performance of the weights component as a fundamental behavior that drives learning plateaus, we have developed three strategies to expedite the learning of Transformers. The effectiveness of these strategies is further confirmed in natural language processing tasks. In conclusion, our research demonstrates the feasibility of cultivating a powerful in-context learning ability within AI systems in an eco-friendly manner.
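
The weights/context decomposition described in the abstract can be made concrete with a small illustration. The sketch below is not the authors' code: the toy encoder, the choice to treat the query-only representation as the weights-driven part, and the residual definition of the context part are all assumptions made purely for illustration of the idea.

# Illustrative sketch (assumed setup, not the paper's procedure): probe a
# Transformer's representation of a query with and without in-context examples.
import torch
import torch.nn as nn

torch.manual_seed(0)

d_model, n_ctx = 32, 8
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True),
    num_layers=2,
)
encoder.eval()  # disable dropout so the comparison is deterministic

context = torch.randn(1, n_ctx, d_model)  # embeddings of in-context examples
query = torch.randn(1, 1, d_model)        # embedding of the query

with torch.no_grad():
    # Representation of the query alone: shaped only by the model's weights
    # (assumed here to stand in for the "weights component").
    weights_component = encoder(query)[:, -1]

    # Representation of the same query when context examples are prepended.
    full = encoder(torch.cat([context, query], dim=1))[:, -1]

    # The remainder is attributed to the context ("context component").
    context_component = full - weights_component

print("norm of weights component:", weights_component.norm().item())
print("norm of context component:", context_component.norm().item())

Under this reading, a learning plateau would show up as the weights component failing to become useful over training, even while the context component continues to vary with the provided examples.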

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-fu24h,
  title     = {Breaking through the learning plateaus of in-context learning in Transformer},
  author    = {Fu, Jingwen and Yang, Tao and Wang, Yuwang and Lu, Yan and Zheng, Nanning},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {14207--14227},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/fu24h/fu24h.pdf},
  url       = {https://proceedings.mlr.press/v235/fu24h.html},
  abstract  = {In-context learning, i.e., learning from context examples, is an impressive ability of Transformer. Training Transformers to possess this in-context learning skill is computationally intensive due to the occurrence of learning plateaus, which are periods within the training process where there is minimal or no enhancement in the model’s in-context learning capability. To study the mechanism behind the learning plateaus, we conceptually separate a component within the model’s internal representation that is exclusively affected by the model’s weights. We call this the “weights component”, and the remainder is identified as the “context component”. By conducting meticulous and controlled experiments on synthetic tasks, we note that the persistence of learning plateaus correlates with compromised functionality of the weights component. Recognizing the impaired performance of the weights component as a fundamental behavior that drives learning plateaus, we have developed three strategies to expedite the learning of Transformers. The effectiveness of these strategies is further confirmed in natural language processing tasks. In conclusion, our research demonstrates the feasibility of cultivating a powerful in-context learning ability within AI systems in an eco-friendly manner.}
}
Endnote
%0 Conference Paper
%T Breaking through the learning plateaus of in-context learning in Transformer
%A Jingwen Fu
%A Tao Yang
%A Yuwang Wang
%A Yan Lu
%A Nanning Zheng
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-fu24h
%I PMLR
%P 14207--14227
%U https://proceedings.mlr.press/v235/fu24h.html
%V 235
%X In-context learning, i.e., learning from context examples, is an impressive ability of Transformer. Training Transformers to possess this in-context learning skill is computationally intensive due to the occurrence of learning plateaus, which are periods within the training process where there is minimal or no enhancement in the model’s in-context learning capability. To study the mechanism behind the learning plateaus, we conceptually separate a component within the model’s internal representation that is exclusively affected by the model’s weights. We call this the “weights component”, and the remainder is identified as the “context component”. By conducting meticulous and controlled experiments on synthetic tasks, we note that the persistence of learning plateaus correlates with compromised functionality of the weights component. Recognizing the impaired performance of the weights component as a fundamental behavior that drives learning plateaus, we have developed three strategies to expedite the learning of Transformers. The effectiveness of these strategies is further confirmed in natural language processing tasks. In conclusion, our research demonstrates the feasibility of cultivating a powerful in-context learning ability within AI systems in an eco-friendly manner.
APA
Fu, J., Yang, T., Wang, Y., Lu, Y. & Zheng, N. (2024). Breaking through the learning plateaus of in-context learning in Transformer. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:14207-14227. Available from https://proceedings.mlr.press/v235/fu24h.html.
