How Well Can Transformers Emulate In-Context Newton’s Method?

Angeliki Giannou, Liu Yang, Tianhao Wang, Dimitris Papailiopoulos, Jason D. Lee
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:4843-4851, 2025.

Abstract

Transformer-based models have demonstrated remarkable in-context learning capabilities, prompting extensive research into their underlying mechanisms. Recent studies have suggested that Transformers can implement first-order optimization algorithms for in-context learning, and even second-order ones in the case of linear regression. In this work, we study whether Transformers can perform higher-order optimization methods beyond the case of linear regression. We establish that linear attention Transformers with ReLU layers can approximate second-order optimization algorithms for the task of logistic regression, achieving error $\epsilon$ with a number of layers that grows only logarithmically in $1/\epsilon$. Our results suggest that the Transformer architecture can implement complex algorithms beyond gradient descent.
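For readers less familiar with the algorithm named in the title, Newton's method for logistic regression iterates $w \leftarrow w - H^{-1} g$, where $g$ and $H$ are the gradient and Hessian of the negative log-likelihood. The following is a minimal NumPy sketch of that classical iteration, given for reference only: it is the baseline algorithm the paper shows Transformers can approximate, not the paper's Transformer construction, and all names in it are illustrative.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def newton_logistic(X, y, num_iters=10, ridge=1e-8):
    """Fit logistic regression with Newton's method.

    X: (n, d) design matrix; y: (n,) labels in {0, 1}.
    Each step solves H @ delta = g, where g and H are the
    gradient and Hessian of the negative log-likelihood
    at the current iterate.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(num_iters):
        p = sigmoid(X @ w)                 # model probabilities
        g = X.T @ (p - y)                  # gradient
        s = p * (1.0 - p)                  # diagonal weights of the Hessian
        H = X.T @ (X * s[:, None])         # Hessian: X^T diag(s) X
        H += ridge * np.eye(d)             # tiny ridge for numerical safety
        w -= np.linalg.solve(H, g)         # Newton step
    return w

# Toy usage: recover a planted weight vector from synthetic labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([1.5, -2.0, 0.5])
y = (rng.uniform(size=200) < sigmoid(X @ w_true)).astype(float)
w_hat = newton_logistic(X, y)

Newton's rapid local convergence, with each step solving a full linear system rather than taking a gradient step, is what makes a depth bound logarithmic in $1/\epsilon$ plausible for an architecture that emulates the iteration layer by layer.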

Cite this Paper


BibTeX
@InProceedings{pmlr-v258-giannou25a,
  title = {How Well Can Transformers Emulate In-Context Newton’s Method?},
  author = {Giannou, Angeliki and Yang, Liu and Wang, Tianhao and Papailiopoulos, Dimitris and Lee, Jason D.},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages = {4843--4851},
  year = {2025},
  editor = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume = {258},
  series = {Proceedings of Machine Learning Research},
  month = {03--05 May},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/giannou25a/giannou25a.pdf},
  url = {https://proceedings.mlr.press/v258/giannou25a.html},
  abstract = {Transformer-based models have demonstrated remarkable in-context learning capabilities, prompting extensive research into their underlying mechanisms. Recent studies have suggested that Transformers can implement first-order optimization algorithms for in-context learning, and even second-order ones in the case of linear regression. In this work, we study whether Transformers can perform higher-order optimization methods beyond the case of linear regression. We establish that linear attention Transformers with ReLU layers can approximate second-order optimization algorithms for the task of logistic regression, achieving error $\epsilon$ with a number of layers that grows only logarithmically in $1/\epsilon$. Our results suggest that the Transformer architecture can implement complex algorithms beyond gradient descent.}
}
Endnote
%0 Conference Paper
%T How Well Can Transformers Emulate In-Context Newton’s Method?
%A Angeliki Giannou
%A Liu Yang
%A Tianhao Wang
%A Dimitris Papailiopoulos
%A Jason D. Lee
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-giannou25a
%I PMLR
%P 4843--4851
%U https://proceedings.mlr.press/v258/giannou25a.html
%V 258
%X Transformer-based models have demonstrated remarkable in-context learning capabilities, prompting extensive research into their underlying mechanisms. Recent studies have suggested that Transformers can implement first-order optimization algorithms for in-context learning, and even second-order ones in the case of linear regression. In this work, we study whether Transformers can perform higher-order optimization methods beyond the case of linear regression. We establish that linear attention Transformers with ReLU layers can approximate second-order optimization algorithms for the task of logistic regression, achieving error $\epsilon$ with a number of layers that grows only logarithmically in $1/\epsilon$. Our results suggest that the Transformer architecture can implement complex algorithms beyond gradient descent.
APA
Giannou, A., Yang, L., Wang, T., Papailiopoulos, D. & Lee, J.D. (2025). How Well Can Transformers Emulate In-Context Newton’s Method?. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:4843-4851. Available from https://proceedings.mlr.press/v258/giannou25a.html.
