Asymptotics of feature learning in two-layer networks after one gradient-step

Hugo Cui, Luca Pesce, Yatin Dandi, Florent Krzakala, Yue Lu, Lenka Zdeborova, Bruno Loureiro
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:9662-9695, 2024.

Abstract

In this manuscript, we investigate how two-layer neural networks learn features from data, and improve over the kernel regime, after being trained with a single gradient descent step. Leveraging the insight of Ba et al. (2022), we model the trained network by a spiked Random Features (sRF) model. Further building on recent progress on Gaussian universality (Dandi et al., 2023), we provide an exact asymptotic description of the generalization error of the sRF in the high-dimensional limit where the number of samples, the width, and the input dimension grow at a proportional rate. The resulting characterization for sRFs also closely captures the learning curves of the original network model. This enables us to understand how adapting to the data is crucial for the network to efficiently learn non-linear functions in the direction of the gradient, where at initialization it can only express linear functions in this regime.
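The sketch below (not the authors' code) illustrates the setup the abstract describes: a two-layer network whose first layer takes one large gradient step, after which the readout is refit on the resulting features, i.e. a spiked random-features (sRF) model. The step size, ridge penalty, tanh single-index target, and all numerical values are illustrative assumptions, not choices taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
d, p, n = 256, 256, 1024          # input dim, width, samples (proportional regime)
eta, lam = np.sqrt(p), 1e-2       # large step size and ridge penalty (assumed values)

# Teacher: a simple single-index target, used only for illustration.
w_star = rng.standard_normal(d) / np.sqrt(d)
target = lambda X: np.tanh(X @ w_star)

X = rng.standard_normal((n, d))
y = target(X)

# Two-layer student at initialization: f(x) = a^T sigma(W x) / sqrt(p).
W0 = rng.standard_normal((p, d)) / np.sqrt(d)
a0 = rng.standard_normal(p) / np.sqrt(p)
sigma = np.tanh

# One full-batch gradient step on the first layer (squared loss), readout frozen.
Z = X @ W0.T                                      # (n, p) pre-activations
resid = sigma(Z) @ a0 / np.sqrt(p) - y            # residuals of the initial network
grad_W = (resid[:, None] * (1 - np.tanh(Z) ** 2) * a0).T @ X / (n * np.sqrt(p))
W1 = W0 - eta * grad_W                            # large step adds an approx. rank-one spike

# sRF model: keep the spiked first layer fixed, refit the readout by ridge regression.
Phi = sigma(X @ W1.T)
a1 = np.linalg.solve(Phi.T @ Phi + lam * np.eye(p), Phi.T @ y)

# Test error of the spiked random-features predictor.
X_test = rng.standard_normal((4 * n, d))
err = np.mean((sigma(X_test @ W1.T) @ a1 - target(X_test)) ** 2)
print(f"test MSE after one step + readout retraining: {err:.3f}")

Replacing W1 by W0 in the last block recovers the ordinary (unspiked) random-features predictor, which in this regime can only fit the linear part of the target; the comparison is what the paper's asymptotic theory characterizes exactly.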

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-cui24d,
  title     = {Asymptotics of feature learning in two-layer networks after one gradient-step},
  author    = {Cui, Hugo and Pesce, Luca and Dandi, Yatin and Krzakala, Florent and Lu, Yue and Zdeborova, Lenka and Loureiro, Bruno},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {9662--9695},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/cui24d/cui24d.pdf},
  url       = {https://proceedings.mlr.press/v235/cui24d.html}
}
Endnote
%0 Conference Paper
%T Asymptotics of feature learning in two-layer networks after one gradient-step
%A Hugo Cui
%A Luca Pesce
%A Yatin Dandi
%A Florent Krzakala
%A Yue Lu
%A Lenka Zdeborova
%A Bruno Loureiro
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-cui24d
%I PMLR
%P 9662--9695
%U https://proceedings.mlr.press/v235/cui24d.html
%V 235
APA
Cui, H., Pesce, L., Dandi, Y., Krzakala, F., Lu, Y., Zdeborova, L., & Loureiro, B. (2024). Asymptotics of feature learning in two-layer networks after one gradient-step. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:9662-9695. Available from https://proceedings.mlr.press/v235/cui24d.html.