One-pass Stochastic Gradient Descent in overparametrized two-layer neural networks

Hanjing Zhu, Jiaming Xu
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, PMLR 130:3673-3681, 2021.

Abstract

There has been a recent surge of interest in understanding the convergence of gradient descent (GD) and stochastic gradient descent (SGD) in overparameterized neural networks. Most previous work assumes that the training data is provided a priori in a batch, while less attention has been paid to the important setting where the training data arrives in a stream. In this paper, we study the streaming data setup and show that with overparameterization and random initialization, the prediction error of two-layer neural networks under one-pass SGD converges in expectation. The convergence rate depends on the eigen-decomposition of the integral operator associated with the so-called neural tangent kernel (NTK). A key step of our analysis is to show a random kernel function converges to the NTK with high probability using the VC dimension and McDiarmid’s inequality.
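
For concreteness, below is a minimal, illustrative Python sketch (not the authors' code) of the one-pass SGD setup the abstract describes: a width-m two-layer ReLU network with randomly initialized weights, where each streaming sample (x, y) is used for exactly one stochastic gradient step on the squared loss and then discarded. The architecture details, step size, and toy data stream are assumptions made purely for illustration.

import numpy as np

rng = np.random.default_rng(0)
d, m = 10, 4096            # input dimension and hidden width (overparameterized regime: large m)
eta = 0.1 / m              # illustrative step size

W = rng.normal(size=(m, d))           # first-layer weights, random Gaussian initialization
a = rng.choice([-1.0, 1.0], size=m)   # second-layer signs, held fixed at initialization

def predict(x):
    # f(x) = (1 / sqrt(m)) * sum_r a_r * relu(<w_r, x>)
    return a @ np.maximum(W @ x, 0.0) / np.sqrt(m)

def one_pass_sgd_step(x, y):
    # Single stochastic gradient step on the squared loss 0.5 * (f(x) - y)^2,
    # updating only W; the sample (x, y) is seen exactly once and then discarded.
    global W
    pre = W @ x                                              # pre-activations, shape (m,)
    err = a @ np.maximum(pre, 0.0) / np.sqrt(m) - y          # scalar residual f(x) - y
    grad_W = np.outer(err * a * (pre > 0) / np.sqrt(m), x)   # gradient of the loss w.r.t. W
    W -= eta * grad_W

# Toy streaming data: a fresh sample arrives at every iteration (one-pass setting).
theta_star = rng.normal(size=d) / np.sqrt(d)
for t in range(5000):
    x = rng.normal(size=d)
    x /= np.linalg.norm(x)                  # keep inputs on the unit sphere
    y = x @ theta_star + 0.01 * rng.normal()
    one_pass_sgd_step(x, y)

In the regime analyzed in the paper, the width m is taken large enough that the random kernel induced by such gradient dynamics concentrates around the NTK with high probability, which is what drives the expected convergence of the prediction error.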

Cite this Paper


BibTeX
@InProceedings{pmlr-v130-zhu21d,
  title     = {One-pass Stochastic Gradient Descent in overparametrized two-layer neural networks},
  author    = {Zhu, Hanjing and Xu, Jiaming},
  booktitle = {Proceedings of The 24th International Conference on Artificial Intelligence and Statistics},
  pages     = {3673--3681},
  year      = {2021},
  editor    = {Banerjee, Arindam and Fukumizu, Kenji},
  volume    = {130},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--15 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v130/zhu21d/zhu21d.pdf},
  url       = {https://proceedings.mlr.press/v130/zhu21d.html},
  abstract  = {There has been a recent surge of interest in understanding the convergence of gradient descent (GD) and stochastic gradient descent (SGD) in overparameterized neural networks. Most previous work assumes that the training data is provided a priori in a batch, while less attention has been paid to the important setting where the training data arrives in a stream. In this paper, we study the streaming data setup and show that with overparameterization and random initialization, the prediction error of two-layer neural networks under one-pass SGD converges in expectation. The convergence rate depends on the eigen-decomposition of the integral operator associated with the so-called neural tangent kernel (NTK). A key step of our analysis is to show a random kernel function converges to the NTK with high probability using the VC dimension and McDiarmid’s inequality.}
}
Endnote
%0 Conference Paper
%T One-pass Stochastic Gradient Descent in overparametrized two-layer neural networks
%A Hanjing Zhu
%A Jiaming Xu
%B Proceedings of The 24th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2021
%E Arindam Banerjee
%E Kenji Fukumizu
%F pmlr-v130-zhu21d
%I PMLR
%P 3673--3681
%U https://proceedings.mlr.press/v130/zhu21d.html
%V 130
%X There has been a recent surge of interest in understanding the convergence of gradient descent (GD) and stochastic gradient descent (SGD) in overparameterized neural networks. Most previous work assumes that the training data is provided a priori in a batch, while less attention has been paid to the important setting where the training data arrives in a stream. In this paper, we study the streaming data setup and show that with overparameterization and random initialization, the prediction error of two-layer neural networks under one-pass SGD converges in expectation. The convergence rate depends on the eigen-decomposition of the integral operator associated with the so-called neural tangent kernel (NTK). A key step of our analysis is to show a random kernel function converges to the NTK with high probability using the VC dimension and McDiarmid’s inequality.
APA
Zhu, H. & Xu, J. (2021). One-pass Stochastic Gradient Descent in overparametrized two-layer neural networks. Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 130:3673-3681. Available from https://proceedings.mlr.press/v130/zhu21d.html.
