Implicit regularization for deep neural networks driven by an Ornstein-Uhlenbeck like process

Guy Blanc, Neha Gupta, Gregory Valiant, Paul Valiant
Proceedings of Thirty Third Conference on Learning Theory, PMLR 125:483-513, 2020.

Abstract

We consider networks trained via stochastic gradient descent to minimize $\ell_2$ loss, with the training labels perturbed by independent noise at each iteration. We characterize the behavior of the training dynamics near any parameter vector that achieves zero training error in terms of an implicit regularization term: the sum, over the data points, of the squared $\ell_2$ norm of the gradient of the model output with respect to the parameter vector, evaluated at each data point. This holds for networks of any connectivity, width, depth, and choice of activation function. We interpret this implicit regularization term in three simple settings: matrix sensing, two-layer ReLU networks trained on one-dimensional data, and two-layer networks with sigmoid activations trained on a single data point. For these settings, we show why this new and general implicit regularization effect drives the networks towards “simple” models.
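As a sketch of the quantity the abstract describes (the notation here is illustrative, not taken from the paper): writing $f(x;\theta)$ for the network's output on input $x$ with parameters $\theta$, and $x_1,\dots,x_n$ for the training inputs, the implicit regularizer is

```latex
% Implicit regularizer described in the abstract: the sum over data
% points of the squared \ell_2 norm of the model's gradient with
% respect to the parameters, evaluated at each data point.
% The symbols f, x_i, n, and theta are illustrative notation.
R(\theta) \;=\; \sum_{i=1}^{n} \bigl\lVert \nabla_{\theta} f(x_i;\theta) \bigr\rVert_2^2
```

Per the abstract, label-noise SGD near a zero-training-error point behaves as if it were implicitly penalizing this quantity, for networks of any connectivity, width, depth, and activation function.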

Cite this Paper


BibTeX
@InProceedings{pmlr-v125-blanc20a,
  title     = {Implicit regularization for deep neural networks driven by an Ornstein-Uhlenbeck like process},
  author    = {Blanc, Guy and Gupta, Neha and Valiant, Gregory and Valiant, Paul},
  booktitle = {Proceedings of Thirty Third Conference on Learning Theory},
  pages     = {483--513},
  year      = {2020},
  editor    = {Abernethy, Jacob and Agarwal, Shivani},
  volume    = {125},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--12 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v125/blanc20a/blanc20a.pdf},
  url       = {https://proceedings.mlr.press/v125/blanc20a.html},
  abstract  = {We consider networks, trained via stochastic gradient descent to minimize $\ell_2$ loss, with the training labels perturbed by independent noise at each iteration. We characterize the behavior of the training dynamics near any parameter vector that achieves zero training error, in terms of an implicit regularization term corresponding to the sum over the data points, of the squared $\ell_2$ norm of the gradient of the model with respect to the parameter vector, evaluated at each data point. This holds for networks of any connectivity, width, depth, and choice of activation function. We interpret this implicit regularization term for three simple settings: matrix sensing, two layer ReLU networks trained on one-dimensional data, and two layer networks with sigmoid activations trained on a single datapoint. For these settings, we show why this new and general implicit regularization effect drives the networks towards “simple” models.}
}
Endnote
%0 Conference Paper
%T Implicit regularization for deep neural networks driven by an Ornstein-Uhlenbeck like process
%A Guy Blanc
%A Neha Gupta
%A Gregory Valiant
%A Paul Valiant
%B Proceedings of Thirty Third Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2020
%E Jacob Abernethy
%E Shivani Agarwal
%F pmlr-v125-blanc20a
%I PMLR
%P 483--513
%U https://proceedings.mlr.press/v125/blanc20a.html
%V 125
%X We consider networks, trained via stochastic gradient descent to minimize $\ell_2$ loss, with the training labels perturbed by independent noise at each iteration. We characterize the behavior of the training dynamics near any parameter vector that achieves zero training error, in terms of an implicit regularization term corresponding to the sum over the data points, of the squared $\ell_2$ norm of the gradient of the model with respect to the parameter vector, evaluated at each data point. This holds for networks of any connectivity, width, depth, and choice of activation function. We interpret this implicit regularization term for three simple settings: matrix sensing, two layer ReLU networks trained on one-dimensional data, and two layer networks with sigmoid activations trained on a single datapoint. For these settings, we show why this new and general implicit regularization effect drives the networks towards “simple” models.
APA
Blanc, G., Gupta, N., Valiant, G. &amp; Valiant, P. (2020). Implicit regularization for deep neural networks driven by an Ornstein-Uhlenbeck like process. Proceedings of Thirty Third Conference on Learning Theory, in Proceedings of Machine Learning Research 125:483-513. Available from https://proceedings.mlr.press/v125/blanc20a.html.