Provable Generalization of SGD-trained Neural Networks of Any Width in the Presence of Adversarial Label Noise

Spencer Frei, Yuan Cao, Quanquan Gu
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:3427-3438, 2021.

Abstract

We consider a one-hidden-layer leaky ReLU network of arbitrary width trained by stochastic gradient descent (SGD) following an arbitrary initialization. We prove that SGD produces neural networks that have classification accuracy competitive with that of the best halfspace over the distribution for a broad class of distributions that includes log-concave isotropic and hard margin distributions. Equivalently, such networks can generalize when the data distribution is linearly separable but corrupted with adversarial label noise, despite the capacity to overfit. To the best of our knowledge, this is the first work to show that overparameterized neural networks trained by SGD can generalize when the data is corrupted with adversarial label noise.
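
To make the setting concrete, below is a minimal sketch (ours, not the authors' code) of the setup the abstract describes: a one-hidden-layer leaky ReLU network trained by online SGD on the logistic loss, with inputs drawn from an isotropic Gaussian (a log-concave isotropic distribution), ground-truth labels given by a fixed halfspace, and a fraction of labels flipped as a simple stand-in for adversarial label noise. All names and parameter choices (the width m, noise_rate, the fixed random second layer) are illustrative assumptions, not taken from the paper.

    # Minimal sketch, assuming Gaussian inputs and random label flips as a
    # stand-in for adversarial label noise; not the paper's construction.
    import numpy as np

    rng = np.random.default_rng(0)
    d, m, alpha = 20, 512, 0.1            # input dim, network width, leaky ReLU slope
    noise_rate, lr, steps = 0.1, 0.05, 20000

    v_star = rng.standard_normal(d)
    v_star /= np.linalg.norm(v_star)      # direction of the best halfspace

    def sample(n):
        x = rng.standard_normal((n, d))           # isotropic Gaussian inputs
        y = np.sign(x @ v_star)                   # labels of the best halfspace
        flip = rng.random(n) < noise_rate         # flipped labels: noise stand-in
        return x, np.where(flip, -y, y), y        # inputs, noisy labels, clean labels

    def net(W, a, x):
        h = x @ W.T                               # pre-activations, shape (n, m)
        return np.where(h > 0, h, alpha * h) @ a  # leaky ReLU, then fixed linear layer

    # arbitrary initialization; the second layer a is random and kept fixed,
    # a common simplification in one-hidden-layer analyses (an assumption here)
    W = rng.standard_normal((m, d)) / np.sqrt(d)
    a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)

    for t in range(steps):                        # online SGD on the logistic loss
        x, y_noisy, _ = sample(1)
        x, y_noisy = x[0], y_noisy[0]
        h = W @ x
        act = np.where(h > 0, h, alpha * h)
        margin = y_noisy * (a @ act)
        g = -y_noisy / (1.0 + np.exp(margin))     # d(logistic loss)/d(network output)
        dact = np.where(h > 0, 1.0, alpha)        # leaky ReLU derivative
        W -= lr * g * np.outer(a * dact, x)       # SGD step on the hidden layer

    x_te, y_noisy_te, y_clean_te = sample(5000)
    pred = np.sign(net(W, a, x_te))
    print("clean test accuracy:", (pred == y_clean_te).mean())
    print("accuracy on noisy labels:", (pred == y_noisy_te).mean())

The printed clean accuracy is the quantity the guarantee above concerns: the trained network's agreement with the best halfspace v_star on fresh data, despite the flipped training labels.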

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-frei21b,
  title     = {Provable Generalization of SGD-trained Neural Networks of Any Width in the Presence of Adversarial Label Noise},
  author    = {Frei, Spencer and Cao, Yuan and Gu, Quanquan},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {3427--3438},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/frei21b/frei21b.pdf},
  url       = {https://proceedings.mlr.press/v139/frei21b.html}
}
Endnote
%0 Conference Paper
%T Provable Generalization of SGD-trained Neural Networks of Any Width in the Presence of Adversarial Label Noise
%A Spencer Frei
%A Yuan Cao
%A Quanquan Gu
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-frei21b
%I PMLR
%P 3427--3438
%U https://proceedings.mlr.press/v139/frei21b.html
%V 139
APA
Frei, S., Cao, Y. & Gu, Q. (2021). Provable Generalization of SGD-trained Neural Networks of Any Width in the Presence of Adversarial Label Noise. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:3427-3438. Available from https://proceedings.mlr.press/v139/frei21b.html.