On the Power of Over-parametrization in Neural Networks with Quadratic Activation

Simon Du, Jason Lee
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:1329-1338, 2018.

Abstract

We provide new theoretical insights into why over-parametrization is effective in learning neural networks. For a shallow network with $k$ hidden nodes, quadratic activation, and $n$ training data points, we show that as long as $k \ge \sqrt{2n}$, over-parametrization enables local search algorithms to find a globally optimal solution for general smooth and convex loss functions. Further, even though the number of parameters may exceed the sample size, we use the theory of Rademacher complexity to show that, with weight decay, the solution also generalizes well when the data are sampled from a regular distribution such as a Gaussian. To prove that the loss function has benign landscape properties when $k \ge \sqrt{2n}$, we adopt an idea from smoothed analysis, which may have other applications in studying loss surfaces of neural networks.
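
To make the setting concrete, below is a minimal NumPy sketch of this setup: a shallow quadratic-activation network with $k \ge \sqrt{2n}$ hidden units fitted by gradient descent on a squared loss with weight decay. The fixed unit output weights, the planted teacher network, and all hyperparameters are illustrative assumptions rather than the paper's exact construction or algorithm.

```python
import numpy as np

# A minimal sketch (assumed setup, not the paper's exact construction):
# a shallow network f(x) = sum_j (w_j^T x)^2 with quadratic activation and
# fixed unit output weights, over-parametrized to k >= sqrt(2n) hidden units,
# fitted by gradient descent on the squared loss with weight decay.
rng = np.random.default_rng(0)

n, d = 200, 10                        # sample size and input dimension
k = int(np.ceil(np.sqrt(2 * n)))      # over-parametrization level k >= sqrt(2n)

X = rng.standard_normal((n, d))       # Gaussian inputs, as in the abstract
W_star = rng.standard_normal((3, d))  # planted network with 3 hidden units
y = ((X @ W_star.T) ** 2).sum(axis=1)

W = 0.1 * rng.standard_normal((k, d)) # random initialization
lr, lam, steps = 1e-3, 1e-4, 10000    # illustrative hyperparameters

for _ in range(steps):
    H = X @ W.T                       # pre-activations w_j^T x_i, shape (n, k)
    pred = (H ** 2).sum(axis=1)       # quadratic activation, unit output weights
    resid = pred - y
    # gradient of (1/2n) * sum_i resid_i^2 + (lam/2) * ||W||_F^2
    grad = (2.0 / n) * (H * resid[:, None]).T @ X + lam * W
    W -= lr * grad

final_resid = ((X @ W.T) ** 2).sum(axis=1) - y
print("final training loss:", 0.5 * np.mean(final_resid ** 2))
```

In this over-parametrized regime, local search from a generic random initialization is expected to reach a global minimizer of the training loss, which is the landscape phenomenon the paper formalizes; the weight-decay term corresponds to the regularization used in its generalization analysis.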

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-du18a,
  title     = {On the Power of Over-parametrization in Neural Networks with Quadratic Activation},
  author    = {Du, Simon and Lee, Jason},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {1329--1338},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/du18a/du18a.pdf},
  url       = {https://proceedings.mlr.press/v80/du18a.html},
  abstract  = {We provide new theoretical insights on why over-parametrization is effective in learning neural networks. For a $k$ hidden node shallow network with quadratic activation and $n$ training data points, we show as long as $k \ge \sqrt{2n}$, over-parametrization enables local search algorithms to find a globally optimal solution for general smooth and convex loss functions. Further, despite that the number of parameters may exceed the sample size, using theory of Rademacher complexity, we show with weight decay, the solution also generalizes well if the data is sampled from a regular distribution such as Gaussian. To prove when $k\ge \sqrt{2n}$, the loss function has benign landscape properties, we adopt an idea from smoothed analysis, which may have other applications in studying loss surfaces of neural networks.}
}
Endnote
%0 Conference Paper
%T On the Power of Over-parametrization in Neural Networks with Quadratic Activation
%A Simon Du
%A Jason Lee
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-du18a
%I PMLR
%P 1329--1338
%U https://proceedings.mlr.press/v80/du18a.html
%V 80
%X We provide new theoretical insights on why over-parametrization is effective in learning neural networks. For a $k$ hidden node shallow network with quadratic activation and $n$ training data points, we show as long as $k \ge \sqrt{2n}$, over-parametrization enables local search algorithms to find a globally optimal solution for general smooth and convex loss functions. Further, despite that the number of parameters may exceed the sample size, using theory of Rademacher complexity, we show with weight decay, the solution also generalizes well if the data is sampled from a regular distribution such as Gaussian. To prove when $k\ge \sqrt{2n}$, the loss function has benign landscape properties, we adopt an idea from smoothed analysis, which may have other applications in studying loss surfaces of neural networks.
APA
Du, S. & Lee, J. (2018). On the Power of Over-parametrization in Neural Networks with Quadratic Activation. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:1329-1338. Available from https://proceedings.mlr.press/v80/du18a.html.