Gradient Descent Learns One-hidden-layer CNN: Don’t be Afraid of Spurious Local Minima

Simon Du, Jason Lee, Yuandong Tian, Aarti Singh, Barnabas Poczos
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:1339-1348, 2018.

Abstract

We consider the problem of learning a one-hidden-layer neural network with a non-overlapping convolutional layer and ReLU activation, i.e., $f(Z; w, a) = \sum_j a_j\sigma(w^\top Z_j)$, in which both the convolutional weights $w$ and the output weights $a$ are parameters to be learned. We prove that with Gaussian input $\mathbf{Z}$ there is a spurious local minimizer. Surprisingly, despite the presence of this spurious local minimizer, gradient descent with weight normalization, starting from randomly initialized weights, can still be proven to recover the true parameters with constant probability (which can be boosted to probability $1$ with multiple restarts). We also show that with constant probability the same procedure converges to the spurious local minimum, so the local minimum plays a non-trivial role in the dynamics of gradient descent. Furthermore, a quantitative analysis shows that the gradient descent dynamics has two phases: it starts off slowly but converges much faster after several iterations.
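To make the setup concrete, below is a minimal sketch (not the authors' code) of the model $f(Z; w, a) = \sum_j a_j\sigma(w^\top Z_j)$ trained by gradient descent with weight normalization on Gaussian inputs. The function names (f, train) and all hyperparameters (learning rate, iteration count, patch sizes) are illustrative assumptions, not values from the paper.

import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def f(Z, w, a):
    # One-hidden-layer CNN with non-overlapping patches:
    # f(Z; w, a) = sum_j a_j * relu(w^T Z_j), with Z_j the j-th column of Z.
    return relu(Z.T @ w) @ a

def train(Z_train, y_train, k, p, lr=0.05, iters=2000, seed=0):
    # Gradient descent on the squared loss, with weight normalization applied
    # to the filter: w = g * v / ||v||.  Hyperparameters are illustrative.
    rng = np.random.default_rng(seed)
    v = rng.normal(size=p)            # unnormalized filter direction
    g = 1.0                           # filter scale
    a = 0.1 * rng.normal(size=k)      # output weights
    for _ in range(iters):
        norm_v = np.linalg.norm(v)
        w = g * v / norm_v
        grad_v = np.zeros_like(v)
        grad_g = 0.0
        grad_a = np.zeros_like(a)
        for Z, y in zip(Z_train, y_train):
            h = relu(Z.T @ w)                  # hidden activations, shape (k,)
            err = h @ a - y                    # residual of 0.5 * (f - y)^2
            grad_a += err * h
            dw = err * (Z @ (a * (h > 0)))     # gradient with respect to w
            # Chain rule through the weight-normalization reparameterization.
            grad_g += dw @ (v / norm_v)
            grad_v += (g / norm_v) * (dw - (dw @ v) * v / norm_v**2)
        n = len(Z_train)
        v -= lr * grad_v / n
        g -= lr * grad_g / n
        a -= lr * grad_a / n
    return g * v / np.linalg.norm(v), a

# Usage sketch: k non-overlapping Gaussian patches of dimension p per input,
# labels generated by a planted teacher (w_star, a_star).
rng = np.random.default_rng(1)
k, p, n = 4, 6, 512
w_star = rng.normal(size=p)
w_star /= np.linalg.norm(w_star)
a_star = rng.normal(size=k)
Z_train = [rng.normal(size=(p, k)) for _ in range(n)]
y_train = [f(Z, w_star, a_star) for Z in Z_train]
w_hat, a_hat = train(Z_train, y_train, k, p)

Since the paper shows gradient descent also converges to the spurious local minimum with constant probability, a natural use of such a sketch is to run it from several random seeds and keep the run with the lowest training loss, mirroring the restart argument in the abstract.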

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-du18b,
  title     = {Gradient Descent Learns One-hidden-layer {CNN}: Don’t be Afraid of Spurious Local Minima},
  author    = {Du, Simon and Lee, Jason and Tian, Yuandong and Singh, Aarti and Poczos, Barnabas},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {1339--1348},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/du18b/du18b.pdf},
  url       = {https://proceedings.mlr.press/v80/du18b.html},
  abstract  = {We consider the problem of learning a one-hidden-layer neural network with a non-overlapping convolutional layer and ReLU activation, i.e., $f(Z; w, a) = \sum_j a_j\sigma(w^\top Z_j)$, in which both the convolutional weights $w$ and the output weights $a$ are parameters to be learned. We prove that with Gaussian input $\mathbf{Z}$ there is a spurious local minimizer. Surprisingly, despite the presence of this spurious local minimizer, gradient descent with weight normalization, starting from randomly initialized weights, can still be proven to recover the true parameters with constant probability (which can be boosted to probability $1$ with multiple restarts). We also show that with constant probability the same procedure converges to the spurious local minimum, so the local minimum plays a non-trivial role in the dynamics of gradient descent. Furthermore, a quantitative analysis shows that the gradient descent dynamics has two phases: it starts off slowly but converges much faster after several iterations.}
}
Endnote
%0 Conference Paper
%T Gradient Descent Learns One-hidden-layer CNN: Don’t be Afraid of Spurious Local Minima
%A Simon Du
%A Jason Lee
%A Yuandong Tian
%A Aarti Singh
%A Barnabas Poczos
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-du18b
%I PMLR
%P 1339--1348
%U https://proceedings.mlr.press/v80/du18b.html
%V 80
%X We consider the problem of learning a one-hidden-layer neural network with a non-overlapping convolutional layer and ReLU activation, i.e., $f(Z; w, a) = \sum_j a_j\sigma(w^\top Z_j)$, in which both the convolutional weights $w$ and the output weights $a$ are parameters to be learned. We prove that with Gaussian input $\mathbf{Z}$ there is a spurious local minimizer. Surprisingly, despite the presence of this spurious local minimizer, gradient descent with weight normalization, starting from randomly initialized weights, can still be proven to recover the true parameters with constant probability (which can be boosted to probability $1$ with multiple restarts). We also show that with constant probability the same procedure converges to the spurious local minimum, so the local minimum plays a non-trivial role in the dynamics of gradient descent. Furthermore, a quantitative analysis shows that the gradient descent dynamics has two phases: it starts off slowly but converges much faster after several iterations.
APA
Du, S., Lee, J., Tian, Y., Singh, A. & Poczos, B. (2018). Gradient Descent Learns One-hidden-layer CNN: Don’t be Afraid of Spurious Local Minima. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:1339-1348. Available from https://proceedings.mlr.press/v80/du18b.html.