Width Provably Matters in Optimization for Deep Linear Neural Networks
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:1655-1664, 2019.
Abstract
We prove that for an $L$-layer fully-connected linear neural network, if the width of every hidden layer is $\tilde{\Omega}(L \cdot r \cdot d_{\mathrm{out}} \cdot \kappa^3)$, where $r$ and $\kappa$ are the rank and the condition number of the input data, and $d_{\mathrm{out}}$ is the output dimension, then gradient descent with Gaussian random initialization converges to a global minimum at a linear rate. The number of iterations to find an $\epsilon$-suboptimal solution is $O(\kappa \log(1/\epsilon))$. Our polynomial upper bound on the total running time for wide deep linear networks and the $\exp(\Omega(L))$ lower bound for narrow deep linear neural networks [Shamir, 2018] together demonstrate that wide layers are necessary for optimizing deep models.
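For concreteness, below is a minimal sketch of the setting the abstract describes: plain gradient descent on an $L$-layer fully-connected linear network $W_L \cdots W_1 x$ with squared loss, starting from Gaussian random initialization. The widths, initialization scale, and step size are illustrative stand-ins, not the $\tilde{\Omega}(L \cdot r \cdot d_{\mathrm{out}} \cdot \kappa^3)$ width or the constants prescribed by the theorem.

```python
import numpy as np

# Minimal sketch (not the paper's exact construction): gradient descent on an
# L-layer fully-connected linear network W_L ... W_1 x with squared loss, from
# Gaussian random initialization. Widths, initialization scale, and step size
# below are illustrative choices, not the quantities the theorem prescribes.

rng = np.random.default_rng(0)

d_in, d_out, L, m = 20, 5, 4, 128   # input dim, output dim, depth, hidden width
n = 100                             # number of training examples

X = rng.standard_normal((d_in, n))
Y = rng.standard_normal((d_out, d_in)) @ X   # realizable targets, so zero loss is attainable

# Layer shapes: d_in -> m -> ... -> m -> d_out; Gaussian init scaled by 1/sqrt(fan_in)
dims = [d_in] + [m] * (L - 1) + [d_out]
W = [rng.standard_normal((dims[i + 1], dims[i])) / np.sqrt(dims[i]) for i in range(L)]

def loss(W):
    P = X
    for Wi in W:
        P = Wi @ P
    return 0.5 * np.linalg.norm(P - Y) ** 2 / n

eta = 1e-3                          # illustrative step size; may need tuning
for t in range(1001):
    # Forward pass, caching each layer's input
    acts = [X]
    for Wi in W:
        acts.append(Wi @ acts[-1])
    # Backward pass for the squared loss
    G = (acts[-1] - Y) / n
    grads = [None] * L
    for i in reversed(range(L)):
        grads[i] = G @ acts[i].T    # gradient w.r.t. W_i
        G = W[i].T @ G              # propagate the error to the previous layer
    W = [Wi - eta * Gi for Wi, Gi in zip(W, grads)]
    if t % 200 == 0:
        print(f"iter {t:4d}  loss {loss(W):.3e}")
```

With these illustrative choices the printed loss typically decays roughly geometrically, which is the linear-rate behavior the theorem guarantees for sufficiently wide networks.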