Exponential Convergence Time of Gradient Descent for One-Dimensional Deep Linear Neural Networks
Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:2691-2713, 2019.
Abstract
We study the dynamics of gradient descent on objective functions of the form $f(\prod_{i=1}^{k} w_i)$ (with respect to scalar parameters $w_1,\ldots,w_k$), which arise in the context of training depth-$k$ linear neural networks. We prove that for standard random initializations, and under mild assumptions on $f$, the number of iterations required for convergence scales exponentially with the depth $k$. We also show empirically that this phenomenon can occur in higher dimensions, where each $w_i$ is a matrix. This highlights a potential obstacle in understanding the convergence of gradient-based methods for deep linear neural networks, where $k$ is large.
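The phenomenon described in the abstract can be illustrated with a small numerical sketch. The snippet below (an illustration, not the paper's experimental setup) runs gradient descent on $f(\prod_{i=1}^{k} w_i) = (\prod_{i=1}^{k} w_i - 1)^2$ with every $w_i$ initialized at $0.5$, a deterministic stand-in for a small random initialization, and counts the iterations needed to converge. The choices of loss, step size, and initialization are assumptions made for the example; the qualitative behavior, convergence time blowing up with depth $k$, is what the abstract predicts.

```python
import numpy as np

def iters_to_converge(k, lr=1e-2, init=0.5, tol=1e-3, max_iters=1_000_000):
    """Gradient descent on f(w_1 * ... * w_k) = (prod - 1)^2.

    All w_i start at `init` (a stand-in for the small random
    initializations considered in the paper). Returns the number of
    iterations until |prod - 1| < tol, or max_iters if never reached.
    """
    w = np.full(k, float(init))
    for t in range(max_iters):
        p = np.prod(w)
        if abs(p - 1.0) < tol:
            return t
        # d/dw_i (p - 1)^2 = 2 (p - 1) * p / w_i  (valid while w_i != 0;
        # here all w_i stay in (0, 1], so the division is safe)
        w -= lr * 2.0 * (p - 1.0) * p / w
    return max_iters

if __name__ == "__main__":
    for k in (6, 10, 14):
        print(k, iters_to_converge(k))
```

Because the initial product is $0.5^k$, the gradient of each coordinate is of order $0.5^{k-1}$ at the start, so the early phase of training slows down exponentially as $k$ grows; running the script shows the iteration counts increasing sharply with depth.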