Training Linear Neural Networks: Non-Local Convergence and Complexity Results

Armin Eftekhari
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:2836-2847, 2020.

Abstract

Linear networks provide valuable insights into the workings of neural networks in general. This paper identifies conditions under which the gradient flow provably trains a linear network, in spite of the non-strict saddle points present in the optimization landscape. This paper also provides the computational complexity of training linear networks with gradient flow. To achieve these results, this work develops machinery to provably identify the stable set of gradient flow, which then enables us to improve over the state of the art in the literature of linear networks (Bah et al., 2019; Arora et al., 2018a). Crucially, our results appear to be the first to break away from the lazy training regime which has dominated the literature of neural networks. This work requires the network to have a layer with one neuron, which subsumes networks with a scalar output, but extending the results of this theoretical work to all linear networks remains a challenging open problem.
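As an illustrative aside (not the paper's construction or analysis): gradient descent with a small step size is the Euler discretization of the gradient flow studied in the paper. The sketch below trains a two-layer linear network with a scalar output, i.e. a network whose output layer has one neuron, which is the case the abstract's assumption subsumes. All names, dimensions, and step sizes here are arbitrary choices for the demonstration.

```python
# Minimal sketch, assuming a least-squares loss on synthetic data.
# Gradient descent with small step size eta approximates gradient flow.
# f(x) = w2 @ (W1 @ x) is a two-layer linear network with scalar output.
import numpy as np

rng = np.random.default_rng(0)
d, h, n = 5, 3, 100
X = rng.standard_normal((d, n))
w_true = rng.standard_normal(d)
y = w_true @ X                          # targets from a planted linear model

W1 = 0.1 * rng.standard_normal((h, d))  # hidden layer
w2 = 0.1 * rng.standard_normal(h)       # output layer with one neuron

eta = 1e-3                              # small step: Euler discretization of the flow
for _ in range(30000):
    r = w2 @ W1 @ X - y                 # residuals, shape (n,)
    gW1 = np.outer(w2, r @ X.T) / n     # dL/dW1 for L = mean squared error / 2
    gw2 = (W1 @ X @ r) / n              # dL/dw2
    W1 -= eta * gW1
    w2 -= eta * gw2

loss = np.mean((w2 @ W1 @ X - y) ** 2)
print(loss)
```

With small balanced initialization, the iterates start near the saddle at the origin but escape it, and the final loss is close to zero; the paper's contribution is to make such non-local convergence (and its complexity) rigorous for the gradient flow itself.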

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-eftekhari20a,
  title =     {Training Linear Neural Networks: Non-Local Convergence and Complexity Results},
  author =    {Eftekhari, Armin},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages =     {2836--2847},
  year =      {2020},
  editor =    {III, Hal Daumé and Singh, Aarti},
  volume =    {119},
  series =    {Proceedings of Machine Learning Research},
  month =     {13--18 Jul},
  publisher = {PMLR},
  pdf =       {http://proceedings.mlr.press/v119/eftekhari20a/eftekhari20a.pdf},
  url =       {http://proceedings.mlr.press/v119/eftekhari20a.html},
  abstract =  {Linear networks provide valuable insights into the workings of neural networks in general. This paper identifies conditions under which the gradient flow provably trains a linear network, in spite of the non-strict saddle points present in the optimization landscape. This paper also provides the computational complexity of training linear networks with gradient flow. To achieve these results, this work develops a machinery to provably identify the stable set of gradient flow, which then enables us to improve over the state of the art in the literature of linear networks (Bah et al., 2019; Arora et al., 2018a). Crucially, our results appear to be the first to break away from the lazy training regime which has dominated the literature of neural networks. This work requires the network to have a layer with one neuron, which subsumes the networks with a scalar output, but extending the results of this theoretical work to all linear networks remains a challenging open problem.}
}
Endnote
%0 Conference Paper
%T Training Linear Neural Networks: Non-Local Convergence and Complexity Results
%A Armin Eftekhari
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-eftekhari20a
%I PMLR
%P 2836--2847
%U http://proceedings.mlr.press/v119/eftekhari20a.html
%V 119
%X Linear networks provide valuable insights into the workings of neural networks in general. This paper identifies conditions under which the gradient flow provably trains a linear network, in spite of the non-strict saddle points present in the optimization landscape. This paper also provides the computational complexity of training linear networks with gradient flow. To achieve these results, this work develops a machinery to provably identify the stable set of gradient flow, which then enables us to improve over the state of the art in the literature of linear networks (Bah et al., 2019; Arora et al., 2018a). Crucially, our results appear to be the first to break away from the lazy training regime which has dominated the literature of neural networks. This work requires the network to have a layer with one neuron, which subsumes the networks with a scalar output, but extending the results of this theoretical work to all linear networks remains a challenging open problem.
APA
Eftekhari, A. (2020). Training Linear Neural Networks: Non-Local Convergence and Complexity Results. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:2836-2847. Available from http://proceedings.mlr.press/v119/eftekhari20a.html.