Deep Linear Networks with Arbitrary Loss: All Local Minima Are Global

Thomas Laurent, James von Brecht
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:2902-2907, 2018.

Abstract

We consider deep linear networks with arbitrary convex differentiable loss. We provide a short and elementary proof of the fact that all local minima are global minima if the hidden layers are either 1) at least as wide as the input layer, or 2) at least as wide as the output layer. This result is the strongest possible in the following sense: If the loss is convex and Lipschitz but not differentiable then deep linear networks can have sub-optimal local minima.
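The result can be illustrated numerically. Below is a minimal NumPy sketch (not from the paper; all variable names are our own): gradient descent on a two-layer deep linear network with squared loss, where the hidden layer is as wide as the input layer, so condition 1) holds. Since every local minimum is global, plain gradient descent from a generic initialization should reach the loss of the best single linear map.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden, d_out, n = 3, 3, 2, 5     # hidden width >= input width

X = rng.standard_normal((d_in, n))        # inputs
Y = rng.standard_normal((d_out, n))       # targets

# Global optimum of the equivalent convex problem: min_A 0.5 * ||A X - Y||_F^2
A_star = Y @ X.T @ np.linalg.inv(X @ X.T)
loss_opt = 0.5 * np.sum((A_star @ X - Y) ** 2)

# Factored (deep linear) model: W2 @ W1, trained by plain gradient descent.
W1 = 0.5 * rng.standard_normal((d_hidden, d_in))
W2 = 0.5 * rng.standard_normal((d_out, d_hidden))
lr = 1e-2
for _ in range(50_000):
    E = W2 @ W1 @ X - Y                   # residual of the factored model
    g1 = W2.T @ E @ X.T                   # gradient w.r.t. W1
    g2 = E @ (W1 @ X).T                   # gradient w.r.t. W2
    W1, W2 = W1 - lr * g1, W2 - lr * g2

loss = 0.5 * np.sum((W2 @ W1 @ X - Y) ** 2)
print(loss, loss_opt)                     # the two losses should agree closely
```

With a nondifferentiable convex loss (e.g., a hinge-type loss), the paper shows this guarantee can fail, so the differentiability assumption in the sketch is essential.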

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-laurent18a,
  title     = {Deep Linear Networks with Arbitrary Loss: All Local Minima Are Global},
  author    = {Laurent, Thomas and von Brecht, James},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {2902--2907},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/laurent18a/laurent18a.pdf},
  url       = {http://proceedings.mlr.press/v80/laurent18a.html},
  abstract  = {We consider deep linear networks with arbitrary convex differentiable loss. We provide a short and elementary proof of the fact that all local minima are global minima if the hidden layers are either 1) at least as wide as the input layer, or 2) at least as wide as the output layer. This result is the strongest possible in the following sense: If the loss is convex and Lipschitz but not differentiable then deep linear networks can have sub-optimal local minima.}
}
Endnote
%0 Conference Paper %T Deep Linear Networks with Arbitrary Loss: All Local Minima Are Global %A Thomas Laurent %A James von Brecht %B Proceedings of the 35th International Conference on Machine Learning %C Proceedings of Machine Learning Research %D 2018 %E Jennifer Dy %E Andreas Krause %F pmlr-v80-laurent18a %I PMLR %P 2902--2907 %U http://proceedings.mlr.press/v80/laurent18a.html %V 80 %X We consider deep linear networks with arbitrary convex differentiable loss. We provide a short and elementary proof of the fact that all local minima are global minima if the hidden layers are either 1) at least as wide as the input layer, or 2) at least as wide as the output layer. This result is the strongest possible in the following sense: If the loss is convex and Lipschitz but not differentiable then deep linear networks can have sub-optimal local minima.
APA
Laurent, T. & von Brecht, J. (2018). Deep Linear Networks with Arbitrary Loss: All Local Minima Are Global. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:2902-2907. Available from http://proceedings.mlr.press/v80/laurent18a.html.