Understanding the Loss Surface of Neural Networks for Binary Classification

Shiyu Liang, Ruoyu Sun, Yixuan Li, Rayadurgam Srikant
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:2835-2843, 2018.

Abstract

It is widely conjectured that training algorithms for neural networks are successful because all local minima lead to similar performance; for example, see (LeCun et al., 2015; Choromanska et al., 2015; Dauphin et al., 2014). Performance is typically measured in terms of two metrics: training performance and generalization performance. Here we focus on the training performance of neural networks for binary classification, and provide conditions under which the training error is zero at all local minima of appropriately chosen surrogate loss functions. Our conditions are roughly of the following form: the neurons have to be increasing and strictly convex, the neural network should be either single-layered or multi-layered with a shortcut-like connection, and the surrogate loss function should be a smooth version of the hinge loss. We also provide counterexamples to show that, when these conditions are relaxed, the result may not hold.
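As a concrete illustration of these conditions, the sketch below (in PyTorch) pairs a one-hidden-layer network carrying a shortcut-like connection with an increasing, strictly convex activation (the exponential) and a quadratically smoothed hinge surrogate. The particular activation, smoothing, and architecture are assumptions chosen for illustration, not necessarily the paper's exact construction.

# A minimal sketch of the setting in the abstract. The concrete choices
# (exp as an increasing, strictly convex activation; a quadratically
# smoothed hinge surrogate; a direct input-to-output shortcut) are
# illustrative assumptions, not the paper's exact construction.
import torch
import torch.nn as nn

class ShortcutNet(nn.Module):
    """One-hidden-layer network plus a shortcut-like connection."""
    def __init__(self, d, hidden):
        super().__init__()
        self.hidden = nn.Linear(d, hidden)
        self.out = nn.Linear(hidden, 1)
        self.shortcut = nn.Linear(d, 1, bias=False)  # shortcut from input to output

    def forward(self, x):
        h = torch.exp(self.hidden(x))  # exp is increasing and strictly convex
        return self.out(h) + self.shortcut(x)

def smoothed_hinge(margin):
    # C^1 surrogate: 0 for margin >= 1, (1 - margin)^2 / 2 on [0, 1),
    # and 1/2 - margin for margin < 0; values and slopes match at the joins.
    return torch.where(
        margin >= 1,
        torch.zeros_like(margin),
        torch.where(margin >= 0, 0.5 * (1 - margin) ** 2, 0.5 - margin),
    )

# Usage: labels y in {-1, +1}; the surrogate is applied to the margin y * f(x).
x = torch.randn(32, 10)
y = torch.randint(0, 2, (32, 1)).float() * 2 - 1
model = ShortcutNet(10, 16)
loss = smoothed_hinge(y * model(x)).mean()
loss.backward()

The surrogate here is differentiable everywhere, agrees with the hinge loss outside (0, 1), and is only one of many smoothings consistent with the abstract's description.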

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-liang18a,
  title     = {Understanding the Loss Surface of Neural Networks for Binary Classification},
  author    = {Liang, Shiyu and Sun, Ruoyu and Li, Yixuan and Srikant, Rayadurgam},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {2835--2843},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/liang18a/liang18a.pdf},
  url       = {https://proceedings.mlr.press/v80/liang18a.html}
}
Endnote
%0 Conference Paper
%T Understanding the Loss Surface of Neural Networks for Binary Classification
%A Shiyu Liang
%A Ruoyu Sun
%A Yixuan Li
%A Rayadurgam Srikant
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-liang18a
%I PMLR
%P 2835--2843
%U https://proceedings.mlr.press/v80/liang18a.html
%V 80
APA
Liang, S., Sun, R., Li, Y. & Srikant, R. (2018). Understanding the Loss Surface of Neural Networks for Binary Classification. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:2835-2843. Available from https://proceedings.mlr.press/v80/liang18a.html.