When does gradient descent with logistic loss interpolate using deep networks with smoothed ReLU activations?

Niladri S. Chatterji, Philip M. Long, Peter Bartlett
Proceedings of Thirty Fourth Conference on Learning Theory, PMLR 134:927-1027, 2021.

Abstract

We establish conditions under which gradient descent applied to fixed-width deep networks drives the logistic loss to zero, and prove bounds on the rate of convergence. Our analysis applies for smoothed approximations to the ReLU, such as Swish and the Huberized ReLU, proposed in previous applied work. We provide two sufficient conditions for convergence. The first is simply a bound on the loss at initialization. The second is a data separation condition used in prior analyses.
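The abstract names two concrete smoothed ReLU activations (Swish and the Huberized ReLU) and the training objective (logistic loss minimized by gradient descent). The sketch below is only an illustration of that setup, not the paper's construction or its assumptions: it uses the standard Swish z·sigmoid(z), one common parameterization of the Huberized ReLU with smoothing parameter h, and a toy fixed-width two-layer network; the width, learning rate, step count, and random data are all assumptions made for the example.

```python
import jax
import jax.numpy as jnp

def swish(z):
    # Swish: z * sigmoid(z), a smooth stand-in for the ReLU.
    return z * jax.nn.sigmoid(z)

def huberized_relu(z, h=1.0):
    # One common parameterization of the Huberized ReLU (assumption for this
    # sketch): zero below 0, quadratic on [0, h], linear above h, so it is
    # continuously differentiable.
    return jnp.where(z <= 0, 0.0,
           jnp.where(z <= h, z ** 2 / (2.0 * h), z - h / 2.0))

def forward(params, x, act):
    # Fixed-width fully connected network with a scalar output.
    h = x
    for W, b in params[:-1]:
        h = act(h @ W + b)
    W, b = params[-1]
    return (h @ W + b).squeeze(-1)

def logistic_loss(params, x, y, act):
    # Average logistic loss log(1 + exp(-y * f(x))) over the sample.
    margins = y * forward(params, x, act)
    return jnp.mean(jnp.log1p(jnp.exp(-margins)))

# Toy run: width-16 two-layer network, plain gradient descent on the logistic loss.
k1, k2, k3, k4 = jax.random.split(jax.random.PRNGKey(0), 4)
x = jax.random.normal(k1, (8, 4))
y = jnp.sign(jax.random.normal(k2, (8,)))
width = 16
params = [
    (jax.random.normal(k3, (4, width)) / jnp.sqrt(4.0), jnp.zeros(width)),
    (jax.random.normal(k4, (width, 1)) / jnp.sqrt(width), jnp.zeros(1)),
]
grad_fn = jax.grad(logistic_loss)
lr = 0.5
for step in range(200):
    grads = grad_fn(params, x, y, swish)
    params = [(W - lr * gW, b - lr * gb)
              for (W, b), (gW, gb) in zip(params, grads)]
print("final logistic loss:", logistic_loss(params, x, y, swish))
```

Swapping `swish` for `huberized_relu` in the calls above exercises the other smoothed activation; on this separable toy data the loss keeps decreasing toward zero, which is the interpolation behavior the paper analyzes.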

Cite this Paper


BibTeX
@InProceedings{pmlr-v134-chatterji21a,
  title     = {When does gradient descent with logistic loss interpolate using deep networks with smoothed ReLU activations?},
  author    = {Chatterji, Niladri S. and Long, Philip M. and Bartlett, Peter},
  booktitle = {Proceedings of Thirty Fourth Conference on Learning Theory},
  pages     = {927--1027},
  year      = {2021},
  editor    = {Belkin, Mikhail and Kpotufe, Samory},
  volume    = {134},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--19 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v134/chatterji21a/chatterji21a.pdf},
  url       = {https://proceedings.mlr.press/v134/chatterji21a.html},
  abstract  = {We establish conditions under which gradient descent applied to fixed-width deep networks drives the logistic loss to zero, and prove bounds on the rate of convergence. Our analysis applies for smoothed approximations to the ReLU, such as Swish and the Huberized ReLU, proposed in previous applied work. We provide two sufficient conditions for convergence. The first is simply a bound on the loss at initialization. The second is a data separation condition used in prior analyses.}
}
Endnote
%0 Conference Paper
%T When does gradient descent with logistic loss interpolate using deep networks with smoothed ReLU activations?
%A Niladri S. Chatterji
%A Philip M. Long
%A Peter Bartlett
%B Proceedings of Thirty Fourth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2021
%E Mikhail Belkin
%E Samory Kpotufe
%F pmlr-v134-chatterji21a
%I PMLR
%P 927--1027
%U https://proceedings.mlr.press/v134/chatterji21a.html
%V 134
%X We establish conditions under which gradient descent applied to fixed-width deep networks drives the logistic loss to zero, and prove bounds on the rate of convergence. Our analysis applies for smoothed approximations to the ReLU, such as Swish and the Huberized ReLU, proposed in previous applied work. We provide two sufficient conditions for convergence. The first is simply a bound on the loss at initialization. The second is a data separation condition used in prior analyses.
APA
Chatterji, N. S., Long, P. M., & Bartlett, P. (2021). When does gradient descent with logistic loss interpolate using deep networks with smoothed ReLU activations? Proceedings of Thirty Fourth Conference on Learning Theory, in Proceedings of Machine Learning Research 134:927-1027. Available from https://proceedings.mlr.press/v134/chatterji21a.html.
