Agnostic Learning of Halfspaces with Gradient Descent via Soft Margins

Spencer Frei, Yuan Cao, Quanquan Gu
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:3417-3426, 2021.

Abstract

We analyze the properties of gradient descent on convex surrogates for the zero-one loss for the agnostic learning of halfspaces. We show that when a quantity we refer to as the "soft margin" is well-behaved—a condition satisfied by log-concave isotropic distributions among others—minimizers of convex surrogates for the zero-one loss are approximate minimizers for the zero-one loss itself. As standard convex optimization arguments lead to efficient guarantees for minimizing convex surrogates of the zero-one loss, our methods allow for the first positive guarantees for the classification error of halfspaces learned by gradient descent using the binary cross-entropy or hinge loss in the presence of agnostic label noise.
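To make the setting concrete, here is a minimal, self-contained sketch (not the authors' code) of the procedure the abstract describes: gradient descent on the binary cross-entropy (logistic) surrogate for a halfspace under agnostic label noise. The isotropic Gaussian data, 5% flip rate, step size, and iteration count are all illustrative assumptions, not values from the paper.

```python
# Sketch: gradient descent on the logistic (binary cross-entropy) surrogate
# for the zero-one loss of a halfspace, with partially flipped labels.
# All hyperparameters below are assumed for illustration.
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 10
X = rng.standard_normal((n, d))        # isotropic Gaussian (log-concave isotropic)
v_star = np.zeros(d)
v_star[0] = 1.0                        # target halfspace x -> sign(<v*, x>)
y = np.sign(X @ v_star)
flip = rng.random(n) < 0.05            # 5% agnostic label noise (assumed rate)
y[flip] *= -1

def surrogate_loss(w):
    # logistic surrogate for the zero-one loss: mean of log(1 + exp(-y <w, x>))
    margins = y * (X @ w)
    return np.mean(np.logaddexp(0.0, -margins))

def surrogate_grad(w):
    margins = y * (X @ w)
    # d/dw log(1 + exp(-m)) = -sigmoid(-m) * y * x; computed in a
    # numerically stable way via -exp(-log(1 + exp(m)))
    coeff = -np.exp(-np.logaddexp(0.0, margins))
    return (coeff * y) @ X / n

w = np.zeros(d)
eta = 1.0                              # step size (assumed)
for _ in range(500):
    w -= eta * surrogate_grad(w)

zero_one = np.mean(np.sign(X @ w) != y)  # empirical zero-one loss on noisy labels
print(f"surrogate loss {surrogate_loss(w):.4f}, zero-one error {zero_one:.4f}")
```

The paper's guarantees concern exactly this pipeline: standard convex optimization drives the surrogate loss down, and the soft-margin condition is what lets a small surrogate loss be translated into a small zero-one classification error.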

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-frei21a,
  title =     {Agnostic Learning of Halfspaces with Gradient Descent via Soft Margins},
  author =    {Frei, Spencer and Cao, Yuan and Gu, Quanquan},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages =     {3417--3426},
  year =      {2021},
  editor =    {Meila, Marina and Zhang, Tong},
  volume =    {139},
  series =    {Proceedings of Machine Learning Research},
  month =     {18--24 Jul},
  publisher = {PMLR},
  pdf =       {http://proceedings.mlr.press/v139/frei21a/frei21a.pdf},
  url =       {https://proceedings.mlr.press/v139/frei21a.html},
  abstract =  {We analyze the properties of gradient descent on convex surrogates for the zero-one loss for the agnostic learning of halfspaces. We show that when a quantity we refer to as the \textit{soft margin} is well-behaved—a condition satisfied by log-concave isotropic distributions among others—minimizers of convex surrogates for the zero-one loss are approximate minimizers for the zero-one loss itself. As standard convex optimization arguments lead to efficient guarantees for minimizing convex surrogates of the zero-one loss, our methods allow for the first positive guarantees for the classification error of halfspaces learned by gradient descent using the binary cross-entropy or hinge loss in the presence of agnostic label noise.}
}
Endnote
%0 Conference Paper
%T Agnostic Learning of Halfspaces with Gradient Descent via Soft Margins
%A Spencer Frei
%A Yuan Cao
%A Quanquan Gu
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-frei21a
%I PMLR
%P 3417--3426
%U https://proceedings.mlr.press/v139/frei21a.html
%V 139
%X We analyze the properties of gradient descent on convex surrogates for the zero-one loss for the agnostic learning of halfspaces. We show that when a quantity we refer to as the \textit{soft margin} is well-behaved—a condition satisfied by log-concave isotropic distributions among others—minimizers of convex surrogates for the zero-one loss are approximate minimizers for the zero-one loss itself. As standard convex optimization arguments lead to efficient guarantees for minimizing convex surrogates of the zero-one loss, our methods allow for the first positive guarantees for the classification error of halfspaces learned by gradient descent using the binary cross-entropy or hinge loss in the presence of agnostic label noise.
APA
Frei, S., Cao, Y. & Gu, Q. (2021). Agnostic Learning of Halfspaces with Gradient Descent via Soft Margins. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:3417-3426. Available from https://proceedings.mlr.press/v139/frei21a.html.
