Fast margin maximization via dual acceleration

Ziwei Ji, Nathan Srebro, Matus Telgarsky
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:4860-4869, 2021.

Abstract

We present and analyze a momentum-based gradient method for training linear classifiers with an exponentially-tailed loss (e.g., the exponential or logistic loss), which maximizes the classification margin on separable data at a rate of O(1/t^2). This contrasts with a rate of O(1/log(t)) for standard gradient descent, and O(1/t) for normalized gradient descent. The momentum-based method is derived via the convex dual of the maximum-margin problem, and specifically by applying Nesterov acceleration to this dual, which manages to result in a simple and intuitive method in the primal. This dual view can also be used to derive a stochastic variant, which performs adaptive non-uniform sampling via the dual variables.
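For readers who want a concrete starting point, below is a minimal, hypothetical NumPy sketch in the spirit of the abstract: plain Nesterov-accelerated gradient descent applied directly to the primal logistic loss of a linear classifier on separable data. This is not the paper's algorithm, which instead applies Nesterov acceleration to the convex dual of the maximum-margin problem; the function names, step size, and momentum schedule here are illustrative assumptions only.

# Illustrative sketch (NOT the paper's dual-accelerated method): generic
# Nesterov-style momentum on the primal logistic loss of a linear classifier.
import numpy as np

def logistic_loss_grad(w, X, y):
    """Gradient of (1/n) * sum_i log(1 + exp(-y_i * <w, x_i>))."""
    margins = y * (X @ w)
    # -y_i / (1 + exp(margin_i)), written with tanh to avoid overflow
    coeffs = -y * 0.5 * (1.0 - np.tanh(margins / 2.0))
    return (X.T @ coeffs) / len(y)

def nesterov_logistic(X, y, steps=1000, lr=1.0):
    """Run Nesterov-accelerated gradient descent; return the final direction."""
    w = np.zeros(X.shape[1])
    w_prev = w.copy()
    for t in range(1, steps + 1):
        beta = (t - 1) / (t + 2)          # standard Nesterov momentum weight
        v = w + beta * (w - w_prev)       # look-ahead (momentum) point
        w_prev = w
        w = v - lr * logistic_loss_grad(v, X, y)
    return w / np.linalg.norm(w)          # unit-norm direction of the iterate

On linearly separable data, the quality of the returned unit vector w can be checked by evaluating its margin, min_i y_i <w, x_i>, and comparing it with the maximum margin; the paper's point is that its dual-accelerated update drives this gap to zero at the faster O(1/t^2) rate.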

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-ji21a,
  title     = {Fast margin maximization via dual acceleration},
  author    = {Ji, Ziwei and Srebro, Nathan and Telgarsky, Matus},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {4860--4869},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/ji21a/ji21a.pdf},
  url       = {https://proceedings.mlr.press/v139/ji21a.html}
}
Endnote
%0 Conference Paper
%T Fast margin maximization via dual acceleration
%A Ziwei Ji
%A Nathan Srebro
%A Matus Telgarsky
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-ji21a
%I PMLR
%P 4860--4869
%U https://proceedings.mlr.press/v139/ji21a.html
%V 139
APA
Ji, Z., Srebro, N., & Telgarsky, M. (2021). Fast margin maximization via dual acceleration. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:4860-4869. Available from https://proceedings.mlr.press/v139/ji21a.html.
