Mini-Batch Primal and Dual Methods for SVMs

Martin Takac, Avleen Bijral, Peter Richtarik, Nati Srebro
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(3):1022-1030, 2013.

Abstract

We address the issue of using mini-batches in stochastic optimization of SVMs. We show that the same quantity, the spectral norm of the data, controls the parallelization speedup obtained for both primal stochastic subgradient descent (SGD) and stochastic dual coordinate ascent (SDCA) methods, and we use it to derive novel variants of mini-batched SDCA. Our guarantees for both methods are expressed in terms of the original nonsmooth primal problem based on the hinge-loss.
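The primal method referenced in the abstract is mini-batched stochastic subgradient descent on the hinge-loss SVM objective P(w) = (λ/2)‖w‖² + (1/n) Σᵢ max(0, 1 − yᵢ⟨w, xᵢ⟩). As a reference point, below is a minimal Python sketch of a Pegasos-style mini-batch SGD step with the standard step size ηₜ = 1/(λt); the function name, defaults, and synthetic data are illustrative rather than taken from the paper, whose contribution is the analysis of how the spectral norm of the data governs the batch size b at which such updates retain a near-linear parallelization speedup.

import numpy as np

def minibatch_pegasos(X, y, lam=0.01, b=16, epochs=10, seed=0):
    """Mini-batch primal SGD (Pegasos-style) for the hinge-loss SVM.

    Minimizes P(w) = (lam/2)*||w||^2 + (1/n) * sum_i max(0, 1 - y_i*<w, x_i>).
    All parameter defaults are illustrative choices, not values from the paper.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for _ in range(n // b):
            t += 1
            eta = 1.0 / (lam * t)       # standard Pegasos step size
            idx = rng.choice(n, size=b, replace=False)
            Xb, yb = X[idx], y[idx]
            viol = yb * (Xb @ w) < 1.0  # margin violators carry a hinge subgradient
            grad = lam * w - (Xb[viol].T @ yb[viol]) / b
            w -= eta * grad             # w <- (1 - eta*lam)*w + (eta/b) * sum over violators
    return w

# Toy usage on synthetic separable data (illustrative only):
# rng = np.random.default_rng(1)
# X = rng.standard_normal((500, 20)); w_star = rng.standard_normal(20)
# y = np.sign(X @ w_star)
# print(np.mean(np.sign(X @ minibatch_pegasos(X, y, lam=0.1, b=32)) == y))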

Cite this Paper


BibTeX
@InProceedings{pmlr-v28-takac13,
  title =     {Mini-Batch Primal and Dual Methods for SVMs},
  author =    {Takac, Martin and Bijral, Avleen and Richtarik, Peter and Srebro, Nati},
  booktitle = {Proceedings of the 30th International Conference on Machine Learning},
  pages =     {1022--1030},
  year =      {2013},
  editor =    {Dasgupta, Sanjoy and McAllester, David},
  volume =    {28},
  number =    {3},
  series =    {Proceedings of Machine Learning Research},
  address =   {Atlanta, Georgia, USA},
  month =     {17--19 Jun},
  publisher = {PMLR},
  pdf =       {http://proceedings.mlr.press/v28/takac13.pdf},
  url =       {https://proceedings.mlr.press/v28/takac13.html},
  abstract =  {We address the issue of using mini-batches in stochastic optimization of SVMs. We show that the same quantity, the spectral norm of the data, controls the parallelization speedup obtained for both primal stochastic subgradient descent (SGD) and stochastic dual coordinate ascent (SDCA) methods, and we use it to derive novel variants of mini-batched SDCA. Our guarantees for both methods are expressed in terms of the original nonsmooth primal problem based on the hinge-loss.}
}
Endnote
%0 Conference Paper
%T Mini-Batch Primal and Dual Methods for SVMs
%A Martin Takac
%A Avleen Bijral
%A Peter Richtarik
%A Nati Srebro
%B Proceedings of the 30th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2013
%E Sanjoy Dasgupta
%E David McAllester
%F pmlr-v28-takac13
%I PMLR
%P 1022--1030
%U https://proceedings.mlr.press/v28/takac13.html
%V 28
%N 3
%X We address the issue of using mini-batches in stochastic optimization of SVMs. We show that the same quantity, the spectral norm of the data, controls the parallelization speedup obtained for both primal stochastic subgradient descent (SGD) and stochastic dual coordinate ascent (SDCA) methods, and we use it to derive novel variants of mini-batched SDCA. Our guarantees for both methods are expressed in terms of the original nonsmooth primal problem based on the hinge-loss.
RIS
TY  - CPAPER
TI  - Mini-Batch Primal and Dual Methods for SVMs
AU  - Martin Takac
AU  - Avleen Bijral
AU  - Peter Richtarik
AU  - Nati Srebro
BT  - Proceedings of the 30th International Conference on Machine Learning
DA  - 2013/05/26
ED  - Sanjoy Dasgupta
ED  - David McAllester
ID  - pmlr-v28-takac13
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 28
IS  - 3
SP  - 1022
EP  - 1030
L1  - http://proceedings.mlr.press/v28/takac13.pdf
UR  - https://proceedings.mlr.press/v28/takac13.html
AB  - We address the issue of using mini-batches in stochastic optimization of SVMs. We show that the same quantity, the spectral norm of the data, controls the parallelization speedup obtained for both primal stochastic subgradient descent (SGD) and stochastic dual coordinate ascent (SDCA) methods, and we use it to derive novel variants of mini-batched SDCA. Our guarantees for both methods are expressed in terms of the original nonsmooth primal problem based on the hinge-loss.
ER  -
APA
Takac, M., Bijral, A., Richtarik, P. & Srebro, N. (2013). Mini-Batch Primal and Dual Methods for SVMs. Proceedings of the 30th International Conference on Machine Learning, in Proceedings of Machine Learning Research 28(3):1022-1030. Available from https://proceedings.mlr.press/v28/takac13.html.
