Mini-Batch Primal and Dual Methods for SVMs

Martin Takac, Avleen Bijral, Peter Richtarik, Nati Srebro
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(3):1022-1030, 2013.

Abstract

We address the issue of using mini-batches in stochastic optimization of SVMs. We show that the same quantity, the spectral norm of the data, controls the parallelization speedup obtained for both primal stochastic subgradient descent (SGD) and stochastic dual coordinate ascent (SDCA) methods, and use it to derive novel variants of mini-batched SDCA. Our guarantees for both methods are expressed in terms of the original nonsmooth primal problem based on the hinge loss.
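To make the setting concrete, here is a minimal sketch of mini-batch primal SGD on the hinge-loss SVM objective, together with the spectral norm of the data matrix that the paper identifies as the quantity governing mini-batch speedup. This is an illustrative Pegasos-style implementation under assumed defaults (step size 1/(λt), sampling without replacement), not the paper's exact variants; the function name and toy data are made up for the example.

```python
import numpy as np

def minibatch_sgd_svm(X, y, lam=0.1, batch_size=2, epochs=20, seed=0):
    """Mini-batch SGD on the primal SVM objective
    (1/n) * sum_i max(0, 1 - y_i <w, x_i>) + (lam/2) * ||w||^2.
    A Pegasos-style sketch for illustration only."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for _ in range(n // batch_size):
            t += 1
            idx = rng.choice(n, size=batch_size, replace=False)
            Xb, yb = X[idx], y[idx]
            margins = yb * (Xb @ w)
            active = margins < 1  # examples with a nonzero hinge subgradient
            grad = lam * w - (yb[active, None] * Xb[active]).sum(axis=0) / batch_size
            w -= grad / (lam * t)  # decaying step size 1/(lam * t)
    return w

# Toy separable data (hypothetical). The spectral norm of X (its largest
# singular value) is the quantity that controls how large a mini-batch
# can be before parallelization gains degrade.
X = np.array([[1.0, 0.0], [0.9, 0.1], [-1.0, 0.0], [-0.8, -0.2]])
y = np.array([1.0, 1.0, -1.0, -1.0])
spectral_norm = np.linalg.norm(X, 2)
w = minibatch_sgd_svm(X, y)
accuracy = np.mean(np.sign(X @ w) == y)
```

Loosely, when the data are near-orthogonal the spectral norm is small and large mini-batches parallelize almost for free; highly correlated data have a large spectral norm and mini-batching helps less.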

Cite this Paper


BibTeX
@InProceedings{pmlr-v28-takac13,
  title     = {Mini-Batch Primal and Dual Methods for SVMs},
  author    = {Martin Takac and Avleen Bijral and Peter Richtarik and Nati Srebro},
  booktitle = {Proceedings of the 30th International Conference on Machine Learning},
  pages     = {1022--1030},
  year      = {2013},
  editor    = {Sanjoy Dasgupta and David McAllester},
  volume    = {28},
  number    = {3},
  series    = {Proceedings of Machine Learning Research},
  address   = {Atlanta, Georgia, USA},
  month     = {17--19 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v28/takac13.pdf},
  url       = {http://proceedings.mlr.press/v28/takac13.html},
  abstract  = {We address the issue of using mini-batches in stochastic optimization of SVMs. We show that the same quantity, the spectral norm of the data, controls the parallelization speedup obtained for both primal stochastic subgradient descent (SGD) and stochastic dual coordinate ascent (SDCA) methods, and use it to derive novel variants of mini-batched SDCA. Our guarantees for both methods are expressed in terms of the original nonsmooth primal problem based on the hinge loss.}
}
EndNote
%0 Conference Paper
%T Mini-Batch Primal and Dual Methods for SVMs
%A Martin Takac
%A Avleen Bijral
%A Peter Richtarik
%A Nati Srebro
%B Proceedings of the 30th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2013
%E Sanjoy Dasgupta
%E David McAllester
%F pmlr-v28-takac13
%I PMLR
%J Proceedings of Machine Learning Research
%P 1022--1030
%U http://proceedings.mlr.press
%V 28
%N 3
%W PMLR
%X We address the issue of using mini-batches in stochastic optimization of SVMs. We show that the same quantity, the spectral norm of the data, controls the parallelization speedup obtained for both primal stochastic subgradient descent (SGD) and stochastic dual coordinate ascent (SDCA) methods, and use it to derive novel variants of mini-batched SDCA. Our guarantees for both methods are expressed in terms of the original nonsmooth primal problem based on the hinge loss.
RIS
TY  - CPAPER
TI  - Mini-Batch Primal and Dual Methods for SVMs
AU  - Martin Takac
AU  - Avleen Bijral
AU  - Peter Richtarik
AU  - Nati Srebro
BT  - Proceedings of the 30th International Conference on Machine Learning
PY  - 2013/02/13
DA  - 2013/02/13
ED  - Sanjoy Dasgupta
ED  - David McAllester
ID  - pmlr-v28-takac13
PB  - PMLR
SP  - 1022
EP  - 1030
DP  - PMLR
L1  - http://proceedings.mlr.press/v28/takac13.pdf
UR  - http://proceedings.mlr.press/v28/takac13.html
AB  - We address the issue of using mini-batches in stochastic optimization of SVMs. We show that the same quantity, the spectral norm of the data, controls the parallelization speedup obtained for both primal stochastic subgradient descent (SGD) and stochastic dual coordinate ascent (SDCA) methods, and use it to derive novel variants of mini-batched SDCA. Our guarantees for both methods are expressed in terms of the original nonsmooth primal problem based on the hinge loss.
ER  -
APA
Takac, M., Bijral, A., Richtarik, P. & Srebro, N. (2013). Mini-Batch Primal and Dual Methods for SVMs. Proceedings of the 30th International Conference on Machine Learning, in PMLR 28(3):1022-1030.

Related Material