SDNA: Stochastic Dual Newton Ascent for Empirical Risk Minimization

Zheng Qu, Peter Richtárik, Martin Takáč, Olivier Fercoq; Proceedings of The 33rd International Conference on Machine Learning, PMLR 48:1823-1832, 2016.
Abstract
We propose a new algorithm for minimizing regularized empirical loss: Stochastic Dual Newton Ascent (SDNA). Our method is dual in nature: in each iteration we update a random subset of the dual variables. However, unlike existing methods such as stochastic dual coordinate ascent, SDNA is capable of utilizing all local curvature information contained in the examples, which leads to striking improvements in both theory and practice – sometimes by orders of magnitude. In the special case when an L2-regularizer is used in the primal, the dual problem is a concave quadratic maximization problem plus a separable term. In this regime, SDNA in each step solves a proximal subproblem involving a random principal submatrix of the Hessian of the quadratic function; whence the name of the method.
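To make the abstract's key idea concrete, here is a minimal sketch, not the paper's implementation, of an SDNA-style step in the purely quadratic regime the abstract mentions: maximizing D(alpha) = -1/2 alpha^T Q alpha + b^T alpha by repeatedly solving an exact Newton system on a random principal submatrix Q[S, S]. The particular matrix Q, the uniform sampling of the subset S, and the fixed iteration count below are illustrative assumptions; with a general separable term in the dual, each step would instead solve a small proximal subproblem over the sampled block.

# Illustrative SDNA-style block Newton ascent on a concave quadratic dual.
# All problem data below are synthetic stand-ins, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic concave quadratic dual: a Hessian of the form A A^T / (lam * n) + I
# arises, e.g., from L2-regularized ridge regression.
n, d, lam = 50, 10, 0.1
A = rng.standard_normal((n, d))
Q = A @ A.T / (lam * n) + np.eye(n)   # positive definite Hessian of -D
b = rng.standard_normal(n)

alpha = np.zeros(n)                   # dual variables
tau = 5                               # size of the random subset (minibatch)

for it in range(200):
    S = rng.choice(n, size=tau, replace=False)   # random subset of coordinates
    grad = b - Q @ alpha                         # gradient of D at alpha
    # Exact Newton step restricted to the block S: solve Q[S, S] delta = grad[S].
    # This uses all curvature information inside the sampled block.
    delta = np.linalg.solve(Q[np.ix_(S, S)], grad[S])
    alpha[S] += delta

# Compare with the exact maximizer Q^{-1} b of the quadratic dual.
print("distance to optimum:", np.linalg.norm(alpha - np.linalg.solve(Q, b)))

The contrast with stochastic dual coordinate ascent is visible in the solve: SDCA-type updates would use only the diagonal entries Q[i, i] of the sampled coordinates, whereas the block Newton step above exploits the full principal submatrix Q[S, S], which is the curvature information the abstract credits for the speedups.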
@InProceedings{pmlr-v48-qub16,
title = {SDNA: Stochastic Dual Newton Ascent for Empirical Risk Minimization},
author = {Zheng Qu and Peter Richtarik and Martin Takac and Olivier Fercoq},
booktitle = {Proceedings of The 33rd International Conference on Machine Learning},
pages = {1823--1832},
year = {2016},
editor = {Maria Florina Balcan and Kilian Q. Weinberger},
volume = {48},
series = {Proceedings of Machine Learning Research},
address = {New York, New York, USA},
month = {20--22 Jun},
publisher = {PMLR},
pdf = {http://proceedings.mlr.press/v48/qub16.pdf},
url = {http://proceedings.mlr.press/v48/qub16.html},
abstract = {We propose a new algorithm for minimizing regularized empirical loss: Stochastic Dual Newton Ascent (SDNA). Our method is dual in nature: in each iteration we update a random subset of the dual variables. However, unlike existing methods such as stochastic dual coordinate ascent, SDNA is capable of utilizing all local curvature information contained in the examples, which leads to striking improvements in both theory and practice – sometimes by orders of magnitude. In the special case when an L2-regularizer is used in the primal, the dual problem is a concave quadratic maximization problem plus a separable term. In this regime, SDNA in each step solves a proximal subproblem involving a random principal submatrix of the Hessian of the quadratic function; whence the name of the method.}
}
Qu, Z., Richtárik, P., Takáč, M. & Fercoq, O. (2016). SDNA: Stochastic Dual Newton Ascent for Empirical Risk Minimization. Proceedings of The 33rd International Conference on Machine Learning, in PMLR 48:1823-1832.