Stochastic Optimization with Importance Sampling for Regularized Loss Minimization

Peilin Zhao, Tong Zhang
Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:1-9, 2015.

Abstract

Uniform sampling of training data has been commonly used in traditional stochastic optimization algorithms such as Proximal Stochastic Mirror Descent (prox-SMD) and Proximal Stochastic Dual Coordinate Ascent (prox-SDCA). Although uniform sampling can guarantee that the sampled stochastic quantity is an unbiased estimate of the corresponding true quantity, the resulting estimator may have a rather high variance, which negatively affects the convergence of the underlying optimization procedure. In this paper we study stochastic optimization, including prox-SMD and prox-SDCA, with importance sampling, which improves the convergence rate by reducing the stochastic variance. We theoretically analyze the algorithms and empirically validate their effectiveness.
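To make the mechanism concrete, here is a minimal sketch (not the authors' prox-SMD or prox-SDCA pseudocode) of importance-sampled proximal SGD for L2-regularized logistic regression in Python. The function name, the step size, and the choice p_i proportional to ||x_i|| (a simple per-example bound on the logistic-loss gradient norm) are illustrative assumptions; the paper derives the variance-minimizing sampling distributions for its specific algorithms. The key invariant is that reweighting each sampled gradient by 1/(n p_i) keeps the estimate unbiased, E[g_i(w)/(n p_i)] = (1/n) sum_i g_i(w), while a well-chosen p can shrink its variance.

    import numpy as np

    def importance_sampled_prox_sgd(X, y, lam=1e-2, eta=0.1, n_iters=1000, seed=0):
        # Sketch: importance-sampled proximal SGD for
        #   min_w (1/n) sum_i log(1 + exp(-y_i <x_i, w>)) + (lam/2) ||w||^2
        # with labels y_i in {-1, +1}.
        rng = np.random.default_rng(seed)
        n, d = X.shape
        # Importance distribution: p_i proportional to ||x_i||, a bound on the
        # norm of the logistic-loss gradient at example i (an assumption here;
        # the paper derives the optimal distribution for each algorithm).
        L = np.linalg.norm(X, axis=1) + 1e-12   # small floor avoids p_i = 0
        p = L / L.sum()
        w = np.zeros(d)
        for _ in range(n_iters):
            i = rng.choice(n, p=p)
            margin = y[i] * (X[i] @ w)
            g_i = -y[i] * X[i] / (1.0 + np.exp(margin))  # logistic-loss gradient
            g = g_i / (n * p[i])  # reweight so E[g] equals the full-data gradient
            # Gradient step on the loss, then the exact proximal step for the
            # L2 regularizer: prox of (eta*lam/2)||.||^2 is v / (1 + eta*lam).
            w = (w - eta * g) / (1.0 + eta * lam)
        return w

With uniform probabilities p_i = 1/n the update reduces to plain proximal SGD; skewing p toward examples with larger gradient bounds lowers the variance of the stochastic gradient without introducing bias.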

Cite this Paper


BibTeX
@InProceedings{pmlr-v37-zhaoa15,
  title     = {Stochastic Optimization with Importance Sampling for Regularized Loss Minimization},
  author    = {Zhao, Peilin and Zhang, Tong},
  booktitle = {Proceedings of the 32nd International Conference on Machine Learning},
  pages     = {1--9},
  year      = {2015},
  editor    = {Bach, Francis and Blei, David},
  volume    = {37},
  series    = {Proceedings of Machine Learning Research},
  address   = {Lille, France},
  month     = {07--09 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v37/zhaoa15.pdf},
  url       = {https://proceedings.mlr.press/v37/zhaoa15.html}
}
APA
Zhao, P. & Zhang, T. (2015). Stochastic Optimization with Importance Sampling for Regularized Loss Minimization. Proceedings of the 32nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 37:1-9. Available from https://proceedings.mlr.press/v37/zhaoa15.html.
