Sparse Accelerated Exponential Weights

Pierre Gaillard, Olivier Wintenberger
Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, PMLR 54:75-82, 2017.

Abstract

We consider the stochastic optimization problem in which a convex function is minimized by recursively observing its gradients. We introduce SAEW, a new procedure that accelerates exponential weights procedures from the slow rate $1/\sqrt{T}$ to the fast rate $1/T$. Under strong convexity of the risk, we achieve the optimal rate of convergence for approximating sparse parameters in $\mathbb{R}^d$. The acceleration is obtained through successive averaging steps performed in an online fashion. Additional hard-thresholding steps ensure that the procedure also produces sparse estimators.
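The abstract only sketches the mechanism, so the following is a minimal Python illustration of how an averaging-plus-thresholding acceleration of an exponential weights learner can be organized. It assumes an EG± base learner (exponential weights over the 2d signed coordinates of an l1-ball), a doubling stage schedule, and a halving threshold; all function names, parameter values, and the restart scheme are illustrative placeholders, not the paper's actual algorithm or tuning.

import numpy as np

def hard_threshold(theta, tau):
    """Zero out coordinates with magnitude below tau (sparsification step)."""
    out = theta.copy()
    out[np.abs(out) < tau] = 0.0
    return out

def saew_sketch(grad_oracle, d, n_stages=8, stage_len=100, eta=0.1, B=1.0, tau0=0.5):
    """Illustrative SAEW-style loop: run a base exponential-weights (EG+/-)
    learner for one stage, average its iterates online, hard-threshold the
    average, then restart the next stage from the thresholded point.

    grad_oracle(theta) returns a stochastic gradient at theta.
    All schedules and constants here are placeholders, not the paper's choices.
    """
    center = np.zeros(d)
    tau = tau0
    for stage in range(n_stages):
        # EG+/- keeps exponential weights on the 2d signed coordinates,
        # so the iterate stays in an l1-ball of radius B around the center.
        w = np.ones(2 * d) / (2 * d)
        avg = np.zeros(d)
        for t in range(stage_len):
            theta = center + B * (w[:d] - w[d:])   # current iterate
            g = grad_oracle(theta)                 # stochastic gradient
            losses = B * np.concatenate([g, -g])   # linearized signed-coordinate losses
            w *= np.exp(-eta * losses)             # exponential-weights update
            w /= w.sum()
            avg += (theta - avg) / (t + 1)         # successive averaging, online
        center = hard_threshold(avg, tau)          # sparsify the averaged iterate
        tau /= 2.0                                 # shrink threshold (illustrative)
        stage_len *= 2                             # lengthen stages (illustrative)
    return center

# Toy usage: sparse quadratic risk with a 2-sparse optimum (hypothetical setup).
rng = np.random.default_rng(0)
theta_star = np.zeros(10)
theta_star[:2] = [0.8, -0.5]
oracle = lambda th: 2 * (th - theta_star) + 0.1 * rng.standard_normal(10)
print(saew_sketch(oracle, d=10))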

Cite this Paper


BibTeX
@InProceedings{pmlr-v54-gaillard17a,
  title     = {{Sparse Accelerated Exponential Weights}},
  author    = {Gaillard, Pierre and Wintenberger, Olivier},
  booktitle = {Proceedings of the 20th International Conference on Artificial Intelligence and Statistics},
  pages     = {75--82},
  year      = {2017},
  editor    = {Singh, Aarti and Zhu, Jerry},
  volume    = {54},
  series    = {Proceedings of Machine Learning Research},
  month     = {20--22 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v54/gaillard17a/gaillard17a.pdf},
  url       = {https://proceedings.mlr.press/v54/gaillard17a.html},
  abstract  = {We consider the stochastic optimization problem in which a convex function is minimized by recursively observing its gradients. We introduce SAEW, a new procedure that accelerates exponential weights procedures from the slow rate $1/\sqrt{T}$ to the fast rate $1/T$. Under strong convexity of the risk, we achieve the optimal rate of convergence for approximating sparse parameters in $\mathbb{R}^d$. The acceleration is obtained through successive averaging steps performed in an online fashion. Additional hard-thresholding steps ensure that the procedure also produces sparse estimators.}
}
Endnote
%0 Conference Paper
%T Sparse Accelerated Exponential Weights
%A Pierre Gaillard
%A Olivier Wintenberger
%B Proceedings of the 20th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2017
%E Aarti Singh
%E Jerry Zhu
%F pmlr-v54-gaillard17a
%I PMLR
%P 75--82
%U https://proceedings.mlr.press/v54/gaillard17a.html
%V 54
%X We consider the stochastic optimization problem in which a convex function is minimized by recursively observing its gradients. We introduce SAEW, a new procedure that accelerates exponential weights procedures from the slow rate $1/\sqrt{T}$ to the fast rate $1/T$. Under strong convexity of the risk, we achieve the optimal rate of convergence for approximating sparse parameters in $\mathbb{R}^d$. The acceleration is obtained through successive averaging steps performed in an online fashion. Additional hard-thresholding steps ensure that the procedure also produces sparse estimators.
APA
Gaillard, P. & Wintenberger, O. (2017). Sparse Accelerated Exponential Weights. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 54:75-82. Available from https://proceedings.mlr.press/v54/gaillard17a.html.
