The Power of Factorial Powers: New Parameter settings for (Stochastic) Optimization

Aaron Defazio, Robert M. Gower
Proceedings of The 13th Asian Conference on Machine Learning, PMLR 157:49-64, 2021.

Abstract

The convergence rates for convex and non-convex optimization methods depend on the choice of a host of constants, including step-sizes, Lyapunov function constants, and momentum constants. In this work we propose the use of factorial powers as a flexible tool for defining constants that appear in convergence proofs. We list a number of remarkable properties that these sequences enjoy, and show how they can be applied to convergence proofs to simplify or improve the convergence rates of the momentum method, accelerated gradient descent, and the stochastic variance reduced gradient method (SVRG).
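For orientation, the factorial powers in question are the classical objects from finite-difference calculus; the paper's generalization to real-valued exponents may differ in its details. The falling factorial power of x, for an integer r >= 1, is

    x^{\underline{r}} = x (x - 1) (x - 2) \cdots (x - r + 1),

and it satisfies a discrete analogue of the power rule \frac{d}{dx} x^r = r x^{r-1}:

    (x + 1)^{\underline{r}} - x^{\underline{r}} = r \, x^{\underline{r-1}}.

It is this clean telescoping behaviour under finite differences that makes such sequences natural candidates for step-size and momentum constants, since convergence proofs typically sum successive differences of a Lyapunov function.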

Cite this Paper


BibTeX
@InProceedings{pmlr-v157-defazio21a,
  title     = {The Power of Factorial Powers: New Parameter settings for (Stochastic) Optimization},
  author    = {Defazio, Aaron and Gower, Robert M.},
  booktitle = {Proceedings of The 13th Asian Conference on Machine Learning},
  pages     = {49--64},
  year      = {2021},
  editor    = {Balasubramanian, Vineeth N. and Tsang, Ivor},
  volume    = {157},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--19 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v157/defazio21a/defazio21a.pdf},
  url       = {https://proceedings.mlr.press/v157/defazio21a.html},
  abstract  = {The convergence rates for convex and non-convex optimization methods depend on the choice of a host of constants, including step-sizes, Lyapunov function constants, and momentum constants. In this work we propose the use of factorial powers as a flexible tool for defining constants that appear in convergence proofs. We list a number of remarkable properties that these sequences enjoy, and show how they can be applied to convergence proofs to simplify or improve the convergence rates of the momentum method, accelerated gradient descent, and the stochastic variance reduced gradient method (SVRG).}
}
Endnote
%0 Conference Paper
%T The Power of Factorial Powers: New Parameter settings for (Stochastic) Optimization
%A Aaron Defazio
%A Robert M. Gower
%B Proceedings of The 13th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Vineeth N. Balasubramanian
%E Ivor Tsang
%F pmlr-v157-defazio21a
%I PMLR
%P 49--64
%U https://proceedings.mlr.press/v157/defazio21a.html
%V 157
%X The convergence rates for convex and non-convex optimization methods depend on the choice of a host of constants, including step-sizes, Lyapunov function constants, and momentum constants. In this work we propose the use of factorial powers as a flexible tool for defining constants that appear in convergence proofs. We list a number of remarkable properties that these sequences enjoy, and show how they can be applied to convergence proofs to simplify or improve the convergence rates of the momentum method, accelerated gradient descent, and the stochastic variance reduced gradient method (SVRG).
APA
Defazio, A., & Gower, R. M. (2021). The Power of Factorial Powers: New Parameter settings for (Stochastic) Optimization. Proceedings of The 13th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 157:49-64. Available from https://proceedings.mlr.press/v157/defazio21a.html.