Towards Statistical and Computational Complexities of Polyak Step Size Gradient Descent

Tongzheng Ren, Fuheng Cui, Alexia Atsidakou, Sujay Sanghavi, Nhat Ho
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:3930-3961, 2022.

Abstract

We study the statistical and computational complexities of the Polyak step size gradient descent algorithm under generalized smoothness and Łojasiewicz conditions on the population loss function, namely, the limit of the empirical loss function as the sample size goes to infinity, and under a stability condition between the gradients of the empirical and population loss functions, namely, a polynomial growth bound on the concentration between the gradients of the sample and population loss functions. We demonstrate that the Polyak step size gradient descent iterates reach a final statistical radius of convergence around the true parameter after a number of iterations that is logarithmic in the sample size. This is computationally cheaper than the fixed step size gradient descent algorithm, which requires a number of iterations polynomial in the sample size to reach the same final statistical radius when the population loss function is not locally strongly convex. Finally, we illustrate our general theory with three statistical examples: the generalized linear model, the mixture model, and the mixed linear regression model.
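To make the algorithm under study concrete, below is a minimal sketch of the classical Polyak step size update, eta_t = (f(x_t) - f*) / ||grad f(x_t)||^2, applied to a toy least-squares problem. The function name `polyak_gd`, the argument `f_star`, and the synthetic data are our own illustrative choices (here f* = 0 because the toy model interpolates the data); they are not taken from the paper.

```python
import numpy as np

def polyak_gd(grad, loss, x0, f_star=0.0, max_iter=200, tol=1e-12):
    """Gradient descent with the Polyak step size.

    At each iterate x_t the step is
        eta_t = (f(x_t) - f_star) / ||grad f(x_t)||^2,
    so no smoothness or strong-convexity constants need to be tuned.
    `f_star` is the assumed optimal loss value (0 for interpolating models).
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        gnorm2 = float(g @ g)
        if gnorm2 < tol:  # (near-)stationary point reached
            break
        eta = (loss(x) - f_star) / gnorm2
        x = x - eta * g
    return x

# Toy usage: noiseless least squares, so the optimal loss value is 0.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 5))
theta_true = rng.standard_normal(5)
y = A @ theta_true
loss = lambda th: 0.5 * np.mean((A @ th - y) ** 2)
grad = lambda th: A.T @ (A @ th - y) / len(y)
theta_hat = polyak_gd(grad, loss, x0=np.zeros(5))
```

In the statistical settings analyzed in the paper, the empirical loss and its minimum value replace `loss` and `f_star`; the point of the sketch is only to show how the step size is computed from the current loss gap and gradient norm rather than from tuned constants.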

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-ren22a, title = { Towards Statistical and Computational Complexities of Polyak Step Size Gradient Descent }, author = {Ren, Tongzheng and Cui, Fuheng and Atsidakou, Alexia and Sanghavi, Sujay and Ho, Nhat}, booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics}, pages = {3930--3961}, year = {2022}, editor = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel}, volume = {151}, series = {Proceedings of Machine Learning Research}, month = {28--30 Mar}, publisher = {PMLR}, pdf = {https://proceedings.mlr.press/v151/ren22a/ren22a.pdf}, url = {https://proceedings.mlr.press/v151/ren22a.html}, abstract = { We study the statistical and computational complexities of the Polyak step size gradient descent algorithm under generalized smoothness and {Ł}ojasiewicz conditions of the population loss function, namely, the limit of the empirical loss function when the sample size goes to infinity, and the stability between the gradients of the empirical and population loss functions, namely, the polynomial growth on the concentration bound between the gradients of sample and population loss functions. We demonstrate that the Polyak step size gradient descent iterates reach a final statistical radius of convergence around the true parameter after logarithmic number of iterations in terms of the sample size. It is computationally cheaper than the polynomial number of iterations on the sample size of the fixed-step size gradient descent algorithm to reach the same final statistical radius when the population loss function is not locally strongly convex. Finally, we illustrate our general theory under three statistical examples: generalized linear model, mixture model, and mixed linear regression model. } }
Endnote
%0 Conference Paper %T Towards Statistical and Computational Complexities of Polyak Step Size Gradient Descent %A Tongzheng Ren %A Fuheng Cui %A Alexia Atsidakou %A Sujay Sanghavi %A Nhat Ho %B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics %C Proceedings of Machine Learning Research %D 2022 %E Gustau Camps-Valls %E Francisco J. R. Ruiz %E Isabel Valera %F pmlr-v151-ren22a %I PMLR %P 3930--3961 %U https://proceedings.mlr.press/v151/ren22a.html %V 151 %X We study the statistical and computational complexities of the Polyak step size gradient descent algorithm under generalized smoothness and {Ł}ojasiewicz conditions of the population loss function, namely, the limit of the empirical loss function when the sample size goes to infinity, and the stability between the gradients of the empirical and population loss functions, namely, the polynomial growth on the concentration bound between the gradients of sample and population loss functions. We demonstrate that the Polyak step size gradient descent iterates reach a final statistical radius of convergence around the true parameter after logarithmic number of iterations in terms of the sample size. It is computationally cheaper than the polynomial number of iterations on the sample size of the fixed-step size gradient descent algorithm to reach the same final statistical radius when the population loss function is not locally strongly convex. Finally, we illustrate our general theory under three statistical examples: generalized linear model, mixture model, and mixed linear regression model.
APA
Ren, T., Cui, F., Atsidakou, A., Sanghavi, S. & Ho, N. (2022). Towards Statistical and Computational Complexities of Polyak Step Size Gradient Descent. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:3930-3961. Available from https://proceedings.mlr.press/v151/ren22a.html.