Acceleration and Implicit Regularization in Gaussian Phase Retrieval

Tyler Maunu, Martin Molina-Fructuoso
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:4060-4068, 2024.

Abstract

We study accelerated optimization methods for the Gaussian phase retrieval problem. In this setting, we prove that gradient methods with Polyak or Nesterov momentum have implicit regularization similar to that of gradient descent. This implicit regularization keeps the iterates in a benign region where the cost function is strongly convex and smooth, even though it is nonconvex in general, which allows the accelerated methods to achieve faster convergence rates than gradient descent. Experimental evidence demonstrates that the accelerated methods also converge faster than gradient descent in practice.
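To make the setting concrete, the sketch below runs heavy-ball (Polyak) momentum on the standard squared-magnitude loss f(x) = (1/4m) Σᵢ ((aᵢᵀx)² − yᵢ)² with Gaussian measurement vectors aᵢ and a spectral initialization. This is a minimal illustration of the algorithmic setup, not the paper's exact algorithm: the loss, initialization, step size, and momentum values are assumptions chosen for readability, and the analyzed schedules may differ. The Nesterov variant would differ only in evaluating the gradient at the extrapolated point x + β(x − x_prev).

```python
import numpy as np

# Illustrative sketch: heavy-ball (Polyak) momentum for Gaussian phase retrieval
# on the loss f(x) = (1/4m) * sum_i ((a_i^T x)^2 - y_i)^2.
# Step size, momentum, and initialization are assumed values for this demo.

rng = np.random.default_rng(0)
n, m = 100, 800                       # signal dimension, number of measurements
x_star = rng.standard_normal(n)       # ground-truth signal
A = rng.standard_normal((m, n))       # Gaussian measurement vectors (rows of A)
y = (A @ x_star) ** 2                 # phaseless (squared-magnitude) measurements

def grad(x):
    """Gradient of f at x: (1/m) * sum_i ((a_i^T x)^2 - y_i) (a_i^T x) a_i."""
    Ax = A @ x
    return A.T @ ((Ax ** 2 - y) * Ax) / m

# Spectral-style initialization: scaled leading eigenvector of (1/m) A^T diag(y) A.
Y = (A.T * y) @ A / m
_, V = np.linalg.eigh(Y)
x = V[:, -1] * np.sqrt(np.mean(y))    # mean(y) estimates ||x_star||^2

eta = 0.1 / np.mean(y)                # Wirtinger-flow-style step size (assumed)
beta = 0.5                            # momentum parameter (assumed)
x_prev = x.copy()
for _ in range(500):
    x_new = x - eta * grad(x) + beta * (x - x_prev)   # heavy-ball update
    x_prev, x = x, x_new

# Report error up to the global sign ambiguity inherent to phase retrieval.
dist = min(np.linalg.norm(x - x_star), np.linalg.norm(x + x_star))
print(f"relative error: {dist / np.linalg.norm(x_star):.2e}")
```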

Cite this Paper

BibTeX
@InProceedings{pmlr-v238-maunu24a,
  title     = {Acceleration and Implicit Regularization in {G}aussian Phase Retrieval},
  author    = {Maunu, Tyler and Molina-Fructuoso, Martin},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages     = {4060--4068},
  year      = {2024},
  editor    = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume    = {238},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--04 May},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v238/maunu24a/maunu24a.pdf},
  url       = {https://proceedings.mlr.press/v238/maunu24a.html},
  abstract  = {We study accelerated optimization methods in the Gaussian phase retrieval problem. In this setting, we prove that gradient methods with Polyak or Nesterov momentum have similar implicit regularization to gradient descent. This implicit regularization ensures that the algorithms remain in a nice region, where the cost function is strongly convex and smooth despite being nonconvex in general. This ensures that these accelerated methods achieve faster rates of convergence than gradient descent. Experimental evidence demonstrates that the accelerated methods converge faster than gradient descent in practice.}
}
Endnote
%0 Conference Paper
%T Acceleration and Implicit Regularization in Gaussian Phase Retrieval
%A Tyler Maunu
%A Martin Molina-Fructuoso
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-maunu24a
%I PMLR
%P 4060--4068
%U https://proceedings.mlr.press/v238/maunu24a.html
%V 238
%X We study accelerated optimization methods in the Gaussian phase retrieval problem. In this setting, we prove that gradient methods with Polyak or Nesterov momentum have similar implicit regularization to gradient descent. This implicit regularization ensures that the algorithms remain in a nice region, where the cost function is strongly convex and smooth despite being nonconvex in general. This ensures that these accelerated methods achieve faster rates of convergence than gradient descent. Experimental evidence demonstrates that the accelerated methods converge faster than gradient descent in practice.
APA
Maunu, T. & Molina-Fructuoso, M. (2024). Acceleration and Implicit Regularization in Gaussian Phase Retrieval. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:4060-4068. Available from https://proceedings.mlr.press/v238/maunu24a.html.