Learning-Rate-Free Stochastic Optimization over Riemannian Manifolds

Daniel Dodd, Louis Sharrock, Christopher Nemeth
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:11105-11148, 2024.

Abstract

In recent years, interest in gradient-based optimization over Riemannian manifolds has surged. However, a significant challenge lies in the reliance on hyperparameters, especially the learning rate, which requires meticulous tuning by practitioners to ensure convergence at a suitable rate. In this work, we introduce innovative learning-rate-free algorithms for stochastic optimization over Riemannian manifolds, eliminating the need for hand-tuning and providing a more robust and user-friendly approach. We establish high probability convergence guarantees that are optimal, up to logarithmic factors, compared to the best-known optimally tuned rate in the deterministic setting. Our approach is validated through numerical experiments, demonstrating competitive performance against learning-rate-dependent algorithms.
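For readers new to the setting, the sketch below illustrates the kind of method the abstract refers to: plain Riemannian stochastic gradient descent on the unit sphere, applied to a toy smallest-eigenvalue problem. This is not the paper's algorithm; the objective, the noise model, the step count, and the lr constant are all assumptions made for illustration. The point is the lr constant itself, which ordinary Riemannian SGD requires the practitioner to tune and which the paper's learning-rate-free algorithms dispense with.

import numpy as np

rng = np.random.default_rng(0)
d = 10
M = rng.standard_normal((d, d))
A = (M @ M.T) / d   # random PSD matrix; f(x) = x^T A x is minimized on the
                    # unit sphere by the eigenvector of the smallest eigenvalue

x = rng.standard_normal(d)
x /= np.linalg.norm(x)   # initialize on the manifold (unit sphere)

lr = 0.1                 # the learning rate: the hyperparameter that
                         # learning-rate-free methods aim to eliminate
for _ in range(500):
    grad = 2.0 * A @ x + 0.01 * rng.standard_normal(d)  # stochastic gradient
    rgrad = grad - (x @ grad) * x   # project onto tangent space: (I - x x^T) grad
    x = x - lr * rgrad              # step along the tangent direction ...
    x /= np.linalg.norm(x)          # ... then retract back onto the sphere

print("f(x)       =", x @ A @ x)
print("lambda_min =", np.linalg.eigvalsh(A)[0])  # should nearly match

Set lr too large and the iterates oscillate or converge to the wrong eigenvector; set it too small and progress stalls. This sensitivity is precisely the tuning burden the paper addresses.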

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-dodd24a,
  title     = {Learning-Rate-Free Stochastic Optimization over {R}iemannian Manifolds},
  author    = {Dodd, Daniel and Sharrock, Louis and Nemeth, Christopher},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {11105--11148},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/dodd24a/dodd24a.pdf},
  url       = {https://proceedings.mlr.press/v235/dodd24a.html}
}
Endnote
%0 Conference Paper
%T Learning-Rate-Free Stochastic Optimization over Riemannian Manifolds
%A Daniel Dodd
%A Louis Sharrock
%A Christopher Nemeth
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-dodd24a
%I PMLR
%P 11105--11148
%U https://proceedings.mlr.press/v235/dodd24a.html
%V 235
APA
Dodd, D., Sharrock, L. & Nemeth, C. (2024). Learning-Rate-Free Stochastic Optimization over Riemannian Manifolds. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:11105-11148. Available from https://proceedings.mlr.press/v235/dodd24a.html.
