Convergence and Complexity Guarantee for Inexact First-order Riemannian Optimization Algorithms

Yuchen Li, Laura Balzano, Deanna Needell, Hanbaek Lyu
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:27376-27398, 2024.

Abstract

We analyze inexact Riemannian gradient descent (RGD) where Riemannian gradients and retractions are inexactly (and cheaply) computed. Our focus is on understanding when inexact RGD converges and what the complexity is in the general nonconvex and constrained setting. We answer these questions in a general framework of tangential Block Majorization-Minimization (tBMM). We establish that tBMM converges to an $\epsilon$-stationary point within $O(\epsilon^{-2})$ iterations. Under a mild assumption, the results still hold when the subproblem is solved inexactly in each iteration, provided the total optimality gap is bounded. Our general analysis applies to a wide range of classical algorithms with Riemannian constraints, including inexact RGD and the proximal gradient method on Stiefel manifolds. We numerically validate that tBMM shows improved performance over existing methods when applied to various problems, including nonnegative tensor decomposition with Riemannian constraints, regularized nonnegative matrix factorization, and low-rank matrix recovery problems.
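
For readers who want a concrete picture of the setting, the following is a minimal, self-contained sketch of Riemannian gradient descent with a cheap (QR-based) retraction on the Stiefel manifold, written in Python/NumPy. The objective (a leading-eigenspace problem), the step size, and the choice of retraction are illustrative assumptions only; this is not the paper's tBMM algorithm or its experimental setup.

# Illustrative sketch: Riemannian gradient descent on the Stiefel manifold
# St(n, p) = {X in R^{n x p} : X^T X = I_p}, using a cheap QR retraction.
# Objective, step size, and retraction choice are assumptions for illustration.
import numpy as np

def riemannian_grad(X, G):
    """Project the Euclidean gradient G onto the tangent space of St(n, p) at X."""
    sym = (X.T @ G + G.T @ X) / 2
    return G - X @ sym

def qr_retraction(X, V):
    """First-order retraction: orthogonalize X + V via thin QR factorization."""
    Q, R = np.linalg.qr(X + V)
    # Fix column signs so the retraction is continuous in V.
    return Q * np.sign(np.sign(np.diag(R)) + 0.5)

def inexact_rgd(A, X0, step=1e-2, n_iters=500):
    """Minimize f(X) = -0.5 * trace(X^T A X) over the Stiefel manifold,
    i.e., recover a leading eigenspace of the symmetric matrix A."""
    X = X0
    for _ in range(n_iters):
        euclid_grad = -A @ X                    # Euclidean gradient of f
        rgrad = riemannian_grad(X, euclid_grad) # Riemannian gradient
        X = qr_retraction(X, -step * rgrad)     # retract the descent step
    return X

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, p = 50, 3
    M = rng.standard_normal((n, n))
    A = M @ M.T                                 # symmetric PSD test matrix
    X0, _ = np.linalg.qr(rng.standard_normal((n, p)))
    X = inexact_rgd(A, X0)
    print("feasibility ||X^T X - I|| =", np.linalg.norm(X.T @ X - np.eye(p)))

The sketch keeps iterates feasible through the retraction; the paper's analysis concerns what happens when such gradients and retractions (and the per-iteration subproblems) are computed only inexactly.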

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-li24b,
  title     = {Convergence and Complexity Guarantee for Inexact First-order {R}iemannian Optimization Algorithms},
  author    = {Li, Yuchen and Balzano, Laura and Needell, Deanna and Lyu, Hanbaek},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {27376--27398},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24b/li24b.pdf},
  url       = {https://proceedings.mlr.press/v235/li24b.html},
  abstract  = {We analyze inexact Riemannian gradient descent (RGD) where Riemannian gradients and retractions are inexactly (and cheaply) computed. Our focus is on understanding when inexact RGD converges and what is the complexity in the general nonconvex and constrained setting. We answer these questions in a general framework of tangential Block Majorization-Minimization (tBMM). We establish that tBMM converges to an $\epsilon$-stationary point within $O(\epsilon^{-2})$ iterations. Under a mild assumption, the results still hold when the subproblem is solved inexactly in each iteration provided the total optimality gap is bounded. Our general analysis applies to a wide range of classical algorithms with Riemannian constraints including inexact RGD and proximal gradient method on Stiefel manifolds. We numerically validate that tBMM shows improved performance over existing methods when applied to various problems, including nonnegative tensor decomposition with Riemannian constraints, regularized nonnegative matrix factorization, and low-rank matrix recovery problems.}
}
Endnote
%0 Conference Paper
%T Convergence and Complexity Guarantee for Inexact First-order Riemannian Optimization Algorithms
%A Yuchen Li
%A Laura Balzano
%A Deanna Needell
%A Hanbaek Lyu
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-li24b
%I PMLR
%P 27376--27398
%U https://proceedings.mlr.press/v235/li24b.html
%V 235
%X We analyze inexact Riemannian gradient descent (RGD) where Riemannian gradients and retractions are inexactly (and cheaply) computed. Our focus is on understanding when inexact RGD converges and what is the complexity in the general nonconvex and constrained setting. We answer these questions in a general framework of tangential Block Majorization-Minimization (tBMM). We establish that tBMM converges to an $\epsilon$-stationary point within $O(\epsilon^{-2})$ iterations. Under a mild assumption, the results still hold when the subproblem is solved inexactly in each iteration provided the total optimality gap is bounded. Our general analysis applies to a wide range of classical algorithms with Riemannian constraints including inexact RGD and proximal gradient method on Stiefel manifolds. We numerically validate that tBMM shows improved performance over existing methods when applied to various problems, including nonnegative tensor decomposition with Riemannian constraints, regularized nonnegative matrix factorization, and low-rank matrix recovery problems.
APA
Li, Y., Balzano, L., Needell, D., & Lyu, H. (2024). Convergence and Complexity Guarantee for Inexact First-order Riemannian Optimization Algorithms. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:27376-27398. Available from https://proceedings.mlr.press/v235/li24b.html.