RECAPP: Crafting a More Efficient Catalyst for Convex Optimization

Yair Carmon, Arun Jambulapati, Yujia Jin, Aaron Sidford
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:2658-2685, 2022.

Abstract

The accelerated proximal point method (APPA), also known as "Catalyst", is a well-established reduction from convex optimization to approximate proximal point computation (i.e., regularized minimization). This reduction is conceptually elegant and yields strong convergence rate guarantees. However, these rates feature an extraneous logarithmic term arising from the need to compute each proximal point to high accuracy. In this work, we propose a novel Relaxed Error Criterion for Accelerated Proximal Point (RECAPP) that eliminates the need for high accuracy subproblem solutions. We apply RECAPP to two canonical problems: finite-sum and max-structured minimization. For finite-sum problems, we match the best known complexity, previously obtained by carefully-designed problem-specific algorithms. For minimizing max_y f(x,y) where f is convex in x and strongly-concave in y, we improve on the best known (Catalyst-based) bound by a logarithmic factor.
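To make the reduction concrete, here is a minimal sketch of an inexact proximal point iteration of the kind that Catalyst-style methods build on: each outer step approximately solves the regularized subproblem argmin_y f(y) + (lam/2)||y - x||^2. This is an illustration only, not the paper's RECAPP criterion or algorithm; the function names, step sizes, and iteration counts are all illustrative choices.

```python
import numpy as np

def approx_prox_step(grad_f, x, lam, inner_iters=50, step=0.1):
    """Approximately minimize f(y) + (lam/2)||y - x||^2 by gradient descent.

    Illustrative inner solver; Catalyst-type analyses govern how accurately
    this subproblem must be solved at each outer iteration.
    """
    y = x.copy()
    for _ in range(inner_iters):
        # Gradient of the regularized objective: grad f(y) + lam * (y - x).
        y -= step * (grad_f(y) + lam * (y - x))
    return y

def proximal_point(grad_f, x0, lam=1.0, outer_iters=20):
    """Outer loop: repeatedly take approximate proximal steps."""
    x = x0.copy()
    for _ in range(outer_iters):
        x = approx_prox_step(grad_f, x, lam)
    return x

# Example: minimize the smooth convex quadratic f(x) = 0.5 x^T A x - b^T x.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = np.linalg.solve(A, b)  # exact minimizer, for comparison
x_hat = proximal_point(lambda x: A @ x - b, np.zeros(2))
print(np.allclose(x_hat, x_star, atol=1e-3))
```

The paper's point is precisely about the accuracy demanded of the inner solve: requiring high accuracy (as in the sketch above) costs an extra logarithmic factor, which RECAPP's relaxed error criterion removes.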

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-carmon22a,
  title     = {{RECAPP}: Crafting a More Efficient Catalyst for Convex Optimization},
  author    = {Carmon, Yair and Jambulapati, Arun and Jin, Yujia and Sidford, Aaron},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {2658--2685},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/carmon22a/carmon22a.pdf},
  url       = {https://proceedings.mlr.press/v162/carmon22a.html},
  abstract  = {The accelerated proximal point method (APPA), also known as "Catalyst", is a well-established reduction from convex optimization to approximate proximal point computation (i.e., regularized minimization). This reduction is conceptually elegant and yields strong convergence rate guarantees. However, these rates feature an extraneous logarithmic term arising from the need to compute each proximal point to high accuracy. In this work, we propose a novel Relaxed Error Criterion for Accelerated Proximal Point (RECAPP) that eliminates the need for high accuracy subproblem solutions. We apply RECAPP to two canonical problems: finite-sum and max-structured minimization. For finite-sum problems, we match the best known complexity, previously obtained by carefully-designed problem-specific algorithms. For minimizing max_y f(x,y) where f is convex in x and strongly-concave in y, we improve on the best known (Catalyst-based) bound by a logarithmic factor.}
}
Endnote
%0 Conference Paper
%T RECAPP: Crafting a More Efficient Catalyst for Convex Optimization
%A Yair Carmon
%A Arun Jambulapati
%A Yujia Jin
%A Aaron Sidford
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-carmon22a
%I PMLR
%P 2658--2685
%U https://proceedings.mlr.press/v162/carmon22a.html
%V 162
%X The accelerated proximal point method (APPA), also known as "Catalyst", is a well-established reduction from convex optimization to approximate proximal point computation (i.e., regularized minimization). This reduction is conceptually elegant and yields strong convergence rate guarantees. However, these rates feature an extraneous logarithmic term arising from the need to compute each proximal point to high accuracy. In this work, we propose a novel Relaxed Error Criterion for Accelerated Proximal Point (RECAPP) that eliminates the need for high accuracy subproblem solutions. We apply RECAPP to two canonical problems: finite-sum and max-structured minimization. For finite-sum problems, we match the best known complexity, previously obtained by carefully-designed problem-specific algorithms. For minimizing max_y f(x,y) where f is convex in x and strongly-concave in y, we improve on the best known (Catalyst-based) bound by a logarithmic factor.
APA
Carmon, Y., Jambulapati, A., Jin, Y. & Sidford, A. (2022). RECAPP: Crafting a More Efficient Catalyst for Convex Optimization. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:2658-2685. Available from https://proceedings.mlr.press/v162/carmon22a.html.