Non-Euclidean High-Order Smooth Convex Optimization Extended Abstract

Juan Pablo Contreras, Cristóbal Guzmán, David Martínez-Rubio
Proceedings of Thirty Eighth Conference on Learning Theory, PMLR 291:1330-1330, 2025.

Abstract

We develop algorithms for the optimization of convex objectives that have Hölder continuous $q$-th derivatives by using a $q$-th order oracle, for any $q \geq 1$. Our algorithms work for general norms under mild conditions, including the $\ell_p$-settings for $1\leq p\leq \infty$. We can also optimize structured functions that allow for inexactly implementing a non-Euclidean ball optimization oracle. We do this by developing a non-Euclidean inexact accelerated proximal point method that makes use of an \textit{inexact uniformly convex regularizer}. We show a lower bound for general norms that demonstrates our algorithms are nearly optimal in high-dimensions in the black-box oracle model for $\ell_p$-settings and all $q \geq 1$, even in randomized and parallel settings. This new lower bound, when applied to the first-order smooth case, resolves an open question in parallel convex optimization.
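For readers unfamiliar with the smoothness condition named in the abstract, the Hölder continuity of the $q$-th derivative can be stated as follows. This is the standard formulation; the paper's exact choice of constants and norms may differ.

```latex
% Hölder continuity of the q-th derivative: for some constant L > 0
% and exponent \nu \in (0, 1], and a chosen (possibly non-Euclidean)
% norm \|\cdot\|, e.g. an \ell_p norm with 1 \le p \le \infty,
\[
  \|\nabla^q f(x) - \nabla^q f(y)\|_{*} \;\le\; L\,\|x - y\|^{\nu}
  \qquad \text{for all } x, y,
\]
% where \|\cdot\|_{*} denotes the induced operator norm on
% q-th derivative tensors. The case q = 1, \nu = 1 recovers the
% usual notion of L-smoothness with respect to \|\cdot\|.
```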

Cite this Paper


BibTeX
@InProceedings{pmlr-v291-contreras25a,
  title     = {Non-Euclidean High-Order Smooth Convex Optimization Extended Abstract},
  author    = {Contreras, Juan Pablo and Guzm\'an, Crist\'obal and Mart\'{\i}nez-Rubio, David},
  booktitle = {Proceedings of Thirty Eighth Conference on Learning Theory},
  pages     = {1330--1330},
  year      = {2025},
  editor    = {Haghtalab, Nika and Moitra, Ankur},
  volume    = {291},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Jun--04 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v291/main/assets/contreras25a/contreras25a.pdf},
  url       = {https://proceedings.mlr.press/v291/contreras25a.html},
  abstract  = {We develop algorithms for the optimization of convex objectives that have Hölder continuous $q$-th derivatives by using a $q$-th order oracle, for any $q \geq 1$. Our algorithms work for general norms under mild conditions, including the $\ell_p$-settings for $1\leq p\leq \infty$. We can also optimize structured functions that allow for inexactly implementing a non-Euclidean ball optimization oracle. We do this by developing a non-Euclidean inexact accelerated proximal point method that makes use of an \textit{inexact uniformly convex regularizer}. We show a lower bound for general norms that demonstrates our algorithms are nearly optimal in high-dimensions in the black-box oracle model for $\ell_p$-settings and all $q \geq 1$, even in randomized and parallel settings. This new lower bound, when applied to the first-order smooth case, resolves an open question in parallel convex optimization.}
}
Endnote
%0 Conference Paper
%T Non-Euclidean High-Order Smooth Convex Optimization Extended Abstract
%A Juan Pablo Contreras
%A Cristóbal Guzmán
%A David Martínez-Rubio
%B Proceedings of Thirty Eighth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2025
%E Nika Haghtalab
%E Ankur Moitra
%F pmlr-v291-contreras25a
%I PMLR
%P 1330--1330
%U https://proceedings.mlr.press/v291/contreras25a.html
%V 291
%X We develop algorithms for the optimization of convex objectives that have Hölder continuous $q$-th derivatives by using a $q$-th order oracle, for any $q \geq 1$. Our algorithms work for general norms under mild conditions, including the $\ell_p$-settings for $1\leq p\leq \infty$. We can also optimize structured functions that allow for inexactly implementing a non-Euclidean ball optimization oracle. We do this by developing a non-Euclidean inexact accelerated proximal point method that makes use of an \textit{inexact uniformly convex regularizer}. We show a lower bound for general norms that demonstrates our algorithms are nearly optimal in high-dimensions in the black-box oracle model for $\ell_p$-settings and all $q \geq 1$, even in randomized and parallel settings. This new lower bound, when applied to the first-order smooth case, resolves an open question in parallel convex optimization.
APA
Contreras, J.P., Guzmán, C. & Martínez-Rubio, D. (2025). Non-Euclidean High-Order Smooth Convex Optimization Extended Abstract. Proceedings of Thirty Eighth Conference on Learning Theory, in Proceedings of Machine Learning Research 291:1330-1330. Available from https://proceedings.mlr.press/v291/contreras25a.html.