Convex optimization with $p$-norm oracles

Deeksha Adil, Brian Bullins, Arun Jambulapati, Aaron Sidford
Proceedings of The 37th International Conference on Algorithmic Learning Theory, PMLR 313:1-38, 2026.

Abstract

In recent years, there have been significant advances in efficiently solving $\ell_s$-regression using linear system solvers and $\ell_2$-regression [Adil-Kyng-Peng-Sachdeva, J. ACM’24]. Would efficient smoothed $\ell_p$-norm solvers lead to even faster rates for solving $\ell_s$-regression when $2 \leq p < s$? In this paper, we give an affirmative answer to this question and show how to solve $\ell_s$-regression using $\tilde{O}(n^{\frac{\nu}{1+\nu}})$ iterations of solving smoothed $\ell_p$-regression problems, where $\nu := \frac{1}{p} - \frac{1}{s}$. To obtain this result, we provide improved accelerated rates for convex optimization problems when given access to an _$\ell_p^s(\lambda)$-proximal oracle_, which, for a point $c$, returns the solution of the regularized problem $\min_{x} f(x) + \lambda \|x-c\|_p^s$. Additionally, we show that these rates for the $\ell_p^s(\lambda)$-proximal oracle are optimal for algorithms that query in the span of the outputs of the oracle, and we further apply our techniques to settings of high-order and quasi-self-concordant optimization.
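To make the oracle concrete, here is a minimal numerical sketch of an $\ell_p^s(\lambda)$-proximal oracle as defined above: given a center $c$, it (approximately) returns $\arg\min_{x} f(x) + \lambda \|x-c\|_p^s$. The function name `lp_s_proximal_oracle`, the quadratic test objective, and the use of `scipy.optimize.minimize` with BFGS are assumptions made for this sketch only; the paper treats general convex $f$ and assumes the oracle returns the exact minimizer of the regularized problem.

```python
# Illustrative sketch of an l_p^s(lambda)-proximal oracle:
# given a center c, approximately return argmin_x f(x) + lam * ||x - c||_p^s.
# The quadratic test objective and the BFGS solver are assumptions for this
# sketch; the paper assumes exact oracle solutions for general convex f.
import numpy as np
from scipy.optimize import minimize


def lp_s_proximal_oracle(f, c, lam=1.0, p=2, s=4):
    """Approximately solve min_x f(x) + lam * ||x - c||_p^s."""
    def objective(x):
        return f(x) + lam * np.linalg.norm(x - c, ord=p) ** s

    # For p = 2 and even s >= 2 the regularizer is smooth, so a quasi-Newton
    # method with finite-difference gradients suffices on this toy problem.
    result = minimize(objective, x0=np.asarray(c, dtype=float), method="BFGS")
    return result.x


# Toy convex objective: f(x) = ||Ax - b||_2^2.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
f = lambda x: float(np.linalg.norm(A @ x - b) ** 2)

x_reg = lp_s_proximal_oracle(f, c=np.zeros(5), lam=0.5, p=2, s=4)
print(x_reg)
```

For general $p$, the regularizer $\|x-c\|_p^s$ is no longer as well conditioned, and in practice one would substitute a solver tailored to smoothed $\ell_p$-regression, which is precisely the subproblem the paper's iteration count $\tilde{O}(n^{\frac{\nu}{1+\nu}})$ is measured against.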

Cite this Paper

BibTeX
@InProceedings{pmlr-v313-adil26b,
  title     = {Convex optimization with $p$-norm oracles},
  author    = {Adil, Deeksha and Bullins, Brian and Jambulapati, Arun and Sidford, Aaron},
  booktitle = {Proceedings of The 37th International Conference on Algorithmic Learning Theory},
  pages     = {1--38},
  year      = {2026},
  editor    = {Telgarsky, Matus and Ullman, Jonathan},
  volume    = {313},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--26 Feb},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v313/main/assets/adil26b/adil26b.pdf},
  url       = {https://proceedings.mlr.press/v313/adil26b.html},
  abstract  = {In recent years, there have been significant advances in efficiently solving $\ell_s$-regression using linear system solvers and $\ell_2$-regression [Adil-Kyng-Peng-Sachdeva, J. ACM’24]. Would efficient smoothed $\ell_p$-norm solvers lead to even faster rates for solving $\ell_s$-regression when $2 \leq p < s$? In this paper, we give an affirmative answer to this question and show how to solve $\ell_s$-regression using $\tilde{O}(n^{\frac{\nu}{1+\nu}})$ iterations of solving smoothed $\ell_p$-regression problems, where $\nu := \frac{1}{p} - \frac{1}{s}$. To obtain this result, we provide improved accelerated rates for convex optimization problems when given access to an _$\ell_p^s(\lambda)$-proximal oracle_, which, for a point $c$, returns the solution of the regularized problem $\min_{x} f(x) + \lambda \|x-c\|_p^s$. Additionally, we show that these rates for the $\ell_p^s(\lambda)$-proximal oracle are optimal for algorithms that query in the span of the outputs of the oracle, and we further apply our techniques to settings of high-order and quasi-self-concordant optimization.}
}
Endnote
%0 Conference Paper
%T Convex optimization with $p$-norm oracles
%A Deeksha Adil
%A Brian Bullins
%A Arun Jambulapati
%A Aaron Sidford
%B Proceedings of The 37th International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2026
%E Matus Telgarsky
%E Jonathan Ullman
%F pmlr-v313-adil26b
%I PMLR
%P 1--38
%U https://proceedings.mlr.press/v313/adil26b.html
%V 313
%X In recent years, there have been significant advances in efficiently solving $\ell_s$-regression using linear system solvers and $\ell_2$-regression [Adil-Kyng-Peng-Sachdeva, J. ACM’24]. Would efficient smoothed $\ell_p$-norm solvers lead to even faster rates for solving $\ell_s$-regression when $2 \leq p < s$? In this paper, we give an affirmative answer to this question and show how to solve $\ell_s$-regression using $\tilde{O}(n^{\frac{\nu}{1+\nu}})$ iterations of solving smoothed $\ell_p$-regression problems, where $\nu := \frac{1}{p} - \frac{1}{s}$. To obtain this result, we provide improved accelerated rates for convex optimization problems when given access to an _$\ell_p^s(\lambda)$-proximal oracle_, which, for a point $c$, returns the solution of the regularized problem $\min_{x} f(x) + \lambda \|x-c\|_p^s$. Additionally, we show that these rates for the $\ell_p^s(\lambda)$-proximal oracle are optimal for algorithms that query in the span of the outputs of the oracle, and we further apply our techniques to settings of high-order and quasi-self-concordant optimization.
APA
Adil, D., Bullins, B., Jambulapati, A. & Sidford, A. (2026). Convex optimization with $p$-norm oracles. Proceedings of The 37th International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 313:1-38. Available from https://proceedings.mlr.press/v313/adil26b.html.