Efficient Projection-Free Online Convex Optimization with Membership Oracle

Zakaria Mhammedi
Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:5314-5390, 2022.

Abstract

In constrained convex optimization, existing interior point methods do not scale well with the dimension of the ambient space. Alternative approaches such as Projected Gradient Descent only provide a computational benefit for simple convex sets where Euclidean projections can be performed efficiently, such as Euclidean balls. For other more complex sets, the cost of the projections can be too high. To circumvent these issues, alternative methods based on the famous Frank-Wolfe algorithm have been studied and widely used. Such methods use a Linear Optimization Oracle at each iteration instead of Euclidean projections; the former can often be performed efficiently. Such methods have also been extended to the online and stochastic optimization settings. However, the Frank-Wolfe algorithm and its variants do not achieve the optimal performance, in terms of regret or rate, for general convex sets. What is more, the Linear Optimization Oracle they use can still be computationally expensive in some cases. In this paper, we move away from Frank-Wolfe style algorithms and present a new reduction that turns any algorithm $\mathsf{A}$ over a Euclidean ball (where projections are cheap) into an algorithm over a general convex constraint set $\mathcal{C}$ contained within the ball, without sacrificing the performance of the original algorithm $\mathsf{A}$ by much. Our reduction requires $O(T \ln T)$ calls to a Membership Oracle on $\mathcal{C}$ after $T$ rounds, and no linear optimization on $\mathcal{C}$ is needed. Using this reduction, we recover optimal regret bounds [resp. rates], in terms of the number of iterations, in online [resp. stochastic] convex optimization. Our guarantees are also useful in the offline convex optimization setting when the dimension of the ambient space is large.
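
To give a concrete sense of why a Membership Oracle suffices, the following is a minimal Python sketch of the gauge-based idea behind such reductions: the gauge (Minkowski) function $\gamma_{\mathcal{C}}(x) = \inf\{\lambda > 0 : x \in \lambda \mathcal{C}\}$ can be approximated by bisection using only membership queries, and an iterate produced by the ball algorithm $\mathsf{A}$ can then be mapped into $\mathcal{C}$ by the radial scaling $x \mapsto x / \max(1, \gamma_{\mathcal{C}}(x))$. The sketch assumes, as is standard, that $\mathcal{C}$ contains a small ball of radius $r$ around the origin and is contained in the ball of radius $R$ on which $\mathsf{A}$ operates; the function names and the ellipsoid oracle below are illustrative only and are not the paper's implementation, which, roughly speaking, combines this gauge machinery with carefully chosen surrogate losses so as not to degrade the regret of $\mathsf{A}$.

import numpy as np

def in_ellipsoid(x, A):
    # Example membership oracle: True iff x lies in C = {x : x^T A x <= 1}.
    return float(x @ A @ x) <= 1.0

def gauge_via_bisection(x, member, R, r, tol=1e-6):
    # Approximate gamma_C(x) = inf{lam > 0 : x in lam*C} by bisection,
    # using only membership queries. If r*B <= C <= R*B (B the unit ball),
    # then gamma_C(x) lies in [||x||/R, ||x||/r], which brackets the search.
    norm = np.linalg.norm(x)
    if norm == 0.0:
        return 0.0
    lo, hi = norm / R, norm / r
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if member(x / mid):      # x/mid in C  <=>  gamma_C(x) <= mid
            hi = mid
        else:
            lo = mid
    return hi                    # O(log(1/tol)) membership queries

def gauge_projection(x, member, R, r, tol=1e-6):
    # Map a point of the enclosing ball onto C by radial scaling.
    return x / max(1.0, gauge_via_bisection(x, member, R, r, tol))

# Toy usage: C is an ellipsoid with 0.5*B <= C <= B; y is an iterate of
# the ball algorithm that falls outside C and gets scaled back onto it.
A = np.diag([1.0, 4.0])
member = lambda x: in_ellipsoid(x, A)
y = np.array([0.8, 0.5])
print(gauge_projection(y, member, R=1.0, r=0.5))

Each such scaling costs only logarithmically many membership queries, which is consistent with the $O(T \ln T)$ oracle calls over $T$ rounds stated in the abstract.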

Cite this Paper


BibTeX
@InProceedings{pmlr-v178-mhammedi22a,
  title     = {Efficient Projection-Free Online Convex Optimization with Membership Oracle},
  author    = {Mhammedi, Zakaria},
  booktitle = {Proceedings of Thirty Fifth Conference on Learning Theory},
  pages     = {5314--5390},
  year      = {2022},
  editor    = {Loh, Po-Ling and Raginsky, Maxim},
  volume    = {178},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--05 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v178/mhammedi22a/mhammedi22a.pdf},
  url       = {https://proceedings.mlr.press/v178/mhammedi22a.html},
  abstract  = {In constrained convex optimization, existing interior point methods do not scale well with the dimension of the ambient space. Alternative approaches such as Projected Gradient Descent only provide a computational benefit for simple convex sets where Euclidean projections can be performed efficiently, such as Euclidean balls. For other more complex sets, the cost of the projections can be too high. To circumvent these issues, alternative methods based on the famous Frank-Wolfe algorithm have been studied and widely used. Such methods use a Linear Optimization Oracle at each iteration instead of Euclidean projections; the former can often be performed efficiently. Such methods have also been extended to the online and stochastic optimization settings. However, the Frank-Wolfe algorithm and its variants do not achieve the optimal performance, in terms of regret or rate, for general convex sets. What is more, the Linear Optimization Oracle they use can still be computationally expensive in some cases. In this paper, we move away from Frank-Wolfe style algorithms and present a new reduction that turns any algorithm $\mathsf{A}$ over a Euclidean ball (where projections are cheap) into an algorithm over a general convex constraint set $\mathcal{C}$ contained within the ball, without sacrificing the performance of the original algorithm $\mathsf{A}$ by much. Our reduction requires $O(T \ln T)$ calls to a Membership Oracle on $\mathcal{C}$ after $T$ rounds, and no linear optimization on $\mathcal{C}$ is needed. Using this reduction, we recover optimal regret bounds [resp. rates], in terms of the number of iterations, in online [resp. stochastic] convex optimization. Our guarantees are also useful in the offline convex optimization setting when the dimension of the ambient space is large.}
}
Endnote
%0 Conference Paper
%T Efficient Projection-Free Online Convex Optimization with Membership Oracle
%A Zakaria Mhammedi
%B Proceedings of Thirty Fifth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2022
%E Po-Ling Loh
%E Maxim Raginsky
%F pmlr-v178-mhammedi22a
%I PMLR
%P 5314--5390
%U https://proceedings.mlr.press/v178/mhammedi22a.html
%V 178
%X In constrained convex optimization, existing interior point methods do not scale well with the dimension of the ambient space. Alternative approaches such as Projected Gradient Descent only provide a computational benefit for simple convex sets where Euclidean projections can be performed efficiently, such as Euclidean balls. For other more complex sets, the cost of the projections can be too high. To circumvent these issues, alternative methods based on the famous Frank-Wolfe algorithm have been studied and widely used. Such methods use a Linear Optimization Oracle at each iteration instead of Euclidean projections; the former can often be performed efficiently. Such methods have also been extended to the online and stochastic optimization settings. However, the Frank-Wolfe algorithm and its variants do not achieve the optimal performance, in terms of regret or rate, for general convex sets. What is more, the Linear Optimization Oracle they use can still be computationally expensive in some cases. In this paper, we move away from Frank-Wolfe style algorithms and present a new reduction that turns any algorithm $\mathsf{A}$ over a Euclidean ball (where projections are cheap) into an algorithm over a general convex constraint set $\mathcal{C}$ contained within the ball, without sacrificing the performance of the original algorithm $\mathsf{A}$ by much. Our reduction requires $O(T \ln T)$ calls to a Membership Oracle on $\mathcal{C}$ after $T$ rounds, and no linear optimization on $\mathcal{C}$ is needed. Using this reduction, we recover optimal regret bounds [resp. rates], in terms of the number of iterations, in online [resp. stochastic] convex optimization. Our guarantees are also useful in the offline convex optimization setting when the dimension of the ambient space is large.
APA
Mhammedi, Z. (2022). Efficient Projection-Free Online Convex Optimization with Membership Oracle. Proceedings of Thirty Fifth Conference on Learning Theory, in Proceedings of Machine Learning Research 178:5314-5390. Available from https://proceedings.mlr.press/v178/mhammedi22a.html.
