Faster Rates for Convex-Concave Games

Jacob Abernethy, Kevin A. Lai, Kfir Y. Levy, Jun-Kun Wang
Proceedings of the 31st Conference On Learning Theory, PMLR 75:1595-1625, 2018.

Abstract

We consider the use of no-regret algorithms to compute equilibria for particular classes of convex-concave games. While standard regret bounds would lead to convergence rates on the order of $O(T^{-1/2})$, recent work \citep{RS13,SALS15} has established $O(1/T)$ rates by taking advantage of a particular class of optimistic prediction algorithms. In this work we go further, showing that for a particular class of games one achieves a $O(1/T^2)$ rate, and we show how this applies to the Frank-Wolfe method and recovers a similar bound \citep{D15}. We also show that such no-regret techniques can even achieve a linear rate, $O(\exp(-T))$, for equilibrium computation under additional curvature assumptions.
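The gap between plain gradient dynamics and the optimistic variants the abstract credits for the faster rates of \citep{RS13,SALS15} can be seen on a toy example. The sketch below is illustrative only (it is not the paper's algorithm; the game, step size, and iteration count are assumptions): it runs simultaneous gradient descent-ascent and its optimistic counterpart on the bilinear game $f(x, y) = xy$, whose unique equilibrium is $(0, 0)$.

```python
def gda(x, y, eta=0.1, steps=10000):
    """Plain simultaneous gradient descent-ascent on f(x, y) = x*y."""
    for _ in range(steps):
        # grad_x f = y (descend), grad_y f = x (ascend)
        x, y = x - eta * y, y + eta * x
    return x, y


def ogda(x, y, eta=0.1, steps=10000):
    """Optimistic GDA: each update extrapolates with the previous gradient,
    i.e. the step uses 2*g_t - g_{t-1} instead of g_t."""
    gx_prev, gy_prev = y, x
    for _ in range(steps):
        gx, gy = y, x
        x, y = x - eta * (2 * gx - gx_prev), y + eta * (2 * gy - gy_prev)
        gx_prev, gy_prev = gx, gy
    return x, y


# Plain GDA spirals away from the equilibrium (its squared norm grows by
# a factor of 1 + eta^2 every step), while the optimistic dynamics contract.
xg, yg = gda(1.0, 1.0)
xo, yo = ogda(1.0, 1.0)
```

The one-line change in `ogda` is the "optimistic prediction" idea: using the previous gradient as a guess for the next one cancels the rotational part of the bilinear dynamics, which is what turns cycling into convergence.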

Cite this Paper


BibTeX
@InProceedings{pmlr-v75-abernethy18a,
  title     = {Faster Rates for Convex-Concave Games},
  author    = {Abernethy, Jacob and Lai, Kevin A. and Levy, Kfir Y. and Wang, Jun-Kun},
  booktitle = {Proceedings of the 31st Conference On Learning Theory},
  pages     = {1595--1625},
  year      = {2018},
  editor    = {Bubeck, Sébastien and Perchet, Vianney and Rigollet, Philippe},
  volume    = {75},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v75/abernethy18a/abernethy18a.pdf},
  url       = {https://proceedings.mlr.press/v75/abernethy18a.html},
  abstract  = {We consider the use of no-regret algorithms to compute equilibria for particular classes of convex-concave games. While standard regret bounds would lead to convergence rates on the order of $O(T^{-1/2})$, recent work \citep{RS13,SALS15} has established $O(1/T)$ rates by taking advantage of a particular class of optimistic prediction algorithms. In this work we go further, showing that for a particular class of games one achieves a $O(1/T^2)$ rate, and we show how this applies to the Frank-Wolfe method and recovers a similar bound \citep{D15}. We also show that such no-regret techniques can even achieve a linear rate, $O(\exp(-T))$, for equilibrium computation under additional curvature assumptions.}
}
Endnote
%0 Conference Paper
%T Faster Rates for Convex-Concave Games
%A Jacob Abernethy
%A Kevin A. Lai
%A Kfir Y. Levy
%A Jun-Kun Wang
%B Proceedings of the 31st Conference On Learning Theory
%C Proceedings of Machine Learning Research
%D 2018
%E Sébastien Bubeck
%E Vianney Perchet
%E Philippe Rigollet
%F pmlr-v75-abernethy18a
%I PMLR
%P 1595--1625
%U https://proceedings.mlr.press/v75/abernethy18a.html
%V 75
%X We consider the use of no-regret algorithms to compute equilibria for particular classes of convex-concave games. While standard regret bounds would lead to convergence rates on the order of $O(T^{-1/2})$, recent work \citep{RS13,SALS15} has established $O(1/T)$ rates by taking advantage of a particular class of optimistic prediction algorithms. In this work we go further, showing that for a particular class of games one achieves a $O(1/T^2)$ rate, and we show how this applies to the Frank-Wolfe method and recovers a similar bound \citep{D15}. We also show that such no-regret techniques can even achieve a linear rate, $O(\exp(-T))$, for equilibrium computation under additional curvature assumptions.
APA
Abernethy, J., Lai, K.A., Levy, K.Y. & Wang, J. (2018). Faster Rates for Convex-Concave Games. Proceedings of the 31st Conference On Learning Theory, in Proceedings of Machine Learning Research 75:1595-1625. Available from https://proceedings.mlr.press/v75/abernethy18a.html.