Learning in Non-convex Games with an Optimization Oracle

Naman Agarwal, Alon Gonen, Elad Hazan
Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:18-29, 2019.

Abstract

We consider online learning in an adversarial, non-convex setting under the assumption that the learner has access to an offline optimization oracle. In the general setting of prediction with expert advice, Hazan and Koren established that in the optimization-oracle model, online learning requires exponentially more computation than statistical learning. In this paper we show that by slightly strengthening the oracle model, the online and the statistical learning models become computationally equivalent. Our result holds for any Lipschitz and bounded (but not necessarily convex) function. As an application we demonstrate how the offline oracle enables efficient computation of an equilibrium in non-convex games, which include GANs (generative adversarial networks) as a special case.
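The oracle model described above can be illustrated with the classic Follow-the-Perturbed-Leader template: at each round the learner calls an offline optimization oracle on the cumulative past losses plus a random perturbation. The sketch below is a generic, hypothetical illustration of this template (with a finite candidate grid standing in for the continuous decision set), not the paper's exact algorithm or guarantees:

```python
import random

def offline_oracle(losses, point_set):
    """Hypothetical offline optimization oracle: return the point in
    `point_set` minimizing the cumulative value of the given loss
    functions. A finite grid stands in for a continuous decision set."""
    return min(point_set, key=lambda x: sum(f(x) for f in losses))

def ftpl(loss_stream, point_set, eta=1.0, seed=0):
    """Follow-the-Perturbed-Leader via an optimization oracle (sketch).

    Each round: draw a random linear perturbation, ask the oracle for a
    minimizer of past losses plus the perturbation, play that point, and
    suffer the current round's loss. Returns the cumulative loss."""
    rng = random.Random(seed)
    past_losses, total = [], 0.0
    for f in loss_stream:
        sigma = rng.uniform(-eta, eta)
        # Perturb the cumulative objective with a random linear term.
        perturbed = past_losses + [lambda x, s=sigma: s * x]
        x_t = offline_oracle(perturbed, point_set)
        total += f(x_t)      # suffer the current loss
        past_losses.append(f)
    return total
```

Here `offline_oracle`, `ftpl`, and the perturbation scale `eta` are names invented for this sketch; the paper's result concerns a strengthened version of such an oracle for Lipschitz, bounded losses.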

Cite this Paper


BibTeX
@InProceedings{pmlr-v99-agarwal19a,
  title = {Learning in Non-convex Games with an Optimization Oracle},
  author = {Agarwal, Naman and Gonen, Alon and Hazan, Elad},
  booktitle = {Proceedings of the Thirty-Second Conference on Learning Theory},
  pages = {18--29},
  year = {2019},
  editor = {Beygelzimer, Alina and Hsu, Daniel},
  volume = {99},
  series = {Proceedings of Machine Learning Research},
  month = {25--28 Jun},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v99/agarwal19a/agarwal19a.pdf},
  url = {https://proceedings.mlr.press/v99/agarwal19a.html},
  abstract = {We consider online learning in an adversarial, non-convex setting under the assumption that the learner has access to an offline optimization oracle. In the general setting of prediction with expert advice, Hazan and Koren established that in the optimization-oracle model, online learning requires exponentially more computation than statistical learning. In this paper we show that by slightly strengthening the oracle model, the online and the statistical learning models become computationally equivalent. Our result holds for any Lipschitz and bounded (but not necessarily convex) function. As an application we demonstrate how the offline oracle enables efficient computation of an equilibrium in non-convex games, which include GANs (generative adversarial networks) as a special case.}
}
Endnote
%0 Conference Paper
%T Learning in Non-convex Games with an Optimization Oracle
%A Naman Agarwal
%A Alon Gonen
%A Elad Hazan
%B Proceedings of the Thirty-Second Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2019
%E Alina Beygelzimer
%E Daniel Hsu
%F pmlr-v99-agarwal19a
%I PMLR
%P 18--29
%U https://proceedings.mlr.press/v99/agarwal19a.html
%V 99
%X We consider online learning in an adversarial, non-convex setting under the assumption that the learner has access to an offline optimization oracle. In the general setting of prediction with expert advice, Hazan and Koren established that in the optimization-oracle model, online learning requires exponentially more computation than statistical learning. In this paper we show that by slightly strengthening the oracle model, the online and the statistical learning models become computationally equivalent. Our result holds for any Lipschitz and bounded (but not necessarily convex) function. As an application we demonstrate how the offline oracle enables efficient computation of an equilibrium in non-convex games, which include GANs (generative adversarial networks) as a special case.
APA
Agarwal, N., Gonen, A. & Hazan, E. (2019). Learning in Non-convex Games with an Optimization Oracle. Proceedings of the Thirty-Second Conference on Learning Theory, in Proceedings of Machine Learning Research 99:18-29. Available from https://proceedings.mlr.press/v99/agarwal19a.html.