Efficient Regret Minimization in Non-Convex Games

Elad Hazan, Karan Singh, Cyril Zhang
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:1433-1441, 2017.

Abstract

We consider regret minimization in repeated games with non-convex loss functions. Minimizing the standard notion of regret is computationally intractable. Thus, we define a natural notion of regret which permits efficient optimization and generalizes offline guarantees for convergence to an approximate local optimum. We give gradient-based methods that achieve optimal regret, which in turn guarantee convergence to equilibrium in this framework.
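The abstract does not spell out the algorithm, but one natural way to instantiate a "gradient-based method" for a local, window-based notion of regret is online gradient descent on a sliding-window average of the recent losses. The sketch below is an illustration under that assumption, not the paper's exact algorithm; the window size, step size, and toy losses are placeholders chosen for the example.

import numpy as np

def time_smoothed_ogd(grads, x0, window=10, eta=0.01):
    """Illustrative sketch: online gradient descent where each update uses
    the average gradient of the last `window` observed losses.

    grads: list of callables, grads[t](x) -> gradient of the round-t loss at x.
    Returns the sequence of iterates x_0, x_1, ..., x_T.
    """
    x = np.asarray(x0, dtype=float)
    iterates = [x.copy()]
    for t in range(len(grads)):
        # Average the gradients of the last `window` losses at the current point.
        lo = max(0, t - window + 1)
        avg_grad = np.mean([grads[i](x) for i in range(lo, t + 1)], axis=0)
        x = x - eta * avg_grad  # descent step on the window-smoothed objective
        iterates.append(x.copy())
    return iterates

# Example with non-convex losses f_t(x) = (||x||^2 - 1)^2 + <n_t, x>,
# whose gradient is 4x(||x||^2 - 1) + n_t for per-round noise n_t.
rng = np.random.default_rng(0)
grads = [lambda x, n=rng.normal(size=2): 4 * x * (x @ x - 1) + n for _ in range(100)]
xs = time_smoothed_ogd(grads, x0=np.array([2.0, -2.0]), window=20, eta=0.05)

The intuition behind smoothing over a window is that no single adversarially chosen loss can dominate the measured gradient at any round, which is what makes a local, stationarity-based notion of regret attainable where the standard notion is intractable.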

Cite this Paper


BibTeX
@InProceedings{pmlr-v70-hazan17a,
  title     = {Efficient Regret Minimization in Non-Convex Games},
  author    = {Elad Hazan and Karan Singh and Cyril Zhang},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages     = {1433--1441},
  year      = {2017},
  editor    = {Precup, Doina and Teh, Yee Whye},
  volume    = {70},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--11 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v70/hazan17a/hazan17a.pdf},
  url       = {https://proceedings.mlr.press/v70/hazan17a.html},
  abstract  = {We consider regret minimization in repeated games with non-convex loss functions. Minimizing the standard notion of regret is computationally intractable. Thus, we define a natural notion of regret which permits efficient optimization and generalizes offline guarantees for convergence to an approximate local optimum. We give gradient-based methods that achieve optimal regret, which in turn guarantee convergence to equilibrium in this framework.}
}
Endnote
%0 Conference Paper
%T Efficient Regret Minimization in Non-Convex Games
%A Elad Hazan
%A Karan Singh
%A Cyril Zhang
%B Proceedings of the 34th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Doina Precup
%E Yee Whye Teh
%F pmlr-v70-hazan17a
%I PMLR
%P 1433--1441
%U https://proceedings.mlr.press/v70/hazan17a.html
%V 70
%X We consider regret minimization in repeated games with non-convex loss functions. Minimizing the standard notion of regret is computationally intractable. Thus, we define a natural notion of regret which permits efficient optimization and generalizes offline guarantees for convergence to an approximate local optimum. We give gradient-based methods that achieve optimal regret, which in turn guarantee convergence to equilibrium in this framework.
APA
Hazan, E., Singh, K. & Zhang, C. (2017). Efficient Regret Minimization in Non-Convex Games. Proceedings of the 34th International Conference on Machine Learning, in Proceedings of Machine Learning Research 70:1433-1441. Available from https://proceedings.mlr.press/v70/hazan17a.html.