Linear Lower Bounds and Conditioning of Differentiable Games

Adam Ibrahim, Waïss Azizian, Gauthier Gidel, Ioannis Mitliagkas
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:4583-4593, 2020.

Abstract

Recent successes of game-theoretic formulations in ML have caused a resurgence of research interest in differentiable games. Overwhelmingly, that research focuses on methods and upper bounds on their speed of convergence. In this work, we approach the question of fundamental iteration complexity by providing lower bounds to complement the linear (i.e. geometric) upper bounds observed in the literature on a wide class of problems. We cast saddle-point and min-max problems as 2-player games. We leverage tools from single-objective convex optimisation to propose new linear lower bounds for convex-concave games. Notably, we give a linear lower bound for $n$-player differentiable games, by using the spectral properties of the update operator. We then propose a new definition of the condition number arising from our lower bound analysis. Unlike past definitions, our condition number captures the fact that linear rates are possible in games, even in the absence of strong convexity or strong concavity in the variables.

Cite this Paper
BibTeX
@InProceedings{pmlr-v119-ibrahim20a,
  title     = {Linear Lower Bounds and Conditioning of Differentiable Games},
  author    = {Ibrahim, Adam and Azizian, Wa\"{\i}ss and Gidel, Gauthier and Mitliagkas, Ioannis},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {4583--4593},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/ibrahim20a/ibrahim20a.pdf},
  url       = {https://proceedings.mlr.press/v119/ibrahim20a.html},
  abstract  = {Recent successes of game-theoretic formulations in ML have caused a resurgence of research interest in differentiable games. Overwhelmingly, that research focuses on methods and upper bounds on their speed of convergence. In this work, we approach the question of fundamental iteration complexity by providing lower bounds to complement the linear (i.e. geometric) upper bounds observed in the literature on a wide class of problems. We cast saddle-point and min-max problems as 2-player games. We leverage tools from single-objective convex optimisation to propose new linear lower bounds for convex-concave games. Notably, we give a linear lower bound for $n$-player differentiable games, by using the spectral properties of the update operator. We then propose a new definition of the condition number arising from our lower bound analysis. Unlike past definitions, our condition number captures the fact that linear rates are possible in games, even in the absence of strong convexity or strong concavity in the variables.}
}
Endnote
%0 Conference Paper
%T Linear Lower Bounds and Conditioning of Differentiable Games
%A Adam Ibrahim
%A Waïss Azizian
%A Gauthier Gidel
%A Ioannis Mitliagkas
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-ibrahim20a
%I PMLR
%P 4583--4593
%U https://proceedings.mlr.press/v119/ibrahim20a.html
%V 119
%X Recent successes of game-theoretic formulations in ML have caused a resurgence of research interest in differentiable games. Overwhelmingly, that research focuses on methods and upper bounds on their speed of convergence. In this work, we approach the question of fundamental iteration complexity by providing lower bounds to complement the linear (i.e. geometric) upper bounds observed in the literature on a wide class of problems. We cast saddle-point and min-max problems as 2-player games. We leverage tools from single-objective convex optimisation to propose new linear lower bounds for convex-concave games. Notably, we give a linear lower bound for $n$-player differentiable games, by using the spectral properties of the update operator. We then propose a new definition of the condition number arising from our lower bound analysis. Unlike past definitions, our condition number captures the fact that linear rates are possible in games, even in the absence of strong convexity or strong concavity in the variables.
APA
Ibrahim, A., Azizian, W., Gidel, G. &amp; Mitliagkas, I. (2020). Linear Lower Bounds and Conditioning of Differentiable Games. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:4583-4593. Available from https://proceedings.mlr.press/v119/ibrahim20a.html.