David Balduzzi, Sebastien Racaniere, James Martens, Jakob Foerster, Karl Tuyls, Thore Graepel
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:354-363, 2018.
Abstract
The cornerstone underpinning deep learning is the guarantee that gradient descent on an objective converges to local minima. Unfortunately, this guarantee fails in settings, such as generative adversarial nets, where there are multiple interacting losses. The behavior of gradient-based methods in games is not well understood – and is becoming increasingly important as adversarial and multi-objective architectures proliferate. In this paper, we develop new techniques to understand and control the dynamics in general games. The key result is to decompose the second-order dynamics into two components. The first is related to potential games, which reduce to gradient descent on an implicit function; the second relates to Hamiltonian games, a new class of games that obey a conservation law, akin to conservation laws in classical mechanical systems. The decomposition motivates Symplectic Gradient Adjustment (SGA), a new algorithm for finding stable fixed points in general games. Basic experiments show SGA is competitive with recently proposed algorithms for finding local Nash equilibria in GANs – whilst at the same time being applicable to – and having guarantees in – much more general games.
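To make the adjustment described above concrete, here is a minimal sketch of Symplectic Gradient Adjustment on the bilinear zero-sum game f1(x, y) = x*y, f2(x, y) = -x*y, a purely Hamiltonian game in the paper's decomposition. The function names, the hand-coded Jacobian, and the fixed lambda are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def game_gradient(w):
    """Simultaneous-gradient field xi = (df1/dx, df2/dy) for f1 = x*y, f2 = -x*y."""
    x, y = w
    return np.array([y, -x])

def sga_step(w, lr=0.05, lam=1.0):
    xi = game_gradient(w)
    # Jacobian of xi for this game, split into symmetric (S) and
    # antisymmetric (A) parts: H = S + A, with S = 0 here.
    H = np.array([[0.0, 1.0], [-1.0, 0.0]])
    A = 0.5 * (H - H.T)
    # SGA follows xi + lam * A^T xi instead of xi alone.
    return w - lr * (xi + lam * A.T @ xi)

w = np.array([1.0, 1.0])
for _ in range(200):
    w = sga_step(w)
# Plain simultaneous gradient descent cycles around the origin on this game;
# the adjusted dynamics spiral in toward the stable fixed point (0, 0).
```

The adjustment term lam * A^T xi points along the gradient of the conserved quantity ||xi||^2 / 2, which is what pulls the otherwise-cycling dynamics toward the fixed point.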
@InProceedings{pmlr-v80-balduzzi18a,
title = {The Mechanics of n-Player Differentiable Games},
author = {Balduzzi, David and Racaniere, Sebastien and Martens, James and Foerster, Jakob and Tuyls, Karl and Graepel, Thore},
booktitle = {Proceedings of the 35th International Conference on Machine Learning},
pages = {354--363},
year = {2018},
editor = {Dy, Jennifer and Krause, Andreas},
volume = {80},
series = {Proceedings of Machine Learning Research},
address = {Stockholmsmässan, Stockholm, Sweden},
month = {10--15 Jul},
publisher = {PMLR},
pdf = {http://proceedings.mlr.press/v80/balduzzi18a/balduzzi18a.pdf},
url = {http://proceedings.mlr.press/v80/balduzzi18a.html},
abstract = {The cornerstone underpinning deep learning is the guarantee that gradient descent on an objective converges to local minima. Unfortunately, this guarantee fails in settings, such as generative adversarial nets, where there are multiple interacting losses. The behavior of gradient-based methods in games is not well understood – and is becoming increasingly important as adversarial and multi-objective architectures proliferate. In this paper, we develop new techniques to understand and control the dynamics in general games. The key result is to decompose the second-order dynamics into two components. The first is related to potential games, which reduce to gradient descent on an implicit function; the second relates to Hamiltonian games, a new class of games that obey a conservation law, akin to conservation laws in classical mechanical systems. The decomposition motivates Symplectic Gradient Adjustment (SGA), a new algorithm for finding stable fixed points in general games. Basic experiments show SGA is competitive with recently proposed algorithms for finding local Nash equilibria in GANs – whilst at the same time being applicable to – and having guarantees in – much more general games.}
}
Balduzzi, D., Racaniere, S., Martens, J., Foerster, J., Tuyls, K. & Graepel, T. (2018). The Mechanics of n-Player Differentiable Games. Proceedings of the 35th International Conference on Machine Learning, in PMLR 80:354-363.