Complex Momentum for Optimization in Games

Jonathan P. Lorraine, David Acuna, Paul Vicol, David Duvenaud
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:7742-7765, 2022.

Abstract

We generalize gradient descent with momentum for optimization in differentiable games to have complex-valued momentum. We give theoretical motivation for our method by proving convergence on bilinear zero-sum games for simultaneous and alternating updates. Our method gives real-valued parameter updates, making it a drop-in replacement for standard optimizers. We empirically demonstrate that complex-valued momentum can improve convergence in realistic adversarial games, like generative adversarial networks, by showing we can find better solutions with an almost identical computational cost. We also show a practical complex-valued Adam variant, which we use to train BigGAN to improve inception scores on CIFAR-10.
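
To make the update concrete, below is a minimal NumPy sketch of complex momentum; it is an illustration under stated assumptions, not the authors' implementation. The general form (a complex momentum buffer scaled by a complex coefficient beta, with only the real part applied to the parameters) follows the abstract's description of real-valued updates; the specific step size, momentum magnitude and phase, and the bilinear test game are placeholder choices.

    import numpy as np

    # Sketch of gradient descent with complex momentum (illustrative, not
    # the authors' code). The buffer and coefficient beta are complex; only
    # the real part of the buffer moves the real-valued parameters, which
    # is what makes the method a drop-in replacement for standard momentum.
    def complex_momentum_step(params, grad, buffer,
                              lr=0.1, beta=0.9 * np.exp(1j * np.pi / 8)):
        buffer = beta * buffer - grad          # complex momentum accumulation
        params = params + lr * buffer.real     # real-valued parameter update
        return params, buffer

    # Illustrative use on the bilinear zero-sum game min_x max_y x*y, whose
    # equilibrium is (0, 0). The (lr, |beta|, arg(beta)) values above are
    # placeholders; the paper characterizes which settings converge.
    x, y = np.array([4.0]), np.array([4.0])
    mx = np.zeros(1, dtype=complex)
    my = np.zeros(1, dtype=complex)
    for _ in range(1000):
        gx, gy = y, -x                         # gradients of x*y and -x*y
        x, mx = complex_momentum_step(x, gx, mx)
        y, my = complex_momentum_step(y, gy, my)
    print(abs(x[0]), abs(y[0]))                # both should shrink toward 0

The loop above computes both gradients before either player moves, i.e., simultaneous updates; alternating updates would compute gy from the freshly updated x. The abstract's convergence results cover both schemes.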

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-lorraine22a,
  title     = {Complex Momentum for Optimization in Games},
  author    = {Lorraine, Jonathan P. and Acuna, David and Vicol, Paul and Duvenaud, David},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {7742--7765},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/lorraine22a/lorraine22a.pdf},
  url       = {https://proceedings.mlr.press/v151/lorraine22a.html},
  abstract  = {We generalize gradient descent with momentum for optimization in differentiable games to have complex-valued momentum. We give theoretical motivation for our method by proving convergence on bilinear zero-sum games for simultaneous and alternating updates. Our method gives real-valued parameter updates, making it a drop-in replacement for standard optimizers. We empirically demonstrate that complex-valued momentum can improve convergence in realistic adversarial games, like generative adversarial networks, by showing we can find better solutions with an almost identical computational cost. We also show a practical complex-valued Adam variant, which we use to train BigGAN to improve inception scores on CIFAR-10.}
}
Endnote
%0 Conference Paper
%T Complex Momentum for Optimization in Games
%A Jonathan P. Lorraine
%A David Acuna
%A Paul Vicol
%A David Duvenaud
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-lorraine22a
%I PMLR
%P 7742--7765
%U https://proceedings.mlr.press/v151/lorraine22a.html
%V 151
%X We generalize gradient descent with momentum for optimization in differentiable games to have complex-valued momentum. We give theoretical motivation for our method by proving convergence on bilinear zero-sum games for simultaneous and alternating updates. Our method gives real-valued parameter updates, making it a drop-in replacement for standard optimizers. We empirically demonstrate that complex-valued momentum can improve convergence in realistic adversarial games, like generative adversarial networks, by showing we can find better solutions with an almost identical computational cost. We also show a practical complex-valued Adam variant, which we use to train BigGAN to improve inception scores on CIFAR-10.
APA
Lorraine, J.P., Acuna, D., Vicol, P. & Duvenaud, D. (2022). Complex Momentum for Optimization in Games. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:7742-7765. Available from https://proceedings.mlr.press/v151/lorraine22a.html.
