Multi-objective training of Generative Adversarial Networks with multiple discriminators

Isabela Albuquerque, Joao Monteiro, Thang Doan, Breandan Considine, Tiago Falk, Ioannis Mitliagkas
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:202-211, 2019.

Abstract

Recent literature has demonstrated promising results for training Generative Adversarial Networks by employing a set of discriminators, in contrast to the traditional game involving one generator against a single adversary. Such methods perform single-objective optimization on some simple consolidation of the losses, e.g. an arithmetic average. In this work, we revisit the multiple-discriminator setting by framing the simultaneous minimization of losses provided by different models as a multi-objective optimization problem. Specifically, we evaluate the performance of multiple gradient descent and the hypervolume maximization algorithm on a number of different datasets. Moreover, we argue that the previously proposed methods and hypervolume maximization can all be seen as variations of multiple gradient descent in which the update direction can be computed efficiently. Our results indicate that hypervolume maximization presents a better compromise between sample quality and computational cost than previous methods.
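The weighting scheme behind hypervolume maximization can be sketched briefly. Under this formulation, the generator minimizes $-\sum_k \log(\eta - l_k)$ over the $K$ discriminator losses, where $\eta$ is a "nadir point" set above the largest current loss; each loss therefore receives gradient weight $1/(\eta - l_k)$, so the discriminators against which the generator currently performs worst dominate the update, in contrast to the uniform weights of an arithmetic average. A minimal illustrative sketch (function name and `slack` parameter are hypothetical, not from the paper):

```python
def hypervolume_weights(losses, slack=0.1):
    """Per-discriminator gradient weights under hypervolume maximization.

    Minimizing -sum_k log(eta - l_k) gives the gradient
    sum_k [1 / (eta - l_k)] * d l_k / d theta, i.e. each loss is
    weighted inversely to its slack from the nadir point eta.
    """
    # Nadir point: slightly above the worst (largest) current loss.
    eta = max(losses) + slack
    return [1.0 / (eta - l) for l in losses]
```

For example, with losses `[0.5, 1.0]` the second (worse) discriminator gets a much larger weight than the first, which is the self-balancing behavior the abstract contrasts with a plain average.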

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-albuquerque19a,
  title     = {Multi-objective training of Generative Adversarial Networks with multiple discriminators},
  author    = {Albuquerque, Isabela and Monteiro, Joao and Doan, Thang and Considine, Breandan and Falk, Tiago and Mitliagkas, Ioannis},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {202--211},
  year      = {2019},
  editor    = {Kamalika Chaudhuri and Ruslan Salakhutdinov},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/albuquerque19a/albuquerque19a.pdf},
  url       = {http://proceedings.mlr.press/v97/albuquerque19a.html},
  abstract  = {Recent literature has demonstrated promising results for training Generative Adversarial Networks by employing a set of discriminators, in contrast to the traditional game involving one generator against a single adversary. Such methods perform single-objective optimization on some simple consolidation of the losses, e.g. an arithmetic average. In this work, we revisit the multiple-discriminator setting by framing the simultaneous minimization of losses provided by different models as a multi-objective optimization problem. Specifically, we evaluate the performance of multiple gradient descent and the hypervolume maximization algorithm on a number of different datasets. Moreover, we argue that the previously proposed methods and hypervolume maximization can all be seen as variations of multiple gradient descent in which the update direction can be computed efficiently. Our results indicate that hypervolume maximization presents a better compromise between sample quality and computational cost than previous methods.}
}
Endnote
%0 Conference Paper
%T Multi-objective training of Generative Adversarial Networks with multiple discriminators
%A Isabela Albuquerque
%A Joao Monteiro
%A Thang Doan
%A Breandan Considine
%A Tiago Falk
%A Ioannis Mitliagkas
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-albuquerque19a
%I PMLR
%P 202--211
%U http://proceedings.mlr.press/v97/albuquerque19a.html
%V 97
%X Recent literature has demonstrated promising results for training Generative Adversarial Networks by employing a set of discriminators, in contrast to the traditional game involving one generator against a single adversary. Such methods perform single-objective optimization on some simple consolidation of the losses, e.g. an arithmetic average. In this work, we revisit the multiple-discriminator setting by framing the simultaneous minimization of losses provided by different models as a multi-objective optimization problem. Specifically, we evaluate the performance of multiple gradient descent and the hypervolume maximization algorithm on a number of different datasets. Moreover, we argue that the previously proposed methods and hypervolume maximization can all be seen as variations of multiple gradient descent in which the update direction can be computed efficiently. Our results indicate that hypervolume maximization presents a better compromise between sample quality and computational cost than previous methods.
APA
Albuquerque, I., Monteiro, J., Doan, T., Considine, B., Falk, T. &amp; Mitliagkas, I. (2019). Multi-objective training of Generative Adversarial Networks with multiple discriminators. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:202-211. Available from http://proceedings.mlr.press/v97/albuquerque19a.html.