A Two-Step Computation of the Exact GAN Wasserstein Distance

Huidong Liu, Xianfeng GU, Dimitris Samaras
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:3159-3168, 2018.

Abstract

In this paper, we propose a two-step method to compute the Wasserstein distance in Wasserstein Generative Adversarial Networks (WGANs): 1) The convex part of our objective can be solved by linear programming; 2) The non-convex residual can be approximated by a deep neural network. We theoretically prove that the proposed formulation is equivalent to the discrete Monge-Kantorovich dual formulation. Furthermore, we give the approximation error bound of the Wasserstein distance and the error bound of generalizing the Wasserstein distance from discrete to continuous distributions. Our approach optimizes the exact Wasserstein distance, obviating the need for weight clipping previously used in WGANs. Results on synthetic data show that our method computes the Wasserstein distance more accurately. Qualitative and quantitative results on MNIST, LSUN and CIFAR-10 datasets show that the proposed method is more efficient than state-of-the-art WGAN methods, and still produces images of comparable quality.
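
The abstract refers to computing the exact Wasserstein distance between discrete distributions via linear programming. The sketch below is not the authors' two-step formulation (which solves the Monge-Kantorovich dual and fits the non-convex residual with a neural network); it only illustrates, under that assumption, the standard discrete optimal-transport LP that yields the exact Wasserstein-1 distance between two empirical sample batches, using SciPy. The function name exact_wasserstein_lp is illustrative.

```python
# A minimal sketch (not the paper's implementation): exact Wasserstein-1 distance
# between two uniform empirical measures, via the discrete Monge-Kantorovich LP.
import numpy as np
from scipy.optimize import linprog
from scipy.spatial.distance import cdist

def exact_wasserstein_lp(x, y):
    """Exact W1 between uniform empirical measures on samples x (n,d) and y (m,d)."""
    n, m = len(x), len(y)
    cost = cdist(x, y)                      # ground cost c(x_i, y_j) = ||x_i - y_j||
    # Marginal constraints on the transport plan pi (flattened row-major):
    # rows of pi sum to 1/n, columns sum to 1/m.
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0    # sum_j pi_ij = 1/n
    for j in range(m):
        A_eq[n + j, j::m] = 1.0             # sum_i pi_ij = 1/m
    b_eq = np.concatenate([np.full(n, 1.0 / n), np.full(m, 1.0 / m)])
    res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method="highs")
    return res.fun                          # optimal transport cost = W1(x, y)

# Example: two small Gaussian batches with different means.
rng = np.random.default_rng(0)
print(exact_wasserstein_lp(rng.normal(0, 1, (50, 2)), rng.normal(2, 1, (60, 2))))
```

Because the LP has n*m variables, this is only practical for small batches; the paper's contribution is precisely a scalable alternative that splits the objective into a convex LP part and a neural-network-approximated residual.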

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-liu18d,
  title     = {A Two-Step Computation of the Exact {GAN} {W}asserstein Distance},
  author    = {Liu, Huidong and GU, Xianfeng and Samaras, Dimitris},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {3159--3168},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/liu18d/liu18d.pdf},
  url       = {https://proceedings.mlr.press/v80/liu18d.html},
  abstract  = {In this paper, we propose a two-step method to compute the Wasserstein distance in Wasserstein Generative Adversarial Networks (WGANs): 1) The convex part of our objective can be solved by linear programming; 2) The non-convex residual can be approximated by a deep neural network. We theoretically prove that the proposed formulation is equivalent to the discrete Monge-Kantorovich dual formulation. Furthermore, we give the approximation error bound of the Wasserstein distance and the error bound of generalizing the Wasserstein distance from discrete to continuous distributions. Our approach optimizes the exact Wasserstein distance, obviating the need for weight clipping previously used in WGANs. Results on synthetic data show that our method computes the Wasserstein distance more accurately. Qualitative and quantitative results on MNIST, LSUN and CIFAR-10 datasets show that the proposed method is more efficient than state-of-the-art WGAN methods, and still produces images of comparable quality.}
}
Endnote
%0 Conference Paper
%T A Two-Step Computation of the Exact GAN Wasserstein Distance
%A Huidong Liu
%A Xianfeng GU
%A Dimitris Samaras
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-liu18d
%I PMLR
%P 3159--3168
%U https://proceedings.mlr.press/v80/liu18d.html
%V 80
%X In this paper, we propose a two-step method to compute the Wasserstein distance in Wasserstein Generative Adversarial Networks (WGANs): 1) The convex part of our objective can be solved by linear programming; 2) The non-convex residual can be approximated by a deep neural network. We theoretically prove that the proposed formulation is equivalent to the discrete Monge-Kantorovich dual formulation. Furthermore, we give the approximation error bound of the Wasserstein distance and the error bound of generalizing the Wasserstein distance from discrete to continuous distributions. Our approach optimizes the exact Wasserstein distance, obviating the need for weight clipping previously used in WGANs. Results on synthetic data show that our method computes the Wasserstein distance more accurately. Qualitative and quantitative results on MNIST, LSUN and CIFAR-10 datasets show that the proposed method is more efficient than state-of-the-art WGAN methods, and still produces images of comparable quality.
APA
Liu, H., GU, X., & Samaras, D. (2018). A Two-Step Computation of the Exact GAN Wasserstein Distance. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:3159-3168. Available from https://proceedings.mlr.press/v80/liu18d.html.
