Sinkhorn AutoEncoders

Giorgio Patrini, Rianne van den Berg, Patrick Forré, Marcello Carioni, Samarth Bhargav, Max Welling, Tim Genewein, Frank Nielsen
Proceedings of The 35th Uncertainty in Artificial Intelligence Conference, PMLR 115:733-743, 2020.

Abstract

Optimal transport offers an alternative to maximum likelihood for learning generative autoencoding models. We show that minimizing the $p$-Wasserstein distance between the generator and the true data distribution is equivalent to the unconstrained min-min optimization of the $p$-Wasserstein distance between the encoder aggregated posterior and the prior in latent space, plus a reconstruction error. We also identify the role of its trade-off hyperparameter as the capacity of the generator: its Lipschitz constant. Moreover, we prove that optimizing the encoder over any class of universal approximators, such as deterministic neural networks, is enough to come arbitrarily close to the optimum. We therefore advertise this framework, which holds for any metric space and prior, as a sweet spot of current generative autoencoding objectives. We then introduce the Sinkhorn autoencoder (SAE), which approximates and minimizes the $p$-Wasserstein distance in latent space via backpropagation through the Sinkhorn algorithm. SAE works directly on samples, i.e., it models the aggregated posterior as an implicit distribution, with no need for a reparameterization trick for gradient estimation. SAE is thus able to work with different metric spaces and priors with minimal adaptations. We demonstrate the flexibility of SAE on latent spaces with different geometries and priors and compare with other methods on benchmark data sets.
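
As a rough illustration of the latent-space term described in the abstract, the sketch below (plain PyTorch, not the authors' released implementation) estimates an entropy-regularized $p$-Wasserstein cost between a batch of encoder outputs and a batch of prior samples using log-domain Sinkhorn iterations; because the iterations are ordinary differentiable operations, gradients flow back to the encoder. The batch sizes, epsilon, p, the iteration count and the surrounding training-loop names are illustrative assumptions, and the regularized cost returned here is a biased estimate of $W_p^p$ (the paper's exact objective and any debiasing are not reproduced).

import math
import torch

def sinkhorn_cost(x, y, epsilon=0.1, p=2, n_iters=100):
    # x: (n, d) batch of encoder outputs; y: (m, d) batch of samples from the prior.
    # Returns a differentiable scalar: the entropy-regularized transport cost <P, C>
    # for the ground cost C_ij = ||x_i - y_j||_2^p with uniform weights on both batches.
    n, m = x.shape[0], y.shape[0]
    cost = torch.cdist(x, y) ** p                      # pairwise ground cost, shape (n, m)
    log_mu = torch.full((n,), -math.log(n), device=x.device)
    log_nu = torch.full((m,), -math.log(m), device=x.device)
    f = torch.zeros(n, device=x.device)                # dual potentials
    g = torch.zeros(m, device=x.device)
    for _ in range(n_iters):
        # Log-domain Sinkhorn updates for numerical stability.
        f = -epsilon * torch.logsumexp((g[None, :] - cost) / epsilon + log_nu[None, :], dim=1)
        g = -epsilon * torch.logsumexp((f[:, None] - cost) / epsilon + log_mu[:, None], dim=0)
    # Approximate transport plan P_ij = mu_i * nu_j * exp((f_i + g_j - C_ij) / epsilon).
    log_plan = (f[:, None] + g[None, :] - cost) / epsilon + log_mu[:, None] + log_nu[None, :]
    return (log_plan.exp() * cost).sum()

# Illustrative usage inside a training step (encoder, decoder and the weight beta are placeholders):
# z = encoder(x_batch)                                  # deterministic encoder samples
# z_prior = torch.randn_like(z)                         # samples from a standard Gaussian prior
# loss = ((decoder(z) - x_batch) ** 2).mean() + beta * sinkhorn_cost(z, z_prior)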

Cite this Paper


BibTeX
@InProceedings{pmlr-v115-patrini20a,
  title = {Sinkhorn AutoEncoders},
  author = {Patrini, Giorgio and van den Berg, Rianne and Forr{\'{e}}, Patrick and Carioni, Marcello and Bhargav, Samarth and Welling, Max and Genewein, Tim and Nielsen, Frank},
  booktitle = {Proceedings of The 35th Uncertainty in Artificial Intelligence Conference},
  pages = {733--743},
  year = {2020},
  editor = {Adams, Ryan P. and Gogate, Vibhav},
  volume = {115},
  series = {Proceedings of Machine Learning Research},
  month = {22--25 Jul},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v115/patrini20a/patrini20a.pdf},
  url = {https://proceedings.mlr.press/v115/patrini20a.html},
  abstract = {Optimal transport offers an alternative to maximum likelihood for learning generative autoencoding models. We show that minimizing the $p$-Wasserstein distance between the generator and the true data distribution is equivalent to the unconstrained min-min optimization of the $p$-Wasserstein distance between the encoder aggregated posterior and the prior in latent space, plus a reconstruction error. We also identify the role of its trade-off hyperparameter as the capacity of the generator: its Lipschitz constant. Moreover, we prove that optimizing the encoder over any class of universal approximators, such as deterministic neural networks, is enough to come arbitrarily close to the optimum. We therefore advertise this framework, which holds for any metric space and prior, as a sweet spot of current generative autoencoding objectives. We then introduce the Sinkhorn autoencoder (SAE), which approximates and minimizes the $p$-Wasserstein distance in latent space via backpropagation through the Sinkhorn algorithm. SAE works directly on samples, i.e., it models the aggregated posterior as an implicit distribution, with no need for a reparameterization trick for gradient estimation. SAE is thus able to work with different metric spaces and priors with minimal adaptations. We demonstrate the flexibility of SAE on latent spaces with different geometries and priors and compare with other methods on benchmark data sets.}
}
Endnote
%0 Conference Paper
%T Sinkhorn AutoEncoders
%A Giorgio Patrini
%A Rianne van den Berg
%A Patrick Forré
%A Marcello Carioni
%A Samarth Bhargav
%A Max Welling
%A Tim Genewein
%A Frank Nielsen
%B Proceedings of The 35th Uncertainty in Artificial Intelligence Conference
%C Proceedings of Machine Learning Research
%D 2020
%E Ryan P. Adams
%E Vibhav Gogate
%F pmlr-v115-patrini20a
%I PMLR
%P 733--743
%U https://proceedings.mlr.press/v115/patrini20a.html
%V 115
%X Optimal transport offers an alternative to maximum likelihood for learning generative autoencoding models. We show that minimizing the $p$-Wasserstein distance between the generator and the true data distribution is equivalent to the unconstrained min-min optimization of the $p$-Wasserstein distance between the encoder aggregated posterior and the prior in latent space, plus a reconstruction error. We also identify the role of its trade-off hyperparameter as the capacity of the generator: its Lipschitz constant. Moreover, we prove that optimizing the encoder over any class of universal approximators, such as deterministic neural networks, is enough to come arbitrarily close to the optimum. We therefore advertise this framework, which holds for any metric space and prior, as a sweet spot of current generative autoencoding objectives. We then introduce the Sinkhorn autoencoder (SAE), which approximates and minimizes the $p$-Wasserstein distance in latent space via backpropagation through the Sinkhorn algorithm. SAE works directly on samples, i.e., it models the aggregated posterior as an implicit distribution, with no need for a reparameterization trick for gradient estimation. SAE is thus able to work with different metric spaces and priors with minimal adaptations. We demonstrate the flexibility of SAE on latent spaces with different geometries and priors and compare with other methods on benchmark data sets.
APA
Patrini, G., van den Berg, R., Forré, P., Carioni, M., Bhargav, S., Welling, M., Genewein, T., & Nielsen, F. (2020). Sinkhorn AutoEncoders. Proceedings of The 35th Uncertainty in Artificial Intelligence Conference, in Proceedings of Machine Learning Research 115:733-743. Available from https://proceedings.mlr.press/v115/patrini20a.html.