A Wasserstein Minimum Velocity Approach to Learning Unnormalized Models

Ziyu Wang, Shuyu Cheng, Yueru Li, Jun Zhu, Bo Zhang
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:3728-3738, 2020.

Abstract

Score matching provides an effective approach to learning flexible unnormalized models, but its scalability is limited by the need to evaluate a second-order derivative. In this paper, we present a scalable approximation to a general family of learning objectives including score matching, by observing a new connection between these objectives and Wasserstein gradient flows. We present applications with promise in learning neural density estimators on manifolds, and training implicit variational and Wasserstein auto-encoders with a manifold-valued prior.
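
Note (background, not part of the published abstract): the second-order derivative in question is the Hessian trace appearing in the classical score matching objective of Hyvärinen (2005), which for a model density q_\theta on R^d reads

    J(\theta) = \mathbb{E}_{p(x)}\left[ \operatorname{tr}\big(\nabla_x^2 \log q_\theta(x)\big) + \tfrac{1}{2} \lVert \nabla_x \log q_\theta(x) \rVert_2^2 \right],

and evaluating the trace exactly requires on the order of d extra backward passes per data point. As a rough illustration of the kind of scalable workaround at stake, the sketch below replaces the trace with the standard Hutchinson stochastic estimator tr(H) ≈ E[v^T H v]; the function name and interface are hypothetical, and this is not the Wasserstein-based approximation proposed in the paper.

    import torch

    def score_matching_hutchinson(log_q, x, n_samples=1):
        # log_q: callable mapping a batch x of shape (B, d) to per-point
        # log (unnormalized) density values of shape (B,).
        # Returns a Monte Carlo estimate of the score matching objective,
        # with the Hessian trace replaced by Hutchinson's estimator
        # using Rademacher probe vectors v.
        x = x.detach().requires_grad_(True)
        score = torch.autograd.grad(log_q(x).sum(), x,
                                    create_graph=True)[0]   # grad_x log q, shape (B, d)
        trace_est = torch.zeros(x.shape[0], dtype=x.dtype, device=x.device)
        for _ in range(n_samples):
            v = torch.randint_like(x, low=0, high=2) * 2 - 1  # Rademacher +/-1 probes
            hvp = torch.autograd.grad((score * v).sum(), x,
                                      create_graph=True)[0]   # Hessian-vector product H v
            trace_est = trace_est + (v * hvp).sum(dim=1)      # v^T H v per data point
        trace_est = trace_est / n_samples
        return (trace_est + 0.5 * (score ** 2).sum(dim=1)).mean()

Each estimate needs only Hessian-vector products (two backward passes each) rather than the full d-pass Hessian trace, which is the cost profile that scalable approximations aim for.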

Cite this Paper
BibTeX
@InProceedings{pmlr-v108-wang20j,
  title     = {A Wasserstein Minimum Velocity Approach to Learning Unnormalized Models},
  author    = {Wang, Ziyu and Cheng, Shuyu and Li, Yueru and Zhu, Jun and Zhang, Bo},
  booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
  pages     = {3728--3738},
  year      = {2020},
  editor    = {Chiappa, Silvia and Calandra, Roberto},
  volume    = {108},
  series    = {Proceedings of Machine Learning Research},
  month     = {26--28 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v108/wang20j/wang20j.pdf},
  url       = {https://proceedings.mlr.press/v108/wang20j.html},
  abstract  = {Score matching provides an effective approach to learning flexible unnormalized models, but its scalability is limited by the need to evaluate a second-order derivative. In this paper, we present a scalable approximation to a general family of learning objectives including score matching, by observing a new connection between these objectives and Wasserstein gradient flows. We present applications with promise in learning neural density estimators on manifolds, and training implicit variational and Wasserstein auto-encoders with a manifold-valued prior.}
}
Endnote
%0 Conference Paper
%T A Wasserstein Minimum Velocity Approach to Learning Unnormalized Models
%A Ziyu Wang
%A Shuyu Cheng
%A Yueru Li
%A Jun Zhu
%A Bo Zhang
%B Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2020
%E Silvia Chiappa
%E Roberto Calandra
%F pmlr-v108-wang20j
%I PMLR
%P 3728--3738
%U https://proceedings.mlr.press/v108/wang20j.html
%V 108
%X Score matching provides an effective approach to learning flexible unnormalized models, but its scalability is limited by the need to evaluate a second-order derivative. In this paper, we present a scalable approximation to a general family of learning objectives including score matching, by observing a new connection between these objectives and Wasserstein gradient flows. We present applications with promise in learning neural density estimators on manifolds, and training implicit variational and Wasserstein auto-encoders with a manifold-valued prior.
APA
Wang, Z., Cheng, S., Li, Y., Zhu, J. & Zhang, B. (2020). A Wasserstein Minimum Velocity Approach to Learning Unnormalized Models. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 108:3728-3738. Available from https://proceedings.mlr.press/v108/wang20j.html.
