Probability Functional Descent: A Unifying Perspective on GANs, Variational Inference, and Reinforcement Learning
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:1213-1222, 2019.
The goal of this paper is to provide a unifying view of a wide range of problems of interest in machine learning by framing them as the minimization of functionals defined on the space of probability measures. In particular, we show that generative adversarial networks, variational inference, and actor-critic methods in reinforcement learning can all be seen through the lens of our framework. We then discuss a generic optimization algorithm for our formulation, called probability functional descent (PFD), and show how this algorithm recovers existing methods that were developed independently in each of these settings.
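To make the abstract's formulation concrete, the following is a minimal toy sketch (not the paper's algorithm as stated, just an illustrative instance) of descending a functional on a parametrized family of probability measures. It assumes the functional J(mu) = E_{x~mu}[x^2], minimized over the Gaussian family mu_theta = N(theta, 1); because J is linear in mu, its influence function is simply Psi(x) = x^2, and one descent step is stochastic gradient descent on E_{x~mu_theta}[Psi(x)] via the reparameterization x = theta + eps.

```python
import numpy as np

rng = np.random.default_rng(0)

def influence(x):
    # Influence function of J at mu; exact here because J is linear in mu.
    # (Psi is an assumed toy choice, not taken from the paper.)
    return x ** 2

theta, lr, n = 3.0, 0.05, 256
for _ in range(200):
    eps = rng.standard_normal(n)
    x = theta + eps  # reparameterized samples from mu_theta = N(theta, 1)
    # d/dtheta of Psi(theta + eps) is 2 * (theta + eps); Monte Carlo average.
    grad = np.mean(2.0 * x)
    theta -= lr * grad

print(theta)  # drifts toward 0, the minimizer of E[x^2] over N(theta, 1)
```

In richer settings (GANs, variational inference, actor-critic) the influence function is not available in closed form and must itself be approximated, which is where the adversarial or critic component of those methods enters.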