Sampling as optimization in the space of measures: The Langevin dynamics as a composite optimization problem
Proceedings of the 31st Conference On Learning Theory, PMLR 75:2093-3027, 2018.
Abstract
We study sampling as optimization in the space of measures. We focus on gradient flow-based optimization, with the Langevin dynamics as a case study. We investigate the source of the bias of the unadjusted Langevin algorithm (ULA) in discrete time, and consider how to remove or reduce the bias. We point out the difficulty: the heat flow is exactly solvable, but neither its forward nor its backward method is implementable in general, except for Gaussian data. We propose the symmetrized Langevin algorithm (SLA), which should have a smaller bias than ULA, at the price of implementing a proximal gradient step in space. We show SLA is in fact consistent for a Gaussian target measure, whereas ULA is not. We also illustrate various algorithms explicitly for a Gaussian target measure with Gaussian data, including gradient descent, proximal gradient, and Forward-Backward, and show they are all consistent.
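The discretization bias of ULA mentioned in the abstract can be seen concretely on a one-dimensional Gaussian target. The sketch below is illustrative and not from the paper: it runs ULA, x ← x − ε∇f(x) + √(2ε)·N(0,1) with f(x) = x²/2, on a standard Gaussian target π ∝ exp(−x²/2), where the chain is an AR(1) process whose stationary variance can be computed exactly; the step size and chain counts are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def ula_variance(eps, n_chains=100_000, n_steps=1_000):
    """Run many independent ULA chains for the target pi ∝ exp(-x^2/2),
    i.e. f(x) = x^2/2 with grad f(x) = x, and return the empirical
    variance across chains after n_steps iterations."""
    x = rng.standard_normal(n_chains)  # start roughly at stationarity
    for _ in range(n_steps):
        x = x - eps * x + np.sqrt(2 * eps) * rng.standard_normal(n_chains)
    return x.var()

eps = 0.1
v = ula_variance(eps)
# For this target the ULA iterate is AR(1): x' = (1 - eps) x + sqrt(2 eps) xi,
# so the stationary variance solves v = (1 - eps)^2 v + 2 eps, giving
# v = 1 / (1 - eps/2) > 1: ULA over-disperses relative to the target
# variance 1, and the bias shrinks only as eps -> 0.
print(v, 1 / (1 - eps / 2))
```

Shrinking `eps` reduces the gap to the true variance 1 but never removes it at a fixed step size, which is the kind of bias the paper's SLA is designed to correct.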