Sampling as optimization in the space of measures: The Langevin dynamics as a composite optimization problem

Andre Wibisono
Proceedings of the 31st Conference On Learning Theory, PMLR 75:2093-3027, 2018.

Abstract

We study sampling as optimization in the space of measures. We focus on gradient flow-based optimization with the Langevin dynamics as a case study. We investigate the source of the bias of the unadjusted Langevin algorithm (ULA) in discrete time, and consider how to remove or reduce the bias. We point out the difficulty is that the heat flow is exactly solvable, but neither its forward nor backward method is implementable in general, except for Gaussian data. We propose the symmetrized Langevin algorithm (SLA), which should have a smaller bias than ULA, at the price of implementing a proximal gradient step in space. We show SLA is in fact consistent for Gaussian target measure, whereas ULA is not. We also illustrate various algorithms explicitly for Gaussian target measure with Gaussian data, including gradient descent, proximal gradient, and Forward-Backward, and show they are all consistent.
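The discretization bias of ULA that the abstract refers to can be seen concretely in one dimension. The following is an illustrative sketch (not code from the paper): running ULA, $x_{k+1} = x_k - h\,\nabla f(x_k) + \sqrt{2h}\,\xi_k$, for a Gaussian target $\mathcal{N}(0, \sigma^2)$ with $f(x) = x^2/(2\sigma^2)$, the chain's stationary variance works out to $2\sigma^2/(2 - h/\sigma^2)$, which is strictly larger than $\sigma^2$ for any step size $h > 0$ — so ULA is biased even in this exactly solvable case. All variable names below are our own.

```python
import math
import random
import statistics

def ula_chain(grad_f, x0, step, n_steps, rng):
    """Run the unadjusted Langevin algorithm (ULA):
    x_{k+1} = x_k - step * grad_f(x_k) + sqrt(2*step) * N(0, 1).
    Returns the full trajectory as a list."""
    xs = [x0]
    x = x0
    for _ in range(n_steps - 1):
        x = x - step * grad_f(x) + math.sqrt(2.0 * step) * rng.gauss(0.0, 1.0)
        xs.append(x)
    return xs

# Target: N(0, sigma2), so f(x) = x^2 / (2*sigma2) and grad_f(x) = x / sigma2.
sigma2 = 1.0
h = 0.1
rng = random.Random(0)
xs = ula_chain(lambda x: x / sigma2, x0=0.0, step=h, n_steps=200_000, rng=rng)

# Discard burn-in, then compare the empirical variance with the
# closed-form stationary variance of the discrete chain.
empirical = statistics.pvariance(xs[1000:])
predicted = 2.0 * sigma2 / (2.0 - h / sigma2)  # = 2/1.9, about 1.053 > sigma2
```

The empirical variance concentrates near `predicted`, not near `sigma2`, illustrating that the stationary measure of the discrete-time chain overshoots the target's variance; the gap vanishes only as `h -> 0`.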

Cite this Paper


BibTeX
@InProceedings{pmlr-v75-wibisono18a,
  title     = {Sampling as optimization in the space of measures: The Langevin dynamics as a composite optimization problem},
  author    = {Wibisono, Andre},
  booktitle = {Proceedings of the 31st Conference On Learning Theory},
  pages     = {2093--3027},
  year      = {2018},
  editor    = {Bubeck, Sébastien and Perchet, Vianney and Rigollet, Philippe},
  volume    = {75},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v75/wibisono18a/wibisono18a.pdf},
  url       = {https://proceedings.mlr.press/v75/wibisono18a.html},
  abstract  = {We study sampling as optimization in the space of measures. We focus on gradient flow-based optimization with the Langevin dynamics as a case study. We investigate the source of the bias of the unadjusted Langevin algorithm (ULA) in discrete time, and consider how to remove or reduce the bias. We point out the difficulty is that the heat flow is exactly solvable, but neither its forward nor backward method is implementable in general, except for Gaussian data. We propose the symmetrized Langevin algorithm (SLA), which should have a smaller bias than ULA, at the price of implementing a proximal gradient step in space. We show SLA is in fact consistent for Gaussian target measure, whereas ULA is not. We also illustrate various algorithms explicitly for Gaussian target measure with Gaussian data, including gradient descent, proximal gradient, and Forward-Backward, and show they are all consistent.}
}
Endnote
%0 Conference Paper
%T Sampling as optimization in the space of measures: The Langevin dynamics as a composite optimization problem
%A Andre Wibisono
%B Proceedings of the 31st Conference On Learning Theory
%C Proceedings of Machine Learning Research
%D 2018
%E Sébastien Bubeck
%E Vianney Perchet
%E Philippe Rigollet
%F pmlr-v75-wibisono18a
%I PMLR
%P 2093--3027
%U https://proceedings.mlr.press/v75/wibisono18a.html
%V 75
%X We study sampling as optimization in the space of measures. We focus on gradient flow-based optimization with the Langevin dynamics as a case study. We investigate the source of the bias of the unadjusted Langevin algorithm (ULA) in discrete time, and consider how to remove or reduce the bias. We point out the difficulty is that the heat flow is exactly solvable, but neither its forward nor backward method is implementable in general, except for Gaussian data. We propose the symmetrized Langevin algorithm (SLA), which should have a smaller bias than ULA, at the price of implementing a proximal gradient step in space. We show SLA is in fact consistent for Gaussian target measure, whereas ULA is not. We also illustrate various algorithms explicitly for Gaussian target measure with Gaussian data, including gradient descent, proximal gradient, and Forward-Backward, and show they are all consistent.
APA
Wibisono, A. (2018). Sampling as optimization in the space of measures: The Langevin dynamics as a composite optimization problem. Proceedings of the 31st Conference On Learning Theory, in Proceedings of Machine Learning Research 75:2093-3027. Available from https://proceedings.mlr.press/v75/wibisono18a.html.