Double-Loop Unadjusted Langevin Algorithm

Paul Rolland, Armin Eftekhari, Ali Kavis, Volkan Cevher
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:8169-8177, 2020.

Abstract

A well-known first-order method for sampling from log-concave probability distributions is the Unadjusted Langevin Algorithm (ULA). This work proposes a new annealing step-size schedule for ULA, which allows us to prove new convergence guarantees for sampling from a smooth log-concave distribution that are not covered by existing state-of-the-art convergence guarantees. To establish this result, we derive a new theoretical bound relating the Wasserstein distance to the total variation distance between any two log-concave distributions, which complements the reach of the Talagrand $T_2$ inequality. Moreover, applying this new step-size schedule to an existing constrained sampling algorithm, we show state-of-the-art convergence rates for sampling from a constrained log-concave distribution, as well as improved dimension dependence.
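For intuition, ULA discretizes the Langevin diffusion: each step follows the negative gradient of the potential $f$ (where the target density is proportional to $e^{-f}$) and adds Gaussian noise scaled by the step size. Below is a minimal Python sketch of ULA with a geometrically decaying step size; the specific schedule (`gamma0`, `decay`) and the double-loop structure of the paper are not reproduced here, so treat the constants as illustrative assumptions only.

```python
import numpy as np

def ula_annealed(grad_f, x0, n_steps, gamma0=0.1, decay=0.999, rng=None):
    """Unadjusted Langevin Algorithm with a decaying step size.

    Iterates x_{k+1} = x_k - gamma_k * grad_f(x_k) + sqrt(2 * gamma_k) * xi_k,
    where xi_k ~ N(0, I). The geometric decay gamma_k = gamma0 * decay**k is an
    illustrative annealing choice, not the schedule analyzed in the paper.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.asarray(x0, dtype=float)
    gamma = gamma0
    samples = np.empty((n_steps,) + x.shape)
    for k in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x - gamma * grad_f(x) + np.sqrt(2.0 * gamma) * noise
        gamma *= decay
        samples[k] = x
    return samples

# Target: standard 2-D Gaussian, so f(x) = ||x||^2 / 2 and grad_f(x) = x.
samples = ula_annealed(lambda x: x, x0=np.zeros(2), n_steps=5000)
print(samples[2500:].mean(axis=0))  # empirical mean of late samples, near the origin
```

For a strongly log-concave target such as this Gaussian, the iterates mix toward the target and the discretization bias shrinks as the step size anneals, which is the regime the paper's guarantees concern.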

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-rolland20a,
  title     = {Double-Loop Unadjusted {L}angevin Algorithm},
  author    = {Rolland, Paul and Eftekhari, Armin and Kavis, Ali and Cevher, Volkan},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {8169--8177},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/rolland20a/rolland20a.pdf},
  url       = {https://proceedings.mlr.press/v119/rolland20a.html},
  abstract  = {A well-known first-order method for sampling from log-concave probability distributions is the Unadjusted Langevin Algorithm (ULA). This work proposes a new annealing step-size schedule for ULA, which allows to prove new convergence guarantees for sampling from a smooth log-concave distribution, which are not covered by existing state-of-the-art convergence guarantees. To establish this result, we derive a new theoretical bound that relates the Wasserstein distance to total variation distance between any two log-concave distributions that complements the reach of Talagrand $T_2$ inequality. Moreover, applying this new step size schedule to an existing constrained sampling algorithm, we show state-of-the-art convergence rates for sampling from a constrained log-concave distribution, as well as improved dimension dependence.}
}
Endnote
%0 Conference Paper
%T Double-Loop Unadjusted Langevin Algorithm
%A Paul Rolland
%A Armin Eftekhari
%A Ali Kavis
%A Volkan Cevher
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-rolland20a
%I PMLR
%P 8169--8177
%U https://proceedings.mlr.press/v119/rolland20a.html
%V 119
%X A well-known first-order method for sampling from log-concave probability distributions is the Unadjusted Langevin Algorithm (ULA). This work proposes a new annealing step-size schedule for ULA, which allows to prove new convergence guarantees for sampling from a smooth log-concave distribution, which are not covered by existing state-of-the-art convergence guarantees. To establish this result, we derive a new theoretical bound that relates the Wasserstein distance to total variation distance between any two log-concave distributions that complements the reach of Talagrand $T_2$ inequality. Moreover, applying this new step size schedule to an existing constrained sampling algorithm, we show state-of-the-art convergence rates for sampling from a constrained log-concave distribution, as well as improved dimension dependence.
APA
Rolland, P., Eftekhari, A., Kavis, A. & Cevher, V. (2020). Double-Loop Unadjusted Langevin Algorithm. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:8169-8177. Available from https://proceedings.mlr.press/v119/rolland20a.html.