Projected Stochastic Gradient Langevin Algorithms for Constrained Sampling and Non-Convex Learning

Andrew Lamperski
Proceedings of Thirty Fourth Conference on Learning Theory, PMLR 134:2891-2937, 2021.

Abstract

Langevin algorithms are gradient descent methods with additive noise. They have been used for decades in Markov Chain Monte Carlo (MCMC) sampling, optimization, and learning. Their convergence properties for unconstrained non-convex optimization and learning problems have been studied widely in the last few years. Other work has examined projected Langevin algorithms for sampling from log-concave distributions restricted to convex compact sets. For learning and optimization, log-concave distributions correspond to convex losses. In this paper, we analyze the case of non-convex losses with compact convex constraint sets and IID external data variables. We term the resulting method the projected stochastic gradient Langevin algorithm (PSGLA). We show the algorithm achieves a deviation of $O(T^{-1/4}(\log T)^{1/2})$ from its target distribution in 1-Wasserstein distance. For optimization and learning, we show that the algorithm achieves $\epsilon$-suboptimal solutions, on average, provided that it is run for a time that is polynomial in $\epsilon$ and slightly super-exponential in the problem dimension.
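For a concrete picture of the kind of iteration the abstract describes, the following is a minimal sketch of a PSGLA-style update: a stochastic gradient step on a loss evaluated at an IID data sample, additive Gaussian noise scaled by the step size and an inverse-temperature parameter, and a Euclidean projection back onto the compact convex constraint set. The function names, signatures, and noise scaling below are illustrative assumptions, not the paper's code.

import numpy as np

def psgla(grad, project, sample_data, x0, steps, eta, beta, rng=None):
    """Sketch of a projected stochastic gradient Langevin iteration.

    grad(x, xi):    stochastic gradient of the loss at x given data sample xi
    project(x):     Euclidean projection onto the compact convex constraint set
    sample_data():  draws one IID external data variable xi
    eta:            step size; beta: inverse temperature
    (All names and signatures here are illustrative assumptions.)
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        xi = sample_data()                    # IID external data variable
        noise = rng.standard_normal(x.shape)  # standard Gaussian noise
        # gradient step plus Langevin noise, then project back onto the constraint set
        x = project(x - eta * grad(x, xi) + np.sqrt(2.0 * eta / beta) * noise)
    return x

As a usage note, for projection onto the Euclidean unit ball one could take project = lambda x: x / max(1.0, np.linalg.norm(x)); the constraint set, loss, and data distribution are problem-specific.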

Cite this Paper


BibTeX
@InProceedings{pmlr-v134-lamperski21a,
  title     = {Projected Stochastic Gradient Langevin Algorithms for Constrained Sampling and Non-Convex Learning},
  author    = {Lamperski, Andrew},
  booktitle = {Proceedings of Thirty Fourth Conference on Learning Theory},
  pages     = {2891--2937},
  year      = {2021},
  editor    = {Belkin, Mikhail and Kpotufe, Samory},
  volume    = {134},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--19 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v134/lamperski21a/lamperski21a.pdf},
  url       = {https://proceedings.mlr.press/v134/lamperski21a.html},
  abstract  = {Langevin algorithms are gradient descent methods with additive noise. They have been used for decades in Markov Chain Monte Carlo (MCMC) sampling, optimization, and learning. Their convergence properties for unconstrained non-convex optimization and learning problems have been studied widely in the last few years. Other work has examined projected Langevin algorithms for sampling from log-concave distributions restricted to convex compact sets. For learning and optimization, log-concave distributions correspond to convex losses. In this paper, we analyze the case of non-convex losses with compact convex constraint sets and IID external data variables. We term the resulting method the projected stochastic gradient Langevin algorithm (PSGLA). We show the algorithm achieves a deviation of $O(T^{-1/4}(\log T)^{1/2})$ from its target distribution in 1-Wasserstein distance. For optimization and learning, we show that the algorithm achieves $\epsilon$-suboptimal solutions, on average, provided that it is run for a time that is polynomial in $\epsilon$ and slightly super-exponential in the problem dimension.}
}
Endnote
%0 Conference Paper
%T Projected Stochastic Gradient Langevin Algorithms for Constrained Sampling and Non-Convex Learning
%A Andrew Lamperski
%B Proceedings of Thirty Fourth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2021
%E Mikhail Belkin
%E Samory Kpotufe
%F pmlr-v134-lamperski21a
%I PMLR
%P 2891--2937
%U https://proceedings.mlr.press/v134/lamperski21a.html
%V 134
%X Langevin algorithms are gradient descent methods with additive noise. They have been used for decades in Markov Chain Monte Carlo (MCMC) sampling, optimization, and learning. Their convergence properties for unconstrained non-convex optimization and learning problems have been studied widely in the last few years. Other work has examined projected Langevin algorithms for sampling from log-concave distributions restricted to convex compact sets. For learning and optimization, log-concave distributions correspond to convex losses. In this paper, we analyze the case of non-convex losses with compact convex constraint sets and IID external data variables. We term the resulting method the projected stochastic gradient Langevin algorithm (PSGLA). We show the algorithm achieves a deviation of $O(T^{-1/4}(\log T)^{1/2})$ from its target distribution in 1-Wasserstein distance. For optimization and learning, we show that the algorithm achieves $\epsilon$-suboptimal solutions, on average, provided that it is run for a time that is polynomial in $\epsilon$ and slightly super-exponential in the problem dimension.
APA
Lamperski, A. (2021). Projected Stochastic Gradient Langevin Algorithms for Constrained Sampling and Non-Convex Learning. Proceedings of Thirty Fourth Conference on Learning Theory, in Proceedings of Machine Learning Research 134:2891-2937. Available from https://proceedings.mlr.press/v134/lamperski21a.html.