A Convergence Theory for SVGD in the Population Limit under Talagrand’s Inequality T1

Adil Salim, Lukang Sun, Peter Richtarik
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:19139-19152, 2022.

Abstract

Stein Variational Gradient Descent (SVGD) is an algorithm for sampling from a target density which is known up to a multiplicative constant. Although SVGD is a popular algorithm in practice, its theoretical study is limited to a few recent works. We study the convergence of SVGD in the population limit (i.e., with an infinite number of particles) to sample from a non-logconcave target distribution satisfying Talagrand’s inequality T1. We first establish the convergence of the algorithm. Then, we establish a dimension-dependent complexity bound in terms of the Kernelized Stein Discrepancy (KSD). Unlike existing works, we do not assume that the KSD is bounded along the trajectory of the algorithm. Our approach relies on interpreting SVGD as a gradient descent over a space of probability measures.
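
For readers unfamiliar with the objects the abstract names, here are standard formulations consistent with its terminology (a sketch in our own notation; the paper's conventions may differ in details such as constants):

% Talagrand's inequality T1 with constant \lambda > 0:
% for every probability measure \mu with a finite first moment,
W_1(\mu, \pi) \;\le\; \sqrt{2\lambda\, \mathrm{KL}(\mu \,\|\, \pi)}.

% SVGD in the population limit: with kernel k, step size \gamma > 0,
% and pushforward notation T_{\#}\mu,
\mu_{n+1} = \bigl(I + \gamma\, \phi_{\mu_n}\bigr)_{\#}\, \mu_n,
\qquad
\phi_{\mu}(\cdot) = \mathbb{E}_{x \sim \mu}\bigl[\, k(x, \cdot)\, \nabla \log \pi(x) + \nabla_x k(x, \cdot) \,\bigr].

% Kernelized Stein Discrepancy: the RKHS norm of the SVGD direction,
\mathrm{KSD}(\mu \mid \pi) = \|\phi_{\mu}\|_{\mathcal{H}_k^d}.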
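
For concreteness, the finite-particle scheme whose population limit the paper analyzes can be sketched in Python/NumPy as below. The RBF kernel, its fixed bandwidth h, the step size, and the Gaussian example target are illustrative assumptions, not choices made by the paper:

import numpy as np

def svgd_step(X, grad_log_pi, step=0.1, h=1.0):
    # One SVGD update on an (n, d) array of particles X.
    # grad_log_pi maps (n, d) particles to their (n, d) scores
    # grad log pi(x); pi need only be known up to a constant.
    diff = X[:, None, :] - X[None, :, :]            # diff[j, i] = x_j - x_i
    K = np.exp(-np.sum(diff ** 2, axis=-1) / h)     # RBF kernel, K[j, i] = k(x_j, x_i)
    drive = K @ grad_log_pi(X)                      # sum_j k(x_j, x_i) grad log pi(x_j)  (K is symmetric)
    repulse = -(2.0 / h) * np.einsum('ji,jid->id', K, diff)  # sum_j grad_{x_j} k(x_j, x_i)
    return X + step * (drive + repulse) / X.shape[0]

# Illustrative run: standard Gaussian target, whose score is -x.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)) + 5.0                 # start the particles far from the target
for _ in range(500):
    X = svgd_step(X, lambda Y: -Y)
print(X.mean(axis=0), X.var(axis=0))                # should drift toward mean 0, variance near 1

The drive term transports particles toward high-density regions of pi, while the repulse term (the kernel derivative) keeps them spread apart; as the number of particles grows, this scheme approaches the measure-valued gradient descent on the KL divergence that the abstract refers to.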

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-salim22a,
  title     = {A Convergence Theory for {SVGD} in the Population Limit under Talagrand’s Inequality T1},
  author    = {Salim, Adil and Sun, Lukang and Richtarik, Peter},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {19139--19152},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/salim22a/salim22a.pdf},
  url       = {https://proceedings.mlr.press/v162/salim22a.html},
  abstract  = {Stein Variational Gradient Descent (SVGD) is an algorithm for sampling from a target density which is known up to a multiplicative constant. Although SVGD is a popular algorithm in practice, its theoretical study is limited to a few recent works. We study the convergence of SVGD in the population limit (i.e., with an infinite number of particles) to sample from a non-logconcave target distribution satisfying Talagrand’s inequality T1. We first establish the convergence of the algorithm. Then, we establish a dimension-dependent complexity bound in terms of the Kernelized Stein Discrepancy (KSD). Unlike existing works, we do not assume that the KSD is bounded along the trajectory of the algorithm. Our approach relies on interpreting SVGD as a gradient descent over a space of probability measures.}
}
Endnote
%0 Conference Paper
%T A Convergence Theory for SVGD in the Population Limit under Talagrand’s Inequality T1
%A Adil Salim
%A Lukang Sun
%A Peter Richtarik
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-salim22a
%I PMLR
%P 19139--19152
%U https://proceedings.mlr.press/v162/salim22a.html
%V 162
%X Stein Variational Gradient Descent (SVGD) is an algorithm for sampling from a target density which is known up to a multiplicative constant. Although SVGD is a popular algorithm in practice, its theoretical study is limited to a few recent works. We study the convergence of SVGD in the population limit (i.e., with an infinite number of particles) to sample from a non-logconcave target distribution satisfying Talagrand’s inequality T1. We first establish the convergence of the algorithm. Then, we establish a dimension-dependent complexity bound in terms of the Kernelized Stein Discrepancy (KSD). Unlike existing works, we do not assume that the KSD is bounded along the trajectory of the algorithm. Our approach relies on interpreting SVGD as a gradient descent over a space of probability measures.
APA
Salim, A., Sun, L. & Richtarik, P. (2022). A Convergence Theory for SVGD in the Population Limit under Talagrand’s Inequality T1. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:19139-19152. Available from https://proceedings.mlr.press/v162/salim22a.html.