Optimization, Isoperimetric Inequalities, and Sampling via Lyapunov Potentials

August Y Chen, Karthik Sridharan
Proceedings of the Thirty Eighth Conference on Learning Theory, PMLR 291:1094-1153, 2025.

Abstract

In this paper, we prove that optimizability of any function $F$ via Gradient Flow from all initializations implies a Poincaré Inequality for the Gibbs measures $\mu_{\beta}\propto e^{-\beta F}$ at low temperature. In particular, under mild regularity assumptions on the convergence rate of Gradient Flow, we establish that $\mu_{\beta}$ satisfies a Poincaré Inequality with constant $O(C')$ for $\beta \ge \Omega(d)$, where $C'$ is the Poincaré constant of $\mu_{\beta}$ restricted to a neighborhood of the global minimizers of $F$. Under an additional mild condition on $F$, we show that $\mu_{\beta}$ satisfies a Log-Sobolev Inequality with constant $O(S \beta C')$, where $S$ denotes the second moment of $\mu_{\beta}$. Here the asymptotic notation hides $F$-dependent parameters. At a high level, this establishes that optimizability via Gradient Flow from every initialization implies a Poincaré and a Log-Sobolev Inequality for the low-temperature Gibbs measure, which in turn imply sampling from all initializations. Analogously, we establish that, under the same assumptions, if $F$ can be optimized via Gradient Flow from every initialization outside some set $\mathcal{S}$, then $\mu_{\beta}$ satisfies a Weak Poincaré Inequality with parameters $(O(C'), O(\mu_{\beta}(\mathcal{S})))$ for $\beta \ge \Omega(d)$. At a high level, this shows that optimizability from 'most' initializations implies a Weak Poincaré Inequality, which in turn implies sampling from suitable warm starts. Our regularity assumptions are mild, and as a consequence we show that we can efficiently sample from several new, natural, and interesting classes of non-log-concave densities, an important setting with relatively few known examples. As another corollary, we obtain efficient discrete-time sampling results for log-concave measures satisfying milder regularity conditions than smoothness, similar to Lehec (2023).
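
For context, the functional inequalities named above are the standard ones; the following reminder is background material, not part of the paper's abstract. A probability measure $\mu$ on $\mathbb{R}^d$ satisfies a Poincaré Inequality with constant $C_{\mathrm{PI}}$ if, for all smooth $f$,
$$\mathrm{Var}_{\mu}(f) \le C_{\mathrm{PI}}\, \mathbb{E}_{\mu}\big[\|\nabla f\|^2\big],$$
and a Log-Sobolev Inequality with constant $C_{\mathrm{LSI}}$ if
$$\mathrm{Ent}_{\mu}(f^2) \le 2\, C_{\mathrm{LSI}}\, \mathbb{E}_{\mu}\big[\|\nabla f\|^2\big], \qquad \text{where } \mathrm{Ent}_{\mu}(g) = \mathbb{E}_{\mu}[g \log g] - \mathbb{E}_{\mu}[g] \log \mathbb{E}_{\mu}[g].$$
These inequalities govern the mixing of Langevin dynamics targeting $\mu$: a Poincaré Inequality yields exponential decay of the chi-squared divergence at a rate proportional to $1/C_{\mathrm{PI}}$, and a Log-Sobolev Inequality yields exponential decay of the KL divergence at a rate proportional to $1/C_{\mathrm{LSI}}$; this is the sense in which they imply sampling. A Weak Poincaré Inequality with parameters $(C, \epsilon)$ relaxes the first bound to one of the form $\mathrm{Var}_{\mu}(f) \le C\, \mathbb{E}_{\mu}\big[\|\nabla f\|^2\big] + \epsilon\, \mathrm{osc}(f)^2$ (exact conventions vary across the literature), which yields convergence from suitably warm starts rather than from arbitrary initializations.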

Cite this Paper


BibTeX
@InProceedings{pmlr-v291-chen25g,
  title     = {Optimization, Isoperimetric Inequalities, and Sampling via Lyapunov Potentials},
  author    = {Chen, August Y and Sridharan, Karthik},
  booktitle = {Proceedings of Thirty Eighth Conference on Learning Theory},
  pages     = {1094--1153},
  year      = {2025},
  editor    = {Haghtalab, Nika and Moitra, Ankur},
  volume    = {291},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Jun--04 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v291/main/assets/chen25g/chen25g.pdf},
  url       = {https://proceedings.mlr.press/v291/chen25g.html}
}
Endnote
%0 Conference Paper
%T Optimization, Isoperimetric Inequalities, and Sampling via Lyapunov Potentials
%A August Y Chen
%A Karthik Sridharan
%B Proceedings of Thirty Eighth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2025
%E Nika Haghtalab
%E Ankur Moitra
%F pmlr-v291-chen25g
%I PMLR
%P 1094--1153
%U https://proceedings.mlr.press/v291/chen25g.html
%V 291
APA
Chen, A.Y. & Sridharan, K. (2025). Optimization, Isoperimetric Inequalities, and Sampling via Lyapunov Potentials. Proceedings of Thirty Eighth Conference on Learning Theory, in Proceedings of Machine Learning Research 291:1094-1153. Available from https://proceedings.mlr.press/v291/chen25g.html.