Fisher information lower bounds for sampling
Proceedings of The 34th International Conference on Algorithmic Learning Theory, PMLR 201:375-410, 2023.
Abstract
We prove two lower bounds for the complexity of non-log-concave sampling within the framework of Balasubramanian et al. (2022), who introduced the use of Fisher information (FI) bounds as a notion of approximate first-order stationarity in sampling. Our first lower bound shows that averaged Langevin Monte Carlo (LMC) is optimal for the regime of large FI by reducing the problem of finding stationary points in non-convex optimization to sampling. Our second lower bound shows that in the regime of small FI, obtaining an FI of at most ε² from the target distribution requires poly(1/ε) queries, which is surprising as it rules out the existence of high-accuracy algorithms (e.g., algorithms using Metropolis–Hastings filters) in this context.
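For readers unfamiliar with the algorithm the first bound concerns: averaged LMC runs the standard Langevin iteration x_{k+1} = x_k − h∇V(x_k) + √(2h) ξ_k with ξ_k ~ N(0, I) targeting π ∝ exp(−V), and outputs an iterate chosen uniformly at random, so the output law is the average of the laws of the iterates; the FI guarantee, where FI(μ‖π) = E_μ‖∇ log(dμ/dπ)‖², is stated for this averaged law. The sketch below (a minimal illustration, not the paper's tuned procedure; the function name, step size, and iteration count are hypothetical) makes this concrete.

```python
import numpy as np

def averaged_lmc(grad_V, x0, step, n_iters, rng=None):
    """Langevin Monte Carlo with a uniformly random output iterate.

    Runs x_{k+1} = x_k - step * grad_V(x_k) + sqrt(2 * step) * xi_k,
    xi_k ~ N(0, I), targeting pi ∝ exp(-V). "Averaging" means the
    returned sample is a uniformly random iterate, so its law is the
    average of the laws of x_1, ..., x_n (a hypothetical illustration
    of the algorithm class, not the paper's exact parameter choices).
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    iterates = []
    for _ in range(n_iters):
        noise = rng.standard_normal(x.shape)
        x = x - step * grad_V(x) + np.sqrt(2.0 * step) * noise
        iterates.append(x.copy())
    return iterates[rng.integers(n_iters)]

# Example (illustrative): sample from a standard Gaussian, V(x) = |x|^2 / 2.
sample = averaged_lmc(grad_V=lambda x: x, x0=np.zeros(2),
                      step=0.01, n_iters=1000)
```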