# Diffusion Posterior Sampling is Computationally Intractable

*Proceedings of the 41st International Conference on Machine Learning*, PMLR 235:17020-17059, 2024.

#### Abstract

Diffusion models are a remarkably effective way of learning and sampling from a distribution $p(x)$. In posterior sampling, one is also given a measurement model $p(y \mid x)$ and a measurement $y$, and would like to sample from $p(x \mid y)$. Posterior sampling is useful for tasks such as inpainting, super-resolution, and MRI reconstruction, so a number of recent works have given algorithms to heuristically approximate it; but none are known to converge to the correct distribution in polynomial time. In this paper we show that posterior sampling is *computationally intractable*: under the most basic assumption in cryptography (that one-way functions exist), there are instances for which *every* algorithm takes superpolynomial time, even though *unconditional* sampling is provably fast. We also show that the exponential-time rejection sampling algorithm is essentially optimal under the stronger plausible assumption that there are one-way functions that take exponential time to invert.
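
For reference, the exponential-time rejection sampling baseline mentioned above can be sketched as follows. This is a minimal illustration, not the paper's construction: `sample_prior`, `likelihood`, and `max_like` are hypothetical stand-ins for the diffusion model's unconditional sampler, the measurement model $p(y \mid x)$, and an upper bound $M \geq \sup_x p(y \mid x)$.

```python
import numpy as np

def rejection_posterior_sample(sample_prior, likelihood, y, max_like, rng=None):
    """Draw one sample from p(x | y) by rejection sampling (illustrative sketch).

    sample_prior: callable returning x ~ p(x), e.g. a diffusion model's
        unconditional sampler (hypothetical stand-in).
    likelihood:   callable (x, y) -> p(y | x), the measurement model.
    max_like:     an upper bound M >= sup_x p(y | x), so that the
        acceptance probability p(y | x) / M lies in [0, 1].
    """
    rng = rng or np.random.default_rng()
    while True:
        x = sample_prior()  # unconditional sample x ~ p(x)
        # Accept x with probability p(y | x) / M; accepted samples
        # are distributed exactly as p(x | y).
        if rng.random() < likelihood(x, y) / max_like:
            return x
```

The expected number of prior draws per accepted sample is $M / p(y)$, which can be exponential in the problem size; this is the sense in which rejection sampling takes exponential time, and the paper's second result shows it is essentially optimal under the stated assumption.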