Instance-Optimal Compressed Sensing via Posterior Sampling

Ajil Jalal, Sushrut Karmalkar, Alex Dimakis, Eric Price
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:4709-4720, 2021.

Abstract

We characterize the measurement complexity of compressed sensing of signals drawn from a known prior distribution, even when the support of the prior is the entire space (rather than, say, sparse vectors). We show that, for Gaussian measurements and any prior distribution on the signal, the posterior sampling estimator achieves near-optimal recovery guarantees. Moreover, this result is robust to model mismatch, as long as the distribution estimate (e.g., from an invertible generative model) is close to the true distribution in Wasserstein distance. We implement the posterior sampling estimator for deep generative priors using Langevin dynamics, and empirically find that it produces accurate estimates with more diversity than MAP.
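
The core idea can be illustrated with a minimal sketch (not the authors' implementation, which uses annealed Langevin dynamics over deep generative priors): run unadjusted Langevin dynamics on the posterior log-density for y = Ax + noise, where the posterior score decomposes into the prior score plus a data-fidelity term. The function names, step size, and noise level below are illustrative assumptions, and the toy prior is a standard Gaussian.

import numpy as np

def posterior_langevin(y, A, score_prior, sigma=0.1, step=1e-4, n_steps=5000, rng=None):
    # Unadjusted Langevin dynamics targeting p(x | y) for y = A x + N(0, sigma^2 I).
    # The posterior score splits into a prior term and a data-fidelity term:
    #   grad_x log p(x | y) = score_prior(x) + A^T (y - A x) / sigma^2
    rng = np.random.default_rng() if rng is None else rng
    x = rng.standard_normal(A.shape[1])  # arbitrary initialization
    for _ in range(n_steps):
        grad = score_prior(x) + A.T @ (y - A @ x) / sigma**2
        x = x + step * grad + np.sqrt(2 * step) * rng.standard_normal(x.shape)
    return x

# Toy usage with a standard Gaussian prior, whose score is -x.
rng = np.random.default_rng(0)
n, m = 50, 20
A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian measurement matrix
x_true = rng.standard_normal(n)
y = A @ x_true + 0.1 * rng.standard_normal(m)
x_hat = posterior_langevin(y, A, score_prior=lambda x: -x, sigma=0.1, rng=rng)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))

In the paper's setting, score_prior would come from a learned generative model rather than the closed-form Gaussian score used here.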

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-jalal21a,
  title     = {Instance-Optimal Compressed Sensing via Posterior Sampling},
  author    = {Jalal, Ajil and Karmalkar, Sushrut and Dimakis, Alex and Price, Eric},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {4709--4720},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/jalal21a/jalal21a.pdf},
  url       = {https://proceedings.mlr.press/v139/jalal21a.html}
}
Endnote
%0 Conference Paper
%T Instance-Optimal Compressed Sensing via Posterior Sampling
%A Ajil Jalal
%A Sushrut Karmalkar
%A Alex Dimakis
%A Eric Price
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-jalal21a
%I PMLR
%P 4709--4720
%U https://proceedings.mlr.press/v139/jalal21a.html
%V 139
APA
Jalal, A., Karmalkar, S., Dimakis, A. & Price, E. (2021). Instance-Optimal Compressed Sensing via Posterior Sampling. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:4709-4720. Available from https://proceedings.mlr.press/v139/jalal21a.html.