Composing Normalizing Flows for Inverse Problems

Jay Whang, Erik Lindgren, Alex Dimakis
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:11158-11169, 2021.

Abstract

Given an inverse problem with a normalizing flow prior, we wish to estimate the distribution of the underlying signal conditioned on the observations. We approach this problem as a task of conditional inference on the pre-trained unconditional flow model. We first establish that this is computationally hard for a large class of flow models. Motivated by this, we propose a framework for approximate inference that estimates the target conditional as a composition of two flow models. This formulation leads to a stable variational inference training procedure that avoids adversarial training. Our method is evaluated on a variety of inverse problems and is shown to produce high-quality samples with uncertainty quantification. We further demonstrate that our approach can be amortized for zero-shot inference.
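To make the abstract's construction concrete: the posterior p(x | y) is approximated by composing the frozen pre-trained flow G (the prior) with a second, trainable flow H, so approximate posterior samples take the form x = G(H(z)) with z drawn from the base distribution, and training minimizes a Monte Carlo estimate of the KL divergence to the posterior. The sketch below is a minimal illustration of that variational objective, not the authors' implementation; the interfaces latent_flow, prior_flow, log_likelihood, and the attribute latent_flow.dim are assumed names.

```python
import torch

def neg_elbo(latent_flow, prior_flow, y, log_likelihood, n_samples=16):
    """Negative ELBO for the composed approximate posterior x = G(H(z)).

    Hypothetical interfaces (assumptions, not the paper's code):
      latent_flow(z) -> (u, logdet_H)   trainable flow H on the latent space
      prior_flow(u)  -> (x, logdet_G)   frozen pre-trained flow G, prior p(x)
      log_likelihood(y, x) -> log p(y | x), e.g. a Gaussian noise model
                              around a known forward operator.
    """
    base = torch.distributions.Normal(0.0, 1.0)
    z = torch.randn(n_samples, latent_flow.dim)    # base samples for H
    u, logdet_H = latent_flow(z)                   # u = H(z)
    x, _ = prior_flow(u)                           # x = G(u); G's Jacobian cancels below

    # Change of variables: log q(x) = log q_u(u) - log|det dG/du| and
    # log p(x) = log base(u) - log|det dG/du|, so G's Jacobian cancels and
    # KL(q(x) || p(x)) reduces to a KL in the latent space of G.
    log_q_u = base.log_prob(z).sum(-1) - logdet_H  # density of the H-pushforward
    log_p_u = base.log_prob(u).sum(-1)             # prior's base density at u

    kl = (log_q_u - log_p_u).mean()                # Monte Carlo KL estimate
    recon = log_likelihood(y, x).mean()            # data-fit term
    return kl - recon                              # -ELBO, up to a constant in y
```

Minimizing this objective needs only density evaluations and reparameterized samples, which is one reading of the abstract's claim that the formulation yields a stable training procedure without adversarial training.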

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-whang21b,
  title     = {Composing Normalizing Flows for Inverse Problems},
  author    = {Whang, Jay and Lindgren, Erik and Dimakis, Alex},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {11158--11169},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/whang21b/whang21b.pdf},
  url       = {https://proceedings.mlr.press/v139/whang21b.html},
  abstract  = {Given an inverse problem with a normalizing flow prior, we wish to estimate the distribution of the underlying signal conditioned on the observations. We approach this problem as a task of conditional inference on the pre-trained unconditional flow model. We first establish that this is computationally hard for a large class of flow models. Motivated by this, we propose a framework for approximate inference that estimates the target conditional as a composition of two flow models. This formulation leads to a stable variational inference training procedure that avoids adversarial training. Our method is evaluated on a variety of inverse problems and is shown to produce high-quality samples with uncertainty quantification. We further demonstrate that our approach can be amortized for zero-shot inference.}
}
Endnote
%0 Conference Paper
%T Composing Normalizing Flows for Inverse Problems
%A Jay Whang
%A Erik Lindgren
%A Alex Dimakis
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-whang21b
%I PMLR
%P 11158--11169
%U https://proceedings.mlr.press/v139/whang21b.html
%V 139
%X Given an inverse problem with a normalizing flow prior, we wish to estimate the distribution of the underlying signal conditioned on the observations. We approach this problem as a task of conditional inference on the pre-trained unconditional flow model. We first establish that this is computationally hard for a large class of flow models. Motivated by this, we propose a framework for approximate inference that estimates the target conditional as a composition of two flow models. This formulation leads to a stable variational inference training procedure that avoids adversarial training. Our method is evaluated on a variety of inverse problems and is shown to produce high-quality samples with uncertainty quantification. We further demonstrate that our approach can be amortized for zero-shot inference.
APA
Whang, J., Lindgren, E. & Dimakis, A. (2021). Composing Normalizing Flows for Inverse Problems. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:11158-11169. Available from https://proceedings.mlr.press/v139/whang21b.html.
