Solving Bayesian Inverse Problems via Variational Autoencoders

Hwan Goh, Sheroze Sheriffdeen, Jonathan Wittmer, Tan Bui-Thanh
Proceedings of the 2nd Mathematical and Scientific Machine Learning Conference, PMLR 145:386-425, 2022.

Abstract

In recent years, the field of machine learning has made phenomenal progress in the pursuit of simulating real-world data generation processes. One notable example of such success is the variational autoencoder (VAE). In this work, with a small shift in perspective, we leverage and adapt VAEs for a different purpose: uncertainty quantification (UQ) in scientific inverse problems. We introduce UQ-VAE: a flexible, adaptive, hybrid data/model-constrained framework for training neural networks capable of rapid modelling of the posterior distribution representing the unknown parameter of interest. Specifically, from divergence-based variational inference, our framework is derived such that most of the information usually present in scientific inverse problems is fully utilized in the training procedure. Additionally, this framework includes an adjustable hyperparameter that allows selection of the notion of distance between the posterior model and the target distribution. This introduces more flexibility in controlling how optimization directs the learning of the posterior model. Further, this framework possesses an inherent adaptive optimization property that emerges through the learning of the posterior uncertainty. Numerical results for an elliptic PDE-constrained Bayesian inverse problem are provided to verify the proposed framework.
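The abstract's core idea, training a model of the posterior distribution against a variational objective that combines the observed data with the forward model, can be illustrated with a deliberately simplified, hypothetical sketch. Everything here is an illustrative assumption, not the paper's actual formulation: a linear forward model stands in for the PDE solve, the posterior model is a diagonal Gaussian, and the objective is a plain VAE-style misfit-plus-KL loss rather than the paper's divergence family.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear forward model F(m) = A m, a stand-in for the
# elliptic-PDE parameter-to-observable map in the paper.
A = rng.normal(size=(5, 3))

def forward(m):
    return A @ m

# Zero-mean standard-normal prior on the unknown parameter (assumption).
prior_mu = np.zeros(3)

def loss(mu, log_var, y, n_samples=32):
    """VAE-style objective for a diagonal-Gaussian posterior model
    N(mu, diag(exp(log_var))): expected data misfit under samples from
    the model (via the reparameterization trick) plus a KL penalty
    toward the standard-normal prior."""
    std = np.exp(0.5 * log_var)
    eps = rng.normal(size=(n_samples, mu.size))
    m = mu + eps * std                          # reparameterized samples
    misfit = np.mean(np.sum((m @ A.T - y) ** 2, axis=1))
    kl = 0.5 * np.sum(np.exp(log_var) + (mu - prior_mu) ** 2
                      - 1.0 - log_var)
    return misfit + kl
```

In an actual UQ-VAE setup the posterior mean and (log-)variance would be the outputs of a trained network evaluated on the observation, and the adjustable hyperparameter mentioned in the abstract would reweight the two terms of the objective; this toy version only shows the shape of the loss.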

Cite this Paper


BibTeX
@InProceedings{pmlr-v145-goh22a,
  title     = {Solving Bayesian Inverse Problems via Variational Autoencoders},
  author    = {Goh, Hwan and Sheriffdeen, Sheroze and Wittmer, Jonathan and Bui-Thanh, Tan},
  booktitle = {Proceedings of the 2nd Mathematical and Scientific Machine Learning Conference},
  pages     = {386--425},
  year      = {2022},
  editor    = {Bruna, Joan and Hesthaven, Jan and Zdeborova, Lenka},
  volume    = {145},
  series    = {Proceedings of Machine Learning Research},
  month     = {16--19 Aug},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v145/goh22a/goh22a.pdf},
  url       = {https://proceedings.mlr.press/v145/goh22a.html},
  abstract  = {In recent years, the field of machine learning has made phenomenal progress in the pursuit of simulating real-world data generation processes. One notable example of such success is the variational autoencoder (VAE). In this work, with a small shift in perspective, we leverage and adapt VAEs for a different purpose: uncertainty quantification (UQ) in scientific inverse problems. We introduce UQ-VAE: a flexible, adaptive, hybrid data/model-constrained framework for training neural networks capable of rapid modelling of the posterior distribution representing the unknown parameter of interest. Specifically, from divergence-based variational inference, our framework is derived such that most of the information usually present in scientific inverse problems is fully utilized in the training procedure. Additionally, this framework includes an adjustable hyperparameter that allows selection of the notion of distance between the posterior model and the target distribution. This introduces more flexibility in controlling how optimization directs the learning of the posterior model. Further, this framework possesses an inherent adaptive optimization property that emerges through the learning of the posterior uncertainty. Numerical results for an elliptic PDE-constrained Bayesian inverse problem are provided to verify the proposed framework.}
}
Endnote
%0 Conference Paper %T Solving Bayesian Inverse Problems via Variational Autoencoders %A Hwan Goh %A Sheroze Sheriffdeen %A Jonathan Wittmer %A Tan Bui-Thanh %B Proceedings of the 2nd Mathematical and Scientific Machine Learning Conference %C Proceedings of Machine Learning Research %D 2022 %E Joan Bruna %E Jan Hesthaven %E Lenka Zdeborova %F pmlr-v145-goh22a %I PMLR %P 386--425 %U https://proceedings.mlr.press/v145/goh22a.html %V 145 %X In recent years, the field of machine learning has made phenomenal progress in the pursuit of simulating real-world data generation processes. One notable example of such success is the variational autoencoder (VAE). In this work, with a small shift in perspective, we leverage and adapt VAEs for a different purpose: uncertainty quantification (UQ) in scientific inverse problems. We introduce UQ-VAE: a flexible, adaptive, hybrid data/model-constrained framework for training neural networks capable of rapid modelling of the posterior distribution representing the unknown parameter of interest. Specifically, from divergence-based variational inference, our framework is derived such that most of the information usually present in scientific inverse problems is fully utilized in the training procedure. Additionally, this framework includes an adjustable hyperparameter that allows selection of the notion of distance between the posterior model and the target distribution. This introduces more flexibility in controlling how optimization directs the learning of the posterior model. Further, this framework possesses an inherent adaptive optimization property that emerges through the learning of the posterior uncertainty. Numerical results for an elliptic PDE-constrained Bayesian inverse problem are provided to verify the proposed framework.
APA
Goh, H., Sheriffdeen, S., Wittmer, J. & Bui-Thanh, T. (2022). Solving Bayesian Inverse Problems via Variational Autoencoders. Proceedings of the 2nd Mathematical and Scientific Machine Learning Conference, in Proceedings of Machine Learning Research 145:386-425. Available from https://proceedings.mlr.press/v145/goh22a.html.
