Forward Amortized Inference for Likelihood-Free Variational Marginalization

Luca Ambrogioni, Umut Güçlü, Julia Berezutskaya, Eva van den Borne, Yağmur Güçlütürk, Max Hinne, Eric Maris, Marcel van Gerven
Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, PMLR 89:777-786, 2019.

Abstract

In this paper, we introduce a new form of amortized variational inference by using the forward KL divergence in a joint-contrastive variational loss. The resulting forward amortized variational inference is a likelihood-free method as its gradient can be sampled without bias and without requiring any evaluation of either the model joint distribution or its derivatives. We prove that our new variational loss is optimized by the exact posterior marginals in the fully factorized mean-field approximation, a property that is not shared with the more conventional reverse KL inference. Furthermore, we show that forward amortized inference can be easily marginalized over large families of latent variables in order to obtain a marginalized variational posterior. We consider two examples of variational marginalization. In our first example we train a Bayesian forecaster for predicting a simplified chaotic model of atmospheric convection. In the second example we train an amortized variational approximation of a Bayesian optimal classifier by marginalizing over the model space. The result is a powerful meta-classification network that can solve arbitrary classification problems without further training.
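As described in the abstract, forward amortized inference replaces the usual reverse KL divergence with the forward KL between the model joint distribution and the amortized approximation. The following display is a sketch of that joint-contrastive objective (the notation is ours, for illustration):

    D_F(\phi) = \mathrm{KL}\!\left( p(z, x) \,\|\, q_\phi(z \mid x)\, p(x) \right) = \mathbb{E}_{p(z, x)}\!\left[ -\log q_\phi(z \mid x) \right] + \mathrm{const.}

Because the expectation is taken under the model joint p(z, x), an unbiased gradient estimate requires only ancestral samples z ~ p(z), x ~ p(x | z) together with the derivative of log q_phi; the density of the generative model is never evaluated, which is what makes the method likelihood-free. The sketch below illustrates this training loop on a toy Gaussian model; the generative model, network architecture, and hyperparameters are illustrative assumptions, not the paper's experimental setup.

    # Minimal sketch of forward amortized variational inference on an
    # assumed toy model z ~ N(0, 1), x ~ N(z, 0.5^2); illustrative only,
    # not the paper's experiments.
    import torch
    import torch.nn as nn

    def sample_joint(batch_size):
        # Ancestral sampling from the model joint p(z, x): no density
        # evaluation of the generative model is needed.
        z = torch.randn(batch_size, 1)
        x = z + 0.5 * torch.randn(batch_size, 1)
        return z, x

    class AmortizedPosterior(nn.Module):
        # Gaussian q_phi(z | x) whose mean and log-std are produced by
        # a small network (an illustrative architecture).
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 2))

        def log_prob(self, z, x):
            mean, log_std = self.net(x).chunk(2, dim=-1)
            return torch.distributions.Normal(mean, log_std.exp()).log_prob(z)

    q = AmortizedPosterior()
    opt = torch.optim.Adam(q.parameters(), lr=1e-3)
    for step in range(2000):
        z, x = sample_joint(256)
        # Unbiased estimate of the forward KL loss, up to an additive
        # constant that does not depend on phi.
        loss = -q.log_prob(z, x).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

The same construction underlies the variational marginalization used in the paper's examples: to marginalize over a family of latent variables, one still samples the full joint but fits q_phi only to the latent variables of interest, since the sampled pairs already follow the marginalized joint.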

Cite this Paper


BibTeX
@InProceedings{pmlr-v89-ambrogioni19a,
  title     = {Forward Amortized Inference for Likelihood-Free Variational Marginalization},
  author    = {Ambrogioni, Luca and G\"{u}\c{c}l\"{u}, Umut and Berezutskaya, Julia and van den Borne, Eva and G\"{u}\c{c}l\"{u}t\"{u}rk, Ya\v{g}mur and Hinne, Max and Maris, Eric and van Gerven, Marcel},
  booktitle = {Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics},
  pages     = {777--786},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Sugiyama, Masashi},
  volume    = {89},
  series    = {Proceedings of Machine Learning Research},
  month     = {16--18 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v89/ambrogioni19a/ambrogioni19a.pdf},
  url       = {https://proceedings.mlr.press/v89/ambrogioni19a.html},
  abstract  = {In this paper, we introduce a new form of amortized variational inference by using the forward KL divergence in a joint-contrastive variational loss. The resulting forward amortized variational inference is a likelihood-free method as its gradient can be sampled without bias and without requiring any evaluation of either the model joint distribution or its derivatives. We prove that our new variational loss is optimized by the exact posterior marginals in the fully factorized mean-field approximation, a property that is not shared with the more conventional reverse KL inference. Furthermore, we show that forward amortized inference can be easily marginalized over large families of latent variables in order to obtain a marginalized variational posterior. We consider two examples of variational marginalization. In our first example we train a Bayesian forecaster for predicting a simplified chaotic model of atmospheric convection. In the second example we train an amortized variational approximation of a Bayesian optimal classifier by marginalizing over the model space. The result is a powerful meta-classification network that can solve arbitrary classification problems without further training.}
}
Endnote
%0 Conference Paper
%T Forward Amortized Inference for Likelihood-Free Variational Marginalization
%A Luca Ambrogioni
%A Umut Güçlü
%A Julia Berezutskaya
%A Eva van den Borne
%A Yağmur Güçlütürk
%A Max Hinne
%A Eric Maris
%A Marcel van Gerven
%B Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Masashi Sugiyama
%F pmlr-v89-ambrogioni19a
%I PMLR
%P 777--786
%U https://proceedings.mlr.press/v89/ambrogioni19a.html
%V 89
%X In this paper, we introduce a new form of amortized variational inference by using the forward KL divergence in a joint-contrastive variational loss. The resulting forward amortized variational inference is a likelihood-free method as its gradient can be sampled without bias and without requiring any evaluation of either the model joint distribution or its derivatives. We prove that our new variational loss is optimized by the exact posterior marginals in the fully factorized mean-field approximation, a property that is not shared with the more conventional reverse KL inference. Furthermore, we show that forward amortized inference can be easily marginalized over large families of latent variables in order to obtain a marginalized variational posterior. We consider two examples of variational marginalization. In our first example we train a Bayesian forecaster for predicting a simplified chaotic model of atmospheric convection. In the second example we train an amortized variational approximation of a Bayesian optimal classifier by marginalizing over the model space. The result is a powerful meta-classification network that can solve arbitrary classification problems without further training.
APA
Ambrogioni, L., Güçlü, U., Berezutskaya, J., van den Borne, E., Güçlütürk, Y., Hinne, M., Maris, E. & van Gerven, M. (2019). Forward Amortized Inference for Likelihood-Free Variational Marginalization. Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 89:777-786. Available from https://proceedings.mlr.press/v89/ambrogioni19a.html.