Amortized Inference of Variational Bounds for Learning Noisy-OR

Yiming Yan, Melissa Ailem, Fei Sha
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:3632-3641, 2020.

Abstract

Classical approaches for approximate inference depend on cleverly designed variational distributions and bounds. Modern approaches employ amortized variational inference, which uses a neural network to approximate any posterior without leveraging the structure of the generative model. In this paper, we propose Amortized Conjugate Posterior (ACP), a hybrid approach that takes advantage of both types of approaches. Specifically, we use the classical methods to derive specific forms of posterior distributions and then learn the variational parameters using amortized inference. We study the effectiveness of the proposed approach on the Noisy-OR model and compare to both the classical and the modern approaches for approximate inference and parameter learning. Our results show that the proposed method outperforms or is on par with other approaches.
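The generative model studied in the paper is the bipartite Noisy-OR network, in which each observed variable turns on if any of its active latent causes independently succeeds in activating it. The sketch below shows that likelihood; the variable names (weight matrix `p`, `leak` term) are illustrative and not the paper's notation.

```python
import numpy as np

def noisy_or_prob(z, p, leak=0.01):
    """P(x_j = 1 | z) = 1 - (1 - leak) * prod_i (1 - p[i, j]) ** z[i].

    z    : binary vector of K latent causes
    p    : K x D matrix; p[i, j] is the prob. that active cause i activates x_j
    leak : probability that x_j turns on even with no active cause
    """
    # Each active cause fails to activate x_j independently; inactive causes
    # (z[i] = 0) contribute a factor of 1 via the exponent.
    fail = (1.0 - p) ** z[:, None]
    return 1.0 - (1.0 - leak) * fail.prod(axis=0)

z = np.array([1, 0, 1])          # causes 0 and 2 are active
p = np.array([[0.8, 0.1],
              [0.5, 0.9],
              [0.0, 0.6]])
print(noisy_or_prob(z, p))       # cause 1 is inactive, so its row drops out
```

Exact posterior inference in this model is intractable because the latent causes become coupled given the observations ("explaining away"), which is what motivates the variational bounds and amortized inference compared in the paper.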

Cite this Paper


BibTeX
@InProceedings{pmlr-v108-yan20b,
  title     = {Amortized Inference of Variational Bounds for Learning Noisy-OR},
  author    = {Yan, Yiming and Ailem, Melissa and Sha, Fei},
  booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
  pages     = {3632--3641},
  year      = {2020},
  editor    = {Chiappa, Silvia and Calandra, Roberto},
  volume    = {108},
  series    = {Proceedings of Machine Learning Research},
  month     = {26--28 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v108/yan20b/yan20b.pdf},
  url       = {https://proceedings.mlr.press/v108/yan20b.html},
  abstract  = {Classical approaches for approximate inference depend on cleverly designed variational distributions and bounds. Modern approaches employ amortized variational inference, which uses a neural network to approximate any posterior without leveraging the structures of the generative models. In this paper, we propose Amortized Conjugate Posterior (ACP), a hybrid approach taking advantages of both types of approaches. Specifically, we use the classical methods to derive specific forms of posterior distributions and then learn the variational parameters using amortized inference. We study the effectiveness of the proposed approach on the Noisy-OR model and compare to both the classical and the modern approaches for approximate inference and parameter learning. Our results show that the proposed method outperforms or are at par with other approaches.}
}
Endnote
%0 Conference Paper
%T Amortized Inference of Variational Bounds for Learning Noisy-OR
%A Yiming Yan
%A Melissa Ailem
%A Fei Sha
%B Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2020
%E Silvia Chiappa
%E Roberto Calandra
%F pmlr-v108-yan20b
%I PMLR
%P 3632--3641
%U https://proceedings.mlr.press/v108/yan20b.html
%V 108
%X Classical approaches for approximate inference depend on cleverly designed variational distributions and bounds. Modern approaches employ amortized variational inference, which uses a neural network to approximate any posterior without leveraging the structures of the generative models. In this paper, we propose Amortized Conjugate Posterior (ACP), a hybrid approach taking advantages of both types of approaches. Specifically, we use the classical methods to derive specific forms of posterior distributions and then learn the variational parameters using amortized inference. We study the effectiveness of the proposed approach on the Noisy-OR model and compare to both the classical and the modern approaches for approximate inference and parameter learning. Our results show that the proposed method outperforms or are at par with other approaches.
APA
Yan, Y., Ailem, M. & Sha, F. (2020). Amortized Inference of Variational Bounds for Learning Noisy-OR. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 108:3632-3641. Available from https://proceedings.mlr.press/v108/yan20b.html.