Hierarchical Importance Weighted Autoencoders

Chin-Wei Huang, Kris Sankaran, Eeshan Dhekane, Alexandre Lacoste, Aaron Courville
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:2869-2878, 2019.

Abstract

Importance weighted variational inference (Burda et al., 2015) uses multiple i.i.d. samples to obtain a tighter variational lower bound. We believe a joint proposal has the potential to reduce the number of redundant samples, and we introduce a hierarchical structure to induce correlation among them. The hope is that the proposals coordinate to compensate for one another's errors, reducing the variance of the importance estimator. Theoretically, we analyze the conditions under which convergence of the estimator variance can be connected to convergence of the lower bound. Empirically, we confirm that maximizing the lower bound does implicitly minimize the variance. Further analysis shows that this is a result of the negative correlation induced by the proposed hierarchical meta-sampling scheme, and that inference performance also improves as the number of samples increases.
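The bound referred to above is the K-sample importance weighted bound, log (1/K) sum_k p(x, z_k) / q(z_k | x), which is tighter when the average weight has lower variance. The sketch below is a minimal toy illustration, not the paper's implementation: it estimates this bound for a one-dimensional Gaussian model with a deliberately mismatched proposal, and, as a hand-crafted stand-in for the learned hierarchical meta-sampling scheme, uses antithetic pairs to induce negative correlation among the K proposal samples. The model, the proposal parameters, and the antithetic construction are all assumptions made for illustration only.

import numpy as np

rng = np.random.default_rng(0)

def log_normal(x, mean, std):
    # Log density of a univariate Gaussian, vectorized over arrays.
    return -0.5 * np.log(2 * np.pi) - np.log(std) - 0.5 * ((x - mean) / std) ** 2

x_obs = 1.5  # single observed scalar (illustrative assumption)

def log_joint(z):
    # log p(z) + log p(x_obs | z) for the toy model p(z) = N(0, 1), p(x | z) = N(z, 1).
    return log_normal(z, 0.0, 1.0) + log_normal(x_obs, z, 1.0)

q_mean, q_std = 0.5, 1.2  # deliberately mismatched proposal q(z | x) (assumption)

def iwae_bound_iid(K):
    # Standard K-sample bound: log (1/K) sum_k p(x, z_k) / q(z_k | x) with i.i.d. z_k.
    z = rng.normal(q_mean, q_std, size=K)
    log_w = log_joint(z) - log_normal(z, q_mean, q_std)
    return np.logaddexp.reduce(log_w) - np.log(K)

def iwae_bound_antithetic(K):
    # Same estimator, but the K proposal samples are negatively correlated:
    # antithetic pairs (eps, -eps) keep each z_k marginally distributed as
    # q(z | x), so the importance weights remain valid, while the correlation
    # can reduce the variance of the average weight.  This is only a crude
    # stand-in for the paper's learned hierarchical meta-sampling scheme.
    assert K % 2 == 0
    eps = rng.normal(0.0, 1.0, size=K // 2)
    z = q_mean + q_std * np.concatenate([eps, -eps])
    log_w = log_joint(z) - log_normal(z, q_mean, q_std)
    return np.logaddexp.reduce(log_w) - np.log(K)

K, n_runs = 8, 5000
print("i.i.d. proposals:     ", np.mean([iwae_bound_iid(K) for _ in range(n_runs)]))
print("correlated proposals: ", np.mean([iwae_bound_antithetic(K) for _ in range(n_runs)]))

Because each z_k is still marginally distributed as q(z | x), the average weight remains an unbiased estimator of p(x); only its variance, and hence the tightness of the bound, changes with the correlation structure among the samples.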

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-huang19d,
  title     = {Hierarchical Importance Weighted Autoencoders},
  author    = {Huang, Chin-Wei and Sankaran, Kris and Dhekane, Eeshan and Lacoste, Alexandre and Courville, Aaron},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {2869--2878},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/huang19d/huang19d.pdf},
  url       = {https://proceedings.mlr.press/v97/huang19d.html},
  abstract  = {Importance weighted variational inference (Burda et al., 2015) uses multiple i.i.d. samples to obtain a tighter variational lower bound. We believe a joint proposal has the potential to reduce the number of redundant samples, and we introduce a hierarchical structure to induce correlation among them. The hope is that the proposals coordinate to compensate for one another's errors, reducing the variance of the importance estimator. Theoretically, we analyze the conditions under which convergence of the estimator variance can be connected to convergence of the lower bound. Empirically, we confirm that maximizing the lower bound does implicitly minimize the variance. Further analysis shows that this is a result of the negative correlation induced by the proposed hierarchical meta-sampling scheme, and that inference performance also improves as the number of samples increases.}
}
Endnote
%0 Conference Paper
%T Hierarchical Importance Weighted Autoencoders
%A Chin-Wei Huang
%A Kris Sankaran
%A Eeshan Dhekane
%A Alexandre Lacoste
%A Aaron Courville
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-huang19d
%I PMLR
%P 2869--2878
%U https://proceedings.mlr.press/v97/huang19d.html
%V 97
%X Importance weighted variational inference (Burda et al., 2015) uses multiple i.i.d. samples to obtain a tighter variational lower bound. We believe a joint proposal has the potential to reduce the number of redundant samples, and we introduce a hierarchical structure to induce correlation among them. The hope is that the proposals coordinate to compensate for one another's errors, reducing the variance of the importance estimator. Theoretically, we analyze the conditions under which convergence of the estimator variance can be connected to convergence of the lower bound. Empirically, we confirm that maximizing the lower bound does implicitly minimize the variance. Further analysis shows that this is a result of the negative correlation induced by the proposed hierarchical meta-sampling scheme, and that inference performance also improves as the number of samples increases.
APA
Huang, C., Sankaran, K., Dhekane, E., Lacoste, A. & Courville, A. (2019). Hierarchical Importance Weighted Autoencoders. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:2869-2878. Available from https://proceedings.mlr.press/v97/huang19d.html.
