A Hybrid Approach for Probabilistic Inference using Random Projections

Michael Zhu, Stefano Ermon
Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:2039-2047, 2015.

Abstract

We introduce a new meta-algorithm for probabilistic inference in graphical models based on random projections. The key idea is to use approximate inference algorithms for an (exponentially) large number of samples, obtained by randomly projecting the original statistical model using universal hash functions. In the case where the approximate inference algorithm is a variational approximation, this approach can be viewed as interpolating between sampling-based and variational techniques. The number of samples used controls the trade-off between the accuracy of the approximate inference algorithm and the variance of the estimator. We show empirically that by using random projections, we can improve the accuracy of common approximate inference algorithms.
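
The key idea in the abstract can be illustrated with a minimal sketch (not the authors' code): estimate a partition function by averaging inference results over randomly projected models, where each projection keeps only the configurations satisfying random parity (XOR) constraints drawn from a universal hash family. The toy pairwise model, and the names theta, weight, projected_sum, and hybrid_estimate, are illustrative assumptions; brute-force enumeration stands in for the approximate inference routine (e.g., a variational approximation) that would be run on each projected model in the paper's setting.

    # Minimal sketch of inference via random projections (illustrative only).
    import itertools
    import numpy as np

    rng = np.random.default_rng(0)

    n = 10                                   # number of binary variables (toy size)
    theta = rng.normal(size=(n, n)) * 0.2    # hypothetical pairwise weights

    def weight(x):
        """Unnormalized weight of a configuration x in {0,1}^n."""
        x = np.asarray(x)
        return np.exp(x @ theta @ x)

    def projected_sum(A, b):
        """Sum of weights over configurations satisfying A x = b (mod 2).
        Stands in for running (approximate) inference on the projected model."""
        total = 0.0
        for x in itertools.product([0, 1], repeat=n):
            xa = np.array(x)
            if np.all((A @ xa) % 2 == b):
                total += weight(xa)
        return total

    def hybrid_estimate(m, num_samples):
        """Average 2^m * Z_projected over independent random projections.
        Each configuration survives a random projection with probability 2^-m,
        so this average is an unbiased estimate of the partition function Z."""
        estimates = []
        for _ in range(num_samples):
            A = rng.integers(0, 2, size=(m, n))   # random parity-check matrix
            b = rng.integers(0, 2, size=m)        # random offsets
            estimates.append((2 ** m) * projected_sum(A, b))
        return float(np.mean(estimates))

    exact_Z = sum(weight(x) for x in itertools.product([0, 1], repeat=n))
    print("exact Z:", exact_Z)
    print("hybrid estimate (m=3, 20 samples):", hybrid_estimate(3, 20))

In this sketch, increasing m removes more configurations from each projected model (making it easier for an approximate inference routine) but increases the variance across samples, matching the trade-off described in the abstract; the number of samples averaged controls how much of that variance is suppressed.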

Cite this Paper


BibTeX
@InProceedings{pmlr-v37-zhuc15,
  title     = {A Hybrid Approach for Probabilistic Inference using Random Projections},
  author    = {Zhu, Michael and Ermon, Stefano},
  booktitle = {Proceedings of the 32nd International Conference on Machine Learning},
  pages     = {2039--2047},
  year      = {2015},
  editor    = {Bach, Francis and Blei, David},
  volume    = {37},
  series    = {Proceedings of Machine Learning Research},
  address   = {Lille, France},
  month     = {07--09 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v37/zhuc15.pdf},
  url       = {https://proceedings.mlr.press/v37/zhuc15.html}
}
APA
Zhu, M. & Ermon, S. (2015). A Hybrid Approach for Probabilistic Inference using Random Projections. Proceedings of the 32nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 37:2039-2047. Available from https://proceedings.mlr.press/v37/zhuc15.html.