Divide, Conquer, and Combine: a New Inference Strategy for Probabilistic Programs with Stochastic Support

Yuan Zhou, Hongseok Yang, Yee Whye Teh, Tom Rainforth
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:11534-11545, 2020.

Abstract

Universal probabilistic programming systems (PPSs) provide a powerful framework for specifying rich probabilistic models. They further attempt to automate the process of drawing inferences from these models, but doing this successfully is severely hampered by the wide range of non-standard models they can express. As a result, although one can specify complex models in a universal PPS, the provided inference engines often fall far short of what is required. In particular, we show that they produce surprisingly unsatisfactory performance for models where the support varies between executions, often doing no better than importance sampling from the prior. To address this, we introduce a new inference framework: Divide, Conquer, and Combine, which remains efficient for such models, and show how it can be implemented as an automated and generic PPS inference engine. We empirically demonstrate substantial performance improvements over existing approaches on three examples.

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-zhou20e,
  title     = {Divide, Conquer, and Combine: a New Inference Strategy for Probabilistic Programs with Stochastic Support},
  author    = {Zhou, Yuan and Yang, Hongseok and Teh, Yee Whye and Rainforth, Tom},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {11534--11545},
  year      = {2020},
  editor    = {Hal Daumé III and Aarti Singh},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/zhou20e/zhou20e.pdf},
  url       = {http://proceedings.mlr.press/v119/zhou20e.html},
  abstract  = {Universal probabilistic programming systems (PPSs) provide a powerful framework for specifying rich probabilistic models. They further attempt to automate the process of drawing inferences from these models, but doing this successfully is severely hampered by the wide range of non-standard models they can express. As a result, although one can specify complex models in a universal PPS, the provided inference engines often fall far short of what is required. In particular, we show that they produce surprisingly unsatisfactory performance for models where the support varies between executions, often doing no better than importance sampling from the prior. To address this, we introduce a new inference framework: Divide, Conquer, and Combine, which remains efficient for such models, and show how it can be implemented as an automated and generic PPS inference engine. We empirically demonstrate substantial performance improvements over existing approaches on three examples.}
}
Endnote
%0 Conference Paper
%T Divide, Conquer, and Combine: a New Inference Strategy for Probabilistic Programs with Stochastic Support
%A Yuan Zhou
%A Hongseok Yang
%A Yee Whye Teh
%A Tom Rainforth
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-zhou20e
%I PMLR
%P 11534--11545
%U http://proceedings.mlr.press/v119/zhou20e.html
%V 119
%X Universal probabilistic programming systems (PPSs) provide a powerful framework for specifying rich probabilistic models. They further attempt to automate the process of drawing inferences from these models, but doing this successfully is severely hampered by the wide range of non-standard models they can express. As a result, although one can specify complex models in a universal PPS, the provided inference engines often fall far short of what is required. In particular, we show that they produce surprisingly unsatisfactory performance for models where the support varies between executions, often doing no better than importance sampling from the prior. To address this, we introduce a new inference framework: Divide, Conquer, and Combine, which remains efficient for such models, and show how it can be implemented as an automated and generic PPS inference engine. We empirically demonstrate substantial performance improvements over existing approaches on three examples.
APA
Zhou, Y., Yang, H., Teh, Y. W., & Rainforth, T. (2020). Divide, Conquer, and Combine: a New Inference Strategy for Probabilistic Programs with Stochastic Support. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:11534-11545. Available from http://proceedings.mlr.press/v119/zhou20e.html.