On Nesting Monte Carlo Estimators

Tom Rainforth, Rob Cornish, Hongseok Yang, Andrew Warrington, Frank Wood
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:4267-4276, 2018.

Abstract

Many problems in machine learning and statistics involve nested expectations and thus do not permit conventional Monte Carlo (MC) estimation. For such problems, one must nest estimators, such that terms in an outer estimator themselves involve the calculation of a separate, nested estimate. We investigate the statistical implications of nesting MC estimators, including cases of multiple levels of nesting, and establish the conditions under which they converge. We derive corresponding rates of convergence and provide empirical evidence that these rates are observed in practice. We further establish a number of pitfalls that can arise from naive nesting of MC estimators, provide guidelines on how these can be avoided, and lay out novel methods for reformulating certain classes of nested expectation problems into single expectations, leading to improved convergence rates. We demonstrate the applicability of our work by using our results to develop a new estimator for discrete Bayesian experimental design problems and to derive error bounds for a class of variational objectives.
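To make the setup concrete, the following is a minimal sketch of a naively nested MC estimator for a nested expectation of the form gamma = E_y[ f( E_{x|y}[ g(x, y) ] ) ]. It is an illustration of the general idea, not the paper's code: the Gaussian toy model, the functions f and g, and the sample sizes N and M are all assumptions, chosen so that the true value is known in closed form.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: gamma = E_y[ f( E_{x|y}[ g(x, y) ] ) ]
# with y ~ N(0, 1), x | y ~ N(y, 1), g(x, y) = x, and f nonlinear.
# Here E_{x|y}[g(x, y)] = y, so the true value is E[f(y)] = E[y^2] = 1.

def f(z):
    return z ** 2  # nonlinear, so the inner expectation cannot be pulled out

def g(x, y):
    return x

def nested_mc(N, M):
    """Naive nested MC estimate with N outer and M inner samples.

    For nonlinear f the estimator is biased at finite M; the bias
    vanishes only as M grows, so convergence requires increasing
    both N and M.
    """
    y = rng.standard_normal(N)                     # outer samples y_n
    x = y[:, None] + rng.standard_normal((N, M))   # inner samples x_{n,m} given y_n
    inner = g(x, y[:, None]).mean(axis=1)          # inner MC estimate for each y_n
    return f(inner).mean()                         # outer MC average

print(nested_mc(N=10_000, M=100))  # close to 1 + 1/M = 1.01

In this toy problem the finite-M bias can be computed exactly: the inner average equals y plus Gaussian noise of variance 1/M, so the estimator converges to 1 + 1/M rather than 1. This illustrates why naive nesting with fixed M does not converge, and why the convergence analysis must account for both N and M.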

Cite this Paper

BibTeX
@InProceedings{pmlr-v80-rainforth18a,
  title     = {On Nesting {M}onte {C}arlo Estimators},
  author    = {Rainforth, Tom and Cornish, Rob and Yang, Hongseok and Warrington, Andrew and Wood, Frank},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {4267--4276},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/rainforth18a/rainforth18a.pdf},
  url       = {https://proceedings.mlr.press/v80/rainforth18a.html},
  abstract  = {Many problems in machine learning and statistics involve nested expectations and thus do not permit conventional Monte Carlo (MC) estimation. For such problems, one must nest estimators, such that terms in an outer estimator themselves involve the calculation of a separate, nested estimate. We investigate the statistical implications of nesting MC estimators, including cases of multiple levels of nesting, and establish the conditions under which they converge. We derive corresponding rates of convergence and provide empirical evidence that these rates are observed in practice. We further establish a number of pitfalls that can arise from naive nesting of MC estimators, provide guidelines on how these can be avoided, and lay out novel methods for reformulating certain classes of nested expectation problems into single expectations, leading to improved convergence rates. We demonstrate the applicability of our work by using our results to develop a new estimator for discrete Bayesian experimental design problems and to derive error bounds for a class of variational objectives.}
}
Endnote
%0 Conference Paper
%T On Nesting Monte Carlo Estimators
%A Tom Rainforth
%A Rob Cornish
%A Hongseok Yang
%A Andrew Warrington
%A Frank Wood
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-rainforth18a
%I PMLR
%P 4267--4276
%U https://proceedings.mlr.press/v80/rainforth18a.html
%V 80
%X Many problems in machine learning and statistics involve nested expectations and thus do not permit conventional Monte Carlo (MC) estimation. For such problems, one must nest estimators, such that terms in an outer estimator themselves involve the calculation of a separate, nested estimate. We investigate the statistical implications of nesting MC estimators, including cases of multiple levels of nesting, and establish the conditions under which they converge. We derive corresponding rates of convergence and provide empirical evidence that these rates are observed in practice. We further establish a number of pitfalls that can arise from naive nesting of MC estimators, provide guidelines on how these can be avoided, and lay out novel methods for reformulating certain classes of nested expectation problems into single expectations, leading to improved convergence rates. We demonstrate the applicability of our work by using our results to develop a new estimator for discrete Bayesian experimental design problems and to derive error bounds for a class of variational objectives.
APA
Rainforth, T., Cornish, R., Yang, H., Warrington, A., & Wood, F. (2018). On Nesting Monte Carlo Estimators. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:4267-4276. Available from https://proceedings.mlr.press/v80/rainforth18a.html.
