Optimally-weighted Estimators of the Maximum Mean Discrepancy for Likelihood-Free Inference

Ayush Bharti, Masha Naslidnyk, Oscar Key, Samuel Kaski, Francois-Xavier Briol
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:2289-2312, 2023.

Abstract

Likelihood-free inference methods typically make use of a distance between simulated and real data. A common example is the maximum mean discrepancy (MMD), which has previously been used for approximate Bayesian computation, minimum distance estimation, generalised Bayesian inference, and within the nonparametric learning framework. The MMD is commonly estimated at a root-$m$ rate, where $m$ is the number of simulated samples. This can lead to significant computational challenges since a large $m$ is required to obtain an accurate estimate, which is crucial for parameter estimation. In this paper, we propose a novel estimator for the MMD with significantly improved sample complexity. The estimator is particularly well suited for computationally expensive smooth simulators with low- to mid-dimensional inputs. This claim is supported through both theoretical results and an extensive simulation study on benchmark simulators.
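For context, the baseline the abstract refers to is the standard unbiased (U-statistic) estimator of the squared MMD, which converges at the root-$m$ rate in the number of simulated samples $m$. The sketch below is that standard estimator with a Gaussian kernel, not the optimally-weighted estimator proposed in the paper; the function names and the bandwidth choice are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(A, B, bandwidth=1.0):
    """Gaussian (RBF) kernel matrix between the rows of A and the rows of B."""
    sq_dists = (np.sum(A**2, axis=1)[:, None]
                + np.sum(B**2, axis=1)[None, :]
                - 2.0 * A @ B.T)
    return np.exp(-sq_dists / (2.0 * bandwidth**2))

def mmd2_unbiased(X, Y, bandwidth=1.0):
    """Standard unbiased U-statistic estimator of the squared MMD.

    Estimates E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)] from samples X ~ P
    and Y ~ Q; this is the root-m-rate baseline the paper improves upon.
    """
    n, m = len(X), len(Y)
    Kxx = gaussian_kernel(X, X, bandwidth)
    Kyy = gaussian_kernel(Y, Y, bandwidth)
    Kxy = gaussian_kernel(X, Y, bandwidth)
    # Drop diagonal (self-kernel) terms so the estimator is unbiased.
    term_xx = (Kxx.sum() - np.trace(Kxx)) / (n * (n - 1))
    term_yy = (Kyy.sum() - np.trace(Kyy)) / (m * (m - 1))
    term_xy = Kxy.mean()
    return term_xx + term_yy - 2.0 * term_xy
```

In a likelihood-free setting, `Y` would be the output of the simulator at a candidate parameter value and `X` the observed data; the estimator's root-$m$ error is what forces $m$ to be large when the distance must be computed accurately.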

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-bharti23a,
  title     = {Optimally-weighted Estimators of the Maximum Mean Discrepancy for Likelihood-Free Inference},
  author    = {Bharti, Ayush and Naslidnyk, Masha and Key, Oscar and Kaski, Samuel and Briol, Francois-Xavier},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {2289--2312},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/bharti23a/bharti23a.pdf},
  url       = {https://proceedings.mlr.press/v202/bharti23a.html},
  abstract  = {Likelihood-free inference methods typically make use of a distance between simulated and real data. A common example is the maximum mean discrepancy (MMD), which has previously been used for approximate Bayesian computation, minimum distance estimation, generalised Bayesian inference, and within the nonparametric learning framework. The MMD is commonly estimated at a root-$m$ rate, where $m$ is the number of simulated samples. This can lead to significant computational challenges since a large $m$ is required to obtain an accurate estimate, which is crucial for parameter estimation. In this paper, we propose a novel estimator for the MMD with significantly improved sample complexity. The estimator is particularly well suited for computationally expensive smooth simulators with low- to mid-dimensional inputs. This claim is supported through both theoretical results and an extensive simulation study on benchmark simulators.}
}
Endnote
%0 Conference Paper
%T Optimally-weighted Estimators of the Maximum Mean Discrepancy for Likelihood-Free Inference
%A Ayush Bharti
%A Masha Naslidnyk
%A Oscar Key
%A Samuel Kaski
%A Francois-Xavier Briol
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-bharti23a
%I PMLR
%P 2289--2312
%U https://proceedings.mlr.press/v202/bharti23a.html
%V 202
%X Likelihood-free inference methods typically make use of a distance between simulated and real data. A common example is the maximum mean discrepancy (MMD), which has previously been used for approximate Bayesian computation, minimum distance estimation, generalised Bayesian inference, and within the nonparametric learning framework. The MMD is commonly estimated at a root-$m$ rate, where $m$ is the number of simulated samples. This can lead to significant computational challenges since a large $m$ is required to obtain an accurate estimate, which is crucial for parameter estimation. In this paper, we propose a novel estimator for the MMD with significantly improved sample complexity. The estimator is particularly well suited for computationally expensive smooth simulators with low- to mid-dimensional inputs. This claim is supported through both theoretical results and an extensive simulation study on benchmark simulators.
APA
Bharti, A., Naslidnyk, M., Key, O., Kaski, S. & Briol, F.-X. (2023). Optimally-weighted Estimators of the Maximum Mean Discrepancy for Likelihood-Free Inference. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:2289-2312. Available from https://proceedings.mlr.press/v202/bharti23a.html.