On Contrastive Learning for Likelihood-free Inference

Conor Durkan, Iain Murray, George Papamakarios
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:2771-2781, 2020.

Abstract

Likelihood-free methods perform parameter inference in stochastic simulator models where evaluating the likelihood is intractable but sampling synthetic data is possible. One class of methods for this likelihood-free problem uses a classifier to distinguish between pairs of parameter-observation samples generated using the simulator and pairs sampled from some reference distribution, which implicitly learns a density ratio proportional to the likelihood. Another popular class of methods fits a conditional distribution to the parameter posterior directly, and a particular recent variant allows for the use of flexible neural density estimators for this task. In this work, we show that both of these approaches can be unified under a general contrastive learning scheme, and clarify how they should be run and compared.
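The classifier-based approach described above can be illustrated with a toy sketch. The following is not the paper's method, just a minimal, hand-rolled illustration of the underlying idea: on a 1-D Gaussian simulator, a logistic-regression classifier is trained to separate dependent (parameter, observation) pairs from shuffled pairs, and its logit then approximates the log density ratio, which is proportional to the log-likelihood in the parameter. All names and feature choices here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy simulator: prior theta ~ N(0, 1), observation x ~ N(theta, 1).
n = 5000
theta = rng.normal(0.0, 1.0, size=n)
x = theta + rng.normal(0.0, 1.0, size=n)

# Positive examples: (theta, x) pairs drawn jointly from the simulator.
# Negative examples: theta paired with x from a shuffled batch, i.e.
# samples from the product of marginals (the "reference distribution").
pos = np.stack([theta, x], axis=1)
neg = np.stack([theta, rng.permutation(x)], axis=1)
X = np.concatenate([pos, neg])
y = np.concatenate([np.ones(n), np.zeros(n)])

# Quadratic features suffice here: for this Gaussian toy problem the true
# log ratio log p(theta, x) - log p(theta)p(x) is a quadratic in (theta, x).
feats = np.column_stack([X[:, 0], X[:, 1], X[:, 0] * X[:, 1],
                         X[:, 0] ** 2, X[:, 1] ** 2, np.ones(len(X))])

# Plain gradient descent on the logistic loss.
w = np.zeros(feats.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-feats @ w))
    w -= 0.2 * feats.T @ (p - y) / len(y)

def log_ratio(t, xo):
    """Classifier logit: estimates log p(xo | t) - log p(xo), up to a
    constant in t, so it can rank parameter values for a fixed xo."""
    f = np.array([t, xo, t * xo, t ** 2, xo ** 2, 1.0])
    return float(f @ w)

# For an observed x = 1.0, the ratio should peak near theta = 1.0
# (the maximum-likelihood value for this simulator).
grid = np.linspace(-3.0, 3.0, 61)
best = grid[np.argmax([log_ratio(t, 1.0) for t in grid])]
```

Because the classes are balanced, the Bayes-optimal classifier's logit equals the log ratio of joint to product-of-marginals densities exactly; with richer simulators one would replace the fixed quadratic features with a neural network, which is the setting the paper analyzes.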

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-durkan20a,
  title     = {On Contrastive Learning for Likelihood-free Inference},
  author    = {Durkan, Conor and Murray, Iain and Papamakarios, George},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {2771--2781},
  year      = {2020},
  editor    = {Hal Daumé III and Aarti Singh},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/durkan20a/durkan20a.pdf},
  url       = {http://proceedings.mlr.press/v119/durkan20a.html},
  abstract  = {Likelihood-free methods perform parameter inference in stochastic simulator models where evaluating the likelihood is intractable but sampling synthetic data is possible. One class of methods for this likelihood-free problem uses a classifier to distinguish between pairs of parameter-observation samples generated using the simulator and pairs sampled from some reference distribution, which implicitly learns a density ratio proportional to the likelihood. Another popular class of methods fits a conditional distribution to the parameter posterior directly, and a particular recent variant allows for the use of flexible neural density estimators for this task. In this work, we show that both of these approaches can be unified under a general contrastive learning scheme, and clarify how they should be run and compared.}
}
Endnote
%0 Conference Paper
%T On Contrastive Learning for Likelihood-free Inference
%A Conor Durkan
%A Iain Murray
%A George Papamakarios
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-durkan20a
%I PMLR
%P 2771--2781
%U http://proceedings.mlr.press/v119/durkan20a.html
%V 119
%X Likelihood-free methods perform parameter inference in stochastic simulator models where evaluating the likelihood is intractable but sampling synthetic data is possible. One class of methods for this likelihood-free problem uses a classifier to distinguish between pairs of parameter-observation samples generated using the simulator and pairs sampled from some reference distribution, which implicitly learns a density ratio proportional to the likelihood. Another popular class of methods fits a conditional distribution to the parameter posterior directly, and a particular recent variant allows for the use of flexible neural density estimators for this task. In this work, we show that both of these approaches can be unified under a general contrastive learning scheme, and clarify how they should be run and compared.
APA
Durkan, C., Murray, I. & Papamakarios, G. (2020). On Contrastive Learning for Likelihood-free Inference. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:2771-2781. Available from http://proceedings.mlr.press/v119/durkan20a.html.