A Language for Counterfactual Generative Models

Zenna Tavares, James Koppel, Xin Zhang, Ria Das, Armando Solar-Lezama
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:10173-10182, 2021.

Abstract

We present Omega, a probabilistic programming language with support for counterfactual inference. Counterfactual inference means observing some fact in the present and inferring what would have happened had some past intervention been made, e.g. “given that medication was not effective at dose x, what is the probability that it would have been effective at dose 2x?” We accomplish this by introducing a new operator to probabilistic programming akin to Pearl’s do, define its formal semantics, provide an implementation, and demonstrate its utility through examples in a variety of simulation models.
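
As a purely illustrative reading of the dose example, the sketch below is not Omega code or its actual API; it is a generic Python Monte Carlo rendering of the standard abduction-action-prediction recipe for counterfactuals, with a made-up structural model (patient_response) and a hypothetical latent patient state (sensitivity).

# Hypothetical sketch, not Omega's API: a rejection-sampling counterfactual.
import random

def patient_response(dose, sensitivity):
    # Toy structural equation: the medication is effective when the dose
    # exceeds the patient's latent resistance threshold (1 / sensitivity).
    return dose * sensitivity > 1.0

def p_effective_at_double_dose(x, n_samples=100_000):
    outcomes = []
    for _ in range(n_samples):
        sensitivity = random.random()             # sample exogenous noise (abduction by rejection)
        if not patient_response(x, sensitivity):  # condition: not effective at dose x
            # intervene: set dose := 2x, then predict under the same latent state
            outcomes.append(patient_response(2 * x, sensitivity))
    return sum(outcomes) / len(outcomes) if outcomes else float("nan")

print(p_effective_at_double_dose(0.8))  # estimate of P(effective at 2x | not effective at x)

Per the abstract, Omega instead exposes a do-like operator with formal semantics, so such queries can be written directly against the generative model rather than hand-coded as above.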

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-tavares21a,
  title = {A Language for Counterfactual Generative Models},
  author = {Tavares, Zenna and Koppel, James and Zhang, Xin and Das, Ria and Solar-Lezama, Armando},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages = {10173--10182},
  year = {2021},
  editor = {Meila, Marina and Zhang, Tong},
  volume = {139},
  series = {Proceedings of Machine Learning Research},
  month = {18--24 Jul},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v139/tavares21a/tavares21a.pdf},
  url = {https://proceedings.mlr.press/v139/tavares21a.html},
  abstract = {We present Omega, a probabilistic programming language with support for counterfactual inference. Counterfactual inference means observing some fact in the present and inferring what would have happened had some past intervention been made, e.g. “given that medication was not effective at dose x, what is the probability that it would have been effective at dose 2x?” We accomplish this by introducing a new operator to probabilistic programming akin to Pearl’s do, define its formal semantics, provide an implementation, and demonstrate its utility through examples in a variety of simulation models.}
}
Endnote
%0 Conference Paper
%T A Language for Counterfactual Generative Models
%A Zenna Tavares
%A James Koppel
%A Xin Zhang
%A Ria Das
%A Armando Solar-Lezama
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-tavares21a
%I PMLR
%P 10173--10182
%U https://proceedings.mlr.press/v139/tavares21a.html
%V 139
%X We present Omega, a probabilistic programming language with support for counterfactual inference. Counterfactual inference means observing some fact in the present and inferring what would have happened had some past intervention been made, e.g. “given that medication was not effective at dose x, what is the probability that it would have been effective at dose 2x?” We accomplish this by introducing a new operator to probabilistic programming akin to Pearl’s do, define its formal semantics, provide an implementation, and demonstrate its utility through examples in a variety of simulation models.
APA
Tavares, Z., Koppel, J., Zhang, X., Das, R. & Solar-Lezama, A. (2021). A Language for Counterfactual Generative Models. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:10173-10182. Available from https://proceedings.mlr.press/v139/tavares21a.html.