Universal Causal Evaluation Engine: An API for empirically evaluating causal inference models

Alexander Lin, Amil Merchant, Suproteem K. Sarkar, Alexander D’Amour
Proceedings of Machine Learning Research, PMLR 104:50-58, 2019.

Abstract

A major driver in the success of predictive machine learning has been the “common task framework,” where community-wide benchmarks are shared for evaluating new algorithms. This pattern, however, is difficult to implement for causal learning tasks because the ground truth in these tasks is in general unobservable. Instead, causal inference methods are often evaluated on synthetic or semi-synthetic datasets that incorporate idiosyncratic assumptions about the underlying data-generating process. These evaluations are often proposed in conjunction with new causal inference methods—as a result, many methods are evaluated on incomparable benchmarks. To address this issue, we establish an API for generalized causal inference model assessment, with the goal of developing a platform that lets researchers deploy and evaluate new model classes in instances where treatments are explicitly known. The API uses a common interface for each of its components, and it allows for new methods and datasets to be evaluated and saved for future benchmarking.
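
The abstract describes the common interface only at a high level. As an illustration of what such an interface might look like, below is a minimal Python sketch. All names here (CausalDataset, CausalModel, generate, fit, predict_ite, evaluate, and the PEHE-style error metric) are hypothetical assumptions for illustration, not the paper's actual API.

# Sketch of a common evaluation interface: datasets with known ground-truth
# effects on one side, effect estimators on the other, and a single scoring
# function between them. Names are illustrative, not the paper's API.
from abc import ABC, abstractmethod
import numpy as np


class CausalDataset(ABC):
    """A (semi-)synthetic dataset where true treatment effects are known by construction."""

    @abstractmethod
    def generate(self, seed: int):
        """Return covariates X, treatments T, outcomes Y, and true effects tau."""


class CausalModel(ABC):
    """Any treatment-effect estimator, wrapped behind one shared interface."""

    @abstractmethod
    def fit(self, X, T, Y):
        ...

    @abstractmethod
    def predict_ite(self, X):
        """Predict individual treatment effects for each row of X."""


def evaluate(model: CausalModel, dataset: CausalDataset, seed: int = 0) -> float:
    """Score a model on a dataset with a PEHE-style root-mean-squared error
    against the known effects; scores can be stored for future benchmarking."""
    X, T, Y, tau = dataset.generate(seed)
    model.fit(X, T, Y)
    tau_hat = model.predict_ite(X)
    return float(np.sqrt(np.mean((tau_hat - tau) ** 2)))

Under this kind of design, a new estimator or a new semi-synthetic dataset only has to implement one abstract class to become comparable against every previously saved benchmark result.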

Cite this Paper


BibTeX

@InProceedings{pmlr-v104-lin19a,
  title = {Universal Causal Evaluation Engine: An API for empirically evaluating causal inference models},
  author = {Lin, Alexander and Merchant, Amil and Sarkar, Suproteem K. and D'Amour, Alexander},
  booktitle = {Proceedings of Machine Learning Research},
  pages = {50--58},
  year = {2019},
  volume = {104},
  series = {Proceedings of Machine Learning Research},
  month = {05 Aug},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v104/lin19a/lin19a.pdf},
  url = {https://proceedings.mlr.press/v104/lin19a.html},
  abstract = {A major driver in the success of predictive machine learning has been the “common task framework,” where community-wide benchmarks are shared for evaluating new algorithms. This pattern, however, is difficult to implement for causal learning tasks because the ground truth in these tasks is in general unobservable. Instead, causal inference methods are often evaluated on synthetic or semi-synthetic datasets that incorporate idiosyncratic assumptions about the underlying data-generating process. These evaluations are often proposed in conjunction with new causal inference methods—as a result, many methods are evaluated on incomparable benchmarks. To address this issue, we establish an API for generalized causal inference model assessment, with the goal of developing a platform that lets researchers deploy and evaluate new model classes in instances where treatments are explicitly known. The API uses a common interface for each of its components, and it allows for new methods and datasets to be evaluated and saved for future benchmarking.}
}
Endnote

%0 Conference Paper
%T Universal Causal Evaluation Engine: An API for empirically evaluating causal inference models
%A Alexander Lin
%A Amil Merchant
%A Suproteem K. Sarkar
%A Alexander D’Amour
%B Proceedings of Machine Learning Research
%C Proceedings of Machine Learning Research
%D 2019
%F pmlr-v104-lin19a
%I PMLR
%P 50--58
%U https://proceedings.mlr.press/v104/lin19a.html
%V 104
%X A major driver in the success of predictive machine learning has been the “common task framework,” where community-wide benchmarks are shared for evaluating new algorithms. This pattern, however, is difficult to implement for causal learning tasks because the ground truth in these tasks is in general unobservable. Instead, causal inference methods are often evaluated on synthetic or semi-synthetic datasets that incorporate idiosyncratic assumptions about the underlying data-generating process. These evaluations are often proposed in conjunction with new causal inference methods—as a result, many methods are evaluated on incomparable benchmarks. To address this issue, we establish an API for generalized causal inference model assessment, with the goal of developing a platform that lets researchers deploy and evaluate new model classes in instances where treatments are explicitly known. The API uses a common interface for each of its components, and it allows for new methods and datasets to be evaluated and saved for future benchmarking.
APA
Lin, A., Merchant, A., Sarkar, S.K. & D’Amour, A. (2019). Universal Causal Evaluation Engine: An API for empirically evaluating causal inference models. Proceedings of Machine Learning Research, in Proceedings of Machine Learning Research 104:50-58. Available from https://proceedings.mlr.press/v104/lin19a.html.
