Counterfactual Cross-Validation: Stable Model Selection Procedure for Causal Inference Models

Yuta Saito, Shota Yasui
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:8398-8407, 2020.

Abstract

We study the model selection problem in \emph{conditional average treatment effect} (CATE) prediction. Unlike previous works on this topic, we focus on preserving the rank order of the performance of candidate CATE predictors to enable accurate and stable model selection. To this end, we analyze the model performance ranking problem and formulate guidelines to obtain a better evaluation metric. We then propose a novel metric that can identify the ranking of the performance of CATE predictors with high confidence. Empirical evaluations demonstrate that our metric outperforms existing metrics in both model selection and hyperparameter tuning tasks.
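Note on the setting: the central difficulty in CATE model selection is that the true treatment effect of an individual is never observed, so candidate predictors must be ranked against a proxy target rather than a ground-truth label. The short Python sketch below illustrates this style of procedure under invented assumptions (synthetic data, T-learner candidates, doubly robust pseudo-outcomes as the proxy target); it is a minimal illustration of proxy-based model ranking, not the paper's exact Counterfactual Cross-Validation metric.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic observational data: covariates X, treatment t, outcome y.
n, d = 4000, 5
X = rng.normal(size=(n, d))
t = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))   # confounded treatment
tau_true = X[:, 1]                                 # true (unobserved) CATE
y = X[:, 0] + t * tau_true + rng.normal(size=n)

X_tr, X_va, t_tr, t_va, y_tr, y_va = train_test_split(X, t, y, random_state=0)

# Nuisance models fitted on the training split only.
prop = LogisticRegression().fit(X_tr, t_tr)
mu0 = GradientBoostingRegressor(random_state=0).fit(X_tr[t_tr == 0], y_tr[t_tr == 0])
mu1 = GradientBoostingRegressor(random_state=0).fit(X_tr[t_tr == 1], y_tr[t_tr == 1])

# Doubly robust pseudo-outcomes on the validation split: a stand-in,
# unbiased in expectation, for the unobservable individual effects.
e = np.clip(prop.predict_proba(X_va)[:, 1], 0.05, 0.95)
m0, m1 = mu0.predict(X_va), mu1.predict(X_va)
tau_dr = m1 - m0 + t_va * (y_va - m1) / e - (1 - t_va) * (y_va - m0) / (1 - e)

# Candidate CATE predictors: T-learners with different base models.
def t_learner_cate(make_model):
    f0 = make_model().fit(X_tr[t_tr == 0], y_tr[t_tr == 0])
    f1 = make_model().fit(X_tr[t_tr == 1], y_tr[t_tr == 1])
    return f1.predict(X_va) - f0.predict(X_va)

candidates = {
    "t_learner_linear": t_learner_cate(LinearRegression),
    "t_learner_forest": t_learner_cate(
        lambda: RandomForestRegressor(n_estimators=200, random_state=0)),
}

# Rank candidates by squared error against the pseudo-outcomes; pick the best.
scores = {name: np.mean((tau_dr - pred) ** 2) for name, pred in candidates.items()}
print(sorted(scores.items(), key=lambda kv: kv[1]))

A metric of the kind the paper proposes aims to make the ranking produced this way agree, with high confidence, with the ranking one would obtain if the true CATE values were observable.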

Cite this Paper

BibTeX
@InProceedings{pmlr-v119-saito20a,
  title     = {Counterfactual Cross-Validation: Stable Model Selection Procedure for Causal Inference Models},
  author    = {Saito, Yuta and Yasui, Shota},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {8398--8407},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/saito20a/saito20a.pdf},
  url       = {https://proceedings.mlr.press/v119/saito20a.html},
  abstract  = {We study the model selection problem in \emph{conditional average treatment effect} (CATE) prediction. Unlike previous works on this topic, we focus on preserving the rank order of the performance of candidate CATE predictors to enable accurate and stable model selection. To this end, we analyze the model performance ranking problem and formulate guidelines to obtain a better evaluation metric. We then propose a novel metric that can identify the ranking of the performance of CATE predictors with high confidence. Empirical evaluations demonstrate that our metric outperforms existing metrics in both model selection and hyperparameter tuning tasks.}
}
Endnote
%0 Conference Paper
%T Counterfactual Cross-Validation: Stable Model Selection Procedure for Causal Inference Models
%A Yuta Saito
%A Shota Yasui
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-saito20a
%I PMLR
%P 8398--8407
%U https://proceedings.mlr.press/v119/saito20a.html
%V 119
%X We study the model selection problem in \emph{conditional average treatment effect} (CATE) prediction. Unlike previous works on this topic, we focus on preserving the rank order of the performance of candidate CATE predictors to enable accurate and stable model selection. To this end, we analyze the model performance ranking problem and formulate guidelines to obtain a better evaluation metric. We then propose a novel metric that can identify the ranking of the performance of CATE predictors with high confidence. Empirical evaluations demonstrate that our metric outperforms existing metrics in both model selection and hyperparameter tuning tasks.
APA
Saito, Y. & Yasui, S. (2020). Counterfactual Cross-Validation: Stable Model Selection Procedure for Causal Inference Models. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:8398-8407. Available from https://proceedings.mlr.press/v119/saito20a.html.
