A Meta-learner for Heterogeneous Effects in Difference-in-Differences

Hui Lan, Haoge Chang, Eleanor Wiske Dillon, Vasilis Syrgkanis
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:32420-32451, 2025.

Abstract

We address the problem of estimating heterogeneous treatment effects in panel data, adopting the popular Difference-in-Differences (DiD) framework under the conditional parallel trends assumption. We propose a novel doubly robust meta-learner for the Conditional Average Treatment Effect on the Treated (CATT), reducing the estimation to a convex risk minimization problem involving a set of auxiliary models. Our framework allows for flexible estimation of the CATT when conditioning on any subset of variables of interest, using generic machine learning. Leveraging Neyman orthogonality, our proposed approach is robust to estimation errors in the auxiliary models. As a generalization of our main result, we develop a meta-learning approach for the estimation of general conditional functionals under covariate shift. We also provide an extension to the instrumented DiD setting with non-compliance. Empirical results demonstrate the superiority of our approach over existing baselines.
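To make the construction concrete, below is a minimal sketch of a doubly robust, DR-learner-style pseudo-outcome approach to the CATT in a two-period DiD under conditional parallel trends. This is an illustration in the spirit of the paper, not the authors' estimator: the function name dr_catt, the gradient-boosting nuisance models, the clipping thresholds, and the synthetic data are all assumptions made for the example.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
from sklearn.model_selection import cross_val_predict

def dr_catt(X, D, dY):
    """X: (n, d) covariates; D: (n,) treated-group indicator (0/1);
    dY: (n,) outcome difference Y_post - Y_pre."""
    # Stage 1: auxiliary (nuisance) models.
    # Propensity e(X) = P(D = 1 | X), cross-fitted to reduce overfitting bias.
    e = cross_val_predict(GradientBoostingClassifier(), X, D,
                          cv=5, method="predict_proba")[:, 1]
    e = np.clip(e, 0.02, 0.98)  # trim to avoid exploding weights
    # Control-group trend m0(X) = E[dY | D = 0, X], fit on controls only
    # (cross-fitting of m0 omitted here for brevity).
    m0 = GradientBoostingRegressor().fit(X[D == 0], dY[D == 0]).predict(X)

    # Stage 2: doubly robust pseudo-outcome. Its conditional mean given X
    # recovers the CATT if either e(X) or m0(X) is consistently estimated.
    phi = (D - (1 - D) * e / (1 - e)) * (dY - m0) / e

    # Stage 3: regress the pseudo-outcome on X (or any subset of the
    # variables of interest) to obtain a flexible CATT model.
    return GradientBoostingRegressor().fit(X, phi)

# Example usage on synthetic two-period panel data (hypothetical):
rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))
D = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
dY = X[:, 1] + D * (1 + X[:, 2]) + rng.normal(size=n)
catt_model = dr_catt(X, D, dY)

Because the pseudo-outcome is Neyman orthogonal in this sense, the final-stage regression is a plain (convex) squared-loss problem that is first-order insensitive to errors in the auxiliary models, which is the property the paper's meta-learner exploits.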

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-lan25a,
  title     = {A Meta-learner for Heterogeneous Effects in Difference-in-Differences},
  author    = {Lan, Hui and Chang, Haoge and Dillon, Eleanor Wiske and Syrgkanis, Vasilis},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {32420--32451},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/lan25a/lan25a.pdf},
  url       = {https://proceedings.mlr.press/v267/lan25a.html},
  abstract  = {We address the problem of estimating heterogeneous treatment effects in panel data, adopting the popular Difference-in-Differences (DiD) framework under the conditional parallel trends assumption. We propose a novel doubly robust meta-learner for the Conditional Average Treatment Effect on the Treated (CATT), reducing the estimation to a convex risk minimization problem involving a set of auxiliary models. Our framework allows for the flexible estimation of the CATT, when conditioning on any subset of variables of interest using generic machine learning. Leveraging Neyman orthogonality, our proposed approach is robust to estimation errors in the auxiliary models. As a generalization to our main result, we develop a meta-learning approach for the estimation of general conditional functionals under covariate shift. We also provide an extension to the instrumented DiD setting with non-compliance. Empirical results demonstrate the superiority of our approach over existing baselines.}
}
Endnote
%0 Conference Paper
%T A Meta-learner for Heterogeneous Effects in Difference-in-Differences
%A Hui Lan
%A Haoge Chang
%A Eleanor Wiske Dillon
%A Vasilis Syrgkanis
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-lan25a
%I PMLR
%P 32420--32451
%U https://proceedings.mlr.press/v267/lan25a.html
%V 267
%X We address the problem of estimating heterogeneous treatment effects in panel data, adopting the popular Difference-in-Differences (DiD) framework under the conditional parallel trends assumption. We propose a novel doubly robust meta-learner for the Conditional Average Treatment Effect on the Treated (CATT), reducing the estimation to a convex risk minimization problem involving a set of auxiliary models. Our framework allows for the flexible estimation of the CATT, when conditioning on any subset of variables of interest using generic machine learning. Leveraging Neyman orthogonality, our proposed approach is robust to estimation errors in the auxiliary models. As a generalization to our main result, we develop a meta-learning approach for the estimation of general conditional functionals under covariate shift. We also provide an extension to the instrumented DiD setting with non-compliance. Empirical results demonstrate the superiority of our approach over existing baselines.
APA
Lan, H., Chang, H., Dillon, E.W. & Syrgkanis, V. (2025). A Meta-learner for Heterogeneous Effects in Difference-in-Differences. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:32420-32451. Available from https://proceedings.mlr.press/v267/lan25a.html.
