Policy Learning for Localized Interventions from Observational Data

Myrl G. Marmarelis, Fred Morstatter, Aram Galstyan, Greg Ver Steeg
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:4456-4464, 2024.

Abstract

A largely unaddressed problem in causal inference is that of learning reliable policies over continuous, high-dimensional treatment variables from observational data. Especially in the presence of strong confounding, it can be infeasible to learn the entire heterogeneous response surface from treatment to outcome. It is also not particularly useful when there are practical constraints on the size of the interventions altering the observed treatments. Since it tends to be easier to learn the outcome for treatments near existing observations, we propose a new framework for evaluating and optimizing the effect of small, tailored, and localized interventions that nudge the observed treatment assignments. Our doubly robust effect estimator plugs into a policy learner that stays within the interventional scope by optimal transport. Consequently, the error of the total policy effect is restricted to prediction errors near the observational distribution, rather than the whole response surface.
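The paper's estimator is not reproduced here, but the general shape of a doubly robust value estimate for a small treatment "nudge" can be sketched on toy data. Everything below is an illustrative assumption, not the paper's implementation: a Gaussian treatment model, a linear outcome model taken as known, and a fixed shift `delta` applied uniformly rather than a learned, tailored policy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy observational data: confounder x, continuous treatment t, outcome y.
n = 2_000
x = rng.normal(size=n)
t = x + rng.normal(size=n)             # treatment assignment confounded by x
y = t + 0.5 * x + rng.normal(size=n)   # outcome; one unit of treatment adds 1

delta = 0.1  # small, localized nudge added to every observed treatment

# Nuisance models, taken as known for this illustration:
mu = lambda x_, t_: t_ + 0.5 * x_                 # outcome model E[Y | X, T]
log_dens = lambda t_, x_: -0.5 * (t_ - x_) ** 2   # log N(t; x, 1), up to a constant

# Doubly robust score for the shifted policy T -> T + delta: the density ratio
# f(T - delta | X) / f(T | X) reweights the outcome residuals, so an error in
# one nuisance model can be corrected by the other.
w = np.exp(log_dens(t - delta, x) - log_dens(t, x))
psi = mu(x, t + delta) + w * (y - mu(x, t))

nudge_effect = psi.mean() - y.mean()  # estimated average effect of the nudge
```

Because the density ratio only compares the observed treatment to a slightly shifted one, the weights stay close to 1 for small `delta`, which is one way to see why localized interventions avoid extrapolating far from the observational distribution.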

Cite this Paper


BibTeX
@InProceedings{pmlr-v238-marmarelis24a,
  title     = {Policy Learning for Localized Interventions from Observational Data},
  author    = {Marmarelis, Myrl G. and Morstatter, Fred and Galstyan, Aram and Ver Steeg, Greg},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages     = {4456--4464},
  year      = {2024},
  editor    = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume    = {238},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--04 May},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v238/marmarelis24a/marmarelis24a.pdf},
  url       = {https://proceedings.mlr.press/v238/marmarelis24a.html},
  abstract  = {A largely unaddressed problem in causal inference is that of learning reliable policies in continuous, high-dimensional treatment variables from observational data. Especially in the presence of strong confounding, it can be infeasible to learn the entire heterogeneous response surface from treatment to outcome. It is also not particularly useful, when there are practical constraints on the size of the interventions altering the observational treatments. Since it tends to be easier to learn the outcome for treatments near existing observations, we propose a new framework for evaluating and optimizing the effect of small, tailored, and localized interventions that nudge the observed treatment assignments. Our doubly robust effect estimator plugs into a policy learner that stays within the interventional scope by optimal transport. Consequently, the error of the total policy effect is restricted to prediction errors nearby the observational distribution, rather than the whole response surface.}
}
Endnote
%0 Conference Paper
%T Policy Learning for Localized Interventions from Observational Data
%A Myrl G. Marmarelis
%A Fred Morstatter
%A Aram Galstyan
%A Greg Ver Steeg
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-marmarelis24a
%I PMLR
%P 4456--4464
%U https://proceedings.mlr.press/v238/marmarelis24a.html
%V 238
%X A largely unaddressed problem in causal inference is that of learning reliable policies in continuous, high-dimensional treatment variables from observational data. Especially in the presence of strong confounding, it can be infeasible to learn the entire heterogeneous response surface from treatment to outcome. It is also not particularly useful, when there are practical constraints on the size of the interventions altering the observational treatments. Since it tends to be easier to learn the outcome for treatments near existing observations, we propose a new framework for evaluating and optimizing the effect of small, tailored, and localized interventions that nudge the observed treatment assignments. Our doubly robust effect estimator plugs into a policy learner that stays within the interventional scope by optimal transport. Consequently, the error of the total policy effect is restricted to prediction errors nearby the observational distribution, rather than the whole response surface.
APA
Marmarelis, M.G., Morstatter, F., Galstyan, A. &amp; Ver Steeg, G. (2024). Policy Learning for Localized Interventions from Observational Data. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:4456-4464. Available from https://proceedings.mlr.press/v238/marmarelis24a.html.