Counterfactually Guided Policy Transfer in Clinical Settings

Taylor W. Killian, Marzyeh Ghassemi, Shalmali Joshi
Proceedings of the Conference on Health, Inference, and Learning, PMLR 174:5-31, 2022.

Abstract

Domain shift, encountered when using a trained model for a new patient population, creates significant challenges for sequential decision making in healthcare since the target domain may be both data-scarce and confounded. In this paper, we propose a method for off-policy transfer by modeling the underlying generative process with a causal mechanism. We use informative priors from the source domain to augment counterfactual trajectories in the target in a principled manner. We demonstrate how this addresses data-scarcity in the presence of unobserved confounding. The causal parametrization of our sampling procedure guarantees that counterfactual quantities can be estimated from scarce observational target data, maintaining intuitive stability properties. Policy learning in the target domain is further regularized via the source policy through KL-divergence. Through evaluation on a simulated sepsis treatment task, our counterfactual policy transfer procedure significantly improves the performance of a learned treatment policy when assumptions of "no-unobserved confounding" are relaxed.
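
The abstract names two mechanisms that a short sketch can make concrete: counterfactually augmented trajectories supply extra training signal in the data-scarce target domain, and a KL penalty keeps the learned target policy close to the source policy. Below is a minimal, hypothetical Python sketch of such a KL-regularized objective. Every name here (kl_regularized_loss, kl_weight, the policy-gradient surrogate) is an illustrative assumption rather than the authors' implementation, and the direction of the KL penalty is one plausible choice.

import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax over the action dimension.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def kl_regularized_loss(target_logits, source_probs, returns, actions, kl_weight=0.1):
    """Policy-gradient-style surrogate loss with a KL(target || source) penalty.

    target_logits: (batch, n_actions) logits of the target policy being learned
    source_probs:  (batch, n_actions) action probabilities of the frozen source policy
    returns:       (batch,) returns for the taken actions; under the paper's recipe
                   these could come from counterfactually augmented trajectories
    actions:       (batch,) indices of the actions taken
    """
    probs = softmax(target_logits)
    log_probs = np.log(probs + 1e-8)
    # REINFORCE-style term on (possibly counterfactual) transitions.
    pg_term = -(returns * log_probs[np.arange(len(actions)), actions]).mean()
    # KL(pi_target || pi_source) regularizes the target policy toward the source prior.
    kl_term = (probs * (log_probs - np.log(source_probs + 1e-8))).sum(axis=1).mean()
    return pg_term + kl_weight * kl_term

# Toy usage with random data (illustrative only):
rng = np.random.default_rng(0)
B, A = 32, 4
loss = kl_regularized_loss(
    target_logits=rng.normal(size=(B, A)),
    source_probs=softmax(rng.normal(size=(B, A))),
    returns=rng.normal(size=B),
    actions=rng.integers(0, A, size=B),
)

The kl_weight hyperparameter trades off fitting the scarce target data against trusting the source policy; as kl_weight grows, the learned policy collapses toward the source behavior.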

Cite this Paper


BibTeX
@InProceedings{pmlr-v174-killian22a,
  title     = {Counterfactually Guided Policy Transfer in Clinical Settings},
  author    = {Killian, Taylor W. and Ghassemi, Marzyeh and Joshi, Shalmali},
  booktitle = {Proceedings of the Conference on Health, Inference, and Learning},
  pages     = {5--31},
  year      = {2022},
  editor    = {Flores, Gerardo and Chen, George H and Pollard, Tom and Ho, Joyce C and Naumann, Tristan},
  volume    = {174},
  series    = {Proceedings of Machine Learning Research},
  month     = {07--08 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v174/killian22a/killian22a.pdf},
  url       = {https://proceedings.mlr.press/v174/killian22a.html}
}
APA
Killian, T.W., Ghassemi, M. & Joshi, S. (2022). Counterfactually Guided Policy Transfer in Clinical Settings. Proceedings of the Conference on Health, Inference, and Learning, in Proceedings of Machine Learning Research 174:5-31. Available from https://proceedings.mlr.press/v174/killian22a.html.
