ExOSITO: Explainable Off-Policy Learning with Side Information for Intensive Care Unit Blood Test Orders

Zongliang Ji, Andre Carlos Kajdacsy-Balla Amaral, Anna Goldenberg, Rahul G Krishnan
Proceedings of the sixth Conference on Health, Inference, and Learning, PMLR 287:337-368, 2025.

Abstract

Ordering a minimal subset of lab tests for patients in the intensive care unit (ICU) can be challenging. Care teams must balance ensuring the availability of the right information with reducing the clinical burden and costs associated with each lab test order. Most in-patient settings experience frequent over-ordering of lab tests, but are now aiming to reduce this burden on both hospital resources and the environment. This paper develops a novel method that combines off-policy learning with privileged information to identify the optimal set of ICU lab tests to order. Our approach, EXplainable Off-policy learning with Side Information for ICU blood Test Orders (ExOSITO), creates an interpretable assistive tool for clinicians to order lab tests by considering both the observed and predicted future status of each patient. We pose this problem as a causal bandit trained using offline data and a novel reward function derived from clinically-approved rules; we introduce a novel learning framework that integrates clinical knowledge with observational data to bridge the gap between the optimal and logging policies. The learned policy function provides interpretable clinical information and reduces costs without omitting any vital lab orders, outperforming both a physician’s policy and prior approaches to this practical problem.
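
To make the learning setup described in the abstract concrete, the following is a minimal illustrative sketch of importance-weighted, offline (off-policy) learning for a contextual bandit over lab-test orders with a rule-derived reward. It is not the ExOSITO implementation: the linear per-test policy, the known logging propensities, and every name here (rule_reward, needed, miss_penalty, etc.) are hypothetical assumptions made only for illustration.

# Illustrative sketch only (not the ExOSITO implementation): offline, off-policy
# learning of a contextual-bandit policy over lab-test orders, with a reward
# derived from simple rule-based penalties. All names, shapes, and rules here
# are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_patients, n_feats, n_tests = 512, 16, 5

X = rng.normal(size=(n_patients, n_feats))               # patient context (observed + predicted status)
A = rng.integers(0, 2, size=(n_patients, n_tests))       # logged clinician orders, one bit per test
mu = np.full((n_patients, n_tests), 0.5)                 # logging-policy propensities (assumed known)
needed = rng.integers(0, 2, size=(n_patients, n_tests))  # tests a toy rule set deems clinically required

def rule_reward(actions, needed, order_cost=0.1, miss_penalty=1.0):
    """Reward = -(cost of tests ordered) - (penalty for omitting a required test)."""
    missed = needed * (1 - actions)
    return -order_cost * actions.sum(axis=1) - miss_penalty * missed.sum(axis=1)

R = rule_reward(A, needed)

W = np.zeros((n_feats, n_tests))   # linear policy: sigmoid(X @ W) = per-test order probability
lr = 0.05
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-X @ W))                      # pi_theta(order test | patient)
    pi_a = np.prod(np.where(A == 1, p, 1 - p), axis=1)    # probability of the logged order set
    mu_a = np.prod(np.where(A == 1, mu, 1 - mu), axis=1)
    w_is = np.clip(pi_a / mu_a, 0.0, 10.0)                # clipped importance weights
    # Gradient ascent on the importance-weighted value estimate (REINFORCE-style)
    W += lr * (X.T @ ((A - p) * (w_is * R)[:, None])) / n_patients

print("off-policy value estimate:", float(np.mean(w_is * R)))

In the paper itself, the reward is instead derived from clinically-approved ordering rules and the policy also conditions on predicted future patient status; the sketch only shows the general shape of importance-weighted offline policy learning over order sets.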

Cite this Paper


BibTeX
@InProceedings{pmlr-v287-ji25a,
  title = {ExOSITO: Explainable Off-Policy Learning with Side Information for Intensive Care Unit Blood Test Orders},
  author = {Ji, Zongliang and Amaral, Andre Carlos Kajdacsy-Balla and Goldenberg, Anna and Krishnan, Rahul G},
  booktitle = {Proceedings of the sixth Conference on Health, Inference, and Learning},
  pages = {337--368},
  year = {2025},
  editor = {Xu, Xuhai Orson and Choi, Edward and Singhal, Pankhuri and Gerych, Walter and Tang, Shengpu and Agrawal, Monica and Subbaswamy, Adarsh and Sizikova, Elena and Dunn, Jessilyn and Daneshjou, Roxana and Sarker, Tasmie and McDermott, Matthew and Chen, Irene},
  volume = {287},
  series = {Proceedings of Machine Learning Research},
  month = {25--27 Jun},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v287/main/assets/ji25a/ji25a.pdf},
  url = {https://proceedings.mlr.press/v287/ji25a.html},
  abstract = {Ordering a minimal subset of lab tests for patients in the intensive care unit (ICU) can be challenging. Care teams must balance between ensuring the availability of the right information and reducing the clinical burden and costs associated with each lab test order. Most in-patient settings experience frequent over-ordering of lab tests, but are now aiming to reduce this burden on both hospital resources and the environment. This paper develops a novel method that combines off-policy learning with privileged information to identify the optimal set of ICU lab tests to order. Our approach, EXplainable Off-policy learning with Side Information for ICU blood Test Orders (ExOSITO) creates an interpretable assistive tool for clinicians to order lab tests by considering both the observed and predicted future status of each patient. We pose this problem as a causal bandit trained using offline data and a novel reward function derived from clinically-approved rules; we introduce a novel learning framework that integrates clinical knowledge with observational data to bridge the gap between the optimal and logging policies. The learned policy function provides interpretable clinical information and reduces costs without omitting any vital lab orders, outperforming both a physician’s policy and prior approaches to this practical problem.}
}
Endnote
%0 Conference Paper
%T ExOSITO: Explainable Off-Policy Learning with Side Information for Intensive Care Unit Blood Test Orders
%A Zongliang Ji
%A Andre Carlos Kajdacsy-Balla Amaral
%A Anna Goldenberg
%A Rahul G Krishnan
%B Proceedings of the sixth Conference on Health, Inference, and Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Xuhai Orson Xu
%E Edward Choi
%E Pankhuri Singhal
%E Walter Gerych
%E Shengpu Tang
%E Monica Agrawal
%E Adarsh Subbaswamy
%E Elena Sizikova
%E Jessilyn Dunn
%E Roxana Daneshjou
%E Tasmie Sarker
%E Matthew McDermott
%E Irene Chen
%F pmlr-v287-ji25a
%I PMLR
%P 337--368
%U https://proceedings.mlr.press/v287/ji25a.html
%V 287
%X Ordering a minimal subset of lab tests for patients in the intensive care unit (ICU) can be challenging. Care teams must balance between ensuring the availability of the right information and reducing the clinical burden and costs associated with each lab test order. Most in-patient settings experience frequent over-ordering of lab tests, but are now aiming to reduce this burden on both hospital resources and the environment. This paper develops a novel method that combines off-policy learning with privileged information to identify the optimal set of ICU lab tests to order. Our approach, EXplainable Off-policy learning with Side Information for ICU blood Test Orders (ExOSITO) creates an interpretable assistive tool for clinicians to order lab tests by considering both the observed and predicted future status of each patient. We pose this problem as a causal bandit trained using offline data and a novel reward function derived from clinically-approved rules; we introduce a novel learning framework that integrates clinical knowledge with observational data to bridge the gap between the optimal and logging policies. The learned policy function provides interpretable clinical information and reduces costs without omitting any vital lab orders, outperforming both a physician’s policy and prior approaches to this practical problem.
APA
Ji, Z., Amaral, A.C.K., Goldenberg, A. & Krishnan, R.G. (2025). ExOSITO: Explainable Off-Policy Learning with Side Information for Intensive Care Unit Blood Test Orders. Proceedings of the sixth Conference on Health, Inference, and Learning, in Proceedings of Machine Learning Research 287:337-368. Available from https://proceedings.mlr.press/v287/ji25a.html.