Guardian-regularized Safe Offline Reinforcement Learning for Smart Weaning of Mechanical Circulatory Devices

Aysin Tumay, Sophia Sun, Sonia Fereidooni, Aaron Dumas, Elise Jortberg, Rose Yu
Proceedings of the Fifth Machine Learning for Health Symposium, PMLR 297:1269-1296, 2026.

Abstract

We study the sequential decision-making problem for automated weaning of mechanical circulatory support (MCS) devices in cardiogenic shock patients. MCS devices are percutaneous micro-axial flow pumps that provide left ventricular unloading and forward blood flow, but current weaning strategies vary significantly across care teams and lack data-driven approaches. Offline reinforcement learning (RL) has proven successful in sequential decision-making tasks, but our setting presents challenges for training and evaluating traditional offline RL methods: the prohibition of online patient interaction, highly uncertain circulatory dynamics due to concurrent treatments, and limited data availability. We developed an end-to-end machine learning framework with two key contributions: (1) Clinically-aware OOD-regularized Model-based Policy Optimization (CORMPO), a density-regularized offline RL algorithm for out-of-distribution suppression that also incorporates clinically informed reward shaping, and (2) a Transformer-based probabilistic digital twin that models MCS circulatory dynamics for policy evaluation with rich physiological and clinical metrics. We prove that CORMPO achieves theoretical performance guarantees under mild assumptions. CORMPO attains a 28% higher reward than offline RL baselines and 82.6% higher scores on clinical metrics on real and synthetic datasets. Our approach offers a principled framework for safe offline policy learning in high-stakes medical applications where domain expertise and safety constraints are essential.
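The core idea behind the density-regularized OOD suppression described above can be illustrated with a minimal sketch. This is not the paper's CORMPO algorithm: the function names (`kde_logdensity`, `penalized_reward`), the kernel density estimator, and the `threshold`/`lam` hyperparameters are all hypothetical choices used only to show the generic pattern of penalizing rewards for state-action pairs that fall outside the support of the offline dataset.

```python
import math

def kde_logdensity(x, data, bandwidth=0.5):
    """1-D Gaussian kernel density estimate over the offline dataset.
    A small constant keeps the log finite far from the data."""
    n = len(data)
    s = sum(math.exp(-((x - d) ** 2) / (2 * bandwidth ** 2)) for d in data)
    return math.log(s / (n * bandwidth * math.sqrt(2 * math.pi)) + 1e-12)

def penalized_reward(reward, state_action, data, threshold=-2.0, lam=1.0):
    """Subtract a penalty proportional to how far the visited state-action
    falls below a log-density threshold (out-of-distribution suppression)."""
    logp = kde_logdensity(state_action, data)
    penalty = max(0.0, threshold - logp)  # zero for in-distribution points
    return reward - lam * penalty
```

An in-distribution point keeps its reward unchanged, while a point far from the dataset is penalized, which discourages the learned policy from exploiting model predictions in regions the data does not cover.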

Cite this Paper


BibTeX
@InProceedings{pmlr-v297-tumay26a, title = {Guardian-regularized Safe Offline Reinforcement Learning for Smart Weaning of Mechanical Circulatory Devices}, author = {Tumay, Aysin and Sun, Sophia and Fereidooni, Sonia and Dumas, Aaron and Jortberg, Elise and Yu, Rose}, booktitle = {Proceedings of the Fifth Machine Learning for Health Symposium}, pages = {1269--1296}, year = {2026}, editor = {Argaw, Peniel and Zhang, Haoran and Jabbour, Sarah and Chandak, Payal and Ji, Jerry and Mukherjee, Sumit and Salaudeen, Olawale and Chang, Trenton and Healey, Elizabeth and Gröger, Fabian and Adibi, Amin and Hegselmann, Stefan and Wild, Benjamin and Noori, Ayush}, volume = {297}, series = {Proceedings of Machine Learning Research}, month = {13--14 Dec}, publisher = {PMLR}, pdf = {https://raw.githubusercontent.com/mlresearch/v297/main/assets/tumay26a/tumay26a.pdf}, url = {https://proceedings.mlr.press/v297/tumay26a.html}, abstract = {We study the sequential decision-making problem for automated weaning of mechanical circulatory support ({MCS}) devices in cardiogenic shock patients. {MCS} devices are percutaneous micro-axial flow pumps that provide left ventricular unloading and forward blood flow, but current weaning strategies vary significantly across care teams and lack data-driven approaches. Offline reinforcement learning ({RL}) has proven successful in sequential decision-making tasks, but our setting presents challenges for training and evaluating traditional offline {RL} methods: the prohibition of online patient interaction, highly uncertain circulatory dynamics due to concurrent treatments, and limited data availability. We developed an end-to-end machine learning framework with two key contributions: (1) Clinically-aware OOD-regularized Model-based Policy Optimization ({CORMPO}), a density-regularized offline {RL} algorithm for out-of-distribution suppression that also incorporates clinically informed reward shaping, and (2) a Transformer-based probabilistic digital twin that models {MCS} circulatory dynamics for policy evaluation with rich physiological and clinical metrics. We prove that {CORMPO} achieves theoretical performance guarantees under mild assumptions. {CORMPO} attains a 28% higher reward than offline {RL} baselines and 82.6% higher scores on clinical metrics on real and synthetic datasets. Our approach offers a principled framework for safe offline policy learning in high-stakes medical applications where domain expertise and safety constraints are essential.} }
Endnote
%0 Conference Paper %T Guardian-regularized Safe Offline Reinforcement Learning for Smart Weaning of Mechanical Circulatory Devices %A Aysin Tumay %A Sophia Sun %A Sonia Fereidooni %A Aaron Dumas %A Elise Jortberg %A Rose Yu %B Proceedings of the Fifth Machine Learning for Health Symposium %C Proceedings of Machine Learning Research %D 2026 %E Peniel Argaw %E Haoran Zhang %E Sarah Jabbour %E Payal Chandak %E Jerry Ji %E Sumit Mukherjee %E Olawale Salaudeen %E Trenton Chang %E Elizabeth Healey %E Fabian Gröger %E Amin Adibi %E Stefan Hegselmann %E Benjamin Wild %E Ayush Noori %F pmlr-v297-tumay26a %I PMLR %P 1269--1296 %U https://proceedings.mlr.press/v297/tumay26a.html %V 297 %X We study the sequential decision-making problem for automated weaning of mechanical circulatory support (MCS) devices in cardiogenic shock patients. MCS devices are percutaneous micro-axial flow pumps that provide left ventricular unloading and forward blood flow, but current weaning strategies vary significantly across care teams and lack data-driven approaches. Offline reinforcement learning (RL) has proven successful in sequential decision-making tasks, but our setting presents challenges for training and evaluating traditional offline RL methods: the prohibition of online patient interaction, highly uncertain circulatory dynamics due to concurrent treatments, and limited data availability. We developed an end-to-end machine learning framework with two key contributions: (1) Clinically-aware OOD-regularized Model-based Policy Optimization (CORMPO), a density-regularized offline RL algorithm for out-of-distribution suppression that also incorporates clinically informed reward shaping, and (2) a Transformer-based probabilistic digital twin that models MCS circulatory dynamics for policy evaluation with rich physiological and clinical metrics. We prove that CORMPO achieves theoretical performance guarantees under mild assumptions. CORMPO attains a 28% higher reward than offline RL baselines and 82.6% higher scores on clinical metrics on real and synthetic datasets. Our approach offers a principled framework for safe offline policy learning in high-stakes medical applications where domain expertise and safety constraints are essential.
APA
Tumay, A., Sun, S., Fereidooni, S., Dumas, A., Jortberg, E. & Yu, R. (2026). Guardian-regularized Safe Offline Reinforcement Learning for Smart Weaning of Mechanical Circulatory Devices. Proceedings of the Fifth Machine Learning for Health Symposium, in Proceedings of Machine Learning Research 297:1269-1296. Available from https://proceedings.mlr.press/v297/tumay26a.html.
