Semi-Markov Offline Reinforcement Learning for Healthcare

Mehdi Fatemi, Mary Wu, Jeremy Petch, Walter Nelson, Stuart J Connolly, Alexander Benz, Anthony Carnicelli, Marzyeh Ghassemi
Proceedings of the Conference on Health, Inference, and Learning, PMLR 174:119-137, 2022.

Abstract

Reinforcement learning (RL) tasks are typically framed as Markov Decision Processes (MDPs), assuming that decisions are made at fixed time intervals. However, many applications of great importance, including healthcare, do not satisfy this assumption, yet they are commonly modelled as MDPs after an artificial reshaping of the data. In addition, most healthcare (and similar) problems are \emph{offline} by nature, allowing for only retrospective studies. To address both challenges, we begin by discussing the Semi-MDP (SMDP) framework, which formally handles actions of variable timings. We next present a formal way to apply SMDP modifications to nearly any given value-based offline RL method. We use this theory to introduce three SMDP-based offline RL algorithms, namely, SDQN, SDDQN, and SBCQ. We then experimentally demonstrate that only these SMDP-based algorithms learn the optimal policy in variable-time environments, whereas their MDP counterparts do not. Finally, we apply our new algorithms to a real-world offline dataset pertaining to \emph{warfarin dosing for stroke prevention} and demonstrate similar results.
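
To make the "SMDP modification" concrete, here is a minimal sketch of the classical SMDP Q-learning backup; this is the standard textbook form and the paper's exact notation and algorithmic details may differ. If an action $a_t$ taken at state $s_t$ remains in effect for a variable sojourn time $\tau$ before the next decision point $s_{t+\tau}$, the update becomes
\[
Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \Big[ \sum_{k=0}^{\tau-1} \gamma^{k}\, r_{t+k+1} \;+\; \gamma^{\tau} \max_{a'} Q(s_{t+\tau}, a') \;-\; Q(s_t, a_t) \Big],
\]
where the reward is accumulated (and discounted) over the interval and the bootstrap term is discounted by $\gamma^{\tau}$ rather than by a single factor of $\gamma$. Setting $\tau \equiv 1$ recovers the ordinary MDP update, which is why artificially reshaping variable-time clinical data into fixed steps distorts the effective discounting and, as the experiments indicate, can prevent MDP-based methods from recovering the optimal policy.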

Cite this Paper


BibTeX
@InProceedings{pmlr-v174-fatemi22a,
  title     = {Semi-Markov Offline Reinforcement Learning for Healthcare},
  author    = {Fatemi, Mehdi and Wu, Mary and Petch, Jeremy and Nelson, Walter and Connolly, Stuart J and Benz, Alexander and Carnicelli, Anthony and Ghassemi, Marzyeh},
  booktitle = {Proceedings of the Conference on Health, Inference, and Learning},
  pages     = {119--137},
  year      = {2022},
  editor    = {Flores, Gerardo and Chen, George H and Pollard, Tom and Ho, Joyce C and Naumann, Tristan},
  volume    = {174},
  series    = {Proceedings of Machine Learning Research},
  month     = {07--08 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v174/fatemi22a/fatemi22a.pdf},
  url       = {https://proceedings.mlr.press/v174/fatemi22a.html},
  abstract  = {Reinforcement learning (RL) tasks are typically framed as Markov Decision Processes (MDPs), assuming that decisions are made at fixed time intervals. However, many applications of great importance, including healthcare, do not satisfy this assumption, yet they are commonly modelled as MDPs after an artificial reshaping of the data. In addition, most healthcare (and similar) problems are \emph{offline} by nature, allowing for only retrospective studies. To address both challenges, we begin by discussing the Semi-MDP (SMDP) framework, which formally handles actions of variable timings. We next present a formal way to apply SMDP modifications to nearly any given value-based offline RL method. We use this theory to introduce three SMDP-based offline RL algorithms, namely, SDQN, SDDQN, and SBCQ. We then experimentally demonstrate that only these SMDP-based algorithms learn the optimal policy in variable-time environments, whereas their MDP counterparts do not. Finally, we apply our new algorithms to a real-world offline dataset pertaining to \emph{warfarin dosing for stroke prevention} and demonstrate similar results.}
}
Endnote
%0 Conference Paper
%T Semi-Markov Offline Reinforcement Learning for Healthcare
%A Mehdi Fatemi
%A Mary Wu
%A Jeremy Petch
%A Walter Nelson
%A Stuart J Connolly
%A Alexander Benz
%A Anthony Carnicelli
%A Marzyeh Ghassemi
%B Proceedings of the Conference on Health, Inference, and Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Gerardo Flores
%E George H Chen
%E Tom Pollard
%E Joyce C Ho
%E Tristan Naumann
%F pmlr-v174-fatemi22a
%I PMLR
%P 119--137
%U https://proceedings.mlr.press/v174/fatemi22a.html
%V 174
%X Reinforcement learning (RL) tasks are typically framed as Markov Decision Processes (MDPs), assuming that decisions are made at fixed time intervals. However, many applications of great importance, including healthcare, do not satisfy this assumption, yet they are commonly modelled as MDPs after an artificial reshaping of the data. In addition, most healthcare (and similar) problems are \emph{offline} by nature, allowing for only retrospective studies. To address both challenges, we begin by discussing the Semi-MDP (SMDP) framework, which formally handles actions of variable timings. We next present a formal way to apply SMDP modifications to nearly any given value-based offline RL method. We use this theory to introduce three SMDP-based offline RL algorithms, namely, SDQN, SDDQN, and SBCQ. We then experimentally demonstrate that only these SMDP-based algorithms learn the optimal policy in variable-time environments, whereas their MDP counterparts do not. Finally, we apply our new algorithms to a real-world offline dataset pertaining to \emph{warfarin dosing for stroke prevention} and demonstrate similar results.
APA
Fatemi, M., Wu, M., Petch, J., Nelson, W., Connolly, S. J., Benz, A., Carnicelli, A., & Ghassemi, M. (2022). Semi-Markov Offline Reinforcement Learning for Healthcare. Proceedings of the Conference on Health, Inference, and Learning, in Proceedings of Machine Learning Research 174:119-137. Available from https://proceedings.mlr.press/v174/fatemi22a.html.