Pragmatic Policy Development via Interpretable Behavior Cloning

Anton Matsson, Yaochen Rao, Heather J. Litman, Fredrik D. Johansson
Proceedings of the Fifth Machine Learning for Health Symposium, PMLR 297:807-825, 2026.

Abstract

Offline reinforcement learning (RL) holds great promise for deriving optimal policies from observational data, but challenges related to interpretability and evaluation limit its practical use in safety-critical domains. Interpretability is hindered by the black-box nature of unconstrained RL policies, while evaluation, which is typically performed off-policy, is sensitive to large deviations from the data-collecting behavior policy, especially when using methods based on importance sampling. To address these challenges, we propose a simple yet practical alternative: deriving treatment policies from the most frequently chosen actions in each patient state, as estimated by an interpretable model of the behavior policy. By using a tree-based model, which is specifically designed to exploit patterns in the data, we obtain a natural grouping of states with respect to treatment. The tree structure ensures interpretability by design, while varying the number of most common actions considered controls the degree of overlap with the behavior policy, enabling reliable off-policy evaluation. This pragmatic approach to policy development standardizes frequent treatment patterns, capturing the collective clinical judgment embedded in the data. Using real-world examples in rheumatoid arthritis and sepsis care, we demonstrate that policies derived under this framework can outperform current practice, offering interpretable alternatives to those obtained via offline RL.
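The core idea in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it assumes patient states have already been discretized (in the paper, by the leaves of an interpretable tree-based behavior model), counts clinicians' actions per state from hypothetical (state, action) pairs, and defines a policy that recommends the k most frequently chosen actions; the state and action names are invented for illustration.

```python
# Sketch of pragmatic policy development via behavior cloning:
# estimate action frequencies per discrete patient state, then recommend
# the k most common actions. Increasing k widens overlap with the
# behavior policy, which helps off-policy evaluation.
from collections import Counter, defaultdict

# Hypothetical observational data: (state, action) pairs. In the paper,
# states would correspond to leaves of a tree-based behavior model.
data = [
    ("low-activity", "methotrexate"), ("low-activity", "methotrexate"),
    ("low-activity", "tnf-inhibitor"),
    ("high-activity", "tnf-inhibitor"), ("high-activity", "tnf-inhibitor"),
    ("high-activity", "methotrexate"), ("high-activity", "rituximab"),
]

# Behavior-policy estimate: empirical action counts in each state.
counts = defaultdict(Counter)
for state, action in data:
    counts[state][action] += 1

def policy(state, k=1):
    """Recommend the k most frequently chosen actions in this state."""
    return [a for a, _ in counts[state].most_common(k)]

print(policy("high-activity", k=1))  # the single most common action
print(policy("high-activity", k=2))  # larger k -> closer to behavior policy
```

With k=1 the policy standardizes the single most frequent treatment pattern per state; intermediate values of k trade off standardization against overlap with current practice.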

Cite this Paper


BibTeX
@InProceedings{pmlr-v297-matsson26a,
  title     = {Pragmatic Policy Development via Interpretable Behavior Cloning},
  author    = {Matsson, Anton and Rao, Yaochen and Litman, Heather J. and Johansson, Fredrik D.},
  booktitle = {Proceedings of the Fifth Machine Learning for Health Symposium},
  pages     = {807--825},
  year      = {2026},
  editor    = {Argaw, Peniel and Zhang, Haoran and Jabbour, Sarah and Chandak, Payal and Ji, Jerry and Mukherjee, Sumit and Salaudeen, Olawale and Chang, Trenton and Healey, Elizabeth and Gröger, Fabian and Adibi, Amin and Hegselmann, Stefan and Wild, Benjamin and Noori, Ayush},
  volume    = {297},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--14 Dec},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v297/main/assets/matsson26a/matsson26a.pdf},
  url       = {https://proceedings.mlr.press/v297/matsson26a.html},
  abstract  = {Offline reinforcement learning ({RL}) holds great promise for deriving optimal policies from observational data, but challenges related to interpretability and evaluation limit its practical use in safety-critical domains. Interpretability is hindered by the black-box nature of unconstrained {RL} policies, while evaluation typically performed off-policy is sensitive to large deviations from the data-collecting behavior policy, especially when using methods based on importance sampling. To address these challenges, we propose a simple yet practical alternative: deriving treatment policies from the most frequently chosen actions in each patient state, as estimated by an interpretable model of the behavior policy. By using a tree-based model, which is specifically designed to exploit patterns in the data, we obtain a natural grouping of states with respect to treatment. The tree structure ensures interpretability by design, while varying the number of most common actions considered controls the degree of overlap with the behavior policy, enabling reliable off-policy evaluation. This pragmatic approach to policy development standardizes frequent treatment patterns, capturing the collective clinical judgment embedded in the data. Using real-world examples in rheumatoid arthritis and sepsis care, we demonstrate that policies derived under this framework can outperform current practice, offering interpretable alternatives to those obtained via offline {RL}.}
}
Endnote
%0 Conference Paper
%T Pragmatic Policy Development via Interpretable Behavior Cloning
%A Anton Matsson
%A Yaochen Rao
%A Heather J. Litman
%A Fredrik D. Johansson
%B Proceedings of the Fifth Machine Learning for Health Symposium
%C Proceedings of Machine Learning Research
%D 2026
%E Peniel Argaw
%E Haoran Zhang
%E Sarah Jabbour
%E Payal Chandak
%E Jerry Ji
%E Sumit Mukherjee
%E Olawale Salaudeen
%E Trenton Chang
%E Elizabeth Healey
%E Fabian Gröger
%E Amin Adibi
%E Stefan Hegselmann
%E Benjamin Wild
%E Ayush Noori
%F pmlr-v297-matsson26a
%I PMLR
%P 807--825
%U https://proceedings.mlr.press/v297/matsson26a.html
%V 297
%X Offline reinforcement learning (RL) holds great promise for deriving optimal policies from observational data, but challenges related to interpretability and evaluation limit its practical use in safety-critical domains. Interpretability is hindered by the black-box nature of unconstrained RL policies, while evaluation typically performed off-policy is sensitive to large deviations from the data-collecting behavior policy, especially when using methods based on importance sampling. To address these challenges, we propose a simple yet practical alternative: deriving treatment policies from the most frequently chosen actions in each patient state, as estimated by an interpretable model of the behavior policy. By using a tree-based model, which is specifically designed to exploit patterns in the data, we obtain a natural grouping of states with respect to treatment. The tree structure ensures interpretability by design, while varying the number of most common actions considered controls the degree of overlap with the behavior policy, enabling reliable off-policy evaluation. This pragmatic approach to policy development standardizes frequent treatment patterns, capturing the collective clinical judgment embedded in the data. Using real-world examples in rheumatoid arthritis and sepsis care, we demonstrate that policies derived under this framework can outperform current practice, offering interpretable alternatives to those obtained via offline RL.
APA
Matsson, A., Rao, Y., Litman, H.J. & Johansson, F.D. (2026). Pragmatic Policy Development via Interpretable Behavior Cloning. Proceedings of the Fifth Machine Learning for Health Symposium, in Proceedings of Machine Learning Research 297:807-825. Available from https://proceedings.mlr.press/v297/matsson26a.html.
