Contextual Bandits with Budgeted Information Reveal

Kyra Gan, Esmaeil Keyvanshokooh, Xueqing Liu, Susan Murphy
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:3970-3978, 2024.

Abstract

Contextual bandit algorithms are commonly used in digital health to recommend personalized treatments. However, to ensure the effectiveness of the treatments, patients are often requested to take actions that have no immediate benefit to them, which we refer to as pro-treatment actions. In practice, clinicians have a limited budget to encourage patients to take these actions and collect additional information. We introduce a novel optimization and learning algorithm to address this problem. This algorithm seamlessly combines the strengths of two algorithmic approaches: 1) an online primal-dual algorithm that decides the optimal timing to reach out to patients, and 2) a contextual bandit learning algorithm that delivers personalized treatment to the patient. We prove that this algorithm admits a sub-linear regret bound. We illustrate the usefulness of this algorithm on both synthetic and real-world data.
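To make the two-component design concrete, the following is a minimal toy sketch of the general idea, not the paper's algorithm: a LinUCB-style contextual bandit picks treatments, while a dual price on the reveal budget (updated in online primal-dual fashion) decides when to spend one unit of budget to collect extra information. All names, the uncertainty-based "value of revealing" proxy, and the numeric constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

T, K, d = 500, 2, 3               # rounds, arms, context dimension (assumed)
B = 100                           # total information-reveal budget (assumed)
theta = rng.normal(size=(K, d))   # hypothetical true arm parameters

# Dual price for the budget constraint (online primal-dual flavor):
# reveal only when the estimated value of the extra information
# exceeds the current price of spending budget.
price, eta = 0.5, 0.05

# Ridge-regression sufficient statistics per arm (LinUCB-style).
A = np.stack([np.eye(d) for _ in range(K)])
b = np.zeros((K, d))

budget_left = B
total_reward = 0.0
for t in range(T):
    x = rng.normal(size=d)
    theta_hat = np.stack([np.linalg.solve(A[k], b[k]) for k in range(K)])
    # Confidence widths: a crude proxy for how much a reveal would help.
    widths = np.array([np.sqrt(x @ np.linalg.solve(A[k], x)) for k in range(K)])
    reveal = budget_left > 0 and widths.max() > price

    # UCB arm choice; a reveal sharpens the estimate (smaller bonus here).
    bonus = widths * (0.5 if reveal else 1.0)
    k = int(np.argmax(theta_hat @ x + bonus))
    r = float(theta[k] @ x + rng.normal(scale=0.1))
    total_reward += r

    # Standard update; a reveal contributes an extra observation.
    A[k] += np.outer(x, x)
    b[k] += r * x
    if reveal:
        budget_left -= 1
        A[k] += np.outer(x, x)
        b[k] += r * x

    # Dual update: raise the price when spending faster than the
    # per-round budget rate B/T, lower it when conserving.
    spend = 1.0 if reveal else 0.0
    price = max(0.0, price + eta * (spend - B / T))
```

The dual price makes the reveal decision adaptive: early on, large confidence widths justify spending budget; as estimates sharpen or budget runs low relative to the B/T pacing rate, the price rises and reveals become rarer.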

Cite this Paper


BibTeX
@InProceedings{pmlr-v238-gan24a,
  title     = {Contextual Bandits with Budgeted Information Reveal},
  author    = {Gan, Kyra and Keyvanshokooh, Esmaeil and Liu, Xueqing and Murphy, Susan},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages     = {3970--3978},
  year      = {2024},
  editor    = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume    = {238},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--04 May},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v238/gan24a/gan24a.pdf},
  url       = {https://proceedings.mlr.press/v238/gan24a.html},
  abstract  = {Contextual bandit algorithms are commonly used in digital health to recommend personalized treatments. However, to ensure the effectiveness of the treatments, patients are often requested to take actions that have no immediate benefit to them, which we refer to as pro-treatment actions. In practice, clinicians have a limited budget to encourage patients to take these actions and collect additional information. We introduce a novel optimization and learning algorithm to address this problem. This algorithm effectively combines the strengths of two algorithmic approaches in a seamless manner, including 1) an online primal-dual algorithm for deciding the optimal timing to reach out to patients, and 2) a contextual bandit learning algorithm to deliver personalized treatment to the patient. We prove that this algorithm admits a sub-linear regret bound. We illustrate the usefulness of this algorithm on both synthetic and real-world data.}
}
Endnote
%0 Conference Paper
%T Contextual Bandits with Budgeted Information Reveal
%A Kyra Gan
%A Esmaeil Keyvanshokooh
%A Xueqing Liu
%A Susan Murphy
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-gan24a
%I PMLR
%P 3970--3978
%U https://proceedings.mlr.press/v238/gan24a.html
%V 238
%X Contextual bandit algorithms are commonly used in digital health to recommend personalized treatments. However, to ensure the effectiveness of the treatments, patients are often requested to take actions that have no immediate benefit to them, which we refer to as pro-treatment actions. In practice, clinicians have a limited budget to encourage patients to take these actions and collect additional information. We introduce a novel optimization and learning algorithm to address this problem. This algorithm effectively combines the strengths of two algorithmic approaches in a seamless manner, including 1) an online primal-dual algorithm for deciding the optimal timing to reach out to patients, and 2) a contextual bandit learning algorithm to deliver personalized treatment to the patient. We prove that this algorithm admits a sub-linear regret bound. We illustrate the usefulness of this algorithm on both synthetic and real-world data.
APA
Gan, K., Keyvanshokooh, E., Liu, X. & Murphy, S. (2024). Contextual Bandits with Budgeted Information Reveal. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:3970-3978. Available from https://proceedings.mlr.press/v238/gan24a.html.