Budgeted and Non-Budgeted Causal Bandits

Vineet Nair, Vishakha Patil, Gaurav Sinha
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, PMLR 130:2017-2025, 2021.

Abstract

Learning good interventions in a causal graph can be modelled as a stochastic multi-armed bandit problem with side-information. First, we study this problem when interventions are more expensive than observations and a budget is specified. If there are no backdoor paths from the intervenable nodes to the reward node, we propose an algorithm that minimizes simple regret by optimally trading off observations and interventions based on the cost of intervention. We also propose an algorithm that accounts for the cost of interventions, utilizes causal side-information, and minimizes the expected cumulative regret without exceeding the budget. Our algorithm performs better than standard algorithms that do not take side-information into account. Finally, we study the problem of learning the best interventions without a budget constraint in general graphs, and give an algorithm that achieves constant expected cumulative regret in terms of the instance parameters when the parent distribution of the reward variable for each intervention is known. Our results are experimentally validated and compared to the best-known bounds in the current literature.
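
To make the budgeted setting concrete, below is a minimal Python sketch of a toy instance: independent binary causes with no backdoor paths to the reward, cheap observations, and expensive interventions. The model (q, w), the fixed 50/50 budget split, and the least-observed-first intervention rule are illustrative assumptions for exposition only, not the paper's algorithm, which chooses the observation/intervention trade-off optimally based on the intervention cost.

import numpy as np

# Toy budgeted causal bandit (illustration only, not the paper's algorithm).
# Assumed model: independent binary causes X_1..X_n with P(X_i = 1) = q_i,
# reward P(Y = 1 | X) = (w . X) / n, and no backdoor paths, so the
# observational mean E[Y | X_i = 1] equals the interventional mean
# E[Y | do(X_i = 1)] and the two kinds of samples can be pooled.

rng = np.random.default_rng(0)
n = 5
q = rng.uniform(0.1, 0.9, size=n)   # observational marginals of each cause
w = rng.uniform(0.0, 1.0, size=n)   # weights in the reward's conditional
mu = (w + q @ w - q * w) / n        # true means E[Y | do(X_i = 1)]

def observe():
    """One observational sample: nature draws X, then Y."""
    x = (rng.random(n) < q).astype(int)
    return x, int(rng.random() < x @ w / n)

def intervene(i):
    """One interventional sample under do(X_i = 1)."""
    x = (rng.random(n) < q).astype(int)
    x[i] = 1
    return int(rng.random() < x @ w / n)

budget, obs_cost, int_cost = 300.0, 1.0, 5.0
obs_budget = budget / 2             # naive fixed split; the paper instead
                                    # tunes this trade-off to int_cost

counts, sums, spent = np.zeros(n), np.zeros(n), 0.0
while spent + obs_cost <= obs_budget:           # phase 1: observe
    x, y = observe()
    counts += x
    sums += x * y                   # credit y to every cause with X_i = 1
    spent += obs_cost

while spent + int_cost <= budget:               # phase 2: intervene
    i = int(np.argmin(counts))      # top up the least-sampled arm
    sums[i] += intervene(i)
    counts[i] += 1
    spent += int_cost

est = np.divide(sums, counts, out=np.zeros(n), where=counts > 0)
best = int(np.argmax(est))
print(f"chose arm {best}; simple regret = {mu.max() - mu[best]:.3f}")

Arms that nature rarely sets to 1 accumulate few observational samples, which is exactly where paying the higher intervention cost is worthwhile; the paper's contribution in this setting is to make that split optimal with respect to the intervention cost.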

Cite this Paper


BibTeX
@InProceedings{pmlr-v130-nair21a,
  title     = {Budgeted and Non-Budgeted Causal Bandits},
  author    = {Nair, Vineet and Patil, Vishakha and Sinha, Gaurav},
  booktitle = {Proceedings of The 24th International Conference on Artificial Intelligence and Statistics},
  pages     = {2017--2025},
  year      = {2021},
  editor    = {Banerjee, Arindam and Fukumizu, Kenji},
  volume    = {130},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--15 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v130/nair21a/nair21a.pdf},
  url       = {https://proceedings.mlr.press/v130/nair21a.html}
}
Endnote
%0 Conference Paper
%T Budgeted and Non-Budgeted Causal Bandits
%A Vineet Nair
%A Vishakha Patil
%A Gaurav Sinha
%B Proceedings of The 24th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2021
%E Arindam Banerjee
%E Kenji Fukumizu
%F pmlr-v130-nair21a
%I PMLR
%P 2017--2025
%U https://proceedings.mlr.press/v130/nair21a.html
%V 130
APA
Nair, V., Patil, V., & Sinha, G. (2021). Budgeted and Non-Budgeted Causal Bandits. Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 130:2017-2025. Available from https://proceedings.mlr.press/v130/nair21a.html.

Related Material

Download PDF: http://proceedings.mlr.press/v130/nair21a/nair21a.pdf