Confounded Budgeted Causal Bandits

Fateme Jamshidi, Jalal Etesami, Negar Kiyavash
Proceedings of the Third Conference on Causal Learning and Reasoning, PMLR 236:423-461, 2024.

Abstract

We study the problem of learning "good" interventions in a stochastic environment modeled by its underlying causal graph, where good interventions are those that maximize the reward. Specifically, we consider a setting with a pre-specified budget constraint in which interventions can have non-uniform costs. We show that this problem can be formulated as maximizing the expected reward of a stochastic multi-armed bandit with side information. We propose an algorithm that minimizes the cumulative regret in general causal graphs: it trades off observations against interventions based on their costs to achieve the optimal reward, and it generalizes state-of-the-art methods by allowing both non-uniform costs and hidden confounders in the causal graph. Furthermore, we develop an algorithm that minimizes the simple regret in the budgeted setting with non-uniform costs, again for general causal graphs. We provide theoretical guarantees, including both upper and lower bounds, as well as empirical evaluations of our algorithms. Our empirical results show that our algorithms outperform the state of the art.
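To make the budgeted trade-off concrete, below is a minimal toy sketch of a cost-aware bandit loop in Python. It is not the paper's algorithm: the cost-normalized UCB index, the specific costs and reward means, and the stopping rule are all illustrative assumptions, meant only to show how a learner can balance a cheap observation action against costlier interventions under a fixed budget.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy budgeted bandit: arm 0 plays the role of a cheap "observe" action,
# arms 1..K-1 are costlier interventions. All quantities below are
# illustrative assumptions, not values from the paper.
costs = np.array([0.1, 1.0, 1.0, 1.0])   # non-uniform action costs
means = np.array([0.3, 0.5, 0.7, 0.4])   # unknown expected Bernoulli rewards
budget = 200.0

K = len(costs)
pulls = np.zeros(K)
reward_sum = np.zeros(K)
spent = 0.0
t = 0

# Pull each arm once to initialize the empirical estimates.
for a in range(K):
    reward_sum[a] += rng.binomial(1, means[a])
    pulls[a] += 1
    spent += costs[a]
    t += 1

# Keep playing while the cheapest action is still affordable.
while spent + costs.min() <= budget:
    t += 1
    # Cost-normalized UCB heuristic: optimism bonus on the reward estimate,
    # then divide by cost, so cheap observations are favored until an
    # intervention proves its worth per unit of budget.
    ucb = reward_sum / pulls + np.sqrt(2 * np.log(t) / pulls)
    affordable = spent + costs <= budget
    index = np.where(affordable, ucb / costs, -np.inf)
    a = int(np.argmax(index))
    reward_sum[a] += rng.binomial(1, means[a])
    pulls[a] += 1
    spent += costs[a]

print("pulls per arm:", pulls)
print("total reward collected:", reward_sum.sum())
```

Under these assumptions, the loop spends most of its early budget on the cheap observation arm and only commits to an intervention once its estimated reward per unit cost dominates; the paper's algorithms make this observation-versus-intervention trade-off with formal regret guarantees.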

Cite this Paper
BibTeX
@InProceedings{pmlr-v236-jamshidi24a,
  title     = {Confounded Budgeted Causal Bandits},
  author    = {Jamshidi, Fateme and Etesami, Jalal and Kiyavash, Negar},
  booktitle = {Proceedings of the Third Conference on Causal Learning and Reasoning},
  pages     = {423--461},
  year      = {2024},
  editor    = {Locatello, Francesco and Didelez, Vanessa},
  volume    = {236},
  series    = {Proceedings of Machine Learning Research},
  month     = {01--03 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v236/jamshidi24a/jamshidi24a.pdf},
  url       = {https://proceedings.mlr.press/v236/jamshidi24a.html},
  abstract  = {We study the problem of learning "good" interventions in a stochastic environment modeled by its underlying causal graph, where good interventions are those that maximize the reward. Specifically, we consider a setting with a pre-specified budget constraint in which interventions can have non-uniform costs. We show that this problem can be formulated as maximizing the expected reward of a stochastic multi-armed bandit with side information. We propose an algorithm that minimizes the cumulative regret in general causal graphs: it trades off observations against interventions based on their costs to achieve the optimal reward, and it generalizes state-of-the-art methods by allowing both non-uniform costs and hidden confounders in the causal graph. Furthermore, we develop an algorithm that minimizes the simple regret in the budgeted setting with non-uniform costs, again for general causal graphs. We provide theoretical guarantees, including both upper and lower bounds, as well as empirical evaluations of our algorithms. Our empirical results show that our algorithms outperform the state of the art.}
}
Endnote
%0 Conference Paper
%T Confounded Budgeted Causal Bandits
%A Fateme Jamshidi
%A Jalal Etesami
%A Negar Kiyavash
%B Proceedings of the Third Conference on Causal Learning and Reasoning
%C Proceedings of Machine Learning Research
%D 2024
%E Francesco Locatello
%E Vanessa Didelez
%F pmlr-v236-jamshidi24a
%I PMLR
%P 423--461
%U https://proceedings.mlr.press/v236/jamshidi24a.html
%V 236
%X We study the problem of learning "good" interventions in a stochastic environment modeled by its underlying causal graph, where good interventions are those that maximize the reward. Specifically, we consider a setting with a pre-specified budget constraint in which interventions can have non-uniform costs. We show that this problem can be formulated as maximizing the expected reward of a stochastic multi-armed bandit with side information. We propose an algorithm that minimizes the cumulative regret in general causal graphs: it trades off observations against interventions based on their costs to achieve the optimal reward, and it generalizes state-of-the-art methods by allowing both non-uniform costs and hidden confounders in the causal graph. Furthermore, we develop an algorithm that minimizes the simple regret in the budgeted setting with non-uniform costs, again for general causal graphs. We provide theoretical guarantees, including both upper and lower bounds, as well as empirical evaluations of our algorithms. Our empirical results show that our algorithms outperform the state of the art.
APA
Jamshidi, F., Etesami, J. & Kiyavash, N. (2024). Confounded Budgeted Causal Bandits. Proceedings of the Third Conference on Causal Learning and Reasoning, in Proceedings of Machine Learning Research 236:423-461. Available from https://proceedings.mlr.press/v236/jamshidi24a.html.