Causal Bandits with Propagating Inference
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:5512-5520, 2018.
Abstract
The bandit problem is a framework for designing sequential experiments, where in each experiment a learner selects an arm A∈A and obtains an observation corresponding to A. Theoretically, the tight regret lower bound for the general bandit problem is polynomial in the number of arms |A|; to overcome this bound, bandit problems with side-information are often considered. Recently, a bandit framework over a causal graph was introduced, where the structure of the causal graph is available as side-information and the arms are identified with interventions on the causal graph. Existing causal bandit algorithms overcome the Ω(√(|A|/T)) simple-regret lower bound; however, they work only when the interventions A are localized around a single node (i.e., when an intervention propagates only to its neighbors). We propose a novel causal bandit algorithm for an arbitrary set of interventions, whose effects can propagate throughout the causal graph. We also show that it achieves an O(√(γ∗ log(|A|T)/T)) regret bound, where γ∗ is determined by the structure of the causal graph. In particular, if the maximum in-degree of the causal graph is bounded by a constant, then γ∗ = O(N²), where N is the number of nodes.
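To make the setting concrete, below is a minimal sketch of the causal bandit interaction described in the abstract, not the paper's algorithm: a small binary causal DAG in which arms are single-node do-interventions whose effects propagate to descendants, and a plain uniform-exploration baseline estimates the intervention maximizing the expected reward. The graph, its parameters (PARENTS, BIAS, LIFT), and the horizon T are all hypothetical choices for illustration.

```python
import random

# Sketch of a causal bandit instance (assumptions, not the paper's method):
# binary nodes on a hypothetical chain X0 -> X1 -> X2 -> Y, arms identified
# with interventions do(X_i = v), reward = value of the designated node Y.

PARENTS = {0: [], 1: [0], 2: [1], 3: [2]}  # node 3 is the reward node Y
BIAS = {0: 0.5, 1: 0.3, 2: 0.3, 3: 0.2}    # P(node = 1 | all parents = 0)
LIFT = 0.4                                  # added P(node = 1) per active parent

def sample(intervention):
    """Draw one observation of all nodes under do(node = value)."""
    node, value = intervention
    values = {}
    for i in sorted(PARENTS):  # topological order for this chain
        if i == node:
            values[i] = value  # the intervention overrides the causal mechanism
        else:
            p = BIAS[i] + LIFT * sum(values[j] for j in PARENTS[i])
            values[i] = 1 if random.random() < min(p, 1.0) else 0
    return values

# The arm set A: do(X_i = v) for every non-reward node i and value v.
ARMS = [(i, v) for i in range(3) for v in (0, 1)]

def uniform_explore(T=6000):
    """Pull each arm T/|A| times; return the empirically best intervention."""
    pulls = T // len(ARMS)
    means = {a: sum(sample(a)[3] for _ in range(pulls)) / pulls for a in ARMS}
    return max(means, key=means.get), means

best, means = uniform_explore()
print("estimated best intervention: do(X%d = %d)" % best)
```

Here an intervention on X0 still influences the reward through the whole chain, which is exactly the "propagating" regime the paper targets; the uniform baseline above pays the Ω(√(|A|/T)) price that the proposed algorithm is designed to beat.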