Analysis of Thompson Sampling for Combinatorial Multi-armed Bandit with Probabilistically Triggered Arms
Proceedings of Machine Learning Research, PMLR 89:1322-1330, 2019.
Abstract
We analyze the regret of combinatorial Thompson sampling (CTS) for the combinatorial multi-armed bandit with probabilistically triggered arms under the semi-bandit feedback setting. We assume that the learner has access to an exact optimization oracle but does not know the expected base arm outcomes beforehand. When the expected reward function is Lipschitz continuous in the expected base arm outcomes, we derive an $O(\sum_{i=1}^m \log T / (p_i \Delta_i))$ regret bound for CTS, where $m$ denotes the number of base arms, $p_i$ denotes the minimum nonzero triggering probability of base arm $i$, and $\Delta_i$ denotes the minimum suboptimality gap of base arm $i$. We also compare CTS with combinatorial upper confidence bound (CUCB) via numerical experiments on a cascading bandit problem.