Multiarmed Bandits With Limited Expert Advice
Proceedings of The 27th Conference on Learning Theory, PMLR 35:107-122, 2014.
Abstract
We consider the problem of minimizing regret in the setting of advice-efficient multiarmed bandits with expert advice. We give an algorithm for the setting of K arms and N experts, out of which we are allowed to query and use only M experts' advice in each round, which has a regret bound of $\tilde{O}\left(\sqrt{\frac{\min\{K, M\}\, N}{M} T}\right)$ after T rounds. We also prove that any algorithm for this problem must have expected regret at least $\tilde{\Omega}\left(\sqrt{\frac{\min\{K, M\}\, N}{M} T}\right)$, thus showing that our upper bound is nearly tight. This solves the COLT 2013 open problem of Seldin et al. (2013).
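As a quick illustration of how the stated bound scales, the following sketch evaluates the quantity $\sqrt{\frac{\min\{K, M\} N}{M} T}$ for a few values of M, ignoring the polylogarithmic factors hidden by the tilde notation. The parameter values are arbitrary, chosen only for illustration.

```python
import math

def regret_bound(K, N, M, T):
    """Illustrative scaling of the upper bound, sqrt(min(K, M) * N / M * T),
    ignoring the polylogarithmic factors hidden by the tilde-O notation."""
    return math.sqrt(min(K, M) * N / M * T)

# How the bound changes as we are allowed to query more experts per round:
for M in (1, 10, 100):
    print(M, round(regret_bound(K=10, N=100, M=M, T=10**6)))
# -> 1 10000
#    10 10000
#    100 3162
```

Note that for M ≤ K the expression simplifies to $\sqrt{NT}$, independent of M; the bound only improves once more than K experts can be queried per round.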