Impossible Tuning Made Possible: A New Expert Algorithm and Its Applications
Proceedings of Thirty Fourth Conference on Learning Theory, PMLR 134:1216-1259, 2021.
Abstract
We resolve the long-standing "impossible tuning" issue for the classic expert problem and show that it is in fact possible to achieve regret $O\big(\sqrt{\ln d \sum_t \ell_{t,i}^2}\big)$ simultaneously for every expert $i$ in a $T$-round $d$-expert problem, where $\ell_{t,i}$ is the loss of expert $i$ in round $t$. Our algorithm is based on the Mirror Descent framework with a correction term and a weighted entropy regularizer. While natural, the algorithm has not been studied before and requires a careful analysis. We also generalize the bound to $O\big(\sqrt{\ln d \sum_t (\ell_{t,i}-m_{t,i})^2}\big)$ for any prediction vector $m_t$ that the learner receives, and recover or improve many existing results by choosing different $m_t$. Furthermore, we use the same framework to create a master algorithm that combines a set of base algorithms and learns the best one with little overhead. The new guarantee of our master algorithm allows us to derive many new results for both the expert problem and, more generally, Online Linear Optimization.
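To make the ingredients concrete, the following minimal Python sketch implements one plausible instantiation of the framework: an optimistic, entropy-regularized mirror-descent step over the simplex with a second-order correction term. The fixed per-expert rates `eta`, the correction form `eta_i * (loss_i - m_i)**2`, and all function names are illustrative assumptions for exposition; the paper's actual algorithm uses a weighted entropy regularizer with carefully set per-expert learning rates, which is what the abstract's guarantee hinges on.

```python
import numpy as np

def normalize(v):
    return v / v.sum()

def play(w_hat, m, eta):
    # Optimistic step: tilt the maintained weights toward the prediction m_t.
    return normalize(w_hat * np.exp(-eta * m))

def update(w_hat, loss, m, eta):
    # Correction term (assumed form): charge each expert its squared
    # prediction error, mirroring the (ell_{t,i} - m_{t,i})^2 quantities
    # that appear inside the regret bound.
    a = eta * (loss - m) ** 2
    # Entropy-regularized mirror-descent step = multiplicative-weights update.
    return normalize(w_hat * np.exp(-eta * (loss + a)))

d = 10
eta = np.full(d, 0.1)          # fixed rates for illustration only; adaptive
                               # per-expert tuning is what the paper achieves
w_hat = np.full(d, 1.0 / d)
rng = np.random.default_rng(0)
total_loss = 0.0
for t in range(1000):
    m = np.zeros(d)            # prediction vector m_t (e.g., last round's loss)
    w = play(w_hat, m, eta)    # weights played in round t
    loss = rng.random(d)       # placeholder losses in [0, 1]
    total_loss += w @ loss
    w_hat = update(w_hat, loss, m, eta)
```

Setting `m` to the previous round's loss vector recovers the familiar "small path-length" flavor of optimistic updates, while `m = 0` reduces the bound to the plain second-order form over $\ell_{t,i}^2$.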