Power of Hints for Online Learning with Movement Costs
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, PMLR 130:2818-2826, 2021.
Abstract
We consider the online linear optimization problem with movement costs, a variant of online learning in which the learner must not only respond to cost vectors c_t with points x_t in order to maintain low regret, but is also penalized for movement by an additional cost \|x_t - x_{t+1}\|^{1+\epsilon} for some \epsilon>0. Classically, simple algorithms that obtain the optimal \sqrt{T} regret are already very stable and do not incur a significant movement cost. However, recent work has shown that when the learning algorithm is provided with weak “hint” vectors that have a positive correlation with the costs, the regret can be significantly improved to \log(T). In this work, we study the stability of such algorithms, and provide matching upper and lower bounds showing that incorporating movement costs results in intricate tradeoffs between \log(T) regret when \epsilon\ge 1 and \sqrt{T} regret when \epsilon=0.
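To make the setting concrete, the following is a minimal sketch (not the paper's hint-based algorithm) of the classical stable baseline the abstract alludes to: online gradient descent on a bounded one-dimensional domain against linear costs c_t x_t, tracking both the regret against the best fixed point in hindsight and the accumulated movement |x_t - x_{t+1}| (the \epsilon = 0 movement cost). All function and variable names here are illustrative assumptions.

```python
import math
import random

def simulate_ogd(T=2000, seed=0):
    """Illustrative online gradient descent on [-1, 1] with linear costs.

    Returns (regret, movement): regret vs. the best fixed point in
    hindsight, and the total epsilon = 0 movement cost sum |x_t - x_{t+1}|.
    This is a generic baseline, not the algorithm from the paper.
    """
    rng = random.Random(seed)
    x = 0.0
    total_cost = 0.0
    movement = 0.0
    cost_sum = 0.0
    for t in range(1, T + 1):
        c = rng.choice([-1.0, 1.0])      # cost vector c_t with |c_t| <= 1
        total_cost += c * x              # learner pays c_t * x_t
        cost_sum += c
        eta = 1.0 / math.sqrt(t)         # standard O(1/sqrt(t)) step size
        x_new = max(-1.0, min(1.0, x - eta * c))  # projected gradient step
        movement += abs(x_new - x)       # epsilon = 0 movement cost
        x = x_new
    # Best fixed comparator: x* = -sign(sum_t c_t), paying -|sum_t c_t|.
    regret = total_cost - (-abs(cost_sum))
    return regret, movement
```

Because the step sizes decay as 1/\sqrt{t}, the total movement is at most about 2\sqrt{T}, of the same order as the \sqrt{T} regret bound; this is the sense in which the classical algorithm is "already very stable."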