Pure entropic regularization for metrical task systems
Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:835-848, 2019.
Abstract
We show that on every $n$-point HST metric, there is a randomized online algorithm for metrical task systems (MTS) that is $1$-competitive for service costs and $O(\log n)$-competitive for movement costs. In general, these refined guarantees are optimal up to the implicit constant. While an $O(\log n)$-competitive algorithm for MTS on HST metrics was developed by Bubeck et al. (2018), that approach could only establish an $O((\log n)^2)$-competitive ratio when the service costs are required to be $O(1)$-competitive. Our algorithm is an instantiation of online mirror descent with the regularizer derived from a multiscale conditional entropy. In fact, our algorithm satisfies a set of even more refined guarantees; we are able to exploit this property to combine it with known random embedding theorems and obtain, for {\em any} $n$-point metric space, a randomized algorithm that is $1$-competitive for service costs and $O((\log n)^2)$-competitive for movement costs.
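For intuition about the mirror descent machinery, the single-scale analogue of an entropic regularizer on the probability simplex yields a multiplicative-weights update. The sketch below is a minimal illustration of that flat special case, not the paper's algorithm: it omits the multiscale conditional structure over the HST, and the function name, step size `eta`, and cost vector are all illustrative assumptions.

```python
import numpy as np

def entropic_md_step(x, service_cost, eta=0.1):
    """One mirror descent step on the probability simplex with the
    negative-entropy regularizer sum_i x_i log x_i -- a single-scale,
    illustrative analogue of the paper's multiscale conditional entropy.

    x            -- current distribution over the n metric-space points
    service_cost -- vector of service costs revealed this round
    eta          -- step size (illustrative choice)
    """
    y = x * np.exp(-eta * service_cost)  # gradient step in dual (log) coordinates
    return y / y.sum()                   # Bregman projection back onto the simplex
```

In this picture the randomized algorithm maintains a distribution over the points, its expected service cost in a round is the inner product of the distribution with the cost vector, and its movement cost is governed by how far the distribution moves between rounds; controlling that movement on an HST is where the multiscale conditional entropy of the paper comes in.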