Open Problem: Parameter-Free and Scale-Free Online Algorithms
29th Annual Conference on Learning Theory, PMLR 49:1659-1664, 2016.
Abstract
Existing vanilla algorithms for online linear optimization have $O((\eta R(u) + 1/\eta)\sqrt{T})$ regret with respect to any competitor $u$, where $R(u)$ is a 1-strongly convex regularizer and $\eta > 0$ is a tuning parameter of the algorithm. For certain decision sets and regularizers, the so-called \emph{parameter-free} algorithms have $\widetilde{O}(\sqrt{R(u)\,T})$ regret with respect to any competitor $u$. Vanilla algorithms can achieve the same bound only for a fixed competitor $u$ known ahead of time, by setting $\eta = 1/\sqrt{R(u)}$. A drawback of both vanilla and parameter-free algorithms is that they assume that the norm of the loss vectors is bounded by a constant known to the algorithm. There exist \emph{scale-free} algorithms that have $O((\eta R(u) + 1/\eta)\sqrt{T} \max_{1 \le t \le T} \|\ell_t\|)$ regret with respect to any competitor $u$ and any sequence of loss vectors $\ell_1, \dots, \ell_T$. A parameter-free analogue of scale-free algorithms has never been designed. Is it possible to design algorithms that are simultaneously \emph{parameter-free} and \emph{scale-free}?
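To see why the fixed tuning $\eta = 1/\sqrt{R(u)}$ recovers the parameter-free rate for a single known competitor, one can minimize the vanilla bound over $\eta$; a short worked step via the AM–GM inequality:

```latex
% AM--GM: \eta R(u) + 1/\eta \ge 2\sqrt{R(u)},
% with equality exactly at \eta = 1/\sqrt{R(u)}. Hence
\min_{\eta > 0}\; \Bigl(\eta R(u) + \tfrac{1}{\eta}\Bigr)\sqrt{T}
  \;=\; 2\sqrt{R(u)}\,\sqrt{T} \;=\; O\bigl(\sqrt{R(u)\,T}\bigr),
% matching the parameter-free bound up to logarithmic factors,
% but only for the one competitor u whose R(u) is known in advance.
```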
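For intuition about the scale-free property, here is a minimal sketch of online gradient descent with an AdaGrad-style step size on a Euclidean ball of radius $D$. The decision set, the name `scale_free_ogd`, and the step-size rule are illustrative assumptions, not the specific algorithms the open problem refers to. Rescaling every loss vector by the same positive constant leaves all predictions unchanged, so the regret scales linearly with $\max_t \|\ell_t\|$:

```python
import numpy as np

def scale_free_ogd(loss_vectors, D=1.0):
    """Illustrative sketch: online gradient descent on the ball
    {x : ||x|| <= D} with AdaGrad-style step size
        eta_t = D / sqrt(sum_{s <= t} ||l_s||^2).
    Rescaling all losses by c > 0 rescales eta_t by 1/c, so every
    prediction x_t is unchanged: the algorithm is scale-free."""
    d = len(loss_vectors[0])
    x = np.zeros(d)                      # prediction x_1 = 0
    sum_sq = 0.0                         # running sum of ||l_s||^2
    cumulative_loss = 0.0
    for l in loss_vectors:
        l = np.asarray(l, dtype=float)
        cumulative_loss += float(l @ x)  # suffer linear loss <l_t, x_t>
        sum_sq += float(l @ l)
        if sum_sq > 0.0:                 # skip update on all-zero losses
            x = x - (D / np.sqrt(sum_sq)) * l
        nrm = np.linalg.norm(x)
        if nrm > D:                      # project back onto the ball
            x *= D / nrm
    return cumulative_loss

# Scale-invariance check: scaling all losses by 10 multiplies the
# cumulative loss by exactly 10, because the predictions are identical.
rng = np.random.default_rng(0)
ls = [rng.standard_normal(3) for _ in range(100)]
print(scale_free_ogd(ls), scale_free_ogd([10 * l for l in ls]) / 10)
```

Note that this sketch is scale-free but not parameter-free: its guarantee still depends on the fixed radius $D$ of the decision set, which is precisely the tension the open problem asks about.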