Open Problem: Better Bounds for Online Logistic Regression
Proceedings of the 25th Annual Conference on Learning Theory, PMLR 23:44.1-44.3, 2012.
Abstract
Known algorithms applied to online logistic regression on a feasible set of $L_2$ diameter $D$ achieve regret bounds like $O(e^D \log T)$ in one dimension, but we show a bound of $O(\sqrt{D} + \log T)$ is possible in a binary 1-dimensional problem. Thus, we pose the following question: Is it possible to achieve a regret bound for online logistic regression that is $O(\mathrm{poly}(D) \log(T))$? Even if this is not possible in general, it would be interesting to have a bound that reduces to our bound in the one-dimensional case.
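To make the setting concrete, the following is a minimal Python sketch (not from the paper) of the online logistic regression protocol it refers to: a learner plays a parameter from an interval of diameter $D$, suffers logistic loss on each example, and regret is measured against the best fixed parameter in hindsight. The learner shown here is plain projected online gradient descent, which only illustrates the protocol and the regret definition; it does not achieve the improved bounds discussed above. The step-size schedule and the grid search for the comparator are illustrative choices, not prescribed by the source.

```python
import numpy as np

def logistic_loss(w, x, y):
    # Logistic loss for a label y in {-1, +1}: log(1 + exp(-y * w * x))
    return np.log1p(np.exp(-y * w * x))

def online_gradient_descent(xs, ys, D, eta=0.5):
    """Projected online gradient descent on 1-d logistic losses.

    The feasible set is the interval [-D/2, D/2] (L_2 diameter D).
    Returns the learner's cumulative loss over the sequence.
    """
    w, total_loss = 0.0, 0.0
    for t, (x, y) in enumerate(zip(xs, ys), start=1):
        total_loss += logistic_loss(w, x, y)
        # Gradient of log(1 + exp(-y*w*x)) with respect to w
        grad = -y * x / (1.0 + np.exp(y * w * x))
        # Decaying-step gradient update, then project back onto [-D/2, D/2]
        w -= (eta / np.sqrt(t)) * grad
        w = float(np.clip(w, -D / 2, D / 2))
    return total_loss

def best_fixed_loss(xs, ys, D, grid=10_000):
    # Cumulative loss of the best fixed comparator in the feasible interval,
    # approximated by a dense grid search (hypothetical helper for this demo)
    ws = np.linspace(-D / 2, D / 2, grid)
    return min(sum(logistic_loss(w, x, y) for x, y in zip(xs, ys)) for w in ws)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, D = 1000, 4.0
    xs = np.ones(T)                                   # binary 1-d problem: fixed feature
    ys = rng.choice([-1.0, 1.0], size=T, p=[0.3, 0.7])
    regret = online_gradient_descent(xs, ys, D) - best_fixed_loss(xs, ys, D)
    print(f"regret after T={T} rounds: {regret:.3f}")
```

In this 1-d binary setting, the open problem asks whether an algorithm can guarantee regret polynomial in $D$ and logarithmic in $T$, rather than the exponential dependence on $D$ that standard logarithmic-regret analyses incur.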