Unconstrained Online Linear Learning in Hilbert Spaces: Minimax Algorithms and Normal Approximations
Proceedings of The 27th Conference on Learning Theory, PMLR 35:1020-1039, 2014.
Abstract
We study algorithms for online linear optimization in Hilbert spaces, focusing on the case where the player is unconstrained. We develop a novel characterization of a large class of minimax algorithms, recovering, and even improving, several previous results as immediate corollaries. Moreover, using our tools, we develop an algorithm that provides a regret bound of $O\bigl(U \sqrt{T} \log( U \sqrt{T} \log^2 T + 1)\bigr)$, where $U$ is the $L_2$ norm of an arbitrary comparator and both $T$ and $U$ are unknown to the player. This bound is optimal up to $\sqrt{\log \log T}$ terms. When $T$ is known, we derive an algorithm with an optimal regret bound (up to constant factors). For both the known and unknown $T$ case, a Normal approximation to the conditional value of the game proves to be the key analysis tool.
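For context, a minimal sketch of the quantity being bounded, using standard notation for unconstrained online linear optimization; the symbols $w_t$, $g_t$, and $u$ are not defined in the abstract and are assumed here:
\[
  \mathrm{Regret}_T(u) \;=\; \sum_{t=1}^{T} \langle g_t, w_t \rangle \;-\; \sum_{t=1}^{T} \langle g_t, u \rangle,
  \qquad U = \|u\|_2,
\]
where at each round $t$ the player picks $w_t$ in the Hilbert space, the adversary reveals a linear loss $g_t$, and regret is measured against an arbitrary (unconstrained) comparator $u$. The abstract's bound then holds for all $u$ simultaneously, with $U$ and $T$ unknown to the player.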