Generalized Mixability via Entropic Duality
Proceedings of The 28th Conference on Learning Theory, PMLR 40:1501-1522, 2015.
Abstract
Mixability is a property of a loss which characterizes when constant regret is possible in the game of prediction with expert advice. We show that a key property of mixability generalizes, and the exp and log operations present in the usual theory are not as special as one might have thought. In doing so we introduce a more general notion of Φ-mixability where Φ is a general entropy (i.e., any convex function on probabilities). We show how a property shared by the convex dual of any such entropy yields a natural algorithm (the minimizer of a regret bound) which, analogous to the classical Aggregating Algorithm, is guaranteed a constant regret when used with Φ-mixable losses. We characterize which Φ have non-trivial Φ-mixable losses and relate Φ-mixability and its associated Aggregating Algorithm to potential-based methods, a Blackwell-like condition, mirror descent, and risk measures from finance. We also define a notion of “dominance” between different entropies in terms of the bounds they guarantee and conjecture that classical mixability gives optimal bounds, for which we provide some supporting empirical evidence.
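For orientation, the classical exp/log condition that Φ-mixability generalizes can be sketched as follows; the notation (loss ℓ, expert predictions a_1, …, a_N, weights p, learning rate η) is standard in the prediction-with-expert-advice literature and is introduced here only for illustration, not taken from the abstract itself. A loss ℓ is η-mixable if, for every weight vector p on the N experts and every tuple of expert predictions a_1, …, a_N, there exists a single prediction a with

\[
\ell(a, y) \;\le\; -\frac{1}{\eta} \log \sum_{i=1}^{N} p_i \, e^{-\eta\, \ell(a_i, y)}
\qquad \text{for all outcomes } y.
\]

In the paper's framework, Φ-mixability replaces this exp/log mixture with one built from a general convex entropy Φ via its convex dual; the classical case corresponds, up to scaling by η, to taking Φ to be the negative Shannon entropy.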