Near-Optimal Herding
Proceedings of The 27th Conference on Learning Theory, PMLR 35:1165-1182, 2014.
Abstract
Herding is an algorithm of recent interest in the machine learning community, motivated by inference in Markov random fields. It solves the following sampling problem: given a set $\mathcal{X} \subset \mathbb{R}^d$ with mean $\mu$, construct an infinite sequence of points from $\mathcal{X}$ such that, for every $t \geq 1$, the mean of the first $t$ points in that sequence lies within Euclidean distance $O(1/t)$ of $\mu$. The classic Perceptron boundedness theorem implies that such a result actually holds for a wide class of algorithms, although the factors suppressed by the $O(1/t)$ notation are exponential in $d$. Thus, to establish a non-trivial result for the sampling problem, one must carefully analyze the factors suppressed by the $O(1/t)$ error bound. This paper studies the best error that can be achieved for the sampling problem. Known analyses of the Herding algorithm give an error bound that depends on geometric properties of $\mathcal{X}$ but, even under favorable conditions, this bound depends linearly on $d$. We present a new polynomial-time algorithm that solves the sampling problem with error $O\left(\sqrt{d \log^{2.5}|\mathcal{X}|}\,/\,t\right)$, assuming that $\mathcal{X}$ is finite. Our algorithm is based on recent algorithmic results in \textit{discrepancy theory}. We also show that any algorithm for the sampling problem must have error $\Omega(\sqrt{d}/t)$. This implies that our algorithm is optimal to within logarithmic factors.
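For concreteness, the sketch below illustrates the standard greedy Herding rule that the $O(1/t)$ guarantee refers to: maintain a residual $w_t = w_0 + t\mu - \sum_{s \leq t} x_s$, pick the point of $\mathcal{X}$ most aligned with the current residual, and note that whenever $\|w_t\|$ stays bounded (the Perceptron boundedness argument), the running mean is within $O(1/t)$ of $\mu$. This is an illustrative NumPy sketch of that classic rule only, not the paper's near-optimal discrepancy-based algorithm; the function name and demo data are assumptions for the example.

```python
import numpy as np

def herding(X, T):
    """Greedy Herding on a finite point set X (one point per row).

    Maintains the residual w_t = w_0 + t*mu - (sum of chosen points), so the
    mean of the first t picks is within (||w_0|| + ||w_t||)/t of mu; when
    ||w_t|| stays bounded, the error decays like O(1/t).
    """
    mu = X.mean(axis=0)            # target mean of the point set
    w = mu.copy()                  # residual vector driving the greedy choice
    picks = []
    for _ in range(T):
        i = int(np.argmax(X @ w))  # point most aligned with the residual
        picks.append(i)
        w += mu - X[i]             # residual update: w <- w + mu - x_t
    return X[picks]

# Demo: the empirical error of the running mean decays roughly like 1/t.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
S = herding(X, 100)
for t in (10, 50, 100):
    err = np.linalg.norm(S[:t].mean(axis=0) - X.mean(axis=0))
    print(f"t={t:4d}  error={err:.4f}")
```

Each greedy step costs $O(|\mathcal{X}| \cdot d)$ time; the abstract's point is that the constant hidden in this rule's $O(1/t)$ bound can grow linearly in $d$ even under favorable geometry, which the paper's discrepancy-based construction avoids up to logarithmic factors.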