Near-Optimal Herding

Nick Harvey, Samira Samadi
Proceedings of The 27th Conference on Learning Theory, PMLR 35:1165-1182, 2014.

Abstract

Herding is an algorithm of recent interest in the machine learning community, motivated by inference in Markov random fields. It solves the following sampling problem: given a set $\mathcal{X} \subset \mathbb{R}^d$ with mean $\mu$, construct an infinite sequence of points from $\mathcal{X}$ such that, for every $t \ge 1$, the mean of the first $t$ points in that sequence lies within Euclidean distance $O(1/t)$ of $\mu$. The classic Perceptron boundedness theorem implies that such a result actually holds for a wide class of algorithms, although the factors suppressed by the $O(1/t)$ notation are exponential in $d$. Thus, to establish a non-trivial result for the sampling problem, one must carefully analyze the factors suppressed by the $O(1/t)$ error bound. This paper studies the best error that can be achieved for the sampling problem. Known analyses of the Herding algorithm give an error bound that depends on geometric properties of $\mathcal{X}$ but, even under favorable conditions, this bound depends linearly on $d$. We present a new polynomial-time algorithm that solves the sampling problem with error $O\left(\sqrt{d}\,\log^{2.5}|\mathcal{X}| \,/\, t\right)$, assuming that $\mathcal{X}$ is finite. Our algorithm is based on recent algorithmic results in discrepancy theory. We also show that any algorithm for the sampling problem must have error $\Omega(\sqrt{d}/t)$. This implies that our algorithm is optimal to within logarithmic factors.
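To make the sampling problem concrete, the following is a minimal sketch of the classic greedy Herding update (the baseline analyzed in the abstract), not the paper's discrepancy-based algorithm. The toy set X, the variable names, and the error printout are illustrative assumptions, not part of the paper.

```python
import numpy as np

def herding_sequence(X, T):
    """Greedy Herding sketch: emit T points from the finite set X (rows of an
    array) so that the running mean of the emitted points tracks the mean mu
    of X. This illustrates the sampling problem only; it is not the
    discrepancy-based algorithm proposed in the paper."""
    X = np.asarray(X, dtype=float)
    mu = X.mean(axis=0)            # target mean of the set
    w = mu.copy()                  # weight/state vector
    sequence = []
    for t in range(1, T + 1):
        # Greedily pick the point most aligned with the current state vector.
        i = int(np.argmax(X @ w))
        x = X[i]
        sequence.append(x)
        # Perceptron-style update: keeping w bounded is what yields the
        # O(1/t) decay of the running-mean error.
        w = w + mu - x
        err = np.linalg.norm(np.mean(sequence, axis=0) - mu)
        print(f"t={t:3d}  ||running mean - mu|| = {err:.4f}")
    return np.array(sequence)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((50, 5))   # toy finite set in R^5 (illustrative)
    herding_sequence(X, T=20)
```

Running this sketch shows the running-mean error shrinking roughly like $1/t$; the hidden constant is exactly the quantity the paper's analysis and lower bound pin down up to logarithmic factors in $d$.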

Cite this Paper


BibTeX
@InProceedings{pmlr-v35-harvey14,
  title     = {Near-Optimal Herding},
  author    = {Harvey, Nick and Samadi, Samira},
  booktitle = {Proceedings of The 27th Conference on Learning Theory},
  pages     = {1165--1182},
  year      = {2014},
  editor    = {Balcan, Maria Florina and Feldman, Vitaly and Szepesvári, Csaba},
  volume    = {35},
  series    = {Proceedings of Machine Learning Research},
  address   = {Barcelona, Spain},
  month     = {13--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v35/harvey14.pdf},
  url       = {https://proceedings.mlr.press/v35/harvey14.html}
}
APA
Harvey, N. & Samadi, S. (2014). Near-Optimal Herding. Proceedings of The 27th Conference on Learning Theory, in Proceedings of Machine Learning Research 35:1165-1182. Available from https://proceedings.mlr.press/v35/harvey14.html.
