Coresets for Nonparametric Estimation - the Case of DP-Means

Olivier Bachem, Mario Lucic, Andreas Krause

Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:209-217, 2015.
Abstract
Scalable training of Bayesian nonparametric models is a notoriously difficult challenge. We explore the use of coresets, a data summarization technique originating from computational geometry, for this task. Coresets are weighted subsets of the data such that models trained on these coresets are provably competitive with models trained on the full dataset. Coresets sublinear in the dataset size allow for fast approximate inference with provable guarantees. Existing constructions, however, are limited to parametric problems. Using novel techniques in coreset construction we show the existence of coresets for DP-Means, a prototypical nonparametric clustering problem, and provide a practical construction algorithm. We empirically demonstrate that our algorithm allows us to efficiently trade off computation time and approximation error and thus scale DP-Means to large datasets. For instance, with coresets we can obtain a computational speedup of 45x at an approximation error of only 2.4% compared to solving on the full dataset. In contrast, for the same subsample size, the “naive” approach of uniformly subsampling the data incurs an approximation error of 22.5%.
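To make the objective concrete, below is a minimal batch sketch of DP-Means (Kulis & Jordan, 2012), the clustering problem the abstract targets: k-means-style updates, except that a point whose squared distance to every center exceeds a penalty λ opens a new cluster, so the number of clusters is inferred from the data. The weighted cost function illustrates how a weighted coreset plugs into the same objective. This is an illustrative sketch only; function names and the λ values are hypothetical, and the paper's actual coreset construction algorithm is not reproduced here.

```python
import numpy as np

def dp_means(X, lam, max_iter=100):
    """Batch DP-Means: assign each point to its nearest center, but open a
    new cluster whenever the nearest squared distance exceeds lam."""
    centers = [X.mean(axis=0)]                # start with one global cluster
    assign = np.zeros(len(X), dtype=int)
    for _ in range(max_iter):
        for i, x in enumerate(X):
            d2 = [float(((x - c) ** 2).sum()) for c in centers]
            j = int(np.argmin(d2))
            if d2[j] > lam:                   # too far from every center
                centers.append(x.copy())
                j = len(centers) - 1
            assign[i] = j
        # recompute means, dropping clusters that lost all their points
        kept = [j for j in range(len(centers)) if np.any(assign == j)]
        new_centers = [X[assign == j].mean(axis=0) for j in kept]
        relabel = {j: r for r, j in enumerate(kept)}
        assign = np.array([relabel[j] for j in assign])
        if len(new_centers) == len(centers) and all(
                np.allclose(a, b) for a, b in zip(centers, new_centers)):
            centers = new_centers
            break
        centers = new_centers
    return np.array(centers), assign

def dp_means_cost(X, centers, lam, weights=None):
    """DP-Means objective: (weighted) quantization error plus lam per
    cluster. Passing coreset weights evaluates the same objective on a
    small weighted summary instead of the full dataset."""
    w = np.ones(len(X)) if weights is None else np.asarray(weights, dtype=float)
    d2 = ((X[:, None, :] - np.asarray(centers)[None, :, :]) ** 2).sum(-1).min(1)
    return float(w @ d2) + lam * len(centers)
```

On two well-separated blobs, `dp_means` discovers two clusters without being told k, and evaluating `dp_means_cost` on a weighted subset (weights standing in for the points each summary point represents) matches the full-data objective, which is the sense in which a coreset "stands in" for the dataset.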
@InProceedings{pmlr-v37-bachem15,
title = {Coresets for Nonparametric Estimation - the Case of DP-Means},
author = {Olivier Bachem and Mario Lucic and Andreas Krause},
booktitle = {Proceedings of the 32nd International Conference on Machine Learning},
pages = {209--217},
year = {2015},
editor = {Francis Bach and David Blei},
volume = {37},
series = {Proceedings of Machine Learning Research},
address = {Lille, France},
month = {07--09 Jul},
publisher = {PMLR},
pdf = {http://proceedings.mlr.press/v37/bachem15.pdf},
url = {http://proceedings.mlr.press/v37/bachem15.html},
abstract = {Scalable training of Bayesian nonparametric models is a notoriously difficult challenge. We explore the use of coresets - a data summarization technique originating from computational geometry - for this task. Coresets are weighted subsets of the data such that models trained on these coresets are provably competitive with models trained on the full dataset. Coresets sublinear in the dataset size allow for fast approximate inference with provable guarantees. Existing constructions, however, are limited to parametric problems. Using novel techniques in coreset construction we show the existence of coresets for DP-Means - a prototypical nonparametric clustering problem - and provide a practical construction algorithm. We empirically demonstrate that our algorithm allows us to efficiently trade off computation time and approximation error and thus scale DP-Means to large datasets. For instance, with coresets we can obtain a computational speedup of 45x at an approximation error of only 2.4% compared to solving on the full data set. In contrast, for the same subsample size, the “naive” approach of uniformly subsampling the data incurs an approximation error of 22.5%.}
}
Bachem, O., Lucic, M. & Krause, A. (2015). Coresets for Nonparametric Estimation - the Case of DP-Means. Proceedings of the 32nd International Conference on Machine Learning, in PMLR 37:209-217.