Achieving the time of 1-NN, but the accuracy of k-NN

Lirong Xue, Samory Kpotufe
Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, PMLR 84:1628-1636, 2018.

Abstract

We propose a simple approach which, given distributed computing resources, can nearly achieve the accuracy of k-NN prediction, while matching (or improving) the faster prediction time of 1-NN. The approach consists of aggregating denoised 1-NN predictors over a small number of distributed subsamples. We show, both theoretically and experimentally, that small subsample sizes suffice to attain similar performance as k-NN, without sacrificing the computational efficiency of 1-NN.
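The abstract's idea — denoise labels with a k-NN average, then answer queries with cheap 1-NN lookups on small subsamples and aggregate — can be sketched as follows. This is a minimal illustrative reading of the abstract, not the authors' exact algorithm; the function name `subnn_predict`, the parameter choices, and the specific denoising step (k-NN label averaging over the full training set) are assumptions for illustration.

```python
import numpy as np

def subnn_predict(X_train, y_train, X_test,
                  n_subsamples=5, subsample_size=200, k_denoise=5, rng=None):
    """Sketch: aggregate denoised 1-NN predictors over a few subsamples.

    Each subsample point's label is replaced by a k-NN average over the
    full training set (one plausible denoising step); test queries then
    only pay the cost of 1-NN search within each small subsample.
    """
    rng = np.random.default_rng(rng)
    preds = []
    for _ in range(n_subsamples):
        # Draw a small random subsample (in practice, one per machine).
        idx = rng.choice(len(X_train),
                         size=min(subsample_size, len(X_train)), replace=False)
        Xs = X_train[idx]
        # Denoise subsample labels: k-NN average over the full sample.
        d_full = np.linalg.norm(Xs[:, None, :] - X_train[None, :, :], axis=2)
        knn_idx = np.argsort(d_full, axis=1)[:, :k_denoise]
        ys = y_train[knn_idx].mean(axis=1)
        # 1-NN prediction within the small subsample: fast at test time.
        d_test = np.linalg.norm(X_test[:, None, :] - Xs[None, :, :], axis=2)
        preds.append(ys[np.argmin(d_test, axis=1)])
    # Aggregate the per-subsample 1-NN predictions by averaging.
    return np.mean(preds, axis=0)
```

The per-query cost is that of a handful of 1-NN searches over small subsamples (run in parallel on distributed resources), while the denoising step supplies the variance reduction that k-NN would otherwise provide.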

Cite this Paper


BibTeX
@InProceedings{pmlr-v84-xue18a,
  title     = {Achieving the time of 1-NN, but the accuracy of k-NN},
  author    = {Xue, Lirong and Kpotufe, Samory},
  booktitle = {Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics},
  pages     = {1628--1636},
  year      = {2018},
  editor    = {Storkey, Amos and Perez-Cruz, Fernando},
  volume    = {84},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--11 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v84/xue18a/xue18a.pdf},
  url       = {https://proceedings.mlr.press/v84/xue18a.html},
  abstract  = {We propose a simple approach which, given distributed computing resources, can nearly achieve the accuracy of k-NN prediction, while matching (or improving) the faster prediction time of 1-NN. The approach consists of aggregating denoised 1-NN predictors over a small number of distributed subsamples. We show, both theoretically and experimentally, that small subsample sizes suffice to attain similar performance as k-NN, without sacrificing the computational efficiency of 1-NN.}
}
Endnote
%0 Conference Paper
%T Achieving the time of 1-NN, but the accuracy of k-NN
%A Lirong Xue
%A Samory Kpotufe
%B Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2018
%E Amos Storkey
%E Fernando Perez-Cruz
%F pmlr-v84-xue18a
%I PMLR
%P 1628--1636
%U https://proceedings.mlr.press/v84/xue18a.html
%V 84
%X We propose a simple approach which, given distributed computing resources, can nearly achieve the accuracy of k-NN prediction, while matching (or improving) the faster prediction time of 1-NN. The approach consists of aggregating denoised 1-NN predictors over a small number of distributed subsamples. We show, both theoretically and experimentally, that small subsample sizes suffice to attain similar performance as k-NN, without sacrificing the computational efficiency of 1-NN.
APA
Xue, L. & Kpotufe, S. (2018). Achieving the time of 1-NN, but the accuracy of k-NN. Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 84:1628-1636. Available from https://proceedings.mlr.press/v84/xue18a.html.