Achieving the time of 1-NN, but the accuracy of k-NN
Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, PMLR 84:1628-1636, 2018.
We propose a simple approach that, given distributed computing resources, can nearly achieve the accuracy of k-NN prediction while matching (or improving on) the faster prediction time of 1-NN. The approach aggregates denoised 1-NN predictors over a small number of distributed subsamples. We show, both theoretically and experimentally, that small subsample sizes suffice to attain performance similar to that of k-NN, without sacrificing the computational efficiency of 1-NN.
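A minimal sketch of the scheme described above, for classification with brute-force neighbor search: offline, each subsample's labels are denoised by a k-NN vote over the full training set; at prediction time, each subsample answers with a fast 1-NN lookup and the answers are aggregated by majority vote. Function names, the use of brute-force search, and all parameter choices here are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def knn_labels(X, y, queries, k):
    # Majority label among the k nearest points of (X, y), for each query.
    out = np.empty(len(queries), dtype=y.dtype)
    for i, q in enumerate(queries):
        idx = np.argsort(np.linalg.norm(X - q, axis=1))[:k]
        out[i] = np.bincount(y[idx]).argmax()
    return out

def fit_subsamples(X, y, n_sub, sub_size, k):
    # Offline step: draw subsamples; replace each subsampled point's label
    # with the k-NN majority label computed on the FULL training set
    # ("denoising"). Each (Xs, ys) pair would live on one worker.
    subs = []
    for _ in range(n_sub):
        idx = rng.choice(len(X), size=sub_size, replace=False)
        Xs = X[idx]
        ys = knn_labels(X, y, Xs, k)  # denoised labels
        subs.append((Xs, ys))
    return subs

def predict(subs, queries):
    # Prediction step: each subsample votes with its 1-NN label
    # (cheap, since sub_size << n); aggregate by majority vote.
    votes = np.stack([knn_labels(Xs, ys, queries, 1) for Xs, ys in subs])
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```

In a distributed setting the per-subsample 1-NN lookups in `predict` run in parallel, so query latency is that of 1-NN on a small subsample plus a cheap vote.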