DEANN: Speeding up Kernel-Density Estimation using Approximate Nearest Neighbor Search

Matti Karppa, Martin Aumüller, Rasmus Pagh
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:3108-3137, 2022.

Abstract

Kernel Density Estimation (KDE) is a nonparametric method for estimating the shape of a density function, given a set of samples from the distribution. Recently, locality-sensitive hashing, originally proposed as a tool for nearest neighbor search, has been shown to enable fast KDE data structures. However, these approaches do not take advantage of the many other advances that have been made in algorithms for nearest neighbor search. We present an algorithm called Density Estimation from Approximate Nearest Neighbors (DEANN) where we apply Approximate Nearest Neighbor (ANN) algorithms as a black box subroutine to compute an unbiased KDE. The idea is to find points that have a large contribution to the KDE using ANN, compute their contribution exactly, and approximate the remainder with Random Sampling (RS). We present a theoretical argument that supports the idea that an ANN subroutine can speed up the evaluation. Furthermore, we provide a C++ implementation with a Python interface that can make use of an arbitrary ANN implementation as a subroutine for KDE evaluation. We show empirically that our implementation outperforms state-of-the-art implementations on all high-dimensional datasets we considered, and matches the performance of RS in cases where the ANN yields no gains in performance.
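
To make the idea concrete, below is a minimal NumPy sketch of the estimator described in the abstract, for a single query point q: the KDE value KDE(q) = (1/n) * sum_i k(q, x_i) is split into an exact sum over the k approximate near neighbors of q and an unbiased random-sampling estimate of the remaining tail. This is an illustration under stated assumptions, not the authors' implementation: a brute-force scan stands in for an arbitrary ANN index, and the Gaussian kernel, bandwidth h, and parameters k and m are illustrative choices.

```python
# Sketch of the DEANN idea: exact kernel contributions for the approximate
# near neighbors, plus an unbiased random-sampling estimate of the tail.
# The brute-force neighbor search below stands in for an arbitrary ANN index.
import numpy as np

def gaussian_kernel(q, X, h):
    """k(q, x) = exp(-||q - x||^2 / h^2) for each row x of X (one common choice)."""
    d2 = np.sum((X - q) ** 2, axis=1)
    return np.exp(-d2 / h**2)

def deann_kde(q, X, h, k=32, m=64, rng=None):
    """Estimate (1/n) * sum_i k(q, x_i) for one query point q; assumes n > k."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(X)
    # Step 1: find k near neighbors of q (an ANN index would be queried here)
    # and compute their kernel contributions exactly.
    near = np.argsort(np.sum((X - q) ** 2, axis=1))[:k]
    exact = np.sum(gaussian_kernel(q, X[near], h))
    # Step 2: sample m points uniformly from the n - k remaining points and
    # rescale, giving an unbiased estimate of the tail contribution.
    rest = np.setdiff1d(np.arange(n), near)
    sample = rng.choice(rest, size=m, replace=True)
    tail = (n - k) / m * np.sum(gaussian_kernel(q, X[sample], h))
    # Step 3: combine the exact head and the estimated tail.
    return (exact + tail) / n

# Usage on synthetic data:
rng = np.random.default_rng(0)
X = rng.standard_normal((10000, 50))
print(deann_kde(X[0], X, h=5.0, rng=rng))
```

In the paper's library, the brute-force line computing `near` would be replaced by a call into whatever ANN implementation is plugged in; the rest of the estimator is unchanged, which is what makes the ANN subroutine a black box.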

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-karppa22a,
  title     = {DEANN: Speeding up Kernel-Density Estimation using Approximate Nearest Neighbor Search},
  author    = {Karppa, Matti and Aum\"uller, Martin and Pagh, Rasmus},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {3108--3137},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/karppa22a/karppa22a.pdf},
  url       = {https://proceedings.mlr.press/v151/karppa22a.html},
  abstract  = {Kernel Density Estimation (KDE) is a nonparametric method for estimating the shape of a density function, given a set of samples from the distribution. Recently, locality-sensitive hashing, originally proposed as a tool for nearest neighbor search, has been shown to enable fast KDE data structures. However, these approaches do not take advantage of the many other advances that have been made in algorithms for nearest neighbor search. We present an algorithm called Density Estimation from Approximate Nearest Neighbors (DEANN) where we apply Approximate Nearest Neighbor (ANN) algorithms as a black box subroutine to compute an unbiased KDE. The idea is to find points that have a large contribution to the KDE using ANN, compute their contribution exactly, and approximate the remainder with Random Sampling (RS). We present a theoretical argument that supports the idea that an ANN subroutine can speed up the evaluation. Furthermore, we provide a C++ implementation with a Python interface that can make use of an arbitrary ANN implementation as a subroutine for KDE evaluation. We show empirically that our implementation outperforms state-of-the-art implementations on all high-dimensional datasets we considered, and matches the performance of RS in cases where the ANN yields no gains in performance.}
}
Endnote
%0 Conference Paper
%T DEANN: Speeding up Kernel-Density Estimation using Approximate Nearest Neighbor Search
%A Matti Karppa
%A Martin Aumüller
%A Rasmus Pagh
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-karppa22a
%I PMLR
%P 3108--3137
%U https://proceedings.mlr.press/v151/karppa22a.html
%V 151
%X Kernel Density Estimation (KDE) is a nonparametric method for estimating the shape of a density function, given a set of samples from the distribution. Recently, locality-sensitive hashing, originally proposed as a tool for nearest neighbor search, has been shown to enable fast KDE data structures. However, these approaches do not take advantage of the many other advances that have been made in algorithms for nearest neighbor search. We present an algorithm called Density Estimation from Approximate Nearest Neighbors (DEANN) where we apply Approximate Nearest Neighbor (ANN) algorithms as a black box subroutine to compute an unbiased KDE. The idea is to find points that have a large contribution to the KDE using ANN, compute their contribution exactly, and approximate the remainder with Random Sampling (RS). We present a theoretical argument that supports the idea that an ANN subroutine can speed up the evaluation. Furthermore, we provide a C++ implementation with a Python interface that can make use of an arbitrary ANN implementation as a subroutine for KDE evaluation. We show empirically that our implementation outperforms state-of-the-art implementations on all high-dimensional datasets we considered, and matches the performance of RS in cases where the ANN yields no gains in performance.
APA
Karppa, M., Aumüller, M. & Pagh, R. (2022). DEANN: Speeding up Kernel-Density Estimation using Approximate Nearest Neighbor Search. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:3108-3137. Available from https://proceedings.mlr.press/v151/karppa22a.html.
