Comparison-Based Nearest Neighbor Search

Siavash Haghiri, Debarghya Ghoshdastidar, Ulrike von Luxburg
Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, PMLR 54:851-859, 2017.

Abstract

We consider machine learning in a comparison-based setting where we are given a set of points in a metric space, but we have no access to the actual distances between the points. Instead, we can only ask an oracle whether the distance between two points i and j is smaller than the distance between the points i and k. We are concerned with data structures and algorithms to find nearest neighbors based on such comparisons. We focus on a simple yet effective algorithm that recursively splits the space by first selecting two random pivot points and then assigning all other points to the closer of the two (comparison tree). We prove that if the metric space satisfies certain expansion conditions, then with high probability the height of the comparison tree is logarithmic in the number of points, leading to efficient search performance. We also provide an upper bound for the failure probability to return the true nearest neighbor. Experiments show that the comparison tree is competitive with algorithms that have access to the actual distance values, and needs fewer triplet comparisons than its competitors.
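As a rough illustration of the procedure described in the abstract, the sketch below builds and queries such a comparison tree using only triplet comparisons. The oracle interface, the leaf size, and all function names are assumptions made for this example, not the authors' reference implementation.

```python
import random
from functools import reduce

def build_comparison_tree(points, oracle, leaf_size=4):
    """Recursively split `points` using triplet comparisons only.

    oracle(x, p1, p2) returns True iff x is closer to p1 than to p2.
    """
    if len(points) <= leaf_size:
        return {"leaf": True, "points": points}
    p1, p2 = random.sample(points, 2)                    # two random pivot points
    rest = [x for x in points if x != p1 and x != p2]
    left = [x for x in rest if oracle(x, p1, p2)]        # points closer to p1
    right = [x for x in rest if not oracle(x, p1, p2)]   # points closer to p2
    return {
        "leaf": False,
        "pivots": (p1, p2),
        "left": build_comparison_tree(left + [p1], oracle, leaf_size),
        "right": build_comparison_tree(right + [p2], oracle, leaf_size),
    }

def query_comparison_tree(tree, q, oracle):
    """Descend to a leaf, asking one triplet comparison per level."""
    node = tree
    while not node["leaf"]:
        p1, p2 = node["pivots"]
        node = node["left"] if oracle(q, p1, p2) else node["right"]
    return node["points"]

# Simulating the oracle with squared Euclidean distances, purely for
# illustration; in the comparison-based setting the distances themselves
# are never revealed, only the outcomes of the triplet comparisons.
data = [(random.random(), random.random()) for _ in range(1000)]
sqdist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
oracle = lambda x, p1, p2: sqdist(x, p1) < sqdist(x, p2)

tree = build_comparison_tree(data, oracle)
query = (0.5, 0.5)
candidates = query_comparison_tree(tree, query, oracle)
# A final round of triplet comparisons picks the reported neighbor.
nearest = reduce(lambda a, b: a if oracle(query, a, b) else b, candidates)
```

Under the expansion conditions assumed in the paper, the tree height is logarithmic in the number of points with high probability, so a query in this sketch costs a logarithmic number of triplet comparisons during the descent plus a few more to pick the best candidate at the leaf.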

Cite this Paper


BibTeX
@InProceedings{pmlr-v54-haghiri17a,
  title     = {{Comparison-Based Nearest Neighbor Search}},
  author    = {Haghiri, Siavash and Ghoshdastidar, Debarghya and Luxburg, Ulrike von},
  booktitle = {Proceedings of the 20th International Conference on Artificial Intelligence and Statistics},
  pages     = {851--859},
  year      = {2017},
  editor    = {Singh, Aarti and Zhu, Jerry},
  volume    = {54},
  series    = {Proceedings of Machine Learning Research},
  month     = {20--22 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v54/haghiri17a/haghiri17a.pdf},
  url       = {https://proceedings.mlr.press/v54/haghiri17a.html}
}
Endnote
%0 Conference Paper
%T Comparison-Based Nearest Neighbor Search
%A Siavash Haghiri
%A Debarghya Ghoshdastidar
%A Ulrike von Luxburg
%B Proceedings of the 20th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2017
%E Aarti Singh
%E Jerry Zhu
%F pmlr-v54-haghiri17a
%I PMLR
%P 851--859
%U https://proceedings.mlr.press/v54/haghiri17a.html
%V 54
APA
Haghiri, S., Ghoshdastidar, D. & von Luxburg, U. (2017). Comparison-Based Nearest Neighbor Search. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 54:851-859. Available from https://proceedings.mlr.press/v54/haghiri17a.html.

Related Material

Download PDF: http://proceedings.mlr.press/v54/haghiri17a/haghiri17a.pdf