On Efficient Low Distortion Ultrametric Embedding
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:2078-2088, 2020.
Abstract
A classic problem in unsupervised learning and data analysis is to find simple, easy-to-visualize representations of the data that preserve its essential properties. A widely-used method to preserve the underlying hierarchical structure of the data while reducing its complexity is to find an embedding of the data into a tree or an ultrametric, but computing such an embedding on a data set of n points in Ω(log n) dimensions incurs a prohibitive running time of Θ(n²). In this paper, we provide a new algorithm which takes as input a set of points P in ℝ^d, and for every c ≥ 1, runs in time n^{1+ρ/c²} (for some universal constant ρ > 1) to output an ultrametric Δ such that for any two points u, v in P, Δ(u,v) is within a multiplicative factor of 5c of the distance between u and v in the best ultrametric representation of P. Here, the best ultrametric is the ultrametric Δ̃ that minimizes the maximum distance distortion with respect to the ℓ₂ distance, namely that minimizes max_{u,v ∈ P} Δ̃(u,v)/‖u−v‖₂. We complement the above result by showing that under popular complexity-theoretic assumptions, for every constant ε > 0, no algorithm with running time n^{2−ε} can distinguish between inputs in the ℓ∞ metric that admit an isometric ultrametric embedding and those that incur a distortion of 3/2. Finally, we present an empirical evaluation on classic machine learning datasets and show that the output of our algorithm is comparable to the output of the linkage algorithms while achieving a much faster running time.
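To make the Θ(n²) baseline mentioned above concrete, here is a minimal sketch (not the paper's algorithm, and with all names illustrative) of the classic quadratic-time approach: fitting an ultrametric via single linkage, where the ultrametric distance between two points is the weight of the merge that first joins their clusters in a Kruskal-style pass over all pairwise distances.

```python
import numpy as np

def single_linkage_ultrametric(points):
    """Return an n x n ultrametric matrix for the given points.

    Runs Kruskal's algorithm over all O(n^2) pairwise Euclidean
    distances -- the quadratic cost the abstract calls prohibitive.
    The merge height assigned to each newly joined pair satisfies the
    strong triangle inequality d(u,w) <= max(d(u,v), d(v,w)).
    """
    n = len(points)
    # All pairwise distances, sorted: the Theta(n^2) bottleneck.
    edges = sorted(
        (float(np.linalg.norm(points[i] - points[j])), i, j)
        for i in range(n) for j in range(i + 1, n)
    )
    parent = list(range(n))

    def find(x):
        # Union-find with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # Track current cluster members so every newly joined pair can be
    # assigned the current merge height (this makes the output ultrametric).
    members = [[i] for i in range(n)]
    delta = np.zeros((n, n))
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri == rj:
            continue
        for a in members[ri]:
            for b in members[rj]:
                delta[a, b] = delta[b, a] = w
        parent[rj] = ri
        members[ri].extend(members[rj])
        members[rj] = []
    return delta
```

The inner member-pair loop is what forces the quadratic (and, with sorting, slightly super-quadratic) cost; the paper's contribution is a subquadratic n^{1+ρ/c²} algorithm trading this exactness for a 5c-approximation.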