Hierarchical Clustering via Sketches and Hierarchical Correlation Clustering

Danny Vainstein, Vaggos Chatziafratis, Gui Citovsky, Anand Rajagopalan, Mohammad Mahdian, Yossi Azar
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, PMLR 130:559-567, 2021.

Abstract

Recently, Hierarchical Clustering (HC) has been considered through the lens of optimization. In particular, two maximization objectives have been defined. Moseley and Wang defined the \emph{Revenue} objective to handle similarity information given by a weighted graph on the data points (w.l.o.g., $[0,1]$ weights), while Cohen-Addad et al. defined the \emph{Dissimilarity} objective to handle dissimilarity information. In this paper, we prove structural lemmas for both objectives that allow us to convert any HC tree to a tree with a constant number of internal nodes while incurring an arbitrarily small loss in each objective. Although the best-known approximations are 0.585 and 0.667, respectively, using our lemmas we obtain approximations arbitrarily close to 1 if not all weights are small (i.e., there exist constants $\epsilon, \delta$ such that the fraction of weights smaller than $\delta$ is at most $1 - \epsilon$); such instances encompass many metric-based similarity instances, thereby improving upon prior work. Finally, we introduce Hierarchical Correlation Clustering (HCC) to handle instances that contain similarity and dissimilarity information simultaneously. For HCC, we provide a 0.4767-approximation, and for complementary similarity/dissimilarity weights (analogous to $+/-$ correlation clustering), we again present nearly-optimal approximations.
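For context (this recap is not part of the published abstract), the two objectives admit short standard definitions. Writing $T[i \vee j]$ for the subtree of the HC tree $T$ rooted at the least common ancestor of leaves $i$ and $j$, and $|\mathrm{leaves}(\cdot)|$ for a subtree's leaf count, the Moseley-Wang revenue and the Cohen-Addad et al. dissimilarity objectives over $n$ points are

$$\mathrm{rev}(T) = \sum_{i<j} w_{ij}\,\bigl(n - |\mathrm{leaves}(T[i \vee j])|\bigr), \qquad \mathrm{dis}(T) = \sum_{i<j} w_{ij}\,|\mathrm{leaves}(T[i \vee j])|,$$

where $w_{ij}$ is the similarity (resp. dissimilarity) weight of the pair $(i,j)$; both are maximization objectives.

As a minimal illustrative sketch (not code from the paper; the tree encoding and function names are assumptions made here), the revenue objective can be computed for a binary HC tree given as nested tuples of integer leaf ids:

    def mw_revenue(tree, w, n):
        """Moseley-Wang revenue of a binary HC tree (illustrative sketch).
        tree: an int leaf id, or a 2-tuple (left, right) of subtrees.
        w: dict mapping frozenset({i, j}) -> similarity weight in [0, 1].
        n: total number of data points (leaves)."""
        def leaves(t):
            return {t} if isinstance(t, int) else leaves(t[0]) | leaves(t[1])
        def walk(t):
            if isinstance(t, int):
                return 0.0
            left, right = leaves(t[0]), leaves(t[1])
            size = len(left) + len(right)  # leaves under this node
            # Pairs split here have their least common ancestor at this node,
            # so each contributes w_ij * (n - size) to the revenue.
            here = sum(w.get(frozenset((i, j)), 0.0) * (n - size)
                       for i in left for j in right)
            return here + walk(t[0]) + walk(t[1])
        return walk(tree)

    # Example: four points, tree ((0,1),(2,3)); the similar pair (0,1) is
    # separated at a node with 2 leaves, contributing 1.0 * (4 - 2) = 2.0.
    print(mw_revenue(((0, 1), (2, 3)), {frozenset((0, 1)): 1.0}, 4))  # 2.0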

Cite this Paper


BibTeX
@InProceedings{pmlr-v130-vainstein21a,
  title     = {Hierarchical Clustering via Sketches and Hierarchical Correlation Clustering},
  author    = {Vainstein, Danny and Chatziafratis, Vaggos and Citovsky, Gui and Rajagopalan, Anand and Mahdian, Mohammad and Azar, Yossi},
  booktitle = {Proceedings of The 24th International Conference on Artificial Intelligence and Statistics},
  pages     = {559--567},
  year      = {2021},
  editor    = {Banerjee, Arindam and Fukumizu, Kenji},
  volume    = {130},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--15 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v130/vainstein21a/vainstein21a.pdf},
  url       = {https://proceedings.mlr.press/v130/vainstein21a.html}
}
Endnote
%0 Conference Paper
%T Hierarchical Clustering via Sketches and Hierarchical Correlation Clustering
%A Danny Vainstein
%A Vaggos Chatziafratis
%A Gui Citovsky
%A Anand Rajagopalan
%A Mohammad Mahdian
%A Yossi Azar
%B Proceedings of The 24th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2021
%E Arindam Banerjee
%E Kenji Fukumizu
%F pmlr-v130-vainstein21a
%I PMLR
%P 559--567
%U https://proceedings.mlr.press/v130/vainstein21a.html
%V 130
APA
Vainstein, D., Chatziafratis, V., Citovsky, G., Rajagopalan, A., Mahdian, M., & Azar, Y. (2021). Hierarchical Clustering via Sketches and Hierarchical Correlation Clustering. Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 130:559-567. Available from https://proceedings.mlr.press/v130/vainstein21a.html.

Related Material

Download PDF: http://proceedings.mlr.press/v130/vainstein21a/vainstein21a.pdf