Unexpected Effects of Online no-Substitution $k$-means Clustering

Michal Moshkovitz
Proceedings of the 32nd International Conference on Algorithmic Learning Theory, PMLR 132:892-930, 2021.

Abstract

Offline $k$-means clustering has been studied extensively, and algorithms with constant approximation guarantees are available. Online clustering, however, is still largely uncharted. New factors come into play: the order in which the dataset arrives and whether the number of points, $n$, is known in advance. The exact effects of these factors are unknown. In this paper we focus on the online setting where decisions are irreversible: after a point arrives, the algorithm must decide whether to take the point as a center, and this decision is final. How many centers are necessary and sufficient to achieve a constant approximation in this setting? We show upper and lower bounds for all the different cases. These bounds match up to a constant factor and are therefore optimal. For example, for the $k$-means cost with constant $k>1$ and a random arrival order, $\Theta(\log n)$ centers are necessary and sufficient to achieve a constant approximation, while mere a priori knowledge of $n$ reduces the number of centers to a constant. These bounds hold for any distance function that obeys a triangle-type inequality.
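For concreteness, the $k$-means cost of a center set $C$ on a dataset $X$ is $\mathrm{cost}(X, C) = \sum_{x \in X} \min_{c \in C} \lVert x - c \rVert^2$, and the squared Euclidean distance satisfies the triangle-type inequality $d(x, z) \le 2\,(d(x, y) + d(y, z))$. The Python sketch below illustrates only the no-substitution protocol itself: each arriving point is immediately and irrevocably either taken as a center or discarded. The threshold-based acceptance rule is a hypothetical placeholder chosen for illustration; it is not the algorithm analyzed in the paper.

import numpy as np

def online_no_substitution(stream, threshold):
    """Skeleton of the no-substitution online protocol: each arriving
    point is either taken as a center immediately or discarded forever.

    The acceptance rule (take a point whose squared distance to every
    current center exceeds `threshold`) is only an illustrative
    placeholder, not the algorithm from the paper.
    """
    centers = []
    for x in stream:
        x = np.asarray(x, dtype=float)
        # Irreversible decision: once we pass on x, it can never
        # become a center later, and taken centers are never removed.
        if not centers or min(np.sum((x - c) ** 2) for c in centers) > threshold:
            centers.append(x)
    return centers

# Toy usage: two well-separated clusters arriving in random order.
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(5, 0.1, (50, 2))])
rng.shuffle(pts)
print(len(online_no_substitution(pts, threshold=1.0)))  # expect 2 centers

In this protocol the interesting quantity is how many centers an algorithm must take, beyond the target $k$, before its cost is within a constant factor of the optimal offline $k$-means cost; the paper pins this down for every combination of arrival order and prior knowledge of $n$.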

Cite this Paper


BibTeX
@InProceedings{pmlr-v132-moshkovitz21a,
  title     = {Unexpected Effects of Online no-Substitution $k$-means Clustering},
  author    = {Moshkovitz, Michal},
  booktitle = {Proceedings of the 32nd International Conference on Algorithmic Learning Theory},
  pages     = {892--930},
  year      = {2021},
  editor    = {Feldman, Vitaly and Ligett, Katrina and Sabato, Sivan},
  volume    = {132},
  series    = {Proceedings of Machine Learning Research},
  month     = {16--19 Mar},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v132/moshkovitz21a/moshkovitz21a.pdf},
  url       = {https://proceedings.mlr.press/v132/moshkovitz21a.html}
}
Endnote
%0 Conference Paper
%T Unexpected Effects of Online no-Substitution $k$-means Clustering
%A Michal Moshkovitz
%B Proceedings of the 32nd International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2021
%E Vitaly Feldman
%E Katrina Ligett
%E Sivan Sabato
%F pmlr-v132-moshkovitz21a
%I PMLR
%P 892--930
%U https://proceedings.mlr.press/v132/moshkovitz21a.html
%V 132
APA
Moshkovitz, M. (2021). Unexpected Effects of Online no-Substitution $k$-means Clustering. Proceedings of the 32nd International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 132:892-930. Available from https://proceedings.mlr.press/v132/moshkovitz21a.html.