Consistent k-Clustering

Silvio Lattanzi, Sergei Vassilvitskii
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:1975-1984, 2017.

Abstract

The study of online algorithms and competitive analysis provides a solid foundation for studying the quality of irrevocable decision making when the data arrives in an online manner. While in some scenarios the decisions are indeed irrevocable, there are many practical situations when changing a previous decision is not impossible, but simply expensive. In this work we formalize this notion and introduce the consistent k-clustering problem. With points arriving online, the goal is to maintain a constant approximate solution, while minimizing the number of reclusterings necessary. We prove a lower bound, showing that $\Omega(k \log n)$ changes are necessary in the worst case for a wide range of objective functions. On the positive side, we give an algorithm that needs only $O(k^2 \log^4n)$ changes to maintain a constant competitive solution, an exponential improvement on the naive solution of reclustering at every time step. Finally, we show experimentally that our approach performs much better than the theoretical bound, with the number of changes growing approximately as $O(\log n)$.
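To make the problem setting concrete, here is a toy sketch (not the paper's algorithm) of maintaining a k-center style clustering over a stream while counting reclusterings: points are accepted into the current solution as long as they fall within twice the current radius, and only points that break this invariant trigger a full recluster. The `greedy_k_center` helper is Gonzalez's standard 2-approximate farthest-point heuristic; the 1-D points and the factor of 2 are illustrative choices.

```python
def greedy_k_center(points, k):
    """Gonzalez's greedy 2-approximate k-center: repeatedly add the
    point farthest from the centers chosen so far."""
    centers = [points[0]]
    while len(centers) < k and len(centers) < len(points):
        far = max(points, key=lambda p: min(abs(p - c) for c in centers))
        centers.append(far)
    return centers

def consistent_clustering(stream, k):
    """Toy online k-center on 1-D points: keep the current centers as
    long as each new point lies within twice the current radius;
    otherwise recluster from scratch and count the change.  An
    illustration of the consistency objective, not the paper's method."""
    points, centers, changes = [], [], 0
    for p in stream:
        if len(centers) < k:          # first k points become centers
            centers.append(p)
            points.append(p)
            continue
        # radius of the current solution, before the new point arrives
        radius = max(min(abs(q - c) for c in centers) for q in points)
        points.append(p)
        if min(abs(p - c) for c in centers) > 2 * radius:
            centers = greedy_k_center(points, k)   # full recluster
            changes += 1
    return centers, changes
```

On a stream such as `[0, 10, 20, 1, 2, 11, 100]` with `k = 3`, this sketch reclusters only twice over seven arrivals, in contrast to the naive baseline that reclusters at every time step.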

Cite this Paper


BibTeX
@InProceedings{pmlr-v70-lattanzi17a,
  title     = {Consistent k-Clustering},
  author    = {Silvio Lattanzi and Sergei Vassilvitskii},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages     = {1975--1984},
  year      = {2017},
  editor    = {Precup, Doina and Teh, Yee Whye},
  volume    = {70},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--11 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v70/lattanzi17a/lattanzi17a.pdf},
  url       = {https://proceedings.mlr.press/v70/lattanzi17a.html},
  abstract  = {The study of online algorithms and competitive analysis provides a solid foundation for studying the quality of irrevocable decision making when the data arrives in an online manner. While in some scenarios the decisions are indeed irrevocable, there are many practical situations when changing a previous decision is not impossible, but simply expensive. In this work we formalize this notion and introduce the consistent k-clustering problem. With points arriving online, the goal is to maintain a constant approximate solution, while minimizing the number of reclusterings necessary. We prove a lower bound, showing that $\Omega(k \log n)$ changes are necessary in the worst case for a wide range of objective functions. On the positive side, we give an algorithm that needs only $O(k^2 \log^4 n)$ changes to maintain a constant competitive solution, an exponential improvement on the naive solution of reclustering at every time step. Finally, we show experimentally that our approach performs much better than the theoretical bound, with the number of changes growing approximately as $O(\log n)$.}
}
Endnote
%0 Conference Paper
%T Consistent k-Clustering
%A Silvio Lattanzi
%A Sergei Vassilvitskii
%B Proceedings of the 34th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Doina Precup
%E Yee Whye Teh
%F pmlr-v70-lattanzi17a
%I PMLR
%P 1975--1984
%U https://proceedings.mlr.press/v70/lattanzi17a.html
%V 70
%X The study of online algorithms and competitive analysis provides a solid foundation for studying the quality of irrevocable decision making when the data arrives in an online manner. While in some scenarios the decisions are indeed irrevocable, there are many practical situations when changing a previous decision is not impossible, but simply expensive. In this work we formalize this notion and introduce the consistent k-clustering problem. With points arriving online, the goal is to maintain a constant approximate solution, while minimizing the number of reclusterings necessary. We prove a lower bound, showing that $\Omega(k \log n)$ changes are necessary in the worst case for a wide range of objective functions. On the positive side, we give an algorithm that needs only $O(k^2 \log^4 n)$ changes to maintain a constant competitive solution, an exponential improvement on the naive solution of reclustering at every time step. Finally, we show experimentally that our approach performs much better than the theoretical bound, with the number of changes growing approximately as $O(\log n)$.
APA
Lattanzi, S. & Vassilvitskii, S. (2017). Consistent k-Clustering. Proceedings of the 34th International Conference on Machine Learning, in Proceedings of Machine Learning Research 70:1975-1984. Available from https://proceedings.mlr.press/v70/lattanzi17a.html.