Entropy Weighted Power k-Means Clustering

Saptarshi Chakraborty, Debolina Paul, Swagatam Das, Jason Xu
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:691-701, 2020.

Abstract

Despite its well-known shortcomings, k-means remains one of the most widely used approaches to data clustering. Current research continues to tackle its flaws while attempting to preserve its simplicity. Recently, the power k-means algorithm was proposed to avoid poor local minima by annealing through a family of smoother surfaces. However, the approach lacks statistical guarantees and fails in high dimensions when many features are irrelevant. This paper addresses these issues by introducing entropy regularization to learn feature relevance while annealing. We prove consistency of the proposed approach and derive a scalable majorization-minimization algorithm that enjoys closed-form updates and convergence guarantees. In particular, our method retains the same computational complexity of k-means and power k-means, but yields significant improvements over both. Its merits are thoroughly assessed on a suite of real and synthetic data.
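The approach the abstract describes — annealing a power-mean exponent through smoother surfaces while learning feature weights via entropy regularization, with closed-form MM updates — can be sketched as follows. This is an illustrative reimplementation based only on the abstract, not the authors' exact algorithm: the hyperparameter names (`s`, `lam`, `eta`), the annealing schedule, and the farthest-point initialization are all assumptions.

```python
import numpy as np

def entropy_weighted_power_kmeans(X, k, s=-1.0, lam=4.0, eta=1.05, n_iter=100):
    """Illustrative sketch of entropy-weighted power k-means.

    Anneals the power-mean exponent s (s < 0) toward -infinity while
    learning per-feature weights w through an entropy-regularized
    (softmax-style) update. Hyperparameters are assumptions.
    """
    n, p = X.shape
    # Deterministic farthest-point initialization (an assumption,
    # not necessarily the authors' choice).
    centers = [0]
    for _ in range(1, k):
        dist = ((X[:, None, :] - X[centers][None, :, :]) ** 2).sum(-1).min(1)
        centers.append(int(np.argmax(dist)))
    theta = X[centers].astype(float).copy()
    w = np.full(p, 1.0 / p)  # feature weights, sum to 1

    for _ in range(n_iter):
        # Weighted squared distances d_ij = sum_l w_l (x_il - theta_jl)^2.
        diff = X[:, None, :] - theta[None, :, :]
        d = np.einsum('ijl,l->ij', diff ** 2, w) + 1e-12
        # MM responsibilities: gradient weights of the power mean,
        # computed via ratios to the row minimum for numerical stability.
        r = d / d.min(axis=1, keepdims=True)
        phi = r ** (s - 1.0)
        phi /= phi.sum(axis=1, keepdims=True)
        # Closed-form centroid update.
        theta = (phi.T @ X) / phi.sum(axis=0)[:, None]
        # Entropy-regularized feature-weight update: softmax of the
        # negative per-feature dispersions, temperature lam.
        diff = X[:, None, :] - theta[None, :, :]
        D = np.einsum('ij,ijl->l', phi, diff ** 2)
        w = np.exp(-(D - D.min()) / lam)
        w /= w.sum()
        s *= eta  # anneal s (negative) toward -infinity

    labels = np.argmin(np.einsum('ijl,l->ij', diff ** 2, w), axis=1)
    return theta, w, labels
```

Each pass costs the same order of work as a Lloyd's-style k-means iteration (distance computation plus weighted averaging), consistent with the abstract's claim that the method retains the complexity of k-means and power k-means.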

Cite this Paper


BibTeX
@InProceedings{pmlr-v108-chakraborty20a,
  title     = {Entropy Weighted Power k-Means Clustering},
  author    = {Chakraborty, Saptarshi and Paul, Debolina and Das, Swagatam and Xu, Jason},
  booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
  pages     = {691--701},
  year      = {2020},
  editor    = {Chiappa, Silvia and Calandra, Roberto},
  volume    = {108},
  series    = {Proceedings of Machine Learning Research},
  month     = {26--28 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v108/chakraborty20a/chakraborty20a.pdf},
  url       = {https://proceedings.mlr.press/v108/chakraborty20a.html},
  abstract  = {Despite its well-known shortcomings, k-means remains one of the most widely used approaches to data clustering. Current research continues to tackle its flaws while attempting to preserve its simplicity. Recently, the power k-means algorithm was proposed to avoid poor local minima by annealing through a family of smoother surfaces. However, the approach lacks statistical guarantees and fails in high dimensions when many features are irrelevant. This paper addresses these issues by introducing entropy regularization to learn feature relevance while annealing. We prove consistency of the proposed approach and derive a scalable majorization-minimization algorithm that enjoys closed-form updates and convergence guarantees. In particular, our method retains the same computational complexity of k-means and power k-means, but yields significant improvements over both. Its merits are thoroughly assessed on a suite of real and synthetic data.}
}
Endnote
%0 Conference Paper
%T Entropy Weighted Power k-Means Clustering
%A Saptarshi Chakraborty
%A Debolina Paul
%A Swagatam Das
%A Jason Xu
%B Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2020
%E Silvia Chiappa
%E Roberto Calandra
%F pmlr-v108-chakraborty20a
%I PMLR
%P 691--701
%U https://proceedings.mlr.press/v108/chakraborty20a.html
%V 108
%X Despite its well-known shortcomings, k-means remains one of the most widely used approaches to data clustering. Current research continues to tackle its flaws while attempting to preserve its simplicity. Recently, the power k-means algorithm was proposed to avoid poor local minima by annealing through a family of smoother surfaces. However, the approach lacks statistical guarantees and fails in high dimensions when many features are irrelevant. This paper addresses these issues by introducing entropy regularization to learn feature relevance while annealing. We prove consistency of the proposed approach and derive a scalable majorization-minimization algorithm that enjoys closed-form updates and convergence guarantees. In particular, our method retains the same computational complexity of k-means and power k-means, but yields significant improvements over both. Its merits are thoroughly assessed on a suite of real and synthetic data.
APA
Chakraborty, S., Paul, D., Das, S. & Xu, J. (2020). Entropy Weighted Power k-Means Clustering. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 108:691-701. Available from https://proceedings.mlr.press/v108/chakraborty20a.html.