Bregman Power k-Means for Clustering Exponential Family Data

Adithya Vellal, Saptarshi Chakraborty, Jason Q Xu
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:22103-22119, 2022.

Abstract

Recent progress in center-based clustering algorithms combats poor local minima by implicit annealing through a family of generalized means. These methods are variations of Lloyd’s celebrated k-means algorithm, and are most appropriate for spherical clusters such as those arising from Gaussian data. In this paper, we bridge these new algorithmic advances to classical work on hard clustering under Bregman divergences, which enjoy a bijection to exponential family distributions and are thus well-suited for clustering objects arising from a breadth of data generating mechanisms. The elegant properties of Bregman divergences allow us to maintain closed form updates in a simple and transparent algorithm, and moreover lead to new theoretical arguments for establishing finite sample bounds that relax the bounded support assumption made in the existing state of the art. Additionally, we consider thorough empirical analyses on simulated experiments and a case study on rainfall data, finding that the proposed method outperforms existing peer methods in a variety of non-Gaussian data settings.
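The closed-form updates the abstract refers to stem from a key property of Bregman divergences: in Lloyd-style hard clustering, the optimal center of a cluster is simply the arithmetic mean of its points, regardless of which Bregman divergence is used. The following is a minimal sketch of plain Bregman hard clustering (Banerjee-style), not the paper's power-mean annealed variant; the function names `bregman_kmeans` and `gen_kl` are illustrative, and the generalized I-divergence shown here is the Bregman divergence matched to Poisson-type data.

```python
import numpy as np

def gen_kl(x, mu, eps=1e-12):
    """Generalized I-divergence, the Bregman divergence of x*log(x),
    which corresponds to the Poisson exponential family."""
    x = np.maximum(x, eps)
    mu = np.maximum(mu, eps)
    return (x * np.log(x / mu) - x + mu).sum(axis=-1)

def bregman_kmeans(X, k, divergence, iters=50, seed=0):
    """Lloyd-style hard clustering under an arbitrary Bregman divergence.
    The center update is the plain mean -- optimal for every Bregman
    divergence, which is what keeps the updates in closed form."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # Assignment step: nearest center in Bregman divergence, shape (n, k).
        D = divergence(X[:, None, :], centers[None, :, :])
        labels = D.argmin(axis=1)
        # Update step: cluster means (empty clusters keep their old center).
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return labels, centers
```

Swapping `gen_kl` for the squared Euclidean distance recovers ordinary k-means; the paper's contribution is to combine this Bregman framework with the power-mean annealing of power k-means, which this sketch omits.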

Cite this Paper

BibTeX
@InProceedings{pmlr-v162-vellal22a,
  title     = {{B}regman Power k-Means for Clustering Exponential Family Data},
  author    = {Vellal, Adithya and Chakraborty, Saptarshi and Xu, Jason Q},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {22103--22119},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/vellal22a/vellal22a.pdf},
  url       = {https://proceedings.mlr.press/v162/vellal22a.html},
  abstract  = {Recent progress in center-based clustering algorithms combats poor local minima by implicit annealing through a family of generalized means. These methods are variations of Lloyd’s celebrated k-means algorithm, and are most appropriate for spherical clusters such as those arising from Gaussian data. In this paper, we bridge these new algorithmic advances to classical work on hard clustering under Bregman divergences, which enjoy a bijection to exponential family distributions and are thus well-suited for clustering objects arising from a breadth of data generating mechanisms. The elegant properties of Bregman divergences allow us to maintain closed form updates in a simple and transparent algorithm, and moreover lead to new theoretical arguments for establishing finite sample bounds that relax the bounded support assumption made in the existing state of the art. Additionally, we consider thorough empirical analyses on simulated experiments and a case study on rainfall data, finding that the proposed method outperforms existing peer methods in a variety of non-Gaussian data settings.}
}
Endnote
%0 Conference Paper
%T Bregman Power k-Means for Clustering Exponential Family Data
%A Adithya Vellal
%A Saptarshi Chakraborty
%A Jason Q Xu
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-vellal22a
%I PMLR
%P 22103--22119
%U https://proceedings.mlr.press/v162/vellal22a.html
%V 162
%X Recent progress in center-based clustering algorithms combats poor local minima by implicit annealing through a family of generalized means. These methods are variations of Lloyd’s celebrated k-means algorithm, and are most appropriate for spherical clusters such as those arising from Gaussian data. In this paper, we bridge these new algorithmic advances to classical work on hard clustering under Bregman divergences, which enjoy a bijection to exponential family distributions and are thus well-suited for clustering objects arising from a breadth of data generating mechanisms. The elegant properties of Bregman divergences allow us to maintain closed form updates in a simple and transparent algorithm, and moreover lead to new theoretical arguments for establishing finite sample bounds that relax the bounded support assumption made in the existing state of the art. Additionally, we consider thorough empirical analyses on simulated experiments and a case study on rainfall data, finding that the proposed method outperforms existing peer methods in a variety of non-Gaussian data settings.
APA
Vellal, A., Chakraborty, S. & Xu, J. Q. (2022). Bregman Power k-Means for Clustering Exponential Family Data. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:22103-22119. Available from https://proceedings.mlr.press/v162/vellal22a.html.