Finite mixture models do not reliably learn the number of components

Diana Cai, Trevor Campbell, Tamara Broderick
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:1158-1169, 2021.

Abstract

Scientists and engineers are often interested in learning the number of subpopulations (or components) present in a data set. A common suggestion is to use a finite mixture model (FMM) with a prior on the number of components. Past work has shown the resulting FMM component-count posterior is consistent; that is, the posterior concentrates on the true, generating number of components. But consistency requires the assumption that the component likelihoods are perfectly specified, which is unrealistic in practice. In this paper, we add rigor to data-analysis folk wisdom by proving that under even the slightest model misspecification, the FMM component-count posterior diverges: the posterior probability of any particular finite number of components converges to 0 in the limit of infinite data. Contrary to intuition, posterior-density consistency is not sufficient to establish this result. We develop novel sufficient conditions that are more realistic and easily checkable than those common in the asymptotics literature. We illustrate practical consequences of our theory on simulated and real data.
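The sketch below (not code from the paper) illustrates the phenomenon the abstract describes under one set of assumed choices: data are drawn from a two-component mixture of Student-t distributions but fit with Gaussian mixture models, a mild misspecification. BIC-based weights over the number of components K are used only as a crude proxy for the FMM component-count posterior under a uniform prior on K; all names, distributions, and parameter values here are illustrative assumptions, not the paper's experiments.

```python
# Minimal simulation sketch (assumptions, not the paper's method): heavy-tailed
# two-component data fit with Gaussian mixtures, scored by BIC as a rough
# stand-in for the component-count posterior. As n grows, the preferred K
# tends to drift above the true K = 2.
import numpy as np
from scipy import stats
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
K_MAX = 8  # largest number of components considered

def sample_data(n):
    """Two well-separated Student-t components (df=5): 'slightly' non-Gaussian."""
    z = rng.random(n) < 0.5
    x = np.where(z,
                 stats.t.rvs(df=5, loc=-3.0, scale=1.0, size=n, random_state=rng),
                 stats.t.rvs(df=5, loc=+3.0, scale=1.0, size=n, random_state=rng))
    return x.reshape(-1, 1)

def pseudo_posterior_over_k(X):
    """BIC-based weights over K = 1..K_MAX; a crude proxy for the posterior."""
    bics = np.array([GaussianMixture(n_components=k, n_init=3, random_state=0)
                     .fit(X).bic(X) for k in range(1, K_MAX + 1)])
    log_w = -0.5 * (bics - bics.min())   # exp(-BIC/2) approximates marginal likelihood
    w = np.exp(log_w)
    return w / w.sum()

for n in [200, 2000, 20000]:
    w = pseudo_posterior_over_k(sample_data(n))
    print(f"n={n:6d}  weight on K=2: {w[1]:.2f}  preferred K: {np.argmax(w) + 1}")
```

Under this setup one typically sees the preferred K grow with the sample size; the paper's theorem makes the analogous statement rigorous for the exact Bayesian posterior over the number of components.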

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-cai21a,
  title     = {Finite mixture models do not reliably learn the number of components},
  author    = {Cai, Diana and Campbell, Trevor and Broderick, Tamara},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {1158--1169},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/cai21a/cai21a.pdf},
  url       = {https://proceedings.mlr.press/v139/cai21a.html},
  abstract  = {Scientists and engineers are often interested in learning the number of subpopulations (or components) present in a data set. A common suggestion is to use a finite mixture model (FMM) with a prior on the number of components. Past work has shown the resulting FMM component-count posterior is consistent; that is, the posterior concentrates on the true, generating number of components. But consistency requires the assumption that the component likelihoods are perfectly specified, which is unrealistic in practice. In this paper, we add rigor to data-analysis folk wisdom by proving that under even the slightest model misspecification, the FMM component-count posterior diverges: the posterior probability of any particular finite number of components converges to 0 in the limit of infinite data. Contrary to intuition, posterior-density consistency is not sufficient to establish this result. We develop novel sufficient conditions that are more realistic and easily checkable than those common in the asymptotics literature. We illustrate practical consequences of our theory on simulated and real data.}
}
Endnote
%0 Conference Paper
%T Finite mixture models do not reliably learn the number of components
%A Diana Cai
%A Trevor Campbell
%A Tamara Broderick
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-cai21a
%I PMLR
%P 1158--1169
%U https://proceedings.mlr.press/v139/cai21a.html
%V 139
%X Scientists and engineers are often interested in learning the number of subpopulations (or components) present in a data set. A common suggestion is to use a finite mixture model (FMM) with a prior on the number of components. Past work has shown the resulting FMM component-count posterior is consistent; that is, the posterior concentrates on the true, generating number of components. But consistency requires the assumption that the component likelihoods are perfectly specified, which is unrealistic in practice. In this paper, we add rigor to data-analysis folk wisdom by proving that under even the slightest model misspecification, the FMM component-count posterior diverges: the posterior probability of any particular finite number of components converges to 0 in the limit of infinite data. Contrary to intuition, posterior-density consistency is not sufficient to establish this result. We develop novel sufficient conditions that are more realistic and easily checkable than those common in the asymptotics literature. We illustrate practical consequences of our theory on simulated and real data.
APA
Cai, D., Campbell, T. & Broderick, T. (2021). Finite mixture models do not reliably learn the number of components. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:1158-1169. Available from https://proceedings.mlr.press/v139/cai21a.html.