The EM Algorithm gives Sample-Optimality for Learning Mixtures of Well-Separated Gaussians

Jeongyeol Kwon, Constantine Caramanis
Proceedings of Thirty Third Conference on Learning Theory, PMLR 125:2425-2487, 2020.

Abstract

We consider the problem of learning spherical Gaussian Mixture models with $k \geq 3$ components when the components are well separated. A fundamental previous result established that separation of $\Omega(\sqrt{\log k})$ is necessary and sufficient for identifiability of the parameters with \textit{polynomial} sample complexity (Regev and Vijayaraghavan, 2017). In the same context, we show that $\tilde{O} (kd/\epsilon^2)$ samples suffice for any $\epsilon \lesssim 1/k$, closing the gap from polynomial to linear, and thus giving the first optimal sample upper bound for the parameter estimation of well-separated Gaussian mixtures. We accomplish this by proving a new result for the Expectation-Maximization (EM) algorithm: we show that EM converges locally, under separation $\Omega(\sqrt{\log k})$. The previous best-known guarantee required $\Omega(\sqrt{k})$ separation (Yan et al., 2017). Unlike prior work, our results do not assume or use prior knowledge of the (potentially different) mixing weights or variances of the Gaussian components. Furthermore, our results show that the finite-sample error of EM does not depend on non-universal quantities such as pairwise distances between means of Gaussian components.
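For a concrete picture of the iteration analyzed in the paper, below is a minimal sketch (not the authors' code) of EM for a mixture of spherical Gaussians with unknown means, mixing weights, and per-component variances. The function name em_spherical_gmm and the crude random initialization are illustrative assumptions; the paper's guarantees instead assume a warm start under $\Omega(\sqrt{\log k})$ separation.

import numpy as np

def em_spherical_gmm(X, k, n_iters=100, seed=0):
    """EM for a k-component spherical Gaussian mixture on n x d data X.

    Returns estimated mixing weights, means, and per-component variances.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape

    # Crude illustrative initialization: random data points as means, uniform
    # weights, one global variance. (The paper's analysis assumes a warm start
    # inside the local convergence region instead.)
    means = X[rng.choice(n, size=k, replace=False)]
    weights = np.full(k, 1.0 / k)
    variances = np.full(k, X.var())

    for _ in range(n_iters):
        # E-step: responsibilities p(z_i = j | x_i) under the current model.
        sq_dists = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)   # (n, k)
        log_prob = (np.log(weights)[None, :]
                    - 0.5 * d * np.log(2.0 * np.pi * variances)[None, :]
                    - sq_dists / (2.0 * variances[None, :]))
        log_prob -= log_prob.max(axis=1, keepdims=True)   # numerical stability
        resp = np.exp(log_prob)
        resp /= resp.sum(axis=1, keepdims=True)

        # M-step: closed-form updates of weights, means, and spherical variances.
        nk = np.maximum(resp.sum(axis=0), 1e-12)          # effective counts
        weights = nk / n
        means = (resp.T @ X) / nk[:, None]
        sq_dists = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
        variances = (resp * sq_dists).sum(axis=0) / (d * nk)

    return weights, means, variances

On data drawn from a well-separated mixture, replacing the random initialization with means near the true centers matches the local-convergence setting of the paper's main theorem.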

Cite this Paper


BibTeX
@InProceedings{pmlr-v125-kwon20a,
  title     = {The EM Algorithm gives Sample-Optimality for Learning Mixtures of Well-Separated Gaussians},
  author    = {Kwon, Jeongyeol and Caramanis, Constantine},
  booktitle = {Proceedings of Thirty Third Conference on Learning Theory},
  pages     = {2425--2487},
  year      = {2020},
  editor    = {Abernethy, Jacob and Agarwal, Shivani},
  volume    = {125},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--12 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v125/kwon20a/kwon20a.pdf},
  url       = {https://proceedings.mlr.press/v125/kwon20a.html},
  abstract  = {We consider the problem of spherical Gaussian Mixture models with $k \geq 3$ components when the components are well separated. A fundamental previous result established that separation of $\Omega(\sqrt{\log k})$ is necessary and sufficient for identifiability of the parameters with \textit{polynomial} sample complexity (Regev and Vijayaraghavan, 2017). In the same context, we show that $\tilde{O} (kd/\epsilon^2)$ samples suffice for any $\epsilon \lesssim 1/k$, closing the gap from polynomial to linear, and thus giving the first optimal sample upper bound for the parameter estimation of well-separated Gaussian mixtures. We accomplish this by proving a new result for the Expectation-Maximization (EM) algorithm: we show that EM converges locally, under separation $\Omega(\sqrt{\log k})$. The previous best-known guarantee required $\Omega(\sqrt{k})$ separation (Yan, et al., 2017). Unlike prior work, our results do not assume or use prior knowledge of the (potentially different) mixing weights or variances of the Gaussian components. Furthermore, our results show that the finite-sample error of EM does not depend on non-universal quantities such as pairwise distances between means of Gaussian components.}
}
APA
Kwon, J., & Caramanis, C. (2020). The EM Algorithm gives Sample-Optimality for Learning Mixtures of Well-Separated Gaussians. Proceedings of Thirty Third Conference on Learning Theory, in Proceedings of Machine Learning Research 125:2425-2487. Available from https://proceedings.mlr.press/v125/kwon20a.html.