Ten Steps of EM Suffice for Mixtures of Two Gaussians

Constantinos Daskalakis, Christos Tzamos, Manolis Zampetakis
Proceedings of the 2017 Conference on Learning Theory, PMLR 65:704-710, 2017.

Abstract

The Expectation-Maximization (EM) algorithm is a widely used method for maximum likelihood estimation in models with latent variables. For estimating mixtures of Gaussians, its iteration can be viewed as a soft version of the k-means clustering algorithm. Despite its wide use and applications, there are essentially no known convergence guarantees for this method. We provide global convergence guarantees for mixtures of two Gaussians with known covariance matrices. We show that the population version of EM, where the algorithm is given access to infinitely many samples from the mixture, converges geometrically to the correct mean vectors, and provide simple, closed-form expressions for the convergence rate. As a simple illustration, we show that, in one dimension, ten steps of the EM algorithm initialized at infinity result in less than $1\%$ estimation error of the means. In the finite-sample regime, we show that, under a random initialization, $\tilde{O}(d/\varepsilon^2)$ samples suffice to compute the unknown vectors to within $\varepsilon$ in Mahalanobis distance, where $d$ is the dimension. In particular, the error rate of the EM-based estimator is $\tilde{O}\left(\sqrt{d/n}\right)$, where $n$ is the number of samples, which is optimal up to logarithmic factors.
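
To make the iteration concrete, below is a minimal numerical sketch of the sample-based EM update in the simplest setting the abstract alludes to: a balanced one-dimensional mixture $\frac{1}{2}\mathcal{N}(\mu,1)+\frac{1}{2}\mathcal{N}(-\mu,1)$ with known unit variance. In this symmetric case the E-step responsibilities collapse into a hyperbolic tangent, and each EM step reduces to $\mu_{t+1} = \frac{1}{n}\sum_i \tanh(\mu_t x_i)\, x_i$. This is an illustration under those simplifying assumptions (the names em_step, mu_true, etc. are ours), not the general algorithm analyzed in the paper.

    import numpy as np

    def em_step(mu, x):
        # E-step: the posterior weight of the +mu component is sigmoid(2*mu*x),
        # so the signed responsibility 2*w - 1 equals tanh(mu*x).
        # M-step: responsibility-weighted average of the samples.
        return np.mean(np.tanh(mu * x) * x)

    rng = np.random.default_rng(0)
    mu_true, n = 1.0, 100_000
    labels = rng.choice([-1.0, 1.0], size=n)        # latent component signs
    x = labels * mu_true + rng.standard_normal(n)   # draws from the mixture

    mu = 25.0  # very far initialization, mimicking "initialized at infinity"
    for t in range(10):
        mu = em_step(mu, x)
        print(f"step {t + 1:2d}: mu = {mu:.4f}")

Note that for a very large initial $\mu_0$, $\tanh(\mu_0 x) \approx \mathrm{sign}(x)$, so the first step already lands near $\mathbb{E}[|X|]$; this is why an "initialization at infinity" is well defined in one dimension.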

Cite this Paper


BibTeX
@InProceedings{pmlr-v65-daskalakis17b,
  title     = {Ten Steps of EM Suffice for Mixtures of Two Gaussians},
  author    = {Daskalakis, Constantinos and Tzamos, Christos and Zampetakis, Manolis},
  booktitle = {Proceedings of the 2017 Conference on Learning Theory},
  pages     = {704--710},
  year      = {2017},
  editor    = {Kale, Satyen and Shamir, Ohad},
  volume    = {65},
  series    = {Proceedings of Machine Learning Research},
  month     = {07--10 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v65/daskalakis17b/daskalakis17b.pdf},
  url       = {https://proceedings.mlr.press/v65/daskalakis17b.html},
  abstract  = {The Expectation-Maximization (EM) algorithm is a widely used method for maximum likelihood estimation in models with latent variables. For estimating mixtures of Gaussians, its iteration can be viewed as a soft version of the k-means clustering algorithm. Despite its wide use and applications, there are essentially no known convergence guarantees for this method. We provide global convergence guarantees for mixtures of two Gaussians with known covariance matrices. We show that the population version of EM, where the algorithm is given access to infinitely many samples from the mixture, converges geometrically to the correct mean vectors, and provide simple, closed-form expressions for the convergence rate. As a simple illustration, we show that, in one dimension, ten steps of the EM algorithm initialized at infinity result in less than $1\%$ estimation error of the means. In the finite-sample regime, we show that, under a random initialization, $\tilde{O}(d/\varepsilon^2)$ samples suffice to compute the unknown vectors to within $\varepsilon$ in Mahalanobis distance, where $d$ is the dimension. In particular, the error rate of the EM-based estimator is $\tilde{O}\left(\sqrt{d/n}\right)$, where $n$ is the number of samples, which is optimal up to logarithmic factors.}
}
Endnote
%0 Conference Paper
%T Ten Steps of EM Suffice for Mixtures of Two Gaussians
%A Constantinos Daskalakis
%A Christos Tzamos
%A Manolis Zampetakis
%B Proceedings of the 2017 Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2017
%E Satyen Kale
%E Ohad Shamir
%F pmlr-v65-daskalakis17b
%I PMLR
%P 704--710
%U https://proceedings.mlr.press/v65/daskalakis17b.html
%V 65
%X The Expectation-Maximization (EM) algorithm is a widely used method for maximum likelihood estimation in models with latent variables. For estimating mixtures of Gaussians, its iteration can be viewed as a soft version of the k-means clustering algorithm. Despite its wide use and applications, there are essentially no known convergence guarantees for this method. We provide global convergence guarantees for mixtures of two Gaussians with known covariance matrices. We show that the population version of EM, where the algorithm is given access to infinitely many samples from the mixture, converges geometrically to the correct mean vectors, and provide simple, closed-form expressions for the convergence rate. As a simple illustration, we show that, in one dimension, ten steps of the EM algorithm initialized at infinity result in less than $1\%$ estimation error of the means. In the finite-sample regime, we show that, under a random initialization, $\tilde{O}(d/\varepsilon^2)$ samples suffice to compute the unknown vectors to within $\varepsilon$ in Mahalanobis distance, where $d$ is the dimension. In particular, the error rate of the EM-based estimator is $\tilde{O}\left(\sqrt{d/n}\right)$, where $n$ is the number of samples, which is optimal up to logarithmic factors.
APA
Daskalakis, C., Tzamos, C. & Zampetakis, M. (2017). Ten Steps of EM Suffice for Mixtures of Two Gaussians. Proceedings of the 2017 Conference on Learning Theory, in Proceedings of Machine Learning Research 65:704-710. Available from https://proceedings.mlr.press/v65/daskalakis17b.html.