EM Converges for a Mixture of Many Linear Regressions

Jeongyeol Kwon, Constantine Caramanis
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:1727-1736, 2020.

Abstract

We study the convergence of the Expectation-Maximization (EM) algorithm for mixtures of linear regressions with an arbitrary number $k$ of components. We show that as long as the signal-to-noise ratio (SNR) is $\tilde{\Omega}(k)$, well-initialized EM converges to the true regression parameters. Previous results for $k \geq 3$ have only established local convergence for the noiseless setting, i.e., where the SNR is infinitely large. Our results extend the scope to the noisy setting, and notably, we establish a statistical error rate that is independent of the norm (or pairwise distance) of the regression parameters. In particular, our results imply exact recovery as $\sigma \rightarrow 0$, in contrast to most previous local convergence results for EM, where the statistical error scaled with the norm of the parameters. Standard moment-method approaches may be applied to guarantee we are in the region where our local convergence guarantees apply.
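For context, the following is a minimal sketch (in Python with NumPy, not the authors' code) of a single EM iteration for the mixed linear regression model studied in the paper: samples (x_i, y_i) with y_i = <beta_{z_i}, x_i> + Gaussian noise, under the simplifying assumptions of equal mixing weights and a known noise level sigma. The function name em_step and this simplified setup are illustrative, not taken from the paper.

import numpy as np

def em_step(X, y, betas, sigma):
    """One EM iteration. X: (n, d) covariates, y: (n,) responses, betas: (k, d) current estimates."""
    n, d = X.shape
    k = betas.shape[0]
    # E-step: posterior probability that each sample was generated by each component,
    # assuming equal mixing weights and Gaussian noise with known standard deviation sigma.
    residuals = y[:, None] - X @ betas.T              # (n, k) residual under each component
    log_w = -residuals**2 / (2.0 * sigma**2)          # log-likelihood up to a shared constant
    log_w -= log_w.max(axis=1, keepdims=True)         # stabilize before exponentiating
    w = np.exp(log_w)
    w /= w.sum(axis=1, keepdims=True)                 # (n, k) responsibilities
    # M-step: weighted least squares for each component's regression vector.
    new_betas = np.empty_like(betas)
    for j in range(k):
        Wj = w[:, j]
        A = (X * Wj[:, None]).T @ X                   # (d, d) weighted Gram matrix
        b = (X * Wj[:, None]).T @ y                   # (d,) weighted cross term
        # Small ridge term only for numerical safety; not part of the paper's analysis.
        new_betas[j] = np.linalg.solve(A + 1e-8 * np.eye(d), b)
    return new_betas

The paper's guarantee concerns iterating such a step from a sufficiently good initialization (e.g., one produced by a standard moment method) when the SNR is at least on the order of k (up to logarithmic factors).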

Cite this Paper


BibTeX
@InProceedings{pmlr-v108-kwon20a,
  title     = {EM Converges for a Mixture of Many Linear Regressions},
  author    = {Kwon, Jeongyeol and Caramanis, Constantine},
  booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
  pages     = {1727--1736},
  year      = {2020},
  editor    = {Chiappa, Silvia and Calandra, Roberto},
  volume    = {108},
  series    = {Proceedings of Machine Learning Research},
  month     = {26--28 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v108/kwon20a/kwon20a.pdf},
  url       = {https://proceedings.mlr.press/v108/kwon20a.html},
  abstract  = {We study the convergence of the Expectation-Maximization (EM) algorithm for mixtures of linear regressions with an arbitrary number $k$ of components. We show that as long as signal-to-noise ratio (SNR) is $\tilde{\Omega}(k)$, well-initialized EM converges to the true regression parameters. Previous results for $k \geq 3$ have only established local convergence for the noiseless setting, i.e., where SNR is infinitely large. Our results enlarge the scope to the environment with noises, and notably, we establish a statistical error rate that is independent of the norm (or pairwise distance) of the regression parameters. In particular, our results imply exact recovery as $\sigma \rightarrow 0$, in contrast to most previous local convergence results for EM, where the statistical error scaled with the norm of parameters. Standard moment-method approaches may be applied to guarantee we are in the region where our local convergence guarantees apply.}
}
Endnote
%0 Conference Paper
%T EM Converges for a Mixture of Many Linear Regressions
%A Jeongyeol Kwon
%A Constantine Caramanis
%B Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2020
%E Silvia Chiappa
%E Roberto Calandra
%F pmlr-v108-kwon20a
%I PMLR
%P 1727--1736
%U https://proceedings.mlr.press/v108/kwon20a.html
%V 108
%X We study the convergence of the Expectation-Maximization (EM) algorithm for mixtures of linear regressions with an arbitrary number $k$ of components. We show that as long as signal-to-noise ratio (SNR) is $\tilde{\Omega}(k)$, well-initialized EM converges to the true regression parameters. Previous results for $k \geq 3$ have only established local convergence for the noiseless setting, i.e., where SNR is infinitely large. Our results enlarge the scope to the environment with noises, and notably, we establish a statistical error rate that is independent of the norm (or pairwise distance) of the regression parameters. In particular, our results imply exact recovery as $\sigma \rightarrow 0$, in contrast to most previous local convergence results for EM, where the statistical error scaled with the norm of parameters. Standard moment-method approaches may be applied to guarantee we are in the region where our local convergence guarantees apply.
APA
Kwon, J. & Caramanis, C. (2020). EM Converges for a Mixture of Many Linear Regressions. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 108:1727-1736. Available from https://proceedings.mlr.press/v108/kwon20a.html.