Tight Bounds on the Hardness of Learning Simple Nonparametric Mixtures

Wai Ming Tai, Bryon Aragam
Proceedings of Thirty Sixth Conference on Learning Theory, PMLR 195:2849-2849, 2023.

Abstract

We study the problem of learning nonparametric distributions in a finite mixture, and establish tight bounds on the sample complexity for learning the component distributions in such models. Namely, we are given i.i.d. samples from a pdf $f$ where $$f=w_1f_1+w_2f_2, \quad w_1+w_2=1, \quad w_1,w_2>0$$ and we are interested in learning each component $f_i$. Without any assumptions on $f_i$, this problem is ill-posed. In order to identify the components $f_i$, we assume that each $f_i$ can be written as a convolution of a Gaussian and a compactly supported density $\nu_i$ with $\text{supp}(\nu_1)\cap \text{supp}(\nu_2)=\emptyset$. Our main result shows that $(\frac{1}{\varepsilon})^{\Omega(\log\log \frac{1}{\varepsilon})}$ samples are required for estimating each $f_i$. The proof relies on a quantitative Tauberian theorem that yields a fast rate of approximation with Gaussians, which may be of independent interest. To show this is tight, we also propose an algorithm that uses $(\frac{1}{\varepsilon})^{O(\log\log \frac{1}{\varepsilon})}$ samples to estimate each $f_i$. Unlike existing approaches to learning latent variable models based on moment-matching and tensor methods, our proof instead involves a delicate analysis of an ill-conditioned linear system via orthogonal functions. Combining these bounds, we conclude that the optimal sample complexity of this problem properly lies in between polynomial and exponential, which is not common in learning theory.
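
For concreteness, the data-generating model described in the abstract can be simulated directly. The sketch below is a minimal illustration, not taken from the paper: it draws i.i.d. samples from $f=w_1f_1+w_2f_2$, where each component $f_i$ is the convolution of a Gaussian with a compactly supported density $\nu_i$ and $\text{supp}(\nu_1)\cap\text{supp}(\nu_2)=\emptyset$. The specific choices of $\nu_1,\nu_2$ (uniform densities on disjoint intervals), the mixing weights, and the noise level are hypothetical.

# Minimal simulation sketch (assumed parameters, not from the paper):
# sample i.i.d. from f = w1*f1 + w2*f2, where f_i = nu_i * N(0, sigma^2)
# is the convolution of a compactly supported density nu_i with a Gaussian,
# and supp(nu_1), supp(nu_2) are disjoint.
import numpy as np

rng = np.random.default_rng(0)

n = 10_000          # number of i.i.d. samples from the mixture f
w1, w2 = 0.4, 0.6   # mixing weights, w1 + w2 = 1, both positive
sigma = 0.5         # standard deviation of the convolved Gaussian

# Hypothetical compactly supported densities with disjoint supports:
# nu_1 = Uniform[-2, -1] and nu_2 = Uniform[1, 2].
def sample_nu1(size):
    return rng.uniform(-2.0, -1.0, size)

def sample_nu2(size):
    return rng.uniform(1.0, 2.0, size)

# Sampling X ~ f_i = nu_i * N(0, sigma^2) amounts to X = Y + Z with Y ~ nu_i
# and Z ~ N(0, sigma^2) independent; the mixture picks the component at random.
components = rng.choice([1, 2], size=n, p=[w1, w2])
latent = np.where(components == 1, sample_nu1(n), sample_nu2(n))
samples = latent + rng.normal(0.0, sigma, size=n)

# `samples` are i.i.d. draws from f; the learning problem studied in the paper
# is to recover an approximation of each component f_i from such draws alone.
print(samples[:5])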

Cite this Paper


BibTeX
@InProceedings{pmlr-v195-tai23a,
  title     = {Tight Bounds on the Hardness of Learning Simple Nonparametric Mixtures},
  author    = {Tai, Wai Ming and Aragam, Bryon},
  booktitle = {Proceedings of Thirty Sixth Conference on Learning Theory},
  pages     = {2849--2849},
  year      = {2023},
  editor    = {Neu, Gergely and Rosasco, Lorenzo},
  volume    = {195},
  series    = {Proceedings of Machine Learning Research},
  month     = {12--15 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v195/tai23a/tai23a.pdf},
  url       = {https://proceedings.mlr.press/v195/tai23a.html},
  abstract  = {We study the problem of learning nonparametric distributions in a finite mixture, and establish tight bounds on the sample complexity for learning the component distributions in such models. Namely, we are given i.i.d. samples from a pdf $f$ where $$f=w_1f_1+w_2f_2, \quad w_1+w_2=1, \quad w_1,w_2>0$$ and we are interested in learning each component $f_i$. Without any assumptions on $f_i$, this problem is ill-posed. In order to identify the components $f_i$, we assume that each $f_i$ can be written as a convolution of a Gaussian and a compactly supported density $\nu_i$ with $\text{supp}(\nu_1)\cap \text{supp}(\nu_2)=\emptyset$. Our main result shows that $(\frac{1}{\varepsilon})^{\Omega(\log\log \frac{1}{\varepsilon})}$ samples are required for estimating each $f_i$. The proof relies on a quantitative Tauberian theorem that yields a fast rate of approximation with Gaussians, which may be of independent interest. To show this is tight, we also propose an algorithm that uses $(\frac{1}{\varepsilon})^{O(\log\log \frac{1}{\varepsilon})}$ samples to estimate each $f_i$. Unlike existing approaches to learning latent variable models based on moment-matching and tensor methods, our proof instead involves a delicate analysis of an ill-conditioned linear system via orthogonal functions. Combining these bounds, we conclude that the optimal sample complexity of this problem properly lies in between polynomial and exponential, which is not common in learning theory.}
}
Endnote
%0 Conference Paper
%T Tight Bounds on the Hardness of Learning Simple Nonparametric Mixtures
%A Wai Ming Tai
%A Bryon Aragam
%B Proceedings of Thirty Sixth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2023
%E Gergely Neu
%E Lorenzo Rosasco
%F pmlr-v195-tai23a
%I PMLR
%P 2849--2849
%U https://proceedings.mlr.press/v195/tai23a.html
%V 195
%X We study the problem of learning nonparametric distributions in a finite mixture, and establish tight bounds on the sample complexity for learning the component distributions in such models. Namely, we are given i.i.d. samples from a pdf $f$ where $$f=w_1f_1+w_2f_2, \quad w_1+w_2=1, \quad w_1,w_2>0$$ and we are interested in learning each component $f_i$. Without any assumptions on $f_i$, this problem is ill-posed. In order to identify the components $f_i$, we assume that each $f_i$ can be written as a convolution of a Gaussian and a compactly supported density $\nu_i$ with $\text{supp}(\nu_1)\cap \text{supp}(\nu_2)=\emptyset$. Our main result shows that $(\frac{1}{\varepsilon})^{\Omega(\log\log \frac{1}{\varepsilon})}$ samples are required for estimating each $f_i$. The proof relies on a quantitative Tauberian theorem that yields a fast rate of approximation with Gaussians, which may be of independent interest. To show this is tight, we also propose an algorithm that uses $(\frac{1}{\varepsilon})^{O(\log\log \frac{1}{\varepsilon})}$ samples to estimate each $f_i$. Unlike existing approaches to learning latent variable models based on moment-matching and tensor methods, our proof instead involves a delicate analysis of an ill-conditioned linear system via orthogonal functions. Combining these bounds, we conclude that the optimal sample complexity of this problem properly lies in between polynomial and exponential, which is not common in learning theory.
APA
Tai, W.M. & Aragam, B. (2023). Tight Bounds on the Hardness of Learning Simple Nonparametric Mixtures. Proceedings of Thirty Sixth Conference on Learning Theory, in Proceedings of Machine Learning Research 195:2849-2849. Available from https://proceedings.mlr.press/v195/tai23a.html.
