Learning Multivariate Log-concave Distributions

Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart
Proceedings of the 2017 Conference on Learning Theory, PMLR 65:711-727, 2017.

Abstract

We study the problem of estimating multivariate log-concave probability density functions. We prove the first sample complexity upper bound for learning log-concave densities on $\mathbb{R}^d$, for all $d \geq 1$. Prior to our work, no upper bound on the sample complexity of this learning problem was known for the case of $d > 3$. In more detail, we give an estimator that, for any $d \geq 1$ and $\varepsilon > 0$, draws $\tilde{O}_d\left((1/\varepsilon)^{(d+5)/2}\right)$ samples from an unknown target log-concave density on $\mathbb{R}^d$, and outputs a hypothesis that (with high probability) is $\varepsilon$-close to the target, in total variation distance. Our upper bound on the sample complexity comes close to the known lower bound of $\Omega_d\left((1/\varepsilon)^{(d+1)/2}\right)$ for this problem.
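
To make the stated rates concrete, the following instantiation (illustrative only, not taken from the paper) plugs $d = 2$ into the two bounds, suppressing the constants and polylogarithmic factors hidden by $\tilde{O}_d$ and $\Omega_d$:

$$\tilde{O}_2\!\left((1/\varepsilon)^{7/2}\right) \text{ samples suffice}, \qquad \Omega_2\!\left((1/\varepsilon)^{3/2}\right) \text{ samples are necessary},$$

so in this case the upper and lower bounds differ by a factor of $(1/\varepsilon)^{2}$.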

Cite this Paper


BibTeX
@InProceedings{pmlr-v65-diakonikolas17a,
  title     = {Learning Multivariate Log-concave Distributions},
  author    = {Diakonikolas, Ilias and Kane, Daniel M. and Stewart, Alistair},
  booktitle = {Proceedings of the 2017 Conference on Learning Theory},
  pages     = {711--727},
  year      = {2017},
  editor    = {Kale, Satyen and Shamir, Ohad},
  volume    = {65},
  series    = {Proceedings of Machine Learning Research},
  month     = {07--10 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v65/diakonikolas17a/diakonikolas17a.pdf},
  url       = {https://proceedings.mlr.press/v65/diakonikolas17a.html},
  abstract  = {We study the problem of estimating multivariate log-concave probability density functions. We prove the first sample complexity upper bound for learning log-concave densities on $\mathbb{R}^d$, for all $d \geq 1$. Prior to our work, no upper bound on the sample complexity of this learning problem was known for the case of $d > 3$. In more detail, we give an estimator that, for any $d \geq 1$ and $\varepsilon > 0$, draws $\tilde{O}_d\left((1/\varepsilon)^{(d+5)/2}\right)$ samples from an unknown target log-concave density on $\mathbb{R}^d$, and outputs a hypothesis that (with high probability) is $\varepsilon$-close to the target, in total variation distance. Our upper bound on the sample complexity comes close to the known lower bound of $\Omega_d\left((1/\varepsilon)^{(d+1)/2}\right)$ for this problem.}
}
Endnote
%0 Conference Paper
%T Learning Multivariate Log-concave Distributions
%A Ilias Diakonikolas
%A Daniel M. Kane
%A Alistair Stewart
%B Proceedings of the 2017 Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2017
%E Satyen Kale
%E Ohad Shamir
%F pmlr-v65-diakonikolas17a
%I PMLR
%P 711--727
%U https://proceedings.mlr.press/v65/diakonikolas17a.html
%V 65
%X We study the problem of estimating multivariate log-concave probability density functions. We prove the first sample complexity upper bound for learning log-concave densities on $\mathbb{R}^d$, for all $d \geq 1$. Prior to our work, no upper bound on the sample complexity of this learning problem was known for the case of $d > 3$. In more detail, we give an estimator that, for any $d \geq 1$ and $\varepsilon > 0$, draws $\tilde{O}_d\left((1/\varepsilon)^{(d+5)/2}\right)$ samples from an unknown target log-concave density on $\mathbb{R}^d$, and outputs a hypothesis that (with high probability) is $\varepsilon$-close to the target, in total variation distance. Our upper bound on the sample complexity comes close to the known lower bound of $\Omega_d\left((1/\varepsilon)^{(d+1)/2}\right)$ for this problem.
APA
Diakonikolas, I., Kane, D.M. & Stewart, A. (2017). Learning Multivariate Log-concave Distributions. Proceedings of the 2017 Conference on Learning Theory, in Proceedings of Machine Learning Research 65:711-727. Available from https://proceedings.mlr.press/v65/diakonikolas17a.html.