Near-Optimal Sample Complexity Bounds for Maximum Likelihood Estimation of Multivariate Log-concave Densities
Proceedings of the 31st Conference On Learning Theory, PMLR 75:1234-1262, 2018.
Abstract
We study the problem of learning multivariate log-concave densities with respect to a global loss function. We obtain the first upper bound on the sample complexity of the maximum likelihood estimator (MLE) for a log-concave density on $\mathbb{R}^d$, for all $d \geq 4$. Prior to this work, no finite sample upper bound was known for this estimator in more than $3$ dimensions. In more detail, we prove that for any $d \geq 1$ and $\epsilon > 0$, given $\tilde{O}_d\big((1/\epsilon)^{(d+3)/2}\big)$ samples drawn from an unknown log-concave density $f_0$ on $\mathbb{R}^d$, the MLE outputs a hypothesis $h$ that with high probability is $\epsilon$-close to $f_0$, in squared Hellinger loss. A sample complexity lower bound of $\Omega_d\big((1/\epsilon)^{(d+1)/2}\big)$ was previously known for any learning algorithm that achieves this guarantee. We thus establish that the sample complexity of the log-concave MLE is near-optimal, up to an $\tilde{O}(1/\epsilon)$ factor.
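As context for the guarantee above, the display below records the standard definitions of the log-concave MLE and the squared Hellinger loss; this is a sketch using textbook conventions, and the notation $\mathcal{F}_d$ and $d_{\mathrm{H}}^2$ is assumed here rather than taken from the paper.

\[
% Log-concave MLE: maximizer of the empirical log-likelihood over the
% class F_d of all log-concave densities on R^d (assumed notation).
\hat{f}_n \;=\; \operatorname*{argmax}_{f \in \mathcal{F}_d} \; \frac{1}{n} \sum_{i=1}^{n} \log f(X_i),
\qquad X_1, \dots, X_n \sim f_0 \ \text{i.i.d.}
\]
\[
% Squared Hellinger loss (common normalization with the factor 1/2);
% the abstract's guarantee reads d_H^2(h, f_0) <= epsilon w.h.p.
d_{\mathrm{H}}^2(h, f_0) \;=\; \frac{1}{2} \int_{\mathbb{R}^d} \Big( \sqrt{h(x)} - \sqrt{f_0(x)} \Big)^2 \, dx
\;=\; 1 - \int_{\mathbb{R}^d} \sqrt{h(x)\, f_0(x)}\, dx .
\]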