Adaptive Convergence Rates for Log-Concave Maximum Likelihood

Gil Kur, Aditya Guntuboyina
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:1450-1458, 2025.

Abstract

We study the task of estimating a log-concave density in $\mathbb{R}^d$ using the Maximum Likelihood Estimator, known as the log-concave MLE. We show that for every $d \geq 4$, the log-concave MLE attains an \emph{adaptive rate} when the negative logarithm of the underlying density is the maximum of $k$ affine functions, meaning that the estimation error for such a density is significantly lower than the minimax rate for the class of log-concave densities. Specifically, we prove that for such densities, the risk of the log-concave MLE is of order $c(k) \cdot n^{-\frac{4}{d}}$ in terms of the Hellinger squared distance. This result complements the work of Kim et al. (AoS 2018) and Feng et al. (AoS 2021), who addressed the cases $d = 1$ and $d \in \{2,3\}$, respectively. Our proof provides a unified and relatively simple approach for all $d \geq 1$, and is based on techniques from stochastic convex geometry and empirical process theory, which may be of independent interest.
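To fix notation for readers, the estimator and the rate described above can be written out as follows; the symbols $\mathcal{F}_d$ and $f_0$, and the unnormalized Hellinger convention used here, are our shorthand rather than text taken verbatim from the paper. Given i.i.d. samples $X_1,\dots,X_n$ from a log-concave density $f_0$ on $\mathbb{R}^d$, the log-concave MLE maximizes the empirical log-likelihood over the class $\mathcal{F}_d$ of (upper semicontinuous) log-concave densities,
\[
  \hat{f}_n \in \operatorname*{arg\,max}_{f \in \mathcal{F}_d} \; \frac{1}{n} \sum_{i=1}^{n} \log f(X_i),
  \qquad
  h^2(f,g) = \int_{\mathbb{R}^d} \bigl(\sqrt{f(x)} - \sqrt{g(x)}\bigr)^2 \, dx ,
\]
where $h^2$ denotes the squared Hellinger distance (up to the choice of normalization constant). In this notation, the adaptive rate stated in the abstract reads: if $-\log f_0$ is the pointwise maximum of $k$ affine functions and $d \geq 4$, then
\[
  \mathbb{E}\, h^2\bigl(\hat{f}_n, f_0\bigr) \;\lesssim\; c(k) \cdot n^{-4/d},
\]
which, as the abstract notes, is significantly faster than the minimax rate over the full class of log-concave densities.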

Cite this Paper


BibTeX
@InProceedings{pmlr-v258-kur25a,
  title     = {Adaptive Convergence Rates for Log-Concave Maximum Likelihood},
  author    = {Kur, Gil and Guntuboyina, Aditya},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages     = {1450--1458},
  year      = {2025},
  editor    = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume    = {258},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--05 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/kur25a/kur25a.pdf},
  url       = {https://proceedings.mlr.press/v258/kur25a.html},
  abstract  = {We study the task of estimating a log-concave density in $\mathbb{R}^d$ using the Maximum Likelihood Estimator, known as the log-concave MLE. We show that for every $d \geq 4$, the log-concave MLE attains an \emph{adaptive rate} when the negative logarithm of the underlying density is the maximum of $k$ affine functions, meaning that the estimation error for such a density is significantly lower than the minimax rate for the class of log-concave densities. Specifically, we prove that for such densities, the risk of the log-concave MLE is of order $c(k) \cdot n^{-\frac{4}{d}}$ in terms of the Hellinger squared distance. This result complements the work of (Kim et al. AoS 2018) and Feng et al. (AoS 2021), who addressed the cases $d = 1$ and $d \in \{2,3\}$, respectively. Our proof provides a unified and relatively simple approach for all $d \geq 1$, and is based on techniques from stochastic convex geometry and empirical process theory, which may be of independent interest.}
}
Endnote
%0 Conference Paper
%T Adaptive Convergence Rates for Log-Concave Maximum Likelihood
%A Gil Kur
%A Aditya Guntuboyina
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-kur25a
%I PMLR
%P 1450--1458
%U https://proceedings.mlr.press/v258/kur25a.html
%V 258
%X We study the task of estimating a log-concave density in $\mathbb{R}^d$ using the Maximum Likelihood Estimator, known as the log-concave MLE. We show that for every $d \geq 4$, the log-concave MLE attains an \emph{adaptive rate} when the negative logarithm of the underlying density is the maximum of $k$ affine functions, meaning that the estimation error for such a density is significantly lower than the minimax rate for the class of log-concave densities. Specifically, we prove that for such densities, the risk of the log-concave MLE is of order $c(k) \cdot n^{-\frac{4}{d}}$ in terms of the Hellinger squared distance. This result complements the work of (Kim et al. AoS 2018) and Feng et al. (AoS 2021), who addressed the cases $d = 1$ and $d \in \{2,3\}$, respectively. Our proof provides a unified and relatively simple approach for all $d \geq 1$, and is based on techniques from stochastic convex geometry and empirical process theory, which may be of independent interest.
APA
Kur, G. & Guntuboyina, A. (2025). Adaptive Convergence Rates for Log-Concave Maximum Likelihood. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:1450-1458. Available from https://proceedings.mlr.press/v258/kur25a.html.