Dimension-Independent Rates for Structured Neural Density Estimation

Robert A. Vandermeulen, Wai Ming Tai, Bryon Aragam
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:60857-60879, 2025.

Abstract

We show that deep neural networks can achieve dimension-independent rates of convergence for learning structured densities typical of image, audio, video, and text data. For example, in images, where each pixel becomes independent of the rest of the image when conditioned on pixels at most $t$ steps away, a simple $L^2$-minimizing neural network can attain a rate of $n^{-1/((t+1)^2+4)}$, where $t$ is independent of the ambient dimension $d$, i.e. the total number of pixels. We further provide empirical evidence that, in real-world applications, $t$ is often a small constant, thus effectively circumventing the curse of dimensionality. Moreover, for sequential data (e.g., audio or text) exhibiting a similar local dependence structure, our analysis shows a rate of $n^{-1/(t+5)}$, offering further evidence of dimension independence in practical scenarios.
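The two convergence rates stated in the abstract can be set side by side for comparison (a restatement in the abstract's own notation: $t$ is the local conditional-independence radius, $n$ the sample size, and $d$ the ambient dimension):

```latex
% Rates from the abstract; t is the dependence radius, n the sample size.
\[
  \underbrace{n^{-1/((t+1)^2+4)}}_{\text{images (2D grid dependence)}}
  \qquad\qquad
  \underbrace{n^{-1/(t+5)}}_{\text{sequences (audio, text)}}
\]
% Neither exponent involves the ambient dimension d (the total number of
% pixels or sequence length), which is the sense in which the rates are
% dimension-independent.
```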

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-vandermeulen25a,
  title     = {Dimension-Independent Rates for Structured Neural Density Estimation},
  author    = {Vandermeulen, Robert A. and Tai, Wai Ming and Aragam, Bryon},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {60857--60879},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/vandermeulen25a/vandermeulen25a.pdf},
  url       = {https://proceedings.mlr.press/v267/vandermeulen25a.html},
  abstract  = {We show that deep neural networks can achieve dimension-independent rates of convergence for learning structured densities typical of image, audio, video, and text data. For example, in images, where each pixel becomes independent of the rest of the image when conditioned on pixels at most $t$ steps away, a simple $L^2$-minimizing neural network can attain a rate of $n^{-1/((t+1)^2+4)}$, where $t$ is independent of the ambient dimension $d$, i.e. the total number of pixels. We further provide empirical evidence that, in real-world applications, $t$ is often a small constant, thus effectively circumventing the curse of dimensionality. Moreover, for sequential data (e.g., audio or text) exhibiting a similar local dependence structure, our analysis shows a rate of $n^{-1/(t+5)}$, offering further evidence of dimension independence in practical scenarios.}
}
Endnote
%0 Conference Paper
%T Dimension-Independent Rates for Structured Neural Density Estimation
%A Robert A. Vandermeulen
%A Wai Ming Tai
%A Bryon Aragam
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-vandermeulen25a
%I PMLR
%P 60857--60879
%U https://proceedings.mlr.press/v267/vandermeulen25a.html
%V 267
%X We show that deep neural networks can achieve dimension-independent rates of convergence for learning structured densities typical of image, audio, video, and text data. For example, in images, where each pixel becomes independent of the rest of the image when conditioned on pixels at most $t$ steps away, a simple $L^2$-minimizing neural network can attain a rate of $n^{-1/((t+1)^2+4)}$, where $t$ is independent of the ambient dimension $d$, i.e. the total number of pixels. We further provide empirical evidence that, in real-world applications, $t$ is often a small constant, thus effectively circumventing the curse of dimensionality. Moreover, for sequential data (e.g., audio or text) exhibiting a similar local dependence structure, our analysis shows a rate of $n^{-1/(t+5)}$, offering further evidence of dimension independence in practical scenarios.
APA
Vandermeulen, R. A., Tai, W. M., & Aragam, B. (2025). Dimension-Independent Rates for Structured Neural Density Estimation. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:60857-60879. Available from https://proceedings.mlr.press/v267/vandermeulen25a.html.