High-Dimensional Projection Pursuit: Outer Bounds and Applications to Interpolation in Neural Networks

Kangjie Zhou, Andrea Montanari
Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:5525-5527, 2022.

Abstract

Given a cloud of $n$ data points in $\mathbb{R}^d$, consider all projections onto $m$-dimensional subspaces of $\mathbb{R}^d$ and, for each such projection, the empirical distribution of the projected points. What does this collection of probability distributions look like when $n,d$ grow large? We consider this question under the null model in which the points are i.i.d. standard Gaussian vectors, focusing on the asymptotic regime in which $n,d\to\infty$, with $n/d\to\alpha\in (0,\infty)$, while $m$ is fixed. Denoting by $\mathcal{F}_{m,\alpha}$ the set of probability distributions in $\mathbb{R}^m$ that arise as low-dimensional projections in this limit, we establish new outer bounds on $\mathcal{F}_{m,\alpha}$. In particular, we characterize the radius of $\mathcal{F}_{m,\alpha}$ in terms of Wasserstein distance and prove sharp bounds in terms of Kullback-Leibler divergence and Rényi information dimension. The previous question has applications to unsupervised learning methods, such as projection pursuit and independent component analysis. We introduce a version of the same problem that is relevant for supervised learning, and prove a sharp Wasserstein radius bound. As an application, we establish an upper bound on the interpolation threshold of two-layer neural networks with $m$ hidden neurons.
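
A minimal numerical sketch (not from the paper) helps fix ideas for the case $m = 1$. Assuming illustrative values for $\alpha$ and $d$, the Python snippet below samples $n$ i.i.d. standard Gaussian points in $\mathbb{R}^d$ with $n/d = \alpha$, draws random one-dimensional projections, and records how far the empirical distribution of the projected points deviates from the standard Gaussian in Wasserstein-1 distance. A crude random search over directions only lower-bounds the supremum over all projections that the Wasserstein radius of $\mathcal{F}_{1,\alpha}$ describes in the $n,d\to\infty$ limit; this is a toy illustration of the setup, not a reproduction of the paper's results.

import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

alpha = 2.0          # aspect ratio n/d (illustrative choice)
d = 400              # ambient dimension (illustrative choice)
n = int(alpha * d)   # number of data points

X = rng.standard_normal((n, d))     # null model: rows are i.i.d. N(0, I_d) vectors
Z = rng.standard_normal(100_000)    # large reference sample from N(0, 1)

worst = 0.0
for _ in range(200):                # crude random search over projection directions
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)          # unit vector spanning an m = 1 subspace
    proj = X @ u                    # projected points: an empirical distribution on R
    worst = max(worst, wasserstein_distance(proj, Z))

print(f"largest W1 deviation from N(0,1) over sampled directions: {worst:.3f}")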

Cite this Paper


BibTeX
@InProceedings{pmlr-v178-zhou22a,
  title     = {High-Dimensional Projection Pursuit: Outer Bounds and Applications to Interpolation in Neural Networks},
  author    = {Zhou, Kangjie and Montanari, Andrea},
  booktitle = {Proceedings of Thirty Fifth Conference on Learning Theory},
  pages     = {5525--5527},
  year      = {2022},
  editor    = {Loh, Po-Ling and Raginsky, Maxim},
  volume    = {178},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--05 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v178/zhou22a/zhou22a.pdf},
  url       = {https://proceedings.mlr.press/v178/zhou22a.html}
}
APA
Zhou, K. & Montanari, A. (2022). High-Dimensional Projection Pursuit: Outer Bounds and Applications to Interpolation in Neural Networks. Proceedings of Thirty Fifth Conference on Learning Theory, in Proceedings of Machine Learning Research 178:5525-5527. Available from https://proceedings.mlr.press/v178/zhou22a.html.
