Learning sum of diverse features: computational hardness and efficient gradient-based training for ridge combinations
Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:4009-4081, 2024.
Abstract
We study the statistical and computational complexity of learning a target function $f_*:\R^d\to\R$ with \textit{additive structure}, that is, $f_*(x) = \frac{1}{\sqrt{M}}\sum_{m=1}^M f_m(\langle x, v_m\rangle)$, where $f_1, f_2, \dots, f_M:\R\to\R$ are nonlinear link functions of single-index models (ridge functions) with diverse and near-orthogonal index features $\{v_m\}_{m=1}^M$, and the number of additive tasks $M$ grows with the dimensionality, $M\asymp d^\gamma$ for $\gamma\ge 0$. This problem setting is motivated by the classical additive model literature, the recent representation learning theory of two-layer neural networks, and large-scale pretraining where the model simultaneously acquires a large number of “skills” that are often \textit{localized} in distinct parts of the trained network. We prove that a large subset of polynomial $f_*$ can be efficiently learned by gradient descent training of a two-layer neural network, with polynomial statistical and computational complexity depending on the number of tasks $M$ and the \textit{information exponent} of $f_m$, despite the link functions being unknown and $M$ growing with the dimensionality. We complement this learnability guarantee with a computational hardness result by establishing statistical query (SQ) lower bounds for both correlational SQ and full SQ algorithms.
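To make the problem setting concrete, below is a minimal sketch (not from the paper) of how one might generate data from such an additive target: it assumes Gaussian inputs, random unit directions $v_m$ (which are near-orthogonal in high dimension), Hermite polynomial links whose degree plays the role of the information exponent, and a hypothetical choice $\gamma = 0.5$ for the task count $M\asymp d^\gamma$.

```python
# Sketch of the additive target f_*(x) = (1/sqrt(M)) * sum_m f_m(<x, v_m>),
# assuming Gaussian inputs, random near-orthogonal unit directions v_m, and
# probabilists' Hermite polynomial links He_k (degree k = information exponent).
import numpy as np
from numpy.polynomial.hermite_e import hermeval

rng = np.random.default_rng(0)
d = 512                      # input dimension (hypothetical)
gamma = 0.5                  # hypothetical exponent in M ~ d^gamma
M = int(round(d ** gamma))   # number of additive tasks

# Near-orthogonal index features: i.i.d. Gaussian directions, normalized.
V = rng.standard_normal((M, d))
V /= np.linalg.norm(V, axis=1, keepdims=True)

# Example links: He_{k_m} with degrees k_m chosen at random (hypothetical).
degrees = rng.integers(2, 5, size=M)
coeffs = [np.eye(k + 1)[k] for k in degrees]  # coefficient vector selecting He_k

def f_star(X):
    """Evaluate f_*(x) = (1/sqrt(M)) sum_m He_{k_m}(<x, v_m>) for each row of X."""
    Z = X @ V.T                                # projections <x, v_m>
    out = sum(hermeval(Z[:, m], coeffs[m]) for m in range(M))
    return out / np.sqrt(M)

# Gaussian training inputs and their noiseless labels.
X = rng.standard_normal((4096, d))
y = f_star(X)
```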