Learning sum of diverse features: computational hardness and efficient gradient-based training for ridge combinations

Kazusato Oko, Yujin Song, Taiji Suzuki, Denny Wu
Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:4009-4081, 2024.

Abstract

We study the statistical and computational complexity of learning a target function $f_*:\mathbb{R}^d\to\mathbb{R}$ with \textit{additive structure}, that is, $f_*(x) = \frac{1}{\sqrt{M}}\sum_{m=1}^M f_m(\langle x, v_m\rangle)$, where $f_1,f_2,\ldots,f_M:\mathbb{R}\to\mathbb{R}$ are nonlinear link functions of single-index models (ridge functions) with diverse and near-orthogonal index features $\{v_m\}_{m=1}^M$, and the number of additive tasks $M$ grows with the dimensionality as $M\asymp d^\gamma$ for $\gamma\ge 0$. This problem setting is motivated by the classical additive model literature, the recent representation learning theory of two-layer neural networks, and large-scale pretraining where the model simultaneously acquires a large number of “skills” that are often \textit{localized} in distinct parts of the trained network. We prove that a large subset of polynomial $f_*$ can be efficiently learned by gradient descent training of a two-layer neural network, with polynomial statistical and computational complexity that depends on the number of tasks $M$ and the \textit{information exponent} of $f_m$, despite the link functions being unknown and $M$ growing with the dimensionality. We complement this learnability guarantee with a computational hardness result, establishing statistical query (SQ) lower bounds for both correlational SQ and full SQ algorithms.
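For concreteness, the following minimal sketch (illustrative only, not taken from the paper) draws samples from such an additive target. It assumes standard Gaussian inputs, exactly orthonormal index vectors $v_m$, and the same degree-2 Hermite link $f_m(z) = z^2 - 1$ (information exponent 2) for every task; the sizes d, M, n and all variable names are placeholder choices.

import numpy as np

# Problem sizes (illustrative): input dimension d, number of tasks M, sample size n.
d, M, n = 128, 16, 10_000
rng = np.random.default_rng(0)

# Orthonormal index features v_1, ..., v_M: M columns of a random orthogonal matrix.
Q, _ = np.linalg.qr(rng.standard_normal((d, M)))
V = Q.T  # shape (M, d); row m is v_m

def link(z):
    # Example link: probabilist's Hermite polynomial He_2(z) = z^2 - 1.
    return z**2 - 1

X = rng.standard_normal((n, d))        # isotropic Gaussian inputs x ~ N(0, I_d)
Z = X @ V.T                            # projections <x, v_m>, shape (n, M)
y = link(Z).sum(axis=1) / np.sqrt(M)   # f_*(x) = M^{-1/2} * sum_m f_m(<x, v_m>)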

Cite this Paper


BibTeX
@InProceedings{pmlr-v247-oko24a,
  title     = {Learning sum of diverse features: computational hardness and efficient gradient-based training for ridge combinations},
  author    = {Oko, Kazusato and Song, Yujin and Suzuki, Taiji and Wu, Denny},
  booktitle = {Proceedings of Thirty Seventh Conference on Learning Theory},
  pages     = {4009--4081},
  year      = {2024},
  editor    = {Agrawal, Shipra and Roth, Aaron},
  volume    = {247},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Jun--03 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v247/oko24a/oko24a.pdf},
  url       = {https://proceedings.mlr.press/v247/oko24a.html}
}
Endnote
%0 Conference Paper
%T Learning sum of diverse features: computational hardness and efficient gradient-based training for ridge combinations
%A Kazusato Oko
%A Yujin Song
%A Taiji Suzuki
%A Denny Wu
%B Proceedings of Thirty Seventh Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2024
%E Shipra Agrawal
%E Aaron Roth
%F pmlr-v247-oko24a
%I PMLR
%P 4009--4081
%U https://proceedings.mlr.press/v247/oko24a.html
%V 247
APA
Oko, K., Song, Y., Suzuki, T. & Wu, D. (2024). Learning sum of diverse features: computational hardness and efficient gradient-based training for ridge combinations. Proceedings of Thirty Seventh Conference on Learning Theory, in Proceedings of Machine Learning Research 247:4009-4081. Available from https://proceedings.mlr.press/v247/oko24a.html.