Transformers Provably Learn Sparse Token Selection While Fully-Connected Nets Cannot

Zixuan Wang, Stanley Wei, Daniel Hsu, Jason D. Lee
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:51854-51912, 2024.

Abstract

The transformer architecture has prevailed in various deep learning settings due to its exceptional capabilities to select and compose structural information. Motivated by these capabilities, Sanford et al. (2023) proposed the sparse token selection task, in which transformers excel while fully-connected networks (FCNs) fail in the worst case. Building upon that, we strengthen the FCN lower bound to an average-case setting and establish an algorithmic separation of transformers over FCNs. Specifically, a one-layer transformer trained with gradient descent provably learns the sparse token selection task and, surprisingly, exhibits strong out-of-distribution length generalization. We provide empirical simulations to justify our theoretical findings.
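For readers unfamiliar with the task, below is a minimal sketch of the sparse token selection task's input-output mapping. It is an illustration only, not the papers' formal definition: the assumption that the target is the average of the q selected token embeddings, and all function and variable names, are hypothetical.

import numpy as np

# Hypothetical sketch of the sparse token selection task: each input pairs a
# token sequence with a small (sparse) index set S, and the target is assumed
# here to be the average of the token embeddings at the positions in S.
def sparse_token_selection_target(tokens, selected_indices):
    """tokens: (sequence_length, embedding_dim) array; selected_indices: index set S."""
    return tokens[selected_indices].mean(axis=0)

# Example: a length-8 sequence of 4-dimensional tokens with q = 3 selected positions.
rng = np.random.default_rng(0)
tokens = rng.standard_normal((8, 4))   # (sequence length, embedding dim)
selected = np.array([1, 4, 6])         # sparse subset of positions, |S| = q = 3
target = sparse_token_selection_target(tokens, selected)
print(target.shape)                    # (4,)

Intuitively, attention can route information from exactly the selected positions in an input-dependent way, whereas a fixed fully-connected map cannot; this is the kind of separation between transformers and FCNs that the abstract describes.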

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-wang24ca,
  title     = {Transformers Provably Learn Sparse Token Selection While Fully-Connected Nets Cannot},
  author    = {Wang, Zixuan and Wei, Stanley and Hsu, Daniel and Lee, Jason D.},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {51854--51912},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/wang24ca/wang24ca.pdf},
  url       = {https://proceedings.mlr.press/v235/wang24ca.html},
  abstract  = {The transformer architecture has prevailed in various deep learning settings due to its exceptional capabilities to select and compose structural information. Motivated by these capabilities, Sanford et al. (2023) proposed the sparse token selection task, in which transformers excel while fully-connected networks (FCNs) fail in the worst case. Building upon that, we strengthen the FCN lower bound to an average-case setting and establish an algorithmic separation of transformers over FCNs. Specifically, a one-layer transformer trained with gradient descent provably learns the sparse token selection task and, surprisingly, exhibits strong out-of-distribution length generalization. We provide empirical simulations to justify our theoretical findings.}
}
Endnote
%0 Conference Paper
%T Transformers Provably Learn Sparse Token Selection While Fully-Connected Nets Cannot
%A Zixuan Wang
%A Stanley Wei
%A Daniel Hsu
%A Jason D. Lee
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-wang24ca
%I PMLR
%P 51854--51912
%U https://proceedings.mlr.press/v235/wang24ca.html
%V 235
%X The transformer architecture has prevailed in various deep learning settings due to its exceptional capabilities to select and compose structural information. Motivated by these capabilities, Sanford et al. (2023) proposed the sparse token selection task, in which transformers excel while fully-connected networks (FCNs) fail in the worst case. Building upon that, we strengthen the FCN lower bound to an average-case setting and establish an algorithmic separation of transformers over FCNs. Specifically, a one-layer transformer trained with gradient descent provably learns the sparse token selection task and, surprisingly, exhibits strong out-of-distribution length generalization. We provide empirical simulations to justify our theoretical findings.
APA
Wang, Z., Wei, S., Hsu, D. & Lee, J. D. (2024). Transformers Provably Learn Sparse Token Selection While Fully-Connected Nets Cannot. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:51854-51912. Available from https://proceedings.mlr.press/v235/wang24ca.html.