Sequence Length Independent Norm-Based Generalization Bounds for Transformers

Jacob Trauger, Ambuj Tewari
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:1405-1413, 2024.

Abstract

This paper provides norm-based generalization bounds for the Transformer architecture that do not depend on the input sequence length. We employ a covering number based approach to prove our bounds. We use three novel covering number bounds for the function class of bounded linear mappings to upper bound the Rademacher complexity of the Transformer. Furthermore, we show this generalization bound applies to the common Transformer training technique of masking and then predicting the masked word. We also run a simulated study on a sparse majority data set that empirically validates our theoretical findings.
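For context, the proof strategy named in the abstract follows the standard covering-number route to generalization bounds. A minimal sketch of that machinery, in its textbook form (constants vary by reference and need not match the paper's actual theorems), is: with probability at least 1 - \delta over an i.i.d. sample of size n, every hypothesis f in a class \mathcal{F} satisfies, for a loss \ell bounded in [0,1],

\[
\mathbb{E}\,\ell(f(x), y) \;\le\; \frac{1}{n}\sum_{i=1}^{n} \ell(f(x_i), y_i) \;+\; 2\,\mathfrak{R}_n(\ell \circ \mathcal{F}) \;+\; 3\sqrt{\frac{\ln(2/\delta)}{2n}},
\]

and the empirical Rademacher complexity \mathfrak{R}_n is in turn controlled by covering numbers \mathcal{N}(\mathcal{F}, \varepsilon, \lVert\cdot\rVert) via Dudley's entropy integral,

\[
\mathfrak{R}_n(\mathcal{F}) \;\le\; \inf_{\alpha > 0}\left( 4\alpha \;+\; \frac{12}{\sqrt{n}} \int_{\alpha}^{\infty} \sqrt{\ln \mathcal{N}(\mathcal{F}, \varepsilon, \lVert\cdot\rVert)}\; d\varepsilon \right).
\]

The paper's contribution, per the abstract, is covering-number bounds for bounded linear mappings that make the resulting bound for the Transformer independent of the input sequence length.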

Cite this Paper


BibTeX
@InProceedings{pmlr-v238-trauger24a,
  title     = {Sequence Length Independent Norm-Based Generalization Bounds for Transformers},
  author    = {Trauger, Jacob and Tewari, Ambuj},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages     = {1405--1413},
  year      = {2024},
  editor    = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume    = {238},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--04 May},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v238/trauger24a/trauger24a.pdf},
  url       = {https://proceedings.mlr.press/v238/trauger24a.html}
}
APA
Trauger, J. & Tewari, A. (2024). Sequence Length Independent Norm-Based Generalization Bounds for Transformers. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:1405-1413. Available from https://proceedings.mlr.press/v238/trauger24a.html.