Low-Rank Bottleneck in Multi-head Attention Models

Srinadh Bhojanapalli, Chulhee Yun, Ankit Singh Rawat, Sashank Reddi, Sanjiv Kumar
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:864-873, 2020.

Abstract

The attention-based Transformer architecture has enabled significant advances in natural language processing. Beyond new pre-training techniques, recent improvements rely crucially on working with a relatively large embedding dimension for tokens. Unfortunately, this leads to models that are prohibitively large to employ in downstream tasks. In this paper we identify one of the important factors contributing to this large embedding size requirement. In particular, our analysis highlights that the scaling between the number of heads and the size of each head in the current architecture gives rise to a low-rank bottleneck in attention heads, causing this limitation. We further validate this in our experiments. As a solution, we propose setting the head size of an attention unit to the input sequence length, independent of the number of heads, resulting in multi-head attention layers with provably greater expressive power. We show empirically that this allows us to train models with a relatively smaller embedding dimension and with better performance scaling.
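To make the bottleneck concrete, below is a minimal NumPy sketch (not the authors' code; all names, shapes, and the random initialization are illustrative assumptions) of a single multi-head self-attention layer. Under the standard parameterization the per-head projection width is d/h, so each head's n×n attention-logit matrix has rank at most d/h; the remedy described in the abstract instead fixes the head size to the sequence length n, independent of the number of heads.

```python
import numpy as np

def multihead_attention(X, num_heads, head_size, rng):
    """Single multi-head self-attention layer (illustrative, randomly initialized).

    X: (n, d) token embeddings; head_size is the per-head projection width.
    With the standard choice head_size = d // num_heads, each head's (n, n)
    logit matrix (X Wq)(X Wk)^T has rank at most d // num_heads -- the
    low-rank bottleneck whenever d // num_heads < n.
    """
    n, d = X.shape
    out = np.zeros((n, d))
    for _ in range(num_heads):
        Wq = rng.standard_normal((d, head_size)) / np.sqrt(d)
        Wk = rng.standard_normal((d, head_size)) / np.sqrt(d)
        Wv = rng.standard_normal((d, head_size)) / np.sqrt(d)
        Wo = rng.standard_normal((head_size, d)) / np.sqrt(head_size)
        logits = (X @ Wq) @ (X @ Wk).T / np.sqrt(head_size)  # (n, n), rank <= head_size
        attn = np.exp(logits - logits.max(axis=-1, keepdims=True))
        attn /= attn.sum(axis=-1, keepdims=True)              # row-wise softmax
        out += (attn @ (X @ Wv)) @ Wo
    return out

rng = np.random.default_rng(0)
n, d, h = 128, 64, 8
X = rng.standard_normal((n, d))

# Standard scaling: head_size = d // h = 8 << n, so attention logits are low rank.
Y_standard = multihead_attention(X, num_heads=h, head_size=d // h, rng=rng)

# Proposed fix: head_size = n, independent of the number of heads,
# so each head's (n, n) attention matrix can be full rank.
Y_fixed = multihead_attention(X, num_heads=h, head_size=n, rng=rng)
```

Decoupling the head size from the number of heads in this way is what, per the abstract, allows the embedding dimension d to be chosen smaller without shrinking the expressive power of each head.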

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-bhojanapalli20a,
  title     = {Low-Rank Bottleneck in Multi-head Attention Models},
  author    = {Bhojanapalli, Srinadh and Yun, Chulhee and Rawat, Ankit Singh and Reddi, Sashank and Kumar, Sanjiv},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {864--873},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/bhojanapalli20a/bhojanapalli20a.pdf},
  url       = {https://proceedings.mlr.press/v119/bhojanapalli20a.html}
}
APA
Bhojanapalli, S., Yun, C., Rawat, A. S., Reddi, S., & Kumar, S. (2020). Low-Rank Bottleneck in Multi-head Attention Models. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:864-873. Available from https://proceedings.mlr.press/v119/bhojanapalli20a.html.
