Breaking the Softmax Bottleneck via Learnable Monotonic Pointwise Non-linearities

Octavian Ganea, Sylvain Gelly, Gary Becigneul, Aliaksei Severyn
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:2073-2082, 2019.

Abstract

The Softmax function on top of a final linear layer is the de facto method for outputting probability distributions in neural networks. In many applications, such as language modeling or text generation, this model has to produce distributions over large output vocabularies. Recently, this setup has been shown to have limited representational capacity due to its connection with the rank bottleneck in matrix factorization. However, little is known about the limitations of Linear-Softmax for quantities of practical interest such as cross entropy or mode estimation, a direction that we explore here. As an efficient and effective solution to alleviate this issue, we propose to learn parametric monotonic functions on top of the logits. We theoretically investigate the rank-increasing capabilities of such monotonic functions. Empirically, our method improves over the traditional Linear-Softmax layer on two different quality metrics in synthetic and real language-model experiments, adding little time or memory overhead, while remaining comparable to the more computationally expensive mixture of Softmaxes.
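
The core building block described in the abstract, a learned, strictly increasing pointwise function applied to the logits before the softmax, can be sketched in a few lines. The PyTorch snippet below is a minimal illustration under assumed design choices (a positive-slope linear term plus a sum of positively weighted tanh components, with all sizes and the initialization chosen arbitrarily); it is not the authors' exact parameterization.

# Minimal sketch (assumed parameterization, not the paper's exact construction):
# a learnable, strictly increasing pointwise non-linearity applied to every
# logit before the softmax. Positivity of the slope parameters is enforced
# with softplus, which guarantees monotonicity.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MonotonicPointwise(nn.Module):
    """f(x) = softplus(a) * x + sum_k softplus(w_k) * tanh(softplus(s_k) * x + b_k).

    Every term has a positive derivative in x (softplus is strictly positive),
    so f is strictly increasing: it preserves the ordering of the logits while
    reshaping them non-linearly.
    """

    def __init__(self, num_components: int = 8):
        super().__init__()
        self.a = nn.Parameter(torch.zeros(1))
        self.w = nn.Parameter(torch.zeros(num_components))
        self.s = nn.Parameter(torch.zeros(num_components))
        self.b = nn.Parameter(torch.linspace(-3.0, 3.0, num_components))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (..., vocab_size); the same scalar function is applied to each logit.
        x_exp = x.unsqueeze(-1)  # (..., vocab_size, 1), broadcast against K components
        bumps = F.softplus(self.w) * torch.tanh(F.softplus(self.s) * x_exp + self.b)
        return F.softplus(self.a) * x + bumps.sum(dim=-1)


class NonlinearSoftmaxHead(nn.Module):
    """Linear projection -> learnable monotonic non-linearity -> softmax."""

    def __init__(self, hidden_size: int, vocab_size: int, num_components: int = 8):
        super().__init__()
        self.proj = nn.Linear(hidden_size, vocab_size)
        self.phi = MonotonicPointwise(num_components)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        logits = self.phi(self.proj(h))
        return F.log_softmax(logits, dim=-1)


if __name__ == "__main__":
    head = NonlinearSoftmaxHead(hidden_size=16, vocab_size=100)
    h = torch.randn(4, 16)              # a batch of 4 context vectors
    log_probs = head(h)                 # (4, 100)
    print(log_probs.exp().sum(dim=-1))  # ~ tensor([1., 1., 1., 1.])

Because the pointwise function is strictly increasing, the argmax of each row is unchanged, while the family of distributions the head can produce is no longer constrained by the rank of the linear projection alone; the extra parameters scale with the number of components, not with the vocabulary size, which is why the overhead stays small.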

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-ganea19a,
  title     = {Breaking the Softmax Bottleneck via Learnable Monotonic Pointwise Non-linearities},
  author    = {Ganea, Octavian and Gelly, Sylvain and Becigneul, Gary and Severyn, Aliaksei},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {2073--2082},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/ganea19a/ganea19a.pdf},
  url       = {https://proceedings.mlr.press/v97/ganea19a.html}
}
Endnote
%0 Conference Paper
%T Breaking the Softmax Bottleneck via Learnable Monotonic Pointwise Non-linearities
%A Octavian Ganea
%A Sylvain Gelly
%A Gary Becigneul
%A Aliaksei Severyn
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-ganea19a
%I PMLR
%P 2073--2082
%U https://proceedings.mlr.press/v97/ganea19a.html
%V 97
APA
Ganea, O., Gelly, S., Becigneul, G. & Severyn, A. (2019). Breaking the Softmax Bottleneck via Learnable Monotonic Pointwise Non-linearities. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:2073-2082. Available from https://proceedings.mlr.press/v97/ganea19a.html.
