Efficient softmax approximation for GPUs

Édouard Grave, Armand Joulin, Moustapha Cissé, David Grangier, Hervé Jégou
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:1302-1310, 2017.

Abstract

We propose an approximate strategy to efficiently train neural network based language models over very large vocabularies. Our approach, called adaptive softmax, circumvents the linear dependency on the vocabulary size by exploiting the unbalanced word distribution to form clusters that explicitly minimize the expectation of computation time. Our approach further reduces the computational cost by exploiting the specificities of modern architectures and matrix-matrix vector operations, making it particularly suited for graphical processing units. Our experiments carried out on standard benchmarks, such as EuroParl and One Billion Word, show that our approach brings a large gain in efficiency over standard approximations while achieving an accuracy close to that of the full softmax. The code of our method is available at https://github.com/facebookresearch/adaptive-softmax.
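The clustering idea summarized in the abstract (frequent words kept in a small head cluster, rarer words pushed into lower-dimensional tail clusters) is available off the shelf in PyTorch as nn.AdaptiveLogSoftmaxWithLoss, whose documentation references this paper. The sketch below shows minimal usage; the hidden size, vocabulary size, batch size, and cutoff values are illustrative assumptions, not values taken from the paper.

# Minimal sketch of an adaptive softmax output layer (assumed illustrative sizes).
# Vocabulary indices are assumed to be assigned by decreasing word frequency,
# so the head cluster covers the most frequent words and stays cheap to evaluate.
import torch
import torch.nn as nn

hidden_size = 512          # dimension of the hidden states fed to the output layer
vocab_size = 100_000       # vocabulary size (words indexed by decreasing frequency)
cutoffs = [2_000, 20_000]  # boundaries between the head cluster and two tail clusters

adaptive_softmax = nn.AdaptiveLogSoftmaxWithLoss(
    in_features=hidden_size,
    n_classes=vocab_size,
    cutoffs=cutoffs,
    div_value=4.0,         # each tail cluster projects to a 4x smaller dimension
)

# Fake batch of hidden states and target word indices, standing in for an LM's
# per-position hidden outputs and next-word labels.
hidden = torch.randn(32, hidden_size)
targets = torch.randint(0, vocab_size, (32,))

out = adaptive_softmax(hidden, targets)
print(out.output.shape)    # per-example log-probability of the target word, shape (32,)
print(out.loss)            # mean negative log-likelihood, ready for backpropagation

In a language model, hidden would be the recurrent network's output at each position; ordering the vocabulary by decreasing frequency before assigning indices is what keeps the expected computation per token low.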

Cite this Paper


BibTeX
@InProceedings{pmlr-v70-grave17a, title = {Efficient softmax approximation for {GPU}s}, author = {{\'E}douard Grave and Armand Joulin and Moustapha Ciss{\'e} and David Grangier and Herv{\'e} J{\'e}gou}, booktitle = {Proceedings of the 34th International Conference on Machine Learning}, pages = {1302--1310}, year = {2017}, editor = {Precup, Doina and Teh, Yee Whye}, volume = {70}, series = {Proceedings of Machine Learning Research}, month = {06--11 Aug}, publisher = {PMLR}, pdf = {http://proceedings.mlr.press/v70/grave17a/grave17a.pdf}, url = {https://proceedings.mlr.press/v70/grave17a.html}, abstract = {We propose an approximate strategy to efficiently train neural network based language models over very large vocabularies. Our approach, called adaptive softmax, circumvents the linear dependency on the vocabulary size by exploiting the unbalanced word distribution to form clusters that explicitly minimize the expectation of computation time. Our approach further reduces the computational cost by exploiting the specificities of modern architectures and matrix-matrix vector operations, making it particularly suited for graphical processing units. Our experiments carried out on standard benchmarks, such as EuroParl and One Billion Word, show that our approach brings a large gain in efficiency over standard approximations while achieving an accuracy close to that of the full softmax. The code of our method is available at https://github.com/facebookresearch/adaptive-softmax.} }
Endnote
%0 Conference Paper %T Efficient softmax approximation for GPUs %A Édouard Grave %A Armand Joulin %A Moustapha Cissé %A David Grangier %A Hervé Jégou %B Proceedings of the 34th International Conference on Machine Learning %C Proceedings of Machine Learning Research %D 2017 %E Doina Precup %E Yee Whye Teh %F pmlr-v70-grave17a %I PMLR %P 1302--1310 %U https://proceedings.mlr.press/v70/grave17a.html %V 70 %X We propose an approximate strategy to efficiently train neural network based language models over very large vocabularies. Our approach, called adaptive softmax, circumvents the linear dependency on the vocabulary size by exploiting the unbalanced word distribution to form clusters that explicitly minimize the expectation of computation time. Our approach further reduces the computational cost by exploiting the specificities of modern architectures and matrix-matrix vector operations, making it particularly suited for graphical processing units. Our experiments carried out on standard benchmarks, such as EuroParl and One Billion Word, show that our approach brings a large gain in efficiency over standard approximations while achieving an accuracy close to that of the full softmax. The code of our method is available at https://github.com/facebookresearch/adaptive-softmax.
APA
Grave, É., Joulin, A., Cissé, M., Grangier, D. & Jégou, H. (2017). Efficient softmax approximation for GPUs. Proceedings of the 34th International Conference on Machine Learning, in Proceedings of Machine Learning Research 70:1302-1310. Available from https://proceedings.mlr.press/v70/grave17a.html.