Deep Residual Output Layers for Neural Language Generation

Nikolaos Pappas, James Henderson
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:5000-5011, 2019.

Abstract

Many tasks, including language generation, benefit from learning the structure of the output space, particularly when the space of output labels is large and the data is sparse. State-of-the-art neural language models indirectly capture the output space structure in their classifier weights since they lack parameter sharing across output labels. Learning shared output label mappings helps, but existing methods have limited expressivity and are prone to overfitting. In this paper, we investigate the usefulness of more powerful shared mappings for output labels, and propose a deep residual output mapping with dropout between layers to better capture the structure of the output space and avoid overfitting. Evaluations on three language generation tasks show that our output label mapping can match or improve state-of-the-art recurrent and self-attention architectures, and suggest that the classifier does not necessarily need to be high-rank to better model natural language if it is better at capturing the structure of the output space.
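
To illustrate the idea described in the abstract, below is a minimal, hedged sketch of a deep residual output layer in PyTorch: the usual per-label classifier weight matrix is replaced by a stack of residual blocks with dropout applied to shared (tied) output label embeddings, and logits are computed by comparing the projected decoder state against the transformed label embeddings. Class and parameter names (DeepResidualOutput, num_blocks, etc.) are illustrative assumptions, not the authors' exact implementation or hyperparameters.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeepResidualOutput(nn.Module):
    """Illustrative deep residual output layer (assumed naming, not the paper's code).

    Replaces a standard softmax classifier weight matrix with a shared deep
    residual mapping over the output label embeddings, with dropout between
    layers to regularize the shared mapping.
    """

    def __init__(self, hidden_dim, embed_dim, vocab_size, num_blocks=2, dropout=0.3):
        super().__init__()
        # Label embeddings; in practice these can be tied to the input embeddings.
        self.label_embed = nn.Embedding(vocab_size, embed_dim)
        # Residual blocks transform the label embeddings; dropout between layers
        # is the regularization the abstract refers to.
        self.blocks = nn.ModuleList([
            nn.Sequential(
                nn.Linear(embed_dim, embed_dim),
                nn.ReLU(),
                nn.Dropout(dropout),
            )
            for _ in range(num_blocks)
        ])
        # Projects the decoder hidden state into the label embedding space.
        self.hidden_proj = nn.Linear(hidden_dim, embed_dim)
        self.bias = nn.Parameter(torch.zeros(vocab_size))

    def forward(self, hidden):
        # hidden: (batch, hidden_dim) decoder states.
        e = self.label_embed.weight              # (vocab, embed_dim)
        for block in self.blocks:
            e = e + block(e)                     # residual connection per block
        h = self.hidden_proj(hidden)             # (batch, embed_dim)
        logits = h @ e.t() + self.bias           # (batch, vocab)
        return F.log_softmax(logits, dim=-1)
```

Because every output label is scored through the same residual mapping of its embedding, parameters are shared across labels, which is what lets the layer capture output-space structure rather than learning each label's classifier weights independently.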

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-pappas19a,
  title     = {Deep Residual Output Layers for Neural Language Generation},
  author    = {Pappas, Nikolaos and Henderson, James},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {5000--5011},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/pappas19a/pappas19a.pdf},
  url       = {https://proceedings.mlr.press/v97/pappas19a.html},
  abstract  = {Many tasks, including language generation, benefit from learning the structure of the output space, particularly when the space of output labels is large and the data is sparse. State-of-the-art neural language models indirectly capture the output space structure in their classifier weights since they lack parameter sharing across output labels. Learning shared output label mappings helps, but existing methods have limited expressivity and are prone to overfitting. In this paper, we investigate the usefulness of more powerful shared mappings for output labels, and propose a deep residual output mapping with dropout between layers to better capture the structure of the output space and avoid overfitting. Evaluations on three language generation tasks show that our output label mapping can match or improve state-of-the-art recurrent and self-attention architectures, and suggest that the classifier does not necessarily need to be high-rank to better model natural language if it is better at capturing the structure of the output space.}
}
Endnote
%0 Conference Paper
%T Deep Residual Output Layers for Neural Language Generation
%A Nikolaos Pappas
%A James Henderson
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-pappas19a
%I PMLR
%P 5000--5011
%U https://proceedings.mlr.press/v97/pappas19a.html
%V 97
%X Many tasks, including language generation, benefit from learning the structure of the output space, particularly when the space of output labels is large and the data is sparse. State-of-the-art neural language models indirectly capture the output space structure in their classifier weights since they lack parameter sharing across output labels. Learning shared output label mappings helps, but existing methods have limited expressivity and are prone to overfitting. In this paper, we investigate the usefulness of more powerful shared mappings for output labels, and propose a deep residual output mapping with dropout between layers to better capture the structure of the output space and avoid overfitting. Evaluations on three language generation tasks show that our output label mapping can match or improve state-of-the-art recurrent and self-attention architectures, and suggest that the classifier does not necessarily need to be high-rank to better model natural language if it is better at capturing the structure of the output space.
APA
Pappas, N. & Henderson, J. (2019). Deep Residual Output Layers for Neural Language Generation. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:5000-5011. Available from https://proceedings.mlr.press/v97/pappas19a.html.