Learning Longer-term Dependencies in RNNs with Auxiliary Losses

Trieu Trinh, Andrew Dai, Thang Luong, Quoc Le
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:4965-4974, 2018.

Abstract

Despite recent advances in training recurrent neural networks (RNNs), capturing long-term dependencies in sequences remains a fundamental challenge. Most approaches use backpropagation through time (BPTT), which is difficult to scale to very long sequences. This paper proposes a simple method that improves the ability to capture long-term dependencies in RNNs by adding an unsupervised auxiliary loss to the original objective. This auxiliary loss forces RNNs to either reconstruct previous events or predict next events in a sequence, making truncated backpropagation feasible for long sequences and also improving full BPTT. We evaluate our method on a variety of settings, including pixel-by-pixel image classification with sequence lengths up to 16000, and a real document classification benchmark. Our results highlight good performance and resource efficiency of this approach over competitive baselines, including other recurrent models and a comparably sized Transformer. Further analyses reveal beneficial effects of the auxiliary loss on optimization and regularization, as well as extreme cases where there is little to no backpropagation.
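The auxiliary-loss idea described in the abstract can be sketched minimally: alongside the main training objective, a decoder tries to reconstruct inputs that preceded some anchor position using only the hidden state at that anchor, and the reconstruction error is added to the loss. The NumPy toy below is an illustration under assumptions, not the paper's actual architecture (the paper uses a decoder RNN, sampled anchors, and trained parameters); all names, dimensions, and the shared linear decoder `Wr` are invented for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions: sequence length, input dim, hidden dim.
T, D, H = 12, 4, 8
Wx = rng.normal(0, 0.1, (H, D))   # input-to-hidden weights
Wh = rng.normal(0, 0.1, (H, H))   # hidden-to-hidden weights
Wr = rng.normal(0, 0.1, (D, H))   # linear decoder reconstructing past inputs

def rnn_states(x):
    """Run a plain tanh RNN over x (shape T x D); return all hidden states."""
    h = np.zeros(H)
    states = []
    for t in range(T):
        h = np.tanh(Wx @ x[t] + Wh @ h)
        states.append(h)
    return states

def aux_reconstruction_loss(x, states, anchor=8, k=4):
    """Mean squared error of reconstructing the k inputs preceding
    `anchor` from the hidden state at `anchor` (the unsupervised
    auxiliary signal; here a single shared linear decoder for brevity)."""
    h = states[anchor]
    loss = 0.0
    for j in range(1, k + 1):
        loss += np.mean((Wr @ h - x[anchor - j]) ** 2)
    return loss / k

x = rng.normal(size=(T, D))
states = rnn_states(x)
main_loss = 1.0  # stands in for the supervised (e.g. classification) loss
total_loss = main_loss + 0.5 * aux_reconstruction_loss(x, states)
```

In the full method, gradients of the auxiliary term need only flow through a short truncated window around the anchor, which is what makes truncated BPTT viable for very long sequences.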

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-trinh18a,
  title     = {Learning Longer-term Dependencies in {RNN}s with Auxiliary Losses},
  author    = {Trinh, Trieu and Dai, Andrew and Luong, Thang and Le, Quoc},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {4965--4974},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/trinh18a/trinh18a.pdf},
  url       = {https://proceedings.mlr.press/v80/trinh18a.html},
  abstract  = {Despite recent advances in training recurrent neural networks (RNNs), capturing long-term dependencies in sequences remains a fundamental challenge. Most approaches use backpropagation through time (BPTT), which is difficult to scale to very long sequences. This paper proposes a simple method that improves the ability to capture long term dependencies in RNNs by adding an unsupervised auxiliary loss to the original objective. This auxiliary loss forces RNNs to either reconstruct previous events or predict next events in a sequence, making truncated backpropagation feasible for long sequences and also improving full BPTT. We evaluate our method on a variety of settings, including pixel-by-pixel image classification with sequence lengths up to 16000, and a real document classification benchmark. Our results highlight good performance and resource efficiency of this approach over competitive baselines, including other recurrent models and a comparable sized Transformer. Further analyses reveal beneficial effects of the auxiliary loss on optimization and regularization, as well as extreme cases where there is little to no backpropagation.}
}
Endnote
%0 Conference Paper
%T Learning Longer-term Dependencies in RNNs with Auxiliary Losses
%A Trieu Trinh
%A Andrew Dai
%A Thang Luong
%A Quoc Le
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-trinh18a
%I PMLR
%P 4965--4974
%U https://proceedings.mlr.press/v80/trinh18a.html
%V 80
%X Despite recent advances in training recurrent neural networks (RNNs), capturing long-term dependencies in sequences remains a fundamental challenge. Most approaches use backpropagation through time (BPTT), which is difficult to scale to very long sequences. This paper proposes a simple method that improves the ability to capture long term dependencies in RNNs by adding an unsupervised auxiliary loss to the original objective. This auxiliary loss forces RNNs to either reconstruct previous events or predict next events in a sequence, making truncated backpropagation feasible for long sequences and also improving full BPTT. We evaluate our method on a variety of settings, including pixel-by-pixel image classification with sequence lengths up to 16000, and a real document classification benchmark. Our results highlight good performance and resource efficiency of this approach over competitive baselines, including other recurrent models and a comparable sized Transformer. Further analyses reveal beneficial effects of the auxiliary loss on optimization and regularization, as well as extreme cases where there is little to no backpropagation.
APA
Trinh, T., Dai, A., Luong, T. &amp; Le, Q. (2018). Learning Longer-term Dependencies in RNNs with Auxiliary Losses. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:4965-4974. Available from https://proceedings.mlr.press/v80/trinh18a.html.