On the difficulty of training recurrent neural networks

Razvan Pascanu, Tomas Mikolov, Yoshua Bengio
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(3):1310-1318, 2013.

Abstract

There are two widely known issues with properly training recurrent neural networks: the vanishing and the exploding gradient problems detailed in Bengio et al. (1994). In this paper we attempt to improve the understanding of the underlying issues by exploring these problems from an analytical, a geometric and a dynamical systems perspective. Our analysis is used to justify a simple yet effective solution. We propose a gradient norm clipping strategy to deal with exploding gradients and a soft constraint for the vanishing gradients problem. We empirically validate our hypothesis and proposed solutions in the experimental section.
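
For readers who want the gist of the exploding-gradient fix, the clipping rule described in the abstract simply rescales the gradient whenever its norm exceeds a chosen threshold. Below is a minimal NumPy sketch of that rule; the function name and the default threshold of 1.0 are illustrative choices, not taken from the paper (the paper suggests setting the threshold based on gradient norms observed over many training updates).

    import numpy as np

    def clip_gradient_norm(grad, threshold=1.0):
        # If the L2 norm of the gradient exceeds the threshold,
        # rescale it so its norm equals the threshold:
        #   if ||g|| > threshold:  g <- (threshold / ||g||) * g
        norm = np.linalg.norm(grad)
        if norm > threshold:
            grad = grad * (threshold / norm)
        return grad

    # Example: a gradient of norm 5 is rescaled to norm 1,
    # while a gradient already below the threshold is left unchanged.
    print(clip_gradient_norm(np.array([3.0, 4.0])))   # -> [0.6 0.8]
    print(clip_gradient_norm(np.array([0.3, 0.4])))   # -> [0.3 0.4]
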

Cite this Paper


BibTeX
@InProceedings{pmlr-v28-pascanu13,
  title     = {On the difficulty of training recurrent neural networks},
  author    = {Pascanu, Razvan and Mikolov, Tomas and Bengio, Yoshua},
  booktitle = {Proceedings of the 30th International Conference on Machine Learning},
  pages     = {1310--1318},
  year      = {2013},
  editor    = {Dasgupta, Sanjoy and McAllester, David},
  volume    = {28},
  number    = {3},
  series    = {Proceedings of Machine Learning Research},
  address   = {Atlanta, Georgia, USA},
  month     = {17--19 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v28/pascanu13.pdf},
  url       = {https://proceedings.mlr.press/v28/pascanu13.html},
  abstract  = {There are two widely known issues with properly training recurrent neural networks, the vanishing and the exploding gradient problems detailed in Bengio et al. (1994). In this paper we attempt to improve the understanding of the underlying issues by exploring these problems from an analytical, a geometric and a dynamical systems perspective. Our analysis is used to justify a simple yet effective solution. We propose a gradient norm clipping strategy to deal with exploding gradients and a soft constraint for the vanishing gradients problem. We validate empirically our hypothesis and proposed solutions in the experimental section.}
}
Endnote
%0 Conference Paper
%T On the difficulty of training recurrent neural networks
%A Razvan Pascanu
%A Tomas Mikolov
%A Yoshua Bengio
%B Proceedings of the 30th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2013
%E Sanjoy Dasgupta
%E David McAllester
%F pmlr-v28-pascanu13
%I PMLR
%P 1310--1318
%U https://proceedings.mlr.press/v28/pascanu13.html
%V 28
%N 3
%X There are two widely known issues with properly training recurrent neural networks, the vanishing and the exploding gradient problems detailed in Bengio et al. (1994). In this paper we attempt to improve the understanding of the underlying issues by exploring these problems from an analytical, a geometric and a dynamical systems perspective. Our analysis is used to justify a simple yet effective solution. We propose a gradient norm clipping strategy to deal with exploding gradients and a soft constraint for the vanishing gradients problem. We validate empirically our hypothesis and proposed solutions in the experimental section.
RIS
TY - CPAPER
TI - On the difficulty of training recurrent neural networks
AU - Razvan Pascanu
AU - Tomas Mikolov
AU - Yoshua Bengio
BT - Proceedings of the 30th International Conference on Machine Learning
DA - 2013/05/26
ED - Sanjoy Dasgupta
ED - David McAllester
ID - pmlr-v28-pascanu13
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 28
IS - 3
SP - 1310
EP - 1318
L1 - http://proceedings.mlr.press/v28/pascanu13.pdf
UR - https://proceedings.mlr.press/v28/pascanu13.html
AB - There are two widely known issues with properly training recurrent neural networks, the vanishing and the exploding gradient problems detailed in Bengio et al. (1994). In this paper we attempt to improve the understanding of the underlying issues by exploring these problems from an analytical, a geometric and a dynamical systems perspective. Our analysis is used to justify a simple yet effective solution. We propose a gradient norm clipping strategy to deal with exploding gradients and a soft constraint for the vanishing gradients problem. We validate empirically our hypothesis and proposed solutions in the experimental section.
ER -
APA
Pascanu, R., Mikolov, T. & Bengio, Y. (2013). On the difficulty of training recurrent neural networks. Proceedings of the 30th International Conference on Machine Learning, in Proceedings of Machine Learning Research 28(3):1310-1318. Available from https://proceedings.mlr.press/v28/pascanu13.html.
