Improving the Gating Mechanism of Recurrent Neural Networks

Albert Gu, Caglar Gulcehre, Thomas Paine, Matt Hoffman, Razvan Pascanu
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:3800-3809, 2020.

Abstract

Gating mechanisms are widely used in neural network models, where they allow gradients to backpropagate easily through depth or time. However, their saturation property introduces problems of its own. For example, in recurrent models these gates need to have outputs near 1 to propagate information over long time-delays, which requires them to operate in their saturation regime and hinders gradient-based learning of the gate mechanism. We address this problem by deriving two synergistic modifications to the standard gating mechanism that are easy to implement, introduce no additional hyperparameters, and improve learnability of the gates when they are close to saturation. We show how these changes are related to and improve on alternative recently proposed gating mechanisms such as chrono-initialization and Ordered Neurons. Empirically, our simple gating mechanisms robustly improve the performance of recurrent models on a range of applications, including synthetic memorization tasks, sequential image classification, language modeling, and reinforcement learning, particularly when long-term dependencies are involved.
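The saturation issue the abstract describes can be seen directly from the sigmoid's derivative. The sketch below is purely illustrative (it is not code from the paper): it shows how the gradient flowing through a forget-style gate collapses as the gate's output is pushed towards 1, which is exactly the regime needed to retain information over long time-delays.

```python
# Illustrative sketch only (not the paper's method): gradient of a sigmoid gate
# vanishes as the gate saturates towards 1.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Pre-activations pushing the gate towards saturation (gate value -> 1).
pre_activations = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
gate = sigmoid(pre_activations)

# d(gate)/d(pre-activation) = gate * (1 - gate): shrinks rapidly as gate -> 1,
# so a nearly-always-open gate receives almost no learning signal.
gate_grad = gate * (1.0 - gate)

for z, g, dg in zip(pre_activations, gate, gate_grad):
    print(f"pre-activation {z:4.1f}  gate {g:.4f}  gradient {dg:.6f}")
```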

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-gu20a,
  title     = {Improving the Gating Mechanism of Recurrent Neural Networks},
  author    = {Gu, Albert and Gulcehre, Caglar and Paine, Thomas and Hoffman, Matt and Pascanu, Razvan},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {3800--3809},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/gu20a/gu20a.pdf},
  url       = {https://proceedings.mlr.press/v119/gu20a.html},
  abstract  = {Gating mechanisms are widely used in neural network models, where they allow gradients to backpropagate easily through depth or time. However, their saturation property introduces problems of its own. For example, in recurrent models these gates need to have outputs near 1 to propagate information over long time-delays, which requires them to operate in their saturation regime and hinders gradient-based learning of the gate mechanism. We address this problem by deriving two synergistic modifications to the standard gating mechanism that are easy to implement, introduce no additional hyperparameters, and improve learnability of the gates when they are close to saturation. We show how these changes are related to and improve on alternative recently proposed gating mechanisms such as chrono-initialization and Ordered Neurons. Empirically, our simple gating mechanisms robustly improve the performance of recurrent models on a range of applications, including synthetic memorization tasks, sequential image classification, language modeling, and reinforcement learning, particularly when long-term dependencies are involved.}
}
Endnote
%0 Conference Paper
%T Improving the Gating Mechanism of Recurrent Neural Networks
%A Albert Gu
%A Caglar Gulcehre
%A Thomas Paine
%A Matt Hoffman
%A Razvan Pascanu
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-gu20a
%I PMLR
%P 3800--3809
%U https://proceedings.mlr.press/v119/gu20a.html
%V 119
%X Gating mechanisms are widely used in neural network models, where they allow gradients to backpropagate easily through depth or time. However, their saturation property introduces problems of its own. For example, in recurrent models these gates need to have outputs near 1 to propagate information over long time-delays, which requires them to operate in their saturation regime and hinders gradient-based learning of the gate mechanism. We address this problem by deriving two synergistic modifications to the standard gating mechanism that are easy to implement, introduce no additional hyperparameters, and improve learnability of the gates when they are close to saturation. We show how these changes are related to and improve on alternative recently proposed gating mechanisms such as chrono-initialization and Ordered Neurons. Empirically, our simple gating mechanisms robustly improve the performance of recurrent models on a range of applications, including synthetic memorization tasks, sequential image classification, language modeling, and reinforcement learning, particularly when long-term dependencies are involved.
APA
Gu, A., Gulcehre, C., Paine, T., Hoffman, M. & Pascanu, R. (2020). Improving the Gating Mechanism of Recurrent Neural Networks. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:3800-3809. Available from https://proceedings.mlr.press/v119/gu20a.html.