On orthogonality and learning recurrent networks with long term dependencies

Eugene Vorontsov, Chiheb Trabelsi, Samuel Kadoury, Chris Pal
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:3570-3578, 2017.

Abstract

It is well known that it is challenging to train deep neural networks and recurrent neural networks for tasks that exhibit long term dependencies. The vanishing or exploding gradient problem is a well-known issue associated with these challenges. One approach to addressing vanishing and exploding gradients is to use either soft or hard constraints on weight matrices so as to encourage or enforce orthogonality. Orthogonal matrices preserve gradient norm during backpropagation, and orthogonality may therefore be a desirable property to encourage or enforce. This paper explores issues with optimization convergence, speed and gradient stability when encouraging or enforcing orthogonality. To perform this analysis, we propose a weight matrix factorization and parameterization strategy through which we can bound matrix norms and thereby control the degree of expansivity induced during backpropagation. We find that hard constraints on orthogonality can negatively affect the speed of convergence and model performance.
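To make the factorization concrete, the following is a minimal NumPy sketch (not the authors' implementation) of one way a recurrent transition matrix can be factored so that its singular values, and hence its spectral norm, stay within a chosen margin around 1; the function names, the margin parameter, and the sigmoid squashing are illustrative assumptions.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def random_orthogonal(n, rng):
    # The Q factor of a random Gaussian matrix is orthogonal.
    q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return q

def transition_matrix(u, v, p, margin):
    # Squash the free parameters p into singular values in (1 - margin, 1 + margin),
    # so the spectral norm of W = U diag(s) V^T is bounded near 1.
    s = 1.0 + margin * (2.0 * sigmoid(p) - 1.0)
    return u @ np.diag(s) @ v.T

rng = np.random.default_rng(0)
n, margin = 64, 0.1
u = random_orthogonal(n, rng)
v = random_orthogonal(n, rng)
p = rng.standard_normal(n)  # free parameters controlling the spectrum
w = transition_matrix(u, v, p, margin)

# By construction the largest singular value of w lies in (1 - margin, 1 + margin).
print(np.linalg.svd(w, compute_uv=False).max())

The full method in the paper also keeps the orthogonal factors orthogonal as they are trained; the sketch only illustrates how bounding the spectrum near 1 limits how much gradient norms can grow or shrink through the recurrent transition.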

Cite this Paper


BibTeX
@InProceedings{pmlr-v70-vorontsov17a,
  title     = {On orthogonality and learning recurrent networks with long term dependencies},
  author    = {Eugene Vorontsov and Chiheb Trabelsi and Samuel Kadoury and Chris Pal},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages     = {3570--3578},
  year      = {2017},
  editor    = {Precup, Doina and Teh, Yee Whye},
  volume    = {70},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--11 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v70/vorontsov17a/vorontsov17a.pdf},
  url       = {https://proceedings.mlr.press/v70/vorontsov17a.html},
  abstract  = {It is well known that it is challenging to train deep neural networks and recurrent neural networks for tasks that exhibit long term dependencies. The vanishing or exploding gradient problem is a well known issue associated with these challenges. One approach to addressing vanishing and exploding gradients is to use either soft or hard constraints on weight matrices so as to encourage or enforce orthogonality. Orthogonal matrices preserve gradient norm during backpropagation and may therefore be a desirable property. This paper explores issues with optimization convergence, speed and gradient stability when encouraging or enforcing orthogonality. To perform this analysis, we propose a weight matrix factorization and parameterization strategy through which we can bound matrix norms and therein control the degree of expansivity induced during backpropagation. We find that hard constraints on orthogonality can negatively affect the speed of convergence and model performance.}
}
Endnote
%0 Conference Paper
%T On orthogonality and learning recurrent networks with long term dependencies
%A Eugene Vorontsov
%A Chiheb Trabelsi
%A Samuel Kadoury
%A Chris Pal
%B Proceedings of the 34th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Doina Precup
%E Yee Whye Teh
%F pmlr-v70-vorontsov17a
%I PMLR
%P 3570--3578
%U https://proceedings.mlr.press/v70/vorontsov17a.html
%V 70
%X It is well known that it is challenging to train deep neural networks and recurrent neural networks for tasks that exhibit long term dependencies. The vanishing or exploding gradient problem is a well known issue associated with these challenges. One approach to addressing vanishing and exploding gradients is to use either soft or hard constraints on weight matrices so as to encourage or enforce orthogonality. Orthogonal matrices preserve gradient norm during backpropagation and may therefore be a desirable property. This paper explores issues with optimization convergence, speed and gradient stability when encouraging or enforcing orthogonality. To perform this analysis, we propose a weight matrix factorization and parameterization strategy through which we can bound matrix norms and therein control the degree of expansivity induced during backpropagation. We find that hard constraints on orthogonality can negatively affect the speed of convergence and model performance.
APA
Vorontsov, E., Trabelsi, C., Kadoury, S. & Pal, C. (2017). On orthogonality and learning recurrent networks with long term dependencies. Proceedings of the 34th International Conference on Machine Learning, in Proceedings of Machine Learning Research 70:3570-3578. Available from https://proceedings.mlr.press/v70/vorontsov17a.html.
