Learning Long Term Dependencies via Fourier Recurrent Units

Jiong Zhang, Yibo Lin, Zhao Song, Inderjit Dhillon
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:5815-5823, 2018.

Abstract

It is a known fact that training recurrent neural networks for tasks that have long term dependencies is challenging. One of the main reasons is the vanishing or exploding gradient problem, which prevents gradient information from propagating to early layers. In this paper we propose a simple recurrent architecture, the Fourier Recurrent Unit (FRU), that stabilizes the gradients that arise in its training while giving us stronger expressive power. Specifically, FRU summarizes the hidden states $h^{(t)}$ along the temporal dimension with Fourier basis functions. This allows gradients to easily reach any layer due to FRU’s residual learning structure and the global support of trigonometric functions. We show that FRU has gradient lower and upper bounds independent of temporal dimension. We also show the strong expressivity of sparse Fourier basis, from which FRU obtains its strong expressive power. Our experimental study also demonstrates that with fewer parameters the proposed architecture outperforms other recurrent architectures on many tasks.
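The abstract's core mechanism (accumulating the hidden state along time against cosine basis functions, so each summary statistic receives a residual-style additive update at every step) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function name `fru_forward`, the choice of feeding the mean of the Fourier statistics back into the recurrence, and all parameter shapes are illustrative assumptions.

```python
import numpy as np

def fru_forward(x_seq, W_h, W_x, b, freqs, phases, T):
    """Hedged sketch of a Fourier Recurrent Unit forward pass.

    x_seq:  (T, d_in) input sequence
    W_h:    (d_hidden, d_hidden) recurrent weights
    W_x:    (d_hidden, d_in) input weights
    b:      (d_hidden,) bias
    freqs, phases: (K,) frequencies and phases of the cosine basis
    Returns the (K, d_hidden) Fourier summary statistics.
    """
    d_hidden = b.shape[0]
    K = len(freqs)
    s = np.zeros((K, d_hidden))  # Fourier summaries of the hidden states
    for t, x in enumerate(x_seq, start=1):
        # Hidden state computed from the current Fourier summary and input
        # (using the mean over frequencies is an illustrative choice).
        h = np.tanh(W_h @ s.mean(axis=0) + W_x @ x + b)
        # Residual update: each statistic accumulates h weighted by a
        # cosine basis function with global temporal support, so gradient
        # paths to early steps do not pass through repeated multiplication.
        for k in range(K):
            s[k] = s[k] + (1.0 / T) * np.cos(
                2.0 * np.pi * freqs[k] * t / T + phases[k]
            ) * h
    return s
```

Because each `s[k]` is a sum of per-step terms rather than a product of Jacobians, the gradient of the final summary with respect to any step's contribution stays bounded regardless of sequence length, which is the intuition behind the paper's dimension-independent gradient bounds.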

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-zhang18h,
  title     = {Learning Long Term Dependencies via {F}ourier Recurrent Units},
  author    = {Zhang, Jiong and Lin, Yibo and Song, Zhao and Dhillon, Inderjit},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {5815--5823},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/zhang18h/zhang18h.pdf},
  url       = {https://proceedings.mlr.press/v80/zhang18h.html},
  abstract  = {It is a known fact that training recurrent neural networks for tasks that have long term dependencies is challenging. One of the main reasons is the vanishing or exploding gradient problem, which prevents gradient information from propagating to early layers. In this paper we propose a simple recurrent architecture, the Fourier Recurrent Unit (FRU), that stabilizes the gradients that arise in its training while giving us stronger expressive power. Specifically, FRU summarizes the hidden states $h^{(t)}$ along the temporal dimension with Fourier basis functions. This allows gradients to easily reach any layer due to FRU’s residual learning structure and the global support of trigonometric functions. We show that FRU has gradient lower and upper bounds independent of temporal dimension. We also show the strong expressivity of sparse Fourier basis, from which FRU obtains its strong expressive power. Our experimental study also demonstrates that with fewer parameters the proposed architecture outperforms other recurrent architectures on many tasks.}
}
Endnote
%0 Conference Paper
%T Learning Long Term Dependencies via Fourier Recurrent Units
%A Jiong Zhang
%A Yibo Lin
%A Zhao Song
%A Inderjit Dhillon
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-zhang18h
%I PMLR
%P 5815--5823
%U https://proceedings.mlr.press/v80/zhang18h.html
%V 80
%X It is a known fact that training recurrent neural networks for tasks that have long term dependencies is challenging. One of the main reasons is the vanishing or exploding gradient problem, which prevents gradient information from propagating to early layers. In this paper we propose a simple recurrent architecture, the Fourier Recurrent Unit (FRU), that stabilizes the gradients that arise in its training while giving us stronger expressive power. Specifically, FRU summarizes the hidden states $h^{(t)}$ along the temporal dimension with Fourier basis functions. This allows gradients to easily reach any layer due to FRU’s residual learning structure and the global support of trigonometric functions. We show that FRU has gradient lower and upper bounds independent of temporal dimension. We also show the strong expressivity of sparse Fourier basis, from which FRU obtains its strong expressive power. Our experimental study also demonstrates that with fewer parameters the proposed architecture outperforms other recurrent architectures on many tasks.
APA
Zhang, J., Lin, Y., Song, Z. & Dhillon, I. (2018). Learning Long Term Dependencies via Fourier Recurrent Units. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:5815-5823. Available from https://proceedings.mlr.press/v80/zhang18h.html.