Factorized Recurrent Neural Architectures for Longer Range Dependence

Francois Belletti, Alex Beutel, Sagar Jain, Ed Chi.
Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, PMLR 84:1522-1530, 2018.

Abstract

The ability to capture Long Range Dependence (LRD) in a stochastic process is of prime importance in the context of predictive models. A sequential model with a longer-term memory is better able to contextualize recent observations. In this article, we apply the theory of LRD stochastic processes to modern recurrent architectures, such as LSTMs and GRUs, and prove that they do not provide LRD under assumptions sufficient for gradients to vanish. Motivated by an information-theoretic analysis, we provide a modified recurrent neural architecture that mitigates the issue of faulty memory through redundancy while keeping the compute time constant. Experimental results on a synthetic copy task, the YouTube-8M video classification task, and a recommender system show that we enable better memorization and longer-term memory.
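The abstract's central notion, Long Range Dependence, has a standard characterization that a short sketch can make concrete: a stationary process is long-range dependent when its autocorrelations decay so slowly that their sum diverges, whereas short-memory processes (like an AR(1), or the effective memory of a recurrent network with vanishing gradients) have geometrically decaying, summable autocorrelations. The sketch below is illustrative only and not taken from the paper; the decay rates `0.9**k` and `k**-0.4` are arbitrary example choices.

```python
# Illustrative sketch (assumptions: the specific decay rates are made up
# for demonstration; this is the textbook definition of LRD, not the
# paper's construction).
#
# Short memory: rho(k) = phi**k decays geometrically, so the partial
# sums of autocorrelations converge.
# Long memory (LRD): rho(k) ~ k**(-alpha) with 0 < alpha < 1 decays so
# slowly that the partial sums diverge.

def partial_autocorr_sum(rho, n_lags):
    """Sum of autocorrelations rho(k) over lags k = 1..n_lags."""
    return sum(rho(k) for k in range(1, n_lags + 1))

ar1 = lambda k: 0.9 ** k   # short memory: geometric decay
lrd = lambda k: k ** -0.4  # long memory: power-law decay

for n in (10**2, 10**4, 10**6):
    print(n, partial_autocorr_sum(ar1, n), partial_autocorr_sum(lrd, n))
```

Running this shows the short-memory sums converging to a finite limit (0.9 / (1 - 0.9) = 9) while the power-law sums keep growing without bound as the lag horizon increases, which is the defining signature of LRD that the paper argues standard LSTMs and GRUs cannot reproduce under vanishing-gradient assumptions.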