PredRNN++: Towards A Resolution of the Deep-in-Time Dilemma in Spatiotemporal Predictive Learning

Yunbo Wang, Zhifeng Gao, Mingsheng Long, Jianmin Wang, Philip S Yu
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:5123-5132, 2018.

Abstract

We present PredRNN++, a recurrent network for spatiotemporal predictive learning. In pursuit of a great modeling capability for short-term video dynamics, we make our network deeper in time by leveraging a new recurrent structure named Causal LSTM with cascaded dual memories. To alleviate the gradient propagation difficulties in deep predictive models, we propose a Gradient Highway Unit, which provides alternative quick routes for the gradient flows from outputs back to long-range previous inputs. The gradient highway units work seamlessly with the causal LSTMs, enabling our model to capture the short-term and the long-term video dependencies adaptively. Our model achieves state-of-the-art prediction results on both synthetic and real video datasets, showing its power in modeling entangled motions.
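The Gradient Highway Unit mentioned in the abstract is a highway-style gating mechanism: a switch gate blends a transformed input with the previous hidden state, so when the gate is near zero the state (and its gradient) passes through nearly unchanged. The NumPy sketch below is an illustrative reconstruction based only on the abstract; the function name, weight names, and exact update rule are assumptions, not the paper's reference implementation.

```python
import numpy as np

def gradient_highway_step(x_t, z_prev, W_px, W_pz, W_sx, W_sz):
    """One step of a (hypothetical) gradient-highway update.

    p_t = tanh(W_px @ x_t + W_pz @ z_prev)     # transformed input
    s_t = sigmoid(W_sx @ x_t + W_sz @ z_prev)  # switch gate
    z_t = s_t * p_t + (1 - s_t) * z_prev       # highway blend
    """
    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    p_t = np.tanh(W_px @ x_t + W_pz @ z_prev)
    s_t = sigmoid(W_sx @ x_t + W_sz @ z_prev)
    # When s_t saturates near 0, z_t ~ z_prev: the hidden state is carried
    # forward almost unchanged, giving the "quick route" for gradients
    # from outputs back to long-range previous inputs.
    return s_t * p_t + (1.0 - s_t) * z_prev

# Tiny usage example with random weights (dimensions are arbitrary).
rng = np.random.default_rng(0)
d = 4
x_t = rng.standard_normal(d)
z_prev = np.zeros(d)
Ws = [0.1 * rng.standard_normal((d, d)) for _ in range(4)]
z_t = gradient_highway_step(x_t, z_prev, *Ws)
```

Because the update is a convex combination of a `tanh` output and the previous state, the new state stays bounded whenever the previous one is, which is part of what makes the direct path gradient-friendly.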

Cite this Paper

BibTeX
@InProceedings{pmlr-v80-wang18b,
  title     = {{P}red{RNN}++: Towards A Resolution of the Deep-in-Time Dilemma in Spatiotemporal Predictive Learning},
  author    = {Wang, Yunbo and Gao, Zhifeng and Long, Mingsheng and Wang, Jianmin and Yu, Philip S},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {5123--5132},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/wang18b/wang18b.pdf},
  url       = {https://proceedings.mlr.press/v80/wang18b.html},
  abstract  = {We present PredRNN++, a recurrent network for spatiotemporal predictive learning. In pursuit of a great modeling capability for short-term video dynamics, we make our network deeper in time by leveraging a new recurrent structure named Causal LSTM with cascaded dual memories. To alleviate the gradient propagation difficulties in deep predictive models, we propose a Gradient Highway Unit, which provides alternative quick routes for the gradient flows from outputs back to long-range previous inputs. The gradient highway units work seamlessly with the causal LSTMs, enabling our model to capture the short-term and the long-term video dependencies adaptively. Our model achieves state-of-the-art prediction results on both synthetic and real video datasets, showing its power in modeling entangled motions.}
}
Endnote
%0 Conference Paper
%T PredRNN++: Towards A Resolution of the Deep-in-Time Dilemma in Spatiotemporal Predictive Learning
%A Yunbo Wang
%A Zhifeng Gao
%A Mingsheng Long
%A Jianmin Wang
%A Philip S Yu
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-wang18b
%I PMLR
%P 5123--5132
%U https://proceedings.mlr.press/v80/wang18b.html
%V 80
%X We present PredRNN++, a recurrent network for spatiotemporal predictive learning. In pursuit of a great modeling capability for short-term video dynamics, we make our network deeper in time by leveraging a new recurrent structure named Causal LSTM with cascaded dual memories. To alleviate the gradient propagation difficulties in deep predictive models, we propose a Gradient Highway Unit, which provides alternative quick routes for the gradient flows from outputs back to long-range previous inputs. The gradient highway units work seamlessly with the causal LSTMs, enabling our model to capture the short-term and the long-term video dependencies adaptively. Our model achieves state-of-the-art prediction results on both synthetic and real video datasets, showing its power in modeling entangled motions.
APA
Wang, Y., Gao, Z., Long, M., Wang, J. & Yu, P.S. (2018). PredRNN++: Towards A Resolution of the Deep-in-Time Dilemma in Spatiotemporal Predictive Learning. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:5123-5132. Available from https://proceedings.mlr.press/v80/wang18b.html.