Hierarchical Long-term Video Prediction without Supervision

Nevan Wichers, Ruben Villegas, Dumitru Erhan, Honglak Lee
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:6038-6046, 2018.

Abstract

Much of recent research has been devoted to video prediction and generation, yet most previous work has demonstrated only limited success in generating videos over short-term horizons. The hierarchical video prediction method by Villegas et al. (2017) is an example of a state-of-the-art method for long-term video prediction, but their method is limited because it requires ground truth annotation of high-level structures (e.g., human joint landmarks) at training time. Our network encodes the input frame, predicts a high-level encoding into the future, and then a decoder with access to the first frame produces the predicted image from the predicted encoding. The decoder also produces a mask that outlines the predicted foreground object (e.g., person) as a by-product. Unlike Villegas et al. (2017), we develop a novel training method that jointly trains the encoder, the predictor, and the decoder without high-level supervision; we further improve upon this by using an adversarial loss in the feature space to train the predictor. Our method can predict about 20 seconds into the future and provides better results than Denton and Fergus (2018) and Finn et al. (2016) on the Human 3.6M dataset.
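Below is a minimal PyTorch-style sketch of the encoder-predictor-decoder pipeline and the feature-space adversarial loss described in the abstract. It illustrates the idea only and is not the authors' implementation: every module architecture, dimension, and name here is an assumption made for the example.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    # Maps an input frame to a high-level encoding (sizes are assumed).
    def __init__(self, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(feat_dim))

    def forward(self, frame):
        return self.net(frame)

class Predictor(nn.Module):
    # Rolls the high-level encoding forward in time with an LSTM cell.
    def __init__(self, feat_dim=128):
        super().__init__()
        self.cell = nn.LSTMCell(feat_dim, feat_dim)

    def forward(self, feat, steps):
        h = torch.zeros_like(feat)
        c = torch.zeros_like(feat)
        preds = []
        for _ in range(steps):
            h, c = self.cell(feat, (h, c))
            feat = h
            preds.append(feat)
        return preds

class Decoder(nn.Module):
    # Decodes a predicted encoding, given the first frame, into an RGB
    # frame plus a foreground mask; the mask composites the predicted
    # foreground over the (mostly static) first-frame background.
    def __init__(self, feat_dim=128):
        super().__init__()
        self.fc = nn.Linear(feat_dim, 64 * 16 * 16)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(64 + 3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 4, 4, stride=2, padding=1))  # 3 RGB + 1 mask

    def forward(self, feat, first_frame):
        x = self.fc(feat).view(-1, 64, 16, 16)
        ctx = F.interpolate(first_frame, size=(16, 16))
        out = self.deconv(torch.cat([x, ctx], dim=1))
        rgb, mask = torch.tanh(out[:, :3]), torch.sigmoid(out[:, 3:])
        bg = F.interpolate(first_frame, size=rgb.shape[-2:])
        return mask * rgb + (1 - mask) * bg, mask

class FeatureDiscriminator(nn.Module):
    # Scores whether an encoding came from a real future frame or from
    # the predictor; the adversarial loss lives in feature space.
    def __init__(self, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 1))

    def forward(self, feat):
        return self.net(feat)

def predictor_adversarial_loss(disc, predicted_feats):
    # Non-saturating GAN loss: the predictor is rewarded when the
    # discriminator scores its predicted encodings as "real".
    logits = disc(torch.stack(predicted_feats))
    return F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))

A quick shape check (again, purely illustrative):

enc, pred, dec = Encoder(), Predictor(), Decoder()
first = torch.randn(2, 3, 64, 64)    # batch of first frames
feats = pred(enc(first), steps=8)    # roll the encoding 8 steps ahead
frame, mask = dec(feats[-1], first)  # decode the final step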

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-wichers18a,
  title     = {Hierarchical Long-term Video Prediction without Supervision},
  author    = {Wichers, Nevan and Villegas, Ruben and Erhan, Dumitru and Lee, Honglak},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {6038--6046},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/wichers18a/wichers18a.pdf},
  url       = {https://proceedings.mlr.press/v80/wichers18a.html},
  abstract  = {Much of recent research has been devoted to video prediction and generation, yet most previous work has demonstrated only limited success in generating videos over short-term horizons. The hierarchical video prediction method by Villegas et al. (2017) is an example of a state-of-the-art method for long-term video prediction, but their method is limited because it requires ground truth annotation of high-level structures (e.g., human joint landmarks) at training time. Our network encodes the input frame, predicts a high-level encoding into the future, and then a decoder with access to the first frame produces the predicted image from the predicted encoding. The decoder also produces a mask that outlines the predicted foreground object (e.g., person) as a by-product. Unlike Villegas et al. (2017), we develop a novel training method that jointly trains the encoder, the predictor, and the decoder without high-level supervision; we further improve upon this by using an adversarial loss in the feature space to train the predictor. Our method can predict about 20 seconds into the future and provides better results than Denton and Fergus (2018) and Finn et al. (2016) on the Human 3.6M dataset.}
}
Endnote
%0 Conference Paper
%T Hierarchical Long-term Video Prediction without Supervision
%A Nevan Wichers
%A Ruben Villegas
%A Dumitru Erhan
%A Honglak Lee
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-wichers18a
%I PMLR
%P 6038--6046
%U https://proceedings.mlr.press/v80/wichers18a.html
%V 80
%X Much of recent research has been devoted to video prediction and generation, yet most previous work has demonstrated only limited success in generating videos over short-term horizons. The hierarchical video prediction method by Villegas et al. (2017) is an example of a state-of-the-art method for long-term video prediction, but their method is limited because it requires ground truth annotation of high-level structures (e.g., human joint landmarks) at training time. Our network encodes the input frame, predicts a high-level encoding into the future, and then a decoder with access to the first frame produces the predicted image from the predicted encoding. The decoder also produces a mask that outlines the predicted foreground object (e.g., person) as a by-product. Unlike Villegas et al. (2017), we develop a novel training method that jointly trains the encoder, the predictor, and the decoder without high-level supervision; we further improve upon this by using an adversarial loss in the feature space to train the predictor. Our method can predict about 20 seconds into the future and provides better results than Denton and Fergus (2018) and Finn et al. (2016) on the Human 3.6M dataset.
APA
Wichers, N., Villegas, R., Erhan, D. & Lee, H. (2018). Hierarchical Long-term Video Prediction without Supervision. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:6038-6046. Available from https://proceedings.mlr.press/v80/wichers18a.html.
