Text-To-4D Dynamic Scene Generation

Uriel Singer, Shelly Sheynin, Adam Polyak, Oron Ashual, Iurii Makarov, Filippos Kokkinos, Naman Goyal, Andrea Vedaldi, Devi Parikh, Justin Johnson, Yaniv Taigman
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:31915-31929, 2023.

Abstract

We present MAV3D (Make-A-Video3D), a method for generating three-dimensional dynamic scenes from text descriptions. Our approach uses a 4D dynamic Neural Radiance Field (NeRF), which is optimized for scene appearance, density, and motion consistency by querying a Text-to-Video (T2V) diffusion-based model. The dynamic video output generated from the provided text can be viewed from any camera location and angle, and can be composited into any 3D environment. MAV3D does not require any 3D or 4D data and the T2V model is trained only on Text-Image pairs and unlabeled videos. We demonstrate the effectiveness of our approach using comprehensive quantitative and qualitative experiments and show an improvement over previously established internal baselines. To the best of our knowledge, our method is the first to generate 3D dynamic scenes given a text description. Generated samples can be viewed at make-a-video3d.github.io
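To make the abstract's description concrete, the loop below is a minimal, hypothetical sketch of a score-distillation-style optimization in which a differentiable dynamic (4D) NeRF is rendered into short clips and scored by a frozen text-to-video diffusion model, as the paper describes at a high level. It is not the authors' released code, and every interface used here (nerf.render_video, t2v.encode_text, t2v.add_noise, t2v.predict_noise, t2v.num_timesteps, sample_camera) is an assumption made purely for illustration.

# Illustrative sketch only -- not the authors' implementation. All object
# interfaces below are hypothetical placeholders.
import torch

def optimize_dynamic_nerf(nerf, t2v, sample_camera, prompt,
                          steps=10_000, num_frames=16, lr=1e-3):
    """Fit `nerf` to `prompt` by distilling gradients from a frozen T2V diffusion model."""
    opt = torch.optim.Adam(nerf.parameters(), lr=lr)
    text_emb = t2v.encode_text(prompt)               # text conditioning for the diffusion model

    for _ in range(steps):
        cam = sample_camera()                        # random viewpoint each iteration
        video = nerf.render_video(cam, num_frames)   # (T, 3, H, W), differentiable w.r.t. NeRF params

        t = torch.randint(0, t2v.num_timesteps, (1,), device=video.device)
        noise = torch.randn_like(video)
        noised = t2v.add_noise(video, noise, t)      # forward-diffuse the rendered clip

        with torch.no_grad():                        # the T2V model stays frozen
            pred = t2v.predict_noise(noised, t, text_emb)

        # Score-distillation-style update: (pred - noise) nudges the rendering toward
        # videos the diffusion model considers likely for this prompt.
        grad = pred - noise
        loss = (grad * video).sum()                  # surrogate loss; d(loss)/d(video) == grad

        opt.zero_grad()
        loss.backward()
        opt.step()

    return nerf

In this style of optimization only the NeRF parameters receive gradients; the text-to-video model acts purely as a learned prior over text-conditioned videos, which is consistent with the abstract's statement that no 3D or 4D training data is required.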

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-singer23a,
  title     = {Text-To-4{D} Dynamic Scene Generation},
  author    = {Singer, Uriel and Sheynin, Shelly and Polyak, Adam and Ashual, Oron and Makarov, Iurii and Kokkinos, Filippos and Goyal, Naman and Vedaldi, Andrea and Parikh, Devi and Johnson, Justin and Taigman, Yaniv},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {31915--31929},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/singer23a/singer23a.pdf},
  url       = {https://proceedings.mlr.press/v202/singer23a.html},
  abstract  = {We present MAV3D (Make-A-Video3D), a method for generating three-dimensional dynamic scenes from text descriptions. Our approach uses a 4D dynamic Neural Radiance Field (NeRF), which is optimized for scene appearance, density, and motion consistency by querying a Text-to-Video (T2V) diffusion-based model. The dynamic video output generated from the provided text can be viewed from any camera location and angle, and can be composited into any 3D environment. MAV3D does not require any 3D or 4D data and the T2V model is trained only on Text-Image pairs and unlabeled videos. We demonstrate the effectiveness of our approach using comprehensive quantitative and qualitative experiments and show an improvement over previously established internal baselines. To the best of our knowledge, our method is the first to generate 3D dynamic scenes given a text description. Generated samples can be viewed at make-a-video3d.github.io}
}
Endnote
%0 Conference Paper
%T Text-To-4D Dynamic Scene Generation
%A Uriel Singer
%A Shelly Sheynin
%A Adam Polyak
%A Oron Ashual
%A Iurii Makarov
%A Filippos Kokkinos
%A Naman Goyal
%A Andrea Vedaldi
%A Devi Parikh
%A Justin Johnson
%A Yaniv Taigman
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-singer23a
%I PMLR
%P 31915--31929
%U https://proceedings.mlr.press/v202/singer23a.html
%V 202
%X We present MAV3D (Make-A-Video3D), a method for generating three-dimensional dynamic scenes from text descriptions. Our approach uses a 4D dynamic Neural Radiance Field (NeRF), which is optimized for scene appearance, density, and motion consistency by querying a Text-to-Video (T2V) diffusion-based model. The dynamic video output generated from the provided text can be viewed from any camera location and angle, and can be composited into any 3D environment. MAV3D does not require any 3D or 4D data and the T2V model is trained only on Text-Image pairs and unlabeled videos. We demonstrate the effectiveness of our approach using comprehensive quantitative and qualitative experiments and show an improvement over previously established internal baselines. To the best of our knowledge, our method is the first to generate 3D dynamic scenes given a text description. Generated samples can be viewed at make-a-video3d.github.io
APA
Singer, U., Sheynin, S., Polyak, A., Ashual, O., Makarov, I., Kokkinos, F., Goyal, N., Vedaldi, A., Parikh, D., Johnson, J. & Taigman, Y. (2023). Text-To-4D Dynamic Scene Generation. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:31915-31929. Available from https://proceedings.mlr.press/v202/singer23a.html.
