Genie: Generative Interactive Environments

Jake Bruce, Michael D Dennis, Ashley Edwards, Jack Parker-Holder, Yuge Shi, Edward Hughes, Matthew Lai, Aditi Mavalankar, Richie Steigerwald, Chris Apps, Yusuf Aytar, Sarah Maria Elisabeth Bechtle, Feryal Behbahani, Stephanie C.Y. Chan, Nicolas Heess, Lucy Gonzalez, Simon Osindero, Sherjil Ozair, Scott Reed, Jingwei Zhang, Konrad Zolna, Jeff Clune, Nando De Freitas, Satinder Singh, Tim Rocktäschel
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:4603-4623, 2024.

Abstract

We introduce Genie, the first generative interactive environment trained in an unsupervised manner from unlabelled Internet videos. The model can be prompted to generate an endless variety of action-controllable virtual worlds described through text, synthetic images, photographs, and even sketches. At 11B parameters, Genie can be considered a foundation world model. It comprises a spatiotemporal video tokenizer, an autoregressive dynamics model, and a simple and scalable latent action model. Genie enables users to act in the generated environments on a frame-by-frame basis despite training without any ground-truth action labels or other domain-specific requirements typically found in the world model literature. Further, the resulting learned latent action space facilitates training agents to imitate behaviors from unseen videos, opening the path for training generalist agents of the future.
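To make the abstract's three-component pipeline concrete, here is a highly simplified, hypothetical sketch of the interaction loop it describes: a video tokenizer, a latent action model with a small discrete action vocabulary, and an autoregressive dynamics model that generates the next frame from tokens plus a chosen action. Every function and constant below is an illustrative stand-in inferred from the abstract, not the paper's actual architecture or API.

```python
# Hypothetical sketch of Genie's inference loop, based only on the abstract.
# Real components are large neural networks; these stand-ins just show the
# data flow: tokenize -> (choose latent action) -> predict next tokens.

NUM_LATENT_ACTIONS = 8  # assumption: a small learned discrete action space


def tokenize_frame(frame):
    """Stand-in video tokenizer: map a frame (2D grid of pixel values)
    to a flat sequence of discrete tokens."""
    return [pixel % 16 for row in frame for pixel in row]


def infer_latent_action(prev_tokens, next_tokens):
    """Stand-in latent action model: infer which discrete latent action
    best explains the transition between two consecutive frames."""
    diff = sum(a != b for a, b in zip(prev_tokens, next_tokens))
    return diff % NUM_LATENT_ACTIONS


def dynamics_step(tokens, action):
    """Stand-in autoregressive dynamics model: predict the next frame's
    tokens from the current tokens and a chosen latent action."""
    return [(t + action) % 16 for t in tokens]


def rollout(first_frame, user_actions):
    """Frame-by-frame interaction: starting from a single prompt frame,
    the user supplies a latent action each step and the dynamics model
    generates the next frame's token sequence."""
    tokens = tokenize_frame(first_frame)
    trajectory = [tokens]
    for action in user_actions:
        tokens = dynamics_step(tokens, action)
        trajectory.append(tokens)
    return trajectory
```

Because the latent actions are learned without labels, the same loop run in reverse (inferring actions from video pairs via `infer_latent_action`) is what the abstract suggests enables imitating behaviors from unseen videos.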

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-bruce24a,
  title     = {Genie: Generative Interactive Environments},
  author    = {Bruce, Jake and Dennis, Michael D and Edwards, Ashley and Parker-Holder, Jack and Shi, Yuge and Hughes, Edward and Lai, Matthew and Mavalankar, Aditi and Steigerwald, Richie and Apps, Chris and Aytar, Yusuf and Bechtle, Sarah Maria Elisabeth and Behbahani, Feryal and Chan, Stephanie C.Y. and Heess, Nicolas and Gonzalez, Lucy and Osindero, Simon and Ozair, Sherjil and Reed, Scott and Zhang, Jingwei and Zolna, Konrad and Clune, Jeff and Freitas, Nando De and Singh, Satinder and Rockt\"{a}schel, Tim},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {4603--4623},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/bruce24a/bruce24a.pdf},
  url       = {https://proceedings.mlr.press/v235/bruce24a.html},
  abstract  = {We introduce Genie, the first generative interactive environment trained in an unsupervised manner from unlabelled Internet videos. The model can be prompted to generate an endless variety of action-controllable virtual worlds described through text, synthetic images, photographs, and even sketches. At 11B parameters, Genie can be considered a foundation world model. It is comprised of a spatiotemporal video tokenizer, an autoregressive dynamics model, and a simple and scalable latent action model. Genie enables users to act in the generated environments on a frame-by-frame basis despite training without any ground-truth action labels or other domain specific requirements typically found in the world model literature. Further the resulting learned latent action space facilitates training agents to imitate behaviors from unseen videos, opening the path for training generalist agents of the future.}
}
Endnote
%0 Conference Paper
%T Genie: Generative Interactive Environments
%A Jake Bruce
%A Michael D Dennis
%A Ashley Edwards
%A Jack Parker-Holder
%A Yuge Shi
%A Edward Hughes
%A Matthew Lai
%A Aditi Mavalankar
%A Richie Steigerwald
%A Chris Apps
%A Yusuf Aytar
%A Sarah Maria Elisabeth Bechtle
%A Feryal Behbahani
%A Stephanie C.Y. Chan
%A Nicolas Heess
%A Lucy Gonzalez
%A Simon Osindero
%A Sherjil Ozair
%A Scott Reed
%A Jingwei Zhang
%A Konrad Zolna
%A Jeff Clune
%A Nando De Freitas
%A Satinder Singh
%A Tim Rocktäschel
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-bruce24a
%I PMLR
%P 4603--4623
%U https://proceedings.mlr.press/v235/bruce24a.html
%V 235
%X We introduce Genie, the first generative interactive environment trained in an unsupervised manner from unlabelled Internet videos. The model can be prompted to generate an endless variety of action-controllable virtual worlds described through text, synthetic images, photographs, and even sketches. At 11B parameters, Genie can be considered a foundation world model. It is comprised of a spatiotemporal video tokenizer, an autoregressive dynamics model, and a simple and scalable latent action model. Genie enables users to act in the generated environments on a frame-by-frame basis despite training without any ground-truth action labels or other domain specific requirements typically found in the world model literature. Further the resulting learned latent action space facilitates training agents to imitate behaviors from unseen videos, opening the path for training generalist agents of the future.
APA
Bruce, J., Dennis, M.D., Edwards, A., Parker-Holder, J., Shi, Y., Hughes, E., Lai, M., Mavalankar, A., Steigerwald, R., Apps, C., Aytar, Y., Bechtle, S.M.E., Behbahani, F., Chan, S.C., Heess, N., Gonzalez, L., Osindero, S., Ozair, S., Reed, S., Zhang, J., Zolna, K., Clune, J., Freitas, N.D., Singh, S. & Rocktäschel, T. (2024). Genie: Generative Interactive Environments. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:4603-4623. Available from https://proceedings.mlr.press/v235/bruce24a.html.