Learning Multi-Object Dynamics with Compositional Neural Radiance Fields

Danny Driess, Zhiao Huang, Yunzhu Li, Russ Tedrake, Marc Toussaint
Proceedings of The 6th Conference on Robot Learning, PMLR 205:1755-1768, 2023.

Abstract

We present a method to learn compositional multi-object dynamics models from image observations based on implicit object encoders, Neural Radiance Fields (NeRFs), and graph neural networks. NeRFs have become a popular choice for representing scenes due to their strong 3D prior. However, most NeRF approaches are trained on a single scene, representing the whole scene with a global model, which makes generalization to novel scenes containing different numbers of objects challenging. Instead, we present a compositional, object-centric auto-encoder framework that maps multiple views of the scene to a set of latent vectors representing each object separately. The latent vectors parameterize individual NeRFs from which the scene can be reconstructed. Based on those latent vectors, we train a graph neural network dynamics model in the latent space to achieve compositionality for dynamics prediction. A key feature of our approach is that the latent vectors are forced to encode 3D information through the NeRF decoder, which enables us to incorporate structural priors in learning the dynamics models, making long-term predictions more stable compared to several baselines. Simulated and real-world experiments show that our method can model and learn the dynamics of compositional scenes including rigid and deformable objects. Video: https://dannydriess.github.io/compnerfdyn/
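To illustrate the compositional dynamics idea described above, here is a minimal toy sketch (not the authors' code): each object in the scene is a latent vector, and one prediction step of a fully connected graph network aggregates pairwise messages between object latents and applies a node update. The `message` and `dynamics_step` functions below are hypothetical stand-ins for the learned MLPs in the paper, just to show the data flow.

```python
# Toy sketch of latent-space graph dynamics: objects are latent vectors,
# and one step exchanges pairwise messages over a fully connected graph.
# The edge and node functions here are simple placeholders; in the paper
# they are learned networks trained on the NeRF auto-encoder latents.

def message(z_i, z_j):
    # Placeholder edge function: elementwise difference between latents.
    return [b - a for a, b in zip(z_i, z_j)]

def dynamics_step(latents):
    # One graph-network step: aggregate messages, then update each node.
    next_latents = []
    for i, z_i in enumerate(latents):
        agg = [0.0] * len(z_i)
        for j, z_j in enumerate(latents):
            if i == j:
                continue
            for k, m in enumerate(message(z_i, z_j)):
                agg[k] += m
        # Placeholder node update: small residual step toward the aggregate.
        next_latents.append([z + 0.1 * a for z, a in zip(z_i, agg)])
    return next_latents

# Two objects with 2-D latents; iterating the step yields a long-horizon
# rollout entirely in latent space, as in the paper's prediction setup.
latents = [[0.0, 0.0], [1.0, 1.0]]
for _ in range(3):
    latents = dynamics_step(latents)
```

Because every object is handled by the same shared edge/node functions, the same step applies unchanged to scenes with a different number of objects, which is the compositionality property the paper exploits.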

Cite this Paper


BibTeX
@InProceedings{pmlr-v205-driess23a,
  title     = {Learning Multi-Object Dynamics with Compositional Neural Radiance Fields},
  author    = {Driess, Danny and Huang, Zhiao and Li, Yunzhu and Tedrake, Russ and Toussaint, Marc},
  booktitle = {Proceedings of The 6th Conference on Robot Learning},
  pages     = {1755--1768},
  year      = {2023},
  editor    = {Liu, Karen and Kulic, Dana and Ichnowski, Jeff},
  volume    = {205},
  series    = {Proceedings of Machine Learning Research},
  month     = {14--18 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v205/driess23a/driess23a.pdf},
  url       = {https://proceedings.mlr.press/v205/driess23a.html},
  abstract  = {We present a method to learn compositional multi-object dynamics models from image observations based on implicit object encoders, Neural Radiance Fields (NeRFs), and graph neural networks. NeRFs have become a popular choice for representing scenes due to their strong 3D prior. However, most NeRF approaches are trained on a single scene, representing the whole scene with a global model, making generalization to novel scenes, containing different numbers of objects, challenging. Instead, we present a compositional, object-centric auto-encoder framework that maps multiple views of the scene to a set of latent vectors representing each object separately. The latent vectors parameterize individual NeRFs from which the scene can be reconstructed. Based on those latent vectors, we train a graph neural network dynamics model in the latent space to achieve compositionality for dynamics prediction. A key feature of our approach is that the latent vectors are forced to encode 3D information through the NeRF decoder, which enables us to incorporate structural priors in learning the dynamics models, making long-term predictions more stable compared to several baselines. Simulated and real world experiments show that our method can model and learn the dynamics of compositional scenes including rigid and deformable objects. Video: https://dannydriess.github.io/compnerfdyn/}
}
Endnote
%0 Conference Paper
%T Learning Multi-Object Dynamics with Compositional Neural Radiance Fields
%A Danny Driess
%A Zhiao Huang
%A Yunzhu Li
%A Russ Tedrake
%A Marc Toussaint
%B Proceedings of The 6th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Karen Liu
%E Dana Kulic
%E Jeff Ichnowski
%F pmlr-v205-driess23a
%I PMLR
%P 1755--1768
%U https://proceedings.mlr.press/v205/driess23a.html
%V 205
%X We present a method to learn compositional multi-object dynamics models from image observations based on implicit object encoders, Neural Radiance Fields (NeRFs), and graph neural networks. NeRFs have become a popular choice for representing scenes due to their strong 3D prior. However, most NeRF approaches are trained on a single scene, representing the whole scene with a global model, making generalization to novel scenes, containing different numbers of objects, challenging. Instead, we present a compositional, object-centric auto-encoder framework that maps multiple views of the scene to a set of latent vectors representing each object separately. The latent vectors parameterize individual NeRFs from which the scene can be reconstructed. Based on those latent vectors, we train a graph neural network dynamics model in the latent space to achieve compositionality for dynamics prediction. A key feature of our approach is that the latent vectors are forced to encode 3D information through the NeRF decoder, which enables us to incorporate structural priors in learning the dynamics models, making long-term predictions more stable compared to several baselines. Simulated and real world experiments show that our method can model and learn the dynamics of compositional scenes including rigid and deformable objects. Video: https://dannydriess.github.io/compnerfdyn/
APA
Driess, D., Huang, Z., Li, Y., Tedrake, R. & Toussaint, M. (2023). Learning Multi-Object Dynamics with Compositional Neural Radiance Fields. Proceedings of The 6th Conference on Robot Learning, in Proceedings of Machine Learning Research 205:1755-1768. Available from https://proceedings.mlr.press/v205/driess23a.html.