D³Fields: Dynamic 3D Descriptor Fields for Zero-Shot Generalizable Rearrangement

Yixuan Wang, Mingtong Zhang, Zhuoran Li, Tarik Kelestemur, Katherine Rose Driggs-Campbell, Jiajun Wu, Li Fei-Fei, Yunzhu Li
Proceedings of The 8th Conference on Robot Learning, PMLR 270:272-298, 2025.

Abstract

Scene representation is a crucial design choice in robotic manipulation systems. An ideal representation should be 3D, dynamic, and semantic to meet the demands of diverse manipulation tasks, yet previous works rarely satisfy all three properties simultaneously. In this work, we introduce D³Fields: dynamic 3D descriptor fields. These fields are implicit 3D representations that take in 3D points and output semantic features and instance masks, and they also capture the dynamics of the underlying 3D environment. Specifically, we project arbitrary 3D points in the workspace onto multi-view 2D visual observations and interpolate features derived from visual foundation models. The resulting fused descriptor fields allow flexible goal specification using 2D images with varied contexts, styles, and instances. To evaluate the effectiveness of these descriptor fields, we apply our representation to rearrangement tasks in a zero-shot manner. Through extensive evaluation in both real-world and simulated settings, we demonstrate that D³Fields are effective for zero-shot generalizable rearrangement tasks. We also compare D³Fields with state-of-the-art implicit 3D representations and show significant improvements in effectiveness and efficiency. Project page: https://robopil.github.io/d3fields/
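To make the fusion step concrete, below is a minimal sketch (not the authors' released code) of how a descriptor field query might be implemented: arbitrary 3D points are projected into each calibrated view with a pinhole camera model, per-view 2D feature maps (e.g., from a foundation model such as DINO) are bilinearly interpolated at the projected pixels, and the results are averaged over the views where the point is visible. All function and variable names are illustrative; visibility here is only a front-of-camera and in-image check, whereas the full method also handles occlusion (e.g., via fused depth) and produces instance masks, both omitted for brevity.

# Minimal sketch of multi-view feature fusion for a descriptor field,
# in the spirit of D³Fields (illustrative, not the authors' code).
# Assumes pinhole cameras with known intrinsics K and world-to-camera
# extrinsics (R, t), and per-view feature maps from a 2D foundation model.
import torch
import torch.nn.functional as F

def project_points(pts_world, K, R, t):
    """Project (P, 3) world points to pixel coordinates and camera depth."""
    pts_cam = pts_world @ R.T + t                  # (P, 3) in camera frame
    depth = pts_cam[:, 2:3]                        # (P, 1), > 0 if in front
    uv = (pts_cam @ K.T)[:, :2] / depth.clamp(min=1e-6)  # (P, 2) pixels
    return uv, depth.squeeze(-1)

def descriptor_field(pts_world, feat_maps, Ks, Rs, ts):
    """Query fused descriptors for (P, 3) points from V views.

    feat_maps: (V, C, H, W) per-view 2D feature maps.
    Ks: (V, 3, 3) intrinsics; Rs: (V, 3, 3), ts: (V, 3) extrinsics.
    Returns (P, C) descriptors averaged over views where each point is visible.
    """
    V, C, H, W = feat_maps.shape
    descs, weights = [], []
    for v in range(V):
        uv, depth = project_points(pts_world, Ks[v], Rs[v], ts[v])
        # Normalize pixel coordinates to [-1, 1] for grid_sample.
        grid = torch.stack([uv[:, 0] / (W - 1), uv[:, 1] / (H - 1)], dim=-1)
        grid = grid * 2 - 1
        # Bilinearly interpolate the feature map at the projected pixels.
        feat = F.grid_sample(feat_maps[v:v+1], grid.view(1, -1, 1, 2),
                             mode='bilinear', align_corners=True)  # (1, C, P, 1)
        feat = feat[0, :, :, 0].T                                  # (P, C)
        # Simple visibility heuristic: in front of the camera and in-image.
        valid = (depth > 0) & (grid.abs() <= 1).all(dim=-1)
        descs.append(feat * valid[:, None])
        weights.append(valid.float())
    w = torch.stack(weights).sum(0).clamp(min=1e-6)        # (P,)
    return torch.stack(descs).sum(0) / w[:, None]          # (P, C)

Given such a field, goal specification from a 2D image reduces to matching the query descriptors against features extracted from the goal image, which is why the representation transfers across contexts, styles, and instances without task-specific training.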

Cite this Paper


BibTeX
@InProceedings{pmlr-v270-wang25b,
  title     = {D$^3$Fields: Dynamic 3D Descriptor Fields for Zero-Shot Generalizable Rearrangement},
  author    = {Wang, Yixuan and Zhang, Mingtong and Li, Zhuoran and Kelestemur, Tarik and Driggs-Campbell, Katherine Rose and Wu, Jiajun and Fei-Fei, Li and Li, Yunzhu},
  booktitle = {Proceedings of The 8th Conference on Robot Learning},
  pages     = {272--298},
  year      = {2025},
  editor    = {Agrawal, Pulkit and Kroemer, Oliver and Burgard, Wolfram},
  volume    = {270},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Nov},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v270/main/assets/wang25b/wang25b.pdf},
  url       = {https://proceedings.mlr.press/v270/wang25b.html},
  abstract  = {Scene representation is a crucial design choice in robotic manipulation systems. An ideal representation is expected to be 3D, dynamic, and semantic to meet the demands of diverse manipulation tasks. However, previous works often lack all three properties simultaneously. In this work, we introduce D$^3$Fields: dynamic 3D descriptor fields. These fields are implicit 3D representations that take in 3D points and output semantic features and instance masks. They can also capture the dynamics of the underlying 3D environments. Specifically, we project arbitrary 3D points in the workspace onto multi-view 2D visual observations and interpolate features derived from visual foundational models. The resulting fused descriptor fields allow for flexible goal specifications using 2D images with varied contexts, styles, and instances. To evaluate the effectiveness of these descriptor fields, we apply our representation to rearrangement tasks in a zero-shot manner. Through extensive evaluation in real worlds and simulations, we demonstrate that D$^3$Fields are effective for zero-shot generalizable rearrangement tasks. We also compare D$^3$Fields with state-of-the-art implicit 3D representations and show significant improvements in effectiveness and efficiency. Project page: https://robopil.github.io/d3fields/}
}
Endnote
%0 Conference Paper
%T D$^3$Fields: Dynamic 3D Descriptor Fields for Zero-Shot Generalizable Rearrangement
%A Yixuan Wang
%A Mingtong Zhang
%A Zhuoran Li
%A Tarik Kelestemur
%A Katherine Rose Driggs-Campbell
%A Jiajun Wu
%A Li Fei-Fei
%A Yunzhu Li
%B Proceedings of The 8th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Pulkit Agrawal
%E Oliver Kroemer
%E Wolfram Burgard
%F pmlr-v270-wang25b
%I PMLR
%P 272--298
%U https://proceedings.mlr.press/v270/wang25b.html
%V 270
%X Scene representation is a crucial design choice in robotic manipulation systems. An ideal representation is expected to be 3D, dynamic, and semantic to meet the demands of diverse manipulation tasks. However, previous works often lack all three properties simultaneously. In this work, we introduce D$^3$Fields: dynamic 3D descriptor fields. These fields are implicit 3D representations that take in 3D points and output semantic features and instance masks. They can also capture the dynamics of the underlying 3D environments. Specifically, we project arbitrary 3D points in the workspace onto multi-view 2D visual observations and interpolate features derived from visual foundational models. The resulting fused descriptor fields allow for flexible goal specifications using 2D images with varied contexts, styles, and instances. To evaluate the effectiveness of these descriptor fields, we apply our representation to rearrangement tasks in a zero-shot manner. Through extensive evaluation in real worlds and simulations, we demonstrate that D$^3$Fields are effective for zero-shot generalizable rearrangement tasks. We also compare D$^3$Fields with state-of-the-art implicit 3D representations and show significant improvements in effectiveness and efficiency. Project page: https://robopil.github.io/d3fields/
APA
Wang, Y., Zhang, M., Li, Z., Kelestemur, T., Driggs-Campbell, K.R., Wu, J., Fei-Fei, L., & Li, Y. (2025). D$^3$Fields: Dynamic 3D Descriptor Fields for Zero-Shot Generalizable Rearrangement. Proceedings of The 8th Conference on Robot Learning, in Proceedings of Machine Learning Research 270:272-298. Available from https://proceedings.mlr.press/v270/wang25b.html.
