Dynamics-Guided Diffusion Model for Sensor-less Robot Manipulator Design

Xiaomeng Xu, Huy Ha, Shuran Song
Proceedings of The 8th Conference on Robot Learning, PMLR 270:4446-4462, 2025.

Abstract

We present Dynamics-Guided Diffusion Model (DGDM), a data-driven framework for generating task-specific manipulator designs without task-specific training. Given object shapes and task specifications, DGDM generates sensor-less manipulator designs that can blindly manipulate objects towards desired motions and poses using an open-loop parallel motion. This framework 1) flexibly represents manipulation tasks as interaction profiles, 2) represents the design space using a geometric diffusion model, and 3) efficiently searches this design space using the gradients provided by a dynamics network trained without any task information. We evaluate DGDM on manipulation tasks ranging from shifting/rotating objects to converging objects to a specific pose. Our generated designs outperform optimization-based and unguided diffusion baselines by relative improvements of 31.5% and 45.3% in average success rate. With the ability to generate a new design within 0.8 s, DGDM facilitates rapid design iteration and enhances the adoption of data-driven approaches for robot mechanism design. Qualitative results are best viewed on our project website https://dgdmcorl.github.io.
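
The guidance mechanism in step 3 — steering diffusion sampling with gradients from a task-agnostic dynamics network toward a target interaction profile — can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the denoiser, dynamics_net, and task_loss callables, the design-vector shape, and the guidance_scale parameter are all placeholder assumptions.

```python
import torch

def guided_sample(denoiser, dynamics_net, task_loss,
                  steps=50, shape=(1, 256), guidance_scale=1.0):
    """Classifier-guidance-style sampling loop (illustrative sketch):
    at every denoising step, the candidate manipulator geometry is
    nudged along the gradient of a task objective evaluated through
    a dynamics network trained without task labels."""
    x = torch.randn(shape)  # start from Gaussian noise over design parameters
    for t in reversed(range(steps)):
        t_batch = torch.full((shape[0],), t)
        # Unguided reverse-diffusion step: the denoiser maps the noisy
        # design at step t to a less noisy one (placeholder interface).
        with torch.no_grad():
            x = denoiser(x, t_batch)
        # Dynamics guidance: differentiate the task loss on the predicted
        # interaction profile with respect to the design itself.
        x = x.detach().requires_grad_(True)
        predicted_profile = dynamics_net(x)   # predicted object motions
        loss = task_loss(predicted_profile)   # distance to target profile
        grad, = torch.autograd.grad(loss, x)
        x = (x - guidance_scale * grad).detach()
    return x
```

Because the dynamics network is trained without any task information, the same network can serve a new task by swapping only task_loss, which is consistent with the paper's claim of task-specific designs without task-specific training.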

Cite this Paper


BibTeX
@InProceedings{pmlr-v270-xu25d,
  title     = {Dynamics-Guided Diffusion Model for Sensor-less Robot Manipulator Design},
  author    = {Xu, Xiaomeng and Ha, Huy and Song, Shuran},
  booktitle = {Proceedings of The 8th Conference on Robot Learning},
  pages     = {4446--4462},
  year      = {2025},
  editor    = {Agrawal, Pulkit and Kroemer, Oliver and Burgard, Wolfram},
  volume    = {270},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Nov},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v270/main/assets/xu25d/xu25d.pdf},
  url       = {https://proceedings.mlr.press/v270/xu25d.html}
}
Endnote
%0 Conference Paper
%T Dynamics-Guided Diffusion Model for Sensor-less Robot Manipulator Design
%A Xiaomeng Xu
%A Huy Ha
%A Shuran Song
%B Proceedings of The 8th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Pulkit Agrawal
%E Oliver Kroemer
%E Wolfram Burgard
%F pmlr-v270-xu25d
%I PMLR
%P 4446--4462
%U https://proceedings.mlr.press/v270/xu25d.html
%V 270
APA
Xu, X., Ha, H. & Song, S. (2025). Dynamics-Guided Diffusion Model for Sensor-less Robot Manipulator Design. Proceedings of The 8th Conference on Robot Learning, in Proceedings of Machine Learning Research 270:4446-4462. Available from https://proceedings.mlr.press/v270/xu25d.html.