Cold Diffusion on the Replay Buffer: Learning to Plan from Known Good States

Zidan Wang, Takeru Oba, Takuma Yoneda, Rui Shen, Matthew Walter, Bradly C. Stadie
Proceedings of The 7th Conference on Robot Learning, PMLR 229:3277-3291, 2023.

Abstract

Learning from demonstrations (LfD) has successfully trained robots to exhibit remarkable generalization capabilities. However, many powerful imitation techniques do not prioritize the feasibility of the robot behaviors they generate. In this work, we explore the feasibility of plans produced by LfD. As in prior work, we employ a temporal diffusion model with fixed start and goal states to facilitate imitation through in-painting. Unlike previous studies, we apply cold diffusion to ensure the optimization process is directed through the agent’s replay buffer of previously visited states. This routing approach increases the likelihood that the final trajectories will predominantly occupy the feasible region of the robot’s state space. We test this method in simulated robotic environments with obstacles and observe a significant improvement in the agent’s ability to avoid these obstacles during planning.
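
To make the abstract's method concrete, here is a minimal sketch of the kind of sampler it describes. This is an illustrative reconstruction, not the authors' released code: it assumes the improved cold-diffusion update of Bansal et al. (x_{t-1} = x_t - D(x̂0, t) + D(x̂0, t-1)), a hypothetical degradation D that blends trajectory states toward their nearest neighbors in the replay buffer, and in-painting that clamps the endpoints to the fixed start and goal states. All names (`degrade`, `restore`, `plan`) are placeholders.

```python
import numpy as np

def degrade(traj, t, T, buffer):
    """Hypothetical degradation operator D(traj, t): blend each trajectory
    state toward its nearest neighbor in the replay buffer, with the blend
    weight growing with the diffusion step t. At t = 0 this is the identity."""
    alpha = t / T  # degradation strength in [0, 1]
    # Nearest replay-buffer state for every state in the trajectory.
    dists = np.linalg.norm(traj[:, None, :] - buffer[None, :, :], axis=-1)
    neighbors = buffer[dists.argmin(axis=1)]
    return (1.0 - alpha) * traj + alpha * neighbors

def plan(restore, buffer, start, goal, horizon, T):
    """Cold-diffusion sampling loop with start/goal in-painting.
    `restore(traj, t)` is a trained network predicting the clean trajectory."""
    # Initialize from a fully degraded plan: random states drawn from the buffer.
    traj = buffer[np.random.randint(len(buffer), size=horizon)]
    for t in range(T, 0, -1):
        traj[0], traj[-1] = start, goal  # in-paint the fixed endpoints
        x0_hat = restore(traj, t)        # predict the clean plan
        # Improved cold-diffusion update: x_{t-1} = x_t - D(x0, t) + D(x0, t-1)
        traj = traj - degrade(x0_hat, t, T, buffer) + degrade(x0_hat, t - 1, T, buffer)
    traj[0], traj[-1] = start, goal
    return traj
```

Because every reverse step routes the trajectory through states drawn from the replay buffer, intermediate plans stay near previously visited (and hence feasible) regions of the state space, which is the intuition the abstract gives for improved obstacle avoidance.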

Cite this Paper


BibTeX
@InProceedings{pmlr-v229-wang23e,
  title     = {Cold Diffusion on the Replay Buffer: Learning to Plan from Known Good States},
  author    = {Wang, Zidan and Oba, Takeru and Yoneda, Takuma and Shen, Rui and Walter, Matthew and Stadie, Bradly C.},
  booktitle = {Proceedings of The 7th Conference on Robot Learning},
  pages     = {3277--3291},
  year      = {2023},
  editor    = {Tan, Jie and Toussaint, Marc and Darvish, Kourosh},
  volume    = {229},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v229/wang23e/wang23e.pdf},
  url       = {https://proceedings.mlr.press/v229/wang23e.html},
  abstract  = {Learning from demonstrations (LfD) has successfully trained robots to exhibit remarkable generalization capabilities. However, many powerful imitation techniques do not prioritize the feasibility of the robot behaviors they generate. In this work, we explore the feasibility of plans produced by LfD. As in prior work, we employ a temporal diffusion model with fixed start and goal states to facilitate imitation through in-painting. Unlike previous studies, we apply cold diffusion to ensure the optimization process is directed through the agent’s replay buffer of previously visited states. This routing approach increases the likelihood that the final trajectories will predominantly occupy the feasible region of the robot’s state space. We test this method in simulated robotic environments with obstacles and observe a significant improvement in the agent’s ability to avoid these obstacles during planning.}
}
Endnote
%0 Conference Paper
%T Cold Diffusion on the Replay Buffer: Learning to Plan from Known Good States
%A Zidan Wang
%A Takeru Oba
%A Takuma Yoneda
%A Rui Shen
%A Matthew Walter
%A Bradly C. Stadie
%B Proceedings of The 7th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Jie Tan
%E Marc Toussaint
%E Kourosh Darvish
%F pmlr-v229-wang23e
%I PMLR
%P 3277--3291
%U https://proceedings.mlr.press/v229/wang23e.html
%V 229
%X Learning from demonstrations (LfD) has successfully trained robots to exhibit remarkable generalization capabilities. However, many powerful imitation techniques do not prioritize the feasibility of the robot behaviors they generate. In this work, we explore the feasibility of plans produced by LfD. As in prior work, we employ a temporal diffusion model with fixed start and goal states to facilitate imitation through in-painting. Unlike previous studies, we apply cold diffusion to ensure the optimization process is directed through the agent’s replay buffer of previously visited states. This routing approach increases the likelihood that the final trajectories will predominantly occupy the feasible region of the robot’s state space. We test this method in simulated robotic environments with obstacles and observe a significant improvement in the agent’s ability to avoid these obstacles during planning.
APA
Wang, Z., Oba, T., Yoneda, T., Shen, R., Walter, M. & Stadie, B.C. (2023). Cold Diffusion on the Replay Buffer: Learning to Plan from Known Good States. Proceedings of The 7th Conference on Robot Learning, in Proceedings of Machine Learning Research 229:3277-3291. Available from https://proceedings.mlr.press/v229/wang23e.html.