KDPE: A Kernel Density Estimation Strategy for Diffusion Policy Trajectory Selection
Proceedings of The 9th Conference on Robot Learning, PMLR 305:1210-1224, 2025.
Abstract
Learning robot policies that capture multimodality in the training data has been a long-standing open challenge for behavior cloning. Recent approaches tackle the problem by modeling the conditional action distribution with generative models. One of these approaches is Diffusion Policy, which relies on a diffusion model to denoise random points into robot action trajectories. While it achieves state-of-the-art performance, it has two main drawbacks that may lead the robot out of the data distribution during policy execution. First, the stochasticity of the denoising process can strongly affect the quality of the generated action trajectories. Second, as a supervised learning approach, it can learn outliers present in the training dataset. Recent work mitigates these limitations by combining Diffusion Policy either with large-scale training or with classical behavior cloning algorithms. Instead, we propose KDPE, a Kernel Density Estimation-based strategy that filters out potentially harmful trajectories output by Diffusion Policy while keeping a low test-time computational overhead. For Kernel Density Estimation, we propose a manifold-aware kernel to model a probability density function for actions composed of end-effector Cartesian position, orientation, and gripper state. Overall, KDPE achieves better performance than Diffusion Policy on simulated single-arm RoboMimic and MimicGen tasks, and on three real robot experiments: PickPlush, a tabletop grasping task; CubeSort, a multimodal pick-and-place task; and CoffeeMaking, a task that requires long-horizon capabilities and precise execution. The code will be released upon acceptance and additional material is provided on our anonymized project page: https://kdpe-robotics.github.io.
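The abstract does not specify the exact form of the manifold-aware kernel or the selection rule, so the following is only a minimal, illustrative sketch of KDE-based trajectory selection over a batch of sampled trajectories. It assumes a product kernel with a Gaussian on Cartesian position and gripper state and a quaternion geodesic distance for orientation; the function names, action layout, and bandwidths are hypothetical and not taken from the paper.

```python
# Illustrative sketch (assumptions, not the paper's exact method):
# each action is [x, y, z, qw, qx, qy, qz, gripper]; a product kernel
# combines Gaussian terms on position/gripper with a quaternion
# geodesic term for orientation; the highest-density candidate is kept.
import numpy as np

def quat_geodesic(q1, q2):
    """Geodesic distance between unit quaternions (antipodal-invariant)."""
    dot = np.clip(np.abs(np.sum(q1 * q2, axis=-1)), 0.0, 1.0)
    return 2.0 * np.arccos(dot)

def trajectory_log_density(traj, others, h_pos=0.05, h_rot=0.3, h_grip=0.1):
    """Leave-one-out KDE log-density of one trajectory w.r.t. the others.

    traj:   (T, 8) array of actions [pos(3), quat(4), gripper(1)]
    others: (K-1, T, 8) array of the remaining candidate trajectories
    """
    pos, quat, grip = traj[:, :3], traj[:, 3:7], traj[:, 7:]
    d_pos = np.linalg.norm(others[:, :, :3] - pos, axis=-1)       # (K-1, T)
    d_rot = quat_geodesic(others[:, :, 3:7], quat)                # (K-1, T)
    d_grip = np.abs(others[:, :, 7:] - grip).squeeze(-1)          # (K-1, T)
    # Product kernel per timestep, summed (in log space) over the horizon.
    log_k = -(d_pos / h_pos) ** 2 - (d_rot / h_rot) ** 2 - (d_grip / h_grip) ** 2
    per_sample = log_k.sum(axis=1)                                # (K-1,)
    # Log of the mean kernel value over the other samples (log-sum-exp trick).
    m = per_sample.max()
    return np.log(np.mean(np.exp(per_sample - m)) + 1e-12) + m

def select_trajectory(candidates):
    """Pick the candidate trajectory with the highest leave-one-out density."""
    scores = [
        trajectory_log_density(candidates[i], np.delete(candidates, i, axis=0))
        for i in range(candidates.shape[0])
    ]
    return candidates[int(np.argmax(scores))]

# Hypothetical usage with a batch of K sampled trajectories:
# candidates = policy.sample(obs, num_samples=K)   # shape (K, T, 8)
# action_traj = select_trajectory(candidates)
```

This sketch only conveys the general idea of scoring each sampled trajectory by its estimated density among the other samples and discarding low-density (potentially harmful) ones; the paper's actual kernel on the orientation manifold and its filtering criterion may differ.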