FlowRetrieval: Flow-Guided Data Retrieval for Few-Shot Imitation Learning
Proceedings of The 8th Conference on Robot Learning, PMLR 270:4084-4099, 2025.
Abstract
Imitation learning policies in robotics tend to require a large number of demonstrations. It is therefore critical to develop few-shot adaptation strategies that rely on only a small number of task-specific human demonstrations. Prior works focus on learning general policies from large-scale datasets with diverse behaviors. Recent research has shown that directly retrieving relevant past experiences to augment policy learning holds great promise in few-shot settings. However, existing data retrieval methods fall at two extremes: they either rely on the existence of the exact same behaviors with visually similar scenes in the prior data, which is impractical to assume, or they retrieve based on semantic similarity of high-level language descriptions of the task, which may not be informative about the behaviors or motions shared across tasks. In this work, we investigate how we can leverage motion similarity in the vast amount of cross-task data to improve few-shot imitation learning of the target task. Our key insight is that motion-similar data carry rich information about the effects of actions and object interactions that can be leveraged during few-shot adaptation. We propose FlowRetrieval, an approach that leverages optical flow representations both for extracting motions similar to the target task from prior data and for guiding learning of a policy that can maximally benefit from such data. Our results show that FlowRetrieval significantly outperforms prior methods across simulated and real-world domains, achieving a success rate 27% higher on average than the best retrieval-based prior method. In the Pen-in-Cup task with a real Franka Emika robot, FlowRetrieval achieves 3.7x the performance of the baseline that learns from all prior and target data.
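As a rough illustration of the retrieval step described in the abstract, the sketch below performs nearest-neighbor retrieval over precomputed optical-flow embeddings. This is a minimal sketch under stated assumptions: the embedding encoder, data shapes, function names, and top-k threshold are all hypothetical stand-ins, not the paper's actual implementation.

```python
# Illustrative sketch only: motion-similar data retrieval reduced to
# nearest-neighbor search over precomputed optical-flow embeddings.
# All names, shapes, and hyperparameters below are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Assumed inputs: one embedding per trajectory segment, e.g. from an encoder
# of optical flow between consecutive frames (random stand-ins here).
prior_embeddings = rng.normal(size=(10_000, 128))   # large cross-task prior dataset
target_embeddings = rng.normal(size=(20, 128))      # few-shot target demonstrations

def retrieve_motion_similar(prior, target, top_k=500):
    """Return indices of prior segments whose flow embeddings are closest
    (by cosine similarity) to any target-task segment."""
    prior_n = prior / np.linalg.norm(prior, axis=1, keepdims=True)
    target_n = target / np.linalg.norm(target, axis=1, keepdims=True)
    # Similarity of each prior segment to its best-matching target segment.
    best_sim = (prior_n @ target_n.T).max(axis=1)
    return np.argsort(best_sim)[-top_k:]

retrieved = retrieve_motion_similar(prior_embeddings, target_embeddings)
# The retrieved segments would then be mixed with the target demonstrations
# when training the few-shot policy.
print(f"retrieved {len(retrieved)} motion-similar prior segments")
```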