FlowRetrieval: Flow-Guided Data Retrieval for Few-Shot Imitation Learning

Li-Heng Lin, Yuchen Cui, Amber Xie, Tianyu Hua, Dorsa Sadigh
Proceedings of The 8th Conference on Robot Learning, PMLR 270:4084-4099, 2025.

Abstract

Imitation learning policies in robotics tend to require extensive demonstration data, so it is critical to develop few-shot adaptation strategies that rely on only a small number of task-specific human demonstrations. Prior works focus on learning general policies from large-scale datasets with diverse behaviors. Recent research has shown that directly retrieving relevant past experiences to augment policy learning holds great promise in few-shot settings. However, existing data retrieval methods fall into two extremes: they either assume that the prior data contains the exact same behaviors in visually similar scenes, which is impractical, or they retrieve based on the semantic similarity of high-level language descriptions of the task, which may say little about the behaviors or motions shared across tasks. In this work, we investigate how to leverage motion similarity in the vast amount of cross-task data to improve few-shot imitation learning of the target task. Our key insight is that motion-similar data carry rich information about the effects of actions and about object interactions that can be leveraged during few-shot adaptation. We propose FlowRetrieval, an approach that leverages optical-flow representations both to extract motions similar to the target task from prior data and to guide learning of a policy that can maximally benefit from such data. Our results show that FlowRetrieval significantly outperforms prior methods across simulated and real-world domains, achieving on average a 27% higher success rate than the best retrieval-based prior method. In the Pen-in-Cup task with a real Franka Emika robot, FlowRetrieval achieves 3.7x the performance of a baseline that learns from all prior and target data.
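To make the retrieval step concrete, the sketch below illustrates nearest-neighbor retrieval in a flow-embedding space. It assumes optical-flow embeddings have already been computed for the few target demonstrations and for the prior dataset (for example, with an off-the-shelf flow estimator followed by a learned encoder); the function and parameter names here are illustrative assumptions, not the authors' released API.

import numpy as np

def retrieve_by_flow_similarity(target_emb, prior_emb, budget):
    """Return indices of the `budget` prior-data segments whose flow
    embeddings lie closest (Euclidean) to any target-task embedding."""
    # Pairwise distances between every prior and every target embedding:
    # shape (num_prior, num_target).
    dists = np.linalg.norm(
        prior_emb[:, None, :] - target_emb[None, :, :], axis=-1)
    # Score each prior segment by its distance to the nearest target segment.
    scores = dists.min(axis=1)
    # Keep the most motion-similar prior segments within the budget.
    return np.argsort(scores)[:budget]

# Toy usage: 5 target embeddings and 1000 prior embeddings in a
# hypothetical 16-dimensional flow latent space.
rng = np.random.default_rng(0)
target = rng.normal(size=(5, 16))
prior = rng.normal(size=(1000, 16))
retrieved = retrieve_by_flow_similarity(target, prior, budget=100)

Per the abstract, the retrieved prior segments would then be combined with the target demonstrations for policy learning, with the flow representation also guiding that learning; the sketch above covers only the retrieval step.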

Cite this Paper


BibTeX
@InProceedings{pmlr-v270-lin25a,
  title     = {FlowRetrieval: Flow-Guided Data Retrieval for Few-Shot Imitation Learning},
  author    = {Lin, Li-Heng and Cui, Yuchen and Xie, Amber and Hua, Tianyu and Sadigh, Dorsa},
  booktitle = {Proceedings of The 8th Conference on Robot Learning},
  pages     = {4084--4099},
  year      = {2025},
  editor    = {Agrawal, Pulkit and Kroemer, Oliver and Burgard, Wolfram},
  volume    = {270},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Nov},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v270/main/assets/lin25a/lin25a.pdf},
  url       = {https://proceedings.mlr.press/v270/lin25a.html}
}
APA
Lin, L., Cui, Y., Xie, A., Hua, T., & Sadigh, D. (2025). FlowRetrieval: Flow-Guided Data Retrieval for Few-Shot Imitation Learning. Proceedings of The 8th Conference on Robot Learning, in Proceedings of Machine Learning Research 270:4084-4099. Available from https://proceedings.mlr.press/v270/lin25a.html.
