SDS – See it, Do it, Sorted: Quadruped Skill Synthesis from Single Video Demonstration
Proceedings of The 9th Conference on Robot Learning, PMLR 305:1879-1897, 2025.
Abstract
Imagine a robot learning locomotion skills from any single video, without labels or reward engineering. We introduce SDS ("See it. Do it. Sorted."), an automated pipeline for skill acquisition from unstructured video demonstrations. Using GPT-4o, SDS applies novel prompting techniques, namely spatio-temporal grid-based visual encoding (Gv) and structured input decomposition (SUS), to produce executable reward functions (RFs) from raw input videos. The RFs are used to train PPO policies and are optimized through closed-loop evolution, using training footage and performance metrics as self-supervised signals. SDS allows quadrupeds (e.g., Unitree Go1) to learn four gaits (trot, bound, pace, and hop), achieving 100% gait-matching fidelity, Dynamic Time Warping (DTW) distance on the order of 10^-6, and stable locomotion with zero failures, in both simulation and the real world. SDS generalizes to morphologically different quadrupeds (e.g., ANYmal) and outperforms prior work in data efficiency, training time, and engineering effort. Our code is open-source at: https://sdsreview.github.io/SDS_ANONYM/
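
For context, Dynamic Time Warping measures the distance between two time series that may be misaligned in phase or speed, which is why it is a natural metric for comparing a demonstrated gait to a learned one. The following is a minimal sketch of computing a DTW distance between a demonstrated gait signal and a trained policy's gait signal; the signal construction and variable names are illustrative assumptions, not the paper's evaluation code.

```python
import numpy as np

def dtw_distance(seq_a: np.ndarray, seq_b: np.ndarray) -> float:
    """Classic dynamic-programming DTW distance between two 1-D signals."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(seq_a[i - 1] - seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return float(cost[n, m])

# Hypothetical example: compare a reference gait-phase signal extracted from the
# demonstration video with the corresponding signal produced by the trained policy.
t = np.linspace(0.0, 2.0, 200)
demo_gait = np.sin(2 * np.pi * 2.0 * t)           # reference gait phase (illustrative)
policy_gait = np.sin(2 * np.pi * 2.0 * t + 0.01)  # near-identical learned gait phase
print(dtw_distance(demo_gait, policy_gait))        # small value => close gait matching
```

A near-zero DTW distance, as reported in the abstract, indicates that the learned gait closely tracks the demonstrated one even if the two signals are not perfectly synchronized in time.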