DemoSpeedup: Accelerating Visuomotor Policies via Entropy-Guided Demonstration Acceleration

Lingxiao Guo, Zhengrong Xue, Zijing Xu, Huazhe Xu
Proceedings of The 9th Conference on Robot Learning, PMLR 305:599-609, 2025.

Abstract

Imitation learning has shown great promise in robotic manipulation, but the policy’s execution is often unsatisfactorily slow due to commonly tardy demonstrations collected by human operators. In this work, we present DemoSpeedup, a self-supervised method to accelerate visuomotor policy execution via entropy-guided demonstration acceleration. DemoSpeedup starts from training an arbitrary generative policy (e.g., ACT or Diffusion Policy) on normal-speed demonstrations, which serves as a per-frame action entropy estimator. The key insight is that frames with lower action entropy estimates call for more consistent policy behaviors, which often indicate the demands for higher-precision operations. In contrast, frames with higher entropy estimates correspond to more casual sections, and therefore can be more safely accelerated. Thus, we segment the original demonstrations according to the estimated entropy, and accelerate them by down-sampling at rates that increase with the entropy values. Trained with the speedup demonstrations, the resulting policies execute up to 3 times faster while maintaining the task completion performance. Interestingly, these policies could even achieve higher success rates than those trained with normal-speed demonstrations, due to the benefits of reduced decision-making horizons.
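The core acceleration step described above (estimate per-frame action entropy, then down-sample demonstrations at rates that grow with entropy) can be sketched as follows. This is an illustrative simplification, not the authors' implementation: the paper obtains entropy estimates from a trained generative policy (e.g., ACT or Diffusion Policy), whereas here `entropies` is taken as a given array, and the two-rate scheme with a median threshold (`speedup_demo`, `low_rate`, `high_rate` are hypothetical names) stands in for whatever rate schedule the paper actually uses.

```python
import numpy as np

def speedup_demo(actions, entropies, low_rate=1, high_rate=3, threshold=None):
    """Down-sample one demonstration trajectory.

    Low-entropy frames (consistent, precision-critical behavior) are kept
    dense; high-entropy frames (casual sections) are thinned aggressively.
    """
    entropies = np.asarray(entropies, dtype=float)
    if threshold is None:
        # simple split point between "precise" and "casual" frames
        threshold = float(np.median(entropies))
    kept_indices = []
    i = 0
    while i < len(actions):
        kept_indices.append(i)
        # advance faster through high-entropy (casual) segments
        step = high_rate if entropies[i] > threshold else low_rate
        i += step
    return [actions[k] for k in kept_indices]

# Example: a 10-frame demo whose second half is high-entropy.
actions = list(range(10))
entropies = [0.0] * 5 + [1.0] * 5
fast_demo = speedup_demo(actions, entropies)
# The precise first half is kept frame-by-frame; the casual
# second half is skipped through at 3x, shortening the demo.
```

A policy retrained on `fast_demo`-style trajectories would then execute the thinned segments at the accelerated pace, which is the mechanism behind the up-to-3x speedup the abstract reports.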

Cite this Paper


BibTeX
@InProceedings{pmlr-v305-guo25a,
  title     = {DemoSpeedup: Accelerating Visuomotor Policies via Entropy-Guided Demonstration Acceleration},
  author    = {Guo, Lingxiao and Xue, Zhengrong and Xu, Zijing and Xu, Huazhe},
  booktitle = {Proceedings of The 9th Conference on Robot Learning},
  pages     = {599--609},
  year      = {2025},
  editor    = {Lim, Joseph and Song, Shuran and Park, Hae-Won},
  volume    = {305},
  series    = {Proceedings of Machine Learning Research},
  month     = {27--30 Sep},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v305/main/assets/guo25a/guo25a.pdf},
  url       = {https://proceedings.mlr.press/v305/guo25a.html},
  abstract  = {Imitation learning has shown great promise in robotic manipulation, but the policy’s execution is often unsatisfactorily slow due to commonly tardy demonstrations collected by human operators. In this work, we present DemoSpeedup, a self-supervised method to accelerate visuomotor policy execution via entropy-guided demonstration acceleration. DemoSpeedup starts from training an arbitrary generative policy (e.g., ACT or Diffusion Policy) on normal-speed demonstrations, which serves as a per-frame action entropy estimator. The key insight is that frames with lower action entropy estimates call for more consistent policy behaviors, which often indicate the demands for higher-precision operations. In contrast, frames with higher entropy estimates correspond to more casual sections, and therefore can be more safely accelerated. Thus, we segment the original demonstrations according to the estimated entropy, and accelerate them by down-sampling at rates that increase with the entropy values. Trained with the speedup demonstrations, the resulting policies execute up to 3 times faster while maintaining the task completion performance. Interestingly, these policies could even achieve higher success rates than those trained with normal-speed demonstrations, due to the benefits of reduced decision-making horizons.}
}
Endnote
%0 Conference Paper
%T DemoSpeedup: Accelerating Visuomotor Policies via Entropy-Guided Demonstration Acceleration
%A Lingxiao Guo
%A Zhengrong Xue
%A Zijing Xu
%A Huazhe Xu
%B Proceedings of The 9th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Joseph Lim
%E Shuran Song
%E Hae-Won Park
%F pmlr-v305-guo25a
%I PMLR
%P 599--609
%U https://proceedings.mlr.press/v305/guo25a.html
%V 305
%X Imitation learning has shown great promise in robotic manipulation, but the policy’s execution is often unsatisfactorily slow due to commonly tardy demonstrations collected by human operators. In this work, we present DemoSpeedup, a self-supervised method to accelerate visuomotor policy execution via entropy-guided demonstration acceleration. DemoSpeedup starts from training an arbitrary generative policy (e.g., ACT or Diffusion Policy) on normal-speed demonstrations, which serves as a per-frame action entropy estimator. The key insight is that frames with lower action entropy estimates call for more consistent policy behaviors, which often indicate the demands for higher-precision operations. In contrast, frames with higher entropy estimates correspond to more casual sections, and therefore can be more safely accelerated. Thus, we segment the original demonstrations according to the estimated entropy, and accelerate them by down-sampling at rates that increase with the entropy values. Trained with the speedup demonstrations, the resulting policies execute up to 3 times faster while maintaining the task completion performance. Interestingly, these policies could even achieve higher success rates than those trained with normal-speed demonstrations, due to the benefits of reduced decision-making horizons.
APA
Guo, L., Xue, Z., Xu, Z., & Xu, H. (2025). DemoSpeedup: Accelerating Visuomotor Policies via Entropy-Guided Demonstration Acceleration. Proceedings of The 9th Conference on Robot Learning, in Proceedings of Machine Learning Research 305:599-609. Available from https://proceedings.mlr.press/v305/guo25a.html.