Learning to Jump from Pixels

Gabriel B Margolis, Tao Chen, Kartik Paigwar, Xiang Fu, Donghyun Kim, Sangbae Kim, Pulkit Agrawal
Proceedings of the 5th Conference on Robot Learning, PMLR 164:1025-1034, 2022.

Abstract

Today’s robotic quadruped systems can robustly walk over a diverse range of rough but continuous terrains, where the terrain elevation varies gradually. Locomotion on discontinuous terrains, such as those with gaps or obstacles, presents a complementary set of challenges. In discontinuous settings, it becomes necessary to plan ahead using visual inputs and to execute agile behaviors beyond robust walking, such as jumps. Such dynamic motion results in significant motion of onboard sensors, which introduces a new set of challenges for real-time visual processing. The requirements of agility and terrain awareness in this setting reinforce the need for robust control. We present Depth-based Impulse Control (DIC), a method for synthesizing highly agile visually-guided locomotion behaviors. DIC affords the flexibility of model-free learning but regularizes behavior through explicit model-based optimization of ground reaction forces. We evaluate performance both in simulation and in the real world.
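The abstract describes regularizing learned behavior through model-based optimization of ground reaction forces. As an illustrative sketch only (not the paper's actual controller), a minimal version of such an optimization can be posed as a regularized least-squares problem: find per-foot contact forces that realize a desired net wrench on the body. All quantities below (foot positions, desired wrench, regularization weight) are hypothetical placeholders.

```python
import numpy as np

def solve_grf(foot_pos, desired_wrench, reg=1e-3):
    """Illustrative ground-reaction-force solve (not the paper's method).

    foot_pos: (n, 3) foot positions relative to the body center of mass.
    desired_wrench: (6,) desired net [force; torque] on the body.
    Returns (n, 3) per-foot forces f minimizing
        ||A f - w||^2 + reg * ||f||^2,
    where A maps stacked foot forces to the net body wrench.
    """
    n = foot_pos.shape[0]
    A = np.zeros((6, 3 * n))
    for i, p in enumerate(foot_pos):
        # Net force: foot forces sum directly.
        A[:3, 3 * i:3 * i + 3] = np.eye(3)
        # Net torque: p x f, written with the skew-symmetric matrix of p.
        px, py, pz = p
        A[3:, 3 * i:3 * i + 3] = np.array([[0.0, -pz,  py],
                                           [ pz, 0.0, -px],
                                           [-py,  px, 0.0]])
    # Regularized normal equations: (A^T A + reg * I) f = A^T w.
    f = np.linalg.solve(A.T @ A + reg * np.eye(3 * n),
                        A.T @ desired_wrench)
    return f.reshape(n, 3)
```

For a symmetric four-foot stance supporting the body's weight, this distributes the vertical load roughly evenly across feet; the regularization term keeps forces small, which is one common way such model-based layers damp out aggressive commands.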

Cite this Paper


BibTeX
@InProceedings{pmlr-v164-margolis22a,
  title     = {Learning to Jump from Pixels},
  author    = {Margolis, Gabriel B and Chen, Tao and Paigwar, Kartik and Fu, Xiang and Kim, Donghyun and Kim, Sangbae and Agrawal, Pulkit},
  booktitle = {Proceedings of the 5th Conference on Robot Learning},
  pages     = {1025--1034},
  year      = {2022},
  editor    = {Faust, Aleksandra and Hsu, David and Neumann, Gerhard},
  volume    = {164},
  series    = {Proceedings of Machine Learning Research},
  month     = {08--11 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v164/margolis22a/margolis22a.pdf},
  url       = {https://proceedings.mlr.press/v164/margolis22a.html},
  abstract  = {Today's robotic quadruped systems can robustly walk over a diverse range of rough but continuous terrains, where the terrain elevation varies gradually. Locomotion on discontinuous terrains, such as those with gaps or obstacles, presents a complementary set of challenges. In discontinuous settings, it becomes necessary to plan ahead using visual inputs and to execute agile behaviors beyond robust walking, such as jumps. Such dynamic motion results in significant motion of onboard sensors, which introduces a new set of challenges for real-time visual processing. The requirements of agility and terrain awareness in this setting reinforce the need for robust control. We present Depth-based Impulse Control (DIC), a method for synthesizing highly agile visually-guided locomotion behaviors. DIC affords the flexibility of model-free learning but regularizes behavior through explicit model-based optimization of ground reaction forces. We evaluate performance both in simulation and in the real world.}
}
Endnote
%0 Conference Paper
%T Learning to Jump from Pixels
%A Gabriel B Margolis
%A Tao Chen
%A Kartik Paigwar
%A Xiang Fu
%A Donghyun Kim
%A Sangbae Kim
%A Pulkit Agrawal
%B Proceedings of the 5th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Aleksandra Faust
%E David Hsu
%E Gerhard Neumann
%F pmlr-v164-margolis22a
%I PMLR
%P 1025--1034
%U https://proceedings.mlr.press/v164/margolis22a.html
%V 164
%X Today's robotic quadruped systems can robustly walk over a diverse range of rough but continuous terrains, where the terrain elevation varies gradually. Locomotion on discontinuous terrains, such as those with gaps or obstacles, presents a complementary set of challenges. In discontinuous settings, it becomes necessary to plan ahead using visual inputs and to execute agile behaviors beyond robust walking, such as jumps. Such dynamic motion results in significant motion of onboard sensors, which introduces a new set of challenges for real-time visual processing. The requirements of agility and terrain awareness in this setting reinforce the need for robust control. We present Depth-based Impulse Control (DIC), a method for synthesizing highly agile visually-guided locomotion behaviors. DIC affords the flexibility of model-free learning but regularizes behavior through explicit model-based optimization of ground reaction forces. We evaluate performance both in simulation and in the real world.
APA
Margolis, G.B., Chen, T., Paigwar, K., Fu, X., Kim, D., Kim, S. & Agrawal, P. (2022). Learning to Jump from Pixels. Proceedings of the 5th Conference on Robot Learning, in Proceedings of Machine Learning Research 164:1025-1034. Available from https://proceedings.mlr.press/v164/margolis22a.html.
