A learning-based framework to adapt legged robots on-the-fly to unexpected disturbances

Nolan Fey, He Li, Nicholas Adrian, Patrick Wensing, Michael Lemmon
Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1161-1173, 2024.

Abstract

State-of-the-art control methods for legged robots demonstrate impressive performance and robustness on a variety of terrains. Still, these approaches often lack an ability to learn how to adapt to changing conditions online. Such adaptation is especially critical if the robot encounters an environment with dynamics different than those considered in its model or in prior offline training. This paper proposes a learning-based framework that allows a walking robot to stabilize itself under disturbances neglected by its base controller. We consider an approach that simplifies the learning problem into two tasks: learning a model to estimate the robot’s steady-state response and learning a dynamics model for the system near its steady-state behavior. Through experiments with the MIT Mini Cheetah, we show that we can learn these models offline in simulation and transfer them to the real world, optionally finetuning them as the robot collects data. We demonstrate the effectiveness of our approach by applying it to stabilize the quadruped as it carries a box of water on its back.
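To make the two-task decomposition in the abstract concrete, the following is a minimal sketch (not the authors' implementation) of what the two learned models might look like: one network estimating the steady-state response from a disturbance context, and one network modeling the dynamics of deviations near that steady state. All module names, dimensions, and the context/deviation variables are assumptions for illustration.

```python
# Minimal sketch of the two-model decomposition described in the abstract.
# Names, dimensions, and input layouts are illustrative assumptions, not the
# paper's actual architecture.
import torch
import torch.nn as nn


class SteadyStateModel(nn.Module):
    """Predicts the robot's steady-state response x_ss from a context vector
    summarizing the unmodeled disturbance (e.g. a sloshing payload)."""

    def __init__(self, context_dim: int, state_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(context_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, context: torch.Tensor) -> torch.Tensor:
        return self.net(context)


class LocalDynamicsModel(nn.Module):
    """Predicts the next deviation from steady state given the current
    deviation and a corrective action, i.e. dynamics valid near x_ss."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, delta_x: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([delta_x, action], dim=-1))


# Usage: estimate the steady state, then reason in deviation coordinates.
state_dim, action_dim, context_dim = 12, 4, 8
ss_model = SteadyStateModel(context_dim, state_dim)
dyn_model = LocalDynamicsModel(state_dim, action_dim)

x = torch.randn(state_dim)           # current robot state (assumed layout)
c = torch.randn(context_dim)         # disturbance context (assumed)
u = torch.zeros(action_dim)          # candidate corrective action

x_ss = ss_model(c)                   # learned steady-state response
delta_next = dyn_model(x - x_ss, u)  # predicted next deviation from x_ss
```

Both models could be trained offline in simulation and fine-tuned from data collected on hardware, which matches the offline-to-real transfer described in the abstract.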

Cite this Paper


BibTeX
@InProceedings{pmlr-v242-fey24a,
  title     = {A learning-based framework to adapt legged robots on-the-fly to unexpected disturbances},
  author    = {Fey, Nolan and Li, He and Adrian, Nicholas and Wensing, Patrick and Lemmon, Michael},
  booktitle = {Proceedings of the 6th Annual Learning for Dynamics \& Control Conference},
  pages     = {1161--1173},
  year      = {2024},
  editor    = {Abate, Alessandro and Cannon, Mark and Margellos, Kostas and Papachristodoulou, Antonis},
  volume    = {242},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--17 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v242/fey24a/fey24a.pdf},
  url       = {https://proceedings.mlr.press/v242/fey24a.html},
  abstract  = {State-of-the-art control methods for legged robots demonstrate impressive performance and robustness on a variety of terrains. Still, these approaches often lack an ability to learn how to adapt to changing conditions online. Such adaptation is especially critical if the robot encounters an environment with dynamics different than those considered in its model or in prior offline training. This paper proposes a learning-based framework that allows a walking robot to stabilize itself under disturbances neglected by its base controller. We consider an approach that simplifies the learning problem into two tasks: learning a model to estimate the robot’s steady-state response and learning a dynamics model for the system near its steady-state behavior. Through experiments with the MIT Mini Cheetah, we show that we can learn these models offline in simulation and transfer them to the real world, optionally finetuning them as the robot collects data. We demonstrate the effectiveness of our approach by applying it to stabilize the quadruped as it carries a box of water on its back.}
}
Endnote
%0 Conference Paper
%T A learning-based framework to adapt legged robots on-the-fly to unexpected disturbances
%A Nolan Fey
%A He Li
%A Nicholas Adrian
%A Patrick Wensing
%A Michael Lemmon
%B Proceedings of the 6th Annual Learning for Dynamics & Control Conference
%C Proceedings of Machine Learning Research
%D 2024
%E Alessandro Abate
%E Mark Cannon
%E Kostas Margellos
%E Antonis Papachristodoulou
%F pmlr-v242-fey24a
%I PMLR
%P 1161--1173
%U https://proceedings.mlr.press/v242/fey24a.html
%V 242
%X State-of-the-art control methods for legged robots demonstrate impressive performance and robustness on a variety of terrains. Still, these approaches often lack an ability to learn how to adapt to changing conditions online. Such adaptation is especially critical if the robot encounters an environment with dynamics different than those considered in its model or in prior offline training. This paper proposes a learning-based framework that allows a walking robot to stabilize itself under disturbances neglected by its base controller. We consider an approach that simplifies the learning problem into two tasks: learning a model to estimate the robot’s steady-state response and learning a dynamics model for the system near its steady-state behavior. Through experiments with the MIT Mini Cheetah, we show that we can learn these models offline in simulation and transfer them to the real world, optionally finetuning them as the robot collects data. We demonstrate the effectiveness of our approach by applying it to stabilize the quadruped as it carries a box of water on its back.
APA
Fey, N., Li, H., Adrian, N., Wensing, P. & Lemmon, M. (2024). A learning-based framework to adapt legged robots on-the-fly to unexpected disturbances. Proceedings of the 6th Annual Learning for Dynamics & Control Conference, in Proceedings of Machine Learning Research 242:1161-1173. Available from https://proceedings.mlr.press/v242/fey24a.html.