A learning-based framework to adapt legged robots on-the-fly to unexpected disturbances
Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1161-1173, 2024.
Abstract
State-of-the-art control methods for legged robots demonstrate impressive performance and robustness on a variety of terrains. Still, these approaches often lack the ability to adapt to changing conditions online. Such adaptation is especially critical if the robot encounters an environment with dynamics different from those considered in its model or in prior offline training. This paper proposes a learning-based framework that allows a walking robot to stabilize itself under disturbances neglected by its base controller. We consider an approach that simplifies the learning problem into two tasks: learning a model to estimate the robot’s steady-state response, and learning a dynamics model for the system near its steady-state behavior. Through experiments with the MIT Mini Cheetah, we show that we can learn these models offline in simulation and transfer them to the real world, optionally fine-tuning them as the robot collects data. We demonstrate the effectiveness of our approach by applying it to stabilize the quadruped as it carries a box of water on its back.
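The two-task decomposition described above can be illustrated with a toy example. The sketch below is an illustrative assumption, not the paper's actual models: it uses a hypothetical linear system whose state converges to a disturbance-dependent steady state, fits a steady-state response model from late-rollout data, and then fits a dynamics model for deviations around that estimated steady state, both via least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy system (illustrative, not the paper's robot model):
# the state x converges to a disturbance-dependent steady state
# x_ss = W d, and the deviation e = x - x_ss decays linearly via A.
A_true = np.diag([0.9, 0.8])
W_true = np.array([[1.0, 0.5],
                   [0.2, 1.0]])

def simulate_step(x, d):
    x_ss = W_true @ d
    return x_ss + A_true @ (x - x_ss)

# Collect rollouts under random constant disturbances.
X, D, Xn = [], [], []
n_steps = 40
for _ in range(100):
    d = rng.normal(size=2)
    x = rng.normal(size=2)
    for _ in range(n_steps):
        xn = simulate_step(x, d)
        X.append(x); D.append(d); Xn.append(xn)
        x = xn
X, D, Xn = map(np.asarray, (X, D, Xn))

# Task 1: learn the steady-state response x_ss ~ d @ W_hat,
# fit on late-rollout states where transients have decayed.
late = np.arange(len(X)) % n_steps >= 35
W_hat, *_ = np.linalg.lstsq(D[late], X[late], rcond=None)

# Task 2: learn deviation dynamics e' ~ e @ A_hat around the
# estimated steady state, fit on all transition pairs.
E  = X  - D @ W_hat
En = Xn - D @ W_hat
A_hat, *_ = np.linalg.lstsq(E, En, rcond=None)
```

In this linear setting both models are exactly recoverable, so `W_hat` and `A_hat` converge to (transposes of) the true matrices; the framework in the paper plays an analogous role with learned models on the real quadruped.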