Legged Robot State Estimation using Invariant Kalman Filtering and Learned Contact Events

Tzu-Yuan Lin, Ray Zhang, Justin Yu, Maani Ghaffari
Proceedings of the 5th Conference on Robot Learning, PMLR 164:1057-1066, 2022.

Abstract

This work develops a learning-based contact estimator for legged robots that bypasses the need for physical sensors and takes multi-modal proprioceptive sensory data as input. Unlike vision-based state estimators, proprioceptive state estimators are agnostic to perceptually degraded situations such as dark or foggy scenes. While some robots are equipped with dedicated physical sensors to detect the contact data needed for state estimation, many robots lack such sensors, and adding them is non-trivial without redesigning the hardware. The trained network can estimate contact events on different terrains. The experiments show that a contact-aided invariant extended Kalman filter driven by these learned contact events can generate odometry trajectories comparable in accuracy to a state-of-the-art visual SLAM system, enabling robust proprioceptive odometry.
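To make the pipeline concrete, below is a minimal PyTorch sketch of a learned contact estimator in the spirit of the abstract: a 1D-convolutional classifier over a sliding window of stacked proprioceptive channels whose predicted class is decoded into per-leg binary contacts. The window length, channel count, layer sizes, and the 16-class (one per contact combination of a quadruped) output are illustrative assumptions for this sketch, not the authors' published architecture.

# Minimal sketch of a learned contact estimator (assumptions noted above).
import torch
import torch.nn as nn

class ContactEstimator(nn.Module):
    def __init__(self, in_channels: int = 30, window: int = 150, n_legs: int = 4):
        super().__init__()
        # 1D convolutions over time, stacked proprioceptive channels
        # (e.g., joint positions/velocities, IMU rates and accelerations).
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * (window // 4), 128),
            nn.ReLU(),
            nn.Linear(128, 2 ** n_legs),  # one class per contact combination
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, window) -> logits over contact states
        return self.classifier(self.features(x))

# Usage: decode the predicted class into per-leg binary contacts, which
# would then gate the leg-kinematics measurement updates of a
# contact-aided invariant extended Kalman filter.
est = ContactEstimator()
logits = est(torch.randn(1, 30, 150))
state = int(logits.argmax(dim=1))
contacts = [(state >> leg) & 1 for leg in range(4)]  # 1 = foot in contact

The key design point the abstract implies is the interface: the network replaces a physical contact sensor, so the downstream filter only needs a binary in-contact signal per foot to decide when leg odometry is a valid measurement.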

Cite this Paper


BibTeX
@InProceedings{pmlr-v164-lin22b,
  title     = {Legged Robot State Estimation using Invariant Kalman Filtering and Learned Contact Events},
  author    = {Lin, Tzu-Yuan and Zhang, Ray and Yu, Justin and Ghaffari, Maani},
  booktitle = {Proceedings of the 5th Conference on Robot Learning},
  pages     = {1057--1066},
  year      = {2022},
  editor    = {Faust, Aleksandra and Hsu, David and Neumann, Gerhard},
  volume    = {164},
  series    = {Proceedings of Machine Learning Research},
  month     = {08--11 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v164/lin22b/lin22b.pdf},
  url       = {https://proceedings.mlr.press/v164/lin22b.html},
  abstract  = {This work develops a learning-based contact estimator for legged robots that bypasses the need for physical sensors and takes multi-modal proprioceptive sensory data as input. Unlike vision-based state estimators, proprioceptive state estimators are agnostic to perceptually degraded situations such as dark or foggy scenes. While some robots are equipped with dedicated physical sensors to detect necessary contact data for state estimation, some robots do not have dedicated contact sensors, and the addition of such sensors is non-trivial without redesigning the hardware. The trained network can estimate contact events on different terrains. The experiments show that a contact-aided invariant extended Kalman filter can generate accurate odometry trajectories compared to a state-of-the-art visual SLAM system, enabling robust proprioceptive odometry.}
}
Endnote
%0 Conference Paper
%T Legged Robot State Estimation using Invariant Kalman Filtering and Learned Contact Events
%A Tzu-Yuan Lin
%A Ray Zhang
%A Justin Yu
%A Maani Ghaffari
%B Proceedings of the 5th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Aleksandra Faust
%E David Hsu
%E Gerhard Neumann
%F pmlr-v164-lin22b
%I PMLR
%P 1057--1066
%U https://proceedings.mlr.press/v164/lin22b.html
%V 164
%X This work develops a learning-based contact estimator for legged robots that bypasses the need for physical sensors and takes multi-modal proprioceptive sensory data as input. Unlike vision-based state estimators, proprioceptive state estimators are agnostic to perceptually degraded situations such as dark or foggy scenes. While some robots are equipped with dedicated physical sensors to detect necessary contact data for state estimation, some robots do not have dedicated contact sensors, and the addition of such sensors is non-trivial without redesigning the hardware. The trained network can estimate contact events on different terrains. The experiments show that a contact-aided invariant extended Kalman filter can generate accurate odometry trajectories compared to a state-of-the-art visual SLAM system, enabling robust proprioceptive odometry.
APA
Lin, T., Zhang, R., Yu, J. & Ghaffari, M. (2022). Legged Robot State Estimation using Invariant Kalman Filtering and Learned Contact Events. Proceedings of the 5th Conference on Robot Learning, in Proceedings of Machine Learning Research 164:1057-1066. Available from https://proceedings.mlr.press/v164/lin22b.html.
