Seeing-Eye Quadruped Navigation with Force Responsive Locomotion Control

David DeFazio, Eisuke Hirota, Shiqi Zhang
Proceedings of The 7th Conference on Robot Learning, PMLR 229:2184-2194, 2023.

Abstract

Seeing-eye robots are very useful tools for guiding visually impaired people, potentially producing a huge societal impact given the low availability and high cost of real guide dogs. Although a few seeing-eye robot systems have already been demonstrated, none considered external tugs from humans, which frequently occur in a real guide dog setting. In this paper, we simultaneously train a locomotion controller that is robust to external tugging forces via Reinforcement Learning (RL), and an external force estimator via supervised learning. The controller ensures stable walking, and the force estimator enables the robot to respond to the external forces from the human. These forces are used to guide the robot to the global goal, which is unknown to the robot, while the robot guides the human around nearby obstacles via a local planner. Experimental results in simulation and on hardware show that our controller is robust to external forces, and our seeing-eye system can accurately detect force direction. We demonstrate our full seeing-eye robot system on a real quadruped robot with a blindfolded human.
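The abstract describes a pipeline in which an RL locomotion controller keeps the robot walking stably under tugs, a supervised force estimator recovers the direction of the human's pull, and a local planner turns that direction into motion commands while avoiding nearby obstacles. As a rough illustration only (not the authors' implementation), the sketch below shows one way an estimated 2D tug force could be mapped to a yaw-rate command for such a planner; the function name, thresholds, and gains are all hypothetical.

```python
# Hypothetical sketch of force-guided steering; not the paper's code.
import numpy as np

def force_to_yaw_command(force_xy, deadband=5.0, gain=0.5, max_yaw_rate=0.6):
    """Map an estimated 2D tug force (robot frame, newtons) to a yaw-rate command.

    deadband     : ignore weak tugs below this magnitude (assumed threshold)
    gain         : proportional steering gain (assumed value)
    max_yaw_rate : clip to the locomotion controller's command range (assumed)
    """
    magnitude = np.linalg.norm(force_xy)
    if magnitude < deadband:
        return 0.0  # no meaningful tug: keep the current heading
    # Angle of the tug relative to the robot's forward (+x) axis.
    desired_yaw = np.arctan2(force_xy[1], force_xy[0])
    # Proportional steering, clipped to the allowed command range.
    return float(np.clip(gain * desired_yaw, -max_yaw_rate, max_yaw_rate))

# Example: a tug pulling toward the robot's left (+y) yields a positive yaw rate.
print(force_to_yaw_command(np.array([10.0, 8.0])))
```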

Cite this Paper


BibTeX
@InProceedings{pmlr-v229-defazio23a,
  title     = {Seeing-Eye Quadruped Navigation with Force Responsive Locomotion Control},
  author    = {DeFazio, David and Hirota, Eisuke and Zhang, Shiqi},
  booktitle = {Proceedings of The 7th Conference on Robot Learning},
  pages     = {2184--2194},
  year      = {2023},
  editor    = {Tan, Jie and Toussaint, Marc and Darvish, Kourosh},
  volume    = {229},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v229/defazio23a/defazio23a.pdf},
  url       = {https://proceedings.mlr.press/v229/defazio23a.html},
  abstract  = {Seeing-eye robots are very useful tools for guiding visually impaired people, potentially producing a huge societal impact given the low availability and high cost of real guide dogs. Although a few seeing-eye robot systems have already been demonstrated, none considered external tugs from humans, which frequently occur in a real guide dog setting. In this paper, we simultaneously train a locomotion controller that is robust to external tugging forces via Reinforcement Learning (RL), and an external force estimator via supervised learning. The controller ensures stable walking, and the force estimator enables the robot to respond to the external forces from the human. These forces are used to guide the robot to the global goal, which is unknown to the robot, while the robot guides the human around nearby obstacles via a local planner. Experimental results in simulation and on hardware show that our controller is robust to external forces, and our seeing-eye system can accurately detect force direction. We demonstrate our full seeing-eye robot system on a real quadruped robot with a blindfolded human.}
}
Endnote
%0 Conference Paper
%T Seeing-Eye Quadruped Navigation with Force Responsive Locomotion Control
%A David DeFazio
%A Eisuke Hirota
%A Shiqi Zhang
%B Proceedings of The 7th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Jie Tan
%E Marc Toussaint
%E Kourosh Darvish
%F pmlr-v229-defazio23a
%I PMLR
%P 2184--2194
%U https://proceedings.mlr.press/v229/defazio23a.html
%V 229
%X Seeing-eye robots are very useful tools for guiding visually impaired people, potentially producing a huge societal impact given the low availability and high cost of real guide dogs. Although a few seeing-eye robot systems have already been demonstrated, none considered external tugs from humans, which frequently occur in a real guide dog setting. In this paper, we simultaneously train a locomotion controller that is robust to external tugging forces via Reinforcement Learning (RL), and an external force estimator via supervised learning. The controller ensures stable walking, and the force estimator enables the robot to respond to the external forces from the human. These forces are used to guide the robot to the global goal, which is unknown to the robot, while the robot guides the human around nearby obstacles via a local planner. Experimental results in simulation and on hardware show that our controller is robust to external forces, and our seeing-eye system can accurately detect force direction. We demonstrate our full seeing-eye robot system on a real quadruped robot with a blindfolded human.
APA
DeFazio, D., Hirota, E., & Zhang, S. (2023). Seeing-Eye Quadruped Navigation with Force Responsive Locomotion Control. Proceedings of The 7th Conference on Robot Learning, in Proceedings of Machine Learning Research 229:2184-2194. Available from https://proceedings.mlr.press/v229/defazio23a.html.