Robot Reinforcement Learning on the Constraint Manifold

Puze Liu, Davide Tateo, Haitham Bou Ammar, Jan Peters
Proceedings of the 5th Conference on Robot Learning, PMLR 164:1357-1366, 2022.

Abstract

Reinforcement learning in robotics is extremely challenging due to many practical issues, including safety, mechanical constraints, and wear and tear. Typically, these issues are not considered in the machine learning literature. One crucial problem in applying reinforcement learning in the real world is Safe Exploration, which requires the satisfaction of physical and safety constraints throughout the learning process. To explore in such a safety-critical environment, it is beneficial to leverage known information, such as robot models and constraints, to provide more robust safety guarantees. Exploiting this knowledge, we propose a novel method to learn robotics tasks in simulation efficiently while satisfying the constraints during the learning process.
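
The abstract only states the idea at a high level. As a rough illustration of what exploring "on the constraint manifold" can look like in practice, the sketch below projects a policy's raw joint-velocity action onto the tangent space of an equality constraint c(q) = 0 using the constraint Jacobian, plus a proportional term that pulls the state back onto the manifold. This is a minimal, generic null-space-projection sketch, not the algorithm proposed in the paper; the toy constraint, the gain k_c, and all helper names are hypothetical.

import numpy as np

def constraint(q):
    # Hypothetical equality constraint c(q) = 0: keep the tip of a planar
    # 2-link arm with unit link lengths on the line y = 0.5.
    return np.array([np.sin(q[0]) + np.sin(q[0] + q[1]) - 0.5])

def constraint_jacobian(q):
    # Analytic Jacobian dc/dq of the constraint above, shape (1, 2).
    return np.array([[np.cos(q[0]) + np.cos(q[0] + q[1]),
                      np.cos(q[0] + q[1])]])

def safe_velocity(q, dq_raw, k_c=10.0):
    # Project the raw exploratory joint velocity onto the tangent space of
    # the constraint manifold and add a proportional term that drives the
    # constraint error back to zero (classic null-space projection; not the
    # paper's exact update rule).
    J = constraint_jacobian(q)            # (m, n)
    J_pinv = np.linalg.pinv(J)            # (n, m)
    N = np.eye(q.size) - J_pinv @ J       # null-space projector
    return N @ dq_raw - k_c * (J_pinv @ constraint(q))

# Toy rollout: random "policy" actions stay (approximately) on the manifold.
rng = np.random.default_rng(0)
q = np.array([0.3, 0.4])
dt = 0.01
for _ in range(500):
    dq = safe_velocity(q, rng.normal(size=2))
    q = q + dt * dq
print("final constraint violation:", constraint(q))

In this toy rollout, arbitrary exploratory actions keep the constraint violation close to zero at every step, which is the property safe-exploration methods aim to guarantee throughout learning.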

Cite this Paper


BibTeX
@InProceedings{pmlr-v164-liu22c,
  title     = {Robot Reinforcement Learning on the Constraint Manifold},
  author    = {Liu, Puze and Tateo, Davide and Ammar, Haitham Bou and Peters, Jan},
  booktitle = {Proceedings of the 5th Conference on Robot Learning},
  pages     = {1357--1366},
  year      = {2022},
  editor    = {Faust, Aleksandra and Hsu, David and Neumann, Gerhard},
  volume    = {164},
  series    = {Proceedings of Machine Learning Research},
  month     = {08--11 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v164/liu22c/liu22c.pdf},
  url       = {https://proceedings.mlr.press/v164/liu22c.html},
  abstract  = {Reinforcement learning in robotics is extremely challenging due to many practical issues, including safety, mechanical constraints, and wear and tear. Typically, these issues are not considered in the machine learning literature. One crucial problem in applying reinforcement learning in the real world is Safe Exploration, which requires physical and safety constraints satisfaction throughout the learning process. To explore in such a safety-critical environment, leveraging known information such as robot models and constraints is beneficial to provide more robust safety guarantees. Exploiting this knowledge, we propose a novel method to learn robotics tasks in simulation efficiently while satisfying the constraints during the learning process.}
}
Endnote
%0 Conference Paper
%T Robot Reinforcement Learning on the Constraint Manifold
%A Puze Liu
%A Davide Tateo
%A Haitham Bou Ammar
%A Jan Peters
%B Proceedings of the 5th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Aleksandra Faust
%E David Hsu
%E Gerhard Neumann
%F pmlr-v164-liu22c
%I PMLR
%P 1357--1366
%U https://proceedings.mlr.press/v164/liu22c.html
%V 164
%X Reinforcement learning in robotics is extremely challenging due to many practical issues, including safety, mechanical constraints, and wear and tear. Typically, these issues are not considered in the machine learning literature. One crucial problem in applying reinforcement learning in the real world is Safe Exploration, which requires physical and safety constraints satisfaction throughout the learning process. To explore in such a safety-critical environment, leveraging known information such as robot models and constraints is beneficial to provide more robust safety guarantees. Exploiting this knowledge, we propose a novel method to learn robotics tasks in simulation efficiently while satisfying the constraints during the learning process.
APA
Liu, P., Tateo, D., Ammar, H.B. & Peters, J. (2022). Robot Reinforcement Learning on the Constraint Manifold. Proceedings of the 5th Conference on Robot Learning, in Proceedings of Machine Learning Research 164:1357-1366. Available from https://proceedings.mlr.press/v164/liu22c.html.