Look Before You Leap: Safe Model-Based Reinforcement Learning with Human Intervention

Yunkun Xu, Zhenyu Liu, Guifang Duan, Jiangcheng Zhu, Xiaolong Bai, Jianrong Tan
Proceedings of the 5th Conference on Robot Learning, PMLR 164:332-341, 2022.

Abstract

Safety has become one of the main challenges of applying deep reinforcement learning to real-world systems. Currently, incorporating external knowledge such as human oversight is the only means of preventing the agent from visiting catastrophic states. In this paper, we propose MBHI, a novel framework for safe model-based reinforcement learning that ensures safety at the state level and can effectively avoid both local and non-local catastrophes. An ensemble of supervised learners is trained in MBHI to imitate human blocking decisions. Mirroring the human decision-making process, MBHI rolls out an imagined trajectory in the dynamics model before executing actions in the environment and estimates its safety. When the imagination encounters a catastrophe, MBHI blocks the current action and uses an efficient MPC method to output a safe policy. We evaluate our method on several safety tasks, and the results show that MBHI achieves better sample efficiency and fewer catastrophes than the baselines.
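The abstract describes the MBHI decision loop at a high level: imagine a trajectory in the learned dynamics model, screen each imagined step with an ensemble blocker trained on human interventions, and fall back to an MPC-planned action if the imagination hits a catastrophe. The sketch below is a minimal, hedged Python illustration of that loop; the names (`BlockerEnsemble`, `mbhi_step`, `mpc_planner`) and details such as the mean-vote threshold and rollout horizon are our own assumptions, not the authors' released implementation.

```python
import numpy as np

class BlockerEnsemble:
    """Ensemble of supervised learners imitating human blocking decisions.

    Each member is a callable (state, action) -> predicted catastrophe risk.
    """
    def __init__(self, members):
        self.members = members

    def is_catastrophic(self, state, action, threshold=0.5):
        # Assumed voting rule: flag the pair if the mean predicted risk
        # across ensemble members crosses the threshold.
        risk = np.mean([m(state, action) for m in self.members])
        return risk > threshold

def mbhi_step(state, policy, dynamics_model, blocker, mpc_planner, horizon=10):
    """'Look before you leap': roll out an imagined trajectory in the learned
    dynamics model and estimate its safety before acting in the real environment."""
    action = policy(state)
    s, a = state, action
    for _ in range(horizon):
        if blocker.is_catastrophic(s, a):
            # Imagination encountered a (local or non-local) catastrophe:
            # block the proposed action and return an MPC-planned safe action.
            return mpc_planner(state)
        s = dynamics_model(s, a)  # one-step model prediction
        a = policy(s)
    return action  # imagined rollout looked safe; execute the original action

if __name__ == "__main__":
    # Toy 1-D demo: the blocker flags any transition that would leave |s + a| <= 4.
    members = [lambda s, a: float(abs(s + a) > 4.0) for _ in range(5)]
    dyn = lambda s, a: s + a   # toy linear dynamics
    pol = lambda s: 1.0        # nominal policy: always move right
    mpc = lambda s: -1.0       # illustrative "safe" fallback action
    # The imagined rollout hits the unsafe region, so the MPC fallback (-1.0) is returned.
    print(mbhi_step(0.0, pol, dyn, BlockerEnsemble(members), mpc, horizon=10))
```

Because the blocker screens imagined states rather than only the current one, this structure can veto actions whose consequences are several steps away, which is how the paper distinguishes non-local from local catastrophes.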

Cite this Paper


BibTeX
@InProceedings{pmlr-v164-xu22a,
  title     = {Look Before You Leap: Safe Model-Based Reinforcement Learning with Human Intervention},
  author    = {Xu, Yunkun and Liu, Zhenyu and Duan, Guifang and Zhu, Jiangcheng and Bai, Xiaolong and Tan, Jianrong},
  booktitle = {Proceedings of the 5th Conference on Robot Learning},
  pages     = {332--341},
  year      = {2022},
  editor    = {Faust, Aleksandra and Hsu, David and Neumann, Gerhard},
  volume    = {164},
  series    = {Proceedings of Machine Learning Research},
  month     = {08--11 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v164/xu22a/xu22a.pdf},
  url       = {https://proceedings.mlr.press/v164/xu22a.html}
}
Endnote
%0 Conference Paper
%T Look Before You Leap: Safe Model-Based Reinforcement Learning with Human Intervention
%A Yunkun Xu
%A Zhenyu Liu
%A Guifang Duan
%A Jiangcheng Zhu
%A Xiaolong Bai
%A Jianrong Tan
%B Proceedings of the 5th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Aleksandra Faust
%E David Hsu
%E Gerhard Neumann
%F pmlr-v164-xu22a
%I PMLR
%P 332--341
%U https://proceedings.mlr.press/v164/xu22a.html
%V 164
APA
Xu, Y., Liu, Z., Duan, G., Zhu, J., Bai, X. & Tan, J. (2022). Look Before You Leap: Safe Model-Based Reinforcement Learning with Human Intervention. Proceedings of the 5th Conference on Robot Learning, in Proceedings of Machine Learning Research 164:332-341. Available from https://proceedings.mlr.press/v164/xu22a.html.
