Training Robots to Evaluate Robots: Example-Based Interactive Reward Functions for Policy Learning

Kun Huang, Edward S. Hu, Dinesh Jayaraman
Proceedings of The 6th Conference on Robot Learning, PMLR 205:11-21, 2023.

Abstract

Physical interactions can often help reveal information that is not readily apparent. For example, we may tug at a table leg to evaluate whether it is built well, or turn a water bottle upside down to check that it is watertight. We propose to train robots to acquire such interactive behaviors automatically, for the purpose of evaluating the result of an attempted robotic skill execution. These evaluations in turn serve as "interactive reward functions" (IRFs) for training reinforcement learning policies to perform the target skill, such as screwing the table leg tightly. In addition, even after task policies are fully trained, IRFs can serve as verification mechanisms that improve online task execution. For any given task, our IRFs can be conveniently trained using only examples of successful outcomes, and no further specification is needed to train the task policy thereafter. In our evaluations on door locking and weighted block stacking in simulation, and screw tightening on a real robot, IRFs enable large performance improvements, even outperforming baselines with access to demonstrations or carefully engineered rewards.
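The training-and-verification loop sketched in the abstract can be illustrated with a minimal, hypothetical example. The sketch below assumes a simple example-based setup: an outcome classifier fit to success examples, a probing policy that interacts with the outcome, and a sparse reward computed from the probe's result. All names (OutcomeClassifier, run_probe, interactive_reward), the scalar "tightness" observations, and the threshold values are illustrative placeholders, not the authors' implementation.

import random

class OutcomeClassifier:
    """Scores how closely a post-probe observation matches the provided
    success examples. A trivial nearest-neighbor stand-in for a learned
    classifier (placeholder, not the paper's model)."""
    def __init__(self, success_observations):
        self.success_observations = success_observations

    def success_probability(self, observation):
        # Toy similarity: inverse distance to the closest success example.
        # The sharpness factor 10.0 is arbitrary.
        d = min(abs(observation - s) for s in self.success_observations)
        return 1.0 / (1.0 + 10.0 * d)

def run_probe(env_state, probe_policy):
    """Roll out the interactive probe (e.g., tugging the assembled table leg)
    and return the observation the interaction produces."""
    return probe_policy(env_state)

def interactive_reward(env_state, probe_policy, classifier, threshold=0.5):
    """IRF: probe the attempted outcome, then score the post-probe
    observation against the success examples to get a sparse reward."""
    obs = run_probe(env_state, probe_policy)
    return float(classifier.success_probability(obs) > threshold)

if __name__ == "__main__":
    # Success examples: final states where the screw is tight (encoded as ~1.0).
    classifier = OutcomeClassifier(success_observations=[1.0, 0.98, 1.02])

    # Placeholder probe: interaction reveals the true tightness, with noise.
    probe = lambda state: state + random.gauss(0.0, 0.05)

    # Sparse rewards for two candidate task executions.
    print(interactive_reward(0.4, probe, classifier))  # loose screw: likely 0.0
    print(interactive_reward(1.0, probe, classifier))  # tight screw: likely 1.0

In this reading, the same interactive_reward check that supervises reinforcement learning of the task policy could also gate retries at execution time, corresponding to the verification use the abstract mentions.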

Cite this Paper


BibTeX
@InProceedings{pmlr-v205-huang23a,
  title     = {Training Robots to Evaluate Robots: Example-Based Interactive Reward Functions for Policy Learning},
  author    = {Huang, Kun and Hu, Edward S. and Jayaraman, Dinesh},
  booktitle = {Proceedings of The 6th Conference on Robot Learning},
  pages     = {11--21},
  year      = {2023},
  editor    = {Liu, Karen and Kulic, Dana and Ichnowski, Jeff},
  volume    = {205},
  series    = {Proceedings of Machine Learning Research},
  month     = {14--18 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v205/huang23a/huang23a.pdf},
  url       = {https://proceedings.mlr.press/v205/huang23a.html}
}
Endnote
%0 Conference Paper
%T Training Robots to Evaluate Robots: Example-Based Interactive Reward Functions for Policy Learning
%A Kun Huang
%A Edward S. Hu
%A Dinesh Jayaraman
%B Proceedings of The 6th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Karen Liu
%E Dana Kulic
%E Jeff Ichnowski
%F pmlr-v205-huang23a
%I PMLR
%P 11--21
%U https://proceedings.mlr.press/v205/huang23a.html
%V 205
APA
Huang, K., Hu, E. S., & Jayaraman, D. (2023). Training Robots to Evaluate Robots: Example-Based Interactive Reward Functions for Policy Learning. Proceedings of The 6th Conference on Robot Learning, in Proceedings of Machine Learning Research 205:11-21. Available from https://proceedings.mlr.press/v205/huang23a.html.
