Assisted Robust Reward Design

Jerry Zhi-Yang He, Anca D. Dragan
Proceedings of the 5th Conference on Robot Learning, PMLR 164:1234-1246, 2022.

Abstract

Real-world robotic tasks require complex reward functions. When we define the problem the robot needs to solve, we pretend that a designer specifies this complex reward exactly, and it is set in stone from then on. In practice, however, reward design is an iterative process: the designer chooses a reward, eventually encounters an “edge-case” environment where the reward incentivizes the wrong behavior, revises the reward, and repeats. What would it mean to rethink robotics problems to formally account for this iterative nature of reward design? We propose that the robot not take the specified reward for granted, but rather have uncertainty about it, and account for the future design iterations as future evidence. We contribute an Assisted Reward Design method that speeds up the design process by anticipating and influencing this future evidence: rather than letting the designer eventually encounter failure cases and revise the reward then, the method actively exposes the designer to such environments during the development phase. We test this method in an autonomous driving task and find that it more quickly improves the car’s behavior in held-out environments by proposing environments that are “edge cases” for the current reward.
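The abstract describes the approach only at a high level. As a rough illustration (not the authors' implementation), one way to realize the "edge case" proposal step is to keep an Inverse-Reward-Design-style posterior over the true reward weights given the designer's proxy, and propose the candidate environment where posterior samples disagree most about what the optimal behavior is. The sketch below assumes a linear reward over trajectory features; every name in it (`posterior_samples`, `disagreement`, the Gaussian noise model) is a hypothetical stand-in.

```python
# Minimal sketch: instead of trusting the designer's proxy reward, hold a
# posterior over reward weights and surface the environment where posterior
# samples disagree most about the optimal plan -- an "edge case".
import numpy as np

rng = np.random.default_rng(0)

def posterior_samples(proxy_w, n=50, noise=0.3):
    """Hypothetical stand-in for an IRD-style posterior: treat the
    designer's proxy weights as noisy evidence of the true weights."""
    return proxy_w + noise * rng.standard_normal((n, proxy_w.shape[0]))

def best_plan(w, env_features):
    """Index of the trajectory (row of env_features) with the highest
    reward under weights w. env_features: (num_trajectories, num_features)."""
    return int(np.argmax(env_features @ w))

def disagreement(proxy_w, env_features):
    """Entropy of the optimal-plan choice across posterior samples.
    High entropy = the current reward is ambiguous in this environment."""
    samples = posterior_samples(proxy_w)
    plans = [best_plan(w, env_features) for w in samples]
    counts = np.bincount(plans, minlength=env_features.shape[0])
    probs = counts / counts.sum()
    nonzero = probs[probs > 0]
    return -(nonzero * np.log(nonzero)).sum()

# Propose the most contentious environment under the current proxy reward.
proxy_w = np.array([1.0, -0.5, 0.2])   # designer's current (proxy) weights
candidate_envs = [rng.standard_normal((8, 3)) for _ in range(20)]
edge_case = max(candidate_envs, key=lambda env: disagreement(proxy_w, env))
```

Under this reading, the designer is shown `edge_case`, revises the proxy reward, and the loop repeats, concentrating design effort on the environments where the reward is least trustworthy.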

Cite this Paper


BibTeX
@InProceedings{pmlr-v164-he22a,
  title     = {Assisted Robust Reward Design},
  author    = {He, Jerry Zhi-Yang and Dragan, Anca D.},
  booktitle = {Proceedings of the 5th Conference on Robot Learning},
  pages     = {1234--1246},
  year      = {2022},
  editor    = {Faust, Aleksandra and Hsu, David and Neumann, Gerhard},
  volume    = {164},
  series    = {Proceedings of Machine Learning Research},
  month     = {08--11 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v164/he22a/he22a.pdf},
  url       = {https://proceedings.mlr.press/v164/he22a.html}
}
Endnote
%0 Conference Paper
%T Assisted Robust Reward Design
%A Jerry Zhi-Yang He
%A Anca D. Dragan
%B Proceedings of the 5th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Aleksandra Faust
%E David Hsu
%E Gerhard Neumann
%F pmlr-v164-he22a
%I PMLR
%P 1234--1246
%U https://proceedings.mlr.press/v164/he22a.html
%V 164
APA
He, J.Z. & Dragan, A.D. (2022). Assisted Robust Reward Design. Proceedings of the 5th Conference on Robot Learning, in Proceedings of Machine Learning Research 164:1234-1246. Available from https://proceedings.mlr.press/v164/he22a.html.
