Safe Imitation Learning via Fast Bayesian Reward Inference from Preferences

Daniel Brown, Russell Coleman, Ravi Srinivasan, Scott Niekum
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:1165-1177, 2020.

Abstract

Bayesian reward learning from demonstrations enables rigorous safety and uncertainty analysis when performing imitation learning. However, Bayesian reward learning methods are typically computationally intractable for complex control problems. We propose Bayesian Reward Extrapolation (Bayesian REX), a highly efficient Bayesian reward learning algorithm that scales to high-dimensional imitation learning problems by pre-training a low-dimensional feature encoding via self-supervised tasks and then leveraging preferences over demonstrations to perform fast Bayesian inference. Bayesian REX can learn to play Atari games from demonstrations, without access to the game score, and can generate 100,000 samples from the posterior over reward functions in only 5 minutes on a personal laptop. Bayesian REX also results in imitation learning performance that is competitive with or better than state-of-the-art methods that only learn point estimates of the reward function. Finally, Bayesian REX enables efficient high-confidence policy evaluation without having access to samples of the reward function. These high-confidence performance bounds can be used to rank the performance and risk of a variety of evaluation policies and provide a way to detect reward hacking behaviors.
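
The following Python sketch (not the authors' released code) illustrates the kind of inference the abstract describes, under two assumptions: each demonstration has already been reduced to a low-dimensional feature-count vector by the pre-trained encoder, and preferences over demonstrations follow a Bradley-Terry style likelihood over linear reward weights. The function names (log_likelihood, mcmc_posterior, high_confidence_return) and the toy data are illustrative, not from the paper's codebase.

import numpy as np

def log_likelihood(w, phi, prefs, beta=1.0):
    # phi: (num_demos, d) array of per-demonstration feature counts
    # prefs: list of (i, j) pairs meaning demonstration j is preferred to i
    returns = beta * (phi @ w)                       # predicted return of each demo
    ll = 0.0
    for i, j in prefs:
        # Bradley-Terry probability that demo j is ranked above demo i
        ll += returns[j] - np.logaddexp(returns[i], returns[j])
    return ll

def mcmc_posterior(phi, prefs, num_samples=100_000, step=0.05, seed=0):
    # Random-walk Metropolis over reward weights kept on the unit sphere
    # (uniform prior assumed, so the acceptance ratio is the likelihood ratio).
    rng = np.random.default_rng(seed)
    d = phi.shape[1]
    w = rng.normal(size=d)
    w /= np.linalg.norm(w)
    ll = log_likelihood(w, phi, prefs)
    samples = np.empty((num_samples, d))
    for t in range(num_samples):
        w_prop = w + step * rng.normal(size=d)
        w_prop /= np.linalg.norm(w_prop)
        ll_prop = log_likelihood(w_prop, phi, prefs)
        if np.log(rng.uniform()) < ll_prop - ll:
            w, ll = w_prop, ll_prop
        samples[t] = w
    return samples

def high_confidence_return(phi_eval, samples, alpha=0.05):
    # Posterior distribution over an evaluation policy's return; the
    # alpha-quantile gives a (1 - alpha) high-confidence lower bound.
    returns = samples @ phi_eval
    return np.quantile(returns, alpha)

# Toy usage: four demonstrations with 3-d feature counts, ranked worst to best.
phi = np.array([[1.0, 0.0, 2.0],
                [2.0, 1.0, 1.0],
                [3.0, 2.0, 0.5],
                [4.0, 3.0, 0.0]])
prefs = [(0, 1), (1, 2), (2, 3), (0, 3)]             # later demos are preferred
posterior = mcmc_posterior(phi, prefs, num_samples=20_000)
print(high_confidence_return(phi[3], posterior))     # 0.95-confidence lower bound

Because the likelihood only involves dot products between the sampled weights and fixed feature counts, each MCMC step is cheap, which is why the posterior can be sampled in minutes; ranking evaluation policies then amounts to comparing their posterior return distributions (e.g., their lower quantiles) under the same weight samples.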

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-brown20a,
  title     = {Safe Imitation Learning via Fast {B}ayesian Reward Inference from Preferences},
  author    = {Brown, Daniel and Coleman, Russell and Srinivasan, Ravi and Niekum, Scott},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {1165--1177},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/brown20a/brown20a.pdf},
  url       = {https://proceedings.mlr.press/v119/brown20a.html}
}
Endnote
%0 Conference Paper
%T Safe Imitation Learning via Fast Bayesian Reward Inference from Preferences
%A Daniel Brown
%A Russell Coleman
%A Ravi Srinivasan
%A Scott Niekum
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-brown20a
%I PMLR
%P 1165--1177
%U https://proceedings.mlr.press/v119/brown20a.html
%V 119
APA
Brown, D., Coleman, R., Srinivasan, R. & Niekum, S. (2020). Safe Imitation Learning via Fast Bayesian Reward Inference from Preferences. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:1165-1177. Available from https://proceedings.mlr.press/v119/brown20a.html.