Reward Estimation for Variance Reduction in Deep Reinforcement Learning

Joshua Romoff, Peter Henderson, Alexandre Piche, Vincent Francois-Lavet, Joelle Pineau
Proceedings of The 2nd Conference on Robot Learning, PMLR 87:674-699, 2018.

Abstract

Reinforcement Learning (RL) agents require the specification of a reward signal for learning behaviours. However, introduction of corrupt or stochastic rewards can yield high variance in learning. Such corruption may be a direct result of goal misspecification, randomness in the reward signal, or correlation of the reward with external factors that are not known to the agent. Corruption or stochasticity of the reward signal can be especially problematic in robotics, where goal specification can be particularly difficult for complex tasks. While many variance reduction techniques have been studied to improve the robustness of the RL process, handling such stochastic or corrupted reward structures remains difficult. As an alternative for handling this scenario in model-free RL methods, we suggest using an estimator for both rewards and value functions. We demonstrate that this improves performance under corrupted stochastic rewards in both the tabular and non-linear function approximation settings for a variety of noise types and environments. The use of reward estimation is a robust and easy-to-implement improvement for handling corrupted reward signals in model-free RL.
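The core idea described in the abstract, replacing the raw (possibly corrupted) reward with a learned estimate when forming value targets, can be sketched briefly. The snippet below is a minimal illustrative tabular Q-learning variant, not the authors' exact algorithm; the gym-style environment interface, the running-mean reward estimator, and all hyperparameters are assumptions made purely for illustration.

import numpy as np

# Illustrative sketch (not the paper's exact method): tabular Q-learning in
# which a per-(state, action) running-mean reward estimate is used in place
# of the noisy observed reward when computing the TD target.
def q_learning_with_reward_estimation(env, n_states, n_actions,
                                      episodes=500, alpha=0.1,
                                      beta=0.1, gamma=0.99, eps=0.1):
    Q = np.zeros((n_states, n_actions))      # value estimates
    R_hat = np.zeros((n_states, n_actions))  # learned reward estimator

    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            # epsilon-greedy action selection
            if np.random.rand() < eps:
                a = np.random.randint(n_actions)
            else:
                a = int(np.argmax(Q[s]))

            s_next, r_noisy, done, _ = env.step(a)

            # Move the reward estimate toward the (possibly corrupted)
            # observation; zero-mean noise is averaged out over time.
            R_hat[s, a] += beta * (r_noisy - R_hat[s, a])

            # Use the estimated reward, not the raw noisy one, in the target.
            target = R_hat[s, a] + gamma * (0.0 if done else np.max(Q[s_next]))
            Q[s, a] += alpha * (target - Q[s, a])

            s = s_next
    return Q

The same substitution carries over to the function-approximation setting studied in the paper, where both the value function and the reward estimator are parameterized (e.g., by neural networks) rather than stored in tables.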

Cite this Paper


BibTeX
@InProceedings{pmlr-v87-romoff18a,
  title     = {Reward Estimation for Variance Reduction in Deep Reinforcement Learning},
  author    = {Romoff, Joshua and Henderson, Peter and Piche, Alexandre and Francois-Lavet, Vincent and Pineau, Joelle},
  booktitle = {Proceedings of The 2nd Conference on Robot Learning},
  pages     = {674--699},
  year      = {2018},
  editor    = {Billard, Aude and Dragan, Anca and Peters, Jan and Morimoto, Jun},
  volume    = {87},
  series    = {Proceedings of Machine Learning Research},
  month     = {29--31 Oct},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v87/romoff18a/romoff18a.pdf},
  url       = {https://proceedings.mlr.press/v87/romoff18a.html},
  abstract  = {Reinforcement Learning (RL) agents require the specification of a reward signal for learning behaviours. However, introduction of corrupt or stochastic rewards can yield high variance in learning. Such corruption may be a direct result of goal misspecification, randomness in the reward signal, or correlation of the reward with external factors that are not known to the agent. Corruption or stochasticity of the reward signal can be especially problematic in robotics, where goal specification can be particularly difficult for complex tasks. While many variance reduction techniques have been studied to improve the robustness of the RL process, handling such stochastic or corrupted reward structures remains difficult. As an alternative for handling this scenario in model-free RL methods, we suggest using an estimator for both rewards and value functions. We demonstrate that this improves performance under corrupted stochastic rewards in both the tabular and non-linear function approximation settings for a variety of noise types and environments. The use of reward estimation is a robust and easy-to-implement improvement for handling corrupted reward signals in model-free RL.}
}
Endnote
%0 Conference Paper
%T Reward Estimation for Variance Reduction in Deep Reinforcement Learning
%A Joshua Romoff
%A Peter Henderson
%A Alexandre Piche
%A Vincent Francois-Lavet
%A Joelle Pineau
%B Proceedings of The 2nd Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Aude Billard
%E Anca Dragan
%E Jan Peters
%E Jun Morimoto
%F pmlr-v87-romoff18a
%I PMLR
%P 674--699
%U https://proceedings.mlr.press/v87/romoff18a.html
%V 87
%X Reinforcement Learning (RL) agents require the specification of a reward signal for learning behaviours. However, introduction of corrupt or stochastic rewards can yield high variance in learning. Such corruption may be a direct result of goal misspecification, randomness in the reward signal, or correlation of the reward with external factors that are not known to the agent. Corruption or stochasticity of the reward signal can be especially problematic in robotics, where goal specification can be particularly difficult for complex tasks. While many variance reduction techniques have been studied to improve the robustness of the RL process, handling such stochastic or corrupted reward structures remains difficult. As an alternative for handling this scenario in model-free RL methods, we suggest using an estimator for both rewards and value functions. We demonstrate that this improves performance under corrupted stochastic rewards in both the tabular and non-linear function approximation settings for a variety of noise types and environments. The use of reward estimation is a robust and easy-to-implement improvement for handling corrupted reward signals in model-free RL.
APA
Romoff, J., Henderson, P., Piche, A., Francois-Lavet, V. & Pineau, J. (2018). Reward Estimation for Variance Reduction in Deep Reinforcement Learning. Proceedings of The 2nd Conference on Robot Learning, in Proceedings of Machine Learning Research 87:674-699. Available from https://proceedings.mlr.press/v87/romoff18a.html.