Counterfactual Credit Assignment in Model-Free Reinforcement Learning

Thomas Mesnard, Theophane Weber, Fabio Viola, Shantanu Thakoor, Alaa Saade, Anna Harutyunyan, Will Dabney, Thomas S Stepleton, Nicolas Heess, Arthur Guez, Eric Moulines, Marcus Hutter, Lars Buesing, Remi Munos
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:7654-7664, 2021.

Abstract

Credit assignment in reinforcement learning is the problem of measuring an action’s influence on future rewards. In particular, this requires separating skill from luck, i.e. disentangling the effect of an action on rewards from that of external factors and subsequent actions. To achieve this, we adapt the notion of counterfactuals from causality theory to a model-free RL setup. The key idea is to condition value functions on future events, by learning to extract relevant information from a trajectory. We formulate a family of policy gradient algorithms that use these future-conditional value functions as baselines or critics, and show that they are provably low variance. To avoid the potential bias from conditioning on future information, we constrain the hindsight information to not contain information about the agent’s actions. We demonstrate the efficacy and validity of our algorithm on a number of illustrative and challenging problems.
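To make the abstract's central idea concrete, here is a minimal, illustrative sketch (not the authors' implementation) of a policy-gradient term that uses a future-conditional ("hindsight") baseline. The feature extractor, the linear baseline, and all variable names below are assumptions made for illustration; in the paper these quantities are learned with neural networks, and the hindsight features are explicitly constrained to carry no information about the agent's action so that the baseline remains unbiased.

# Sketch of a REINFORCE-style update with a future-conditional baseline.
# Everything here is a toy stand-in for the learned components in the paper.
import numpy as np

rng = np.random.default_rng(0)

def hindsight_features(future_rewards):
    # Placeholder Phi: a summary of the future trajectory. In the paper this
    # is learned and constrained to be uninformative about the chosen action.
    return np.array([future_rewards.mean(), future_rewards.std()])

def future_conditional_baseline(state, phi, w):
    # Baseline V(s, Phi): a linear function here, purely for illustration.
    return w @ np.concatenate([state, phi])

def policy_gradient_term(log_prob_grad, ret, state, phi, w):
    # (R - V(s, Phi)) * grad log pi(a|s): subtracting a baseline that explains
    # the "luck" (external randomness) in the return lowers variance.
    advantage = ret - future_conditional_baseline(state, phi, w)
    return advantage * log_prob_grad

# Toy usage with random data.
state = rng.normal(size=4)
future_rewards = rng.normal(size=10)
ret = future_rewards.sum()
phi = hindsight_features(future_rewards)
w = 0.1 * rng.normal(size=state.size + phi.size)
log_prob_grad = rng.normal(size=3)  # stand-in for grad_theta log pi(a|s)
print(policy_gradient_term(log_prob_grad, ret, state, phi, w))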

Cite this Paper

BibTeX
@InProceedings{pmlr-v139-mesnard21a,
  title     = {Counterfactual Credit Assignment in Model-Free Reinforcement Learning},
  author    = {Mesnard, Thomas and Weber, Theophane and Viola, Fabio and Thakoor, Shantanu and Saade, Alaa and Harutyunyan, Anna and Dabney, Will and Stepleton, Thomas S and Heess, Nicolas and Guez, Arthur and Moulines, Eric and Hutter, Marcus and Buesing, Lars and Munos, Remi},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {7654--7664},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/mesnard21a/mesnard21a.pdf},
  url       = {https://proceedings.mlr.press/v139/mesnard21a.html},
  abstract  = {Credit assignment in reinforcement learning is the problem of measuring an action’s influence on future rewards. In particular, this requires separating skill from luck, i.e. disentangling the effect of an action on rewards from that of external factors and subsequent actions. To achieve this, we adapt the notion of counterfactuals from causality theory to a model-free RL setup. The key idea is to condition value functions on future events, by learning to extract relevant information from a trajectory. We formulate a family of policy gradient algorithms that use these future-conditional value functions as baselines or critics, and show that they are provably low variance. To avoid the potential bias from conditioning on future information, we constrain the hindsight information to not contain information about the agent’s actions. We demonstrate the efficacy and validity of our algorithm on a number of illustrative and challenging problems.}
}
Endnote
%0 Conference Paper
%T Counterfactual Credit Assignment in Model-Free Reinforcement Learning
%A Thomas Mesnard
%A Theophane Weber
%A Fabio Viola
%A Shantanu Thakoor
%A Alaa Saade
%A Anna Harutyunyan
%A Will Dabney
%A Thomas S Stepleton
%A Nicolas Heess
%A Arthur Guez
%A Eric Moulines
%A Marcus Hutter
%A Lars Buesing
%A Remi Munos
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-mesnard21a
%I PMLR
%P 7654--7664
%U https://proceedings.mlr.press/v139/mesnard21a.html
%V 139
%X Credit assignment in reinforcement learning is the problem of measuring an action’s influence on future rewards. In particular, this requires separating skill from luck, i.e. disentangling the effect of an action on rewards from that of external factors and subsequent actions. To achieve this, we adapt the notion of counterfactuals from causality theory to a model-free RL setup. The key idea is to condition value functions on future events, by learning to extract relevant information from a trajectory. We formulate a family of policy gradient algorithms that use these future-conditional value functions as baselines or critics, and show that they are provably low variance. To avoid the potential bias from conditioning on future information, we constrain the hindsight information to not contain information about the agent’s actions. We demonstrate the efficacy and validity of our algorithm on a number of illustrative and challenging problems.
APA
Mesnard, T., Weber, T., Viola, F., Thakoor, S., Saade, A., Harutyunyan, A., Dabney, W., Stepleton, T.S., Heess, N., Guez, A., Moulines, E., Hutter, M., Buesing, L., & Munos, R. (2021). Counterfactual Credit Assignment in Model-Free Reinforcement Learning. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:7654-7664. Available from https://proceedings.mlr.press/v139/mesnard21a.html.