RL, but don’t do anything I wouldn’t do

Michael K. Cohen, Marcus Hutter, Yoshua Bengio, Stuart Russell
Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, PMLR 286:821-836, 2025.

Abstract

In reinforcement learning (RL), if the agent’s reward differs from the designers’ true utility, even only rarely, the state distribution resulting from the agent’s policy can be very bad, in theory and in practice. When RL policies would devolve into undesired behavior, a common countermeasure is KL regularization to a trusted policy ("Don’t do anything I wouldn’t do"). All current cutting-edge language models are RL agents that are KL-regularized to a "base policy" that is purely predictive. Unfortunately, we demonstrate that when this base policy is a Bayesian predictive model of a trusted policy, the KL constraint is no longer reliable for controlling the behavior of an advanced RL agent. We demonstrate this theoretically using algorithmic information theory, and while systems today are too weak to exhibit this theorized failure precisely, we RL-finetune a language model and find evidence that our formal results are plausibly relevant in practice. We also propose a theoretical alternative that avoids this problem by replacing the "Don’t do anything I wouldn’t do" principle with "Don’t do anything I mightn’t do".
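The countermeasure the abstract refers to is standard KL regularization of an RL-finetuned policy toward a fixed base policy: the objective is expected reward minus a penalty proportional to the KL divergence from the base policy. The sketch below is a minimal illustration of that objective on a toy discrete action space; the action space, reward vector, and penalty coefficient beta are illustrative assumptions, not details taken from the paper.

    import torch
    import torch.nn.functional as F

    # Minimal sketch of KL-regularized RL ("don't do anything I wouldn't do"):
    # the tuned policy's expected reward is penalized by its divergence from a
    # trusted base policy. All quantities here are toy values for illustration.

    torch.manual_seed(0)
    num_actions = 5
    beta = 0.1  # strength of the KL penalty (assumed value)

    # Logits of the trusted base policy and of the policy being RL-finetuned.
    base_logits = torch.randn(num_actions)
    tuned_logits = torch.randn(num_actions, requires_grad=True)

    def kl_regularized_objective(tuned_logits, base_logits, reward, beta):
        """Expected reward under the tuned policy, minus beta * KL(tuned || base)."""
        tuned_probs = F.softmax(tuned_logits, dim=-1)
        log_tuned = F.log_softmax(tuned_logits, dim=-1)
        log_base = F.log_softmax(base_logits, dim=-1)
        expected_reward = (tuned_probs * reward).sum()
        kl = (tuned_probs * (log_tuned - log_base)).sum()  # KL(tuned || base)
        return expected_reward - beta * kl

    # A toy reward that favours one action; a misspecified reward plays this
    # role in the setting the abstract describes.
    reward = torch.tensor([0.0, 0.0, 1.0, 0.0, 0.0])

    objective = kl_regularized_objective(tuned_logits, base_logits, reward, beta)
    objective.backward()  # gradients an optimizer would use to update tuned_logits
    print(float(objective))

A larger beta keeps the tuned policy closer to the base policy; the paper's argument concerns why this constraint can fail to control behavior when the base policy is itself a Bayesian predictive model of a trusted policy.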

Cite this Paper


BibTeX
@InProceedings{pmlr-v286-cohen25a,
  title     = {RL, but don’t do anything I wouldn’t do},
  author    = {Cohen, Michael K. and Hutter, Marcus and Bengio, Yoshua and Russell, Stuart},
  booktitle = {Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence},
  pages     = {821--836},
  year      = {2025},
  editor    = {Chiappa, Silvia and Magliacane, Sara},
  volume    = {286},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--25 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v286/main/assets/cohen25a/cohen25a.pdf},
  url       = {https://proceedings.mlr.press/v286/cohen25a.html},
  abstract  = {In reinforcement learning (RL), if the agent’s reward differs from the designers’ true utility, even only rarely, the state distribution resulting from the agent’s policy can be very bad, in theory and in practice. When RL policies would devolve into undesired behavior, a common countermeasure is KL regularization to a trusted policy ("Don’t do anything I wouldn’t do"). All current cutting-edge language models are RL agents that are KL-regularized to a "base policy" that is purely predictive. Unfortunately, we demonstrate that when this base policy is a Bayesian predictive model of a trusted policy, the KL constraint is no longer reliable for controlling the behavior of an advanced RL agent. We demonstrate this theoretically using algorithmic information theory, and while systems today are too weak to exhibit this theorized failure precisely, we RL-finetune a language model and find evidence that our formal results are plausibly relevant in practice. We also propose a theoretical alternative that avoids this problem by replacing the "Don’t do anything I wouldn’t do" principle with "Don’t do anything I mightn’t do".}
}
Endnote
%0 Conference Paper
%T RL, but don’t do anything I wouldn’t do
%A Michael K. Cohen
%A Marcus Hutter
%A Yoshua Bengio
%A Stuart Russell
%B Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2025
%E Silvia Chiappa
%E Sara Magliacane
%F pmlr-v286-cohen25a
%I PMLR
%P 821--836
%U https://proceedings.mlr.press/v286/cohen25a.html
%V 286
%X In reinforcement learning (RL), if the agent’s reward differs from the designers’ true utility, even only rarely, the state distribution resulting from the agent’s policy can be very bad, in theory and in practice. When RL policies would devolve into undesired behavior, a common countermeasure is KL regularization to a trusted policy ("Don’t do anything I wouldn’t do"). All current cutting-edge language models are RL agents that are KL-regularized to a "base policy" that is purely predictive. Unfortunately, we demonstrate that when this base policy is a Bayesian predictive model of a trusted policy, the KL constraint is no longer reliable for controlling the behavior of an advanced RL agent. We demonstrate this theoretically using algorithmic information theory, and while systems today are too weak to exhibit this theorized failure precisely, we RL-finetune a language model and find evidence that our formal results are plausibly relevant in practice. We also propose a theoretical alternative that avoids this problem by replacing the "Don’t do anything I wouldn’t do" principle with "Don’t do anything I mightn’t do".
APA
Cohen, M.K., Hutter, M., Bengio, Y. & Russell, S. (2025). RL, but don’t do anything I wouldn’t do. Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 286:821-836. Available from https://proceedings.mlr.press/v286/cohen25a.html.
