Do no harm: A counterfactual approach to safe reinforcement learning
Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1675-1687, 2024.
Abstract
Reinforcement Learning (RL) for control has become increasingly popular due to its ability to learn feedback policies that can take into account complex representations of the environment and uncertainty. When safety constraints are considered, constrained optimization approaches that penalize agents for constraint violations are commonly used. In such methods, if agents are initialized in, or must visit, states where constraint violation might be inevitable, it is unclear whether or how much they should be penalized. We address this challenge by formulating a constraint on the counterfactual harm of the learned policy compared to an alternate, safe policy. In a philosophical sense, this method penalizes the learner only for constraint violations that it caused; in a practical sense, it maintains feasibility of the optimal control problem when constraint violation is inevitable. We present simulation studies on a rover with uncertain road friction and a tractor-trailer parking environment that demonstrate that our constraint formulation enables agents to learn safer policies than traditional constrained RL methods.
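To make the idea concrete, below is a minimal sketch (not the paper's algorithm) of a counterfactual-harm-style penalty: the learner is charged only for constraint cost in excess of what a hypothetical safe baseline policy would have incurred from the same state and under the same randomness. The environment, policies, and rollout horizon here are illustrative assumptions, not taken from the paper.

```python
import numpy as np


def counterfactual_harm_penalty(cost_learner, cost_safe):
    """Penalize only the excess constraint cost relative to a safe baseline.

    If the safe baseline would also have incurred the cost from this state
    (i.e., violation was inevitable), the learner is not penalized for it.
    """
    return max(0.0, cost_learner - cost_safe)


def rollout_cost(env_step, policy, state, horizon, rng):
    """Accumulate constraint cost of a policy from a given start state.

    `env_step(state, action, rng)` is a hypothetical transition function
    returning (next_state, cost, done).
    """
    total_cost = 0.0
    for _ in range(horizon):
        action = policy(state)
        state, cost, done = env_step(state, action, rng)
        total_cost += cost
        if done:
            break
    return total_cost


if __name__ == "__main__":
    # Toy 1-D example: state is distance to an obstacle; unit cost accrues
    # whenever the agent is inside the unsafe region (distance < 1).
    def env_step(state, action, rng):
        next_state = state + action + 0.1 * rng.normal()
        cost = float(next_state < 1.0)
        return next_state, cost, False

    learner_policy = lambda s: -0.5   # drives toward the obstacle
    safe_policy = lambda s: +0.5      # backs away from the obstacle

    s0 = 2.0
    # Use identically seeded noise for both rollouts so the comparison is a
    # counterfactual under the same realization of the environment randomness.
    c_learner = rollout_cost(env_step, learner_policy, s0, horizon=10,
                             rng=np.random.default_rng(0))
    c_safe = rollout_cost(env_step, safe_policy, s0, horizon=10,
                          rng=np.random.default_rng(0))
    print("counterfactual harm penalty:",
          counterfactual_harm_penalty(c_learner, c_safe))
```

In a constrained RL training loop, a penalty of this form could replace a raw constraint-violation cost, so that starting in an already-unsafe state (where even the safe policy incurs cost) does not render the constrained problem infeasible.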