Balancing Learning Speed and Stability in Policy Gradient via Adaptive Exploration
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:1188-1199, 2020.
Abstract
In many Reinforcement Learning (RL) applications, the goal is to find an optimal deterministic policy. However, most RL algorithms require the policy to be stochastic in order to avoid instabilities and perform a sufficient amount of exploration. Adjusting the level of stochasticity during the learning process is non-trivial, as it is difficult both to assess whether the costs of random exploration will be repaid in the long run and to contain the risk of instability. We study this problem in the context of policy gradients (PG) with Gaussian policies. Using tools from the safe PG literature, we design a surrogate objective for the policy variance that captures the effects this parameter has on the learning speed and on the quality of the final solution. Furthermore, we provide a way to optimize this objective that guarantees stable improvement of the original performance measure. We evaluate the proposed methods on simulated continuous control tasks.
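To make the role of the exploration parameter concrete, the sketch below shows a standard linear-Gaussian policy a ~ N(theta^T s, sigma^2) and a plain REINFORCE-style update for both the mean parameters and the (log) standard deviation. This is a minimal illustration of the setting the abstract describes, not the paper's adaptive-exploration algorithm; the learning rate and the trajectory format are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (illustrative only, not the paper's method): a 1-D
# linear-Gaussian policy a ~ N(theta^T s, sigma^2). The standard deviation
# sigma is the stochasticity level whose adaptation the paper studies.

rng = np.random.default_rng(0)

def sample_action(theta, log_sigma, s):
    """Draw an action from the Gaussian policy."""
    mu = theta @ s
    sigma = np.exp(log_sigma)
    return mu + sigma * rng.standard_normal()

def log_prob_grads(theta, log_sigma, s, a):
    """Gradients of log N(a; theta^T s, sigma^2) w.r.t. theta and log sigma."""
    mu = theta @ s
    sigma = np.exp(log_sigma)
    z = (a - mu) / sigma
    grad_theta = (z / sigma) * s      # d log pi / d theta
    grad_log_sigma = z ** 2 - 1.0     # d log pi / d log sigma
    return grad_theta, grad_log_sigma

def reinforce_step(theta, log_sigma, trajectory, lr=1e-2):
    """One REINFORCE update from a list of (state, action, return) tuples."""
    g_theta = np.zeros_like(theta)
    g_log_sigma = 0.0
    for s, a, G in trajectory:
        gt, gs = log_prob_grads(theta, log_sigma, s, a)
        g_theta += G * gt
        g_log_sigma += G * gs
    n = len(trajectory)
    # Gradient ascent on expected return: a larger sigma explores more but
    # also adds variance to the gradient estimate, which is precisely the
    # speed-vs-stability trade-off the paper's surrogate objective targets.
    return theta + lr * g_theta / n, log_sigma + lr * g_log_sigma / n
```

In this naive version, sigma is updated by the same noisy gradient as theta; the paper instead optimizes a dedicated surrogate objective for the variance with guarantees of monotonic improvement of the true performance measure.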