Leveraging exploration in off-policy algorithms via normalizing flows
Proceedings of the Conference on Robot Learning, PMLR 100:430-444, 2020.
Abstract
The ability to discover approximately optimal policies in domains with sparse rewards is crucial to applying reinforcement learning (RL) in many real-world scenarios. Approaches such as neural density models and continuous exploration (e.g., Go-Explore) have been proposed to maintain the high exploration rate necessary to find high-performing and generalizable policies. Soft actor-critic (SAC) is another method for improving exploration that combines efficient learning via off-policy updates with maximization of the policy entropy. In this work, we extend SAC to a richer class of probability distributions (e.g., multimodal) through normalizing flows (NF) and show that this significantly improves performance by accelerating the discovery of good policies while using much smaller policy representations. Our approach, which we call SAC-NF, is a simple, efficient, easy-to-implement modification of SAC that improves on it across continuous control benchmarks such as the MuJoCo and PyBullet Roboschool domains. Finally, SAC-NF achieves this while being significantly more parameter-efficient, using as few as 5.5% of the parameters of an equivalent SAC model.
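To make the idea concrete, below is a minimal sketch (not the authors' code) of a normalizing-flow policy of the kind described in the abstract: a diagonal-Gaussian base policy whose samples are pushed through a stack of flow layers, with the log-density corrected by the flows' log-determinant Jacobians so it can be used in SAC's entropy term. It assumes PyTorch and planar flows (Rezende & Mohamed, 2015) purely for illustration; the paper's exact flow family, network sizes, and action squashing may differ, and all class and parameter names here are hypothetical.

```python
# Hypothetical sketch of a normalizing-flow policy for SAC (not the paper's code).
# Assumes planar flows on top of a diagonal-Gaussian base; action bounding is omitted.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PlanarFlow(nn.Module):
    """One planar flow layer: f(z) = z + u * tanh(w^T z + b)."""

    def __init__(self, dim):
        super().__init__()
        self.u = nn.Parameter(torch.randn(dim) * 0.01)
        self.w = nn.Parameter(torch.randn(dim) * 0.01)
        self.b = nn.Parameter(torch.zeros(1))

    def forward(self, z):
        # Constrain u so that w^T u_hat >= -1, which keeps f invertible.
        wu = (self.w * self.u).sum()
        u_hat = self.u + (F.softplus(wu) - 1.0 - wu) * self.w / (self.w.pow(2).sum() + 1e-8)
        lin = z @ self.w + self.b                                  # (batch,)
        f_z = z + u_hat * torch.tanh(lin).unsqueeze(-1)            # (batch, dim)
        # log|det df/dz| = log|1 + u_hat^T psi(z)|, psi(z) = (1 - tanh^2(lin)) w
        psi = (1.0 - torch.tanh(lin) ** 2).unsqueeze(-1) * self.w
        log_det = torch.log(torch.abs(1.0 + psi @ u_hat) + 1e-8)
        return f_z, log_det


class FlowPolicy(nn.Module):
    """Gaussian base policy followed by a stack of flow layers (hypothetical sizes)."""

    def __init__(self, obs_dim, act_dim, hidden=64, n_flows=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, act_dim)
        self.log_std = nn.Linear(hidden, act_dim)
        self.flows = nn.ModuleList(PlanarFlow(act_dim) for _ in range(n_flows))

    def forward(self, obs):
        h = self.net(obs)
        mu, log_std = self.mu(h), self.log_std(h).clamp(-5, 2)
        base = torch.distributions.Normal(mu, log_std.exp())
        z = base.rsample()                              # reparameterized sample
        log_prob = base.log_prob(z).sum(-1)
        for flow in self.flows:                         # push the sample through the flows
            z, log_det = flow(z)
            log_prob = log_prob - log_det               # change-of-variables correction
        return z, log_prob                              # action and its log-density


if __name__ == "__main__":
    policy = FlowPolicy(obs_dim=8, act_dim=2)
    actions, log_probs = policy(torch.randn(4, 8))
    print(actions.shape, log_probs.shape)               # torch.Size([4, 2]) torch.Size([4])
```

Because the flow layers add only a handful of parameters per action dimension, the base network can be kept small, which is one plausible reading of the abstract's claim about much smaller policy representations.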