Learning Without Time-Based Embodiment Resets in Soft Actor-Critic
Proceedings of The 4th Conference on Lifelong Learning Agents, PMLR 330:112-135, 2026.
Abstract
When creating new continuous-control reinforcement learning tasks, practitioners often accelerate the learning process by incorporating into the task several accessory components, such as breaking the environment interaction into independent episodes and frequently resetting the environment. Although they can enable the learning of complex intelligent behaviors, such task accessories can result in unnatural task setups and hinder long-term performance in the real world. In this work, we explore the challenges of learning without episode terminations and robot embodiment resets using the Soft Actor-Critic (SAC) algorithm. To learn without terminations, we present a continuing version of the SAC algorithm and show that, with simple modifications to the reward functions of existing tasks, continuing SAC can perform as well as or better than episodic SAC while reducing the sensitivity of performance to the value of the discount rate $\gamma$. On a modified Gym Reacher task, we investigate possible explanations for the failure of continuing SAC when learning without embodiment resets. Our results suggest that a slowly changing action-value function can lead to poor exploration of the state space in the SAC algorithm, resulting in failed or significantly slower learning without embodiment resets. Finally, we compare several interventions for improving exploration and recovering the lost performance when learning without embodiment resets, and we validate the best-performing interventions on additional simulated tasks and a real-robot vision task.
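The distinction between episodic and continuing SAC described in the abstract shows up most directly in the critic's bootstrap target: episodic training masks out the bootstrap term at terminal states, while a continuing setup has no terminations to mask. A minimal sketch of this difference, assuming standard SAC targets (function names and hyperparameter values here are illustrative, not taken from the paper):

```python
def episodic_target(r, q_next, logp_next, done, gamma=0.99, alpha=0.2):
    # Episodic SAC: the (1 - done) mask cuts off bootstrapping at
    # terminal states, so terminal transitions contribute only r.
    return r + gamma * (1.0 - done) * (q_next - alpha * logp_next)

def continuing_target(r, q_next, logp_next, gamma=0.99, alpha=0.2):
    # Continuing SAC: interaction never terminates, so the entropy-
    # regularized bootstrap term is always included in the target.
    return r + gamma * (q_next - alpha * logp_next)
```

In the episodic version, `done` flags (time-based or otherwise) shape the value estimates; removing them, as the continuing formulation does, is what makes the reward-function modifications discussed in the paper necessary.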