Learning Without Time-Based Embodiment Resets in Soft-Actor Critic

Homayoon Farrahi, A. Rupam Mahmood
Proceedings of The 4th Conference on Lifelong Learning Agents, PMLR 330:112-135, 2026.

Abstract

When creating new continuous-control reinforcement learning tasks, practitioners often accelerate the learning process by incorporating into the task several accessory components, such as breaking the environment interaction into independent episodes and frequently resetting the environment. Although they can enable the learning of complex intelligent behaviors, such task accessories can result in unnatural task setups and hinder long-term performance in the real world. In this work, we explore the challenges of learning without episode terminations and robot embodiment resets using the Soft Actor-Critic (SAC) algorithm. To learn without terminations, we present a continuing version of the SAC algorithm and show that, with simple modifications to the reward functions of existing tasks, continuing SAC can perform as well as or better than episodic SAC while reducing the sensitivity of performance to the value of the discount rate $\gamma$. On a modified Gym Reacher task, we investigate possible explanations for the failure of continuing SAC when learning without embodiment resets. Our results suggest that a slowly changing action-value function can lead to poor exploration of the state space in the SAC algorithm, resulting in failed or significantly slower learning without embodiment resets. Finally, we compare several interventions for improving exploration and recovering the lost performance when learning without embodiment resets and validate the best-performing interventions on additional simulated tasks and a real-robot vision task.
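To make the episodic/continuing distinction concrete, the sketch below shows where a continuing variant of SAC departs from the episodic algorithm in the critic's bootstrapped target: with no episode terminations, the target bootstraps through every transition rather than being cut at terminal states. This is a minimal illustration under standard SAC assumptions, not the paper's exact formulation (which also modifies task reward functions); the helper soft_q_target and its parameter values are hypothetical.

    import torch

    def soft_q_target(reward, next_q_min, next_log_prob, done,
                      gamma=0.99, alpha=0.2, continuing=True):
        # Soft value of the next state: minimum over the twin critics
        # minus the entropy term, as in standard SAC.
        soft_next_value = next_q_min - alpha * next_log_prob
        if continuing:
            # Continuing setting: no episode terminations, so every
            # transition bootstraps through to its successor state.
            return reward + gamma * soft_next_value
        # Episodic setting: the `done` mask zeroes the bootstrap
        # term at terminal transitions.
        return reward + gamma * (1.0 - done) * soft_next_value

    # Example with a small batch of transitions (hypothetical values):
    r  = torch.tensor([1.0, 0.5])
    nq = torch.tensor([10.0, 8.0])
    lp = torch.tensor([-1.2, -0.7])
    d  = torch.tensor([0.0, 1.0])
    y  = soft_q_target(r, nq, lp, d, continuing=True)

In the continuing branch the `done` signal plays no role, which is what removes the dependence on time-based episode boundaries; the discount rate $\gamma$ whose sensitivity the abstract discusses enters through this same bootstrapped term.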

Cite this Paper


BibTeX
@InProceedings{pmlr-v330-farrahi26a,
  title     = {Learning Without Time-Based Embodiment Resets in Soft-Actor Critic},
  author    = {Farrahi, Homayoon and Mahmood, A. Rupam},
  booktitle = {Proceedings of The 4th Conference on Lifelong Learning Agents},
  pages     = {112--135},
  year      = {2026},
  editor    = {Chandar, Sarath and Pascanu, Razvan and Eaton, Eric and Liu, Bing and Mahmood, Rupam and Rannen-Triki, Amal},
  volume    = {330},
  series    = {Proceedings of Machine Learning Research},
  month     = {11--14 Aug},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v330/main/assets/farrahi26a/farrahi26a.pdf},
  url       = {https://proceedings.mlr.press/v330/farrahi26a.html}
}
Endnote
%0 Conference Paper
%T Learning Without Time-Based Embodiment Resets in Soft-Actor Critic
%A Homayoon Farrahi
%A A. Rupam Mahmood
%B Proceedings of The 4th Conference on Lifelong Learning Agents
%C Proceedings of Machine Learning Research
%D 2026
%E Sarath Chandar
%E Razvan Pascanu
%E Eric Eaton
%E Bing Liu
%E Rupam Mahmood
%E Amal Rannen-Triki
%F pmlr-v330-farrahi26a
%I PMLR
%P 112--135
%U https://proceedings.mlr.press/v330/farrahi26a.html
%V 330
APA
Farrahi, H. & Mahmood, A. R. (2026). Learning Without Time-Based Embodiment Resets in Soft-Actor Critic. Proceedings of The 4th Conference on Lifelong Learning Agents, in Proceedings of Machine Learning Research 330:112-135. Available from https://proceedings.mlr.press/v330/farrahi26a.html.
