A Theoretical Justification for Asymmetric Actor-Critic Algorithms

Gaspard Lambrechts, Damien Ernst, Aditya Mahajan
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:32375-32405, 2025.

Abstract

In reinforcement learning for partially observable environments, many successful algorithms have been developed within the asymmetric learning paradigm. This paradigm leverages additional state information available at training time for faster learning. Although the proposed learning objectives are usually theoretically sound, these methods still lack a precise theoretical justification for their potential benefits. We propose such a justification for asymmetric actor-critic algorithms with linear function approximators by adapting a finite-time convergence analysis to this setting. The resulting finite-time bound reveals that the asymmetric critic eliminates error terms arising from aliasing in the agent state.
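The analysis concerns asymmetric actor-critic methods with linear function approximation, in which the critic is trained on features of the true state (available only at training time) while the actor conditions solely on the agent state. As a rough illustration, and not the paper's exact algorithm, the sketch below shows a one-step TD(0) asymmetric actor-critic update with a linear critic and a linear-softmax actor; all dimensions, feature maps, step sizes, and names are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: the critic sees features of the true state s,
# while the actor only sees features of the agent state z (e.g., an
# observation-based summary). All of these are illustrative choices.
n_state_feat, n_agent_feat, n_actions = 8, 4, 3

w = np.zeros(n_state_feat)                   # linear critic: V(s) ~ w . phi(s)
theta = np.zeros((n_actions, n_agent_feat))  # linear-softmax actor over agent-state features

def policy(z_feat):
    """Softmax policy pi(a | z) with linear preferences over agent-state features."""
    prefs = theta @ z_feat
    prefs -= prefs.max()                     # numerical stability
    p = np.exp(prefs)
    return p / p.sum()

def actor_critic_step(s_feat, z_feat, a, r, s_feat_next, done,
                      alpha_w=0.05, alpha_theta=0.01, gamma=0.99):
    """One asymmetric TD(0) actor-critic update.

    The TD error uses the state-based critic, while the policy gradient is
    taken with respect to the agent-state policy parameters.
    """
    global w, theta
    v = w @ s_feat
    v_next = 0.0 if done else w @ s_feat_next
    delta = r + gamma * v_next - v           # TD error from the asymmetric (state-based) critic

    w += alpha_w * delta * s_feat            # semi-gradient critic update

    p = policy(z_feat)
    grad_log = -np.outer(p, z_feat)          # grad of log pi(a|z) for a linear-softmax policy
    grad_log[a] += z_feat
    theta += alpha_theta * delta * grad_log  # actor update driven by the asymmetric critic

# Illustrative single update with random feature vectors standing in for
# real state and agent-state featurizations.
s_feat = rng.normal(size=n_state_feat)
s_feat_next = rng.normal(size=n_state_feat)
z_feat = rng.normal(size=n_agent_feat)
a = rng.choice(n_actions, p=policy(z_feat))
actor_critic_step(s_feat, z_feat, a, r=1.0, s_feat_next=s_feat_next, done=False)
```

The point of the asymmetry is visible in `delta`: the bootstrapped value estimates are computed from true-state features, so the critic is not affected by aliasing in the agent state, which is the error source the paper's finite-time bound shows is eliminated.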

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-lambrechts25a,
  title     = {A Theoretical Justification for Asymmetric Actor-Critic Algorithms},
  author    = {Lambrechts, Gaspard and Ernst, Damien and Mahajan, Aditya},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {32375--32405},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/lambrechts25a/lambrechts25a.pdf},
  url       = {https://proceedings.mlr.press/v267/lambrechts25a.html},
  abstract  = {In reinforcement learning for partially observable environments, many successful algorithms have been developed within the asymmetric learning paradigm. This paradigm leverages additional state information available at training time for faster learning. Although the proposed learning objectives are usually theoretically sound, these methods still lack a precise theoretical justification for their potential benefits. We propose such a justification for asymmetric actor-critic algorithms with linear function approximators by adapting a finite-time convergence analysis to this setting. The resulting finite-time bound reveals that the asymmetric critic eliminates error terms arising from aliasing in the agent state.}
}
Endnote
%0 Conference Paper
%T A Theoretical Justification for Asymmetric Actor-Critic Algorithms
%A Gaspard Lambrechts
%A Damien Ernst
%A Aditya Mahajan
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-lambrechts25a
%I PMLR
%P 32375--32405
%U https://proceedings.mlr.press/v267/lambrechts25a.html
%V 267
%X In reinforcement learning for partially observable environments, many successful algorithms have been developed within the asymmetric learning paradigm. This paradigm leverages additional state information available at training time for faster learning. Although the proposed learning objectives are usually theoretically sound, these methods still lack a precise theoretical justification for their potential benefits. We propose such a justification for asymmetric actor-critic algorithms with linear function approximators by adapting a finite-time convergence analysis to this setting. The resulting finite-time bound reveals that the asymmetric critic eliminates error terms arising from aliasing in the agent state.
APA
Lambrechts, G., Ernst, D. & Mahajan, A. (2025). A Theoretical Justification for Asymmetric Actor-Critic Algorithms. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:32375-32405. Available from https://proceedings.mlr.press/v267/lambrechts25a.html.