Approximate Equivariance in Reinforcement Learning

Jung Yeon Park, Sujay Bhatt, Sihan Zeng, Lawson L.S. Wong, Alec Koppel, Sumitra Ganesh, Robin Walters
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:4177-4185, 2025.

Abstract

Equivariant neural networks have shown great success in reinforcement learning, improving sample efficiency and generalization when there is symmetry in the task. However, in many problems, only approximate symmetry is present, which makes imposing exact symmetry inappropriate. Recently, approximately equivariant networks have been proposed for supervised classification and modeling physical systems. In this work, we develop approximately equivariant algorithms in reinforcement learning (RL). We define approximately equivariant MDPs and theoretically characterize the effect of approximate equivariance on the optimal Q function. We propose novel RL architectures using relaxed group and steerable convolutions and experiment on several continuous control domains and stock trading with real financial data. Our results demonstrate that the approximately equivariant network performs on par with exactly equivariant networks when exact symmetries are present, and outperforms them when the domains exhibit approximate symmetry. As an added byproduct of these techniques, we observe increased robustness to noise at test time. Our code is available at \url{https://github.com/jypark0/approx_equiv_rl}.

Cite this Paper


BibTeX
@InProceedings{pmlr-v258-park25d,
  title     = {Approximate Equivariance in Reinforcement Learning},
  author    = {Park, Jung Yeon and Bhatt, Sujay and Zeng, Sihan and Wong, Lawson L.S. and Koppel, Alec and Ganesh, Sumitra and Walters, Robin},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages     = {4177--4185},
  year      = {2025},
  editor    = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume    = {258},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--05 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/park25d/park25d.pdf},
  url       = {https://proceedings.mlr.press/v258/park25d.html},
  abstract  = {Equivariant neural networks have shown great success in reinforcement learning, improving sample efficiency and generalization when there is symmetry in the task. However, in many problems, only approximate symmetry is present, which makes imposing exact symmetry inappropriate. Recently, approximately equivariant networks have been proposed for supervised classification and modeling physical systems. In this work, we develop approximately equivariant algorithms in reinforcement learning (RL). We define approximately equivariant MDPs and theoretically characterize the effect of approximate equivariance on the optimal Q function. We propose novel RL architectures using relaxed group and steerable convolutions and experiment on several continuous control domains and stock trading with real financial data. Our results demonstrate that the approximately equivariant network performs on par with exactly equivariant networks when exact symmetries are present, and outperforms them when the domains exhibit approximate symmetry. As an added byproduct of these techniques, we observe increased robustness to noise at test time. Our code is available at \url{https://github.com/jypark0/approx_equiv_rl}.}
}
Endnote
%0 Conference Paper
%T Approximate Equivariance in Reinforcement Learning
%A Jung Yeon Park
%A Sujay Bhatt
%A Sihan Zeng
%A Lawson L.S. Wong
%A Alec Koppel
%A Sumitra Ganesh
%A Robin Walters
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-park25d
%I PMLR
%P 4177--4185
%U https://proceedings.mlr.press/v258/park25d.html
%V 258
%X Equivariant neural networks have shown great success in reinforcement learning, improving sample efficiency and generalization when there is symmetry in the task. However, in many problems, only approximate symmetry is present, which makes imposing exact symmetry inappropriate. Recently, approximately equivariant networks have been proposed for supervised classification and modeling physical systems. In this work, we develop approximately equivariant algorithms in reinforcement learning (RL). We define approximately equivariant MDPs and theoretically characterize the effect of approximate equivariance on the optimal Q function. We propose novel RL architectures using relaxed group and steerable convolutions and experiment on several continuous control domains and stock trading with real financial data. Our results demonstrate that the approximately equivariant network performs on par with exactly equivariant networks when exact symmetries are present, and outperforms them when the domains exhibit approximate symmetry. As an added byproduct of these techniques, we observe increased robustness to noise at test time. Our code is available at \url{https://github.com/jypark0/approx_equiv_rl}.
APA
Park, J.Y., Bhatt, S., Zeng, S., Wong, L.L.S., Koppel, A., Ganesh, S. & Walters, R. (2025). Approximate Equivariance in Reinforcement Learning. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:4177-4185. Available from https://proceedings.mlr.press/v258/park25d.html.