On-Robot Learning With Equivariant Models

Dian Wang, Mingxi Jia, Xupeng Zhu, Robin Walters, Robert Platt
Proceedings of The 6th Conference on Robot Learning, PMLR 205:1345-1354, 2023.

Abstract

Recently, equivariant neural network models have been shown to improve sample efficiency for tasks in computer vision and reinforcement learning. This paper explores this idea in the context of on-robot policy learning in which a policy must be learned entirely on a physical robotic system without reference to a model, a simulator, or an offline dataset. We focus on applications of Equivariant SAC to robotic manipulation and explore a number of variations of the algorithm. Ultimately, we demonstrate the ability to learn several non-trivial manipulation tasks completely through on-robot experiences in less than an hour or two of wall clock time.
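As a rough illustration of the structure such models exploit (this is a sketch, not code from the paper), the snippet below builds a small C8 rotation-equivariant convolutional encoder with the e2cnn library; an encoder of this kind could back the critic and actor of an equivariant SAC agent acting on top-down image observations. All layer sizes and names here are assumptions for illustration only.

# Minimal sketch (assumption, not the authors' code): a C8-equivariant
# convolutional encoder using e2cnn. Rotating the input image produces a
# correspondingly transformed feature map, which is the property that
# equivariant RL models exploit for sample efficiency.
import torch
from e2cnn import gspaces
from e2cnn import nn as enn

# Group of 8 planar rotations acting on 2D feature maps
r2_act = gspaces.Rot2dOnR2(N=8)

# Input: one scalar (trivial-representation) channel, e.g. a top-down depth image
in_type = enn.FieldType(r2_act, [r2_act.trivial_repr])
# Hidden features carried in regular representations of C8
hid_type = enn.FieldType(r2_act, 16 * [r2_act.regular_repr])

encoder = enn.SequentialModule(
    enn.R2Conv(in_type, hid_type, kernel_size=3, padding=1),
    enn.ReLU(hid_type, inplace=True),
)

# Wrap a raw tensor so the layers know its transformation law
obs = enn.GeometricTensor(torch.randn(1, 1, 64, 64), in_type)
features = encoder(obs)

# Equivariance check: rotating the input by 90 degrees (group element 2 of C8)
# approximately matches rotating the output, up to border effects
rotated_out = encoder(obs.transform(2)).tensor
out_rotated = features.transform(2).tensor
print((rotated_out - out_rotated).abs().max())  # close to zero

In practice the equivariant encoder would feed the Q-function and policy heads of SAC; the point of the sketch is only that the rotational symmetry of the workspace is baked into the network weights rather than learned from data.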

Cite this Paper


BibTeX
@InProceedings{pmlr-v205-wang23c,
  title     = {On-Robot Learning With Equivariant Models},
  author    = {Wang, Dian and Jia, Mingxi and Zhu, Xupeng and Walters, Robin and Platt, Robert},
  booktitle = {Proceedings of The 6th Conference on Robot Learning},
  pages     = {1345--1354},
  year      = {2023},
  editor    = {Liu, Karen and Kulic, Dana and Ichnowski, Jeff},
  volume    = {205},
  series    = {Proceedings of Machine Learning Research},
  month     = {14--18 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v205/wang23c/wang23c.pdf},
  url       = {https://proceedings.mlr.press/v205/wang23c.html},
  abstract  = {Recently, equivariant neural network models have been shown to improve sample efficiency for tasks in computer vision and reinforcement learning. This paper explores this idea in the context of on-robot policy learning in which a policy must be learned entirely on a physical robotic system without reference to a model, a simulator, or an offline dataset. We focus on applications of Equivariant SAC to robotic manipulation and explore a number of variations of the algorithm. Ultimately, we demonstrate the ability to learn several non-trivial manipulation tasks completely through on-robot experiences in less than an hour or two of wall clock time.}
}
Endnote
%0 Conference Paper
%T On-Robot Learning With Equivariant Models
%A Dian Wang
%A Mingxi Jia
%A Xupeng Zhu
%A Robin Walters
%A Robert Platt
%B Proceedings of The 6th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Karen Liu
%E Dana Kulic
%E Jeff Ichnowski
%F pmlr-v205-wang23c
%I PMLR
%P 1345--1354
%U https://proceedings.mlr.press/v205/wang23c.html
%V 205
%X Recently, equivariant neural network models have been shown to improve sample efficiency for tasks in computer vision and reinforcement learning. This paper explores this idea in the context of on-robot policy learning in which a policy must be learned entirely on a physical robotic system without reference to a model, a simulator, or an offline dataset. We focus on applications of Equivariant SAC to robotic manipulation and explore a number of variations of the algorithm. Ultimately, we demonstrate the ability to learn several non-trivial manipulation tasks completely through on-robot experiences in less than an hour or two of wall clock time.
APA
Wang, D., Jia, M., Zhu, X., Walters, R. & Platt, R. (2023). On-Robot Learning With Equivariant Models. Proceedings of The 6th Conference on Robot Learning, in Proceedings of Machine Learning Research 205:1345-1354. Available from https://proceedings.mlr.press/v205/wang23c.html.