S4RL: Surprisingly Simple Self-Supervision for Offline Reinforcement Learning in Robotics

Samarth Sinha, Ajay Mandlekar, Animesh Garg
Proceedings of the 5th Conference on Robot Learning, PMLR 164:907-917, 2022.

Abstract

Offline reinforcement learning proposes to learn policies from large collected datasets without interacting with the physical environment. These algorithms make it possible to learn useful skills from data and then deploy them in real-world settings where interactions may be costly or dangerous, such as autonomous driving or factories. However, offline agents cannot access the environment to collect new data and are therefore trained on a static dataset. In this paper, we study the effectiveness of performing data augmentations on the state space, examining 7 different augmentation schemes and how they behave with existing offline RL algorithms. We then combine the best-performing data augmentation scheme with a state-of-the-art Q-learning technique, and improve the function approximation of the Q-networks by smoothing out the learned state-action space. We experimentally show that using this Surprisingly Simple Self-Supervision technique in RL (S4RL), we significantly improve over the current state-of-the-art algorithms on offline robot learning environments such as MetaWorld [1] and RoboSuite [2,3], and benchmark datasets such as D4RL [4].
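
For intuition, below is a minimal sketch (in PyTorch) of what state-space augmentation during offline Q-learning can look like: states are perturbed with zero-mean Gaussian noise, one of the schemes the paper studies, and the Bellman target is averaged over several augmented copies of the next state. The network sizes, noise scale sigma, number of augmentations n_aug, and the deterministic policy callable are illustrative assumptions, not the paper's exact recipe.

import torch
import torch.nn as nn


class QNetwork(nn.Module):
    """Standard MLP critic Q(s, a); layer sizes are illustrative."""

    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))


def augment_state(state, sigma=3e-4):
    # One possible scheme: perturb the state with zero-mean Gaussian noise.
    return state + sigma * torch.randn_like(state)


def smoothed_td_loss(q_net, target_q_net, policy, batch, gamma=0.99,
                     n_aug=4, sigma=3e-4):
    """TD loss in which states are replaced by augmented copies and the
    bootstrap target is averaged over several augmented next states,
    smoothing the learned Q-function around the offline data."""
    s, a, r, s_next, done = batch             # r and done have shape [B, 1]
    with torch.no_grad():
        targets = []
        for _ in range(n_aug):
            s_next_aug = augment_state(s_next, sigma)
            a_next = policy(s_next_aug)        # assumed deterministic actor: [B, act_dim]
            targets.append(target_q_net(s_next_aug, a_next))
        target = r + gamma * (1.0 - done) * torch.stack(targets).mean(dim=0)
    q_pred = q_net(augment_state(s, sigma), a)
    return nn.functional.mse_loss(q_pred, target)

In practice this critic update would be plugged into an existing offline RL algorithm (the paper combines the augmentation with a state-of-the-art Q-learning technique), with the policy update left unchanged.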

Cite this Paper


BibTeX
@InProceedings{pmlr-v164-sinha22a,
  title     = {S4RL: Surprisingly Simple Self-Supervision for Offline Reinforcement Learning in Robotics},
  author    = {Sinha, Samarth and Mandlekar, Ajay and Garg, Animesh},
  booktitle = {Proceedings of the 5th Conference on Robot Learning},
  pages     = {907--917},
  year      = {2022},
  editor    = {Faust, Aleksandra and Hsu, David and Neumann, Gerhard},
  volume    = {164},
  series    = {Proceedings of Machine Learning Research},
  month     = {08--11 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v164/sinha22a/sinha22a.pdf},
  url       = {https://proceedings.mlr.press/v164/sinha22a.html}
}
Endnote
%0 Conference Paper
%T S4RL: Surprisingly Simple Self-Supervision for Offline Reinforcement Learning in Robotics
%A Samarth Sinha
%A Ajay Mandlekar
%A Animesh Garg
%B Proceedings of the 5th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Aleksandra Faust
%E David Hsu
%E Gerhard Neumann
%F pmlr-v164-sinha22a
%I PMLR
%P 907--917
%U https://proceedings.mlr.press/v164/sinha22a.html
%V 164
APA
Sinha, S., Mandlekar, A., & Garg, A. (2022). S4RL: Surprisingly Simple Self-Supervision for Offline Reinforcement Learning in Robotics. Proceedings of the 5th Conference on Robot Learning, in Proceedings of Machine Learning Research 164:907-917. Available from https://proceedings.mlr.press/v164/sinha22a.html.
