Generalization with Lossy Affordances: Leveraging Broad Offline Data for Learning Visuomotor Tasks

Kuan Fang, Patrick Yin, Ashvin Nair, Homer Rich Walke, Gengchen Yan, Sergey Levine
Proceedings of The 6th Conference on Robot Learning, PMLR 205:106-117, 2023.

Abstract

The use of broad datasets has proven crucial for generalization in a wide range of fields. However, how to effectively make use of diverse multi-task data for novel downstream tasks remains a major challenge in reinforcement learning and robotics. To tackle this challenge, we introduce a framework that acquires goal-conditioned policies for unseen temporally extended tasks via offline reinforcement learning on broad data, in combination with online fine-tuning guided by subgoals in a learned lossy representation space. When faced with a novel task goal, our framework uses an affordance model to plan a sequence of lossy representations as subgoals that decomposes the original task into easier problems. Learned from the broad prior data, the lossy representation emphasizes task-relevant information about states and goals while abstracting away redundant contexts that hinder generalization. It thus enables subgoal planning for unseen tasks, provides a compact input to the policy, and facilitates reward shaping during fine-tuning. We show that our framework can be pre-trained on large-scale datasets of robot experience from prior work and efficiently fine-tuned for novel tasks, entirely from visual inputs without any manual reward engineering.

Cite this Paper


BibTeX
@InProceedings{pmlr-v205-fang23a,
  title     = {Generalization with Lossy Affordances: Leveraging Broad Offline Data for Learning Visuomotor Tasks},
  author    = {Fang, Kuan and Yin, Patrick and Nair, Ashvin and Walke, Homer Rich and Yan, Gengchen and Levine, Sergey},
  booktitle = {Proceedings of The 6th Conference on Robot Learning},
  pages     = {106--117},
  year      = {2023},
  editor    = {Liu, Karen and Kulic, Dana and Ichnowski, Jeff},
  volume    = {205},
  series    = {Proceedings of Machine Learning Research},
  month     = {14--18 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v205/fang23a/fang23a.pdf},
  url       = {https://proceedings.mlr.press/v205/fang23a.html},
  abstract  = {The use of broad datasets has proven to be crucial for generalization for a wide range of fields. However, how to effectively make use of diverse multi-task data for novel downstream tasks still remains a grand challenge in reinforcement learning and robotics. To tackle this challenge, we introduce a framework that acquires goal-conditioned policies for unseen temporally extended tasks via offline reinforcement learning on broad data, in combination with online fine-tuning guided by subgoals in a learned lossy representation space. When faced with a novel task goal, our framework uses an affordance model to plan a sequence of lossy representations as subgoals that decomposes the original task into easier problems. Learned from the broad prior data, the lossy representation emphasizes task-relevant information about states and goals while abstracting away redundant contexts that hinder generalization. It thus enables subgoal planning for unseen tasks, provides a compact input to the policy, and facilitates reward shaping during fine-tuning. We show that our framework can be pre-trained on large-scale datasets of robot experience from prior work and efficiently fine-tuned for novel tasks, entirely from visual inputs without any manual reward engineering.}
}
Endnote
%0 Conference Paper
%T Generalization with Lossy Affordances: Leveraging Broad Offline Data for Learning Visuomotor Tasks
%A Kuan Fang
%A Patrick Yin
%A Ashvin Nair
%A Homer Rich Walke
%A Gengchen Yan
%A Sergey Levine
%B Proceedings of The 6th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Karen Liu
%E Dana Kulic
%E Jeff Ichnowski
%F pmlr-v205-fang23a
%I PMLR
%P 106--117
%U https://proceedings.mlr.press/v205/fang23a.html
%V 205
%X The use of broad datasets has proven to be crucial for generalization for a wide range of fields. However, how to effectively make use of diverse multi-task data for novel downstream tasks still remains a grand challenge in reinforcement learning and robotics. To tackle this challenge, we introduce a framework that acquires goal-conditioned policies for unseen temporally extended tasks via offline reinforcement learning on broad data, in combination with online fine-tuning guided by subgoals in a learned lossy representation space. When faced with a novel task goal, our framework uses an affordance model to plan a sequence of lossy representations as subgoals that decomposes the original task into easier problems. Learned from the broad prior data, the lossy representation emphasizes task-relevant information about states and goals while abstracting away redundant contexts that hinder generalization. It thus enables subgoal planning for unseen tasks, provides a compact input to the policy, and facilitates reward shaping during fine-tuning. We show that our framework can be pre-trained on large-scale datasets of robot experience from prior work and efficiently fine-tuned for novel tasks, entirely from visual inputs without any manual reward engineering.
APA
Fang, K., Yin, P., Nair, A., Walke, H. R., Yan, G., & Levine, S. (2023). Generalization with Lossy Affordances: Leveraging Broad Offline Data for Learning Visuomotor Tasks. Proceedings of The 6th Conference on Robot Learning, in Proceedings of Machine Learning Research 205:106-117. Available from https://proceedings.mlr.press/v205/fang23a.html.