Can Foundation Models Perform Zero-Shot Task Specification For Robot Manipulation?
Proceedings of The 4th Annual Learning for Dynamics and Control Conference, PMLR 168:893-905, 2022.
Task specification is at the core of programming autonomous robots. A low-effort modality for task specification is critical for engaging non-expert end users and for the ultimate adoption of personalized robot agents. A widely studied approach to task specification is through goals, using either compact state-space vectors or goal images from the same robot scene. The former is often not easily human-interpretable and necessitates detailed state estimation and scene understanding. The latter requires the generation of a desired goal image, which often requires a human to complete the task first, defeating the purpose of having autonomous robots. In this work, we explore alternate and more general forms of goal specification that are expected to be easier for humans to specify and use, such as images obtained from the internet, hand sketches that provide a visual description of the desired task, or simple language descriptions. As a first step towards this, we study the capabilities of large-scale pre-trained models (foundation models) for zero-shot goal specification, and find that they are surprisingly effective on a collection of simulated robot manipulation tasks and real-world datasets.
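The core idea described above (scoring a robot's current observation against a goal given as an internet image, a hand sketch, or a language description, using a pre-trained vision-language model) can be sketched as follows. This is a minimal illustration, not the paper's implementation: `embed` is a stand-in for a real model's shared image/text encoders (e.g., CLIP's image and text towers), and all function names here are illustrative assumptions.

```python
import numpy as np

def embed(x, dim=512):
    # Stand-in for a pre-trained encoder mapping an image or caption into a
    # shared embedding space. Here we deterministically hash the input to a
    # fixed random unit vector purely for demonstration purposes.
    rng = np.random.default_rng(abs(hash(x)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def goal_score(observation, goal_spec):
    # Cosine similarity between the embedded observation and the embedded
    # goal specification. The goal can be an internet image, a sketch, or
    # a language description -- anything the encoder accepts.
    return float(embed(observation) @ embed(goal_spec))

def pick_best_frame(frames, goal_spec):
    # Zero-shot goal matching: select the candidate observation that best
    # matches the goal specification, with no task-specific training.
    scores = [goal_score(f, goal_spec) for f in frames]
    return int(np.argmax(scores)), scores
```

In practice, such a similarity score can serve as a zero-shot success detector or as a dense reward signal for a manipulation policy, without requiring a goal image captured in the robot's own scene.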