SLAP: Spatial-Language Attention Policies

Priyam Parashar, Vidhi Jain, Xiaohan Zhang, Jay Vakil, Sam Powers, Yonatan Bisk, Chris Paxton
Proceedings of The 7th Conference on Robot Learning, PMLR 229:3571-3596, 2023.

Abstract

Despite great strides in language-guided manipulation, existing work has been constrained to table-top settings. Table-top settings allow for perfect and consistent camera angles, properties that do not hold in mobile manipulation. Task plans that involve moving around the environment must be robust to egocentric views and to changes in the plane and angle of grasp. A further challenge is ensuring all of this holds while still learning skills efficiently from limited data. We propose Spatial-Language Attention Policies (SLAP) as a solution. SLAP uses three-dimensional tokens as the input representation to train a single multi-task, language-conditioned action prediction policy. Our method shows an 80% success rate in the real world across eight tasks with a single model, and a 47.5% success rate when unseen clutter and unseen object configurations are introduced, even with only a handful of examples per task. This represents an improvement of 30% over prior work (20% given unseen distractors and configurations). We also see a 4x improvement over the baseline in a mobile manipulation setting. In addition, we show how SLAP's robustness allows us to execute task plans from open-vocabulary instructions using a large language model for multi-step mobile manipulation. For videos, see the website: https://robotslap.github.io
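
To make the abstract's one-line description concrete, below is a minimal conceptual sketch of a spatial-language attention policy: 3D point tokens and a tokenized language instruction are fused by a transformer, which scores each point as a candidate interaction point and regresses a relative end-effector action. All module names, shapes, and hyperparameters are illustrative assumptions for this sketch, not the authors' implementation (see the paper and website for the actual architecture).

# Minimal conceptual sketch (assumptions, not the authors' code): fuse 3D point
# tokens with a language instruction, score interaction points, predict an action.
import torch
import torch.nn as nn


class SpatialLanguagePolicySketch(nn.Module):
    def __init__(self, point_feat_dim=3, vocab_size=1000, d_model=128,
                 n_heads=4, n_layers=2, action_dim=7):
        super().__init__()
        # Project per-point features (e.g. RGB) plus xyz coordinates into 3D tokens.
        self.point_proj = nn.Linear(point_feat_dim + 3, d_model)
        # Embed the tokenized instruction (hypothetical tokenizer producing int ids).
        self.lang_embed = nn.Embedding(vocab_size, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        # Head 1: one logit per point token -> distribution over interaction points.
        self.interaction_head = nn.Linear(d_model, 1)
        # Head 2: relative action (e.g. offset, orientation, gripper) from a pooled token.
        self.action_head = nn.Linear(d_model, action_dim)

    def forward(self, xyz, point_feats, lang_tokens):
        # xyz: (B, N, 3); point_feats: (B, N, F); lang_tokens: (B, L) int ids
        pts = self.point_proj(torch.cat([xyz, point_feats], dim=-1))   # (B, N, D)
        lang = self.lang_embed(lang_tokens)                            # (B, L, D)
        fused = self.encoder(torch.cat([lang, pts], dim=1))            # (B, L+N, D)
        point_tokens = fused[:, lang.shape[1]:]                        # (B, N, D)
        interaction_logits = self.interaction_head(point_tokens).squeeze(-1)  # (B, N)
        action = self.action_head(fused.mean(dim=1))                   # (B, action_dim)
        return interaction_logits, action


# Usage sketch: pick the highest-scoring point as the interaction point, then
# apply the predicted relative action around it.
if __name__ == "__main__":
    model = SpatialLanguagePolicySketch()
    xyz = torch.randn(1, 512, 3)
    feats = torch.randn(1, 512, 3)                 # e.g. RGB per point
    instruction = torch.randint(0, 1000, (1, 8))   # hypothetical token ids
    logits, action = model(xyz, feats, instruction)
    interaction_point = xyz[0, logits[0].argmax()]
    print(interaction_point.shape, action.shape)

Because the tokens carry explicit 3D positions rather than fixed camera-frame pixels, a policy of this form can in principle attend to the same spatial structure from different egocentric viewpoints, which is the property the abstract highlights for mobile manipulation.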

Cite this Paper


BibTeX
@InProceedings{pmlr-v229-parashar23a,
  title     = {SLAP: Spatial-Language Attention Policies},
  author    = {Parashar, Priyam and Jain, Vidhi and Zhang, Xiaohan and Vakil, Jay and Powers, Sam and Bisk, Yonatan and Paxton, Chris},
  booktitle = {Proceedings of The 7th Conference on Robot Learning},
  pages     = {3571--3596},
  year      = {2023},
  editor    = {Tan, Jie and Toussaint, Marc and Darvish, Kourosh},
  volume    = {229},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v229/parashar23a/parashar23a.pdf},
  url       = {https://proceedings.mlr.press/v229/parashar23a.html},
  abstract  = {Despite great strides in language-guided manipulation, existing work has been constrained to table-top settings. Table-top settings allow for perfect and consistent camera angles, properties that do not hold in mobile manipulation. Task plans that involve moving around the environment must be robust to egocentric views and to changes in the plane and angle of grasp. A further challenge is ensuring all of this holds while still learning skills efficiently from limited data. We propose Spatial-Language Attention Policies (SLAP) as a solution. SLAP uses three-dimensional tokens as the input representation to train a single multi-task, language-conditioned action prediction policy. Our method shows an 80% success rate in the real world across eight tasks with a single model, and a 47.5% success rate when unseen clutter and unseen object configurations are introduced, even with only a handful of examples per task. This represents an improvement of 30% over prior work (20% given unseen distractors and configurations). We also see a 4x improvement over the baseline in a mobile manipulation setting. In addition, we show how SLAP's robustness allows us to execute task plans from open-vocabulary instructions using a large language model for multi-step mobile manipulation. For videos, see the website: https://robotslap.github.io}
}
Endnote
%0 Conference Paper
%T SLAP: Spatial-Language Attention Policies
%A Priyam Parashar
%A Vidhi Jain
%A Xiaohan Zhang
%A Jay Vakil
%A Sam Powers
%A Yonatan Bisk
%A Chris Paxton
%B Proceedings of The 7th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Jie Tan
%E Marc Toussaint
%E Kourosh Darvish
%F pmlr-v229-parashar23a
%I PMLR
%P 3571--3596
%U https://proceedings.mlr.press/v229/parashar23a.html
%V 229
%X Despite great strides in language-guided manipulation, existing work has been constrained to table-top settings. Table-top settings allow for perfect and consistent camera angles, properties that do not hold in mobile manipulation. Task plans that involve moving around the environment must be robust to egocentric views and to changes in the plane and angle of grasp. A further challenge is ensuring all of this holds while still learning skills efficiently from limited data. We propose Spatial-Language Attention Policies (SLAP) as a solution. SLAP uses three-dimensional tokens as the input representation to train a single multi-task, language-conditioned action prediction policy. Our method shows an 80% success rate in the real world across eight tasks with a single model, and a 47.5% success rate when unseen clutter and unseen object configurations are introduced, even with only a handful of examples per task. This represents an improvement of 30% over prior work (20% given unseen distractors and configurations). We also see a 4x improvement over the baseline in a mobile manipulation setting. In addition, we show how SLAP's robustness allows us to execute task plans from open-vocabulary instructions using a large language model for multi-step mobile manipulation. For videos, see the website: https://robotslap.github.io
APA
Parashar, P., Jain, V., Zhang, X., Vakil, J., Powers, S., Bisk, Y. & Paxton, C. (2023). SLAP: Spatial-Language Attention Policies. Proceedings of The 7th Conference on Robot Learning, in Proceedings of Machine Learning Research 229:3571-3596. Available from https://proceedings.mlr.press/v229/parashar23a.html.