Visuo-Tactile Transformers for Manipulation

Yizhou Chen, Mark Van der Merwe, Andrea Sipos, Nima Fazeli
Proceedings of The 6th Conference on Robot Learning, PMLR 205:2026-2040, 2023.

Abstract

Learning representations in the joint domain of vision and touch can improve manipulation dexterity, robustness, and sample-complexity by exploiting mutual information and complementary cues. Here, we present Visuo-Tactile Transformers (VTTs), a novel multimodal representation learning approach suited for model-based reinforcement learning and planning. Our approach extends the Visual Transformer to handle visuo-tactile feedback. Specifically, VTT uses tactile feedback together with self and cross-modal attention to build latent heatmap representations that focus attention on important task features in the visual domain. We demonstrate the efficacy of VTT for representation learning with a comparative evaluation against baselines on four simulated robot tasks and one real world block pushing task. We conduct an ablation study over the components of VTT to highlight the importance of cross-modality in representation learning for robotic manipulation.
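As a rough illustration of the self- and cross-modal attention the abstract describes, the sketch below fuses visual patch tokens with tactile tokens and pools them into a single latent. It is a minimal PyTorch sketch under assumed names and dimensions (VisuoTactileFusion, embed_dim, the token shapes), not the authors' implementation, and the returned attention weights only stand in for the latent heatmaps mentioned in the paper.

import torch
import torch.nn as nn

class VisuoTactileFusion(nn.Module):
    """Illustrative visuo-tactile attention block (all names and sizes are
    assumptions, not the paper's code): visual patch tokens and tactile
    tokens share self-attention, then visual queries attend to tactile
    keys/values, and the result is pooled into one latent vector."""

    def __init__(self, embed_dim=128, num_heads=4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(embed_dim)
        self.norm2 = nn.LayerNorm(embed_dim)

    def forward(self, visual_tokens, tactile_tokens):
        # visual_tokens: (B, N_patches, D); tactile_tokens: (B, N_taxels, D)
        tokens = torch.cat([visual_tokens, tactile_tokens], dim=1)
        # Self-attention over the concatenated visuo-tactile token sequence.
        h, _ = self.self_attn(tokens, tokens, tokens)
        tokens = self.norm1(tokens + h)
        # Cross-modal attention: visual queries attend to tactile keys/values,
        # so touch can shift where attention lands in the visual domain.
        vis = tokens[:, :visual_tokens.size(1)]
        tac = tokens[:, visual_tokens.size(1):]
        vis2tac, attn_map = self.cross_attn(vis, tac, tac)
        vis = self.norm2(vis + vis2tac)
        # Mean-pool into a single latent that a model-based RL agent or
        # planner could consume; attn_map gives per-patch weights over
        # tactile tokens, a stand-in for the paper's attention heatmaps.
        latent = torch.cat([vis, tac], dim=1).mean(dim=1)
        return latent, attn_map

For example, with visual_tokens of shape (1, 64, 128) and tactile_tokens of shape (1, 16, 128), the module returns a (1, 128) latent and a (1, 64, 16) attention map over tactile tokens for each visual patch.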

Cite this Paper


BibTeX
@InProceedings{pmlr-v205-chen23d,
  title     = {Visuo-Tactile Transformers for Manipulation},
  author    = {Chen, Yizhou and Merwe, Mark Van der and Sipos, Andrea and Fazeli, Nima},
  booktitle = {Proceedings of The 6th Conference on Robot Learning},
  pages     = {2026--2040},
  year      = {2023},
  editor    = {Liu, Karen and Kulic, Dana and Ichnowski, Jeff},
  volume    = {205},
  series    = {Proceedings of Machine Learning Research},
  month     = {14--18 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v205/chen23d/chen23d.pdf},
  url       = {https://proceedings.mlr.press/v205/chen23d.html},
  abstract  = {Learning representations in the joint domain of vision and touch can improve manipulation dexterity, robustness, and sample-complexity by exploiting mutual information and complementary cues. Here, we present Visuo-Tactile Transformers (VTTs), a novel multimodal representation learning approach suited for model-based reinforcement learning and planning. Our approach extends the Visual Transformer to handle visuo-tactile feedback. Specifically, VTT uses tactile feedback together with self and cross-modal attention to build latent heatmap representations that focus attention on important task features in the visual domain. We demonstrate the efficacy of VTT for representation learning with a comparative evaluation against baselines on four simulated robot tasks and one real world block pushing task. We conduct an ablation study over the components of VTT to highlight the importance of cross-modality in representation learning for robotic manipulation.}
}
Endnote
%0 Conference Paper
%T Visuo-Tactile Transformers for Manipulation
%A Yizhou Chen
%A Mark Van der Merwe
%A Andrea Sipos
%A Nima Fazeli
%B Proceedings of The 6th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Karen Liu
%E Dana Kulic
%E Jeff Ichnowski
%F pmlr-v205-chen23d
%I PMLR
%P 2026--2040
%U https://proceedings.mlr.press/v205/chen23d.html
%V 205
%X Learning representations in the joint domain of vision and touch can improve manipulation dexterity, robustness, and sample-complexity by exploiting mutual information and complementary cues. Here, we present Visuo-Tactile Transformers (VTTs), a novel multimodal representation learning approach suited for model-based reinforcement learning and planning. Our approach extends the Visual Transformer to handle visuo-tactile feedback. Specifically, VTT uses tactile feedback together with self and cross-modal attention to build latent heatmap representations that focus attention on important task features in the visual domain. We demonstrate the efficacy of VTT for representation learning with a comparative evaluation against baselines on four simulated robot tasks and one real world block pushing task. We conduct an ablation study over the components of VTT to highlight the importance of cross-modality in representation learning for robotic manipulation.
APA
Chen, Y., Van der Merwe, M., Sipos, A., & Fazeli, N. (2023). Visuo-Tactile Transformers for Manipulation. Proceedings of The 6th Conference on Robot Learning, in Proceedings of Machine Learning Research 205:2026-2040. Available from https://proceedings.mlr.press/v205/chen23d.html.