Cross-Sensor Touch Generation

Samanta Rodriguez, Yiming Dou, Miquel Oller, Andrew Owens, Nima Fazeli
Proceedings of The 9th Conference on Robot Learning, PMLR 305:152-167, 2025.

Abstract

Today’s visuo-tactile sensors come in many shapes and sizes, making it challenging to develop general-purpose tactile representations. This is because most models are tied to a specific sensor design. To address this challenge, we propose two approaches to cross-sensor image generation. The first is an end-to-end method that leverages paired data (Touch2Touch). The second method builds an intermediate depth representation and does not require paired data (T2D2: Touch-to-Depth-to-Touch). Both methods enable the use of sensor-specific models across multiple sensors via the cross-sensor touch generation process. Together, these models offer flexible solutions for sensor translation, depending on data availability and application needs. We demonstrate their effectiveness on downstream tasks such as cup stacking and tool insertion, where models originally designed for one sensor are successfully transferred to another using in-hand pose estimation.

Cite this Paper


BibTeX
@InProceedings{pmlr-v305-rodriguez25a,
  title     = {Cross-Sensor Touch Generation},
  author    = {Rodriguez, Samanta and Dou, Yiming and Oller, Miquel and Owens, Andrew and Fazeli, Nima},
  booktitle = {Proceedings of The 9th Conference on Robot Learning},
  pages     = {152--167},
  year      = {2025},
  editor    = {Lim, Joseph and Song, Shuran and Park, Hae-Won},
  volume    = {305},
  series    = {Proceedings of Machine Learning Research},
  month     = {27--30 Sep},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v305/main/assets/rodriguez25a/rodriguez25a.pdf},
  url       = {https://proceedings.mlr.press/v305/rodriguez25a.html},
  abstract  = {Today’s visuo-tactile sensors come in many shapes and sizes, making it challenging to develop general-purpose tactile representations. This is because most models are tied to a specific sensor design. To address this challenge, we propose two approaches to cross-sensor image generation. The first is an end-to-end method that leverages paired data (Touch2Touch). The second method builds an intermediate depth representation and does not require paired data (T2D2: Touch-to-Depth-to-Touch). Both methods enable the use of sensor-specific models across multiple sensors via the cross-sensor touch generation process. Together, these models offer flexible solutions for sensor translation, depending on data availability and application needs. We demonstrate their effectiveness on downstream tasks such as cup stacking and tool insertion, where models originally designed for one sensor are successfully transferred to another using in-hand pose estimation.}
}
Endnote
%0 Conference Paper
%T Cross-Sensor Touch Generation
%A Samanta Rodriguez
%A Yiming Dou
%A Miquel Oller
%A Andrew Owens
%A Nima Fazeli
%B Proceedings of The 9th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Joseph Lim
%E Shuran Song
%E Hae-Won Park
%F pmlr-v305-rodriguez25a
%I PMLR
%P 152--167
%U https://proceedings.mlr.press/v305/rodriguez25a.html
%V 305
%X Today’s visuo-tactile sensors come in many shapes and sizes, making it challenging to develop general-purpose tactile representations. This is because most models are tied to a specific sensor design. To address this challenge, we propose two approaches to cross-sensor image generation. The first is an end-to-end method that leverages paired data (Touch2Touch). The second method builds an intermediate depth representation and does not require paired data (T2D2: Touch-to-Depth-to-Touch). Both methods enable the use of sensor-specific models across multiple sensors via the cross-sensor touch generation process. Together, these models offer flexible solutions for sensor translation, depending on data availability and application needs. We demonstrate their effectiveness on downstream tasks such as cup stacking and tool insertion, where models originally designed for one sensor are successfully transferred to another using in-hand pose estimation.
APA
Rodriguez, S., Dou, Y., Oller, M., Owens, A. & Fazeli, N. (2025). Cross-Sensor Touch Generation. Proceedings of The 9th Conference on Robot Learning, in Proceedings of Machine Learning Research 305:152-167. Available from https://proceedings.mlr.press/v305/rodriguez25a.html.