Cross-Sensor Touch Generation
Proceedings of The 9th Conference on Robot Learning, PMLR 305:152-167, 2025.
Abstract
Today’s visuo-tactile sensors come in many shapes and sizes, making it challenging to develop general-purpose tactile representations, since most models are tied to a specific sensor design. To address this challenge, we propose two approaches to cross-sensor image generation. The first is an end-to-end method that leverages paired data (Touch2Touch). The second builds an intermediate depth representation and does not require paired data (T2D2: Touch-to-Depth-to-Touch). Both methods enable the use of sensor-specific models across multiple sensors via cross-sensor touch generation. Together, they offer flexible solutions for sensor translation depending on data availability and application needs. We demonstrate their effectiveness on downstream tasks such as cup stacking and tool insertion, where in-hand pose estimation models originally designed for one sensor are successfully transferred to another.
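To make the distinction between the two pipelines concrete, here is a minimal sketch of how the translators compose. All class and module names are hypothetical illustrations for this note, not the authors' actual code: Touch2Touch maps directly between sensors and therefore needs paired training data, while T2D2 chains two per-sensor models through a shared depth representation, so each sensor only requires its own touch-depth data.

```python
# Hypothetical sketch of the two cross-sensor translation strategies.
# Names and architectures are assumptions for illustration only.

import torch
import torch.nn as nn


class Touch2Touch(nn.Module):
    """End-to-end translator trained on paired images from two sensors."""

    def __init__(self, encoder: nn.Module, decoder: nn.Module):
        super().__init__()
        self.encoder = encoder  # source-sensor image -> shared latent
        self.decoder = decoder  # shared latent -> target-sensor image

    def forward(self, src_image: torch.Tensor) -> torch.Tensor:
        # Direct sensor-to-sensor mapping; supervision comes from
        # paired (source, target) tactile images.
        return self.decoder(self.encoder(src_image))


class T2D2(nn.Module):
    """Touch-to-Depth-to-Touch: translate via an intermediate depth map.

    No paired cross-sensor data is required; each half is trained
    independently on a single sensor's touch/depth data.
    """

    def __init__(self, touch_to_depth: nn.Module, depth_to_touch: nn.Module):
        super().__init__()
        self.touch_to_depth = touch_to_depth  # trained on the source sensor
        self.depth_to_touch = depth_to_touch  # trained on the target sensor

    def forward(self, src_image: torch.Tensor) -> torch.Tensor:
        depth = self.touch_to_depth(src_image)  # sensor-agnostic geometry
        return self.depth_to_touch(depth)       # re-render for target sensor
```

Either translator then lets a model trained on one sensor's images (e.g., an in-hand pose estimator) consume images captured by a different sensor, by translating them into the original sensor's domain first.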