CLAMP: Crowdsourcing a LArge-scale in-the-wild haptic dataset with an open-source device for Multimodal robot Perception

Pranav N. Thakkar, Shubhangi Sinha, Karan Baijal, Yuhan (Anjelica) Bian, Leah Lackey, Ben Dodson, Heisen Kong, Jueun Kwon, Amber Li, Yifei Hu, alexios rekoutis, Tom Silver, Tapomayukh Bhattacharjee
Proceedings of The 9th Conference on Robot Learning, PMLR 305:941-960, 2025.

Abstract

Robust robot manipulation in unstructured environments often requires understanding object properties that extend beyond geometry, such as material or compliance—properties that can be challenging to infer using vision alone. Multimodal haptic sensing provides a promising avenue for inferring such properties, yet progress has been constrained by the lack of large, diverse, and realistic haptic datasets. In this work, we introduce the CLAMP device, a low-cost (< $200) sensorized reacher-grabber designed to collect large-scale, in-the-wild multimodal haptic data from non-expert users in everyday settings. We deployed 16 CLAMP devices to 41 participants, resulting in the CLAMP dataset, the largest open-source multimodal haptic dataset to date, comprising 12.3 million datapoints across 5357 household objects. Using this dataset, we train a haptic encoder that can infer material and compliance object properties from multimodal haptic data. We leverage this encoder to create the CLAMP model, a visuo-haptic perception model for material recognition that generalizes to novel objects and three robot embodiments with minimal finetuning. We also demonstrate the effectiveness of our model in three real-world robot manipulation tasks: sorting recyclable and non-recyclable waste, retrieving objects from a cluttered bag, and distinguishing overripe from ripe bananas. Our results show that large-scale, in-the-wild haptic data collection can unlock new capabilities for generalizable robot manipulation.
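To make the modeling pipeline described above concrete, the following is a minimal, hypothetical PyTorch sketch of a haptic encoder feeding a visuo-haptic material classifier. This is not the architecture used in the paper: the sensor channel count, embedding size, fusion scheme, class names, and number of material classes are all illustrative assumptions.

import torch
import torch.nn as nn


class HapticEncoder(nn.Module):
    # Illustrative encoder: stacks multimodal haptic streams (e.g., force,
    # pressure, IMU, audio) as channels of a 1D time series and maps them
    # to a fixed-size embedding. Channel count and layer sizes are assumptions.
    def __init__(self, in_channels: int = 6, embed_dim: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # average over time -> (batch, 64, 1)
        )
        self.proj = nn.Linear(64, embed_dim)

    def forward(self, haptics: torch.Tensor) -> torch.Tensor:
        # haptics: (batch, channels, time)
        return self.proj(self.conv(haptics).squeeze(-1))


class VisuoHapticMaterialClassifier(nn.Module):
    # Illustrative fusion head: concatenates a precomputed visual feature
    # (from any image backbone) with the haptic embedding and predicts a
    # material class. The label set size is a placeholder.
    def __init__(self, encoder: HapticEncoder, visual_dim: int = 512,
                 haptic_dim: int = 128, num_materials: int = 7):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Sequential(
            nn.Linear(visual_dim + haptic_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_materials),
        )

    def forward(self, visual_feat: torch.Tensor, haptics: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([visual_feat, self.encoder(haptics)], dim=-1)
        return self.head(fused)


if __name__ == "__main__":
    # Toy usage with random tensors standing in for real sensor data.
    model = VisuoHapticMaterialClassifier(HapticEncoder())
    vis = torch.randn(4, 512)      # placeholder image features
    hap = torch.randn(4, 6, 200)   # placeholder haptic streams: 6 channels, 200 steps
    print(model(vis, hap).shape)   # -> torch.Size([4, 7])

In a setup like this, the haptic encoder could be pretrained on the dataset's material and compliance labels and then combined with visual features and lightly finetuned, mirroring the two-stage recipe the abstract describes; the actual training procedure is specified in the paper, not in this sketch.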

Cite this Paper


BibTeX
@InProceedings{pmlr-v305-thakkar25a,
  title     = {CLAMP: Crowdsourcing a LArge-scale in-the-wild haptic dataset with an open-source device for Multimodal robot Perception},
  author    = {Thakkar, Pranav N. and Sinha, Shubhangi and Baijal, Karan and Bian, Yuhan (Anjelica) and Lackey, Leah and Dodson, Ben and Kong, Heisen and Kwon, Jueun and Li, Amber and Hu, Yifei and rekoutis, alexios and Silver, Tom and Bhattacharjee, Tapomayukh},
  booktitle = {Proceedings of The 9th Conference on Robot Learning},
  pages     = {941--960},
  year      = {2025},
  editor    = {Lim, Joseph and Song, Shuran and Park, Hae-Won},
  volume    = {305},
  series    = {Proceedings of Machine Learning Research},
  month     = {27--30 Sep},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v305/main/assets/thakkar25a/thakkar25a.pdf},
  url       = {https://proceedings.mlr.press/v305/thakkar25a.html},
  abstract  = {Robust robot manipulation in unstructured environments often requires understanding object properties that extend beyond geometry, such as material or compliance—properties that can be challenging to infer using vision alone. Multimodal haptic sensing provides a promising avenue for inferring such properties, yet progress has been constrained by the lack of large, diverse, and realistic haptic datasets. In this work, we introduce the CLAMP device, a low-cost (< $200) sensorized reacher-grabber designed to collect large-scale, in-the-wild multimodal haptic data from non-expert users in everyday settings. We deployed 16 CLAMP devices to 41 participants, resulting in the CLAMP dataset, the largest open-source multimodal haptic dataset to date, comprising 12.3 million datapoints across 5357 household objects. Using this dataset, we train a haptic encoder that can infer material and compliance object properties from multimodal haptic data. We leverage this encoder to create the CLAMP model, a visuo-haptic perception model for material recognition that generalizes to novel objects and three robot embodiments with minimal finetuning. We also demonstrate the effectiveness of our model in three real-world robot manipulation tasks: sorting recyclable and non-recyclable waste, retrieving objects from a cluttered bag, and distinguishing overripe from ripe bananas. Our results show that large-scale, in-the-wild haptic data collection can unlock new capabilities for generalizable robot manipulation.}
}
APA
Thakkar, P.N., Sinha, S., Baijal, K., Bian, Y., Lackey, L., Dodson, B., Kong, H., Kwon, J., Li, A., Hu, Y., rekoutis, a., Silver, T. & Bhattacharjee, T. (2025). CLAMP: Crowdsourcing a LArge-scale in-the-wild haptic dataset with an open-source device for Multimodal robot Perception. Proceedings of The 9th Conference on Robot Learning, in Proceedings of Machine Learning Research 305:941-960. Available from https://proceedings.mlr.press/v305/thakkar25a.html.
