ThinkGrasp: A Vision-Language System for Strategic Part Grasping in Clutter

Yaoyao Qian, Xupeng Zhu, Ondrej Biza, Shuo Jiang, Linfeng Zhao, Haojie Huang, Yu Qi, Robert Platt
Proceedings of The 8th Conference on Robot Learning, PMLR 270:3568-3586, 2025.

Abstract

Robotic grasping in cluttered environments remains a significant challenge due to occlusions and complex object arrangements. We have developed ThinkGrasp, a plug-and-play vision-language grasping system that makes use of GPT-4o’s advanced contextual reasoning for grasping strategies. ThinkGrasp can effectively identify and generate grasp poses for target objects, even when they are heavily obstructed or nearly invisible, by using goal-oriented language to guide the removal of obstructing objects. This approach progressively uncovers the target object and ultimately grasps it with a few steps and a high success rate. In both simulated and real experiments, ThinkGrasp achieved a high success rate and significantly outperformed state-of-the-art methods in heavily cluttered environments or with diverse unseen objects, demonstrating strong generalization capabilities.

Cite this Paper


BibTeX
@InProceedings{pmlr-v270-qian25c,
  title     = {ThinkGrasp: A Vision-Language System for Strategic Part Grasping in Clutter},
  author    = {Qian, Yaoyao and Zhu, Xupeng and Biza, Ondrej and Jiang, Shuo and Zhao, Linfeng and Huang, Haojie and Qi, Yu and Platt, Robert},
  booktitle = {Proceedings of The 8th Conference on Robot Learning},
  pages     = {3568--3586},
  year      = {2025},
  editor    = {Agrawal, Pulkit and Kroemer, Oliver and Burgard, Wolfram},
  volume    = {270},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Nov},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v270/main/assets/qian25c/qian25c.pdf},
  url       = {https://proceedings.mlr.press/v270/qian25c.html},
  abstract  = {Robotic grasping in cluttered environments remains a significant challenge due to occlusions and complex object arrangements. We have developed ThinkGrasp, a plug-and-play vision-language grasping system that makes use of GPT-4o’s advanced contextual reasoning for grasping strategies. ThinkGrasp can effectively identify and generate grasp poses for target objects, even when they are heavily obstructed or nearly invisible, by using goal-oriented language to guide the removal of obstructing objects. This approach progressively uncovers the target object and ultimately grasps it with a few steps and a high success rate. In both simulated and real experiments, ThinkGrasp achieved a high success rate and significantly outperformed state-of-the-art methods in heavily cluttered environments or with diverse unseen objects, demonstrating strong generalization capabilities.}
}
Endnote
%0 Conference Paper
%T ThinkGrasp: A Vision-Language System for Strategic Part Grasping in Clutter
%A Yaoyao Qian
%A Xupeng Zhu
%A Ondrej Biza
%A Shuo Jiang
%A Linfeng Zhao
%A Haojie Huang
%A Yu Qi
%A Robert Platt
%B Proceedings of The 8th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Pulkit Agrawal
%E Oliver Kroemer
%E Wolfram Burgard
%F pmlr-v270-qian25c
%I PMLR
%P 3568--3586
%U https://proceedings.mlr.press/v270/qian25c.html
%V 270
%X Robotic grasping in cluttered environments remains a significant challenge due to occlusions and complex object arrangements. We have developed ThinkGrasp, a plug-and-play vision-language grasping system that makes use of GPT-4o’s advanced contextual reasoning for grasping strategies. ThinkGrasp can effectively identify and generate grasp poses for target objects, even when they are heavily obstructed or nearly invisible, by using goal-oriented language to guide the removal of obstructing objects. This approach progressively uncovers the target object and ultimately grasps it with a few steps and a high success rate. In both simulated and real experiments, ThinkGrasp achieved a high success rate and significantly outperformed state-of-the-art methods in heavily cluttered environments or with diverse unseen objects, demonstrating strong generalization capabilities.
APA
Qian, Y., Zhu, X., Biza, O., Jiang, S., Zhao, L., Huang, H., Qi, Y. & Platt, R. (2025). ThinkGrasp: A Vision-Language System for Strategic Part Grasping in Clutter. Proceedings of The 8th Conference on Robot Learning, in Proceedings of Machine Learning Research 270:3568-3586. Available from https://proceedings.mlr.press/v270/qian25c.html.