ATK: Automatic Task-driven Keypoint Selection for Robust Policy Learning

Yunchu Zhang, Shubham Mittal, Zhengyu Zhang, Liyiming Ke, Siddhartha Srinivasa, Abhishek Gupta
Proceedings of The 9th Conference on Robot Learning, PMLR 305:2603-2627, 2025.

Abstract

Learning visuomotor policies through imitation learning often suffers from perceptual challenges, where visual differences between training and evaluation environments degrade policy performance. Policies relying on state estimation, such as 6D pose, require task-specific tracking and are difficult to scale, while raw sensor-based policies may lack robustness to small visual disturbances. In this work, we leverage 2D keypoints — spatially consistent features in the image frame — as a state representation for robust policy learning, and apply it to both sim-to-real transfer and real-world imitation learning. However, the choice of which keypoints to use can vary across objects and tasks. We propose a novel method, ATK, to automatically select keypoints in a task-driven manner, such that the chosen keypoints are predictive of optimal behavior for the given task. Our proposal optimizes for a minimal set of task-relevant keypoints that preserves policy performance and robustness. We distill expert data (either from an expert policy in simulation or a human expert) into a policy that operates on RGB images while tracking the selected keypoints. By leveraging pre-trained visual modules, our system effectively tracks keypoints and transfers policies to real-world evaluation scenarios, even under perceptual challenges such as transparent objects, fine-grained manipulation, or widely varying scene appearance. We validate our approach on various robotic tasks, demonstrating that these minimal keypoint representations improve robustness to visual disturbances and environmental variations.
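As a rough illustration of the task-driven selection idea described above, the sketch below trains a differentiable selection mask over candidate 2D keypoints under a behavior-cloning loss plus a sparsity penalty, so that only keypoints predictive of expert actions survive. This is a minimal sketch under assumed design choices; the class name `KeypointSelector`, the Gumbel-sigmoid masking, and the MLP policy are illustrative stand-ins, not the paper's actual formulation.

```python
# Hypothetical sketch: select a minimal set of task-relevant keypoints by
# learning a differentiable mask jointly with a behavior-cloning policy.
# The masking scheme and architecture are assumptions, not ATK's exact method.
import torch
import torch.nn as nn
import torch.nn.functional as F

class KeypointSelector(nn.Module):
    def __init__(self, num_candidates: int, action_dim: int, temperature: float = 1.0):
        super().__init__()
        # One learnable gate logit per candidate keypoint.
        self.logits = nn.Parameter(torch.zeros(num_candidates))
        self.temperature = temperature
        # Small MLP policy over the masked 2D keypoint coordinates.
        self.policy = nn.Sequential(
            nn.Linear(num_candidates * 2, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, action_dim),
        )

    def sample_mask(self) -> torch.Tensor:
        # Relaxed Bernoulli ("Gumbel-sigmoid") sample: differentiable during
        # training, approximately binary at low temperature.
        noise = torch.rand_like(self.logits).clamp(1e-6, 1 - 1e-6)
        gumbel = torch.log(noise) - torch.log1p(-noise)
        return torch.sigmoid((self.logits + gumbel) / self.temperature)

    def forward(self, keypoints: torch.Tensor):
        # keypoints: (batch, num_candidates, 2) tracked 2D image coordinates.
        mask = self.sample_mask()                   # (num_candidates,)
        masked = keypoints * mask.unsqueeze(-1)     # zero out unselected keypoints
        return self.policy(masked.flatten(1)), mask

def bc_loss(model, keypoints, expert_actions, sparsity_weight=1e-3):
    # Behavior cloning against expert actions, plus an L1 penalty on the mask
    # that pressures the selection toward a minimal keypoint set.
    actions, mask = model(keypoints)
    return F.mse_loss(actions, expert_actions) + sparsity_weight * mask.abs().sum()
```

In this framing, the sparsity term drives most gates toward zero, so the keypoints that remain are those the imitation loss cannot do without, matching the minimal, task-relevant property the abstract emphasizes.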

Cite this Paper


BibTeX
@InProceedings{pmlr-v305-zhang25e, title = {ATK: Automatic Task-driven Keypoint Selection for Robust Policy Learning}, author = {Zhang, Yunchu and Mittal, Shubham and Zhang, Zhengyu and Ke, Liyiming and Srinivasa, Siddhartha and Gupta, Abhishek}, booktitle = {Proceedings of The 9th Conference on Robot Learning}, pages = {2603--2627}, year = {2025}, editor = {Lim, Joseph and Song, Shuran and Park, Hae-Won}, volume = {305}, series = {Proceedings of Machine Learning Research}, month = {27--30 Sep}, publisher = {PMLR}, pdf = {https://raw.githubusercontent.com/mlresearch/v305/main/assets/zhang25e/zhang25e.pdf}, url = {https://proceedings.mlr.press/v305/zhang25e.html}, abstract = {Learning visuomotor policies through imitation learning often suffers from perceptual challenges, where visual differences between training and evaluation environments degrade policy performance. Policies relying on state estimation, such as 6D pose, require task-specific tracking and are difficult to scale, while raw sensor-based policies may lack robustness to small visual disturbances. In this work, we leverage 2D keypoints — spatially consistent features in the image frame — as a state representation for robust policy learning, and apply it to both sim-to-real transfer and real-world imitation learning. However, the choice of which keypoints to use can vary across objects and tasks. We propose a novel method, ATK, to automatically select keypoints in a task-driven manner, such that the chosen keypoints are predictive of optimal behavior for the given task. Our proposal optimizes for a minimal set of task-relevant keypoints that preserves policy performance and robustness. We distill expert data (either from an expert policy in simulation or a human expert) into a policy that operates on RGB images while tracking the selected keypoints. By leveraging pre-trained visual modules, our system effectively tracks keypoints and transfers policies to real-world evaluation scenarios, even under perceptual challenges such as transparent objects, fine-grained manipulation, or widely varying scene appearance. We validate our approach on various robotic tasks, demonstrating that these minimal keypoint representations improve robustness to visual disturbances and environmental variations.} }
Endnote
%0 Conference Paper %T ATK: Automatic Task-driven Keypoint Selection for Robust Policy Learning %A Yunchu Zhang %A Shubham Mittal %A Zhengyu Zhang %A Liyiming Ke %A Siddhartha Srinivasa %A Abhishek Gupta %B Proceedings of The 9th Conference on Robot Learning %C Proceedings of Machine Learning Research %D 2025 %E Joseph Lim %E Shuran Song %E Hae-Won Park %F pmlr-v305-zhang25e %I PMLR %P 2603--2627 %U https://proceedings.mlr.press/v305/zhang25e.html %V 305 %X Learning visuomotor policies through imitation learning often suffers from perceptual challenges, where visual differences between training and evaluation environments degrade policy performance. Policies relying on state estimation, such as 6D pose, require task-specific tracking and are difficult to scale, while raw sensor-based policies may lack robustness to small visual disturbances. In this work, we leverage 2D keypoints — spatially consistent features in the image frame — as a state representation for robust policy learning, and apply it to both sim-to-real transfer and real-world imitation learning. However, the choice of which keypoints to use can vary across objects and tasks. We propose a novel method, ATK, to automatically select keypoints in a task-driven manner, such that the chosen keypoints are predictive of optimal behavior for the given task. Our proposal optimizes for a minimal set of task-relevant keypoints that preserves policy performance and robustness. We distill expert data (either from an expert policy in simulation or a human expert) into a policy that operates on RGB images while tracking the selected keypoints. By leveraging pre-trained visual modules, our system effectively tracks keypoints and transfers policies to real-world evaluation scenarios, even under perceptual challenges such as transparent objects, fine-grained manipulation, or widely varying scene appearance. We validate our approach on various robotic tasks, demonstrating that these minimal keypoint representations improve robustness to visual disturbances and environmental variations.
APA
Zhang, Y., Mittal, S., Zhang, Z., Ke, L., Srinivasa, S. & Gupta, A. (2025). ATK: Automatic Task-driven Keypoint Selection for Robust Policy Learning. Proceedings of The 9th Conference on Robot Learning, in Proceedings of Machine Learning Research 305:2603-2627. Available from https://proceedings.mlr.press/v305/zhang25e.html.
