Reinforcement Learning of Active Vision for Manipulating Objects under Occlusions

Ricson Cheng, Arpit Agarwal, Katerina Fragkiadaki;
Proceedings of The 2nd Conference on Robot Learning, PMLR 87:422-431, 2018.

Abstract

We consider artificial agents that learn to jointly control their gripper and camera in order to learn manipulation policies by reinforcement in the presence of occlusions from distractor objects. Distractors often occlude the object of interest and cause it to disappear from the field of view. We propose hand/eye controllers that learn to move the camera to keep the object within the field of view and visible, in coordination with manipulating it to achieve the desired goal, e.g., pushing it to a target location. We incorporate structural biases of object-centric attention within our actor-critic architectures, which our experiments suggest are key to good performance. Our results further highlight the importance of curriculum with respect to environment difficulty. The resulting active vision / manipulation policies outperform static camera setups in a variety of cluttered environments.
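The joint hand/eye control with object-centric attention described in the abstract could be sketched roughly as follows. This is a minimal illustrative toy, not the paper's architecture: the crop-based attention, the linear two-headed actor, and all names and dimensions here are assumptions for exposition only.

```python
import numpy as np

def object_centric_crop(image, obj_xy, size=16):
    """Crop a window centered on the (estimated) object location.
    A crude stand-in for object-centric attention: the policy sees
    features around the object rather than the full frame."""
    h, w = image.shape[:2]
    x = int(np.clip(obj_xy[0], size // 2, w - size // 2))
    y = int(np.clip(obj_xy[1], size // 2, h - size // 2))
    return image[y - size // 2:y + size // 2, x - size // 2:x + size // 2]

class HandEyePolicy:
    """Toy linear actor with two heads: a gripper ("hand") head that
    outputs a push action and a camera ("eye") head that outputs a
    pan/tilt motion to keep the object in view. A stand-in for the
    actor network in an actor-critic setup; the critic is omitted."""
    def __init__(self, feat_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.w_hand = rng.normal(0, 0.1, (feat_dim, 2))  # (dx, dy) push
        self.w_eye = rng.normal(0, 0.1, (feat_dim, 2))   # camera pan/tilt

    def act(self, image, obj_xy):
        feat = object_centric_crop(image, obj_xy).ravel()
        return feat @ self.w_hand, feat @ self.w_eye
```

In this sketch, both action heads share the same attended features, so camera motion and manipulation are coupled through a single observation; in practice such policies would be trained end to end with a reinforcement learning objective.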
