Policy Learning for Active Target Tracking over Continuous $SE(3)$ Trajectories

Pengzhi Yang, Shumon Koga, Arash Asgharivaskasi, Nikolay Atanasov
Proceedings of The 5th Annual Learning for Dynamics and Control Conference, PMLR 211:64-75, 2023.

Abstract

This paper proposes a novel \emph{model-based policy gradient algorithm} for tracking dynamic targets using a mobile robot equipped with an onboard sensor with a limited field of view. The task is to obtain a continuous control policy for the mobile robot to collect sensor measurements that reduce uncertainty in the target states, as measured by the target distribution entropy. We design a neural network control policy that takes the robot $SE(3)$ pose and the mean vector and information matrix of the joint target distribution as inputs, with attention layers to handle a variable number of targets. We also derive the gradient of the target entropy with respect to the network parameters explicitly, allowing efficient model-based policy gradient optimization.
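
As a concrete illustration of the setup: for a Gaussian target distribution with information matrix $Y$, the entropy is $\frac{d}{2}\log(2\pi e) - \frac{1}{2}\log\det Y$, so reducing target uncertainty corresponds to increasing $\log\det Y$. The sketch below (PyTorch; not the authors' code) shows how a policy of the kind described could encode the robot pose and per-target Gaussian statistics and use attention to handle a variable number of targets. All layer sizes, names, and the twist-style control output are illustrative assumptions.

    # Minimal sketch (not the authors' code) of an attention-based tracking
    # policy: inputs are the robot SE(3) pose and per-target Gaussian
    # statistics; the output is a continuous control. Sizes are assumptions.
    import torch
    import torch.nn as nn

    class TrackingPolicy(nn.Module):
        def __init__(self, target_dim=3, hidden=64, control_dim=6):
            super().__init__()
            # Per-target token: mean vector plus flattened information matrix.
            self.target_enc = nn.Linear(target_dim + target_dim ** 2, hidden)
            # Robot pose token: flattened 4x4 homogeneous SE(3) matrix.
            self.pose_enc = nn.Linear(16, hidden)
            # Attention over target tokens handles variable numbers of targets.
            self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
            self.head = nn.Linear(hidden, control_dim)  # e.g. a twist in se(3)

        def forward(self, pose, means, infos):
            # pose: (B, 4, 4); means: (B, N, d); infos: (B, N, d, d)
            B, N, d = means.shape
            tokens = self.target_enc(
                torch.cat([means, infos.reshape(B, N, d * d)], dim=-1))  # (B, N, H)
            query = self.pose_enc(pose.reshape(B, 16)).unsqueeze(1)      # (B, 1, H)
            fused, _ = self.attn(query, tokens, tokens)  # pose attends over targets
            return self.head(fused.squeeze(1))           # (B, control_dim)

    # Example: one robot pose, three targets with 3-D positions.
    policy = TrackingPolicy()
    u = policy(torch.eye(4).unsqueeze(0),
               torch.randn(1, 3, 3),
               torch.eye(3).expand(1, 3, 3, 3))
    print(u.shape)  # torch.Size([1, 6])

Because attention pools over the target dimension, the same network applies whether there are two targets or twenty, which is the role the abstract attributes to the attention layers.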

Cite this Paper


BibTeX
@InProceedings{pmlr-v211-yang23a,
  title     = {Policy Learning for Active Target Tracking over Continuous $SE(3)$ Trajectories},
  author    = {Yang, Pengzhi and Koga, Shumon and Asgharivaskasi, Arash and Atanasov, Nikolay},
  booktitle = {Proceedings of The 5th Annual Learning for Dynamics and Control Conference},
  pages     = {64--75},
  year      = {2023},
  editor    = {Matni, Nikolai and Morari, Manfred and Pappas, George J.},
  volume    = {211},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--16 Jun},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v211/yang23a/yang23a.pdf},
  url       = {https://proceedings.mlr.press/v211/yang23a.html},
  abstract  = {This paper proposes a novel \emph{model-based policy gradient algorithm} for tracking dynamic targets using a mobile robot, equipped with an onboard sensor with a limited field of view. The task is to obtain a continuous control policy for the mobile robot to collect sensor measurements that reduce uncertainty in the target states, measured by the target distribution entropy. We design a neural network control policy with the robot $SE(3)$ pose and the mean vector and information matrix of the joint target distribution as inputs and attention layers to handle variable numbers of targets. We also derive the gradient of the target entropy with respect to the network parameters explicitly, allowing efficient model-based policy gradient optimization.}
}
Endnote
%0 Conference Paper
%T Policy Learning for Active Target Tracking over Continuous $SE(3)$ Trajectories
%A Pengzhi Yang
%A Shumon Koga
%A Arash Asgharivaskasi
%A Nikolay Atanasov
%B Proceedings of The 5th Annual Learning for Dynamics and Control Conference
%C Proceedings of Machine Learning Research
%D 2023
%E Nikolai Matni
%E Manfred Morari
%E George J. Pappas
%F pmlr-v211-yang23a
%I PMLR
%P 64--75
%U https://proceedings.mlr.press/v211/yang23a.html
%V 211
%X This paper proposes a novel \emph{model-based policy gradient algorithm} for tracking dynamic targets using a mobile robot, equipped with an onboard sensor with a limited field of view. The task is to obtain a continuous control policy for the mobile robot to collect sensor measurements that reduce uncertainty in the target states, measured by the target distribution entropy. We design a neural network control policy with the robot $SE(3)$ pose and the mean vector and information matrix of the joint target distribution as inputs and attention layers to handle variable numbers of targets. We also derive the gradient of the target entropy with respect to the network parameters explicitly, allowing efficient model-based policy gradient optimization.
APA
Yang, P., Koga, S., Asgharivaskasi, A., & Atanasov, N. (2023). Policy Learning for Active Target Tracking over Continuous $SE(3)$ Trajectories. Proceedings of The 5th Annual Learning for Dynamics and Control Conference, in Proceedings of Machine Learning Research 211:64-75. Available from https://proceedings.mlr.press/v211/yang23a.html.
