Learning Navigation Costs from Demonstrations with Semantic Observations

Tianyu Wang, Vikas Dhiman, Nikolay Atanasov
Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:245-255, 2020.

Abstract

This paper focuses on inverse reinforcement learning (IRL) for autonomous robot navigation using semantic observations. The objective is to infer a cost function that explains demonstrated behavior while relying only on the expert's observations and state-control trajectory. We develop a map encoder, which infers semantic class probabilities from the observation sequence, and a cost encoder, defined as a deep neural network over the semantic features. Since the expert cost is not directly observable, the representation parameters can only be optimized by differentiating the error between demonstrated controls and a control policy computed from the cost estimate. The error is optimized using a closed-form subgradient computed only over a subset of promising states via a motion planning algorithm. We show that our approach learns to follow traffic rules in the autonomous driving CARLA simulator by relying on semantic observations of cars, sidewalks and road lanes.

Cite this Paper


BibTeX
@InProceedings{pmlr-v120-wang20a,
  title     = {Learning Navigation Costs from Demonstrations with Semantic Observations},
  author    = {Wang, Tianyu and Dhiman, Vikas and Atanasov, Nikolay},
  booktitle = {Proceedings of the 2nd Conference on Learning for Dynamics and Control},
  pages     = {245--255},
  year      = {2020},
  editor    = {Bayen, Alexandre M. and Jadbabaie, Ali and Pappas, George and Parrilo, Pablo A. and Recht, Benjamin and Tomlin, Claire and Zeilinger, Melanie},
  volume    = {120},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--11 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v120/wang20a/wang20a.pdf},
  url       = {https://proceedings.mlr.press/v120/wang20a.html},
  abstract  = {This paper focuses on inverse reinforcement learning (IRL) for autonomous robot navigation using semantic observations. The objective is to infer a cost function that explains demonstrated behavior while relying only on the expert's observations and state-control trajectory. We develop a map encoder, which infers semantic class probabilities from the observation sequence, and a cost encoder, defined as a deep neural network over the semantic features. Since the expert cost is not directly observable, the representation parameters can only be optimized by differentiating the error between demonstrated controls and a control policy computed from the cost estimate. The error is optimized using a closed-form subgradient computed only over a subset of promising states via a motion planning algorithm. We show that our approach learns to follow traffic rules in the autonomous driving CARLA simulator by relying on semantic observations of cars, sidewalks and road lanes.}
}
Endnote
%0 Conference Paper %T Learning Navigation Costs from Demonstrations with Semantic Observations %A Tianyu Wang %A Vikas Dhiman %A Nikolay Atanasov %B Proceedings of the 2nd Conference on Learning for Dynamics and Control %C Proceedings of Machine Learning Research %D 2020 %E Alexandre M. Bayen %E Ali Jadbabaie %E George Pappas %E Pablo A. Parrilo %E Benjamin Recht %E Claire Tomlin %E Melanie Zeilinger %F pmlr-v120-wang20a %I PMLR %P 245--255 %U https://proceedings.mlr.press/v120/wang20a.html %V 120 %X This paper focuses on inverse reinforcement learning (IRL) for autonomous robot navigation using semantic observations. The objective is to infer a cost function that explains demonstrated behavior while relying only on the expert's observations and state-control trajectory. We develop a map encoder, which infers semantic class probabilities from the observation sequence, and a cost encoder, defined as a deep neural network over the semantic features. Since the expert cost is not directly observable, the representation parameters can only be optimized by differentiating the error between demonstrated controls and a control policy computed from the cost estimate. The error is optimized using a closed-form subgradient computed only over a subset of promising states via a motion planning algorithm. We show that our approach learns to follow traffic rules in the autonomous driving CARLA simulator by relying on semantic observations of cars, sidewalks and road lanes.
APA
Wang, T., Dhiman, V. &amp; Atanasov, N. (2020). Learning Navigation Costs from Demonstrations with Semantic Observations. Proceedings of the 2nd Conference on Learning for Dynamics and Control, in Proceedings of Machine Learning Research 120:245-255. Available from https://proceedings.mlr.press/v120/wang20a.html.