Learning Heuristic Search via Imitation

Mohak Bhardwaj, Sanjiban Choudhury, Sebastian Scherer
Proceedings of the 1st Annual Conference on Robot Learning, PMLR 78:271-280, 2017.

Abstract

Robotic motion planning problems are typically solved by constructing a search tree of valid maneuvers from a start to a goal configuration. Limited onboard computation and real-time planning constraints impose a limit on how large this search tree can grow. Heuristics play a crucial role in such situations by guiding the search towards potentially good directions and consequently minimizing search effort. Moreover, the heuristic must infer such directions efficiently, using only the information uncovered by the search up until that time. However, state-of-the-art methods do not address the problem of computing a heuristic that explicitly minimizes search effort. In this paper, we do so by training a heuristic policy that maps the partial information from the search to a decision about which node of the search tree to expand. Unfortunately, naively training such policies leads to slow convergence and poor local minima. We present SaIL, an efficient algorithm that trains heuristic policies by imitating clairvoyant oracles: oracles that have full information about the world and demonstrate decisions that minimize search effort. We leverage the fact that such oracles can be computed efficiently using dynamic programming and derive performance guarantees for the learnt heuristic. We validate the approach on a spectrum of environments, showing that SaIL consistently outperforms state-of-the-art algorithms. Our approach paves the way for learning heuristics with an anytime nature: finding feasible solutions quickly and incrementally refining them over time. Open-source code and details can be found here: https://goo.gl/YXkQAC.
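To make the idea above concrete, here is a minimal illustrative sketch (not the authors' released code; the toy grid world, function names, and feature set are all hypothetical) of how a clairvoyant oracle, computed by backward dynamic programming over the full world, can label which frontier node a best-first search should expand, so that a heuristic policy can be trained to imitate it from partial search information.

    # Minimal SaIL-style sketch (hypothetical names; not the paper's implementation).
    # A clairvoyant oracle = true cost-to-go from every free cell, obtained with
    # backward Dijkstra on a toy 4-connected grid (0 = free, 1 = obstacle).
    import heapq
    import random

    def backward_dijkstra(grid, goal):
        """Oracle cost-to-go from every free cell to the goal (full-world knowledge)."""
        rows, cols = len(grid), len(grid[0])
        dist = {goal: 0.0}
        pq = [(0.0, goal)]
        while pq:
            d, (r, c) = heapq.heappop(pq)
            if d > dist.get((r, c), float("inf")):
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                    nd = d + 1.0
                    if nd < dist.get((nr, nc), float("inf")):
                        dist[(nr, nc)] = nd
                        heapq.heappush(pq, (nd, (nr, nc)))
        return dist

    def features(node, goal, expanded):
        """Partial-information features of a frontier node (hypothetical choice)."""
        (r, c), (gr, gc) = node, goal
        return [abs(r - gr) + abs(c - gc), len(expanded)]

    def search_and_collect(grid, start, goal, oracle, policy, beta, budget=200):
        """Best-first search whose expansions follow a beta-mixture of the oracle
        and the learned policy; returns (features, oracle cost-to-go) training pairs."""
        frontier, expanded, data = [start], set(), []
        for _ in range(budget):
            frontier = [n for n in frontier if n not in expanded]
            if not frontier:
                break
            # Label every frontier node with the oracle's true cost-to-go.
            for n in frontier:
                data.append((features(n, goal, expanded), oracle.get(n, 1e6)))
            # With probability beta follow the oracle, otherwise the learned policy.
            key = (lambda n: oracle.get(n, 1e6)) if random.random() < beta \
                  else (lambda n: policy(features(n, goal, expanded)))
            node = min(frontier, key=key)
            if node == goal:
                break
            expanded.add(node)
            r, c = node
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                    frontier.append((nr, nc))
        return data

    if __name__ == "__main__":
        grid = [[0] * 8 for _ in range(8)]
        oracle = backward_dijkstra(grid, (7, 7))
        # An untrained stand-in policy (Manhattan distance) just to run the loop.
        data = search_and_collect(grid, (0, 0), (7, 7), oracle,
                                  policy=lambda f: f[0], beta=0.5)

In a full SaIL-style training loop, one would aggregate such (feature, oracle cost-to-go) pairs over many sampled worlds and iterations, fit a regressor to predict the oracle label, and anneal beta toward zero so the learned policy gradually takes over node selection, in the spirit of DAgger-style dataset aggregation. The actual implementation is available at the open-source link given in the abstract.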

Cite this Paper


BibTeX
@InProceedings{pmlr-v78-bhardwaj17a, title = {Learning Heuristic Search via Imitation}, author = {Bhardwaj, Mohak and Choudhury, Sanjiban and Scherer, Sebastian}, booktitle = {Proceedings of the 1st Annual Conference on Robot Learning}, pages = {271--280}, year = {2017}, editor = {Levine, Sergey and Vanhoucke, Vincent and Goldberg, Ken}, volume = {78}, series = {Proceedings of Machine Learning Research}, month = {13--15 Nov}, publisher = {PMLR}, pdf = {http://proceedings.mlr.press/v78/bhardwaj17a/bhardwaj17a.pdf}, url = {https://proceedings.mlr.press/v78/bhardwaj17a.html}, abstract = {Robotic motion planning problems are typically solved by constructing a search tree of valid maneuvers from a start to a goal configuration. Limited onboard computation and real-time planning constraints impose a limit on how large this search tree can grow. Heuristics play a crucial role in such situations by guiding the search towards potentially good directions and consequently minimizing search effort. Moreover, it must infer such directions in an efficient manner using only the information uncovered by the search up until that time. However, state of the art methods do not address the problem of computing a heuristic that \emph{explicitly} minimizes search effort. In this paper, we do so by training a heuristic policy that maps the partial information from the search to decide which node of the search tree to expand. Unfortunately, naively training such policies leads to slow convergence and poor local minima. We present \textsc{SaIL}, an efficient algorithm that trains heuristic policies by imitating \emph{clairvoyant} oracles - oracles that have full information about the world and demonstrate decisions that minimize search effort. We leverage the fact that such oracles can be efficiently computed using dynamic programming and derive performance guarantees for the learnt heuristic. We validate the approach on a spectrum of environments which show that \textsc{SaIL} consistently outperforms state of the art algorithms. Our approach paves the way forward for learning heuristics that demonstrate an anytime nature - finding feasible solutions quickly and incrementally refining it over time. Open-source code and details can be found here: https://goo.gl/YXkQAC.} }
Endnote
%0 Conference Paper %T Learning Heuristic Search via Imitation %A Mohak Bhardwaj %A Sanjiban Choudhury %A Sebastian Scherer %B Proceedings of the 1st Annual Conference on Robot Learning %C Proceedings of Machine Learning Research %D 2017 %E Sergey Levine %E Vincent Vanhoucke %E Ken Goldberg %F pmlr-v78-bhardwaj17a %I PMLR %P 271--280 %U https://proceedings.mlr.press/v78/bhardwaj17a.html %V 78 %X Robotic motion planning problems are typically solved by constructing a search tree of valid maneuvers from a start to a goal configuration. Limited onboard computation and real-time planning constraints impose a limit on how large this search tree can grow. Heuristics play a crucial role in such situations by guiding the search towards potentially good directions and consequently minimizing search effort. Moreover, it must infer such directions in an efficient manner using only the information uncovered by the search up until that time. However, state of the art methods do not address the problem of computing a heuristic that explicitly minimizes search effort. In this paper, we do so by training a heuristic policy that maps the partial information from the search to decide which node of the search tree to expand. Unfortunately, naively training such policies leads to slow convergence and poor local minima. We present SaIL, an efficient algorithm that trains heuristic policies by imitating clairvoyant oracles - oracles that have full information about the world and demonstrate decisions that minimize search effort. We leverage the fact that such oracles can be efficiently computed using dynamic programming and derive performance guarantees for the learnt heuristic. We validate the approach on a spectrum of environments which show that SaIL consistently outperforms state of the art algorithms. Our approach paves the way forward for learning heuristics that demonstrate an anytime nature - finding feasible solutions quickly and incrementally refining it over time. Open-source code and details can be found here: https://goo.gl/YXkQAC.
APA
Bhardwaj, M., Choudhury, S. & Scherer, S. (2017). Learning Heuristic Search via Imitation. Proceedings of the 1st Annual Conference on Robot Learning, in Proceedings of Machine Learning Research 78:271-280. Available from https://proceedings.mlr.press/v78/bhardwaj17a.html.