AC-Teach: A Bayesian Actor-Critic Method for Policy Learning with an Ensemble of Suboptimal Teachers

Andrey Kurenkov, Ajay Mandlekar, Roberto Martin-Martin, Silvio Savarese, Animesh Garg
Proceedings of the Conference on Robot Learning, PMLR 100:717-734, 2020.

Abstract

The exploration mechanism used by a Deep Reinforcement Learning (RL) agent plays a key role in determining its sample efficiency. Thus, improving over random exploration is crucial to solve long-horizon tasks with sparse rewards. We propose to leverage an ensemble of partial solutions as teachers that guide the agent’s exploration with action suggestions throughout training. While the setup of learning with teachers has been previously studied, our proposed approach – Actor-Critic with Teacher Ensembles (AC-Teach) – is the first to work with an ensemble of suboptimal teachers that may solve only part of the problem or contradict each other, forming a unified algorithmic solution that is compatible with a broad range of teacher ensembles. AC-Teach leverages a probabilistic representation of the expected outcome of the teachers’ and student’s actions to direct exploration, reduce dithering, and adapt to the dynamically changing quality of the learner. We evaluate a variant of AC-Teach that guides the learning of a Bayesian DDPG agent on three tasks – path following, robotic pick and place, and robotic cube sweeping using a hook – and show that it largely improves sample efficiency over a set of baselines, both for our target scenario of unconstrained suboptimal teachers and for easier setups with optimal or single teachers. Additional results and videos at https://sites.google.com/view/acteach/.
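The core behavioral-policy idea in the abstract – using a probabilistic estimate of action outcomes to arbitrate between the student's action and the teachers' suggestions – can be sketched roughly as Thompson-sampling-style selection with an ensemble of critics standing in for a posterior over Q-values. This is a minimal illustrative sketch, not the authors' implementation; the function name `select_behavior_action` and the toy critics are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def select_behavior_action(state, candidate_actions, q_ensemble):
    """Pick a behavioral action from the candidates (the student's action
    plus each teacher's suggestion) by sampling one critic from the
    ensemble, which approximates a draw from a posterior over Q-values."""
    # A single sampled critic, i.e. one plausible hypothesis about action values.
    q = q_ensemble[rng.integers(len(q_ensemble))]
    values = [q(state, a) for a in candidate_actions]
    # Act greedily with respect to the sampled critic (Thompson sampling).
    return candidate_actions[int(np.argmax(values))]

# Toy setup: 1-D state/action space with three disagreeing critics.
q_ensemble = [
    lambda s, a: -(a - 0.3) ** 2,  # this critic prefers actions near 0.3
    lambda s, a: -(a - 0.5) ** 2,  # ... near 0.5
    lambda s, a: -(a - 0.7) ** 2,  # ... near 0.7
]
candidates = [0.1, 0.5, 0.9]  # e.g. student action + two teacher suggestions
action = select_behavior_action(0.0, candidates, q_ensemble)
```

Because the critic is resampled at each step, which candidate wins varies with the posterior's disagreement: confident critics commit to one source, while uncertain ones keep exploring across teachers and student.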

Cite this Paper


BibTeX
@InProceedings{pmlr-v100-kurenkov20a,
  title     = {AC-Teach: A Bayesian Actor-Critic Method for Policy Learning with an Ensemble of Suboptimal Teachers},
  author    = {Kurenkov, Andrey and Mandlekar, Ajay and Martin-Martin, Roberto and Savarese, Silvio and Garg, Animesh},
  pages     = {717--734},
  year      = {2020},
  editor    = {Leslie Pack Kaelbling and Danica Kragic and Komei Sugiura},
  volume    = {100},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Oct--01 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v100/kurenkov20a/kurenkov20a.pdf},
  url       = {http://proceedings.mlr.press/v100/kurenkov20a.html},
  abstract  = {The exploration mechanism used by a Deep Reinforcement Learning (RL) agent plays a key role in determining its sample efficiency. Thus, improving over random exploration is crucial to solve long-horizon tasks with sparse rewards. We propose to leverage an ensemble of partial solutions as teachers that guide the agent’s exploration with action suggestions throughout training. While the setup of learning with teachers has been previously studied, our proposed approach – Actor-Critic with Teacher Ensembles (AC-Teach) – is the first to work with an ensemble of suboptimal teachers that may solve only part of the problem or contradict each other, forming a unified algorithmic solution that is compatible with a broad range of teacher ensembles. AC-Teach leverages a probabilistic representation of the expected outcome of the teachers’ and student’s actions to direct exploration, reduce dithering, and adapt to the dynamically changing quality of the learner. We evaluate a variant of AC-Teach that guides the learning of a Bayesian DDPG agent on three tasks – path following, robotic pick and place, and robotic cube sweeping using a hook – and show that it largely improves sample efficiency over a set of baselines, both for our target scenario of unconstrained suboptimal teachers and for easier setups with optimal or single teachers. Additional results and videos at https://sites.google.com/view/acteach/.}
}
Endnote
%0 Conference Paper
%T AC-Teach: A Bayesian Actor-Critic Method for Policy Learning with an Ensemble of Suboptimal Teachers
%A Andrey Kurenkov
%A Ajay Mandlekar
%A Roberto Martin-Martin
%A Silvio Savarese
%A Animesh Garg
%B Proceedings of the Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Leslie Pack Kaelbling
%E Danica Kragic
%E Komei Sugiura
%F pmlr-v100-kurenkov20a
%I PMLR
%J Proceedings of Machine Learning Research
%P 717--734
%U http://proceedings.mlr.press
%V 100
%W PMLR
%X The exploration mechanism used by a Deep Reinforcement Learning (RL) agent plays a key role in determining its sample efficiency. Thus, improving over random exploration is crucial to solve long-horizon tasks with sparse rewards. We propose to leverage an ensemble of partial solutions as teachers that guide the agent’s exploration with action suggestions throughout training. While the setup of learning with teachers has been previously studied, our proposed approach – Actor-Critic with Teacher Ensembles (AC-Teach) – is the first to work with an ensemble of suboptimal teachers that may solve only part of the problem or contradict each other, forming a unified algorithmic solution that is compatible with a broad range of teacher ensembles. AC-Teach leverages a probabilistic representation of the expected outcome of the teachers’ and student’s actions to direct exploration, reduce dithering, and adapt to the dynamically changing quality of the learner. We evaluate a variant of AC-Teach that guides the learning of a Bayesian DDPG agent on three tasks – path following, robotic pick and place, and robotic cube sweeping using a hook – and show that it largely improves sample efficiency over a set of baselines, both for our target scenario of unconstrained suboptimal teachers and for easier setups with optimal or single teachers. Additional results and videos at https://sites.google.com/view/acteach/.
APA
Kurenkov, A., Mandlekar, A., Martin-Martin, R., Savarese, S. & Garg, A. (2020). AC-Teach: A Bayesian Actor-Critic Method for Policy Learning with an Ensemble of Suboptimal Teachers. Proceedings of the Conference on Robot Learning, in PMLR 100:717-734.
