MAME : Model-Agnostic Meta-Exploration

Swaminathan Gurumurthy, Sumit Kumar, Katia Sycara
Proceedings of the Conference on Robot Learning, PMLR 100:910-922, 2020.

Abstract

Meta-reinforcement learning approaches aim to develop learning procedures that can adapt quickly to a distribution of tasks with the help of a few examples. Developing efficient exploration strategies capable of finding the most useful samples becomes critical in such settings. Existing approaches toward finding efficient exploration strategies add auxiliary objectives to promote exploration by the pre-update policy; however, this makes adaptation using a few gradient steps difficult, as the pre-update (exploration) and post-update (exploitation) policies are often quite different. Instead, we propose to explicitly model a separate exploration policy for the task distribution. Having two different policies gives more flexibility in training the exploration policy and also makes adaptation to any specific task easier. We show that using self-supervised or supervised learning objectives for adaptation allows for more efficient inner-loop updates, and we also demonstrate the superior performance of our model compared to prior works in this domain.
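The core idea — a separate exploration policy that gathers data, while the exploitation policy adapts to each task with a single inner-loop gradient step — can be illustrated with a deliberately tiny toy sketch. Everything below (the 1-D "policies" as scalar parameters, the quadratic task reward, the function names `collect`, `inner_adapt`, `meta_train`) is our own illustrative stand-in, not the paper's actual algorithm or objectives:

```python
import random

random.seed(0)

def collect(explore_w, task_target, n=20):
    """Exploration 'policy': sample states around explore_w.
    Task reward peaks at the (unknown) task_target."""
    states = [explore_w + random.uniform(-1.0, 1.0) for _ in range(n)]
    rewards = [-(s - task_target) ** 2 for s in states]
    return states, rewards

def inner_adapt(theta, states, rewards, lr=0.05):
    """One inner-loop gradient step: a simple supervised surrogate that
    pulls theta toward the best-rewarded state found by exploration."""
    best = max(zip(rewards, states))[1]
    grad = 2.0 * (theta - best)          # d/dtheta of (theta - best)^2
    return theta - lr * grad

def meta_train(tasks, steps=200, meta_lr=0.05):
    """Outer loop: the post-adaptation task loss drives updates to both
    the exploitation parameters and the separate exploration parameters."""
    theta, explore_w = 0.0, 0.0
    for _ in range(steps):
        task = random.choice(tasks)
        states, rewards = collect(explore_w, task)
        theta_task = inner_adapt(theta, states, rewards)
        grad_theta = 2.0 * (theta_task - task)   # post-update loss gradient
        theta -= meta_lr * grad_theta
        # Crude stand-in for the exploration policy's meta-update:
        # keep exploration centered on the currently useful region.
        explore_w -= meta_lr * (explore_w - theta)
    return theta, explore_w
```

With a single task at 1.0, both parameter scalars drift toward the rewarding region, since the exploration policy is trained separately rather than being the pre-update version of the exploitation policy. This only mimics the two-policy structure; the paper's actual inner-loop objectives are self-supervised or supervised losses over trajectories, not this scalar surrogate.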

Cite this Paper


BibTeX
@InProceedings{pmlr-v100-gurumurthy20a,
  title     = {MAME : Model-Agnostic Meta-Exploration},
  author    = {Gurumurthy, Swaminathan and Kumar, Sumit and Sycara, Katia},
  booktitle = {Proceedings of the Conference on Robot Learning},
  pages     = {910--922},
  year      = {2020},
  editor    = {Leslie Pack Kaelbling and Danica Kragic and Komei Sugiura},
  volume    = {100},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Oct--01 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v100/gurumurthy20a/gurumurthy20a.pdf},
  url       = {http://proceedings.mlr.press/v100/gurumurthy20a.html},
  abstract  = {Meta-Reinforcement learning approaches aim to develop learning procedures that can adapt quickly to a distribution of tasks with the help of a few examples. Developing efficient exploration strategies capable of finding the most useful samples becomes critical in such settings. Existing approaches towards finding efficient exploration strategies add auxiliary objectives to promote exploration by the pre-update policy, however, this makes the adaptation using a few gradient steps difficult as the pre-update (exploration) and post-update (exploitation) policies are often quite different. Instead, we propose to explicitly model a separate exploration policy for the task distribution. Having two different policies gives more flexibility in training the exploration policy and also makes adaptation to any specific task easier. We show that using self-supervised or supervised learning objectives for adaptation allows for more efficient inner-loop updates and also demonstrate the superior performance of our model compared to prior works in this domain.}
}
Endnote
%0 Conference Paper
%T MAME : Model-Agnostic Meta-Exploration
%A Swaminathan Gurumurthy
%A Sumit Kumar
%A Katia Sycara
%B Proceedings of the Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Leslie Pack Kaelbling
%E Danica Kragic
%E Komei Sugiura
%F pmlr-v100-gurumurthy20a
%I PMLR
%J Proceedings of Machine Learning Research
%P 910--922
%U http://proceedings.mlr.press
%V 100
%W PMLR
%X Meta-Reinforcement learning approaches aim to develop learning procedures that can adapt quickly to a distribution of tasks with the help of a few examples. Developing efficient exploration strategies capable of finding the most useful samples becomes critical in such settings. Existing approaches towards finding efficient exploration strategies add auxiliary objectives to promote exploration by the pre-update policy, however, this makes the adaptation using a few gradient steps difficult as the pre-update (exploration) and post-update (exploitation) policies are often quite different. Instead, we propose to explicitly model a separate exploration policy for the task distribution. Having two different policies gives more flexibility in training the exploration policy and also makes adaptation to any specific task easier. We show that using self-supervised or supervised learning objectives for adaptation allows for more efficient inner-loop updates and also demonstrate the superior performance of our model compared to prior works in this domain.
APA
Gurumurthy, S., Kumar, S. & Sycara, K. (2020). MAME : Model-Agnostic Meta-Exploration. Proceedings of the Conference on Robot Learning, in PMLR 100:910-922.