Learning Novel Policies For Tasks

Yunbo Zhang, Wenhao Yu, Greg Turk
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:7483-7492, 2019.

Abstract

In this work, we present a reinforcement learning algorithm that can find a variety of policies (novel policies) for a task that is given by a task reward function. Our method does this by creating a second reward function that recognizes previously seen state sequences and rewards them according to their novelty, which is measured using autoencoders that have been trained on state sequences from previously discovered policies. We present a two-objective update technique for policy gradient algorithms in which each update of the policy is a compromise between improving the task reward and improving the novelty reward. Using this method, we end up with a collection of policies that solve a given task while carrying out action sequences that are distinct from one another. We demonstrate this method on maze navigation tasks, a reaching task for a simulated robot arm, and a locomotion task for a hopper. We also demonstrate the effectiveness of our approach on deceptive tasks in which policy gradient methods often get stuck.
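
As a rough illustration of the two ingredients named in the abstract, the sketch below (PyTorch) computes a novelty reward as the autoencoder reconstruction error of a state sequence under an autoencoder fit to an earlier policy, and takes a compromise step that mixes the gradients of a task surrogate loss and a novelty surrogate loss. The network sizes, the REINFORCE-style surrogates, and the fixed 50/50 gradient mix are illustrative assumptions, not the paper's exact architectures or two-objective update rule.

# Hedged sketch (not the paper's exact method): novelty reward via autoencoder
# reconstruction error, plus a simple two-objective "compromise" gradient step.
import torch
import torch.nn as nn

SEQ_LEN, STATE_DIM, N_ACTIONS = 20, 4, 2


def make_autoencoder():
    # Small MLP autoencoder over a flattened state sequence (assumed sizes).
    d = SEQ_LEN * STATE_DIM
    return nn.Sequential(
        nn.Linear(d, 64), nn.ReLU(),
        nn.Linear(64, 8), nn.ReLU(),   # assumed 8-d bottleneck
        nn.Linear(8, 64), nn.ReLU(),
        nn.Linear(64, d),
    )


def novelty_reward(state_seq, autoencoders):
    # High when no autoencoder trained on earlier policies reconstructs the
    # sequence well, i.e. the rollout looks unlike previously found behaviors.
    x = state_seq.reshape(1, -1)
    errs = torch.stack([nn.functional.mse_loss(ae(x), x) for ae in autoencoders])
    return errs.min()  # assumption: novelty w.r.t. the most similar old policy


def compromise_step(policy, task_loss, novelty_loss, alpha=0.5, lr=1e-3):
    # Assumed compromise: a convex combination of the two gradients; the
    # paper's two-objective update may trade the objectives off differently.
    params = list(policy.parameters())
    g_task = torch.autograd.grad(task_loss, params, retain_graph=True)
    g_nov = torch.autograd.grad(novelty_loss, params)
    with torch.no_grad():
        for p, gt, gn in zip(params, g_task, g_nov):
            p -= lr * (alpha * gt + (1.0 - alpha) * gn)


# Toy usage on random data, just to show how the pieces fit together.
old_ae = make_autoencoder()                    # stands in for a trained autoencoder
policy = nn.Linear(STATE_DIM, N_ACTIONS)       # stand-in policy network
states = torch.randn(SEQ_LEN, STATE_DIM)       # stand-in rollout of states
dist = torch.distributions.Categorical(logits=policy(states))
logp = dist.log_prob(dist.sample()).sum()

r_task = torch.randn(())                       # placeholder task return
r_novel = novelty_reward(states, [old_ae]).detach()
compromise_step(policy, -logp * r_task, -logp * r_novel)  # REINFORCE-style surrogates
print("novelty reward:", float(r_novel))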

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-zhang19q,
  title     = {Learning Novel Policies For Tasks},
  author    = {Zhang, Yunbo and Yu, Wenhao and Turk, Greg},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {7483--7492},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/zhang19q/zhang19q.pdf},
  url       = {https://proceedings.mlr.press/v97/zhang19q.html}
}
Endnote
%0 Conference Paper
%T Learning Novel Policies For Tasks
%A Yunbo Zhang
%A Wenhao Yu
%A Greg Turk
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-zhang19q
%I PMLR
%P 7483--7492
%U https://proceedings.mlr.press/v97/zhang19q.html
%V 97
APA
Zhang, Y., Yu, W. & Turk, G. (2019). Learning Novel Policies For Tasks. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:7483-7492. Available from https://proceedings.mlr.press/v97/zhang19q.html.
