Task-Conditioned Variational Autoencoders for Learning Movement Primitives

Michael Noseworthy, Rohan Paul, Subhro Roy, Daehyung Park, Nicholas Roy
Proceedings of the Conference on Robot Learning, PMLR 100:933-944, 2020.

Abstract

Consider a task such as pouring liquid from a cup into a container. Some parameters, such as the location of the pour, are crucial to task success, while others, such as the length of the pour, can exhibit larger variation. In this work, we propose a method that differentiates between specified task parameters and learned manner parameters. We would like to allow a designer to specify a subset of the parameters while learning the remaining parameters from a set of demonstrations. This is difficult because the learned parameters need to be interpretable and remain independent of the specified task parameters. To disentangle the parameter sets, we propose a Task-Conditioned Variational Autoencoder (TC-VAE) that conditions on the specified task parameters while learning the rest from demonstrations. We use an adversarial loss function to ensure the learned parameters encode no information about the task parameters. We evaluate our method on pouring demonstrations on a Baxter robot from the MIME dataset. We show that the TC-VAE can generalize to task instances unseen during training and that changing the learned parameters does not affect the success of the motion.
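The loss structure the abstract describes (a conditional VAE whose manner latent is pushed, via an adversary, to carry no task information) can be sketched as follows. This is not the authors' code: it is a minimal numpy illustration under assumed toy dimensions, with random linear maps standing in for the encoder, decoder, and adversary networks, and it shows only how the three loss terms compose (in practice the adversary would be trained alternately or through gradient reversal).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions -- illustrative only, not the paper's architecture.
TRAJ_DIM, TASK_DIM, LATENT_DIM = 12, 2, 3

# Random linear maps stand in for the encoder, decoder, and adversary.
W_enc_mu = rng.normal(size=(TRAJ_DIM, LATENT_DIM))
W_enc_lv = rng.normal(size=(TRAJ_DIM, LATENT_DIM)) * 0.1
W_dec    = rng.normal(size=(LATENT_DIM + TASK_DIM, TRAJ_DIM))
W_adv    = rng.normal(size=(LATENT_DIM, TASK_DIM))

def tc_vae_losses(traj, task, lam=1.0):
    """Compute the three TC-VAE loss terms for one demonstration.

    traj: trajectory features; task: the designer-specified task parameters.
    """
    # Encoder: the manner latent z is inferred from the trajectory alone.
    mu, log_var = traj @ W_enc_mu, traj @ W_enc_lv
    z = mu + np.exp(0.5 * log_var) * rng.normal(size=LATENT_DIM)

    # Decoder: reconstruction conditions on BOTH z and the task parameters.
    recon = np.concatenate([z, task]) @ W_dec
    recon_loss = np.mean((recon - traj) ** 2)

    # Standard-normal KL term of the ELBO.
    kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

    # Adversary tries to predict the task parameters from z; the encoder is
    # trained to MAXIMIZE this error, so z encodes no task information.
    adv_loss = np.mean((z @ W_adv - task) ** 2)

    total = recon_loss + kl - lam * adv_loss  # encoder/decoder objective
    return recon_loss, kl, adv_loss, total

traj = rng.normal(size=TRAJ_DIM)
task = np.array([0.5, -0.2])  # e.g. a pour location, in some task frame
print(tc_vae_losses(traj, task))
```

The sign on the adversarial term is the key design point: the adversary minimizes its prediction error while the encoder's objective subtracts it, so at equilibrium the learned manner parameters are uninformative about the specified task parameters.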

Cite this Paper


BibTeX
@InProceedings{pmlr-v100-noseworthy20a,
  title     = {Task-Conditioned Variational Autoencoders for Learning Movement Primitives},
  author    = {Noseworthy, Michael and Paul, Rohan and Roy, Subhro and Park, Daehyung and Roy, Nicholas},
  booktitle = {Proceedings of the Conference on Robot Learning},
  pages     = {933--944},
  year      = {2020},
  editor    = {Kaelbling, Leslie Pack and Kragic, Danica and Sugiura, Komei},
  volume    = {100},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Oct--01 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v100/noseworthy20a/noseworthy20a.pdf},
  url       = {https://proceedings.mlr.press/v100/noseworthy20a.html},
  abstract  = {Consider a task such as pouring liquid from a cup into a container. Some parameters, such as the location of the pour, are crucial to task success, while others, such as the length of the pour, can exhibit larger variation. In this work, we propose a method that differentiates between specified task parameters and learned manner parameters. We would like to allow a designer to specify a subset of the parameters while learning the remaining parameters from a set of demonstrations. This is difficult because the learned parameters need to be interpretable and remain independent of the specified task parameters. To disentangle the parameter sets, we propose a Task-Conditioned Variational Autoencoder (TC-VAE) that conditions on the specified task parameters while learning the rest from demonstrations. We use an adversarial loss function to ensure the learned parameters encode no information about the task parameters. We evaluate our method on pouring demonstrations on a Baxter robot from the MIME dataset. We show that the TC-VAE can generalize to task instances unseen during training and that changing the learned parameters does not affect the success of the motion.}
}
Endnote
%0 Conference Paper
%T Task-Conditioned Variational Autoencoders for Learning Movement Primitives
%A Michael Noseworthy
%A Rohan Paul
%A Subhro Roy
%A Daehyung Park
%A Nicholas Roy
%B Proceedings of the Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Leslie Pack Kaelbling
%E Danica Kragic
%E Komei Sugiura
%F pmlr-v100-noseworthy20a
%I PMLR
%P 933--944
%U https://proceedings.mlr.press/v100/noseworthy20a.html
%V 100
%X Consider a task such as pouring liquid from a cup into a container. Some parameters, such as the location of the pour, are crucial to task success, while others, such as the length of the pour, can exhibit larger variation. In this work, we propose a method that differentiates between specified task parameters and learned manner parameters. We would like to allow a designer to specify a subset of the parameters while learning the remaining parameters from a set of demonstrations. This is difficult because the learned parameters need to be interpretable and remain independent of the specified task parameters. To disentangle the parameter sets, we propose a Task-Conditioned Variational Autoencoder (TC-VAE) that conditions on the specified task parameters while learning the rest from demonstrations. We use an adversarial loss function to ensure the learned parameters encode no information about the task parameters. We evaluate our method on pouring demonstrations on a Baxter robot from the MIME dataset. We show that the TC-VAE can generalize to task instances unseen during training and that changing the learned parameters does not affect the success of the motion.
APA
Noseworthy, M., Paul, R., Roy, S., Park, D. & Roy, N. (2020). Task-Conditioned Variational Autoencoders for Learning Movement Primitives. Proceedings of the Conference on Robot Learning, in Proceedings of Machine Learning Research 100:933-944. Available from https://proceedings.mlr.press/v100/noseworthy20a.html.