Task-Conditioned Variational Autoencoders for Learning Movement Primitives
Proceedings of the Conference on Robot Learning, PMLR 100:933-944, 2020.
Consider a task such as pouring liquid from a cup into a container. Some parameters, such as the location of the pour, are crucial to task success, while others, such as the length of the pour, can exhibit larger variation. In this work, we propose a method that differentiates between specified task parameters and learned manner parameters. We would like to allow a designer to specify a subset of the parameters while learning the remaining parameters from a set of demonstrations. This is difficult because the learned parameters must be interpretable and remain independent of the specified task parameters. To disentangle the two parameter sets, we propose a Task-Conditioned Variational Autoencoder (TC-VAE) that conditions on the specified task parameters while learning the rest from demonstrations. We use an adversarial loss function to ensure that the learned parameters encode no information about the task parameters. We evaluate our method on pouring demonstrations performed on a Baxter robot from the MIME dataset. We show that the TC-VAE can generalize to task instances unseen during training and that changing the learned parameters does not affect the success of the motion.
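The structure described above can be sketched minimally: an encoder maps a demonstrated trajectory to learned manner parameters z, a decoder reconstructs the trajectory from z together with the specified task parameters, and an adversary tries to recover the task parameters from z while the encoder is trained to make it fail. The sketch below is illustrative only, with assumed dimensions and plain linear layers in NumPy; it is not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the paper).
TRAJ_DIM, TASK_DIM, LATENT_DIM = 20, 2, 3

def linear(n_in, n_out):
    """Random linear layer (weights, bias) standing in for a trained network."""
    return rng.normal(0.0, 0.1, (n_in, n_out)), np.zeros(n_out)

# Encoder q(z | trajectory): outputs mean and log-variance of z.
W_mu, b_mu = linear(TRAJ_DIM, LATENT_DIM)
W_lv, b_lv = linear(TRAJ_DIM, LATENT_DIM)

# Decoder p(trajectory | z, tau): conditioned on the specified task parameters tau.
W_dec, b_dec = linear(LATENT_DIM + TASK_DIM, TRAJ_DIM)

# Adversary regresses tau from z; the encoder is trained so this fails,
# pushing the learned manner parameters z to carry no task information.
W_adv, b_adv = linear(LATENT_DIM, TASK_DIM)

def encode(x):
    mu = x @ W_mu + b_mu
    logvar = x @ W_lv + b_lv
    z = mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)  # reparameterization trick
    return z, mu, logvar

def decode(z, tau):
    return np.concatenate([z, tau], axis=-1) @ W_dec + b_dec

def tc_vae_losses(x, tau):
    z, mu, logvar = encode(x)
    recon = np.mean((decode(z, tau) - x) ** 2)                     # reconstruction term
    kl = -0.5 * np.mean(1 + logvar - mu**2 - np.exp(logvar))       # KL to the prior
    adv = np.mean((z @ W_adv + b_adv - tau) ** 2)                  # adversary's regression loss
    # The encoder/decoder minimize recon + kl - adv (fooling the adversary);
    # the adversary is separately trained to minimize adv.
    return recon + kl - adv, adv

x = rng.standard_normal((8, TRAJ_DIM))    # batch of flattened demonstration trajectories
tau = rng.standard_normal((8, TASK_DIM))  # specified task parameters (e.g. pour location)
gen_loss, adv_loss = tc_vae_losses(x, tau)
print(gen_loss, adv_loss)
```

In this two-player setup, the minus sign on the adversary's loss in the generator objective is what encourages z to become uninformative about tau, which is the disentanglement property the abstract describes.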