Generative adversarial training of product of policies for robust and adaptive movement primitives

Emmanuel Pignat, Hakan Girgin, Sylvain Calinon
Proceedings of the 2020 Conference on Robot Learning, PMLR 155:1456-1470, 2021.

Abstract

In learning from demonstrations, many generative models of trajectories make simplifying assumptions of independence. Correctness is sacrificed in the name of tractability and speed of the learning phase. The ignored dependencies, which are often the kinematic and dynamic constraints of the system, are then only restored when synthesizing the motion, which introduces possibly heavy distortions. In this work, we propose to use those approximate trajectory distributions as close-to-optimal discriminators in the popular generative adversarial framework to stabilize and accelerate the learning procedure. The two problems of adaptability and robustness are addressed with our method. In order to adapt the motions to varying contexts, we propose to use a product of Gaussian policies defined in several parametrized task spaces. Robustness to perturbations and varying dynamics is ensured with the use of stochastic gradient descent and ensemble methods to learn the stochastic dynamics. Two experiments are performed on a 7-DoF manipulator to validate the approach.
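The abstract's "product of Gaussian policies" builds on a standard identity: the product of Gaussian densities is itself Gaussian (up to normalization), with precision equal to the sum of the individual precisions. As a minimal sketch of that fusion step only (the paper's task-space parametrizations, Jacobians, and adversarial training are omitted, and the function name is our own):

```python
import numpy as np

def product_of_gaussians(mus, sigmas):
    """Fuse Gaussian policies N(mu_i, Sigma_i) into one Gaussian.

    The fused precision is the sum of the individual precisions
    Lambda_i = Sigma_i^{-1}, and the fused mean is the
    precision-weighted average of the individual means.
    """
    lambdas = [np.linalg.inv(s) for s in sigmas]       # individual precisions
    sigma = np.linalg.inv(sum(lambdas))                # fused covariance
    mu = sigma @ sum(l @ m for l, m in zip(lambdas, mus))
    return mu, sigma

# Two unit-covariance policies pulling toward different targets:
# the fused policy sits at their average, with halved covariance.
mu, sigma = product_of_gaussians(
    [np.zeros(2), np.array([2.0, 2.0])],
    [np.eye(2), np.eye(2)],
)
```

Each factor acts as a soft constraint in its own task space; low-variance (high-precision) policies dominate the fused mean, which is what makes the product adapt to varying contexts.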

Cite this Paper


BibTeX
@InProceedings{pmlr-v155-pignat21a,
  title     = {Generative adversarial training of product of policies for robust and adaptive movement primitives},
  author    = {Pignat, Emmanuel and Girgin, Hakan and Calinon, Sylvain},
  booktitle = {Proceedings of the 2020 Conference on Robot Learning},
  pages     = {1456--1470},
  year      = {2021},
  editor    = {Kober, Jens and Ramos, Fabio and Tomlin, Claire},
  volume    = {155},
  series    = {Proceedings of Machine Learning Research},
  month     = {16--18 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v155/pignat21a/pignat21a.pdf},
  url       = {https://proceedings.mlr.press/v155/pignat21a.html},
  abstract  = {In learning from demonstrations, many generative models of trajectories make simplifying assumptions of independence. Correctness is sacrificed in the name of tractability and speed of the learning phase. The ignored dependencies, which are often the kinematic and dynamic constraints of the system, are then only restored when synthesizing the motion, which introduces possibly heavy distortions. In this work, we propose to use those approximate trajectory distributions as close-to-optimal discriminators in the popular generative adversarial framework to stabilize and accelerate the learning procedure. The two problems of adaptability and robustness are addressed with our method. In order to adapt the motions to varying contexts, we propose to use a product of Gaussian policies defined in several parametrized task spaces. Robustness to perturbations and varying dynamics is ensured with the use of stochastic gradient descent and ensemble methods to learn the stochastic dynamics. Two experiments are performed on a 7-DoF manipulator to validate the approach.}
}
Endnote
%0 Conference Paper
%T Generative adversarial training of product of policies for robust and adaptive movement primitives
%A Emmanuel Pignat
%A Hakan Girgin
%A Sylvain Calinon
%B Proceedings of the 2020 Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Jens Kober
%E Fabio Ramos
%E Claire Tomlin
%F pmlr-v155-pignat21a
%I PMLR
%P 1456--1470
%U https://proceedings.mlr.press/v155/pignat21a.html
%V 155
%X In learning from demonstrations, many generative models of trajectories make simplifying assumptions of independence. Correctness is sacrificed in the name of tractability and speed of the learning phase. The ignored dependencies, which are often the kinematic and dynamic constraints of the system, are then only restored when synthesizing the motion, which introduces possibly heavy distortions. In this work, we propose to use those approximate trajectory distributions as close-to-optimal discriminators in the popular generative adversarial framework to stabilize and accelerate the learning procedure. The two problems of adaptability and robustness are addressed with our method. In order to adapt the motions to varying contexts, we propose to use a product of Gaussian policies defined in several parametrized task spaces. Robustness to perturbations and varying dynamics is ensured with the use of stochastic gradient descent and ensemble methods to learn the stochastic dynamics. Two experiments are performed on a 7-DoF manipulator to validate the approach.
APA
Pignat, E., Girgin, H. & Calinon, S. (2021). Generative adversarial training of product of policies for robust and adaptive movement primitives. Proceedings of the 2020 Conference on Robot Learning, in Proceedings of Machine Learning Research 155:1456-1470. Available from https://proceedings.mlr.press/v155/pignat21a.html.