Learning Control by Iterative Inversion

Gal Leibovich, Guy Jacob, Or Avner, Gal Novik, Aviv Tamar
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:19228-19255, 2023.

Abstract

We propose iterative inversion - an algorithm for learning an inverse function without input-output pairs, but only with samples from the desired output distribution and access to the forward function. The key challenge is a distribution shift between the desired outputs and the outputs of an initial random guess, and we prove that iterative inversion can steer the learning correctly, under rather strict conditions on the function. We apply iterative inversion to learn control. Our input is a set of demonstrations of desired behavior, given as video embeddings of trajectories (without actions), and our method iteratively learns to imitate trajectories generated by the current policy, perturbed by random exploration noise. Our approach does not require rewards, and employs only supervised learning, which can be easily scaled to use state-of-the-art trajectory embedding techniques and policy representations. Indeed, with a VQ-VAE embedding and a transformer-based policy, we demonstrate non-trivial continuous control on several tasks (videos available at https://sites.google.com/view/iter-inver). Further, we report improved performance on imitating diverse behaviors compared to reward-based methods.
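The loop described in the abstract can be sketched concretely. Below is a minimal, hypothetical Python/NumPy illustration of iterative inversion on a toy one-dimensional problem: a known scalar function stands in for the environment's forward map, and a linear model stands in for the paper's transformer policy. All names, models, and hyperparameters here are illustrative assumptions, not the authors' implementation.

    import numpy as np

    # A minimal sketch of iterative inversion on a toy 1-D problem, assuming
    # a queryable forward function f and samples from the desired output
    # distribution. The fitted "policy" g(y) = a*y + b is a stand-in for the
    # paper's transformer policy; everything here is illustrative.

    rng = np.random.default_rng(0)

    def f(x):
        # Forward function we can query but not invert analytically.
        return 2.0 * x + 1.0

    y_desired = rng.normal(loc=5.0, scale=1.0, size=256)  # desired outputs

    a, b = rng.normal(), rng.normal()  # random initial guess for the inverse

    for it in range(50):
        # 1. Run the current inverse guess on the desired outputs.
        x_hat = a * y_desired + b
        # 2. Perturb with exploration noise and query the forward function.
        x_explore = x_hat + 0.1 * rng.normal(size=x_hat.shape)
        y_rollout = f(x_explore)
        # 3. Supervised step: fit the inverse on (y_rollout -> x_explore)
        #    pairs, i.e., imitate the inputs that produced these outputs.
        A = np.stack([y_rollout, np.ones_like(y_rollout)], axis=1)
        (a, b), *_ = np.linalg.lstsq(A, x_explore, rcond=None)

    # If the iteration has converged, f(g(y)) should be close to y on the
    # desired output distribution.
    print("mean |f(g(y)) - y| =", np.abs(f(a * y_desired + b) - y_desired).mean())

The same structure carries over to the control setting in the paper: the forward function becomes the environment rollout that maps a policy's actions to a trajectory embedding, the supervised step trains the policy to reproduce the actions of its own noisy rollouts conditioned on their embeddings, and the desired outputs are the demonstration embeddings.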

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-leibovich23a,
  title     = {Learning Control by Iterative Inversion},
  author    = {Leibovich, Gal and Jacob, Guy and Avner, Or and Novik, Gal and Tamar, Aviv},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {19228--19255},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/leibovich23a/leibovich23a.pdf},
  url       = {https://proceedings.mlr.press/v202/leibovich23a.html}
}
Endnote
%0 Conference Paper
%T Learning Control by Iterative Inversion
%A Gal Leibovich
%A Guy Jacob
%A Or Avner
%A Gal Novik
%A Aviv Tamar
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-leibovich23a
%I PMLR
%P 19228--19255
%U https://proceedings.mlr.press/v202/leibovich23a.html
%V 202
APA
Leibovich, G., Jacob, G., Avner, O., Novik, G., & Tamar, A. (2023). Learning Control by Iterative Inversion. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:19228-19255. Available from https://proceedings.mlr.press/v202/leibovich23a.html.