Adversarial Active Exploration for Inverse Dynamics Model Learning

Zhang-Wei Hong, Tsu-Jui Fu, Tzu-Yun Shann, Chun-Yi Lee
Proceedings of the Conference on Robot Learning, PMLR 100:552-565, 2020.

Abstract

We present adversarial active exploration for inverse dynamics model learning, a simple yet effective learning scheme that incentivizes exploration in an environment without any human intervention. Our framework consists of a deep reinforcement learning (DRL) agent and an inverse dynamics model competing with each other. The DRL agent collects training samples for the inverse dynamics model, with the objective of maximizing the model's prediction error. The inverse dynamics model is trained on the samples collected by the DRL agent, and rewards the agent whenever the model fails to predict the actual action the agent took. In this competitive setting, the DRL agent learns to generate samples that the inverse dynamics model fails to predict correctly, while the inverse dynamics model learns to adapt to these challenging samples. We further propose a reward structure that encourages the DRL agent to collect moderately hard samples while avoiding overly hard ones that would prevent the inverse model from making effective predictions. We evaluate our method on several robotic arm and hand manipulation tasks against multiple baselines. Experimental results show that our method is comparable to models trained directly on expert demonstrations, and superior to the other baselines even without any human priors.
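
To make the two-player interplay concrete, below is a minimal sketch of the loop the abstract describes: an exploring agent generates transitions, the inverse dynamics model is trained on them, and the agent is rewarded when the model mispredicts, with the reward shaped so that only moderately hard samples pay off. All names here (LinearInverseModel, exploration_reward, the threshold delta) and the toy dynamics are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, ACT_DIM = 4, 2


class LinearInverseModel:
    """Toy inverse dynamics model: predicts the action a_t from (s_t, s_{t+1})."""

    def __init__(self, obs_dim, act_dim, lr=1e-2):
        self.W = np.zeros((2 * obs_dim, act_dim))
        self.lr = lr

    def update(self, s, s_next, a):
        """One SGD step on the squared prediction error; returns that error."""
        x = np.concatenate([s, s_next])
        err = x @ self.W - a
        self.W -= self.lr * np.outer(x, err)
        return float(np.sum(err ** 2))


def exploration_reward(pred_error, delta=1.0):
    """Reward the exploring agent for samples the inverse model mispredicts,
    but decrease the reward once the error exceeds `delta`, so moderately hard
    samples pay more than impossibly hard ones (one plausible shaping consistent
    with the abstract; the paper's exact form may differ)."""
    return pred_error if pred_error <= delta else 2.0 * delta - pred_error


def env_step(s, a):
    """Toy dynamics standing in for a robotic manipulation environment."""
    return s + 0.1 * np.pad(a, (0, OBS_DIM - ACT_DIM)) + 0.01 * rng.normal(size=OBS_DIM)


inverse_model = LinearInverseModel(OBS_DIM, ACT_DIM)
s = rng.normal(size=OBS_DIM)
for t in range(1000):
    # Placeholder for the DRL agent: the paper trains this policy with RL to
    # maximize `exploration_reward`; here we simply sample random actions.
    a = rng.normal(size=ACT_DIM)
    s_next = env_step(s, a)
    pred_error = inverse_model.update(s, s_next, a)  # train on the new sample
    r = exploration_reward(pred_error)               # signal that would train the agent
    s = s_next
```

In the paper, the random placeholder policy would be a DRL agent optimized on this reward signal, and the inverse dynamics model would presumably be a deep network rather than the linear stand-in used above.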

Cite this Paper


BibTeX
@InProceedings{pmlr-v100-hong20a,
  title     = {Adversarial Active Exploration for Inverse Dynamics Model Learning},
  author    = {Hong, Zhang-Wei and Fu, Tsu-Jui and Shann, Tzu-Yun and Lee, Chun-Yi},
  booktitle = {Proceedings of the Conference on Robot Learning},
  pages     = {552--565},
  year      = {2020},
  editor    = {Kaelbling, Leslie Pack and Kragic, Danica and Sugiura, Komei},
  volume    = {100},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Oct--01 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v100/hong20a/hong20a.pdf},
  url       = {https://proceedings.mlr.press/v100/hong20a.html}
}
Endnote
%0 Conference Paper
%T Adversarial Active Exploration for Inverse Dynamics Model Learning
%A Zhang-Wei Hong
%A Tsu-Jui Fu
%A Tzu-Yun Shann
%A Chun-Yi Lee
%B Proceedings of the Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Leslie Pack Kaelbling
%E Danica Kragic
%E Komei Sugiura
%F pmlr-v100-hong20a
%I PMLR
%P 552--565
%U https://proceedings.mlr.press/v100/hong20a.html
%V 100
APA
Hong, Z., Fu, T., Shann, T. & Lee, C. (2020). Adversarial Active Exploration for Inverse Dynamics Model Learning. Proceedings of the Conference on Robot Learning, in Proceedings of Machine Learning Research 100:552-565. Available from https://proceedings.mlr.press/v100/hong20a.html.
