Learning Human Utility from Video Demonstrations for Deductive Planning in Robotics

Nishant Shukla, Yunzhong He, Frank Chen, Song-Chun Zhu
Proceedings of the 1st Annual Conference on Robot Learning, PMLR 78:448-457, 2017.

Abstract

We uncouple three components of autonomous behavior (utilitarian value, causal reasoning, and fine motion control) to design an interpretable model of tasks from video demonstrations. Utilitarian value is learned from aggregating human preferences to understand the implicit goal of a task, explaining why an action sequence was performed. Causal reasoning is seeded from observations and grows from robot experiences to explain how to deductively accomplish sub-goals. And lastly, fine motion control describes what actuators to move. In our experiments, a robot learns how to fold t-shirts from visual demonstrations, and proposes a plan (by answering why, how, and what) when folding never-before-seen articles of clothing.
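
The first component, a utility learned by aggregating human preferences, can be illustrated with a small ranking-style sketch. The snippet below is not the authors' implementation; it assumes a generic linear utility over hand-crafted state features and fits it to pairwise preferences (one garment state judged closer to the goal than another) with a Bradley-Terry style logistic objective. All names (fit_utility, the toy "flatness"/"compactness" features) are illustrative.

# Minimal sketch (assumed, not the paper's method): fit a linear utility
# u(s) = w . phi(s) to pairwise human preferences with a Bradley-Terry
# style log-likelihood, optimized by plain gradient ascent.
import numpy as np

def fit_utility(preferred, dispreferred, lr=0.1, epochs=500):
    """preferred[i] and dispreferred[i] are feature vectors of two states,
    where humans judged preferred[i] as closer to the task goal."""
    dim = preferred.shape[1]
    w = np.zeros(dim)
    for _ in range(epochs):
        # P(preferred > dispreferred) = sigmoid(w . (phi_p - phi_d))
        diff = preferred - dispreferred            # (N, dim)
        p = 1.0 / (1.0 + np.exp(-diff @ w))        # (N,)
        grad = diff.T @ (1.0 - p) / len(p)         # gradient of the log-likelihood
        w += lr * grad
    return w

# Toy example: 2-D features, e.g. (flatness, compactness) of a folded garment.
rng = np.random.default_rng(0)
preferred    = rng.normal(loc=[0.8, 0.7], scale=0.1, size=(50, 2))
dispreferred = rng.normal(loc=[0.3, 0.2], scale=0.1, size=(50, 2))
w = fit_utility(preferred, dispreferred)
print("learned utility weights:", w)   # larger weight => that feature raises utility

Under these assumptions, candidate end states of a plan can be ranked by w . phi(s), which is the "why" signal that the causal model and motion controller would then act on.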

Cite this Paper


BibTeX
@InProceedings{pmlr-v78-shukla17a,
  title     = {Learning Human Utility from Video Demonstrations for Deductive Planning in Robotics},
  author    = {Shukla, Nishant and He, Yunzhong and Chen, Frank and Zhu, Song-Chun},
  booktitle = {Proceedings of the 1st Annual Conference on Robot Learning},
  pages     = {448--457},
  year      = {2017},
  editor    = {Levine, Sergey and Vanhoucke, Vincent and Goldberg, Ken},
  volume    = {78},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--15 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v78/shukla17a/shukla17a.pdf},
  url       = {https://proceedings.mlr.press/v78/shukla17a.html},
  abstract  = {We uncouple three components of autonomous behavior (utilitarian value, causal reasoning, and fine motion control) to design an interpretable model of tasks from video demonstrations. Utilitarian value is learned from aggregating human preferences to understand the implicit goal of a task, explaining \textit{why} an action sequence was performed. Causal reasoning is seeded from observations and grows from robot experiences to explain \textit{how} to deductively accomplish sub-goals. And lastly, fine motion control describes \textit{what} actuators to move. In our experiments, a robot learns how to fold t-shirts from visual demonstrations, and proposes a plan (by answering \textit{why}, \textit{how}, and \textit{what}) when folding never-before-seen articles of clothing.}
}
Endnote
%0 Conference Paper
%T Learning Human Utility from Video Demonstrations for Deductive Planning in Robotics
%A Nishant Shukla
%A Yunzhong He
%A Frank Chen
%A Song-Chun Zhu
%B Proceedings of the 1st Annual Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Sergey Levine
%E Vincent Vanhoucke
%E Ken Goldberg
%F pmlr-v78-shukla17a
%I PMLR
%P 448--457
%U https://proceedings.mlr.press/v78/shukla17a.html
%V 78
%X We uncouple three components of autonomous behavior (utilitarian value, causal reasoning, and fine motion control) to design an interpretable model of tasks from video demonstrations. Utilitarian value is learned from aggregating human preferences to understand the implicit goal of a task, explaining why an action sequence was performed. Causal reasoning is seeded from observations and grows from robot experiences to explain how to deductively accomplish sub-goals. And lastly, fine motion control describes what actuators to move. In our experiments, a robot learns how to fold t-shirts from visual demonstrations, and proposes a plan (by answering why, how, and what) when folding never-before-seen articles of clothing.
APA
Shukla, N., He, Y., Chen, F. & Zhu, S. (2017). Learning Human Utility from Video Demonstrations for Deductive Planning in Robotics. Proceedings of the 1st Annual Conference on Robot Learning, in Proceedings of Machine Learning Research 78:448-457. Available from https://proceedings.mlr.press/v78/shukla17a.html.