Learning Human Utility from Video Demonstrations for Deductive Planning in Robotics
Proceedings of the 1st Annual Conference on Robot Learning, PMLR 78:448-457, 2017.
Abstract
We uncouple three components of autonomous behavior (utilitarian value, causal reasoning, and fine motion control) to design an interpretable model of tasks from video demonstrations. Utilitarian value is learned from aggregating human preferences to understand the implicit goal of a task, explaining why an action sequence was performed. Causal reasoning is seeded from observations and grows from robot experiences to explain how to deductively accomplish sub-goals. And lastly, fine motion control describes what actuators to move. In our experiments, a robot learns how to fold t-shirts from visual demonstrations, and proposes a plan (by answering why, how, and what) when folding never-before-seen articles of clothing.
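For illustration only, here is a minimal sketch of the preference-based utility-learning idea the abstract describes: fitting a scalar utility from pairwise human preferences over states via a Bradley-Terry style logistic loss. The linear utility form, the garment features, and the synthetic data below are assumptions for the sketch, not the paper's implementation.

```python
import numpy as np

# Sketch: learn a linear utility u(s) = w . phi(s) from pairwise human
# preferences (state a preferred over state b), using a Bradley-Terry
# logistic loss. All features and data here are hypothetical placeholders.

rng = np.random.default_rng(0)

def fit_utility(phi_a, phi_b, lr=0.1, steps=2000):
    """Gradient descent on -log P(a preferred over b | w).

    phi_a, phi_b: (n, d) feature arrays; row i means state a_i was
    preferred by the human annotator over state b_i.
    """
    diff = phi_a - phi_b
    w = np.zeros(diff.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(diff @ w)))       # P(a beats b)
        grad = -((1.0 - p)[:, None] * diff).mean(axis=0)
        w -= lr * grad
    return w

# Hypothetical 3-D state features, e.g. (flatness, symmetry, compactness)
# of a partially folded garment; later fold states are preferred to earlier.
d, n = 3, 200
true_w = np.array([1.0, 0.5, 2.0])
phi_a = rng.normal(size=(n, d))
phi_b = rng.normal(size=(n, d))
# Relabel so the higher-true-utility state of each pair is the preferred one.
swap = (phi_a @ true_w) < (phi_b @ true_w)
phi_a[swap], phi_b[swap] = phi_b[swap].copy(), phi_a[swap].copy()

w = fit_utility(phi_a, phi_b)
print("learned utility direction:", w / np.linalg.norm(w))
```

A utility learned this way scores candidate world states, so a planner can deduce which sub-goal sequence increases utility; the causal-reasoning and motion-control components in the paper then answer how and what for each step.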