ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning

Ryan Hoque, Ashwin Balakrishna, Ellen Novoseller, Albert Wilcox, Daniel S. Brown, Ken Goldberg
Proceedings of the 5th Conference on Robot Learning, PMLR 164:598-608, 2022.

Abstract

Effective robot learning often requires online human feedback and interventions that can cost significant human time, giving rise to the central challenge in interactive imitation learning: is it possible to control the timing and length of interventions to both facilitate learning and limit burden on the human supervisor? This paper presents ThriftyDAgger, an algorithm for actively querying a human supervisor given a desired budget of human interventions. ThriftyDAgger uses a learned switching policy to solicit interventions only at states that are sufficiently (1) novel, where the robot policy has no reference behavior to imitate, or (2) risky, where the robot has low confidence in task completion. To detect the latter, we introduce a novel metric for estimating risk under the current robot policy. Experiments in simulation and on a physical cable routing experiment suggest that ThriftyDAgger’s intervention criteria balance task performance and supervisor burden more effectively than prior algorithms. ThriftyDAgger can also be applied at execution time, where it achieves a 100% success rate on both the simulation and physical tasks. A user study (N=10) in which users control a three-robot fleet while also performing a concentration task suggests that ThriftyDAgger increases human and robot performance by 58% and 80% respectively compared to the next best algorithm while reducing supervisor burden. See https://tinyurl.com/thrifty-dagger for supplementary material.
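The gating rule the abstract describes — query the human only at states that are sufficiently novel or sufficiently risky — can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the ensemble-disagreement novelty proxy, the learned task-completion estimate `q_fn`, and all function names and thresholds are assumptions for illustration; the thresholds would be calibrated offline to match the desired intervention budget.

```python
import numpy as np

def novelty(state, ensemble):
    """Disagreement among an ensemble of imitation policies at this state.
    (Illustrative proxy; the paper's exact novelty measure may differ.)"""
    actions = np.stack([pi(state) for pi in ensemble])
    return actions.std(axis=0).mean()

def risk(state, action, q_fn):
    """Estimated probability that the robot fails to complete the task
    from this state-action pair, via a learned value in [0, 1]
    (hypothetical interface)."""
    return 1.0 - q_fn(state, action)

def should_intervene(state, robot_policy, ensemble, q_fn,
                     tau_novel, tau_risk):
    """Switching rule: solicit a human intervention only when the state is
    sufficiently novel OR sufficiently risky. tau_novel and tau_risk are
    thresholds tuned so interventions stay within a human-time budget."""
    action = robot_policy(state)
    return (novelty(state, ensemble) > tau_novel
            or risk(state, action, q_fn) > tau_risk)
```

With an ensemble that agrees perfectly (zero novelty), the gate fires only when the confidence estimate drops below the risk threshold, which is the intended budget-saving behavior.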

Cite this Paper


BibTeX
@InProceedings{pmlr-v164-hoque22a,
  title = {ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning},
  author = {Hoque, Ryan and Balakrishna, Ashwin and Novoseller, Ellen and Wilcox, Albert and Brown, Daniel S. and Goldberg, Ken},
  booktitle = {Proceedings of the 5th Conference on Robot Learning},
  pages = {598--608},
  year = {2022},
  editor = {Faust, Aleksandra and Hsu, David and Neumann, Gerhard},
  volume = {164},
  series = {Proceedings of Machine Learning Research},
  month = {08--11 Nov},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v164/hoque22a/hoque22a.pdf},
  url = {https://proceedings.mlr.press/v164/hoque22a.html},
  abstract = {Effective robot learning often requires online human feedback and interventions that can cost significant human time, giving rise to the central challenge in interactive imitation learning: is it possible to control the timing and length of interventions to both facilitate learning and limit burden on the human supervisor? This paper presents ThriftyDAgger, an algorithm for actively querying a human supervisor given a desired budget of human interventions. ThriftyDAgger uses a learned switching policy to solicit interventions only at states that are sufficiently (1) novel, where the robot policy has no reference behavior to imitate, or (2) risky, where the robot has low confidence in task completion. To detect the latter, we introduce a novel metric for estimating risk under the current robot policy. Experiments in simulation and on a physical cable routing experiment suggest that ThriftyDAgger’s intervention criteria balance task performance and supervisor burden more effectively than prior algorithms. ThriftyDAgger can also be applied at execution time, where it achieves a 100% success rate on both the simulation and physical tasks. A user study (N=10) in which users control a three-robot fleet while also performing a concentration task suggests that ThriftyDAgger increases human and robot performance by 58% and 80% respectively compared to the next best algorithm while reducing supervisor burden. See https://tinyurl.com/thrifty-dagger for supplementary material.}
}
Endnote
%0 Conference Paper
%T ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning
%A Ryan Hoque
%A Ashwin Balakrishna
%A Ellen Novoseller
%A Albert Wilcox
%A Daniel S. Brown
%A Ken Goldberg
%B Proceedings of the 5th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Aleksandra Faust
%E David Hsu
%E Gerhard Neumann
%F pmlr-v164-hoque22a
%I PMLR
%P 598--608
%U https://proceedings.mlr.press/v164/hoque22a.html
%V 164
%X Effective robot learning often requires online human feedback and interventions that can cost significant human time, giving rise to the central challenge in interactive imitation learning: is it possible to control the timing and length of interventions to both facilitate learning and limit burden on the human supervisor? This paper presents ThriftyDAgger, an algorithm for actively querying a human supervisor given a desired budget of human interventions. ThriftyDAgger uses a learned switching policy to solicit interventions only at states that are sufficiently (1) novel, where the robot policy has no reference behavior to imitate, or (2) risky, where the robot has low confidence in task completion. To detect the latter, we introduce a novel metric for estimating risk under the current robot policy. Experiments in simulation and on a physical cable routing experiment suggest that ThriftyDAgger’s intervention criteria balance task performance and supervisor burden more effectively than prior algorithms. ThriftyDAgger can also be applied at execution time, where it achieves a 100% success rate on both the simulation and physical tasks. A user study (N=10) in which users control a three-robot fleet while also performing a concentration task suggests that ThriftyDAgger increases human and robot performance by 58% and 80% respectively compared to the next best algorithm while reducing supervisor burden. See https://tinyurl.com/thrifty-dagger for supplementary material.
APA
Hoque, R., Balakrishna, A., Novoseller, E., Wilcox, A., Brown, D.S. & Goldberg, K. (2022). ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning. Proceedings of the 5th Conference on Robot Learning, in Proceedings of Machine Learning Research 164:598-608. Available from https://proceedings.mlr.press/v164/hoque22a.html.