Learning Visualization Policies of Augmented Reality for Human-Robot Collaboration

Kishan Dhananjay Chandan, Jack Albertson, Shiqi Zhang
Proceedings of The 6th Conference on Robot Learning, PMLR 205:1233-1243, 2023.

Abstract

In human-robot collaboration domains, augmented reality (AR) technologies have enabled people to visualize the state of robots. Current AR-based visualization policies are designed manually, which requires extensive human effort and domain knowledge. When too little information is visualized, human users find the AR interface unhelpful; when too much information is visualized, they find it difficult to process the visualized information. In this paper, we develop an intelligent AR agent that learns visualization policies (what to visualize, when, and how) from demonstrations. We created a Unity-based platform for simulating warehouse environments where human-robot teammates work on collaborative delivery tasks, and collected a dataset of demonstrations of visualizing robots’ current and planned behaviors. Our results from experiments with real human participants show that, compared with competitive baselines from the literature, our learned visualization strategies significantly increase the efficiency of human-robot teams in delivery tasks, while reducing the distraction level of human users.
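
To make the idea of "learning a visualization policy from demonstrations" concrete, below is a minimal illustrative sketch of one plausible formulation: behavioral cloning, where demonstrated (state, visualization-choice) pairs supervise a classifier that decides what to render. This is not the authors' implementation; the state features, action set, and model here are assumptions for illustration only.

```python
# Minimal sketch (assumed formulation, not the paper's method): behavioral
# cloning of an AR visualization policy. Each demonstration pairs a state
# vector with the demonstrator's visualization choice.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical visualization actions: render nothing, the robot's current
# pose, or its planned path.
ACTIONS = ["none", "current_pose", "planned_path"]

# Synthetic stand-in for demonstration data. Columns are hypothetical
# features: [distance_to_robot, robot_speed, task_phase].
X = rng.random((500, 3))
y = (X[:, 0] * 2 + X[:, 2]).astype(int) % len(ACTIONS)

# Softmax-regression policy trained by gradient descent on cross-entropy.
W = np.zeros((X.shape[1], len(ACTIONS)))
b = np.zeros(len(ACTIONS))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

one_hot = np.eye(len(ACTIONS))[y]
for _ in range(300):
    probs = softmax(X @ W + b)
    grad = probs - one_hot                # cross-entropy gradient
    W -= 0.1 * X.T @ grad / len(X)
    b -= 0.1 * grad.mean(axis=0)

def visualize(state):
    """Map a state vector to a visualization action (what to show)."""
    p = softmax(state.reshape(1, -1) @ W + b)
    return ACTIONS[int(p.argmax())]

print(visualize(np.array([0.9, 0.2, 0.5])))  # e.g. 'planned_path'
```

A real system of this kind would condition on far richer state (robot plans, human gaze, task progress) and also decide when and how to render, but the supervised mapping from demonstrations to visualization choices follows the same pattern.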

Cite this Paper


BibTeX
@InProceedings{pmlr-v205-chandan23a,
  title     = {Learning Visualization Policies of Augmented Reality for Human-Robot Collaboration},
  author    = {Chandan, Kishan Dhananjay and Albertson, Jack and Zhang, Shiqi},
  booktitle = {Proceedings of The 6th Conference on Robot Learning},
  pages     = {1233--1243},
  year      = {2023},
  editor    = {Liu, Karen and Kulic, Dana and Ichnowski, Jeff},
  volume    = {205},
  series    = {Proceedings of Machine Learning Research},
  month     = {14--18 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v205/chandan23a/chandan23a.pdf},
  url       = {https://proceedings.mlr.press/v205/chandan23a.html}
}
Endnote
%0 Conference Paper
%T Learning Visualization Policies of Augmented Reality for Human-Robot Collaboration
%A Kishan Dhananjay Chandan
%A Jack Albertson
%A Shiqi Zhang
%B Proceedings of The 6th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Karen Liu
%E Dana Kulic
%E Jeff Ichnowski
%F pmlr-v205-chandan23a
%I PMLR
%P 1233--1243
%U https://proceedings.mlr.press/v205/chandan23a.html
%V 205
APA
Chandan, K. D., Albertson, J., & Zhang, S. (2023). Learning Visualization Policies of Augmented Reality for Human-Robot Collaboration. Proceedings of The 6th Conference on Robot Learning, in Proceedings of Machine Learning Research 205:1233-1243. Available from https://proceedings.mlr.press/v205/chandan23a.html.