Assisted Perception: Optimizing Observations to Communicate State

Siddharth Reddy, Sergey Levine, Anca Dragan
Proceedings of the 2020 Conference on Robot Learning, PMLR 155:748-764, 2021.

Abstract

We aim to help users estimate the state of the world in tasks like robotic teleoperation and navigation with visual impairment, where users may have systematic biases that lead to suboptimal behavior: they might struggle to process observations from multiple sensors simultaneously, receive delayed observations, or underestimate distances to obstacles. While we cannot directly change the user’s internal beliefs or their internal state estimation process, our insight is that we can still assist them by modifying the user’s observations. Instead of showing the user their true observations, we synthesize new observations that lead to more accurate internal state estimates when processed by the user. We refer to this method as assistive state estimation (ASE): an automated assistant uses the true observations to infer the state of the world, then generates a modified observation for the user to consume (e.g., through an augmented reality interface), and optimizes the modification to induce the user’s new beliefs to match the assistant’s current beliefs. To predict the effect of the modified observation on the user’s beliefs, ASE learns a model of the user’s state estimation process: after each task completion, it searches for a model that would have led to beliefs that explain the user’s actions. We evaluate ASE in a user study with 12 participants who each perform four tasks: two tasks with known user biases – bandwidth-limited image classification and a driving video game with observation delay – and two with unknown biases that our method has to learn – guided 2D navigation and a lunar lander teleoperation video game.
ASE’s general-purpose approach to synthesizing informative observations enables a different assistance strategy to emerge in each domain, such as quickly revealing informative pixels to speed up image classification, using a dynamics model to undo observation delay in driving, identifying nearby landmarks for navigation, and exaggerating a visual indicator of tilt in the lander game. The results show that ASE substantially improves the task performance of users with bandwidth constraints, observation delay, and other unknown biases.
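As a toy illustration of the loop the abstract describes (the assistant infers a belief over states from the true observation, then optimizes a modified observation so that the user's predicted belief matches the assistant's), the following Python sketch works over a discrete observation space. The likelihood table, the confusion-style user bias, and all function names are hypothetical simplifications for illustration, not the paper's implementation; in particular, learning the user model from observed actions is omitted here.

```python
import numpy as np

# Toy problem: 2 hidden states, 3 possible observations.
# LIKELIHOOD[s, o] = p(observation o | state s). Hypothetical numbers.
LIKELIHOOD = np.array([[0.7, 0.2, 0.1],
                       [0.1, 0.2, 0.7]])


def assistant_belief(true_obs, prior):
    """Assistant's posterior over states given the true observation
    (a plain Bayes update with the known likelihood table)."""
    posterior = prior * LIKELIHOOD[:, true_obs]
    return posterior / posterior.sum()


def user_belief_model(obs, prior, bias):
    """Stand-in for the learned model of the user's biased state
    estimation: the user confuses observation o with its neighbor
    with probability `bias`."""
    confusable = (obs + 1) % LIKELIHOOD.shape[1]
    lik = (1 - bias) * LIKELIHOOD[:, obs] + bias * LIKELIHOOD[:, confusable]
    posterior = prior * lik
    return posterior / posterior.sum()


def synthesize_observation(true_obs, prior, bias):
    """Pick the modified observation whose predicted user belief is
    closest (in KL divergence) to the assistant's belief."""
    target = assistant_belief(true_obs, prior)

    def kl(p, q):
        return float(np.sum(p * np.log(p / q)))

    candidates = range(LIKELIHOOD.shape[1])
    return min(candidates,
               key=lambda o: kl(target, user_belief_model(o, prior, bias)))


prior = np.array([0.5, 0.5])
modified = synthesize_observation(true_obs=0, prior=prior, bias=0.3)
```

In this toy instance the true observation already induces the closest user belief, so the assistant passes it through unchanged; with a stronger or asymmetric bias, the optimizer would instead select a different observation that compensates for the user's misreading.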

Cite this Paper


BibTeX
@InProceedings{pmlr-v155-reddy21a,
  title = {Assisted Perception: Optimizing Observations to Communicate State},
  author = {Reddy, Siddharth and Levine, Sergey and Dragan, Anca},
  booktitle = {Proceedings of the 2020 Conference on Robot Learning},
  pages = {748--764},
  year = {2021},
  editor = {Kober, Jens and Ramos, Fabio and Tomlin, Claire},
  volume = {155},
  series = {Proceedings of Machine Learning Research},
  month = {16--18 Nov},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v155/reddy21a/reddy21a.pdf},
  url = {https://proceedings.mlr.press/v155/reddy21a.html},
  abstract = {We aim to help users estimate the state of the world in tasks like robotic teleoperation and navigation with visual impairment, where users may have systematic biases that lead to suboptimal behavior: they might struggle to process observations from multiple sensors simultaneously, receive delayed observations, or underestimate distances to obstacles. While we cannot directly change the user’s internal beliefs or their internal state estimation process, our insight is that we can still assist them by modifying the user’s observations. Instead of showing the user their true observations, we synthesize new observations that lead to more accurate internal state estimates when processed by the user. We refer to this method as assistive state estimation (ASE): an automated assistant uses the true observations to infer the state of the world, then generates a modified observation for the user to consume (e.g., through an augmented reality interface), and optimizes the modification to induce the user’s new beliefs to match the assistant’s current beliefs. To predict the effect of the modified observation on the user’s beliefs, ASE learns a model of the user’s state estimation process: after each task completion, it searches for a model that would have led to beliefs that explain the user’s actions. We evaluate ASE in a user study with 12 participants who each perform four tasks: two tasks with known user biases – bandwidth-limited image classification and a driving video game with observation delay – and two with unknown biases that our method has to learn – guided 2D navigation and a lunar lander teleoperation video game. ASE’s general-purpose approach to synthesizing informative observations enables a different assistance strategy to emerge in each domain, such as quickly revealing informative pixels to speed up image classification, using a dynamics model to undo observation delay in driving, identifying nearby landmarks for navigation, and exaggerating a visual indicator of tilt in the lander game. The results show that ASE substantially improves the task performance of users with bandwidth constraints, observation delay, and other unknown biases.}
}
Endnote
%0 Conference Paper
%T Assisted Perception: Optimizing Observations to Communicate State
%A Siddharth Reddy
%A Sergey Levine
%A Anca Dragan
%B Proceedings of the 2020 Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Jens Kober
%E Fabio Ramos
%E Claire Tomlin
%F pmlr-v155-reddy21a
%I PMLR
%P 748--764
%U https://proceedings.mlr.press/v155/reddy21a.html
%V 155
%X We aim to help users estimate the state of the world in tasks like robotic teleoperation and navigation with visual impairment, where users may have systematic biases that lead to suboptimal behavior: they might struggle to process observations from multiple sensors simultaneously, receive delayed observations, or underestimate distances to obstacles. While we cannot directly change the user’s internal beliefs or their internal state estimation process, our insight is that we can still assist them by modifying the user’s observations. Instead of showing the user their true observations, we synthesize new observations that lead to more accurate internal state estimates when processed by the user. We refer to this method as assistive state estimation (ASE): an automated assistant uses the true observations to infer the state of the world, then generates a modified observation for the user to consume (e.g., through an augmented reality interface), and optimizes the modification to induce the user’s new beliefs to match the assistant’s current beliefs. To predict the effect of the modified observation on the user’s beliefs, ASE learns a model of the user’s state estimation process: after each task completion, it searches for a model that would have led to beliefs that explain the user’s actions. We evaluate ASE in a user study with 12 participants who each perform four tasks: two tasks with known user biases – bandwidth-limited image classification and a driving video game with observation delay – and two with unknown biases that our method has to learn – guided 2D navigation and a lunar lander teleoperation video game. ASE’s general-purpose approach to synthesizing informative observations enables a different assistance strategy to emerge in each domain, such as quickly revealing informative pixels to speed up image classification, using a dynamics model to undo observation delay in driving, identifying nearby landmarks for navigation, and exaggerating a visual indicator of tilt in the lander game. The results show that ASE substantially improves the task performance of users with bandwidth constraints, observation delay, and other unknown biases.
APA
Reddy, S., Levine, S. & Dragan, A. (2021). Assisted Perception: Optimizing Observations to Communicate State. Proceedings of the 2020 Conference on Robot Learning, in Proceedings of Machine Learning Research 155:748-764. Available from https://proceedings.mlr.press/v155/reddy21a.html.
