Synthesizing Adversarial Visual Scenarios for Model-Based Robotic Control

Shubhankar Agarwal, Sandeep P. Chinchali
Proceedings of The 6th Conference on Robot Learning, PMLR 205:800-811, 2023.

Abstract

Today’s robots often interface data-driven perception and planning models with classical model-predictive controllers (MPC). Such learned perception/planning models frequently produce erroneous waypoint predictions on out-of-distribution (OoD) or even adversarial visual inputs, which increase control cost. However, today’s methods to train robust perception models are largely task-agnostic: they augment a dataset using random image transformations or adversarial examples targeted at the vision model in isolation. As such, they often introduce pixel perturbations that are ultimately benign for control. In contrast to prior work that synthesizes adversarial examples for single-step vision tasks, our key contribution is to synthesize adversarial scenarios tailored to multi-step, model-based control. To do so, we use differentiable MPC methods to calculate the sensitivity of a model-based controller to errors in state estimation. We show that re-training vision models on these adversarial datasets improves control performance on OoD test scenarios by up to 36.2% compared to standard task-agnostic data augmentation. We demonstrate our method on examples of robotic navigation, manipulation in RoboSuite, and control of an autonomous air vehicle.
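The core idea can be illustrated with a short PyTorch sketch. The snippet below is a minimal, hypothetical example and not the authors' code: all names (PerceptionNet assumptions aside, tracking_cost, synthesize_adversarial_image, eps, alpha) are illustrative, and a simple unrolled PD controller on double-integrator dynamics stands in for the paper's full differentiable MPC. The point it demonstrates is the control-aware objective: projected gradient ascent perturbs the pixels to maximize the closed-loop control cost incurred when the controller tracks the perception model's predicted waypoint, rather than maximizing raw prediction error.

```python
import torch

def tracking_cost(pred_wp, true_goal, T=10, dt=0.1):
    # Differentiable stand-in for the MPC rollout: a PD controller on
    # double-integrator dynamics chases the *predicted* waypoint, while
    # the cost is measured against the *true* goal. Errors in pred_wp
    # that matter for control show up as large rollout cost.
    pos = torch.zeros_like(pred_wp)
    vel = torch.zeros_like(pred_wp)
    cost = torch.zeros(())
    for _ in range(T):
        u = 2.0 * (pred_wp - pos) - 1.0 * vel    # PD control toward prediction
        vel = vel + dt * u
        pos = pos + dt * vel
        cost = cost + (pos - true_goal).pow(2).sum() + 1e-2 * u.pow(2).sum()
    return cost

def synthesize_adversarial_image(model, image, true_goal,
                                 eps=8 / 255, alpha=2 / 255, steps=10):
    # PGD ascent on the closed-loop control cost (not the perception loss):
    # find the L-inf-bounded pixel perturbation that most degrades control.
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        pred_wp = model(image + delta)           # perception: image -> waypoint
        cost = tracking_cost(pred_wp, true_goal)
        (grad,) = torch.autograd.grad(cost, delta)
        with torch.no_grad():
            delta += alpha * grad.sign()         # ascend the control cost
            delta.clamp_(-eps, eps)              # project back into the ball
    return (image + delta).detach()
```

Under this sketch, the synthesized images would then be added back into the training set, so that re-training the perception model hardens it specifically against the state-estimation errors that are costly for downstream control.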

Cite this Paper


BibTeX
@InProceedings{pmlr-v205-agarwal23b,
  title     = {Synthesizing Adversarial Visual Scenarios for Model-Based Robotic Control},
  author    = {Agarwal, Shubhankar and Chinchali, Sandeep P.},
  booktitle = {Proceedings of The 6th Conference on Robot Learning},
  pages     = {800--811},
  year      = {2023},
  editor    = {Liu, Karen and Kulic, Dana and Ichnowski, Jeff},
  volume    = {205},
  series    = {Proceedings of Machine Learning Research},
  month     = {14--18 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v205/agarwal23b/agarwal23b.pdf},
  url       = {https://proceedings.mlr.press/v205/agarwal23b.html}
}
Endnote
%0 Conference Paper
%T Synthesizing Adversarial Visual Scenarios for Model-Based Robotic Control
%A Shubhankar Agarwal
%A Sandeep P. Chinchali
%B Proceedings of The 6th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Karen Liu
%E Dana Kulic
%E Jeff Ichnowski
%F pmlr-v205-agarwal23b
%I PMLR
%P 800--811
%U https://proceedings.mlr.press/v205/agarwal23b.html
%V 205
APA
Agarwal, S. & Chinchali, S. P. (2023). Synthesizing Adversarial Visual Scenarios for Model-Based Robotic Control. Proceedings of The 6th Conference on Robot Learning, in Proceedings of Machine Learning Research 205:800-811. Available from https://proceedings.mlr.press/v205/agarwal23b.html.
