Generating Transferable Adversarial Simulation Scenarios for Self-Driving via Neural Rendering

Yasasa Abeysirigoonawardena, Kevin Xie, Chuhan Chen, Salar Hosseini Khorasgani, Ruiting Chen, Ruiqi Wang, Florian Shkurti
Proceedings of The 7th Conference on Robot Learning, PMLR 229:3710-3731, 2023.

Abstract

Self-driving software pipelines include components that are learned from a significant number of training examples, yet it remains challenging to evaluate the overall system’s safety and generalization performance. Together with scaling up the real-world deployment of autonomous vehicles, it is of critical importance to automatically find simulation scenarios where the driving policies will fail. We propose a method that efficiently generates adversarial simulation scenarios for autonomous driving by solving an optimal control problem that aims to maximally perturb the policy from its nominal trajectory. Given an image-based driving policy, we show that we can inject new objects in a neural rendering representation of the deployment scene, and optimize their texture in order to generate adversarial sensor inputs to the policy. We demonstrate that adversarial scenarios discovered purely in the neural renderer (surrogate scene) can often be successfully transferred to the deployment scene, without further optimization. We demonstrate this transfer occurs both in simulated and real environments, provided the learned surrogate scene is sufficiently close to the deployment scene.
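The optimization the abstract describes, perturbing an injected object's texture so that rendered sensor inputs push the policy away from its nominal trajectory, can be sketched schematically. This is a minimal illustrative sketch, not the authors' implementation: it assumes a scalar texture parameter, user-supplied `render`, `policy`, and `dynamics` callables (all hypothetical placeholders), and uses finite-difference gradient ascent in place of differentiating through a neural renderer.

```python
def rollout_deviation(texture, render, policy, dynamics, nominal):
    """Total deviation of the closed-loop rollout from the nominal trajectory.

    `render(state, texture)` stands in for the surrogate-scene neural renderer,
    `policy(obs)` for the image-based driving policy, and `dynamics(state, action)`
    for the vehicle model. All are hypothetical placeholders.
    """
    state, deviation = nominal[0], 0.0
    for target in nominal[1:]:
        obs = render(state, texture)          # sensor input from the surrogate scene
        state = dynamics(state, policy(obs))  # close the loop through the policy
        deviation += abs(state - target)      # distance from the nominal trajectory
    return deviation

def attack_texture(texture, render, policy, dynamics, nominal,
                   steps=100, lr=0.1, eps=1e-4):
    """Gradient *ascent* on the deviation objective via central finite differences,
    keeping the texture parameter in a valid [0, 1] range."""
    for _ in range(steps):
        grad = (rollout_deviation(texture + eps, render, policy, dynamics, nominal)
                - rollout_deviation(texture - eps, render, policy, dynamics, nominal)) / (2 * eps)
        texture = min(1.0, max(0.0, texture + lr * grad))
    return texture
```

In the paper's setting the renderer is differentiable, so the finite-difference step would be replaced by backpropagation through the rendered images; the sketch only conveys the outer structure of the adversarial optimal control problem.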

Cite this Paper


BibTeX
@InProceedings{pmlr-v229-abeysirigoonawardena23a,
  title     = {Generating Transferable Adversarial Simulation Scenarios for Self-Driving via Neural Rendering},
  author    = {Abeysirigoonawardena, Yasasa and Xie, Kevin and Chen, Chuhan and Khorasgani, Salar Hosseini and Chen, Ruiting and Wang, Ruiqi and Shkurti, Florian},
  booktitle = {Proceedings of The 7th Conference on Robot Learning},
  pages     = {3710--3731},
  year      = {2023},
  editor    = {Tan, Jie and Toussaint, Marc and Darvish, Kourosh},
  volume    = {229},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v229/abeysirigoonawardena23a/abeysirigoonawardena23a.pdf},
  url       = {https://proceedings.mlr.press/v229/abeysirigoonawardena23a.html},
  abstract  = {Self-driving software pipelines include components that are learned from a significant number of training examples, yet it remains challenging to evaluate the overall system’s safety and generalization performance. Together with scaling up the real-world deployment of autonomous vehicles, it is of critical importance to automatically find simulation scenarios where the driving policies will fail. We propose a method that efficiently generates adversarial simulation scenarios for autonomous driving by solving an optimal control problem that aims to maximally perturb the policy from its nominal trajectory. Given an image-based driving policy, we show that we can inject new objects in a neural rendering representation of the deployment scene, and optimize their texture in order to generate adversarial sensor inputs to the policy. We demonstrate that adversarial scenarios discovered purely in the neural renderer (surrogate scene) can often be successfully transferred to the deployment scene, without further optimization. We demonstrate this transfer occurs both in simulated and real environments, provided the learned surrogate scene is sufficiently close to the deployment scene.}
}
Endnote
%0 Conference Paper
%T Generating Transferable Adversarial Simulation Scenarios for Self-Driving via Neural Rendering
%A Yasasa Abeysirigoonawardena
%A Kevin Xie
%A Chuhan Chen
%A Salar Hosseini Khorasgani
%A Ruiting Chen
%A Ruiqi Wang
%A Florian Shkurti
%B Proceedings of The 7th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Jie Tan
%E Marc Toussaint
%E Kourosh Darvish
%F pmlr-v229-abeysirigoonawardena23a
%I PMLR
%P 3710--3731
%U https://proceedings.mlr.press/v229/abeysirigoonawardena23a.html
%V 229
%X Self-driving software pipelines include components that are learned from a significant number of training examples, yet it remains challenging to evaluate the overall system’s safety and generalization performance. Together with scaling up the real-world deployment of autonomous vehicles, it is of critical importance to automatically find simulation scenarios where the driving policies will fail. We propose a method that efficiently generates adversarial simulation scenarios for autonomous driving by solving an optimal control problem that aims to maximally perturb the policy from its nominal trajectory. Given an image-based driving policy, we show that we can inject new objects in a neural rendering representation of the deployment scene, and optimize their texture in order to generate adversarial sensor inputs to the policy. We demonstrate that adversarial scenarios discovered purely in the neural renderer (surrogate scene) can often be successfully transferred to the deployment scene, without further optimization. We demonstrate this transfer occurs both in simulated and real environments, provided the learned surrogate scene is sufficiently close to the deployment scene.
APA
Abeysirigoonawardena, Y., Xie, K., Chen, C., Khorasgani, S.H., Chen, R., Wang, R. & Shkurti, F.. (2023). Generating Transferable Adversarial Simulation Scenarios for Self-Driving via Neural Rendering. Proceedings of The 7th Conference on Robot Learning, in Proceedings of Machine Learning Research 229:3710-3731 Available from https://proceedings.mlr.press/v229/abeysirigoonawardena23a.html.