CAT: Closed-loop Adversarial Training for Safe End-to-End Driving

Linrui Zhang, Zhenghao Peng, Quanyi Li, Bolei Zhou
Proceedings of The 7th Conference on Robot Learning, PMLR 229:2357-2372, 2023.

Abstract

Driving safety is a top priority for autonomous vehicles. Orthogonal to prior work that handles accident-prone traffic events through algorithm design at the policy level, in this paper we investigate a Closed-loop Adversarial Training (CAT) framework for safe end-to-end driving through the lens of environment augmentation. CAT aims to continuously improve the safety of driving agents by training them on safety-critical scenarios that are dynamically generated over time. A novel resampling technique turns log-replay real-world driving scenarios into safety-critical ones via probabilistic factorization, modeling adversarial traffic generation as the multiplication of standard motion prediction sub-problems. Consequently, CAT can launch more efficient physical attacks than existing safety-critical scenario generation methods while incurring significantly less computational cost in the iterative learning pipeline. We incorporate CAT into the MetaDrive simulator and validate our approach on hundreds of driving scenarios imported from real-world driving datasets. Experimental results demonstrate that CAT can effectively generate adversarial scenarios that counter the agent being trained. After training, the agent achieves superior driving safety in both log-replay and safety-critical traffic scenarios on the held-out test set. Code and data are available at: https://metadriverse.github.io/cat
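To make the factorization concrete, here is a minimal sketch of one plausible reading of the abstract's "multiplication of standard motion prediction sub-problems"; the notation (scenario context $s$, ego and adversary trajectories $\tau_{\mathrm{ego}}$ and $\tau_{\mathrm{adv}}$, collision event $C$) is our own illustration, not taken from the paper:

\[
p(\tau_{\mathrm{adv}} \mid C = 1, s)
\;\propto\;
\sum_{\tau_{\mathrm{ego}}}
\underbrace{p(C = 1 \mid \tau_{\mathrm{ego}}, \tau_{\mathrm{adv}})}_{\text{collision check}}\;
\underbrace{p(\tau_{\mathrm{ego}} \mid s)}_{\text{ego motion prediction}}\;
\underbrace{p(\tau_{\mathrm{adv}} \mid s)}_{\text{adversary motion prediction}}.
\]

Under this reading, the two priors are the outputs of standard motion prediction sub-problems, and the collision term is a cheap geometric check between trajectory pairs. Resampling an adversarial trajectory then reduces to reweighting predicted trajectories by whether they collide with the ego's likely futures, with no inner-loop optimization against the policy, which would be consistent with the reduced computational cost the abstract claims for the closed training loop.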

Cite this Paper

BibTeX
@InProceedings{pmlr-v229-zhang23g,
  title     = {CAT: Closed-loop Adversarial Training for Safe End-to-End Driving},
  author    = {Zhang, Linrui and Peng, Zhenghao and Li, Quanyi and Zhou, Bolei},
  booktitle = {Proceedings of The 7th Conference on Robot Learning},
  pages     = {2357--2372},
  year      = {2023},
  editor    = {Tan, Jie and Toussaint, Marc and Darvish, Kourosh},
  volume    = {229},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v229/zhang23g/zhang23g.pdf},
  url       = {https://proceedings.mlr.press/v229/zhang23g.html}
}
Endnote
%0 Conference Paper
%T CAT: Closed-loop Adversarial Training for Safe End-to-End Driving
%A Linrui Zhang
%A Zhenghao Peng
%A Quanyi Li
%A Bolei Zhou
%B Proceedings of The 7th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Jie Tan
%E Marc Toussaint
%E Kourosh Darvish
%F pmlr-v229-zhang23g
%I PMLR
%P 2357--2372
%U https://proceedings.mlr.press/v229/zhang23g.html
%V 229
APA
Zhang, L., Peng, Z., Li, Q., & Zhou, B. (2023). CAT: Closed-loop Adversarial Training for Safe End-to-End Driving. Proceedings of The 7th Conference on Robot Learning, in Proceedings of Machine Learning Research 229:2357-2372. Available from https://proceedings.mlr.press/v229/zhang23g.html.
