How safe am I given what I see? Calibrated prediction of safety chances for image-controlled autonomy

Zhenjiang Mao, Carson Sobolewski, Ivan Ruchkin
Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1370-1387, 2024.

Abstract

End-to-end learning has emerged as a major paradigm for developing autonomous controllers. Unfortunately, with its performance and convenience comes an even greater challenge of safety assurance. A key factor in this challenge is the absence of low-dimensional and interpretable dynamical states, around which traditional assurance methods revolve. Focusing on the online safety prediction problem, this paper systematically investigates a flexible family of learning pipelines based on generative world models, which do not require low-dimensional states. To implement these pipelines, we overcome the challenges of missing safety labels under prediction-induced distribution shift and learning safety-informed latent representations. Moreover, we provide statistical calibration guarantees for our safety chance predictions based on conformal inference. An extensive evaluation of our predictor family on two image-controlled case studies, a racing car and a cartpole, delivers counterintuitive results and highlights open problems in deep safety prediction.
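The calibration guarantee described in the abstract is obtained via conformal inference. As a rough illustration of that general idea only (not the authors' actual predictor, data, or pipeline), the sketch below applies split conformal calibration to a hypothetical safety-chance predictor; the predictor and the synthetic calibration data are assumptions made for the example.

import numpy as np

# Minimal sketch of split conformal calibration for a safety-chance predictor.
# Everything here (the toy predictor, the synthetic calibration data) is an
# illustrative assumption, not the paper's actual pipeline.

rng = np.random.default_rng(0)

def predict_safety_chance(x):
    """Stand-in for a learned model mapping an observation (or its latent
    encoding) to an estimated probability of staying safe."""
    return 1.0 / (1.0 + np.exp(-x))  # toy monotone score in (0, 1)

# Calibration set: held-out observations with known safety outcomes
# (1 = the trajectory remained safe, 0 = it violated safety).
x_cal = rng.normal(size=500)
y_cal = (rng.uniform(size=500) < predict_safety_chance(x_cal)).astype(float)

# Nonconformity score: gap between predicted chance and observed outcome.
scores = np.abs(y_cal - predict_safety_chance(x_cal))

# Split-conformal quantile at miscoverage level alpha (finite-sample rank).
alpha = 0.1
n = len(scores)
k = int(np.ceil((n + 1) * (1 - alpha)))
q = np.sort(scores)[k - 1]

# At test time, the band [p - q, p + q] around the predicted chance covers the
# true outcome with probability at least 1 - alpha, assuming exchangeability.
p = predict_safety_chance(0.3)
print(f"predicted safety chance: {p:.3f}, "
      f"calibrated band: [{max(0.0, p - q):.3f}, {min(1.0, p + q):.3f}]")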

Cite this Paper


BibTeX
@InProceedings{pmlr-v242-mao24c,
  title     = {How safe am {I} given what {I} see? Calibrated prediction of safety chances for image-controlled autonomy},
  author    = {Mao, Zhenjiang and Sobolewski, Carson and Ruchkin, Ivan},
  booktitle = {Proceedings of the 6th Annual Learning for Dynamics \& Control Conference},
  pages     = {1370--1387},
  year      = {2024},
  editor    = {Abate, Alessandro and Cannon, Mark and Margellos, Kostas and Papachristodoulou, Antonis},
  volume    = {242},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--17 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v242/mao24c/mao24c.pdf},
  url       = {https://proceedings.mlr.press/v242/mao24c.html},
  abstract  = {End-to-end learning has emerged as a major paradigm for developing autonomous controllers. Unfortunately, with its performance and convenience comes an even greater challenge of safety assurance. A key factor in this challenge is the absence of low-dimensional and interpretable dynamical states, around which traditional assurance methods revolve. Focusing on the online safety prediction problem, this paper systematically investigates a flexible family of learning pipelines based on generative world models, which do not require low-dimensional states. To implement these pipelines, we overcome the challenges of missing safety labels under prediction-induced distribution shift and learning safety-informed latent representations. Moreover, we provide statistical calibration guarantees for our safety chance predictions based on conformal inference. An extensive evaluation of our predictor family on two image-controlled case studies, a racing car and a cartpole, delivers counterintuitive results and highlights open problems in deep safety prediction.}
}
Endnote
%0 Conference Paper
%T How safe am I given what I see? Calibrated prediction of safety chances for image-controlled autonomy
%A Zhenjiang Mao
%A Carson Sobolewski
%A Ivan Ruchkin
%B Proceedings of the 6th Annual Learning for Dynamics & Control Conference
%C Proceedings of Machine Learning Research
%D 2024
%E Alessandro Abate
%E Mark Cannon
%E Kostas Margellos
%E Antonis Papachristodoulou
%F pmlr-v242-mao24c
%I PMLR
%P 1370--1387
%U https://proceedings.mlr.press/v242/mao24c.html
%V 242
%X End-to-end learning has emerged as a major paradigm for developing autonomous controllers. Unfortunately, with its performance and convenience comes an even greater challenge of safety assurance. A key factor in this challenge is the absence of low-dimensional and interpretable dynamical states, around which traditional assurance methods revolve. Focusing on the online safety prediction problem, this paper systematically investigates a flexible family of learning pipelines based on generative world models, which do not require low-dimensional states. To implement these pipelines, we overcome the challenges of missing safety labels under prediction-induced distribution shift and learning safety-informed latent representations. Moreover, we provide statistical calibration guarantees for our safety chance predictions based on conformal inference. An extensive evaluation of our predictor family on two image-controlled case studies, a racing car and a cartpole, delivers counterintuitive results and highlights open problems in deep safety prediction.
APA
Mao, Z., Sobolewski, C. & Ruchkin, I. (2024). How safe am I given what I see? Calibrated prediction of safety chances for image-controlled autonomy. Proceedings of the 6th Annual Learning for Dynamics & Control Conference, in Proceedings of Machine Learning Research 242:1370-1387. Available from https://proceedings.mlr.press/v242/mao24c.html.