How safe am I given what I see? Calibrated prediction of safety chances for image-controlled autonomy
Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:1370-1387, 2024.
Abstract
End-to-end learning has emerged as a major paradigm for developing autonomous controllers. Unfortunately, with its performance and convenience comes an even greater challenge: safety assurance. A key factor in this challenge is the absence of low-dimensional and interpretable dynamical states, around which traditional assurance methods revolve. Focusing on the online safety prediction problem, this paper systematically investigates a flexible family of learning pipelines based on generative world models, which do not require low-dimensional states. To implement these pipelines, we overcome the challenges of missing safety labels under prediction-induced distribution shift and of learning safety-informed latent representations. Moreover, we provide statistical calibration guarantees for our safety chance predictions based on conformal inference. An extensive evaluation of our predictor family on two image-controlled case studies, a racing car and a cartpole, delivers counterintuitive results and highlights open problems in deep safety prediction.
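To make the conformal-calibration step mentioned in the abstract concrete, the sketch below applies standard split conformal prediction to turn a predictor's raw safety-chance estimates into intervals with finite-sample coverage. This is only an illustrative sketch: the synthetic data, variable names, and the absolute-error nonconformity score are assumptions for exposition and are not taken from the paper's actual pipeline.

```python
import numpy as np

# Hypothetical calibration data: predicted safety chances from a learned
# predictor and observed binary safety outcomes on held-out rollouts.
# (Synthetic and illustrative only, not from the paper.)
rng = np.random.default_rng(0)
pred_cal = rng.uniform(0, 1, size=500)                            # predicted safety chance
y_cal = (rng.uniform(0, 1, size=500) < pred_cal).astype(float)    # observed safety outcome

alpha = 0.1  # target miscoverage level (aiming for 90% coverage)

# Split conformal: nonconformity score = absolute error on the calibration set.
scores = np.abs(y_cal - pred_cal)

# Finite-sample-corrected empirical quantile of the nonconformity scores.
n = len(scores)
q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
q_hat = np.quantile(scores, q_level, method="higher")

# For a new prediction, the calibrated interval on the safety chance is
# [pred - q_hat, pred + q_hat], clipped to [0, 1]; under exchangeability it
# covers the true outcome with probability at least 1 - alpha.
pred_new = 0.8
lo, hi = max(pred_new - q_hat, 0.0), min(pred_new + q_hat, 1.0)
print(f"calibrated safety-chance interval: [{lo:.2f}, {hi:.2f}]")
```

The design choice illustrated here is that calibration is done post hoc on held-out data, so the guarantee holds regardless of how the underlying safety predictor was trained.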