Failing with Grace: Learning Neural Network Controllers that are Boundedly Unsafe
Proceedings of The 5th Annual Learning for Dynamics and Control Conference, PMLR 211:954-965, 2023.
Abstract
This work considers the problem of learning a feed-forward neural network controller that safely steers an arbitrarily shaped planar robot through a compact, obstacle-occluded workspace. When training neural network controllers, existing closed-loop safety assurances impose stringent data density requirements near the boundary of the safe state space, which are hard to satisfy in practice. We propose an approach that lifts these strong assumptions and instead admits graceful safety violations, i.e., violations of a bounded, spatially controlled magnitude. The method employs reachability analysis to incorporate safety constraints into the training process. It simultaneously learns a safe vector field for the closed-loop system and provides provable numerical worst-case bounds on safety violations over the whole configuration space; these bounds are given by the overlap between an over-approximation of the closed-loop system's forward reachable set and the set of unsafe states.
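To make the final idea concrete, the sketch below shows one way such a worst-case bound could be computed for a toy linear system with a small ReLU controller: the closed-loop one-step reachable set is over-approximated by an axis-aligned box via interval arithmetic, and the bound is the overlap volume between that box and an unsafe region. This is a minimal illustrative sketch, not the paper's actual method; the dynamics, network weights, unsafe set, and the box-based over-approximation are all assumptions chosen for simplicity.

```python
import numpy as np

def interval_affine(W, b, lo, hi):
    """Propagate an axis-aligned box [lo, hi] through x -> W x + b
    using standard interval arithmetic (center/radius form)."""
    c, r = (lo + hi) / 2.0, (hi - lo) / 2.0
    c2 = W @ c + b
    r2 = np.abs(W) @ r
    return c2 - r2, c2 + r2

def reach_box(W1, b1, W2, b2, A, B, lo, hi):
    """Over-approximate one closed-loop step x+ = A x + B u with
    u = W2 relu(W1 x + b1) + b2 as an axis-aligned box.
    (Hypothetical toy setting: linear dynamics, one hidden layer.)"""
    l1, h1 = interval_affine(W1, b1, lo, hi)
    l1, h1 = np.maximum(l1, 0.0), np.maximum(h1, 0.0)   # ReLU is monotone
    ul, uh = interval_affine(W2, b2, l1, h1)            # control interval
    xl, xh = interval_affine(A, np.zeros(len(lo)), lo, hi)
    bl, bh = interval_affine(B, np.zeros(len(lo)), ul, uh)
    return xl + bl, xh + bh                              # Minkowski sum of intervals

def overlap_volume(lo1, hi1, lo2, hi2):
    """Volume of the intersection of two axis-aligned boxes;
    zero if they are disjoint. Serves as a worst-case violation bound."""
    side = np.minimum(hi1, hi2) - np.maximum(lo1, lo2)
    return float(np.prod(np.maximum(side, 0.0)))

# Toy instance (all values are illustrative assumptions).
A = 0.9 * np.eye(2)                                  # stable linear dynamics
B = np.eye(2)
W1 = 0.5 * np.array([[1., 0.], [0., 1.], [-1., 0.], [0., -1.]])
b1 = np.zeros(4)
W2 = 0.1 * np.ones((2, 4))
b2 = np.zeros(2)
lo, hi = np.array([-1., -1.]), np.array([1., 1.])    # current safe box
unsafe_lo, unsafe_hi = np.array([1., 1.]), np.array([2., 2.])

rlo, rhi = reach_box(W1, b1, W2, b2, A, B, lo, hi)
bound = overlap_volume(rlo, rhi, unsafe_lo, unsafe_hi)
print("reachable box:", rlo, rhi)
print("worst-case violation bound:", bound)
```

In a training loop, a differentiable analogue of this overlap term could be added to the loss so that minimizing it shrinks the provable safety violation; here the quantity is only evaluated numerically to show how the bound arises from the set intersection.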