Sample-Free Safety Assessment of Neural Network Controllers via Taylor Methods

arXiv:2602.11332v1 Announce Type: new
Abstract: In recent years, artificial neural networks have been increasingly studied as feedback controllers for guidance problems. While effective in complex scenarios, they lack the verification guarantees of classical guidance policies, and their black-box nature raises significant trustworthiness concerns, limiting their adoption in safety-critical spaceflight applications. This work addresses this gap by developing a method to assess the safety of a trained neural network feedback controller via automatic domain splitting and polynomial bounding. The methodology embeds the trained neural network into the system’s dynamical equations, rendering the closed-loop system autonomous. The system flow is then approximated by high-order Taylor polynomials, which are manipulated to construct polynomial maps projecting state uncertainties onto an event manifold. Automatic domain splitting ensures the polynomials remain accurate over their respective subdomains, whilst also allowing an extensive state space to be analysed efficiently. Using polynomial bounding techniques, the resulting event values can be rigorously constrained and analysed within individual subdomains, establishing bounds on the range of possible closed-loop outcomes under such neural network controllers and supporting safety assessment and informed operational decision-making in real-world missions.
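
The pipeline described in the abstract (embed the controller in the dynamics, propagate a polynomial map of the flow, split the uncertainty domain, bound the event value on each subdomain) can be illustrated with a deliberately simplified sketch. Everything below is an assumption for illustration, not the paper's implementation: a toy 1D system, a fixed single-neuron tanh controller, Euler stepping in place of high-order Taylor integration, a low truncation order, and no tracking of truncation or integration remainders, so the resulting bounds are indicative rather than rigorous. It only mirrors the split-and-bound logic of the method.

```python
# Minimal sketch (assumed toy problem): the closed-loop flow of a 1D system with a
# fixed tanh feedback controller is propagated as a truncated Taylor polynomial in
# the initial-state deviation; the "event value" (here simply the final state) is
# then bounded on each subdomain, splitting automatically when the bound is loose.
import numpy as np

ORDER = 4  # truncation order of the polynomials in the deviation variable d


class TPoly:
    """Truncated univariate polynomial sum_i c[i] * d**i, with d in [-1, 1]."""

    def __init__(self, coeffs):
        c = np.zeros(ORDER + 1)
        c[: len(coeffs)] = np.asarray(coeffs, dtype=float)[: ORDER + 1]
        self.c = c

    @staticmethod
    def const(a):
        return TPoly([a])

    def __add__(self, other):
        other = other if isinstance(other, TPoly) else TPoly.const(other)
        return TPoly(self.c + other.c)

    __radd__ = __add__

    def __mul__(self, other):
        if not isinstance(other, TPoly):
            return TPoly(self.c * other)
        out = np.zeros(ORDER + 1)
        for i, a in enumerate(self.c):
            for j, b in enumerate(other.c):
                if i + j <= ORDER:
                    out[i + j] += a * b
        return TPoly(out)

    __rmul__ = __mul__

    def bounds(self):
        """Bound the polynomial part over d in [-1, 1] (truncation error not tracked)."""
        slack = np.sum(np.abs(self.c[1:]))
        return self.c[0] - slack, self.c[0] + slack


def tanh_poly(p):
    """Compose tanh with a polynomial by Taylor-expanding tanh about p's constant term."""
    a = p.c[0]
    dp = TPoly(np.concatenate(([0.0], p.c[1:])))  # polynomial part, constant removed
    t = np.tanh(a)  # derivatives of tanh at a, from t' = 1 - t^2 (low order only)
    derivs = [t, 1 - t**2, -2 * t * (1 - t**2), (1 - t**2) * (6 * t**2 - 2)]
    out, power, fact = TPoly.const(0.0), TPoly.const(1.0), 1.0
    for k, dk in enumerate(derivs):
        fact *= max(k, 1)                 # running factorial k!
        out = out + power * (dk / fact)   # k-th Taylor term of tanh composed with dp
        power = power * dp
    return out


def closed_loop_event(center, halfwidth, dt=0.05, steps=40):
    """Propagate x0 = center + halfwidth*d through x_{k+1} = x_k + dt*(-0.5*x_k + u),
    u = tanh feedback with fixed (assumed) weights; return the final-state polynomial."""
    x = TPoly([center, halfwidth])
    w1, b1, w2 = 1.3, 0.1, -0.8  # illustrative frozen controller weights
    for _ in range(steps):
        u = w2 * tanh_poly(w1 * x + b1)
        x = x + dt * (-0.5 * x + u)
    return x


def assess(center, halfwidth, tol=1e-2, depth=0, out=None):
    """Automatic domain splitting: refine until each subdomain's event bound is tight."""
    out = [] if out is None else out
    lo, hi = closed_loop_event(center, halfwidth).bounds()
    if hi - lo <= tol or depth >= 12:
        out.append((center - halfwidth, center + halfwidth, lo, hi))
    else:
        assess(center - halfwidth / 2, halfwidth / 2, tol, depth + 1, out)
        assess(center + halfwidth / 2, halfwidth / 2, tol, depth + 1, out)
    return out


if __name__ == "__main__":
    subdomains = assess(center=0.0, halfwidth=1.0)
    lo = min(s[2] for s in subdomains)
    hi = max(s[3] for s in subdomains)
    print(f"{len(subdomains)} subdomains; event value bounded within [{lo:+.5f}, {hi:+.5f}]")
```

In the paper's setting the polynomial maps come from high-order Taylor expansions of the actual closed-loop flow with the full trained network embedded, and the event is evaluated on a mission-relevant manifold rather than at a fixed final time; the sketch above only conveys how subdomain bounds on an event value can be obtained without sampling trajectories.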
