Robust Adversarial Training for Sequential Decision Making in Safety-Critical Cyber-Physical Systems

Cyber-physical systems (CPS) in safety-critical domains such as autonomous driving, robotic surgery, high-speed rail, and power grids increasingly rely on reinforcement learning (RL) for sequential decision-making. Unfortunately, deep RL policies are brittle to adversarial perturbations: small, carefully crafted alterations to a policy’s observations or dynamics can cause catastrophic failure. Existing adversarial training methods mainly target static perception tasks and overlook two aspects unique to CPS: perturbations compound over time, and policies must satisfy hard safety constraints. We present RADAR (Robust Adversarial Decision-making with Adaptive Resilience), an adversarial training framework for safety-critical sequential decision-making. RADAR casts the problem as a constrained robust Markov decision process and, at training time, learns adversarial attacks that respect both the physical dynamics and the safety constraints, propagating perturbations through time via a recurrent latent dynamics model. A Lagrangian min-max optimization jointly optimizes policy robustness and safety-constraint satisfaction. On benchmarks for autonomous-vehicle lane keeping and power-grid voltage control, RADAR achieves up to 35% higher worst-case reward and over 80% fewer safety violations than strong baselines under the strongest attacks, with only minor degradation in nominal performance. RADAR thus offers a principled, scalable way to robustify RL-based controllers against adversarial perturbations while preserving safe control.
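To make the Lagrangian min-max structure concrete, here is a one-dimensional toy sketch (our own illustration, not the RADAR algorithm): the "policy" is a single parameter theta, the adversary applies a bounded perturbation delta, the loss f(x) = x^2 is a stand-in for negative reward, and g(theta) = 0.5 - theta <= 0 is a stand-in safety constraint. The perturbation budget eps, step size eta, and all functional forms are illustrative assumptions.

```python
# Toy Lagrangian min-max sketch (NOT the paper's algorithm):
#   min_theta  max_{|delta| <= eps}  f(theta + delta)   s.t.  g(theta) <= 0
# handled via the Lagrangian L = f(theta + delta*) + lam * g(theta),
# with alternating primal descent on theta and projected dual ascent on lam.
eps, eta = 0.1, 0.05     # illustrative perturbation budget and step size
theta, lam = 2.0, 0.0    # "policy" parameter and Lagrange multiplier

for _ in range(2000):
    # Inner maximization: for f(x) = x^2, the worst-case bounded
    # perturbation is delta* = eps * sign(theta).
    delta = eps if theta >= 0 else -eps
    # Primal step: gradient of L in theta, with dg/dtheta = -1.
    grad_theta = 2.0 * (theta + delta) - lam
    theta -= eta * grad_theta
    # Dual step: ascend in lam on g(theta) = 0.5 - theta, clip at lam >= 0.
    lam = max(0.0, lam + eta * (0.5 - theta))

# The iterates settle near the constrained robust optimum theta = 0.5,
# where the constraint is active with multiplier lam = 2 * (0.5 + eps).
print(round(theta, 3), round(lam, 3))
```

The actual method described in the abstract operates on policy networks, a recurrent latent dynamics model, and trajectory-level constraints, but the primal-dual skeleton above is the standard pattern for trading off worst-case loss against constraint satisfaction.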
