RL-Loop: Reinforcement Learning-Driven Real-Time 5G Slice Control for Connected and Autonomous Mobility Services
arXiv:2604.02461v1 Announce Type: new
Abstract: Smart and connected mobility systems rely on 5G edge infrastructure for real-time communication, control, and service differentiation. Achieving this requires adaptive resource management mechanisms that react to rapidly changing traffic conditions. In this paper, we propose RL-Loop, a closed-loop reinforcement learning framework for real-time CPU resource control in 5G network slicing environments supporting connected mobility services. RL-Loop employs a Proximal Policy Optimization (PPO) agent that continuously observes slice-level key performance indicators and adjusts edge CPU allocations at one-second granularity on a real testbed. The framework leverages real-time observability and feedback to enable adaptive, software-defined edge intelligence. Experimental results suggest that RL-Loop can reduce average CPU allocation by over 55% relative to the reference operating point while remaining within a comparable quality-of-service degradation region. These results indicate that lightweight reinforcement learning–based feedback control can provide efficient and responsive resource management for 5G-enabled smart mobility and connected vehicle services.
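The closed observe→reward→act cycle the abstract describes (observe slice KPIs, score the efficiency/QoS trade-off, adjust CPU allocation each second) can be sketched as below. This is a minimal illustrative skeleton, not the paper's implementation: the latency model, reward weights, SLA threshold, and the greedy scaling rule standing in for the learned PPO policy are all hypothetical assumptions.

```python
import random

# Hypothetical toy model of one network slice: latency grows as the
# offered load approaches the slice's CPU allocation. Illustrative only.
def observe_kpis(cpu_alloc, load):
    """Return a toy slice latency (ms) for the current allocation and load."""
    headroom = max(cpu_alloc - load, 0.05)
    return 1.0 / headroom  # lower headroom -> higher latency


def reward(cpu_alloc, latency_ms, sla_ms=20.0):
    """Reward low CPU usage, penalize QoS (SLA) violations. Weights are assumed."""
    qos_penalty = 10.0 if latency_ms > sla_ms else 0.0
    return -cpu_alloc - qos_penalty


def control_loop(steps=50, seed=0):
    """Run the per-second control loop: observe KPIs, compute reward, act."""
    rng = random.Random(seed)
    cpu = 1.0            # normalized CPU allocation for the slice
    total_reward = 0.0
    for _ in range(steps):
        load = 0.3 + 0.2 * rng.random()    # fluctuating slice demand
        latency = observe_kpis(cpu, load)  # 1) observe slice-level KPIs
        total_reward += reward(cpu, latency)  # 2) feedback signal
        # 3) act: in RL-Loop a PPO policy maps the observation to an
        #    allocation change; this greedy rule is only a stand-in.
        if latency > 10.0:
            cpu = min(cpu + 0.1, 2.0)      # scale up under QoS pressure
        else:
            cpu = max(cpu - 0.05, 0.1)     # reclaim CPU when QoS is safe
    return cpu, total_reward


final_cpu, episode_return = control_loop()
```

In an actual PPO setup, the hand-written scaling rule would be replaced by a trained stochastic policy, and `episode_return` would drive policy updates; the skeleton only shows how a feedback controller can trade allocated CPU against QoS at one-second granularity.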