From Simulation to Reality: Practical Deep Reinforcement Learning-based Link Adaptation for Cellular Networks
arXiv:2603.00689v1 Announce Type: new
Abstract: Link Adaptation (LA), which dynamically adjusts the Modulation and Coding Scheme (MCS) to accommodate time-varying channels, is crucial and challenging in cellular networks. Deep reinforcement learning (DRL)-based LA, which learns to make decisions through interaction with the environment, is a promising approach to improving throughput. However, existing DRL-based LA algorithms are typically evaluated in simplified simulation environments, neglecting practical issues such as ACK/NACK feedback delay, retransmissions, and parallel hybrid automatic repeat request (HARQ). Moreover, these algorithms overlook the impact of DRL execution latency, which can significantly degrade system performance. To address these challenges, we propose Decoupling-DQN (DC-DQN), a new DRL framework that separates traditional DRL's coupled training and inference processes into two modules based on Deep Q Networks (DQN): a real-time inference module and an out-of-decision-loop training module. Based on this framework, we introduce a novel DRL-based LA algorithm, DC-DQN-LA. The algorithm incorporates practical considerations by designing state, action, and reward functions that account for feedback delays, parallel HARQ, and retransmissions. We implemented a prototype using USRP software-defined radios and srsRAN software. Experimental results demonstrate that DC-DQN-LA improves throughput by 40% to 70% in mobile scenarios compared with baseline LA algorithms, while maintaining comparable block error rates, and can quickly adapt to environment changes in a mobile-to-static scenario. These results highlight the efficiency and practicality of the proposed DRL-based LA algorithm.
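The key idea of DC-DQN, separating a real-time inference module from an out-of-decision-loop training module, can be sketched as follows. This is a hypothetical toy illustration, not the paper's implementation: a tabular Q-table stands in for the Deep Q Network, and all names (`DecoupledAgent`, `select_mcs`, `sync`) are invented for exposition. The inference path only reads a frozen parameter snapshot, so MCS selection stays fast regardless of training cost; delayed ACK/NACK feedback is queued and consumed by training asynchronously, with periodic snapshot synchronization.

```python
import copy
import queue
import random
import threading

# Toy stand-in dimensions (hypothetical; the real state/action design
# in DC-DQN-LA accounts for feedback delay, parallel HARQ, etc.).
N_MCS = 4     # toy action space: 4 candidate MCS indices
N_STATES = 8  # toy state space: e.g. quantized SNR levels

class DecoupledAgent:
    def __init__(self):
        # "Training" parameters, updated out of the decision loop.
        self.train_q = [[0.0] * N_MCS for _ in range(N_STATES)]
        # Frozen snapshot read by the real-time inference module.
        self.infer_q = copy.deepcopy(self.train_q)
        # Delayed (s, a, r, s') feedback, e.g. from late ACK/NACK reports.
        self.replay = queue.Queue()
        self.lock = threading.Lock()  # guards snapshot publication

    def select_mcs(self, state, eps=0.1):
        """Real-time inference: read-only lookup on the frozen snapshot."""
        if random.random() < eps:
            return random.randrange(N_MCS)
        row = self.infer_q[state]
        return max(range(N_MCS), key=row.__getitem__)

    def train_step(self, alpha=0.5, gamma=0.9):
        """Out-of-loop training: consume one queued transition."""
        s, a, r, s2 = self.replay.get()
        target = r + gamma * max(self.train_q[s2])
        self.train_q[s][a] += alpha * (target - self.train_q[s][a])

    def sync(self):
        """Periodically publish updated parameters to inference."""
        with self.lock:
            self.infer_q = copy.deepcopy(self.train_q)
```

In this pattern `train_step` can run in a background thread at whatever pace training allows, while `select_mcs` never blocks on it, which is the property the abstract argues matters once DRL execution latency is taken seriously.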