Cooperative Edge Caching with Large Language Model in Wireless Networks
arXiv:2602.13307v1 Announce Type: new
Abstract: Cooperative edge caching in overlapping zones creates intricate coupling among Base Station (BS) decisions, making content replacement highly sensitive to topology and temporal reuse. Because heuristics are often myopic and Deep Reinforcement Learning lacks robustness under dynamics, this paper proposes a Large Language Model (LLM)-based multi-BS orchestrator. The LLM acts as the sole autonomous engine, interacting with the environment via a validated text-to-action interface. Each time slot, the system renders environmental states (including cache inventories and frequency statistics) into prompts and parses LLM-generated decisions against strict feasibility constraints. We align the model through a two-stage paradigm: Supervised Fine-Tuning on oracle trajectories for syntax and initialization, followed by Group Relative Policy Optimization. The latter employs an "opportunity-aware" reward that prioritizes multi-step cooperative gains relative to a No-Operation baseline. Evaluated on identical request traces, the orchestrator approaches exhaustive-search performance (0.610 vs. 0.617 in a 5-BS scenario), outperforms classical baselines (e.g., +4.1% over Least Frequently Used), and demonstrates robust zero-shot transfer across varying cache capacities, library sizes, and user densities.
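The text-to-action interface and the opportunity-aware reward described above can be sketched minimally. This is an illustrative reconstruction, not the paper's implementation: the function names (`render_prompt`, `parse_action`, `opportunity_reward`), the `REPLACE <evict> WITH <insert>` reply grammar, and the hit-count reward are all assumptions made for the sake of a concrete example.

```python
# Hypothetical sketch of a validated text-to-action interface for one BS.
# All names and the reply grammar are illustrative assumptions.

def render_prompt(bs_id, cache, freq, capacity):
    """Render one BS's state (cache inventory + request frequencies) as text."""
    inv = ", ".join(sorted(cache)) or "empty"
    stats = "; ".join(f"{c}:{n}" for c, n in sorted(freq.items()))
    return (f"BS {bs_id} (capacity {capacity}) caches: {inv}. "
            f"Request frequencies: {stats}. "
            f"Reply with 'REPLACE <evict> WITH <insert>' or 'NOOP'.")

def parse_action(reply, cache, library):
    """Parse an LLM reply and enforce feasibility constraints.

    A replacement is feasible only if the evicted item is currently
    cached and the inserted item is in the library but not yet cached;
    any malformed or infeasible reply falls back to a no-op (None).
    """
    tokens = reply.strip().split()
    if tokens[:1] == ["NOOP"]:
        return None
    if len(tokens) == 4 and tokens[0] == "REPLACE" and tokens[2] == "WITH":
        evict, insert = tokens[1], tokens[3]
        if evict in cache and insert in library and insert not in cache:
            return (evict, insert)
    return None  # constraint violation -> treated as no-op

def opportunity_reward(hits_with_action, hits_noop):
    """Multi-step gain of the chosen action relative to a No-Operation
    baseline, e.g. cache hits accumulated over a lookahead window."""
    return sum(hits_with_action) - sum(hits_noop)
```

Validating the parsed action against the cache and library before execution keeps the LLM's free-form output from ever producing an infeasible replacement, while the reward's No-Op baseline credits an action only for the hits it adds beyond leaving the cache unchanged.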