Digital Twin-Enabled Mobility-Aware Cooperative Caching in Vehicular Edge Computing

arXiv:2603.06653v1 Announce Type: new
Abstract: With the advancement of vehicle-to-vehicle (V2V) ad hoc networks and wireless communication technologies, mobile edge caching has become a key enabler for enhancing network performance and user experience. However, traditional federated learning-based collaborative caching approaches in vehicular scenarios suffer from inadequate client selection mechanisms and limited prediction accuracy, resulting in suboptimal cache hit ratios and increased content transmission latency. To address these challenges, we propose a Digital Twin-based Asynchronous Federated Learning-driven Predictive Edge Caching with Deep Reinforcement Learning (DAPR) framework. DAPR employs an intelligent client selection strategy based on asynchronous federated learning, which leverages mobility prediction and data quality assessment to avoid selecting highly mobile clients or clients with low-quality data, significantly improving model convergence efficiency. In addition, we design a GRU-VAE prediction model that uses a Variational Autoencoder (VAE) to capture latent data distribution features and Gated Recurrent Units (GRUs) to model temporal dependencies, substantially enhancing the accuracy of content request prediction. The predicted content popularities are then fed into a deep reinforcement learning-driven caching decision engine to dynamically optimize edge caching resource allocation. Extensive experiments demonstrate that DAPR achieves superior performance in terms of average reward, cache hit ratio, and transmission latency, thereby effectively improving the overall efficiency of vehicular edge caching systems.
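The client-selection idea described in the abstract — filtering out highly mobile clients and clients with low-quality data before federated aggregation — can be sketched as a simple score-based filter. All function names, fields, and thresholds below are illustrative assumptions for exposition, not the paper's actual algorithm:

```python
# Hypothetical sketch of mobility/quality-aware client selection.
# The mobility and quality scores, and the thresholds, are assumed
# to be produced elsewhere (e.g., by a mobility predictor and a
# data-quality assessor); they are not defined in the abstract.

def select_clients(clients, max_mobility=0.7, min_quality=0.5):
    """Keep clients that are not highly mobile and whose local data
    quality exceeds a minimum threshold."""
    return [c["id"] for c in clients
            if c["mobility"] <= max_mobility and c["quality"] >= min_quality]

clients = [
    {"id": "v1", "mobility": 0.2, "quality": 0.9},  # slow vehicle, good data -> keep
    {"id": "v2", "mobility": 0.9, "quality": 0.8},  # highly mobile -> drop
    {"id": "v3", "mobility": 0.3, "quality": 0.2},  # low-quality data -> drop
]
print(select_clients(clients))  # ['v1']
```

In an asynchronous federated setting, a filter like this would run at each aggregation round, so the set of participating vehicles adapts as mobility and data-quality estimates change.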
