R2E-VID: Two-Stage Robust Routing via Temporal Gating for Elastic Edge-Cloud Video Inference
arXiv:2604.09681v1 Announce Type: new
Abstract: With the rapid growth of large-scale video analytics applications, edge-cloud collaborative systems have become the dominant paradigm for real-time inference. However, existing approaches often fail to adapt dynamically to heterogeneous video content and fluctuating resource conditions, resulting in suboptimal routing efficiency and high computational costs. In this paper, we propose R2E-VID, a two-stage robust routing framework via temporal gating for elastic edge-cloud video inference. In the first stage, R2E-VID introduces a temporal gating mechanism that models the temporal consistency and motion dynamics of incoming video streams to predict the optimal routing pattern for each segment. This enables adaptive partitioning of inference workloads between edge and cloud nodes, achieving fine-grained spatiotemporal elasticity. In the second stage, a robust routing optimization module refines the allocation through multi-model adaptation, jointly minimizing inference delay and resource consumption under dynamic network and workload variations. Extensive experiments on public datasets demonstrate that R2E-VID achieves up to a 60% reduction in overall cost compared to cloud-centric baselines, and delivers 35-45% lower delay while improving inference accuracy by 2-7% over state-of-the-art edge-cloud solutions.
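To make the first-stage idea concrete, the following is a minimal, hypothetical sketch of a temporal gating router: it is not the paper's actual mechanism, but illustrates the general pattern of scoring a segment's motion dynamics and gating low-motion segments to a lightweight edge model while sending high-motion segments to the cloud. The `motion_score` heuristic, the threshold, and all names here are assumptions for illustration only.

```python
# Hypothetical temporal-gating sketch (NOT the R2E-VID implementation):
# segments with low inter-frame change are assumed tractable for an edge
# model; high-motion segments are routed to the cloud. Frames are modeled
# as flat lists of pixel intensities for simplicity.

def motion_score(frames):
    """Mean absolute pixel difference between consecutive frames."""
    diffs = []
    for prev, cur in zip(frames, frames[1:]):
        diffs.append(sum(abs(a - b) for a, b in zip(prev, cur)) / len(prev))
    return sum(diffs) / len(diffs) if diffs else 0.0

def route_segment(frames, threshold=10.0):
    """Gate a video segment to 'edge' or 'cloud' based on motion dynamics.

    The threshold is an illustrative tuning knob; a learned gate would
    replace this hand-set rule.
    """
    return "cloud" if motion_score(frames) > threshold else "edge"

# Example: a near-static segment stays on the edge, while a segment with
# large frame-to-frame change is escalated to the cloud.
static_seg = [[100, 100, 100]] * 4                   # identical frames
dynamic_seg = [[0, 0, 0], [50, 50, 50], [0, 0, 0]]   # large changes
print(route_segment(static_seg))    # -> edge
print(route_segment(dynamic_seg))   # -> cloud
```

In a real system the gate would presumably be a learned predictor over richer temporal features, and the second-stage optimizer described in the abstract would then refine these coarse routing decisions under current network and workload conditions.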