Hybrid Hierarchical Federated Learning over 5G/NextG Wireless Networking

arXiv:2604.09680v1 Announce Type: new
Abstract: Today’s 5G and NextG wireless networks are moving toward the coordinated multi-point (CoMP) transmission and reception technique, in which a client can be served simultaneously by multiple base stations (BSs) for better communication performance. Traditional hierarchical federated learning (HFL) architectures, however, impose the constraint that each client can be associated with only one edge server (ES) at a time. If traditional HFL architectures are retained for model training in modern hierarchical networks, the benefits of CoMP remain unexploited, leaving room for further improvements in training efficiency. To address this issue, we propose hybrid hierarchical federated learning (HHFL), which allows clients in overlapping coverage regions to communicate simultaneously with multiple ESs for model aggregation. HHFL enhances inter-ES knowledge sharing, thereby mitigating model divergence and improving training efficiency. We provide a rigorous theoretical convergence analysis with a convergence upper bound to validate its effectiveness. Experimental results show that HHFL outperforms traditional HFL, particularly when the data across different ESs is not independent and identically distributed (non-IID). For example, when each ES is dominated by only two of the ten classes and 15 of the 57 clients can connect to multiple ESs, HHFL achieves up to 2x faster convergence under the same configuration. These results demonstrate that HHFL provides a scalable and efficient solution for FL model training in today’s 5G and NextG wireless networks.
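To make the HHFL relaxation concrete, the following is a minimal sketch, not the paper's actual algorithm: clients in overlapping regions are associated with more than one ES, so each ES average mixes in updates from neighboring regions before the cloud-level average. Model weights are plain floats here for illustration; all names, the client/ES counts, and the set-based membership encoding are assumptions.

```python
# Hypothetical sketch of hybrid hierarchical FL aggregation (illustrative only).
# A client's membership set may contain multiple ES ids -- the HHFL relaxation
# of the one-client-one-ES constraint in traditional HFL.

def es_aggregate(client_models, memberships, es_id):
    """Average the models of every client associated with this edge server."""
    members = [m for assoc, m in zip(memberships, client_models) if es_id in assoc]
    return sum(members) / len(members)

def cloud_aggregate(es_models):
    """Top-level average across edge-server models."""
    return sum(es_models) / len(es_models)

# 5 clients, 2 edge servers; client 2 sits in the overlap and is
# associated with both ES 0 and ES 1, sharing knowledge across regions.
client_models = [1.0, 2.0, 3.0, 4.0, 5.0]
memberships   = [{0}, {0}, {0, 1}, {1}, {1}]

es_models = [es_aggregate(client_models, memberships, e) for e in (0, 1)]
global_model = cloud_aggregate(es_models)
print(es_models, global_model)  # [2.0, 4.0] 3.0
```

Under traditional HFL, client 2 would contribute to only one ES, so the two ES models would diverge further under non-IID data; the overlapping association pulls them toward each other at every edge round.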
