TrustGraph-DFL: Byzantine-Resilient Decentralized Federated Learning via Consistency-Weighted Neighborhood Aggregation
Decentralized federated learning (DFL) eliminates the single point of failure inherent in server-based architectures, enabling peer-to-peer collaborative model training. However, the absence of a central authority makes DFL particularly vulnerable to Byzantine attacks from malicious participants. Existing Byzantine-robust methods often fail to exploit the network topology of DFL. We propose TrustGraph-DFL, a novel defense mechanism that leverages graph-based trust modeling for Byzantine resilience. Our key insight is that the consistency between a neighbor's model update direction and a node's local validation gradient can serve as an effective trust indicator. Each node computes consistency scores by comparing received updates against locally computed validation gradients, then maps these scores to dynamic edge weights for robust weighted aggregation. Experiments on CIFAR-10 demonstrate that TrustGraph-DFL achieves 3–5% higher accuracy than existing methods under 30% Byzantine nodes while maintaining a low false positive rate (approximately 9% at a 50% Byzantine fraction, compared to 35% for Krum).
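The core aggregation step described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the function name `trust_weighted_aggregate`, the use of cosine similarity as the consistency score, the zero-clipping of negative scores, and the unit self-weight are all assumptions consistent with the abstract's description.

```python
import numpy as np

def trust_weighted_aggregate(local_update, neighbor_updates, val_gradient, eps=1e-12):
    """Hypothetical sketch of consistency-weighted neighborhood aggregation.

    Each neighbor's update is scored by its cosine similarity to the node's
    locally computed validation gradient; negative scores (directionally
    inconsistent, i.e. suspected Byzantine) are clipped to zero so those
    updates receive no weight in the aggregate.
    """
    scores = []
    for u in neighbor_updates:
        denom = np.linalg.norm(u) * np.linalg.norm(val_gradient) + eps
        cos = float(np.dot(u, val_gradient)) / denom
        scores.append(max(cos, 0.0))  # inconsistent neighbors get zero trust

    # Include the node's own update with full trust (edge weight 1.0),
    # then normalize the weights into a convex combination.
    updates = np.vstack([local_update] + list(neighbor_updates))
    weights = np.concatenate([[1.0], scores])
    weights = weights / weights.sum()
    return weights @ updates
```

For example, a neighbor whose update points opposite to the validation gradient is assigned weight zero and is effectively excluded from the aggregate, while well-aligned neighbors contribute in proportion to their consistency scores.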