Constructing Brain-Inspired Sparse Topologies for Energy-Efficient ANN-to-SNN Conversion via Cannistraci-Hebb Training
While ANN-to-SNN conversion is a pivotal approach for obtaining spiking neural networks (SNNs) from trained artificial neural networks (ANNs), current methods mostly target dense architectures, disregarding the structural sparsity fundamental to biological neural networks. To bridge this gap, we propose a novel framework that integrates Cannistraci-Hebb Training (CHT), a brain-inspired Dynamic Sparse Training algorithm, to instill biologically plausible topologies into SNNs. Through our framework, the converted SNNs directly inherit emergent brain-like properties, such as meta-depth and small-worldness, from the sparse ANNs. We first verify that CHT produces brain-like topologies and then evaluate our framework across different conversion approaches. The resulting SNNs achieve accuracy comparable or superior to their dense counterparts on both convolutional neural networks (CNNs) and Vision Transformers (ViTs), while reducing theoretical energy consumption by over 60%. We further validate empirically that the framework outperforms pruning baselines and direct sparse SNN training in terms of the accuracy-energy trade-off.
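To make the Dynamic Sparse Training loop referenced above concrete, the sketch below illustrates one prune-and-regrow step for a single sparse linear layer in the CHT style. It is a minimal sketch, not the authors' implementation: CHT proper regrows links with the CH3-L3 link predictor, which penalizes paths by the external degrees of intermediate nodes, whereas here the regrowth score is simplified to a plain length-3 path count on the bipartite layer graph. The function name `cht_style_update` and the `prune_frac` parameter are illustrative assumptions.

```python
# Hedged sketch of a CHT-style topology evolution step (simplified scoring).
import torch

def cht_style_update(weight: torch.Tensor, mask: torch.Tensor,
                     prune_frac: float = 0.3) -> torch.Tensor:
    """One evolution step: magnitude-prune a fraction of active links, then
    regrow the same number of inactive links at the highest topology scores.
    `weight` and `mask` have shape (out_features, in_features)."""
    active = mask.bool()
    n_prune = int(prune_frac * active.sum().item())
    if n_prune == 0:
        return mask

    # Prune: remove the smallest-magnitude active links.
    mag = weight.abs().masked_fill(~active, float("inf"))
    drop = torch.topk(mag.flatten(), n_prune, largest=False).indices
    new_mask = mask.clone().flatten()
    new_mask[drop] = 0.0
    new_mask = new_mask.view_as(mask)

    # Regrow: score inactive links by length-3 paths in the bipartite graph.
    # (A @ A.T @ A)[i, j] counts paths out_i -> in -> out -> in_j; this is the
    # unpenalized core of the CH3-L3 predictor, used here as a stand-in.
    A = new_mask
    score = A @ A.t() @ A
    score = score.masked_fill(new_mask.bool(), float("-inf"))
    grow = torch.topk(score.flatten(), n_prune, largest=True).indices
    new_mask = new_mask.flatten()
    new_mask[grow] = 1.0
    return new_mask.view_as(mask)
```

In a typical Dynamic Sparse Training setup, a step like this would run periodically during ANN training, with newly grown weights initialized to zero and the mask re-applied to the weights (and their gradients) after every optimizer step; the final sparse topology is what the ANN-to-SNN conversion then inherits.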