Memory-Augmented Spiking Networks: Synergistic Integration of Complementary Mechanisms for Neuromorphic Vision
arXiv:2603.08730v1 Announce Type: new
Abstract: Spiking Neural Networks (SNNs) provide biological plausibility and energy efficiency, yet systematic investigations of memory augmentation strategies remain limited. We conduct a five-model ablation study integrating Leaky Integrate-and-Fire neurons, Supervised Contrastive Learning (SCL), Hopfield networks, and Hierarchical Gated Recurrent Networks (HGRN) on the N-MNIST dataset. Baseline SNNs exhibit structured neuronal assemblies, characterized by a silhouette score of $0.687 \pm 0.012$. Individual augmentations introduce trade-offs: SCL improves accuracy by $0.28\%$ but reduces clustering (silhouette score $0.637 \pm 0.015$), while HGRN yields consistent gains in both accuracy ($+1.01\%$) and computational efficiency ($170.6\times$). Full integration achieves a balanced improvement across metrics, reaching a silhouette score of $0.715 \pm 0.008$, classification accuracy of $97.49 \pm 0.10\%$, energy consumption of $1.85 \pm 0.06\,\mu\mathrm{J}$, and sparsity of $97.0\%$. These results indicate that optimal performance emerges from architectural balance rather than isolated optimization, establishing design principles for memory-augmented neuromorphic systems.
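The Leaky Integrate-and-Fire neuron named in the abstract can be illustrated with a minimal discrete-time sketch. This is not the paper's implementation; the dynamics, parameter values (`tau`, `v_th`, `v_reset`, `dt`), and the constant input current are illustrative assumptions for a standard Euler-discretized LIF model.

```python
import numpy as np

def lif_step(v, i_in, tau=20.0, v_th=1.0, v_reset=0.0, dt=1.0):
    """One Euler step of a leaky integrate-and-fire neuron.

    v: membrane potential(s), i_in: input current.
    Returns (new_v, spike), where spike is 1.0 on a threshold crossing.
    Parameter values are illustrative, not taken from the paper.
    """
    # Leaky integration: potential decays toward 0 and is driven by i_in.
    v = v + (dt / tau) * (-v + i_in)
    # Emit a spike wherever the threshold is reached, then reset.
    spike = v >= v_th
    v = np.where(spike, v_reset, v)
    return v, spike.astype(float)

# Drive a single neuron with a constant suprathreshold current
# and count the spikes it emits over 200 time steps.
v = np.array([0.0])
spikes = 0
for _ in range(200):
    v, s = lif_step(v, i_in=1.2)
    spikes += int(s[0])
```

With a constant current above threshold, the neuron fires periodically; sparse, event-driven spiking of this kind underlies the energy-efficiency figures reported in the abstract.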