Uncertainty-Guided Interpretable Neural Networks with Adaptive Weight Analysis for Medical Imaging
Research on neural network interpretability has produced powerful approaches for gradient-based feature attribution, energy landscape analysis, and uncertainty quantification. State-of-the-art methods rely on structural weight analysis, Monte Carlo dropout uncertainty estimation, and attention mechanisms to deliver interpretable predictions with quantified confidence. However, existing methods face fundamental challenges: unreliable explanations and poor uncertainty quantification on complex medical imaging tasks, difficulty identifying important network weights because importance thresholds are fixed, and computational overhead from attention mechanisms that operate without uncertainty guidance. We introduce Adaptive Uncertainty-Guided Interpretable Networks (AUGIN), a framework that combines adaptive structural weight analysis, uncertainty-aware prediction intervals, and uncertainty-guided attention in three interconnected modules. Our approach computes adaptive weight-importance thresholds that evolve with layer depth and architecture, integrates feature-level uncertainty into prediction interval generation, and introduces an uncertainty-guided attention mechanism that focuses on uncertain regions. Experiments on ISLES stroke prediction, BraTS brain tumor segmentation, and CT-CTA thrombectomy outcome prediction demonstrate superior performance: 0.7891 AUC-ROC on ISLES (a 1.59 percentage point improvement), a 0.8567 Dice score on BraTS, and 0.7456 attention IoU, while maintaining strong uncertainty calibration (0.8123) and explanation fidelity (0.8234). Our work advances neural network interpretability by providing accurate predictions with reliable, uncertainty-quantified explanations for safety-critical medical imaging applications.
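To make the idea of uncertainty-guided attention described above concrete, the sketch below shows one plausible way to combine Monte Carlo dropout uncertainty estimation with a spatial attention gate in PyTorch: repeated stochastic forward passes yield a per-pixel predictive-entropy map, which is then used to reweight feature maps toward uncertain regions. The backbone, tensor shapes, helper names (e.g. `mc_dropout_uncertainty`, `uncertainty_guided_attention`), and the specific weighting scheme are illustrative assumptions, not the AUGIN implementation.

```python
# Illustrative sketch only: the architecture, shapes, and weighting scheme are
# assumptions for exposition, NOT the AUGIN implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MCDropoutBackbone(nn.Module):
    """Small convolutional backbone whose dropout stays active at inference
    so repeated forward passes give a Monte Carlo estimate of uncertainty."""

    def __init__(self, in_channels: int = 1, num_classes: int = 2, p: float = 0.2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.Dropout2d(p),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Dropout2d(p),
        )
        self.classifier = nn.Conv2d(16, num_classes, 1)  # per-pixel class logits

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))


def mc_dropout_uncertainty(model: nn.Module, x: torch.Tensor, n_samples: int = 8):
    """Run several stochastic forward passes (dropout enabled) and return the
    mean class probabilities plus a per-pixel predictive-entropy map."""
    model.train()  # keep dropout active for Monte Carlo sampling
    with torch.no_grad():
        probs = torch.stack(
            [F.softmax(model(x), dim=1) for _ in range(n_samples)], dim=0
        )
    mean_probs = probs.mean(dim=0)                                          # (B, C, H, W)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-8).log()).sum(dim=1)   # (B, H, W)
    return mean_probs, entropy


def uncertainty_guided_attention(features: torch.Tensor, uncertainty: torch.Tensor):
    """Reweight feature maps so spatial locations with higher predictive
    uncertainty receive proportionally more attention (one plausible scheme)."""
    u = uncertainty.unsqueeze(1)                                            # (B, 1, H, W)
    # Normalise the uncertainty map to [0, 1] per image and use it as a gate.
    u = (u - u.amin(dim=(2, 3), keepdim=True)) / (
        u.amax(dim=(2, 3), keepdim=True) - u.amin(dim=(2, 3), keepdim=True) + 1e-8
    )
    attention = 1.0 + u                                                     # emphasise uncertain regions
    return features * attention


if __name__ == "__main__":
    model = MCDropoutBackbone()
    image = torch.randn(2, 1, 64, 64)                                       # toy batch of 2D slices
    mean_probs, entropy = mc_dropout_uncertainty(model, image)
    attended = uncertainty_guided_attention(model.features(image), entropy)
    print(mean_probs.shape, entropy.shape, attended.shape)
```

In this toy formulation the attention gate simply scales features by `1 + normalised uncertainty`; the abstract's claim is only that attention is guided by uncertainty, so any monotone weighting of the uncertainty map could be substituted here.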