Explainable AI for Securing Perception-Layer Sensor Data in IoT Environmental Danger Detection Systems
This paper presents an explainable defense framework against perception-layer and Man-in-the-Middle (MitM) attacks in Internet of Things (IoT)-based environmental hazard warning systems. These systems rely on heterogeneous sensors (gas, light, sound, temperature, and humidity) whose integrity is critical for reliable environmental alerts. Perception-layer attacks such as spoofing, jamming, and data injection can compromise sensor readings, while MitM attacks undermine communication reliability. The proposed approach integrates Dynamic Time Warping (DTW) for time-series anomaly detection with Shapley Additive Explanations (SHAP) for interpretability. A comparative evaluation framework jointly assesses detection performance and explanation quality: a causal ground truth is pre-registered from network protocol specifications, and explanation fidelity is scored as the Spearman rank correlation between SHAP attributions and that ground truth, eliminating the need for manual expert evaluation. Experimental simulations on the EdgeIIoT-2022 dataset demonstrate high detection accuracy and moderate explainability scores. The results confirm the framework's ability to detect and explain adversarial behaviors in sensor networks, strengthening trust, transparency, and resilience in safety-critical IoT infrastructures.
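The two quantitative building blocks named in the abstract, DTW distance for time-series anomaly scoring and Spearman rank correlation for comparing SHAP attributions against a pre-registered causal ground truth, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation; the sensor traces, SHAP values, and ground-truth rankings below are hypothetical placeholders.

```python
def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic-programming DTW distance."""
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]


def spearman_rho(x, y):
    """Spearman's rank correlation (no tie correction, for brevity)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))


# Toy anomaly check: an injected spike drives the observed gas trace
# away from its reference, so the DTW distance exceeds a chosen threshold.
reference = [0.0, 0.1, 0.2, 0.1, 0.0]
observed = [0.0, 0.1, 0.9, 1.1, 0.0]   # injected spike
print(dtw_distance(reference, observed) > 0.5)   # flags the anomaly

# Toy explanation check: SHAP feature importances (hypothetical values)
# are rank-correlated with a pre-registered causal ground truth ranking.
shap_importance = [0.8, 0.06, 0.1, 0.04]   # hypothetical SHAP values
ground_truth = [1.0, 0.2, 0.1, 0.05]       # pre-registered causal ranking
print(spearman_rho(shap_importance, ground_truth))
```

In this framing, a higher Spearman correlation means the SHAP explanation agrees more closely with the protocol-derived causal ground truth, which is what allows explanation quality to be scored without manual expert review.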