STAR-RL: Stealth-Aware Targeted Adversarial Attack on Multimodal Sensors in Human Activity Recognition via Reinforcement Learning
Deep learning-based Human Activity Recognition (HAR) systems using multimodal wearable sensors are increasingly deployed in safety-critical applications, including healthcare monitoring, elderly care, and security authentication. However, the vulnerability of these systems to adversarial attacks remains insufficiently understood, particularly for attacks that must evade detection while manipulating multiple sensor modalities simultaneously. This paper presents STAR-RL (Stealth-aware Targeted Adversarial attack via Reinforcement Learning), a novel framework that generates effective and stealthy adversarial examples against multimodal sensor-based HAR systems. STAR-RL introduces three key innovations: (1) a multi-strategy attack engine that adaptively selects among diverse perturbation algorithms based on real-time attack progress, (2) a sensor-aware stealth mechanism that concentrates perturbations on naturally noisy sensors to minimize detection likelihood, and (3) a reinforcement learning-based meta-controller that learns optimal attack policies through interaction with the target classifier. Comprehensive experiments on the MHEALTH dataset demonstrate that STAR-RL achieves a 95.20% attack success rate, substantially outperforming baseline methods including FGSM (6.00%), PGD (88.60%), and C&W (69.00%). The stealth analysis confirms that 51.35% of perturbation energy is successfully directed to naturally noisy ("weak") sensors, namely gyroscopes and magnetometers, validating the effectiveness of the sensor-aware allocation strategy. Our findings reveal critical security vulnerabilities in multimodal sensor-based HAR systems and provide insights for developing robust defense mechanisms against adaptive adversarial threats.
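The sensor-aware stealth mechanism described above can be illustrated with a minimal sketch: a perturbation budget is split across sensor modalities in proportion to each sensor's natural noise level, so that noisier channels (e.g., gyroscope and magnetometer) absorb most of the perturbation energy. The function name and noise values below are hypothetical illustrations, not taken from the paper:

```python
def allocate_budget(noise_levels, total_budget):
    """Split a perturbation budget across sensors in proportion to
    each sensor's estimated natural noise level, so that noisier
    ("weak") sensors absorb more of the perturbation energy and the
    attack is harder to detect. Inputs and outputs are plain dicts;
    noise_levels values are illustrative standard-deviation estimates.
    """
    total_noise = sum(noise_levels.values())
    return {name: total_budget * level / total_noise
            for name, level in noise_levels.items()}


# Hypothetical per-sensor noise estimates (not from the paper).
noise = {"accelerometer": 0.02, "gyroscope": 0.08, "magnetometer": 0.10}
budgets = allocate_budget(noise, total_budget=1.0)
# Under these assumed values, the gyroscope and magnetometer together
# receive 90% of the budget, mirroring the paper's idea of directing
# perturbation energy toward naturally noisy sensors.
```

In a full attack pipeline, per-sensor budgets like these would bound the per-modality perturbation norm (e.g., via clipping) inside whichever attack algorithm the meta-controller selects.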