ppAIsec: Privacy-Preserving Artificial Intelligence Models in Healthcare Security—A Synthesis of AI Frameworks
As artificial intelligence (AI) technologies, particularly generative and collaborative learning models, are increasingly integrated into healthcare and other sensitive domains, concerns over data privacy, security, and fairness have grown significantly. This paper provides a thorough examination of current privacy-preserving AI models, including federated learning (FL), differential privacy (DP), homomorphic encryption, and generative adversarial networks (GANs). Key contributions are reviewed across recent works that explore privacy-preserving mechanisms in domains such as clinical diagnostics, drug discovery, the Internet of Medical Things (IoMT), and virtual health systems. Dynamic federated models (e.g., DynamicFL) that adjust model architecture to computational heterogeneity, together with encryption-augmented FL architectures, are presented as approaches that maintain data locality while ensuring equitable performance. GAN-based synthetic data generators (e.g., medGAN, CorGAN) offer alternative means of sharing healthcare data without compromising patient identity, though they can introduce new threats if misused. Across these models, a multi-phase life cycle of threats is identified, spanning data collection, model training, inference, and system integration, which highlights the importance of proactive governance. Compliance frameworks such as the EU AI Act and the U.S. AI Bill of Rights are examined as instruments for standardizing technological implementation in healthcare data management. This work surveys existing privacy-preserving AI models, identifies those best suited to ensuring data privacy and shareability with ethical responsibility, and proposes a layered privacy-preservation paradigm essential for safely deploying AI in sensitive environments.