How a Conviction for Stealing AI Secrets Failed to Fix the Real Problem
Author(s): MohamedAbdelmenem. Originally published on Towards AI.

For engineers and managers building with AI: a historic conviction reveals why our security must now account for non-human "insiders." On January 30, 2026, a federal jury convicted former Google engineer Linwei Ding on 14 federal counts for stealing more than 2,000 pages of AI trade secrets to benefit Chinese ventures, the first U.S. conviction for AI-related economic espionage. This article offers a three-part plan to secure your AI systems against the non-human insiders this case previews, as the threat evolves from human fingerprints to the digital signatures of autonomous AI agents.

Image made by the author.

The article examines the implications of Ding's historic conviction, arguing that organizations must reassess their security protocols to account for non-human threats arising from AI technology. Traditional security models, which focus primarily on human insiders, are outdated in the face of autonomous AI agents that can act with unprecedented speed and efficiency. Ultimately, the article proposes an architectural shift: assume betrayal, and rigorously monitor AI identities to safeguard sensitive information.
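The "assume betrayal" posture can be made concrete with a minimal sketch. Nothing below comes from the article itself; the class names, agent IDs, and scope strings are hypothetical. The pattern, however, is standard zero-trust practice: give each AI agent its own narrowly scoped identity, deny access by default, and audit-log every decision.

```python
# Hypothetical zero-trust access gate for non-human "insiders".
# Each AI agent holds a scoped identity; access is deny-by-default
# and every request is recorded for later review.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str
    allowed_scopes: frozenset  # e.g. {"repo:docs:read"}


@dataclass
class AccessGate:
    audit_log: list = field(default_factory=list)

    def request(self, agent: AgentIdentity, scope: str) -> bool:
        """Grant only explicitly scoped access; log every decision."""
        granted = scope in agent.allowed_scopes
        self.audit_log.append((agent.agent_id, scope, granted))
        return granted


# Usage: a review agent may read docs but not export model weights.
gate = AccessGate()
reviewer = AgentIdentity("review-bot-7", frozenset({"repo:docs:read"}))
assert gate.request(reviewer, "repo:docs:read") is True
assert gate.request(reviewer, "models:weights:export") is False  # denied
```

The design choice worth noting is that the audit log records denials as well as grants: an agent repeatedly probing out-of-scope resources is exactly the "digital signature" of betrayal this architecture is meant to surface.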