The New Insider Risk: AI Changes How Data Moves Inside the Enterprise

Before the acceleration of AI, insider risk centered on human intent. Security teams monitored high-risk employees prone to downloading files before leaving the company, and negligent employees who engaged in thoughtless behavior (e.g., clicking a phishing link). In other words, the insider risk threat model was based on people doing things they shouldn’t.

AI has fundamentally changed this paradigm. The risk is no longer limited to ill-intentioned employees; it now extends to well-meaning workers who simply want to make their jobs easier. By using widely available AI tools and platforms for work, they may inadvertently share sensitive data outside their secure business environment.

The data on heightened insider risk bears this out. According to Proofpoint’s Voice of the CISO Report, “Two-thirds of CISOs experienced material data loss in the past year, with insider-driven incidents topping the list of causes.”

Data movement has also dramatically changed. Instead of moving vertically through standard approval channels, data now moves horizontally across teams, tools, and AI assistants. In turn, security leaders are struggling to gain visibility into this activity. Data from Proofpoint shows that 80% of CISOs in the U.S. are concerned about customer data leaking through public generative AI platforms.

Consider a simple example: a user asks an AI assistant a question. The model gathers data from internal systems (e.g., CRM, ERP) and multiple external sources (e.g., websites, social media), then forwards the combined results to every member of the user’s collaboration platform, such as Slack.

AI as a Data Movement Layer

AI has fundamentally altered the data lifecycle inside organizations. Traditionally, there were clear handoffs: an employee downloaded a file, edited it, and sent it to a coworker. Security teams could track each step of the process.

AI workflows are not the same. A worker asks a question. The AI assistant pulls information from many sources to answer the question. It condenses documents, changes the data format, and produces an output that can be shared on collaboration platforms. Each step happens in seconds with very little human oversight.

“As organizations adopt autonomous agents that can browse, write code, and act across multiple systems, autonomy becomes a major risk multiplier,” write Proofpoint professionals in a related co-authored post.

This ‘multi-hop’ activity makes data harder to protect. AI agents and plugins can be installed across multiple systems. They gather data from your CRM, compare it with email archives, combine it with information from a collaboration platform, and send the results to an analytics tool outside your company. That’s four or five data movements in one automated workflow.
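The multi-hop pattern described above can be sketched as a simple pipeline. Every system, function, and data value below is a hypothetical placeholder, not a real integration; the point is only that each hop is a separate data movement that per-channel monitoring would treat as an unremarkable, independent event.

```python
# Hypothetical sketch of a multi-hop AI workflow. Each function is
# one "hop"; chained together, they move data from internal systems
# to an external destination in a single automated run.

def fetch_crm_accounts():
    # Hop 1: pull customer records from an internal CRM (placeholder data)
    return [{"account": "Acme Corp", "arr": 120_000}]

def enrich_with_email_archive(records):
    # Hop 2: join against archived email threads (placeholder)
    for r in records:
        r["last_contact"] = "2024-11-02"
    return records

def add_collaboration_context(records):
    # Hop 3: merge in notes from a collaboration platform (placeholder)
    for r in records:
        r["channel_notes"] = "renewal at risk"
    return records

def send_to_external_analytics(records):
    # Hop 4: export to a third-party analytics tool -- the point
    # where the combined data leaves the company boundary.
    return f"exported {len(records)} record(s) to external tool"

result = send_to_external_analytics(
    add_collaboration_context(
        enrich_with_email_archive(fetch_crm_accounts())
    )
)
print(result)
```

Each hop on its own looks routine; only the composed chain reveals that CRM data ended up outside the company.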

The Insider Expansion: Intent vs. Convenience

The previous insider threat model was based on intent. You had bad actors within the company stealing data, and careless workers who didn’t pay attention to security training. AI has tilted the risk scale from intention to convenience.

Employees aren’t trying to cause harm. They’re trying to work more efficiently and meet tomorrow’s deadlines. Pasting a customer contract into ChatGPT and asking for a summary is faster than manually reading through 40 pages. Uploading a financial spreadsheet to an AI assistant generates charts in seconds instead of hours. Each decision prioritizes speed over protocol.

Most workers still don’t know where the policy lines are. Companies have been slow to define what data workers can give to AI tools. They haven’t defined acceptable AI use in the workplace or what happens to data once it’s entered into an AI system. When the rules aren’t clear, people do whatever gets their work done.

Collaboration Platforms as AI Gateways

Collaboration platforms are the first place most data moves into AI systems. Slack, Teams, Gmail, Google Workspace, and Microsoft 365 are often the ‘first hop’ for data as it leaves your company’s environment and is ingested by today’s AI tools.

OAuth permissions and API connectors enable the connection. An employee installs a productivity plugin that claims to summarize Slack conversations or help them write email replies. The app requests extensive access to workspace data. Many people click “allow” without understanding what they’re permitting. Now, a third-party AI service can read months’ worth of messages, shared files, and calendar information.
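One practical control is to review the scopes a plugin requests before it is approved. The sketch below flags broad read access; the scope names are illustrative (Slack-style), and the risky-scope list is an assumption, not an official taxonomy.

```python
# Sketch: flag OAuth scope requests that grant broad read access to
# workspace history. Scope names are illustrative; a real review
# would use the platform's actual scope taxonomy.

BROAD_SCOPES = {
    "channels:history",   # read entire channel message history
    "files:read",         # read all shared files
    "users:read.email",   # read member email addresses
}

def review_scope_request(app_name, requested_scopes):
    # Compare requested scopes against the broad-access watchlist
    risky = sorted(set(requested_scopes) & BROAD_SCOPES)
    if risky:
        return f"REVIEW {app_name}: broad scopes requested: {', '.join(risky)}"
    return f"OK {app_name}: no broad scopes requested"

print(review_scope_request(
    "summarizer-bot", ["chat:write", "channels:history", "files:read"]
))
```

A gate like this won’t replace vendor review, but it surfaces the “click allow” moment to the security team instead of leaving it to each employee.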

CISOs face significant visibility gaps in these workflows. They often can’t answer simple questions about which AI tools employees are using to collaborate. They lack telemetry to track what data flows through these integrations. They don’t know who is sending private information to AI systems or where the outputs go after they’re generated.

Why Traditional Controls Don’t Map Cleanly

Legacy security frameworks were built for different, more predictable threat models. As such, they have significant drawbacks in dealing with how AI actually moves data across modern organizations.

  • Static vs. dynamic: Traditional DLP and CASB controls rely on static governance, which is based on pre-defined rules regarding where data can be sent. AI introduces fluid and multi-step workflows where data is both consumed and redistributed in mere seconds. The velocity and variability of AI workflows make static policies irrelevant and impractical.
  • Human vs. automated: Historically, insider activity required humans to make decisions at most steps. AI partially automates the ingestion, transformation, and redistribution of data through agents and assistants. IAM frameworks grant access to users but are not designed to control what automated AI workflows may request on behalf of a user.
  • Predictable vs. emergent: When workflows follow predictable patterns, security teams can typically pre-classify likely exposure pathways. AI workflows are emergent; an employee may ingest data from five different systems in a way no one anticipated. It’s nearly impossible to create rules to protect all possible combinations of data.
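The static-versus-dynamic gap above can be seen in a toy example: a pattern-based DLP rule catches a raw record but misses the same fact once an AI assistant has paraphrased it. The regex and strings are illustrative only, not a real DLP policy.

```python
import re

# Toy illustration of the static-DLP gap: a rule keyed to a fixed
# pattern fires on raw data but misses an AI-paraphrased version of
# the same sensitive fact, because the format changed.

SSN_RULE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # classic static pattern

raw_record = "Customer John Doe, SSN 123-45-6789, contract value $2M"
ai_summary = "John Doe (social security ending 6789) holds a $2M contract"

print(bool(SSN_RULE.search(raw_record)))   # True  -- static rule fires
print(bool(SSN_RULE.search(ai_summary)))   # False -- same fact, reformatted, rule misses it
```

The sensitive information survives the transformation; the detectable pattern does not, which is why rules written for fixed formats degrade as AI rewrites data in flight.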

What CISOs Are Asking (or Will Soon Ask)

The questions security leaders are asking help point to where the market is going. These aren’t just theoretical questions; they’re real-world challenges that require strategic governance.

  • Which AI tools are sanctioned versus shadow? To build a successful governance framework, you need to understand all the AI applications currently in use.
  • What data categories can enter AI workflows? Companies need to be clear about whether AI systems can handle customer records, financial data, source code, or regulated data.
  • Where does AI-created content travel afterward? AI makes summaries or reports that are shared with other teams or outside partners. It’s just as important to keep track of the output as it is the input.
  • How do we audit, classify, and log AI interactions? Without logging capabilities, security teams can’t investigate incidents or have records ready for compliance regulators when they start asking questions.
  • Who owns AI data governance across the organization? Making it clear who is responsible for governance oversight (e.g., the CISO, data teams, legal, or compliance) helps avoid gaps where no one is held accountable.
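As a starting point for the audit and logging question above, each AI interaction can be recorded with who acted, which tool was used, what category of data was involved, and where the output went. The schema below is a minimal hypothetical sketch; a real deployment would align field names with existing SIEM or compliance schemas.

```python
import json
from datetime import datetime, timezone

# Minimal sketch of an AI-interaction audit record. Field names are
# assumptions, not a compliance standard.

def log_ai_interaction(user, tool, data_category, destination):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,                    # who initiated the interaction
        "tool": tool,                    # which AI tool was used
        "data_category": data_category,  # e.g., customer, financial, source-code
        "destination": destination,      # where the generated output went
    }
    # Serialize to JSON so the record can feed a SIEM or log pipeline
    return json.dumps(record)

entry = log_ai_interaction(
    "jdoe", "chat-assistant", "customer-records", "slack:#sales"
)
print(entry)
```

Even this thin record answers three of the questions above at once: which tool was used, what category of data entered it, and where the output traveled.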

Industry Outlook on Insider Risk 

Insider risk will no longer be just a people problem; it will also involve analyzing AI workflows and data movement patterns. User behavior analytics alone won’t help security teams deal with this. Automated agents, API connectors, and integrations with collaboration platforms are now part of the threat model.

“Organizations will stop treating human signals, identity data, and technical events as separate streams,” predicts Proofpoint. “The next evolution of insider risk management depends on connecting these areas, because true risk rarely shows up in a single dimension.”

For this reason, adopting AI will require shared responsibility across the whole company. CISOs can’t manage data governance, application security, legal policy, and compliance mandates on their own. Expect to see joint accountability frameworks in which chief data officers and security leaders work together on AI-specific governance, rather than operating in isolated units.

Takeaway

AI isn’t creating new insiders. It’s changing how the people who already work with data share it every day. Instead of asking about intent, the focus has shifted to making these tools more visible and to setting up rules that fit how employees actually use them.

:::tip
This story was distributed as a release by Jon Stojan under HackerNoon’s Business Blogging Program.

:::
