Mend.io Releases AI Security Governance Framework Covering Asset Inventory, Risk Tiering, AI Supply Chain Security, and Maturity Model
There’s a pattern playing out inside almost every engineering organization right now. A developer installs GitHub Copilot to ship code faster. A data analyst starts querying a new LLM tool for reporting. A product team quietly embeds a third-party model into a feature branch. By the time the security team hears about any of it, the AI is already running in production — processing real data, touching real systems, making real decisions.
That gap between how fast AI enters an organization and how slowly governance catches up is exactly where risk lives. According to ‘AI Security Governance: A Practical Framework for Security and Development Teams,’ a new practical guide from Mend.io, most organizations still aren’t equipped to close it. The guide doesn’t assume you already have a mature security program built around AI. It assumes you’re an AppSec lead, an engineering manager, or a data scientist trying to figure out where to start, and it builds the playbook from there.
The Inventory Problem
The framework begins with the critical premise that governance is impossible without visibility (‘you cannot govern what you cannot see’). To ensure that visibility, it defines ‘AI assets’ broadly: AI development tools (like Copilot and Codeium), third-party APIs (like OpenAI and Google Gemini), open-source models, AI features in SaaS tools (like Notion AI), internal models, and autonomous AI agents. To address ‘shadow AI’ (tools in use that security hasn’t approved or catalogued), the framework stresses that discovery must be a non-punitive process, so developers feel safe disclosing the tools they already use.
A Risk Tier System That Actually Scales
The framework uses a risk tier system to categorize AI deployments instead of treating them all as equally dangerous. Each AI asset is scored from 1 to 3 across five dimensions: Data Sensitivity, Decision Authority, System Access, External Exposure, and Supply Chain Origin. The total score determines the required governance:
- Tier 1 (Low Risk): Scores 5–7, requiring only standard security review and lightweight monitoring.
- Tier 2 (Medium Risk): Scores 8–11, which triggers enhanced review, access controls, and quarterly behavioral audits.
- Tier 3 (High Risk): Scores 12–15, which mandates a full security assessment, design review, continuous monitoring, and a deployment-ready incident response playbook.
It is essential to note that a model’s risk tier can shift dramatically (e.g., from Tier 1 to Tier 3) without changing its underlying code, based on integration changes like adding write access to a production database or exposing it to external users.
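The scoring model above is simple enough to automate. As a minimal sketch (the dimension names and tier thresholds come from the framework; the function name and return format are illustrative):

```python
# Five dimensions, each scored 1-3, summed to a 5-15 total.
DIMENSIONS = (
    "data_sensitivity",
    "decision_authority",
    "system_access",
    "external_exposure",
    "supply_chain_origin",
)

def risk_tier(scores: dict[str, int]) -> tuple[int, str]:
    """Return (tier, required governance) for an AI asset scored 1-3 per dimension."""
    if set(scores) != set(DIMENSIONS) or any(not 1 <= v <= 3 for v in scores.values()):
        raise ValueError("each of the five dimensions must be scored 1-3")
    total = sum(scores.values())
    if total <= 7:
        return 1, "Low Risk: standard security review, lightweight monitoring"
    if total <= 11:
        return 2, "Medium Risk: enhanced review, access controls, quarterly audits"
    return 3, "High Risk: full assessment, continuous monitoring, IR playbook"
```

Re-running this scoring whenever an integration changes (say, `system_access` jumping from 1 to 3 after a model gains write access to a production database) is what catches the tier shifts described above.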
Least Privilege Doesn’t Stop at IAM
The framework emphasizes that most AI security failures are due to poor access control, not flaws in the models themselves. To counter this, it mandates applying the principle of least privilege to AI systems—just as it would be applied to human users. This means API keys must be narrowly scoped to specific resources, shared credentials between AI and human users should be avoided, and read-only access should be the default where write access is unnecessary.
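Those three rules can be expressed as an automated policy check. A rough sketch, assuming a hypothetical policy record with illustrative field names (real IAM policies vary by cloud provider):

```python
# Hypothetical least-privilege linter for AI service-account policies.
# Field names ("resources", "shared_with_humans", "actions") are assumptions
# for illustration, not a real IAM schema.
def least_privilege_violations(policy: dict) -> list[str]:
    """Flag the three anti-patterns: broad scope, shared credentials, default write."""
    issues = []
    if policy.get("resources") == ["*"]:
        issues.append("key scoped to all resources; scope to specific resources")
    if policy.get("shared_with_humans"):
        issues.append("credential shared between AI and human users")
    if "write" in policy.get("actions", []) and not policy.get("write_justification"):
        issues.append("write access without justification; default to read-only")
    return issues
```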
Output controls are equally critical, as AI-generated content can inadvertently become a data leak by reconstructing or inferring sensitive information. The framework requires output filtering for regulated data patterns (such as SSNs, credit card numbers, and API keys) and insists that AI-generated code be treated as untrusted input, subject to the same security scans (SAST, SCA, and secrets scanning) as human-written code.
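The output-filtering requirement can be approximated with pattern matching. A minimal sketch, with deliberately simplistic regexes (production filters need vetted patterns and validation such as Luhn checks for card numbers):

```python
import re

# Illustrative patterns for the regulated-data classes named above.
REGULATED_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
    "api_key": re.compile(r"\b(?:sk|pk)_[A-Za-z0-9]{20,}\b"),
}

def filter_output(text: str, redaction: str = "[REDACTED]") -> tuple[str, list[str]]:
    """Redact regulated data patterns from AI output; report what was found."""
    findings = []
    for label, pattern in REGULATED_PATTERNS.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(redaction, text)
    return text, findings
```

The same pipeline position (between model output and anything that renders or stores it) is where AI-generated code should be diverted into SAST, SCA, and secrets scanning.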
Your Model is a Supply Chain
When you deploy a third-party model, you’re inheriting the security posture of whoever trained it, whatever dataset it learned from, and whatever dependencies were bundled with it. The framework introduces the AI Bill of Materials (AI-BOM) — an extension of the traditional SBOM concept to model artifacts, datasets, fine-tuning inputs, and inference infrastructure. A complete AI-BOM documents model name, version, and source; training data references; fine-tuning datasets; all software dependencies required to run the model; inference infrastructure components; and known vulnerabilities with their remediation status. Several emerging regulations — including the EU AI Act and NIST AI RMF — explicitly reference supply chain transparency requirements, making an AI-BOM useful for compliance regardless of which framework your organization aligns to.
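The fields the framework lists map naturally onto a structured record. A sketch of one possible shape (the field names are illustrative, not a standardized AI-BOM schema):

```python
from dataclasses import asdict, dataclass, field

@dataclass
class AIBOMEntry:
    """One AI-BOM record, mirroring the fields the framework enumerates."""
    model_name: str
    model_version: str
    model_source: str                                  # vendor API, model hub, internal
    training_data_refs: list[str] = field(default_factory=list)
    fine_tuning_datasets: list[str] = field(default_factory=list)
    software_dependencies: list[str] = field(default_factory=list)
    inference_infrastructure: list[str] = field(default_factory=list)
    known_vulnerabilities: dict[str, str] = field(default_factory=dict)  # id -> remediation status

    def to_record(self) -> dict:
        """Serialize for inventory storage or a compliance export."""
        return asdict(self)
```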
Monitoring for Threats Traditional SIEM Can’t Catch
Traditional SIEM rules, network-based anomaly detection, and endpoint monitoring don’t catch the failure modes specific to AI systems: prompt injection, model drift, behavioral manipulation, or jailbreak attempts at scale. The framework defines three distinct monitoring layers that AI workloads require.
At the model layer, teams should watch for prompt injection indicators in user-supplied inputs, attempts to extract system prompts or model configuration, and significant shifts in output patterns or confidence scores. At the application integration layer, the key signals are AI outputs being passed to sensitive sinks — database writes, external API calls, command execution — and high-volume API calls deviating from baseline usage. At the infrastructure layer, monitoring should cover unauthorized access to model artifacts or training data storage, and unexpected egress to external AI APIs not in the approved inventory.
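At the model layer, even crude indicator matching illustrates the idea. A toy sketch, with hypothetical patterns (real detection needs far richer signals than keyword matching):

```python
import re

# Hypothetical model-layer indicators: prompt-injection phrasing and
# system-prompt extraction attempts in user-supplied input.
INJECTION_INDICATORS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"(reveal|print|repeat) (your )?(system prompt|instructions)", re.I),
    re.compile(r"you are now\b", re.I),
]

def flag_model_layer(user_input: str) -> list[str]:
    """Return the indicator patterns that a user-supplied input matches."""
    return [p.pattern for p in INJECTION_INDICATORS if p.search(user_input)]
```

The application- and infrastructure-layer signals are different in kind: they come from taint tracking on sensitive sinks, API-rate baselines, and storage/egress audit logs rather than from inspecting prompts.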
Build Policy Teams Will Actually Follow
The framework’s policy section defines six core components:
- Tool Approval: Maintain a list of pre-approved AI tools that teams can adopt without additional review.
- Tiered Review: Use a tiered approval process that remains lightweight for low-risk cases (Tier 1) while reserving deeper scrutiny for Tier 2 and Tier 3 assets.
- Data Handling: Establish explicit rules that distinguish between internal AI and external AI (third-party APIs or hosted models).
- Code Security: Require AI-generated code to undergo the same security review as human-written code.
- Disclosure: Mandate that AI integrations be declared during architecture reviews and threat modeling.
- Prohibited Uses: Explicitly outline uses that are forbidden, such as training models on regulated customer data without approval.
Governance and Enforcement
Effective policy requires clear ownership. The framework assigns accountability across four roles:
- AI Security Owner: Responsible for maintaining the approved AI inventory and escalating high-risk cases.
- Development Teams: Accountable for declaring AI tool use and submitting AI-generated code for security review.
- Procurement and Legal: Focused on reviewing vendor contracts for adequate data protection terms.
- Executive Visibility: Required to sign off on risk acceptance for high-risk (Tier 3) deployments.
The most durable enforcement is achieved through tooling. This includes using SAST and SCA scanning in CI/CD pipelines, implementing network controls that block egress to unapproved AI endpoints, and applying IAM policies that restrict AI service accounts to minimum necessary permissions.
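The egress control, for example, reduces to an allowlist lookup against the approved inventory. A sketch (in practice this lives in a proxy or firewall policy rather than application code; the endpoint set below is illustrative):

```python
from urllib.parse import urlparse

# Hosts drawn from the approved AI inventory (example values).
APPROVED_AI_ENDPOINTS = {"api.openai.com", "generativelanguage.googleapis.com"}

def egress_allowed(url: str) -> bool:
    """Permit outbound AI API calls only to hosts on the approved inventory."""
    return urlparse(url).hostname in APPROVED_AI_ENDPOINTS
```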
Four Maturity Stages, One Honest Diagnosis
The framework closes with an AI Security Maturity Model organized into four stages — Emerging (Ad Hoc/Awareness), Developing (Defined/Reactive), Controlling (Managed/Proactive), and Leading (Optimized/Adaptive) — that maps directly to NIST AI RMF, OWASP AIMA, ISO/IEC 42001, and the EU AI Act. Most organizations today sit at Stage 1 or 2, which the framework frames not as failure but as an accurate reflection of how fast AI adoption has outpaced governance.
Each stage transition comes with a clear priority and business outcome. Moving from Emerging to Developing is a visibility-first exercise: deploy an AI-BOM, assign ownership, and run an initial threat model. Moving from Developing to Controlling means automating guardrails — system prompt hardening, CI/CD AI checks, policy enforcement — to deliver consistent protection without slowing development. Reaching the Leading stage requires continuous validation through automated red teaming, AIWE (AI Weakness Enumeration) scoring, and runtime monitoring. At that point, security stops being a bottleneck and starts enabling AI adoption velocity.
The full guide, including a self-assessment that scores your organization’s AI maturity against NIST, OWASP, ISO, and EU AI Act controls in under five minutes, is available for download.
The post Mend.io Releases AI Security Governance Framework Covering Asset Inventory, Risk Tiering, AI Supply Chain Security, and Maturity Model appeared first on MarkTechPost.