Intelligence should be owned, not rented
Good morning, AI enthusiasts. AI is often described as a “digital teammate,” but the reality is more complex.
Embedding AI systems into core workflows reshapes how teams operate — and raises some tough questions: What actually gets faster? What becomes riskier? And where must humans stay in control?
To unpack how this shift is playing out in practice, we sat down with DJ Sampath, SVP of AI Software and Platform at Cisco, on the sidelines of the Cisco AI Summit for a candid look at what it takes to build, secure, and scale with AI in the loop.
In today’s AI rundown:
- The rise of a new agentic workforce
- Sampath’s structured, multi-model workflow
- Rethinking AI readiness from the ground up
- Today’s biggest AI security risk
- Why intelligence should be owned, not rented
LATEST DEVELOPMENTS
AGENTIC SHIFT
🤖 The rise of a new agentic workforce
The Rundown: Cisco sees AI agents as a digital workforce that absorbs routine work (like resolving outages), enabling teams to focus on the complex strategic work — and what comes next. The key takeaway for companies? Mastering human-agent collaboration.
Cheung: Cisco has said AI will make the world feel like it has “80B people.” What does that mean inside a company? And how far can AI go in network ops?
Sampath: For the first time, we’re deploying digital teammates that can plan, reason, and execute with autonomy. Every leader will manage a constellation of agents working in parallel — investigating, analyzing, remediating — while humans move up the stack to creativity, judgment, and strategic direction.
Within 12 months, I expect AI to resolve roughly 80% of pattern-based, routine network incidents autonomously. The remaining 20%, involving multi-vendor, legacy-heavy, or edge-case complexity, will take longer. But just as with self-driving cars, progress will compound.
Sampath added: Over the next five years, the companies that learn to design for human–agent collaboration, with trust, governance, and intent at the core, will define the next era of operational performance.
Why it matters: Humans won’t be replaced by AI, but they will be pushed up the stack. As agents absorb the predictable and procedural, the premium will shift to judgment, creativity, and strategic thinking. The winning edge for businesses will come from pairing that human depth with agents’ speed and scale.
Image: Kiki Wu / The Rundown
AI WORKFLOWS
🛠️ Sampath’s structured, multi-model workflow
The Rundown: Sampath practices what he preaches — using AI to rethink how work gets done daily. From multi-model ideation to coding agents that automate daily briefs, here’s how Cisco’s head of AI actually uses the technology.
Cheung: What kind of workflows are you automating with AI? Can you give a few examples?
Sampath: At work, I’ve been experimenting with a simple but structured way of using multiple AI tools. First, I separate idea generation from evaluation by drafting a memo, strategy, or customer narrative in one model, then bringing it into a second model to critique and improve it. This helps me think more clearly and produce stronger work.
Next, I use Cursor to store context in markdown files and folders that AI can reference. Over time, this builds a knowledge base, like a long-term thought partner that understands my frameworks and past work.
Sampath added: I also connect AI to my calendar and meeting notes to review context before customer, partner, or analyst conversations. Plus, I’ve started using coding agents to automate work like daily briefs, product reviews, and document analysis.
Why it matters: Sampath’s approach shows that the future of work will be defined by how teams stitch together agents, models, and systems into structured but adaptable workflows. The Cursor-as-knowledge-base idea is especially actionable, turning one-off AI interactions into a compounding system that gets smarter over time.
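Sampath’s generate-then-critique split can be sketched in a few lines. This is a hypothetical illustration, not Cisco’s tooling: the two model functions below are placeholders standing in for any two real LLM clients.

```python
# Sketch of the draft-then-critique workflow: one model generates,
# a *different* model evaluates, so the draft's framing doesn't bias
# its own evaluation. The model calls are placeholders -- swap in any
# two real chat-completion clients.

def generator_model(prompt: str) -> str:
    """Placeholder for the drafting model."""
    return f"DRAFT: memo addressing '{prompt}'"

def critic_model(draft: str) -> str:
    """Placeholder for the second model, prompted only to critique."""
    return f"CRITIQUE of {draft!r}: tighten the opening, add evidence."

def draft_and_critique(prompt: str) -> dict:
    # Step 1: idea generation, isolated from evaluation.
    draft = generator_model(prompt)
    # Step 2: a separate model critiques and improves the draft.
    critique = critic_model(draft)
    return {"draft": draft, "critique": critique}

result = draft_and_critique("Q3 customer narrative")
print(result["critique"])
```

In practice, the second model would be given only the draft and a critique prompt, keeping generation and evaluation cleanly separated.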
AI READINESS
🦄 Rethinking AI readiness from the ground up
The Rundown: Most enterprises are held back from AI adoption not by a lack of ambition, but by infrastructure debt and siloed data. Sampath says the real unlock requires pairing modern infrastructure with leadership clarity — and embedding intelligence directly into products.
Cheung: Cisco’s AI Readiness Index shows only 28% of organizations believe they’re ready for AI workloads. What’s holding back the rest, and what does it take to be a true AI company today?
Sampath: What’s holding back the other 72% isn’t just missing GPUs. It’s AI infrastructure debt: legacy networks, fragmented data, siloed tooling. Systems built for yesterday’s applications can’t support the throughput, real-time processing, and autonomy that modern AI demands.
The other key component is pairing modern infrastructure with leadership clarity: governance, strategy, and alignment to business outcomes. Leaders have to solve for both, defining the technology and how work will happen with AI.
Sampath added: The sustainable advantage will come when intelligence is embedded into the product itself. When the model is trained on your contextual enterprise data, it improves continuously and directly drives outcomes. So, the product becomes the model, and the model becomes the product.
Why it matters: Being “AI-ready” means rethinking the whole stack, from infra to security to the application layer. Companies that make AI core to their product (not just a feature) can unlock feedback loops backed by proprietary data, where outcomes improve continuously and enable them to move faster.
AI SECURITY
🧠 Today’s biggest AI security risk
The Rundown: Sampath says the most urgent AI security threat is the compromise of autonomous agents. As these systems take on more tasks, they become powerful attack surfaces, and the risk only grows over time.
Cheung: What’s the most concrete, real AI security threat right now?
Sampath: The most immediate AI security risk is the compromise and misuse of autonomous agents. As enterprises deploy agentic systems that access data, invoke tools, and make decisions independently, those agents become a new attack surface. They can be hijacked, impersonated, or manipulated to exfiltrate data or execute unauthorized commands at machine speed.
We’re already seeing attackers probe these gaps. That’s why we consider AI security from two perspectives: protecting the enterprise from agents and protecting agents from the world — with zero-trust identity, control over agent protocols and tool registries, and continuous behavioral monitoring.
Cheung: If an enterprise is moving from AI pilots to production, what’s the first system they need to harden? And where should a human always stay in the loop?
Sampath: The first thing to harden is the agent infrastructure. The real risk sits in the connective tissue: the protocols that link agents to tools, data, and each other. Standards like the Model Context Protocol (MCP) and Agent2Agent (A2A) have become the backbone of autonomous workflows, but they have scaled faster than the security around them.
As teams roll out agentic AI, anything that affects trust, access, or control over critical systems — granting privileges, changing production environments, authorizing sensitive data access, initiating irreversible actions — should never run fully autonomously. When consequences are real, accountability has to be human.
The right model isn’t human-out-of-the-loop. It’s AI-in-the-loop. Let agents handle the routine and low-risk at speed, and keep humans as the authority where impact is high.
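The AI-in-the-loop pattern Sampath describes can be sketched as a simple approval gate. This is an illustrative assumption, not a Cisco design: the risk categories and the `approve` hook are hypothetical names chosen for the example.

```python
# Minimal sketch of "AI-in-the-loop": routine, low-risk agent actions
# run autonomously; anything touching trust, access, or irreversible
# change is routed to a human approver. Action names are illustrative.

HIGH_IMPACT = {
    "grant_privilege",        # granting privileges
    "change_production",      # changing production environments
    "access_sensitive_data",  # authorizing sensitive data access
    "irreversible_action",    # initiating irreversible actions
}

def requires_human(action: str) -> bool:
    """High-impact actions must never run fully autonomously."""
    return action in HIGH_IMPACT

def execute(action: str, approve=lambda a: False) -> str:
    if requires_human(action):
        # Humans stay the authority where impact is high.
        return "executed" if approve(action) else "blocked: awaiting human approval"
    # Routine, low-risk work runs at machine speed.
    return "executed"

print(execute("restart_service"))                           # low risk, runs autonomously
print(execute("grant_privilege"))                           # gated by default
print(execute("grant_privilege", approve=lambda a: True))   # human signed off
```

The design choice is that autonomy is the default only for actions outside the high-impact set; everything else fails closed until a human approves.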
Why it matters: The shift from models that answer to agents that act introduces a new class of risk: systemic, fast-moving, and hard to contain. To secure operations in this future, organizations will have to bake identity, guardrails, and constant oversight into their agentic deployments, treating agents like real entities.
THE AI COMPANY THESIS
⚡️ Why intelligence should be owned, not rented
The Rundown: Many companies are racing to bolt AI onto existing products, but Sampath argues the real moat comes from embedding intelligence into the product itself — and makes a bold case that the future of AI shouldn’t be controlled by a handful of centralized providers.
Cheung: You’ve said “companies that are a thin shim on top of a model — their days are numbered.” What does that actually mean?
Sampath: We mean this: adding a generative API to an existing product isn’t a strategy — it’s a feature.
Sustainable advantage comes when intelligence is embedded into the product itself. When the model is trained on your contextual enterprise data, it improves continuously and directly drives outcomes. That closed loop — fueled by proprietary machine data — is the moat.
Cheung: If you weren’t at Cisco, what AI problem would you want to work on?
Sampath: The problem I care most about is ownership of intelligence. I don’t believe the future belongs to a handful of centralized models controlled by a few providers. I believe intelligence should be owned by enterprises — and ultimately, by individuals.
That means building a full stack that allows organizations to develop, fine-tune, deploy, and govern models on their own terms.
Why it matters: “Intelligence should be owned, not rented” is an interesting thesis when most enterprises currently leverage centralized AI providers. It raises a question every company should be asking: are you building AI capabilities that compound over time, or renting them from someone who could eventually change the terms?
GO DEEPER
AI SUMMIT
📺 Watch the Cisco AI Summit on-demand
Watch all of the sessions from the Cisco AI Summit on-demand, including:
- ‘Frontier Models & AI’ with OpenAI’s Sam Altman
- ‘The AI Factory – Infrastructure for Intelligence’ with Nvidia’s Jensen Huang
- ‘Enterprise & AI’ with Anthropic Labs lead Mike Krieger
Click here to see all the sessions and watch on-demand.
See you soon,
Rowan, Joey, Zach, Shubham, and Jennifer — the humans behind The Rundown