Databricks: Enterprise AI adoption shifts to agentic systems
According to Databricks, enterprise AI adoption is shifting to agentic systems as organisations move beyond isolated chatbots toward autonomous, multi-agent workflows.
Generative AI’s first wave promised business transformation but often delivered little more than isolated chatbots and stalled pilot programmes. Technology leaders found themselves managing high expectations with limited operational utility. However, new telemetry from Databricks suggests the market has turned a corner.
Data from over 20,000 organisations – including 60 percent of the Fortune 500 – indicates a rapid shift toward “agentic” architectures where models do not just retrieve information but independently plan and execute workflows.
This evolution represents a fundamental reallocation of engineering resources. Between June and October 2025, the use of multi-agent workflows on the Databricks platform grew by 327 percent. This surge signals that AI is graduating from experimental add-on to core component of system architecture.
The ‘Supervisor Agent’ drives enterprise adoption of agentic AI
Driving this growth is the ‘Supervisor Agent’. Rather than relying on a single model to handle every request, a supervisor acts as an orchestrator, breaking down complex queries and delegating tasks to specialised sub-agents or tools.
Since its launch in July 2025, the Supervisor Agent has become the leading agent use case, accounting for 37 percent of usage by October. This pattern mirrors human organisational structures: a manager does not perform every task but ensures the team executes them. Similarly, a supervisor agent manages intent detection and compliance checks before routing work to domain-specific tools.
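The orchestration pattern described above can be sketched in a few lines. This is a minimal, hypothetical illustration of a supervisor routing requests to specialised sub-agents; the intent keywords and sub-agents are invented for the example, and a production system would back each with an LLM rather than a lambda:

```python
# Minimal sketch of the supervisor-agent pattern: detect intent,
# then delegate to a registered domain-specific sub-agent.
# All agents and intents here are hypothetical placeholders.

from typing import Callable, Dict

class SupervisorAgent:
    """Routes a request to a specialised sub-agent after an intent check."""

    def __init__(self) -> None:
        self._agents: Dict[str, Callable[[str], str]] = {}

    def register(self, intent: str, agent: Callable[[str], str]) -> None:
        self._agents[intent] = agent

    def detect_intent(self, query: str) -> str:
        # Stand-in for an LLM-based intent classifier.
        for intent in self._agents:
            if intent in query.lower():
                return intent
        return "general"

    def handle(self, query: str) -> str:
        intent = self.detect_intent(query)
        agent = self._agents.get(intent, lambda q: f"No agent for: {q}")
        return agent(query)

# Hypothetical domain-specific sub-agents.
supervisor = SupervisorAgent()
supervisor.register("invoice", lambda q: "Routed to billing agent")
supervisor.register("compliance", lambda q: "Routed to compliance agent")

print(supervisor.handle("Check this invoice total"))  # Routed to billing agent
```

The design mirrors the managerial analogy in the report: the supervisor never answers the query itself, it only classifies and delegates.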
Technology companies currently lead this adoption, building nearly four times more multi-agent systems than any other industry. Yet the utility extends across sectors. A financial services firm, for instance, might employ a multi-agent system to handle document retrieval and regulatory compliance simultaneously, delivering a verified client response without human intervention.
Traditional infrastructure under pressure
As agents graduate from answering questions to executing tasks, underlying data infrastructure faces new demands. Traditional Online Transaction Processing (OLTP) databases were designed for human-speed interactions with predictable transactions and infrequent schema changes. Agentic workflows invert these assumptions.
AI agents now generate continuous, high-frequency read and write patterns, often creating and tearing down environments programmatically to test code or run scenarios. The scale of this automation is visible in the telemetry data. Two years ago, AI agents created just 0.1 percent of databases; today, that figure sits at 80 percent.
Furthermore, 97 percent of database testing and development environments are now built by AI agents. This capability allows developers and “vibe coders” to spin up ephemeral environments in seconds rather than hours. Over 50,000 data and AI apps have been created since the Public Preview of Databricks Apps, with a 250 percent growth rate over the past six months.
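The ephemeral-environment idea behind those figures can be illustrated with a throwaway database that is created, used, and destroyed programmatically. This is only a sketch: SQLite's in-memory mode stands in for whatever managed OLTP service an agent would actually provision:

```python
# Sketch of an agent-built ephemeral environment: spin up a database,
# apply a schema, and tear it all down automatically on exit.
# SQLite in-memory is a stand-in for a real provisioned environment.

import sqlite3
from contextlib import contextmanager

@contextmanager
def ephemeral_db(schema: str):
    """Create an in-memory database with a schema; destroy it on exit."""
    conn = sqlite3.connect(":memory:")
    try:
        conn.executescript(schema)
        yield conn
    finally:
        conn.close()  # environment torn down automatically

with ephemeral_db("CREATE TABLE runs (id INTEGER, ok INTEGER);") as db:
    db.execute("INSERT INTO runs VALUES (1, 1)")
    count = db.execute("SELECT COUNT(*) FROM runs").fetchone()[0]
    print(count)  # 1
```

Because the environment exists only inside the `with` block, an agent can create and discard thousands of them without leaving residue, which is what makes the 97 percent figure plausible at scale.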
The multi-model standard
Vendor lock-in remains a persistent risk for enterprise leaders as they seek to increase agentic AI adoption. The data indicates that organisations are actively mitigating this by adopting multi-model strategies. As of October 2025, 78 percent of companies used two or more Large Language Model (LLM) families, such as OpenAI's GPT, Claude, Llama, and Gemini.
The sophistication of this approach is increasing. The proportion of companies using three or more model families rose from 36 percent to 59 percent between August and October 2025. This diversity allows engineering teams to route simpler tasks to smaller and more cost-effective models while reserving frontier models for complex reasoning.
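The routing logic this describes can be sketched as a simple dispatcher. The model names and the complexity heuristic below are illustrative assumptions, not part of the report; real routers typically use a classifier model or token-count thresholds:

```python
# Hedged sketch of cost-aware multi-model routing: simple tasks go to
# a small, cheap model; complex reasoning goes to a frontier model.
# Model tiers and the scoring heuristic are made-up examples.

def estimate_complexity(prompt: str) -> int:
    # Toy heuristic: long, multi-step prompts score higher.
    score = len(prompt.split()) // 20
    score += sum(prompt.lower().count(w) for w in ("analyse", "compare", "plan"))
    return score

def route_model(prompt: str) -> str:
    """Pick a model tier based on estimated task complexity."""
    if estimate_complexity(prompt) >= 2:
        return "frontier-model"   # expensive, strong reasoning
    return "small-model"          # cheaper, lower latency

print(route_model("Summarise this sentence."))                     # small-model
print(route_model("Compare these contracts and plan next steps.")) # frontier-model
```

The economic point is in the two return branches: every request that can be answered by the small tier avoids frontier-model pricing without any change to the calling application.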
Retail companies are setting the pace, with 83 percent employing two or more model families to balance performance and cost. A unified platform capable of integrating various proprietary and open-source models is rapidly becoming a prerequisite for the modern enterprise AI stack.
Real-time inference becomes the default

In contrast to big data's batch-processing legacy, agentic AI operates in the moment: the report highlights that 96 percent of all inference requests are processed in real-time.
This is particularly evident in sectors where latency correlates directly with value. The technology sector processes 32 real-time requests for every single batch request. In healthcare and life sciences, where applications may involve patient monitoring or clinical decision support, the ratio is 13 to one. For IT leaders, this reinforces the need for inference serving infrastructure capable of handling traffic spikes without degrading user experience.
Governance accelerates enterprise AI deployments
Perhaps the most counter-intuitive finding for many executives is the relationship between governance and velocity. Often viewed as a bottleneck, rigorous governance and evaluation frameworks function as accelerators for production deployment.
Organisations using AI governance tools put over 12 times more AI projects into production than those that do not. Similarly, companies that use evaluation tools to systematically test model quality achieve nearly six times more production deployments.
The rationale is straightforward. Governance provides necessary guardrails – such as defining how data is used and setting rate limits – which gives stakeholders the confidence to approve deployment. Without these controls, pilots often get stuck in the proof-of-concept phase due to unquantified safety or compliance risks.
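The guardrails mentioned above can be made concrete with a small deployment gate that enforces a rate limit and an allow-list of governed data sources. The policy values and source names are invented for illustration and are not drawn from the Databricks report:

```python
# Illustrative governance guardrail: before an agent call proceeds, it
# must use an approved data source and stay within a rate limit.
# Allow-list entries and limits are hypothetical examples.

import time
from collections import deque

ALLOWED_SOURCES = {"sales_gold", "support_tickets"}  # hypothetical allow-list

class RateLimiter:
    """Permit at most `limit` calls per `window` seconds."""

    def __init__(self, limit: int, window: float) -> None:
        self.limit, self.window = limit, window
        self.calls: deque = deque()

    def allow(self) -> bool:
        now = time.monotonic()
        while self.calls and now - self.calls[0] > self.window:
            self.calls.popleft()  # drop calls outside the window
        if len(self.calls) < self.limit:
            self.calls.append(now)
            return True
        return False

def approved(source: str, limiter: RateLimiter) -> bool:
    """Deployment gate: governed data source AND within rate limit."""
    return source in ALLOWED_SOURCES and limiter.allow()

limiter = RateLimiter(limit=2, window=60)
print(approved("sales_gold", limiter))  # True
print(approved("shadow_db", limiter))   # False: source not governed
```

Encoding the policy in code like this is what gives stakeholders an auditable reason to approve deployment, rather than an unquantified risk to veto.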
The value of ‘boring’ enterprise automation from agentic AI
While autonomous agents often conjure images of futuristic capabilities, current enterprise value from agentic AI lies in automating routine, mundane but necessary tasks. The top AI use cases vary by sector but focus on solving specific business problems:
- Manufacturing and automotive: 35 percent of use cases focus on predictive maintenance.
- Health and life sciences: 23 percent of use cases involve medical literature synthesis.
- Retail and consumer goods: 14 percent of use cases are dedicated to market intelligence.
Furthermore, 40 percent of the top AI use cases address practical customer concerns such as customer support, advocacy, and onboarding. These applications drive measurable efficiency and build the organisational muscle required for more advanced agentic workflows.
For the C-suite, the path forward involves less focus on the “magic” of AI and more on the engineering rigour surrounding it. Dael Williamson, EMEA CTO at Databricks, highlights that the conversation has shifted.
“For businesses across EMEA, the conversation has moved on from AI experimentation to operational reality,” says Williamson. “AI agents are already running critical parts of enterprise infrastructure, but the organisations seeing real value are those treating governance and evaluation as foundations, not afterthoughts.”
Williamson emphasises that competitive advantage is shifting back towards how companies build, rather than simply what they buy.
“Open, interoperable platforms allow organisations to apply AI to their own enterprise data, rather than relying on embedded AI features that deliver short-term productivity but not long-term differentiation.”
In highly regulated markets, this combination of openness and control is “what separates pilots from competitive advantage.”
See also: Anthropic selected to build government AI assistant pilot
