We Are Scaling AI Capability Faster Than We Are Scaling Comprehension

Arun Kumar Elengovan, Director of Engineering Security at Okta, recently delivered a keynote at AICCONS 2026, held from April 28 to April 30, 2026, at the University of Wollongong in Dubai, UAE. The conference brought together researchers, engineers, and academic leaders at a time when artificial intelligence continues to evolve at a remarkable pace.

Arun brings a mix of deep technical expertise and real-world execution to his work. He is a Fellow of the British Computer Society, a Gartner Information Security Ambassador, and a member of the Forbes Technology Council. In 2025, he was recognized with international awards such as Excellence in Cloud and AI Security and Cyber Sentinel of the Year. His professional journey and ongoing insights can be followed on his LinkedIn profile.

His keynote, Before You Race Ahead: Relearning AI Foundations in a Fast Moving Era, did not try to introduce yet another trend in an already crowded space. Instead, it did something that felt both simple and necessary. It asked the audience to pause.

He opened with a reflection that immediately resonated across the room. The AI ecosystem is moving fast. Agent-based systems, autonomous workflows, and synthetic data are becoming part of everyday engineering conversations. But understanding, he pointed out, is not keeping pace.

“We are scaling capability faster than we are scaling comprehension,” he said. It was a quiet moment, but one that set the tone for everything that followed.

As the session unfolded, Arun grounded his observations in reality. Many teams today are building systems that are powerful but not always well understood, especially when things go wrong. He made it clear that this is not a distant concern.

“The risks are no longer future risks. They are already in production,” he noted, pointing to issues like prompt injection, data leakage, and hallucination-driven outputs that are already being observed.

Rather than making the discussion more complex, he simplified it. He brought everything back to three foundational ideas. Representation, learning, and reasoning. These are not new concepts, but they are often overlooked in practice.

“No matter how advanced AI becomes, it cannot escape these three foundations,” he said.

One of the most engaging moments came when he described how representation has changed over time. Earlier systems relied on explicit rules and symbolic logic. Today, models operate in vector spaces where meaning is inferred through proximity.

“Meaning has moved from logic to geometry,” he explained. It was a line that stayed with many in the audience because it made a complex shift feel intuitive.
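The shift he described can be made concrete with a small sketch. The toy embeddings below are invented for illustration (real models learn vectors with hundreds or thousands of dimensions), but they show the core idea: meaning is read off from geometric proximity, here measured by cosine similarity.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity: values near 1.0 mean the vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional embeddings, invented purely for illustration.
embeddings = {
    "king":   [0.90, 0.80, 0.10],
    "queen":  [0.88, 0.82, 0.15],
    "banana": [0.10, 0.20, 0.95],
}

print(cosine_similarity(embeddings["king"], embeddings["queen"]))   # high: related meanings
print(cosine_similarity(embeddings["king"], embeddings["banana"]))  # low: unrelated
```

In this geometric picture, no rule ever states that "king" and "queen" are related; the relationship is simply that their vectors sit close together.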

When he spoke about learning, he addressed a common misconception directly. Models are often described as understanding systems. But in reality, they are learning patterns.

“Models are not trained on truth. They are trained on patterns,” he said, bringing clarity to how these systems actually work.

The conversation around reasoning added another layer of reflection. Models can produce outputs that appear logical and structured. But Arun encouraged the audience to question what is happening beneath the surface.

“What looks like reasoning may just be very good pattern completion,” he said, leaving the room thinking.

He also touched on the growing influence of generative models. Their ability to produce fluent responses is what makes them so compelling. But that same fluency can create a false sense of confidence.

“Fluency creates confidence. It does not guarantee correctness,” he said, a point that resonated strongly with researchers focused on evaluation.

Drawing from his experience with production systems, Arun brought the discussion into practical territory. Models behave differently under edge conditions. Context limitations can quietly influence outputs. Failures are not always obvious.

“In production, models do not fail loudly. They fail subtly,” he observed, highlighting the importance of validation and guardrails.
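The guardrail idea he raised can be sketched in a few lines. This is a minimal example, not his implementation: the function and label names are invented, and a real pipeline would add logging, retries, and metrics. The point is that a fluent but off-schema answer is caught at a validation boundary instead of flowing silently downstream.

```python
# Allowed outputs for a model that is expected to return a sentiment label.
ALLOWED_LABELS = {"positive", "negative", "neutral"}

def validate_sentiment(raw_output: str) -> str:
    """Normalize the model's output and reject anything off-schema."""
    label = raw_output.strip().lower()
    if label not in ALLOWED_LABELS:
        # A subtle failure: the model answered fluently, but not with a valid label.
        raise ValueError(f"unexpected model output: {raw_output!r}")
    return label

print(validate_sentiment("Positive"))  # normalized to "positive"
try:
    validate_sentiment("The text is quite upbeat overall")  # fluent, but invalid
except ValueError as err:
    print("guardrail caught:", err)
```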

As the talk progressed, he shifted to one of the most active areas in AI today: agent-based systems. These systems move beyond generating responses and begin to take actions.

“When AI moves from answering questions to taking actions, the risk profile changes completely,” he explained, emphasizing how errors can now compound across steps.

Security and trust naturally became part of the conversation. Arun reframed trust in a way that connected deeply with the audience.

“Trust in AI is not just about what it knows. It is about what it is allowed to do,” he said.
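One way to read that framing is that trust becomes an authorization question. The sketch below, with hypothetical tool names, shows the idea as an explicit allowlist: the agent may invoke only actions it has been granted, and everything else is blocked regardless of how confident the model sounds.

```python
# Hypothetical tool names, invented for illustration. Read-only tools are
# allowed; anything with side effects is blocked and escalated.
ALLOWED_ACTIONS = {"search_docs", "summarize"}

def authorize(action: str) -> bool:
    """Return True only for actions the agent is explicitly allowed to take."""
    return action in ALLOWED_ACTIONS

for action in ["search_docs", "send_email"]:
    if authorize(action):
        print(f"{action}: allowed")
    else:
        print(f"{action}: blocked, escalate to a human")
```

The design choice here is deny-by-default: a new tool is unusable until someone deliberately adds it to the allowlist, which keeps the agent's blast radius bounded.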

For the academic community, his message was thoughtful and clear. There is still meaningful work to be done. In interpretability, in evaluation, and in building systems that are robust and secure.

“Industry optimizes for speed. Academia protects correctness,” he said, acknowledging the role research plays in shaping the future of the field.

Before concluding, Arun offered a simple framework that brought everything together. When building or evaluating AI systems, ask a few basic questions. What is being represented? What is being learned? What kind of reasoning is expected? And where can it fail?

“Clarity before capability is what makes systems trustworthy,” he added.

As the session came to a close, the takeaway was clear. AI will continue to advance. That is not in question. But without a strong understanding of its foundations, that progress can become fragile.

For many attendees, the keynote felt like a reset. In a field that often celebrates speed, Arun Kumar Elengovan’s message was a reminder that understanding is what ultimately makes progress sustainable.


:::tip
This story was distributed as a release by Jon Stojan under HackerNoon’s Business Blogging Program.

:::
