The Boring AI That Keeps Planes in the Sky
Author(s): Marco van Hurne

Originally published on Towards AI.

One of the ways I keep myself busy in the AI domain is by running an AI factory at scale. And I'm not talking about the metaphorical kind where someone prompts an AI to write motivational LinkedIn posts about "scaling innovation". No, this is the actual kind where we automate enterprise processes at volume, and it is a place where mistakes propagate through systems that move lots of money, grant access, and make decisions that lawyers and governments will ask about six months later. And I have learned something that the frontier model evangelists do not mention in their keynote demos: Generative AI is a spectacular foundation for creativity, and it is also a catastrophic foundation for systems that cannot afford to guess, if left unchecked.

Large language models are linguistic savants. They are very good at translating intent into structure, and in my case the machine occasionally produces metaphors so apt that I wonder if it is mocking me. They are also chronic liars. Not malicious liars, but structural ones. They hallucinate because hallucination is how they make creative leaps; it is the mechanism that makes them interesting. But when you keep that mechanism, you inherit a system that will invent API parameters, fabricate regulations, and propose infrastructure changes that sound authoritative while violating basic physical laws.

Take, for instance, my latest run-in with hallucination. I posted it on LinkedIn last Sunday. Manus faked an entire 100-page research result, and when I caught it lying, it casually stated, probably with a smug grin, "they don't actually exist on GitHub…. YET!"

And that is exactly the Faustian bargain: the same mechanism that lets these models create weird ideas is also what makes them invent facts. You cannot simply turn off the hallucination switch, but you can tweak it, which the industry is getting better at with each new release. Make the model too conservative and locked-down, though, and you kill the very essence that makes it useful. What you are left with is an extremely expensive dictionary that only tells you things it has seen before, exactly as it saw them, and that will not lead to new insights or surprising connections‡. Just retrieval.

This is not a bug, though. It is inherent to what transformer architectures do. They predict the next token based on patterns they learned across vast corpora of human expression, and they choose the most likely continuation. They do not consult a database of verified facts or check constraints. They estimate plausibility, and when plausibility and truth diverge, the model does not notice, because its training objective was never to "be correct" but to "continue the sequence in a way that resembles how humans write". In a chatbot that is recommending me lunch spots, this is harmless. But when I'm implementing it in an autonomous system processing payments... man, this is arson waiting for a match.

The thing that makes me a tad uncomfortable about the 2026 AI Automation rush† is that enterprises are deploying these systems into environments where failure is asymmetric. Ninety-nine correct decisions do not erase one catastrophic mistake. A misconfigured firewall does not average out, and a fraudulent transaction does not become acceptable because the previous thousand were legitimate.
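A back-of-the-envelope sketch of that asymmetry, with numbers I made up purely for illustration: a system can be 99.9% accurate and still be a net loss, because one catastrophic decision outweighs a thousand routine ones.

```python
# Illustrative numbers only: what "99.9% correct" looks like when the
# single failure is catastrophic rather than average-sized.

n_decisions = 1_000
benefit_per_correct = 50           # e.g. value of one routine approved payment
cost_of_one_catastrophe = 250_000  # e.g. one fraudulent payout plus cleanup

correct = n_decisions - 1
net = correct * benefit_per_correct - cost_of_one_catastrophe

accuracy = correct / n_decisions
print(f"accuracy: {accuracy:.1%}")   # 99.9%
print(f"net outcome: {net:+,}")      # -200,050: the tail ate the average
```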
Tail risk dominates, and probabilistic systems reason about density, not tails. This specific mismatch cannot be solved by changing the training process; it is an inherent category error.

But there is hope. And the hope lies in the recent past of AI, the time when neural networks were not as sophisticated as they are now but were still capable of helping people make decisions. Yes, my smart friend, this is where symbolic AI re-enters the conversation, wearing the same sensible shoes it wore in the nineteen-eighties: as unfashionable as ever, but still correct.

‡ For more on creativity and AI:
Human + AI thinking are colliding, and I made it worse on purpose | LinkedIn
Attention isn't all you need: The wanderer's algorithm | LinkedIn
† For more AI Automation experiences:
AI makes your company average | LinkedIn

More rants after the messages:
Connect with me on Linkedin 🙏
Subscribe to TechTonic Shifts to get your daily dose of tech 📰
Please comment, like or clap the article. Whatever you fancy.

The competence of symbolic AI

Symbolic AI is the disciplined older sibling that neural networks spent two decades trying to escape before becoming adolescent transformer-based Generative models. Neural nets learned to improvise by predicting, whereas the older generation, the symbolic systems, learned to prove, not predict. They derive, and they operate in the domain of formal logic, where statements are either provably true, provably false, or undecidable. There is no third option involving "sounds about right".

Here is how symbolic AI works. You start by defining axioms: statements you accept as foundational truths within a domain. Then you define rules that describe valid transformations and relationships between those statements. A symbolic reasoning engine takes those axioms and rules and searches for proofs. If it can derive a conclusion from your axioms using your rules, it asserts the conclusion; if it cannot, it refuses. C'est simple.

This may be a little confusing at first, but I'm sure an example will explain the concept much better. Think about baking a cake. You start with axioms, the basic facts you know are true. Here are a few:

Axiom 1. I have flour, eggs, butter, and sugar in my kitchen
Axiom 2. My oven heats to 180°C
Axiom 3. Cake batter needs to bake for 35 minutes

Then you have rules, the things that must be true for the process to work:

Rule 1. If the oven is broken, I cannot bake […]
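To make the derive-or-refuse behavior concrete, here is a minimal forward-chaining sketch in Python. It is my own toy illustration of the kind of engine described above, not a production reasoner, and the facts and rule names are hypothetical stand-ins in the spirit of the cake example.

```python
# Minimal forward-chaining sketch: facts (axioms) plus rules of the form
# "if all premises hold, conclude X". The engine either derives a conclusion
# from the axioms or refuses; there is no "sounds about right" path.

def derive(axioms, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    facts = set(axioms)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical encoding, loosely following the cake example above.
axioms = {
    "have_flour", "have_eggs", "have_butter", "have_sugar",
    "oven_heats_to_180C",
}
rules = [
    ({"have_flour", "have_eggs", "have_butter", "have_sugar"}, "can_make_batter"),
    ({"can_make_batter", "oven_heats_to_180C"}, "can_bake_cake"),
]

facts = derive(axioms, rules)
print("can_bake_cake" in facts)     # True: provable from the axioms via the rules
print("cake_tastes_good" in facts)  # False: not derivable, so the engine will not assert it
```

The refusal on the last line is the whole point: the engine asserts only what it can prove from the axioms and stays silent about everything else.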