Navigating AI: What Two Years of Hype Actually Taught Us
We now have enough track record to separate what is working from what is salesmanship. The capability is real. So is the discipline required to capture it.

Every leader has heard the same pitch for the past two years. AI will transform your business. The models are ready. Your competitors are moving. You need to act now.
Some of that is true. Some of it is salesmanship. And now, in 2026, we have enough track record to tell the difference.
The technology is genuinely better than the last hype cycle
This is the part people in our profession sometimes resist saying. We have watched technology hype cycles since the early days of ERP, through the first web bubble, through big data, through blockchain. The pattern is always the same: real technology, legitimate use cases, followed by a gold rush that outpaces genuine readiness.
What is different about the current moment is that the underlying capability is substantial. Frontier models such as GPT-4o, Claude 3.7 Sonnet, and Gemini 2.0 can handle tasks that would have been genuinely impossible three years ago. Code generation, document analysis, structured data extraction from unstructured inputs: these work, often at production quality, for a meaningful set of problems.
The hype is real. The capability is also real. That combination is rarer than it sounds.
The failure modes are also real
For every genuine win, we have watched expensive failures. And the failures follow a consistent pattern.
They begin with a proof of concept that works. Someone builds a demo in two weeks. Leadership is excited. A vendor makes a compelling proposal. Resources are committed. Then the system hits production data. And production data is nothing like demo data.
A customer service agent that handled 200 canned questions beautifully falls apart on the 201st, which turns out to be a refund request for a product that has three different SKUs, two of which were discontinued, and one of which has a supplier dispute attached to it. An internal knowledge assistant that worked perfectly on a curated document set starts confidently hallucinating once connected to the full document store, because that store contains contradictory policy documents from 2019, 2021, and 2024, and no one ever resolved the contradictions.
The technology did not fail. The data preparation failed. The governance failed. The problem scoping failed.
Agents are a genuinely new risk category
For two years, the primary risk was a bad answer you could ignore. A chatbot that got something wrong. A summary that missed a nuance. Annoying, but containable. A person reads the output, notices the problem, and corrects it.
Agentic AI removes the person from that loop. An agent does not just answer questions. It takes actions: it sends emails, updates records, places orders, schedules meetings, makes decisions at machine speed. When the data it is working with is incomplete, stale, or contradictory, the agent does not pause and ask for clarification. It acts on what it has. It may complete 400 tasks before anyone notices that the underlying assumption driving those tasks was wrong.
We have seen this in production. A document routing agent that sent 180 contracts to the wrong review queue because a department rename had not been propagated through the metadata. A scheduling agent that booked 60 appointments using availability data cached from the previous day. The capability was working exactly as designed. The data was not.
What the disciplined organizations are doing differently
The organizations getting real value from AI in 2026 share a few characteristics that have nothing to do with the models they chose. They defined the problem before they selected the technology. They built their data foundation before they scaled the application. They started with narrow, high-confidence use cases and earned the right to expand scope.
They also treated AI like a new employee: supervised, with limited initial access, earning trust through demonstrated reliability before taking on more responsibility.
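The new-employee posture can be enforced mechanically with capability tiers, widening an agent's allowed actions only after demonstrated reliability. A minimal sketch; the tier numbers and action names are illustrative assumptions, not a standard:

```python
# Hypothetical capability tiers for a deployed agent.
# An agent starts at tier 0 and is promoted only after a review of its track record.
TIERS = {
    0: {"read_docs"},                                # supervised, read-only
    1: {"read_docs", "draft_email"},                 # human approves every action
    2: {"read_docs", "draft_email", "send_email"},   # earned autonomy
}

def allowed(tier: int, action: str) -> bool:
    """Check an action against the agent's current capability tier."""
    return action in TIERS.get(tier, set())
```

The design choice worth noting is that the default for an unknown tier is the empty set: an agent whose standing is unclear can do nothing, not everything.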
The opportunity is real. So is the discipline required to capture it. Both have been true of every technology cycle worth joining.
