The AI Hype Cycle: Revolutionary Tech, Inflated Expectations

I work with AI daily. I build tools with LLMs, use Claude and small open models alike, and genuinely believe this technology is transformative. But the current discourse around AI has drifted far from reality, and I think it's worth talking about.

This isn't an anti-AI article. It's a call for clarity.

LLMs Are a Revolution — But Not Magic

Large Language Models are one of the most significant advances in computing in decades. They can generate code, summarize documents, translate languages, and assist with tasks that previously required expensive human effort. That's real.

But here's what often gets lost: LLMs are probabilistic text generators. They predict the next token based on statistical patterns learned from training data. They approximate reasoning through statistical pattern learning rather than symbolic logic, and they don't know when they're wrong.
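The "predict the next token from statistical patterns" idea can be made concrete with a toy bigram model. This is a deliberately tiny stand-in, nothing like a real LLM in scale, but the mechanism is the same in spirit: pick the statistically most likely continuation given what came before. No understanding is involved.

```python
from collections import Counter, defaultdict

# Toy corpus; every "pattern" the model learns comes from these 11 words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: the entire "training" step.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Greedy decoding: return the most frequent continuation seen in training.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — it followed "the" more often than "mat" or "fish"
```

An LLM does the same kind of thing with billions of parameters and far richer context, which is why its outputs track the statistics of its training data rather than any internal model of truth.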

Compare this to a linear regression model. When a linear model tells you that increasing ad spend by $1,000 predicts a $200 increase in revenue, you can inspect the coefficients, understand why, and assess confidence intervals. The model exposes its parameters and assumptions; interpretability is built in.
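Here is what that inspectability looks like in practice. The numbers below are fabricated for illustration (an exactly linear relationship, so the recovered coefficient is easy to check), but the point stands: every parameter is a value you can print and explain.

```python
import numpy as np

# Illustrative data: ad spend in $1,000s, revenue in $.
# Constructed as revenue = 200 * spend + 50,000 so the fit is exact.
spend = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
revenue = 200.0 * spend + 50_000.0

# Ordinary least squares via numpy's lstsq: intercept + slope.
X = np.column_stack([np.ones_like(spend), spend])
coef, *_ = np.linalg.lstsq(X, revenue, rcond=None)
intercept, slope = coef

print(f"intercept = {intercept:.1f}, slope = {slope:.1f}")
# slope ≈ 200: each extra $1,000 of ad spend predicts ~$200 more revenue.
```

On real data you would also want standard errors and confidence intervals (e.g. from a statistics library), but even this bare fit shows the contrast: the model's entire decision process fits in two numbers you can audit.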

An LLM gives you a plausible-sounding answer with no explanation of how it got there. For many use cases that's fine. For high-stakes business decisions, that opacity is a problem.

When people say "AI" in a meeting, they usually mean "something that sounds intelligent." That vagueness is dangerous. It turns AI into a black box that leadership buys into without understanding what it can and cannot do.

The "Agent Era" Is Oversold

The current narrative says we've entered the "agent era" — autonomous AI systems that plan, execute, and self-correct across complex workflows. The pitch is compelling: agents that manage your infrastructure, write and deploy code, handle customer support end-to-end.

The reality is more modest. Most "agents" today are LLM calls wrapped in loops with tool access. They work well for constrained, well-defined tasks. They struggle with ambiguity, long-horizon planning, and error recovery. They hallucinate tool calls. They get stuck in loops. For instance, an agent tasked with managing infrastructure may repeatedly call the same failing API or hallucinate a command that doesn't exist — and without guardrails, it will loop indefinitely.
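The "LLM call wrapped in a loop with tool access" description can be sketched in a few lines. Everything here is a hypothetical stand-in (`call_model`, the `TOOLS` registry, the stubbed behavior), not any real framework's API; the stub always requests a nonexistent tool to mimic a hallucinated tool call, and the step cap is the guardrail that keeps that from looping forever.

```python
MAX_STEPS = 5  # guardrail: without a cap, a stuck agent loops indefinitely

# Hypothetical tool registry — the only actions the agent may take.
TOOLS = {
    "restart_service": lambda arg: f"restarted {arg}",
}

def call_model(history):
    # Stand-in for a real LLM call. This one always asks for a tool
    # that doesn't exist, mimicking a hallucinated tool call.
    return {"tool": "reboot_datacenter", "arg": "rack-7"}

def run_agent(task):
    history = [task]
    for step in range(MAX_STEPS):
        action = call_model(history)
        tool = TOOLS.get(action["tool"])
        if tool is None:
            # Unknown tool: record the error and let the model retry.
            history.append(f"error: unknown tool {action['tool']}")
            continue
        history.append(tool(action["arg"]))
    return history

result = run_agent("keep the service healthy")
print(result[-1])  # the agent never recovered — it just ran out of steps
```

Real agent frameworks add retries, reflection, and richer state, but the skeleton is the same, and so is the failure mode: the loop bounds the damage, it doesn't fix the underlying unreliability.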

Are agents useful? Absolutely — for specific, bounded problems. Are we in an "era" where autonomous AI agents are replacing knowledge workers? Not even close.

The gap between the marketing and the reality creates a dangerous dynamic: companies invest heavily based on the promise, then quietly scale back when the results don't match. Meanwhile, the vendors have already locked them in.

The Evaluation Gap

Another issue is evaluation. Many AI systems are deployed without rigorous testing because their outputs look plausible. Unlike traditional software where a function either returns the correct result or doesn't, LLM failures are subtle and probabilistic — a query that works 95% of the time still fails unpredictably on the other 5%.
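Because failures are probabilistic, a single passing test run proves very little; you have to measure a pass rate over repeated trials. The sketch below uses a stubbed "LLM" that fails about 5% of the time (a hypothetical stand-in, seeded for reproducibility) to show why one green run would hide the failure mode entirely.

```python
import random

def fake_llm(query, rng):
    # Stand-in for a probabilistic system: wrong ~5% of the time.
    return "correct" if rng.random() > 0.05 else "plausible but wrong"

def pass_rate(query, trials=1000, seed=0):
    # Repeated-trials evaluation: run the same query many times
    # and report the fraction of correct answers.
    rng = random.Random(seed)
    passes = sum(fake_llm(query, rng) == "correct" for _ in range(trials))
    return passes / trials

rate = pass_rate("summarize this contract")
print(f"pass rate over 1000 trials: {rate:.1%}")
```

A real evaluation framework replaces the stub with actual model calls and an answer-checking function, but the structure is the same: treat correctness as a rate to be estimated, not a box to be ticked once.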

Without careful evaluation frameworks, companies risk shipping systems that appear to work but fail in ways that are hard to detect, hard to reproduce, and hard to debug. And because the failures look like reasonable answers, they can go unnoticed for a long time.

The Vendor Lock-In Trap

This is what concerns me most. The current AI adoption playbook looks like this:

  1. Cloud providers and AI companies offer subsidized access to powerful models
  2. Companies build their workflows, products, and internal tools on top of these APIs
  3. Switching costs accumulate — prompt engineering, fine-tuning, integrations, institutional knowledge
  4. Prices go up, or the model changes, or the vendor pivots — and you're stuck

We've seen this pattern before with cloud computing. Companies moved everything to a single cloud provider for convenience, then discovered that migration costs made switching practically impossible. AI is following the same trajectory, but faster.

And the economics are fragile. Running frontier models is extraordinarily expensive — the current pricing is partly subsidized by venture funding and strategic market capture. If the economics change, many companies will discover that their AI features are far more expensive than they anticipated.

The dependency is even deeper with AI because the model itself is opaque. When you build on a proprietary API, you don't own the model, you don't control the weights, you can't audit the behavior, and you can't reproduce results when the provider updates the model behind the scenes.

According to a Gartner 2025 survey, CEOs themselves believe their executive teams lack AI savviness. Companies are making massive bets on technology that their own leadership doesn't fully understand. The infrastructure is being outsourced before the strategy exists.

What Should You Actually Do?

I'm not saying don't use AI. I'm saying be intentional about it.

Use open-source models where you can

Models like Llama, Mistral, Qwen, and DeepSeek run locally or on your own infrastructure. You own the weights, you control the deployment, and you can fine-tune them for your domain. The performance gap with proprietary models is shrinking fast.

I fine-tuned a 7B parameter model to generate MongoDB queries from natural language with 99% accuracy on unseen schemas. It runs on a single GPU with ~5 GB of VRAM and ~1 second latency. You don't always need GPT-4.

Think about where you actually need an LLM

Not every problem needs a language model. If you need classification, use a classifier. If you need search, use embeddings and vector search. If you need predictions, use traditional ML — it's faster, cheaper, interpretable, and battle-tested.

LLMs shine at tasks involving natural language generation, open-ended reasoning, and working with unstructured text. Use them there. Don't use a 70B parameter model to decide if an email is spam or Claude Code Opus to automate monitoring and alerting for your infrastructure.
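To make the spam example concrete, here is a complete Naive Bayes classifier with add-one smoothing, trained on a four-example toy corpus (illustrative data, not a real spam dataset). It is fully interpretable — you can print every word count that drives a decision — and it runs in microseconds with no GPU.

```python
import math
from collections import Counter

# Toy training set — illustrative only.
train = [
    ("win free money now", "spam"),
    ("claim your free prize", "spam"),
    ("meeting moved to friday", "ham"),
    ("lunch with the team today", "ham"),
]

# "Training" is just counting words per label.
word_counts = {"spam": Counter(), "ham": Counter()}
label_counts = Counter()
for text, label in train:
    label_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for counts in word_counts.values() for w in counts}

def classify(text):
    scores = {}
    for label in word_counts:
        # log prior + log likelihoods with add-one (Laplace) smoothing
        score = math.log(label_counts[label] / len(train))
        total = sum(word_counts[label].values())
        for w in text.split():
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("free money prize"))    # → spam
print(classify("team meeting today"))  # → ham
```

In production you'd use a proper library and a real corpus, but the comparison holds: for a well-defined classification task, a model like this is faster, cheaper, and auditable in a way no LLM prompt is.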

Generating code is not the same as engineering software

AI coding tools are incredibly productive. They autocomplete functions, generate boilerplate, and help you move faster. I use them daily. But there's a narrative forming that AI will replace software engineers, and it fundamentally misunderstands what software engineering is.

Code is not just something you generate — it's an asset and a liability. It needs to be maintained, extended, tested, debugged, and adapted to changing requirements. It needs to be readable by the next person who touches it. Code cannot become a black box.

Auto-generating code is like auto-generating legal contracts — the output might look right, but someone qualified needs to review it, understand the implications, and take accountability for it. Tools like Claude Code are accelerators, not replacements. They make engineers faster, but they don't eliminate the need for engineers who understand architecture, trade-offs, and the business context behind every decision.

The companies that treat AI as a way to drastically reduce headcount in engineering will end up with codebases no one understands and no one can maintain. The ones that treat it as a multiplier for skilled engineers will build better software, faster.

Build internal capability, don't just outsource

The companies that will benefit most from AI are the ones building internal teams that understand the technology — not just consuming APIs. Understanding how models work, when they fail, and how to evaluate them is a competitive advantage. Treating AI as a black box you rent from a vendor is not.

The Hype Will Settle — The Tech Will Stay

Every major technology goes through a hype cycle. The internet had the dot-com bubble. Cloud had its overpromise phase. Mobile, blockchain, VR — all went through inflated expectations before settling into their actual value.

AI will be no different. The hype will cool. Some startups will disappear. Some enterprise AI projects will quietly get shelved. And what will remain is genuinely useful technology that makes people more productive.

The key is to not get caught on the wrong side of the cycle — over-invested in hype, locked into vendors, and dependent on technology you don't understand.

The companies that win in the AI era won't be the ones using the most AI APIs — they'll be the ones who actually understand the technology.
