Kyle Hennessy

Why Most AI ROI Projections Are Wrong (And How to Get It Right)

AI ROI projections are often overstated. Here's where they go wrong — and how to build a business case that survives contact with reality.

If you’ve sat through an AI vendor pitch, you’ve seen the ROI slide.

It’s always impressive. A chart showing costs going down and productivity going up. A payback period measured in months. A return multiple that makes the investment look like a no-brainer.

And yet, when you talk to companies that have actually deployed AI, the story is often different. The savings didn’t materialize quite like the model predicted. The timeline slipped. The costs were higher than expected. The ROI that looked so compelling on the slide turned out to be optimistic at best, fictional at worst.

This isn’t because AI doesn’t work. It does. There are real, measurable returns to be had. The problem is how ROI gets calculated — and more specifically, what gets left out of the calculation.

If you’re evaluating an AI investment, you need to understand where these projections go wrong. Not so you can dismiss AI entirely, but so you can build a business case that actually holds up when reality arrives.


The five ways AI ROI projections go wrong

1. They measure the wrong things

The most common ROI error is measuring what’s easy to measure instead of what actually matters.

Time savings is the classic example. A projection might claim: “This tool will save each employee 5 hours per week.” Sounds great. But here’s the question nobody asks: what happens to those 5 hours?

If those hours get absorbed into other low-value work, you haven’t created any real value — you’ve just shifted where time gets wasted. If employees use that time to leave early or browse the internet, you’ve improved their quality of life, but you haven’t improved your bottom line.

Time savings only translate to ROI when they convert to one of three things: actual cost reduction (fewer people needed), revenue generation (time redirected to selling or serving customers), or capacity increase (ability to handle more volume without adding headcount).

A rigorous ROI projection doesn’t stop at “time saved.” It traces that time all the way to a financial outcome.

The fix: For every projected benefit, ask: “And then what?” Keep asking until you reach a dollar figure that would actually show up in financial statements.
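That “and then what?” chain can be made concrete with a few lines of arithmetic. The sketch below uses entirely hypothetical numbers (headcount, hourly cost, and especially the conversion rate — the share of saved time that actually becomes cost reduction, revenue, or capacity) to show how the trace works:

```python
# Hypothetical sketch: tracing "5 hours saved per week" to a dollar figure.
# Every number here is an illustrative assumption, not a benchmark.

HOURS_SAVED_PER_WEEK = 5
EMPLOYEES = 40
WEEKS_PER_YEAR = 48

total_hours_saved = HOURS_SAVED_PER_WEEK * EMPLOYEES * WEEKS_PER_YEAR

# "And then what?" — only the fraction of saved time that converts to a
# financial outcome counts. Assume 30% is redeemed as real capacity.
CONVERSION_RATE = 0.30
LOADED_HOURLY_COST = 75  # fully loaded cost per employee-hour (assumption)

annual_value = total_hours_saved * CONVERSION_RATE * LOADED_HOURLY_COST

print(f"Raw hours saved per year: {total_hours_saved:,}")
print(f"Value that reaches the P&L: ${annual_value:,.0f}")
```

Notice how much the answer depends on the conversion rate, not the hours: at 30% conversion, a “9,600 hours saved” headline shrinks to a much smaller figure that could actually appear in a financial statement.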


2. They ignore the adoption curve

ROI models typically assume full adoption from day one. The tool gets deployed, everyone uses it immediately, and benefits start flowing.

This almost never happens.

Real adoption follows a curve. There’s a learning period. There’s resistance from people who prefer the old way. There are technical issues that need to be resolved. There are edge cases the tool doesn’t handle well. Months can pass before usage reaches the levels assumed in the projection.

During that adoption period, you’re paying for the solution but not yet receiving the full benefits. The longer adoption takes, the further your actual ROI drifts from the projection.

Some implementations never reach full adoption at all. The tool works, but only 40% of the target users actually use it consistently. That caps you at roughly 40% of the projected benefit — which might still be worthwhile, but it’s a very different business case than what was originally presented.

The fix: Build adoption assumptions into your model explicitly. What percentage of users will be actively using the tool at 30, 60, and 90 days? What’s your plan to drive adoption, and what resources does that require? Stress-test your ROI at different adoption levels.


3. They undercount the real costs

The sticker price of an AI solution is rarely the actual cost.

Implementation costs are frequently underestimated. Integrating with existing systems takes longer than expected. Data needs to be cleaned, formatted, or migrated. Custom configuration is required. The “quick deployment” turns into a multi-month project.

Ongoing costs get overlooked entirely. Who maintains the system? Who handles exceptions and edge cases? Who retrains the model when performance degrades? Who manages the vendor relationship? These tasks require time — often from your most capable people.

Opportunity costs are invisible but real. Every hour your team spends on AI implementation is an hour not spent on something else. If your best engineer is configuring an AI tool for three months, what projects didn’t get done?

And then there are the costs of failure. If the project doesn’t work, you don’t just lose the investment — you lose credibility for future initiatives. The organization becomes more skeptical, more resistant, more likely to say “we tried that AI thing and it didn’t work.”

The fix: Build a comprehensive cost model that includes implementation, integration, training, ongoing maintenance, internal time, and a realistic contingency for overruns. Then add 20% more, because you’re probably still underestimating.
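A comprehensive cost model doesn’t need to be elaborate — it needs to be exhaustive. The sketch below shows the shape of one; the line items are the categories named above, and every dollar figure is a hypothetical placeholder:

```python
# Hypothetical total-cost model for an AI deployment. Line items mirror
# the categories in the text; all figures are illustrative assumptions.

costs = {
    "license (year 1)":             60_000,
    "implementation & integration": 45_000,
    "data cleanup & migration":     20_000,
    "training & change management": 15_000,
    "internal time (engineering)":  30_000,
    "ongoing maintenance (year 1)": 25_000,
}

subtotal = sum(costs.values())
contingency = subtotal * 0.20  # the "you're probably still underestimating" buffer

total = subtotal + contingency

print(f"Sticker price:    ${costs['license (year 1)']:,}")
print(f"All-in subtotal:  ${subtotal:,}")
print(f"With 20% buffer:  ${total:,.0f}")
```

In this made-up example the all-in cost lands at nearly four times the sticker price — which is exactly the kind of gap a sticker-price-only ROI model hides.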


4. They assume stable conditions

ROI projections are snapshots. They assume that the problem you’re solving today will still be a problem tomorrow, at the same scale, with the same characteristics.

But businesses change. The process you’re automating might get restructured. The team using the tool might get reorganized. The data sources might shift. The competitive environment might evolve in ways that change your priorities entirely.

A two-year payback period assumes two years of stable conditions. That’s a big assumption in most businesses.

There’s also technology risk. AI is moving fast. The solution that’s cutting-edge today might be commoditized in 18 months. The platform you’re building on might pivot or get acquired. The capabilities you’re paying a premium for might become standard features in tools you already own.

This doesn’t mean you shouldn’t invest. It means you should be thoughtful about time horizons and build flexibility into your approach.

The fix: Favor shorter payback periods and modular implementations. Be skeptical of any ROI case that requires three or more years to break even. Build in decision points where you can reassess.


5. They conflate potential with probability

This is perhaps the most insidious error, because it’s rarely intentional.

ROI projections often describe what could happen under ideal conditions: full adoption, perfect execution, everything going according to plan. But they present these numbers as if they’re what will happen.

The gap between potential and probability can be enormous. Yes, AI could reduce processing time by 80%. But what’s the probability that you’ll actually achieve that? Based on what evidence?

Vendors have every incentive to present the optimistic case. They’re selling. It’s your job to discount those projections appropriately — not because vendors are lying, but because they’re showing you the upside without fully weighting the risks.

The fix: For every benefit claimed, assign a probability. Not a precise number — just a rough sense of confidence. “We’re 90% confident we can achieve X” is a very different statement than “We’re 40% confident we can achieve X.” Weight your projected returns accordingly.
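Probability-weighting is simple enough to do in a spreadsheet or a few lines of code. The benefit names, dollar values, and confidence levels below are all hypothetical — the point is the gap between the headline sum and the weighted one:

```python
# Hypothetical probability weighting of claimed benefits. Each entry pairs
# a projected annual benefit with a rough confidence that it materializes.

claimed_benefits = [
    ("processing time reduction", 200_000, 0.9),
    ("error-rate reduction",       80_000, 0.6),
    ("new upsell revenue",        150_000, 0.4),
]

headline = sum(value for _, value, _ in claimed_benefits)
weighted = sum(value * p for _, value, p in claimed_benefits)

print(f"Headline (vendor-slide) benefit: ${headline:,}")
print(f"Probability-weighted benefit:    ${weighted:,.0f}")
```

Even with rough, hand-assigned probabilities, the weighted figure is the more honest one to put in front of a CFO — and the exercise of assigning each probability often surfaces exactly which claims have no evidence behind them.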


How to build an ROI case that actually holds up

Given all these pitfalls, how should you approach AI ROI? Here’s a framework that acknowledges uncertainty while still enabling confident decision-making.

Start with the problem, not the solution

Before you calculate any returns, make sure you deeply understand the cost of the problem you’re solving.

What is this problem actually costing you today? Not in theoretical terms — in real, observable terms. How much time? How many errors? How much revenue lost or delayed? What would you be doing differently if this problem didn’t exist?

If you can’t quantify the problem, you can’t meaningfully quantify the solution’s value. And if the problem isn’t that costly to begin with, maybe it’s not worth solving with AI at all.

Define success in terms you can actually measure

What specific metrics will tell you whether this investment worked?

Good success metrics are:

  • Observable: You can actually track them with data you have access to.
  • Attributable: Changes can reasonably be connected to the AI implementation, not confounded by other factors.
  • Meaningful: They connect to financial outcomes, not just activity metrics.

“Improved efficiency” is not a success metric. “Reduced average processing time from 45 minutes to 12 minutes” is.

Model scenarios, not just outcomes

Instead of a single ROI projection, build three scenarios:

Conservative: Adoption is slower than expected, benefits are at the low end of estimates, costs run over. What does ROI look like?

Expected: Things go roughly according to plan, with normal friction and challenges.

Optimistic: Everything works well, adoption is high, benefits materialize quickly.

If the conservative case still shows acceptable returns, you have a robust investment. If you need the optimistic case to justify the investment, you should be nervous.
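The three-scenario exercise can be laid out in a few lines. The benefit and cost figures below are invented for illustration; what matters is seeing all three ROI numbers side by side instead of just the middle one:

```python
# Hypothetical three-scenario ROI model. All figures are illustrative.

def first_year_roi(annual_benefit, total_cost):
    """Simple first-year ROI: net benefit as a fraction of cost."""
    return (annual_benefit - total_cost) / total_cost

scenarios = {
    # (annual benefit, total first-year cost)
    "conservative": (150_000, 180_000),  # slow adoption, cost overruns
    "expected":     (250_000, 150_000),  # normal friction
    "optimistic":   (400_000, 130_000),  # everything goes right
}

for name, (benefit, cost) in scenarios.items():
    print(f"{name:>12}: ROI {first_year_roi(benefit, cost):+.0%}")
```

In this made-up case the conservative scenario loses money in year one while the optimistic one triples it — a spread that a single-number projection would have completely concealed.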

Build in checkpoints

Don’t commit to a full implementation before you have evidence that it works.

Structure your investment in phases. Start with a pilot. Define what success looks like for the pilot. If the pilot succeeds, expand. If it doesn’t, you’ve limited your losses and learned something valuable.

This isn’t timidity — it’s intelligence. The best ROI comes from scaling what works, not from betting big on what might work.

Agree on the math before you start

Here’s a practice that will save you enormous headaches: before you begin any AI project, align with all stakeholders on how ROI will be calculated.

What costs are included? What benefits count? Over what time period? How will you measure? Who has to agree that the targets have been met?

Getting this alignment upfront prevents the frustrating post-hoc debates where different people have different definitions of success. It also forces clarity about what you’re actually trying to achieve — which often reveals misalignment that’s better discovered early.


The honest conversation about ROI

Here’s what we tell clients: we don’t start projects without an agreed-upon ROI target.

Not because we’re trying to cover ourselves, but because that discipline is what separates projects that create real value from projects that create expensive case studies about what not to do.

If we can’t articulate a clear, credible path to returns — in terms that would survive scrutiny from a skeptical CFO — we’ll tell you. We’d rather lose a project than take your money for something that won’t pay off.

That might sound like a limitation. We think it’s a feature. It means that when we do move forward, we’ve done the hard thinking upfront. We’ve pressure-tested the assumptions. We’ve aligned on what success looks like.

That’s how you build AI initiatives that actually deliver.


What this means for your next AI investment

If you’re evaluating an AI opportunity, bring healthy skepticism to any ROI projection — including ones you build yourself. Ask hard questions:

  • What assumptions is this model making about adoption? Are those realistic?
  • What costs aren’t included here? What could go wrong?
  • How confident are we in these benefit estimates? What’s the evidence?
  • What does ROI look like if things don’t go perfectly?
  • How will we actually measure whether this worked?

You’re not trying to kill the project. You’re trying to make sure that if you proceed, you proceed with clear eyes. The goal isn’t to avoid all risk — it’s to understand the risks you’re taking and make sure the potential reward justifies them.

AI can deliver real, substantial returns. But only when the business case is built on honesty, not hope.


Want a realistic view of what AI could do for you?

We help companies cut through the hype and build AI business cases that actually hold up. No inflated projections. No best-case-scenario math. Just an honest assessment of what’s possible and what it would take to get there.

Want to find out where AI fits in your business?

We’ll help you identify the opportunities, understand the ROI, and figure out what’s actually worth doing.