Sorry Wharton, but money is the only metric that matters

Two major 2025 studies reach opposite conclusions about enterprise AI.

  • Wharton says AI is succeeding.
  • MIT says it isn’t.

They’re both right — but only one of them answers the question that matters.

Ironically, the two institutions swapped the roles you would expect:

  • Wharton, a business school, framed AI success in terms of adoption, sentiment, and executive confidence — how many people are using the tools and how leaders feel about the results.
  • MIT, a technology school, applied a far harsher test: whether AI delivers measurable financial impact at enterprise scale.

The tech school followed the money. The business school followed the vibes. That inversion tells you almost everything you need to know about where enterprise AI actually stands in 2025.

What Wharton proves

Wharton’s 2025 AI Adoption Report shows that:

  • AI usage is widespread inside enterprises
  • Employees are using AI weekly or daily
  • Executives say they are “measuring ROI”
  • Leaders report “positive returns”

In short: AI is being used.

That’s not meaningless. It tells us AI has crossed the curiosity threshold. It tells us tools like ChatGPT have become normal at work. It tells us executives feel pressure to show progress.

But none of that answers the only question finance cares about.

What Wharton does not prove

Wharton does not show that AI:

  • materially increases revenue
  • materially reduces costs
  • materially improves margins
  • materially changes enterprise unit economics

“Positive ROI” is self-reported.
Productivity gains are loosely defined.
No P&L linkage is demonstrated.

Usage is not monetization.
Optimism is not earnings.

What MIT measures instead

MIT’s State of AI in Business 2025 report applies a much harsher test:

Does AI produce measurable financial impact at enterprise scale?

Not pilot success.
Not user satisfaction.
Not anecdotal productivity.

Actual dollars.

By that standard, the results are brutal:

  • ~95% of enterprise AI efforts show no measurable P&L impact
  • Only ~5% generate material economic value
  • Most deployments improve individual output but do not move enterprise economics

MIT is not asking whether people like AI.
MIT is asking whether AI pays for itself.

Why the conclusions conflict

The contradiction disappears once you align the definitions.

Wharton defines success as:

  • adoption
  • engagement
  • organizational momentum

MIT defines success as:

  • revenue up
  • cost down
  • margins improved

Those are not equivalent.

Historically, widespread usage without financial impact is not success — it’s the prelude to retrenchment.

The uncomfortable truth

If more people are using AI and enterprises are not making more money or spending less money, then AI is not succeeding as a business technology.

It may be:

  • interesting
  • helpful
  • impressive
  • unavoidable

But it is not economically justified.

In fact, in inference-heavy systems, usage without control often increases cost faster than value.

That is not transformation.
That is burn.
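The mechanics of that burn can be sketched with back-of-the-envelope arithmetic. The sketch below is purely illustrative: every number (per-query inference cost, value captured per user) is an invented assumption, not data from either study. The point it demonstrates is structural: inference cost scales with usage, while captured value often scales only with headcount, so rising adoption can flip a positive net to a negative one.

```python
# Hypothetical illustration (all figures invented): inference-heavy usage
# without cost controls can outrun the value it creates.

def monthly_net(users: int,
                queries_per_user: int,
                cost_per_query: float = 0.02,   # assumed inference cost per query ($)
                value_per_user: float = 15.0):  # assumed value captured per user ($), flat
    """Net dollars per month: value scales with headcount, cost scales with usage."""
    cost = users * queries_per_user * cost_per_query
    value = users * value_per_user
    return value - cost

# "Adoption success": usage per user triples, value per user stays flat.
light = monthly_net(users=1000, queries_per_user=500)    # positive net
heavy = monthly_net(users=1000, queries_per_user=1500)   # net goes negative
print(light, heavy)
```

With these assumed numbers, tripling per-user usage turns a modest monthly surplus into a loss three times its size, even though every usage dashboard would report the heavier scenario as the bigger success.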

Why “usage” is a dangerous metric

Every failed enterprise technology wave looked successful by usage metrics first:

  • ERP
  • CRM
  • big data
  • digital transformation
  • blockchain

Adoption always comes before accountability.

Finance does not kill technologies because they aren’t used.
Finance kills them because they don’t pay.

The only honest conclusion

If the only metric that matters is money — and in business, it is — then:

  • MIT is authoritative
  • Wharton is descriptive
  • and “more people are using AI” is not a victory

It’s a warning signal.

Because when usage is high and returns are low, the next phase is not celebration.

It’s scrutiny.

– Published Sunday, January 25, 2026




© 2025 BrassTacksDesign, LLC