It’s not the model. It’s the business model.
By Alan Jacobson, AI Economics Strategist
AI is compressing margins at 25 companies — and now even OpenAI is pulling back:
- OpenAI is retreating from “everything everywhere.”
Microsoft’s CEO is stepping in to oversee Copilot directly.
- Meta is delaying deployment.
These aren’t product problems.
They’re cost problems.
For the past two years, the narrative around AI has been simple: whoever builds the best model wins.
Bigger models. Better benchmarks. More parameters. Faster releases.
But that narrative is starting to crack — not because the models aren’t improving, but because the economics behind them aren’t working.
Three signals, from three of the most powerful companies in the world, all point in the same direction.
1) OpenAI pulls back from “everything everywhere all at once”
OpenAI was supposed to be the company that did it all.
Chat. Search. Agents. Shopping. Voice. Video. Enterprise. Consumer. Platform.
Everything. Everywhere.
And then — quietly — it wasn’t.
The company has begun pulling back, refocusing on business customers and cutting what one executive described internally as “side quests.”
That’s not a product decision.
That’s a cost decision.
Because every “side quest” carries inference cost. Every feature expands usage. Every expansion increases the gap between what AI costs to run and what it earns.
This is what retreat looks like when the model works — but the business model doesn’t.
2) Satya Nadella steps in
At Microsoft, CEO Satya Nadella has moved to personally oversee engineering teams working on Copilot.
That’s not normal.
CEOs don’t step into product trenches unless something is off.
Copilot isn’t a science experiment anymore. It’s supposed to be a revenue engine embedded across Office, Windows, and Azure.
And yet, despite massive distribution and deep integration, the economics are still unclear:
- high compute cost per query
- unclear willingness to pay
- uneven enterprise adoption
This isn’t about whether Copilot works.
It’s about whether Copilot pays.
3) Meta delays again
At Meta, the story is similar — even with one of the largest user bases on the planet.
A major AI deployment (“Avocado”) has reportedly been delayed after underperforming in internal evaluations.
But the more important signal isn’t the delay.
It’s the hesitation.
Meta has:
- distribution at global scale
- tens of billions in capital to spend
- control of its own platforms
And still — it pauses.
Because deploying AI at scale is not just a technical decision.
It’s a cost commitment.
Every marginal improvement in model performance must justify its marginal increase in cost.
And increasingly, it doesn’t.
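That trade-off can be put into back-of-envelope form. The sketch below is purely illustrative: `margin` and every number in it are hypothetical, chosen to show the shape of the problem, not any company's real economics.

```python
# Toy unit economics: does a bigger model's revenue lift cover its cost?
# All figures are hypothetical, for intuition only.

def margin(users, queries_per_user, cost_per_query, paying_share, price):
    """Monthly gross margin: subscription revenue minus inference cost."""
    cost = users * queries_per_user * cost_per_query
    revenue = users * paying_share * price
    return revenue - cost

# Smaller model: 1M users, 30 queries each, $0.01/query, 2% pay $20/mo.
base = margin(1_000_000, 30, 0.01, 0.02, 20)    # roughly +$100k/month

# Bigger model: 3x the per-query cost buys a 25% lift in conversion.
big = margin(1_000_000, 30, 0.03, 0.025, 20)    # roughly -$400k/month

print(f"smaller model: {base:+,.0f}")
print(f"bigger model:  {big:+,.0f}")
```

Under these made-up numbers, the quality improvement is real — conversion rises — yet the marginal inference cost swallows it and then some.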
The pattern
These are not startups running out of runway.
These are three of the best-resourced companies in the world.
- OpenAI: retreating from breadth
- Microsoft: escalating executive oversight
- Meta: delaying deployment
Different actions.
Same underlying pressure.
The misunderstanding
The current AI conversation is obsessed with:
- model capability
- benchmarks
- features
- agents
- “what’s next”
But businesses don’t run on benchmarks.
They run on margins.
And right now, the core dynamic looks like this:
- inference cost shows up immediately
- revenue shows up later — if at all
- margins get compressed in between
That gap is the problem.
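The gap can be made concrete with a toy cash-flow sketch. Everything here is hypothetical — the flat cost, the revenue ramp, the dollar scale — but the timing mismatch it illustrates is the dynamic described above.

```python
# Toy cash-flow timing: inference cost is paid the month a feature ships,
# while revenue ramps over several quarters. All numbers are illustrative.

def cumulative_gap(monthly_cost, monthly_revenues):
    """Running total of revenue minus cost, month by month (in $M)."""
    gap, out = 0.0, []
    for rev in monthly_revenues:
        gap += rev - monthly_cost
        out.append(gap)
    return out

# Cost is flat at $1.0M/month from day one; revenue ramps $0.1M -> $1.2M.
ramp = [0.1, 0.3, 0.5, 0.8, 1.0, 1.2]
for month, gap in enumerate(cumulative_gap(1.0, ramp), start=1):
    print(f"month {month}: cumulative gap {gap:+.1f}M")
```

Even once monthly revenue finally exceeds monthly cost, the cumulative hole dug in the early months remains — that hole is the margin compression.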
The shift that’s coming
This is why the next phase of AI won’t be won by:
- the best model
- the most features
- the fastest release cycle
It will be won by something much less exciting:
cost control.
The companies that survive this transition won’t be the ones that can do everything.
They’ll be the ones that can afford to.
The bottom line
The models are getting better.
That was never the question.
The question is whether they make money.
And across the most powerful companies in AI, the early answer is becoming harder to ignore:
It’s not the model.
It’s the business model.
– Published Thursday, March 19, 2026
Where is the business model—not the model—driving margin pressure in AI?
Send signal: signal@revenuemodel.ai
I read every signal.