We don’t need more powerful models. We need models that make dollars and sense.

By Alan Jacobson, AI Economics Strategist

Another week, another “step change in capability,” according to Fortune’s Jeremy Kahn.

This time, it’s Anthropic’s reportedly more powerful model, Mythos—a system described as a meaningful leap beyond its current generation. The details surfaced in an unsecured draft blog post, an unusual disclosure that nonetheless underscores a familiar pattern in the AI race: progress is measured in capability, not in economics.

That framing is becoming harder to sustain.

Because the central issue facing AI today isn’t whether models can do more. It’s whether the business of using them actually works.

Across the industry, a consistent dynamic is emerging.

  • Revenue is rising.
  • Usage is growing.
  • Adoption—at least in certain domains—is real.

But so are the costs required to support that growth.

AI doesn’t behave like traditional software, where scale drives margin expansion. It behaves more like a utility, where every unit of usage carries a corresponding unit of cost. The more powerful the model, the more it tends to be used—and the more compute it consumes in the process.
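The contrast can be sketched with toy numbers (all figures hypothetical, chosen only to illustrate the shape of the curves): under a mostly fixed cost base, margin expands with scale; under a per-unit cost base, it stays flat no matter how large usage grows.

```python
# Toy comparison (all numbers hypothetical): gross margin as usage scales
# under a fixed-cost model (traditional software) vs. a per-unit-cost
# model (utility-like AI inference).

def software_margin(users, price=10.0, fixed_cost=50_000.0):
    """Margin improves with scale: cost is mostly fixed."""
    revenue = users * price
    return (revenue - fixed_cost) / revenue

def utility_margin(users, price=10.0, unit_cost=8.0):
    """Margin is flat: every unit of usage carries a unit of cost."""
    revenue = users * price
    return (revenue - users * unit_cost) / revenue

for users in (10_000, 100_000, 1_000_000):
    print(f"{users:>9} users  software: {software_margin(users):5.0%}"
          f"   utility: {utility_margin(users):5.0%}")
```

At ten thousand users the two models look similar; at a million, the fixed-cost business approaches a 99% margin while the per-unit business is still paying the same 80 cents on every dollar it earns.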

That creates a structural tension.

Each new generation of models increases capability.
But it also increases:

  • compute consumption
  • cost volatility
  • margin pressure

So the industry finds itself in a paradox:

Revenue is up.
Costs are up.
Profits remain unclear.

In that context, a “step change in capability” may be technically impressive—but economically incomplete.

Part of the challenge lies in how usage is measured.

Most systems today rely on tokens as the unit of billing and observation. But tokens are, at best, a rough proxy for the work being performed.

They are semantically blind: treating every token as if it carries the same computational weight, even when the underlying tasks differ dramatically in complexity.

And they are operationally blind: missing large portions of agentic execution—tool calls, retrieval, iteration, and background processes—that consume compute without generating corresponding tokens.

The result is a system where the primary cost driver—compute—is only partially visible, and therefore difficult to predict, govern, or control.
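One way to see the gap is to tally a hypothetical agentic request (every step name and cost below is invented for illustration): only some steps emit billable tokens, yet all of them consume compute.

```python
# Hypothetical agentic request trace (all steps and costs invented for
# illustration). Only some steps emit billable tokens; tool calls,
# retrieval, and background work consume compute that token-based
# metering never sees.

trace = [
    # (step,            tokens, compute_units)
    ("prompt + answer",   1200,  1.0),
    ("tool call",            0,  0.6),
    ("retrieval",            0,  0.8),
    ("iteration pass",     400,  0.4),
    ("background job",       0,  0.7),
]

token_visible = sum(c for _, t, c in trace if t > 0)
total_compute = sum(c for _, _, c in trace)

print(f"compute visible via tokens: {token_visible / total_compute:.0%}")
```

In this invented trace, token metering accounts for only 40% of the compute actually consumed; the remaining 60% is invisible to the billing unit, which is exactly the predictability problem described above.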

That’s why, even as companies push for “show me value” and move beyond experimentation, the economics remain unsettled.

The problem isn’t a lack of innovation.
It’s a lack of alignment between cost and value.

Until that alignment exists, more powerful models will continue to amplify the underlying issue rather than resolve it.

Which leads to a reframing of the question.

The next breakthrough in AI may not be a model that can do more.

It may be a system that makes what it does economically sustainable.

Because ultimately, the market won’t be defined by who builds the most powerful model.

It will be defined by who builds the first one that makes dollars—and sense.

– Published on Thursday, March 26, 2026




© 2025 BrassTacksDesign, LLC