AI doesn’t add up. These numbers don’t lie.
By Alan Jacobson, Systems Architect & Analyst

Can LLMs “scale” their way to profitability by charging a flat rate of $20/month for unlimited compute?

  • Revenue: every man, woman and child in America paying $20/month = $81.6B/year
  • Capex depreciation: $50B/year
  • Staff: $11.25B/year
  • Cost of electricity for compute: $8B/year

$81.6B – ($50B + $11.25B + $8B) = $12.35B in profit on $81.6B in revenue.

But that’s at 100% adoption, including newborns and seniors whose microwaves have been flashing 12:00 since 1998, and before counting the whole host of expenses itemized below.

Where these numbers come from, starting with revenue

U.S. population: 340M
Adoption rate: 100% (every man, woman and child)
Price: $20/month
Annual revenue: 340M × $20 × 12 = $81.6B / year

That’s the absolute ceiling.
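
For anyone who wants to check the arithmetic, a minimal sketch in Python:

```python
# Revenue ceiling: every U.S. resident subscribes at $20/month, forever.
US_POPULATION = 340_000_000
PRICE_PER_MONTH = 20  # dollars

annual_revenue = US_POPULATION * PRICE_PER_MONTH * 12
print(f"Annual revenue ceiling: ${annual_revenue / 1e9:.1f}B")  # $81.6B
```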

Expenses

Capex reality check

Hyperscalers are spending tens of billions each, hundreds of billions collectively.

Microsoft, Google, Meta, Amazon combined: ~$300–350B in recent annual guidance and disclosures.

And that capex is growing, not shrinking.

Even if you amortize generously:

$300B in AI datacenters over 5–7 years
$40–60B per year just in depreciation

That alone eats half to three-quarters of your $81.6B revenue.
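
The straight-line version of that amortization, assuming the ~$300B build-out figure above:

```python
# Straight-line depreciation of a ~$300B AI datacenter build-out.
CAPEX = 300e9  # dollars

for lifetime_years in (5, 7):
    annual_depreciation = CAPEX / lifetime_years
    print(f"{lifetime_years}-year life: ${annual_depreciation / 1e9:.0f}B/year")
# 5-year life: $60B/year
# 7-year life: $43B/year
```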

Staff

Fully-loaded cost of a technical employee at a global AI platform is commonly:

$100K–$200K per year (salary + benefits + payroll taxes + recruiting + overhead), taken as a blended global average, with ~75,000 employees allocated across:

  • AI research
  • Inference infrastructure
  • SRE / ops
  • Security
  • Datacenter staff
  • Product
  • Support
  • Corporate overhead
  • Legal
  • Marketing
  • PR
  • IR
  • HR

75,000 × ~$150K ≈ $11.25B per year

This blended global average is conservative for companies operating global AI platforms.
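
The same math, one line of it:

```python
# Staff cost at a conservative blended global average.
HEADCOUNT = 75_000
FULLY_LOADED_COST = 150_000  # dollars/year, midpoint of the $100K–$200K range

annual_staff_cost = HEADCOUNT * FULLY_LOADED_COST
print(f"Annual staff cost: ${annual_staff_cost / 1e9:.2f}B")  # $11.25B
```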

Electricity for compute

Scenario A: Absurdly optimistic (5 watts per user). This is less than a nightlight.

Total load: 340M × 5 W = 1.7 GW
Annual energy: 1.7 GW × 8,760 h ≈ 14.9 TWh
At ~$0.14/kWh = $2.1B / year

Scenario B: Still optimistic (20 watts per user). This is closer to reality for daily inference + retries + embeddings.

Load: 6.8 GW
Energy: ~59.6 TWh
Cost: ~$8.3B / year

Scenario C: Realistic daily usage (50 watts per user)

Load: 17 GW
Energy: ~149 TWh
Cost: ~$21B / year

That is electricity only.
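
All three scenarios fall out of one formula: population × watts per user × hours per year × price per kWh. Here it is, parameterized:

```python
# Electricity cost for inference under the three per-user power scenarios.
US_POPULATION = 340_000_000
HOURS_PER_YEAR = 8_760
PRICE_PER_KWH = 0.14  # dollars

for label, watts_per_user in [("A (absurdly optimistic)", 5),
                              ("B (still optimistic)", 20),
                              ("C (realistic)", 50)]:
    load_gw = US_POPULATION * watts_per_user / 1e9
    energy_twh = load_gw * HOURS_PER_YEAR / 1_000
    cost_dollars = energy_twh * 1e9 * PRICE_PER_KWH  # 1 TWh = 1e9 kWh
    print(f"Scenario {label}: {load_gw:.1f} GW, "
          f"{energy_twh:.1f} TWh, ${cost_dollars / 1e9:.1f}B/year")
```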

But electricity is not the only expense — it’s just the first line item. You still haven’t paid for:

  • Networking
  • Cooling overhead (often 30–50% of power again)
  • Datacenter construction
  • Land
  • Water
  • Maintenance
  • Redundancy
  • Idle capacity
  • Peak provisioning
  • Failover
  • Free-tier users

Take the most generous scenario:

  • Revenue: $81.6B
  • Capex depreciation: $40–60B
  • Staff: $11.25B
  • Electricity: $2–8B

You are already at $53.25–79.25B in hard costs (tallied in the sketch after this list), before:

  • Profit
  • Growth
  • Retries
  • Margin
  • Outages
  • Mistakes
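
The tally, using the rounded figures above:

```python
# Hard-cost tally under the most generous scenario in the text.
REVENUE = 81.6e9

costs_low  = 40e9 + 11.25e9 + 2e9  # depreciation + staff + electricity (Scenario A)
costs_high = 60e9 + 11.25e9 + 8e9  # depreciation + staff + electricity (Scenario B)

print(f"Hard costs: ${costs_low / 1e9:.2f}B to ${costs_high / 1e9:.2f}B")
print(f"Left over:  ${(REVENUE - costs_high) / 1e9:.2f}B "
      f"to ${(REVENUE - costs_low) / 1e9:.2f}B")
# Hard costs: $53.25B to $79.25B
# Left over:  $2.35B to $28.35B
```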

This is with:

  • 100% adoption
  • Everyone paying
  • Every month
  • Forever
  • With zero churn

Which means the current model only survives because:

  • Usage is throttled
  • Users are subsidized
  • Costs are hidden in capex
  • And pricing is disconnected from compute.

That’s not a growth model.

That’s a rationing model dressed up as abundance.

And once someone says that out loud — the story breaks.

And advertising is not an option

AI is not social media. AI is one-to-one. Social media is one-to-many. Advertising works when platforms can aggregate large audiences with shared attributes and sell access to those audiences. Social media does this by tracking user behavior, inferring interests and maintaining persistent profiles across time and context.

That model does not translate to LLMs.

Targeted advertising requires exactly the capabilities LLM providers have promised not to build: durable user profiling, cross-session behavioral continuity and intent inference that persists beyond a single interaction.

Strip away targeting and ads don’t pay. Add targeting and trust collapses. Either way, advertising fails the moment AI is treated as infrastructure rather than content.

If OpenAI moves toward advertising, it won’t solve the revenue problem. It will expose it — by trading user trust for income that still doesn’t cover inference cost.

Advertising doesn’t fund AI. It liquidates trust to buy time.

And here’s the problem the industry keeps avoiding

  • Flat-rate pricing does not scale at the enterprise level.
  • Token-based billing does not measure cost.
  • Tokens measure words. And words are a terrible proxy for compute.

Consider these two scenarios:

A guy talks to AI for thirty minutes about his girlfriend. He goes on and on…

How she seems distant.
How she is slow to respond to texts.
How she is mysteriously unavailable.

The system dutifully transcribes every word, responds empathetically and consumes a massive number of tokens — all while avoiding the four words a human would scream immediately: SHE’S CHEATING ON YOU!

Now consider a three-word query:

“Is God real?”

Few questions demand more reasoning, context, philosophy and depth. Yet under token-based billing, that interaction may never recover the cost of compute.

And in both cases, look at the asymmetry between input, output and effort. There is no correlation between number of tokens — either in or out — and compute.

That alone should end the debate over billing.

Tokens are not compute. They are a proxy — and an inaccurate one. If you want to bill for cost, you must meter compute.
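
A toy illustration of that mismatch. Every number here is hypothetical (the token price, the marginal GPU cost, the per-request figures), chosen only to show the shape of the problem:

```python
# Toy comparison: token-based billing vs. actual compute consumed.
# All numbers are hypothetical and purely illustrative.
PRICE_PER_1K_TOKENS = 0.01   # dollars, hypothetical token price
COST_PER_GPU_SECOND = 0.002  # dollars, hypothetical marginal compute cost

requests = [
    # (description, tokens billed, GPU-seconds actually consumed)
    ("30-minute girlfriend monologue", 12_000, 15),
    ("'Is God real?'",                    800, 90),
]

for desc, tokens, gpu_seconds in requests:
    billed = tokens / 1_000 * PRICE_PER_1K_TOKENS
    cost = gpu_seconds * COST_PER_GPU_SECOND
    print(f"{desc}: billed ${billed:.3f}, compute cost ${cost:.3f}")
# The monologue overpays; the three-word question is billed less than
# a cent while consuming the most compute.
```

Bill by tokens and these two requests subsidize each other in opposite directions. Bill by compute and each pays its own way.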

Metering compute is not impossible. It’s patent pending.

My name is Alan Jacobson.

A top-five Silicon Valley firm is prosecuting a portfolio of patents focused on AI cost reduction, revenue mechanics, and mass adoption.

I am seeking to license this IP to major AI platform providers.

Longer-term civic goals exist, but they are downstream of successful licensing, not a condition of it.

You can reach me here.

© 2025 BrassTacksDesign, LLC