Selling AI is not like selling rubber duckies, Amazon, Google or Facebook

Despite endless comparisons to Amazon, Google, and Facebook, monetizing AI is not like monetizing eCommerce, Search, or Social. Their unit economics improve with scale. AI's do not.

Rubber duckies are cheap to make one at a time, and cheaper to make a million at a time. Amazon loses money early, then spreads fixed costs across more orders. Google loses money early, then amortizes infrastructure across more searches. Facebook loses money early, then shows more ads to more people at essentially zero marginal cost.

Different products. Same physics.

More volume lowers the cost per unit. That’s the engine.
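That engine can be sketched in a few lines. The numbers below are purely illustrative assumptions, not real figures for any company: a fixed up-front cost gets spread across volume, and average cost falls toward the marginal-cost floor. The difference is where that floor sits.

```python
# Purely illustrative assumed numbers: average cost per unit as volume
# grows, under a near-zero marginal cost (search-like) versus a
# meaningful per-response marginal cost (AI-like).

FIXED_COST = 1_000_000      # assumed up-front infrastructure spend
MARGINAL_SEARCH = 0.0001    # assumed near-zero cost to serve one search
MARGINAL_AI = 0.01          # assumed inference cost per AI response

def cost_per_unit(volume, fixed, marginal):
    """Average cost per unit: amortized fixed cost plus marginal cost."""
    return fixed / volume + marginal

for volume in (10_000, 1_000_000, 100_000_000):
    search = cost_per_unit(volume, FIXED_COST, MARGINAL_SEARCH)
    ai = cost_per_unit(volume, FIXED_COST, MARGINAL_AI)
    print(f"{volume:>11,} units | search-like: ${search:.4f} | AI-like: ${ai:.4f}")
```

Both curves fall with volume, but each bottoms out at its marginal cost. Under these assumptions, the search-like floor is a hundredth of a cent; the AI-like floor is a hundred times higher, and no amount of volume amortizes it away.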

AI does not have that engine.

Large language models do not sell a reusable product. They sell a bespoke response to a specific prompt, generated in real time, consumed once, and discarded forever. Every query is custom. Every answer is ephemeral. Nothing is stocked. Nothing is resold. Nothing sits on a shelf waiting for the next buyer.

Ask one question or a billion questions — the system still has to do the work every single time.

And unlike search or social, the cost of answering that question does not trend meaningfully toward zero as usage increases. In fact, it often moves in the opposite direction.

As models become larger, as context windows expand, and as users expect deeper reasoning and longer outputs, the cost per response goes up, not down. More parameters. More tokens. More compute. More energy. More money.

Scale does not fix this. Scale exposes it.
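The direction of that curve is easy to see with a toy token-pricing model. The price and token counts below are assumptions for illustration only; the point is the shape, not the values: per-response cost scales with tokens processed, so deeper reasoning and longer contexts raise unit cost rather than lower it.

```python
# Illustrative only: per-response cost as a function of tokens processed.
# The blended price per 1,000 tokens is an assumed placeholder.

PRICE_PER_1K_TOKENS = 0.002  # assumed blended inference price

def response_cost(context_tokens, output_tokens):
    """Cost of one response: total tokens times the per-token price."""
    return (context_tokens + output_tokens) / 1000 * PRICE_PER_1K_TOKENS

print(f"short chat:     ${response_cost(500, 200):.4f}")
print(f"deep reasoning: ${response_cost(50_000, 4_000):.4f}")
```

Under these assumptions, the long-context response costs roughly 75 times the short one. Usage growth multiplies that cost; it does not dilute it.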

This is the fundamental problem with AI unit economics, and it’s the part most AI coverage avoids because it’s deeply inconvenient to the prevailing narrative.

If scale doesn’t drive costs down, then AI businesses are boxed into only three possible paths:

First, they can radically reduce the cost of producing each response. That means breakthroughs in model efficiency, inference optimization, hardware utilization, or architectural change — not marginal gains, but step-function reductions. So far, those gains have been incremental, not transformative.

Second, they can charge dramatically more for each interaction. That requires AI to deliver value so obvious, so indispensable, that users willingly pay far more than they do today. But user behavior so far tells a blunt story: the overwhelming majority of users pay nothing for AI, and converting free users to paid has proven far harder than expected.

Third, they can rely on advertising. But this is where the comparison to Google and Facebook collapses completely.

Search and social work because attention is cheap and abundant. AI attention is neither. An AI response is expensive to generate, fleeting, and consumed with intense focus for a brief moment. To make advertising work, AI would need to command ad rates orders of magnitude higher than search or social just to cover its costs, in a medium that does not encourage browsing, scrolling, or passive exposure.

That is a brutal mismatch.
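The mismatch can be put in back-of-envelope terms. Both serving costs below are assumed placeholders, not measured figures: the break-even CPM (ad revenue needed per thousand impressions) just to cover the cost of serving scales directly with the cost of each response.

```python
# Back-of-envelope sketch with assumed numbers: the CPM an ad-funded
# product needs merely to break even on serving costs.

SEARCH_COST = 0.0001   # assumed cost to serve one search results page
AI_COST = 0.01         # assumed cost to generate one AI response

def breakeven_cpm(cost_per_impression):
    """Ad revenue required per 1,000 impressions to cover serving costs."""
    return cost_per_impression * 1000

print(f"search break-even CPM: ${breakeven_cpm(SEARCH_COST):.2f}")
print(f"AI break-even CPM:     ${breakeven_cpm(AI_COST):.2f}")
```

Under these assumptions the AI product needs ad rates a hundred times higher than search before it earns a single cent of margin, from a format that offers less surface area for ads, not more.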

None of this means AI is useless. None of this means AI is a scam. And none of this means AI won’t matter.

It means AI is not a rubber ducky business.

It means you cannot assume that growth fixes losses, that scale cures inefficiency, or that time magically converts cost into margin. Those assumptions come from other industries with very different physics.

AI has its own physics. And until investors, executives, and boards confront that reality, they will keep mistaking impressive demos for viable businesses.

The future of AI will not be decided by agents, interfaces, or viral clips.

It will be decided by whether someone figures out how to make the math work.


© 2025 BrassTacksDesign, LLC