An open letter to Mark Zuckerberg: LLaMA is dead as a doornail
Dear Mark,
Like Jacob Marley, LLaMA is dead as a doornail.
No growth since Spring 2025.
Even the miserly firm of Scrooge & Marley would not tolerate a cost center this expensive.
Distribution is not the problem. Distribution is never a problem for Meta:
Facebook, Instagram and WhatsApp give Meta reach that no other company can touch. When Meta ships something that resonates, adoption follows quickly and visibly.
That’s why the signal coming out of LLaMA matters.
Despite massive investment, widespread publicity, and deep integration across Meta’s ecosystem, LLaMA has not driven sustained user adoption. Usage has plateaued. Engagement has flattened. Growth has stalled. The curves are flat.

This is not a marketing failure.
It’s a product signal.
AI is present, but it is not sticking. And when adoption stalls at Meta scale, it means something structural is missing.
Which brings us to the three problems LLaMA must solve before adoption is even possible.
1. Memory
LLMs survive their memory limits the same way JPEGs survived slow networks: through lossy compression.
JPEGs throw away pixels. LLMs throw away facts.
At first glance, the loss isn’t obvious. But look closely and the seams appear: blurred edges, missing detail, artifacts that weren’t visible at first. With LLMs, those artifacts are missing facts and broken continuity.
What JPEGs lose are pixels.
What LLMs lose is truth.
Without 100% lossless memory, AI cannot be trusted. Without trust, there is no adoption. Without adoption, there is no scale. And without scale, the market caps tied to AI infrastructure evaporate.
If you believe memory is a problem you can solve later, please know that a solution to this problem has been filed and is patent pending.
2. Governance
Enterprises will not adopt systems they cannot control. And users need agency as well: over how AI behaves, when it escalates, when it refuses, and how it explains itself. They need visibility, constraint, and the ability to govern outcomes rather than react to them after failure.
Right now, governance is implicit, opaque and centralized. That is tolerable for demos. It is unacceptable for real work.
Joni Mitchell never accepted an instrument as it was handed to her. She tuned it — again and again — until it matched the sound she heard in her heart. She custom-tuned her guitar for many of her songs, including “California.”

Governance in AI should work the same way: not as control imposed from above, but as user-level tuning that lets people shape how the system behaves, remembers and responds.
AI systems that do not give users control will be treated as toys, not tools.
If you believe governance is a problem you can solve later, please know that a solution to this problem has been filed and is patent pending.
3. Revenue
This is the problem the industry keeps avoiding.
Flat-rate pricing does not scale at the enterprise level. Token-based billing does not measure cost. Tokens measure words, and words are an inaccurate proxy for compute.
Consider two scenarios.
A user talks to AI for thirty minutes about his girlfriend:
How she seems distant.
How she is slow to respond to texts.
How she is mysteriously unavailable.
The system dutifully transcribes every word, responds empathetically and consumes a massive number of tokens — all while avoiding the four words a human would scream immediately: SHE’S CHEATING ON YOU!
Now consider a three-word query:
“Is God real?”
Few questions demand more reasoning, context, philosophy and depth. Yet under token-based billing, that interaction may never recover the cost of compute.
That alone should end the debate over billing.
Tokens are not compute. They are a proxy, and an inaccurate one. If you want to bill for cost, you must meter compute.
And if you believe compute-based metering is something you can defer, please know this:
It’s not impossible. It’s patent pending.
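The gap between the two billing models is easy to see in miniature. Here is a minimal sketch: the function names, prices, token counts, and GPU-second figures are all hypothetical illustrations, not measurements of any real model or provider.

```python
# Hypothetical comparison of token-based billing vs. compute metering.
# All prices and workload figures are invented for illustration only.

def token_bill(tokens: int, price_per_1k: float = 0.01) -> float:
    """Bill purely on token count, ignoring how hard the request was."""
    return tokens / 1000 * price_per_1k

def compute_bill(gpu_seconds: float, price_per_gpu_second: float = 0.002) -> float:
    """Bill on metered compute: the GPU-seconds actually consumed."""
    return gpu_seconds * price_per_gpu_second

# Scenario 1: a long, chatty transcript -- many tokens, shallow reasoning.
chatty = {"tokens": 12_000, "gpu_seconds": 4.0}

# Scenario 2: a three-word question that demands deep reasoning --
# few tokens, heavy compute.
deep = {"tokens": 400, "gpu_seconds": 90.0}

for name, req in [("chatty", chatty), ("deep", deep)]:
    print(f"{name}: token bill ${token_bill(req['tokens']):.3f}, "
          f"compute bill ${compute_bill(req['gpu_seconds']):.3f}")
```

Under token billing the chatty session looks expensive and the deep question looks nearly free; metered compute inverts the ranking. Whichever numbers you plug in, the two models diverge whenever token count and reasoning depth are uncorrelated, which is exactly the case the two scenarios above describe.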
Mark, this isn’t a question of intelligence or ambition. Meta has both.
The question is whether users are being given a reason to stay.
AI systems that cannot be steered, taught, or trusted eventually become novelty features — impressive at first, then ignored. Without user agency, every failure feels arbitrary, every success feels disposable and every interaction resets the relationship.
That’s why adoption stalls.
Growth will not come from bigger models or louder launches. It will come from giving users control over how AI behaves, remembers, and adapts to them — turning AI from something they consume into something they collaborate with.
Meta knows how to scale products that people make their own.
AI will be no different, once users are finally allowed to make it their own.