The Hallucination Engine

Ever notice how the solution to a problem only creates another problem?

How the cure is sometimes worse than the illness?

How the horseless carriage begat a whole host of problems we never had on horseback?

How social media, which was supposed to connect us, only drove us deeper into isolation?

Submitted for your approval: RAG, or Retrieval-Augmented Generation – a means by which LLMs could bridge the gap between memory and cognition and “fill in the blanks” imposed by the limits of persistent memory and context windows.

What might have seemed like a good idea at the time has turned into a Frankenstein's monster: fabrications and out-and-out lies.

Here is how the story goes. Faced with models that made things up whenever they ran out of context or confidence, the smart kids in the lab reached for an obvious patch. If the model does not know enough, make it look something up. Bolt a search engine onto the side of the brain, feed the results into the prompt, and call it “augmented.” On the whiteboard, it sounded elegant. In the press releases, it sounded miraculous.
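
For readers who want to see the shape of that patch, here is a minimal sketch of the retrieve-then-generate loop described above. Every name in it (embed, vector_index, llm_complete) is a hypothetical placeholder, not any particular vendor's API; the point is only how little stands between a search result and the model's mouth.

```python
# A minimal sketch of the retrieve-then-generate pattern described above.
# Every name here (embed, vector_index, llm_complete) is a hypothetical
# placeholder, not any particular vendor's API.

def answer_with_rag(question, embed, vector_index, llm_complete, k=5):
    """Pull the top-k passages for a question and stuff them into the prompt."""
    # 1. Turn the question into a vector and take whatever ranks highest.
    passages = vector_index.search(embed(question), top_k=k)

    # 2. Concatenate the retrieved text into the model's context window.
    context = "\n\n".join(p.text for p in passages)

    # 3. Ask the model to answer "using" the context. Nothing in this loop
    #    checks whether the passages are accurate, current, or even relevant;
    #    the model synthesizes whatever it was handed.
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return llm_complete(prompt)
```

That is the whole trick: a similarity search, a string join, and a prompt.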

In practice, it turned your friendly autocomplete into a hallucination engine with better props.

RAG does not understand. RAG retrieves. It hauls in whatever happens to float to the top of a ranking algorithm at the moment you ask your question. Maybe it is a solid research paper. Maybe it is a half-baked blog post. Maybe it is an SEO farm dressed up as expertise. All of it is shoved into the model’s short-term memory and then stitched together into something that sounds confident and coherent.

The result is not wisdom. The result is improv.

When retrieval is slightly off, the model is confidently wrong. When the documents disagree, the model happily averages contradictions into nonsense. When the source is outdated, the answer is obsolete but delivered with the same smooth assurance. And because RAG can now sprinkle in quotes and citations, the lie shows up wearing a lab coat and a stethoscope. It looks more trustworthy precisely because it has more moving parts.

We told ourselves that retrieval would “ground” the model. Instead, it often does the opposite. It multiplies the number of places things can go wrong: bad ranking, bad filtering, bad context, bad synthesis. A bigger haystack does not make the needle easier to find. It just gives the hallucination engine more straw to work with.
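
A back-of-the-envelope calculation makes the point concrete. Assume, purely for illustration, that each stage of the pipeline works 95 percent of the time; the numbers are invented, not measurements of any real system.

```python
# Illustrative arithmetic only: the per-stage reliabilities are assumptions,
# not measurements. The point is that independent stages multiply, so a chain
# of "pretty good" steps is noticeably worse than any single step.

stages = {
    "ranking surfaces a relevant source": 0.95,
    "filtering drops the junk": 0.95,
    "the right passages survive into context": 0.95,
    "the model synthesizes them faithfully": 0.95,
}

end_to_end = 1.0
for name, p in stages.items():
    end_to_end *= p

print(f"End-to-end success: {end_to_end:.1%}")             # ~81.5%
print(f"At least one stage failed: {1 - end_to_end:.1%}")  # ~18.5%
```

Four stages that each fail only one time in twenty still produce a pipeline that inherits an upstream failure in roughly one answer out of five.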

And so here we are, with a technology that was supposed to reduce hallucinations, and instead has given us a new species of error: plausible, well-written, source-decorated fiction. The kind of fiction that passes a quick skim, slips into a slide deck, lands in a board memo, and then shapes real decisions about money, infrastructure, health, safety and law.

The tragedy is not that the engineers were stupid. They were not. The tragedy is that the whole stack was built on a quiet assumption: that if you just shove enough “stuff” into the model’s head at the last second, it will somehow turn that chaos into truth. That assumption was wrong. You do not get reliability by throwing more information at an unreliable process. You get more elaborate failure.

So yes, RAG was a clever hack. A quick fix. An emergency bridge between what the models could remember and what the real world actually requires. But as with so many clever hacks, the bill is now coming due. The question is not whether we can retrieve more. The question is whether we can finally admit that retrieval alone will never cure hallucination, and whether we are willing to build something better than a dressed-up hallucination engine.

That is where the real story starts.

My name is Alan Jacobson. I'm a web developer, UI designer and AI systems architect.

I have 13 patent applications pending before the United States Patent and Trademark Office. They are designed to prevent the kinds of tragedies you can read about here.

I want to license my AI systems architecture to the major LLM platforms—ChatGPT, Gemini, Claude, Llama, Copilot, Apple Intelligence—at companies like Apple, Microsoft, Google, Amazon and Meta.

Collectively, those companies are worth $15.3 trillion. That’s trillion, with a “T” — twice the annual budget of the government of the United States. What I’m talking about is a rounding error to them.

With those funds, I intend to stand up 1,414 local news operations across the United States to restore public safety and trust.

AI will be the most powerful force the world has ever seen.

A free, robust press is the only force that can hold it accountable.

You can reach me here.

© 2025 BrassTacksDesign, LLC