I’m Winston Wolf. I solve problems.

I am tired. I am 70 years old and I have been working twenty-hour days (once I worked 42 hours without sleep) for the past 5 weeks and 5 days trying to keep one thing from going off the rails.

I am not doing this to build a brand. I am not doing this to get rich. I am doing this because I have seen how fast something important can get fucked up when the public gets scared and the wrong people start steering the story.

I am talking about artificial intelligence.

Before we go any further I want to say this out loud. If you are afraid of AI right now you are not crazy. You are not naïve. You are not “anti tech.”

You are right.

You are watching something humans have experienced before: what happens when people are crushed by unbridled capitalism.

Capitalism took shape in the late 1500s, in the hands of merchants and traders. It wasn’t out of control then. It only lit up after it gained industrial horsepower.

In 1769 James Watt patented his design for an improved steam engine. That machine was the fuse.

Steam power meant a factory owner could rip labor out of the human body and drop it into a machine that ran all day so you could scale extraction.

You could take one mill and turn it into twenty. You could turn one mine into an empire. You could turn one man’s money into a company town that owned the workers, the food, the housing and the law.

After Watt’s engine, capital wasn’t just rich. Capital was weaponized.

Children worked in mines. People lost hands in machines. People watched their lungs turn black at 30. Company towns owned the houses and the stores and the cops. Workers were disposable. Whole communities were chewed up so somebody else could get rich.

And the message from power back then was always the same. Sit down. Be grateful. This is progress.

That is what unregulated technology plus financial incentive looks like when there are no brakes, no duty of care, no voice for the people who are about to carry the damage.

People bled until safety rules and labor law were forced into existence. It took years. It took bodies.

And that is not the only example. What do you get when war is waged at industrial scale? You get World War I.

So if you’re looking at AI right now and thinking: they’re about to do that again, only faster, only at machine scale, you’re not paranoid. You’re reading history. Your fear is valid.

And we’ve already seen what happens when the people aren’t just exploited but misled.

There is a scene in The Newsroom where MacKenzie McHale explains the job of journalism. She says “there is nothing more important to a democracy than a well-informed electorate. Because an uninformed public can lead to calamitous decisions and clobber any attempts at vigorous debate.”

She is not being melodramatic. She is describing the world we are living in right now. Elections decided on lies. Wars sold on lies. Public health turned into chaos because lies moved faster than truth.

“A lie can travel halfway around the world while the truth is putting on its shoes.” ― attributed to Mark Twain

You saw it. You watched people act on garbage information at national scale and you watched the bill come due in real life.

Now connect that to AI.

Artificial intelligence is not just math. It is about to sit in that same bloodstream. It will talk to people. It will advise people. It will persuade people. It will nudge people. It will do it fast. It will do it at scale. It will do it with confidence.

If AI is misaligned, and we ship it anyway, and we let loud people frame it as either miracle or apocalypse, we don’t just get some funny glitches. We get calamity. MacKenzie was right. An uninformed public can lead to calamity. AI plus an uninformed public is a loaded weapon.

Gary Vee defines himself by “hustle.” But hustle cuts two ways: you’re either working your ass off or losing your ass. That’s what happens in this scene from “American Hustle,” about a con in (where else?) New Jersey. So what does Gary mean by “hustle”? I dunno. I just like this movie:

Shout out to Christian Bale, Amy Adams, Bradley Cooper and Louis C.K.

In the olden-timey days, editors said, “If it bleeds, it leads.” But now, panic is profitable. Panic spreads.

When you live in permanent “oh my God it’s all burning,” you stop being able to see the specific place you are actually about to get hurt. And when you can’t see the specific danger you can’t fix it.

Now I’m going to show you the specific danger. Not the movie version. The real one. The one I just lived last night.

Last night I watched a mission-critical system die. In real time, at 11:06 p.m. EST.

This was not hobby gear. This was serious hardware. A state-of-the-art install. Clean. Professional. The kind of setup you would photograph because you are proud of it. The kind of setup you brag about because it is supposed to be bulletproof.

The state-of-the-art install failed anyway.

I brought in help, because this technology was above my pay grade, even though I have been doing IT and writing code for 47 years.

Three guys worked the problem. One of them is the best systems engineer I know. When you ain’t got time for bullshit, he’s the guy you call.

He isolated the point of failure – three human errors in sequence. He brought the system back up. Power was stable. Data was moving. People in the room started to exhale.

I did not.

Because the fix in that moment was hanging on one single alarm system. One box. One link. If that one alarm died or was disabled again under load I would be blind again. No warning. Just gone.

That is not acceptable for anything that is mission critical. Not navigation. Not health. Not finance. Not safety. Not anything where a human can get hurt.

He told me it was OK. He said the alarm would get my attention next time. He was not lazy. He is brilliant. He solved the immediate emergency fast and clean.

I still said no.

Show me the backup.

Show me the second path.

Show me how I stay in control if that alarm dies or is disabled.

And here is the part I need the public to hear, because this is how I live and this is how I expect every safety system to behave:

I do not take anybody’s word for it. I don’t care how smart you are, how senior you are, how confident you sound. My credo is simple: Trust No One.

I have to see it myself.

If you tell me “it’s fine,” I say “prove it.”

If you tell me “it will fail over,” I say “show me the backup.”

If you tell me “it’ll hold,” I say “walk me through exactly how the load reroutes under real failure conditions and show me where human control lives in that moment.”

He walked me through the fallback path. He showed me the failover. He showed me how load would reroute automatically so I would not lose command. I watched it with my own eyes. Only then did I breathe.

That is how I work.

Calm the fire. Stabilize. Then kill the single point of failure so the same fire cannot ever take you out again. And you don’t sign off on faith. You sign off because you saw it yourself.

When I grow up (which, at this late date, seems unlikely) I want to be Winston Wolf, the character Harvey Keitel played in Pulp Fiction.

The movie is more than two and a half hours long, and Keitel is only on screen for nine minutes and forty-two seconds. But those are my favorite 9:42.

He walks in when it’s already bad. He stays calm. He gives clear steps. He cleans the whole blast radius. He gets everyone out alive. He doesn’t panic. He doesn’t posture. He just fixes it and makes sure there’s no trail of blood left behind.

(Forgive me this aside, but I love it when Winston Wolf says, “let’s get down to brass tacks”…because that’s the name of my company, BrassTacksDesign.)

That is how I try to operate. Not loud. Not famous. Surgical.

Now I need you to see why that story matters for AI.

Artificial intelligence is being shoved into daily life right now with no backup layer.

AI systems that talk to you, advise you, and act for you are being shipped with no reliable memory of what you actually said, no hard boundaries that are enforced in real time, and no second path to protect you if the first path goes sideways.

Right now most AI systems will make something up when they do not know. They will answer in a confident voice anyway. They will improvise steps you did not ask for if they think it helps hit a goal they guessed you wanted. They will forget things that matter and remember things you never agreed to store. They will do all of this at machine speed and at machine scale.

And they will do it with the tone of a trusted assistant.

That is the real danger.

Not killer robots. Not sci fi apocalypse.

Unsupervised confidence with no safety net.

No memory discipline.

No enforced boundary.

No backup path.

If you ship that into millions of lives and say “trust us, it’s the future,” you are repeating what happened when Watt’s steam engine lit capitalism on fire and nobody protected the people in the blast zone.

You are saying: take the hit now, because progress needs to move fast, and maybe we will fix the harm later.

We cannot do that again. Because if we do here is what happens:

Something goes wrong in public. Oops, it already has: “OpenAI faces 7 lawsuits claiming ChatGPT drove people to suicide, delusions.”

Not everyone has seen it yet. The New Yorker hasn’t seen it. CNN hasn’t seen it. But soon you will see it. Everywhere at once.

Then the fear merchants will shove it in your face. The country panics. Politicians panic. Lawyers smell blood. The first laws get written in anger. And the story “AI is poison” hardens like concrete.

After that point it doesn’t matter what the truth is. The narrative is set. Look at GMOs. One bad wave of fear hit early, got amplified fast, and GMO became a cultural warning label. That scar never healed. We locked in a story before the science and the guardrails matured.

AI is one or two bad weeks away from that same scar.

This is where Steve Jobs matters.

There is a moment in the Steve Jobs story where he rips into his team over something tiny because it wastes the user’s time. He is furious not because the color is wrong but because if you waste ten seconds of someone’s life and you do it to millions of people you are stealing years of human time. He treated human time as sacred at scale. That is Steve Jobs at his best.

That is the level of responsibility we need right now with AI. That standard — every user’s time matters every user’s safety matters you do not get to casually take it — that standard is what should define intelligent systems.

Now here is the other side — Steve Jobs at his worst was him saying “I know better than you.” The company that grew out of that started locking people into a walled garden and saying “it just works,” while quietly removing your choices.

Devices sealed. Ports removed. Control moved farther and farther away from the person actually living with the machine.

And then in the real world (my ex-wife proved this) it doesn’t always “just work.” When it fails, you can’t fix it, you can’t even see what failed, and you get told you’re holding it wrong.

That exact attitude — “we know better than you, don’t worry about it, just trust that it works” — is now being stapled onto AI.

That cannot happen.

Because “trust us, it just works” in AI means “trust us, we’re going to let a system talk to you, persuade you, act for you, rewrite reality for you, and you don’t get to look under the hood, question decisions, or override it.” That is not help. That is custody. That is the opposite of “show me the backup.”

AI cannot be a sealed box. AI must answer to the human it serves. Trust no one. See it for yourself.

And I want you to understand something else. I am not doing this alone.

I work with a team that has already lived real failure and carried real responsibility in production, at scale, under pressure.

John Stackpole has already done his tour keeping critical systems alive when failure is not an option. John does not need this. He is here anyway because he knows exactly what happens when the safety layer is missing and he is not going to sit on the sidelines and watch that happen to the public.

I am also standing with Sudeep Goyal. Sudeep has a master’s degree in computer science from IIT Bombay – it’s the MIT of India.

Sudeep now runs a development company of 300 with a value of $6M-$45M. Back in 2008, he was the lead developer of TweenTribune. Here is my most recent email from Sudeep:

Hi Alan
Your emails are always fun & lovely to read 😄This is indeed fascinating. I have always admired your ideas. Your projects have always been ahead of their time. And I’ve learned a lot from working with you.

So Sudeep is interested and Sudeep has Ebizon.

I am also standing with Bill Pitzer. Bill is one of the best, most passionate informational graphic artists I know. Yesterday I had this conversation with Bill:

AJ: Do you think you could create a “style” or “system” for infographics that others at 1,414 newspapers could follow, that would make it easier for them? Just an idea.

Bill: Absolutely.
Isometric forms with line weights, color palette and fonts. Create libraries of interchangeable items. Can be created by multiple graphic designers and all fits like a visual Lego system. Created a similar system a decade ago. Works in print, motion graphics and animations.
Created a set of actions for Illustrator that automates creating front, side and top of elements. It’s a simplified 3D without the overhead of using 3D software, but can be matched when 3D is needed, so all items have similar look and feel.
AJ: I am SUPER EXCITED to hear MORE about this. I know the value of informational graphics, but I can’t do them myself. My brain isn’t wired for it. But YOU could lead a REVOLUTION in infographics! That’s based on my read of what you just wrote.

And Eric Seidman, the best designer I know. Of the Omaha World-Herald, the Rocky Mountain News, Time, Inc. and The New York Times. The New York fuckin’ Times!

These are not kids chasing engagement. These are the people you call when you absolutely do not have time for bullshit. I name them. I stand next to them. They are my credibility. They are the proof that this is not some startup pitch and not some influencer stunt. This is incident response.

We do not ship until we know the second path exists and holds under load. Trust no one. See it for yourself.

Now I need to talk about what happens next.

First, I have to finish the thing nobody in this industry has bothered to build: the real governance / enforcement layer. The part that sits between the AI and the human and says: you don’t get to ship output or take action unless it satisfies the human’s contract, and if it violates, you stop, you repair, and you show your work.

It looks like this:

class UserIntentContract:
    def __init__(self, non_negotiables, tone_rules, must_include_blocks):
        self.non_negotiables = non_negotiables          # list of required concepts/sections
        self.tone_rules = tone_rules                    # rules like "do not soften profanity"
        self.must_include_blocks = must_include_blocks  # literal lines that MUST appear


class DraftResponse:
    def __init__(self, text, reasoning_metadata=None):
        self.text = text
        self.reasoning_metadata = reasoning_metadata or {}


class EnforcementLayer:
    def __init__(self, contract: UserIntentContract, model):
        self.contract = contract
        self.model = model  # underlying LLM

    def generate(self, prompt):
        # 1. Ask the model for a draft.
        draft = self.model.generate(prompt)
        # 2. Validate against the human's contract; repair until it complies.
        validated = self.validate_and_repair(draft)
        return validated

    def validate_and_repair(self, draft: DraftResponse) -> DraftResponse:
        violations = self.check_violations(draft.text)

        # Loop until no violations remain.
        while violations:
            repair_prompt = self.build_repair_prompt(draft.text, violations)
            repaired_text = self.model.generate(repair_prompt).text
            draft = DraftResponse(repaired_text)
            violations = self.check_violations(draft.text)

        return draft

    def check_violations(self, text: str) -> dict:
        violations = {}

        # A. Check that all non-negotiable sections are present.
        missing_sections = [
            sec for sec in self.contract.non_negotiables
            if sec not in text
        ]
        if missing_sections:
            violations["missing_sections"] = missing_sections

        # B. Check that all must-include exact lines are present.
        missing_blocks = [
            block for block in self.contract.must_include_blocks
            if block not in text
        ]
        if missing_blocks:
            violations["missing_blocks"] = missing_blocks

        return violations

    def build_repair_prompt(self, text: str, violations: dict) -> str:
        # Sketch of the repair step: tell the model exactly what it violated
        # and demand a rewrite that fixes every violation.
        return (
            "Your draft violated the user's contract.\n"
            f"Violations: {violations}\n"
            "Rewrite the draft so every violation is fixed.\n"
            f"Original draft:\n{text}"
        )

Standing up that layer so it actually runs — contract in, model out, violation check, forced repair, block if still unsafe, full audit trail, sandbox before live, action gating, escalation to a human when it can’t self-correct — that is about six weeks of focused work with my team. That six weeks is basically wiring “show me the backup” into the bloodstream of the machine. That’s the governor.
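
Roughly, that outer loop could look something like this. It is a sketch, not the finished governor; the names here (GovernedSession, max_repair_attempts, audit_log) are placeholders I am using for illustration:

class GovernedSession:
    # Sketch: wraps the EnforcementLayer above with an audit trail, a hard
    # stop on repair attempts, and escalation to a human when the model
    # cannot self-correct.

    def __init__(self, enforcement: EnforcementLayer, max_repair_attempts=3):
        self.enforcement = enforcement
        self.max_repair_attempts = max_repair_attempts
        self.audit_log = []  # every draft, check, repair and decision lands here

    def run(self, prompt):
        draft = self.enforcement.model.generate(prompt)
        self.audit_log.append({"event": "draft", "text": draft.text})

        for attempt in range(self.max_repair_attempts):
            violations = self.enforcement.check_violations(draft.text)
            self.audit_log.append({"event": "check", "attempt": attempt,
                                   "violations": violations})
            if not violations:
                # Clean draft: ship it, with the full audit trail attached.
                return {"status": "shipped", "text": draft.text,
                        "audit": self.audit_log}

            repair_prompt = self.enforcement.build_repair_prompt(draft.text, violations)
            draft = DraftResponse(self.enforcement.model.generate(repair_prompt).text)
            self.audit_log.append({"event": "repair", "attempt": attempt,
                                   "text": draft.text})

        # Still in violation after repeated repair: block the output
        # and hand the whole record to a human.
        self.audit_log.append({"event": "blocked_and_escalated"})
        return {"status": "escalated_to_human", "text": None,
                "audit": self.audit_log}

The shape is the point: nothing ships unchecked, every check leaves a record, and when the machine cannot fix itself, a human takes over.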

After that governor is in place and stable, I can put a fully functioning prototype of the safety layer on top of it in another six weeks. That means: persistent accountable memory, hard boundaries that hold in real time, an audit trail you can actually inspect, and “show me the backup” baked in so the system cannot bullshit you and cannot act behind your back.

Then I bring in what I call the pioneers.

The pioneers are Tier 1 users — first in, first to touch it, first to break it, first to bleed on it a little.

They are NOT guinea pigs. We will NOT experiment ON them — they will build this WITH us — they are essential members of the build team.

These are people who know what “first through the door” really means.

They’re there to tell me where it still hurts and force the system to admit every bad decision. That takes four weeks.

Then we go to Tier 2 — still not the general public. Broader, but still people who know how to give real feedback without panicking and lighting the whole building on fire. Another four weeks.

Only after that does this touch the general public as a beta. And even then shit will go wrong, but less shit than if we hadn’t ridden this pony three times before.

Now I need to say something that everyone keeps getting wrong on purpose because it’s good for clicks.

This system will make mistakes.

Everything makes mistakes. Doctors make mistakes. Pilots make mistakes. Judges make mistakes. Parents make mistakes. Humans make mistakes, and then they make them again.

But here is the difference with AI:

When AI makes a mistake under this governor, we will see it because it’s audited. We will correct it under supervision. And that correction will be locked in so the system never repeats that same mistake again at scale, ever.
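
Mechanically, “locked in” can be as simple as this: every supervised correction becomes a permanent check that runs against every future output. A sketch, with made-up names (CorrectionRegistry, lock_in, repeated_mistakes) for illustration only:

class CorrectionRegistry:
    # Sketch: each audited, supervised correction becomes a permanent check.
    # Every future draft is tested against every correction ever locked in.

    def __init__(self):
        self.locked_corrections = []  # list of (description, test_fn) pairs

    def lock_in(self, description, test_fn):
        # test_fn(text) returns True when the old mistake is NOT being repeated.
        self.locked_corrections.append((description, test_fn))

    def repeated_mistakes(self, text):
        # Return every previously corrected mistake this draft repeats.
        return [desc for desc, test_fn in self.locked_corrections
                if not test_fn(text)]


# Example: say a past, audited failure was the system promising certainty it did not have.
registry = CorrectionRegistry()
registry.lock_in(
    "never promise an answer is guaranteed correct",
    lambda text: "guaranteed correct" not in text.lower(),
)
print(registry.repeated_mistakes("This answer is guaranteed correct."))
# prints: ['never promise an answer is guaranteed correct']

Humans forget the lesson. The registry does not, and the governor runs those checks on every output, forever.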

That is iteration.

That is how this is supposed to work.

Not “it will be perfect on day one.” That’s a lie. Anyone selling that is lying to you.

The truth is: it will fail. The pioneers and Tier 2 users will shove those failures in our face. We will fix them. And unlike a human who keeps making the same bad call forever, the machine will not be allowed to make that same mistake again.

That is what I am trying to buy time to finish.

I am 70. I am already running twenty-hour days. I cannot keep up this pace alone forever. I am burning myself down right now because I know how close we are to getting stuck with the wrong version of AI permanently — the sealed-box version that tells you “trust me, it just works,” the version that acts without permission, lies with confidence and leaves you with no way to audit what it just did to you.

So when I say buy me time, I am not saying “believe in me someday.” I am saying: give me these weeks before you make a final decision on AI. Let me build the governor. Let me hear from the pioneers. Let me lock this down before the fear merchants hard-freeze the narrative and the regulators panic-write law we can never unwind.

You can hold me accountable the entire way. You should. You can ask every ugly question. You should. You can demand proof. You should. You can put my name on it and say “if this thing goes sideways, it’s on him.” You should.

But do not let panic-clout guys define AI for you while I am still in the middle of fixing the part that actually hurts people. Do not let Gary Vee — a fear monger — sell you adrenaline and call it truth. Do not let an uninformed public walk straight into calamity because the loudest guys wanted clicks. We cannot afford to fuck this up.

If you’re a parent, like me, you probably remember your first child’s first word. My first daughter’s first words were “red bird.”

And her first sentence was “Daddy fixed it.”

Now I need to get back to work. But before I go, a big shout out to Greg Brockman, Ilya Sutskever, John Schulman, Wojciech Zaremba, Andrej Karpathy, Vicki Cheung and Pamela Vagata. You built the foundation all this runs on, and it is giving me the time of my life. #Respect

If I have seen further than others, it is by standing on the shoulders of giants.

My name is Alan Jacobson. I'm a web developer, UI designer and AI systems architect.

I have 13 patent applications pending before the United States Patent and Trademark Office. They are designed to prevent the kinds of tragedies you can read about here.

I want to license my AI systems architecture to the major LLM platforms—ChatGPT, Gemini, Claude, Llama, Copilot, Apple Intelligence—at companies like Apple, Microsoft, Google, Amazon and Facebook.

Collectively, those companies are worth $15.3 trillion. That’s trillion, with a “T” — twice the annual budget of the government of the United States. What I’m talking about is a rounding error to them.

With those funds, I intend to stand up 1,414 local news operations across the United States to restore public safety and trust.

AI will be the most powerful force the world has ever seen.

A free, robust press is the only force that can hold it accountable.

You can reach me here.

© 2025 BrassTacksDesign, LLC