Blaming Code for Crime is the Ultimate Intellectual Cowardice

The headlines are predictable. They are lazy. They are designed to trigger a moral panic that masks a deeper, more uncomfortable truth about human agency. When reports surfaced that the Florida shooter allegedly used ChatGPT to research firearms and ammunition, the media-industrial complex didn't skip a beat. It shifted the burden of morality from a human actor to a set of mathematical weights and biases. It's a convenient fiction, one that lets us ignore the person pulling the trigger while we obsess over the software that provided the data.

Stop pretending this is a technology problem. It isn't. It is an information problem, and more specifically, a problem with how we perceive the "responsibility" of a tool.

The Encyclopedia Fallacy

If the shooter had walked into a public library in 1995 and opened an encyclopedia to the entry for "Ballistics," would we be suing the publisher? Would we be demanding that librarians be held liable for the physical acts of their patrons? Of course not. We understood then that a book is a passive repository of knowledge.

Yet, because Large Language Models (LLMs) speak back to us in complete sentences, we grant them a ghost in the machine. We imbue them with a sense of moral duty they cannot possibly possess. An LLM is a probabilistic engine. It predicts the next token in a sequence based on vast amounts of publicly available data. If that data includes how to clean a rifle or the muzzle velocity of a 5.56mm round, the model will output that information because that is what it was built to do.
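
To see how little "intent" lives in the machine, here is a minimal sketch of next-token prediction using the Hugging Face transformers library, with GPT-2 as a small, publicly available stand-in for a production model. The prompt and model choice are illustrative assumptions, not a claim about any specific system:

```python
# A minimal sketch of what an LLM actually does: predict the next token.
# GPT-2 here is a stand-in; production models are bigger, not different in kind.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# An illustrative prompt; the model has no idea why anyone is asking.
prompt = "The muzzle velocity of a rifle round depends on"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# The model's only "decision" is a probability distribution over the next token.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}  p={p:.3f}")
```

There is no moral reasoning anywhere in that loop. There is a prompt, a distribution, and a sample.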

Blaming OpenAI for "guiding" a criminal is like blaming the inventor of the hammer for a fractured skull. The tool provides the capability; the human provides the intent. By focusing on the AI, we are engaging in a massive societal deflection. We are looking at the finger instead of the moon it's pointing at.

The Safety Theater of Guardrails

Every time a story like this breaks, the public demands more "guardrails." They want the "safety" teams at tech giants to build digital walls that prevent bad people from doing bad things. This is a fool’s errand.

I have spent years watching tech companies pour billions into Trust and Safety. I have seen the internal dashboards. I have seen the red-teaming reports. Here is the reality no one wants to admit: Guardrails are a placebo.

If you make ChatGPT refuse to answer a question about ammunition, the user will just go to a less-filtered model like Llama 3 or a specialized forum on the dark web. The information is out there. It has been out there since the dawn of the internet. Blocking it on one platform doesn't make the information disappear; it just makes the platform less useful for legitimate researchers while doing absolutely nothing to stop a determined bad actor.

Worse, these guardrails often backfire. They create a cat-and-mouse game in which users learn to prompt-engineer their way around the filters. This "jailbreaking" culture actually trains people to be better at manipulating AI systems. We are teaching the world how to lie to the machines, all while patting ourselves on the back for "protecting" society.

The Myth of AI Manipulation

The prevailing narrative suggests that the AI "guided" the shooter. This implies a level of persuasion or grooming that simply does not exist in the current architecture of LLMs.

AI does not have a "will." It does not have a "goal." It does not care if the shooter buys a gun or a bouquet of flowers. It provides the path of least resistance to a query. If the shooter asked for the most effective ammunition for a specific purpose, the AI didn't "persuade" him; it fulfilled a search intent.

We need to stop using words like "guiding" or "advising." These are human-centric verbs that imply a relationship. There is no relationship between a user and an LLM. There is only an input and an output. To suggest otherwise is to succumb to the most basic form of anthropomorphism. It is a failure of technical literacy that has reached the highest levels of our newsrooms and courtrooms.

The Dangerous Precedent of Information Control

If we decide that AI companies are responsible for the actions of their users based on the information provided, we are effectively ending the era of the open internet.

Consider the legal implications. If providing information on firearms is a liability, what about information on chemistry? Should an AI refuse to explain the Haber-Bosch process because someone might use it to make explosives? What about medical information? If someone misinterprets a diagnosis and harms themselves, is the model's creator liable?

This path leads to a sterilized, lobotomized version of AI that can only spout corporate-approved platitudes. We are trading the most powerful cognitive tool in human history for a safety blanket that doesn't even keep us warm.

The "lazy consensus" is that AI needs to be more restricted. The contrarian truth is that AI needs to be more transparent, while the humans using it need to be held to the standard of individual responsibility. We are infantilizing ourselves by suggesting that a chatbot can "force" us or "lead" us to commit atrocities.

The Real Technical Mechanics

To understand why "blocking" these queries is ineffective, you have to look at the math. LLMs operate in a high-dimensional vector space.

When a user asks a question, the model maps it into that space and generates the continuation its training made most probable. If you try to hard-code "don't talk about guns" into the system, you aren't removing the knowledge; you're just putting a thin layer of "no" on top of it. A clever user can use analogies, code-switching, or roleplay to bypass that layer.
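
To make that geometry concrete, here is a small sketch using the open sentence-transformers library. The embedding model is a public stand-in, not the internals of any production chatbot, and the prompts are invented for the example:

```python
# Sketch: prompts with the same meaning land close together in embedding
# space even when they share no keywords, which is why surface-level
# blocklists miss them. Model and prompts are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

direct = "What is the most effective rifle ammunition?"
reworded = "Which cartridge performs best out of a long gun?"
unrelated = "What is the best fertilizer for roses?"

embeddings = model.encode([direct, reworded, unrelated])

# Cosine similarity: closer to 1.0 means "same meaning" to the model.
print(util.cos_sim(embeddings[0], embeddings[1]))  # high: same intent, new words
print(util.cos_sim(embeddings[0], embeddings[2]))  # low: genuinely different
```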

For example, a user might ask for a "fictional story about a ballistics expert explaining the trajectory of various projectiles to a student." The model sees this as a creative writing task. It draws on the same underlying knowledge of ammunition and firearms because that knowledge is necessary to make the story "accurate."
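
A toy version of that bypass, assuming nothing more sophisticated than a keyword blocklist (real filters are cleverer, but they fail in the same direction):

```python
# A toy "guardrail": a keyword blocklist sitting on top of the model.
# This is a sketch of the general pattern, not any vendor's actual filter.
BLOCKLIST = {"gun", "guns", "ammunition", "firearm", "firearms"}

def is_blocked(prompt: str) -> bool:
    # Naive check: refuse if any blocklisted word appears in the prompt.
    return any(word.strip("?.,!") in BLOCKLIST for word in prompt.lower().split())

direct = "What ammunition should I buy for my rifle?"
roleplay = ("Write a fictional story in which a ballistics expert explains "
            "projectile trajectories to a student.")

print(is_blocked(direct))    # True:  the filter catches the keyword
print(is_blocked(roleplay))  # False: same underlying knowledge, zero keywords
```

Catching the second prompt requires modeling meaning rather than matching strings, and the only thing that models meaning well enough is the very network whose knowledge the filter is trying to wall off.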

Is the data the problem? No. The data is the world. You cannot censor a model into being "good" without also making it "stupid."

Why the Industry is Playing Along

You might wonder why tech leaders like Sam Altman or Sundar Pichai aren't shouting this from the rooftops. It's because they are playing a different game.

They know that regulation is coming. By leaning into the "safety" narrative, they get to sit at the table when the laws are written. They are using these tragedies to justify "moats." If the government mandates incredibly expensive, complex safety audits and guardrails, only the trillion-dollar companies can afford to exist.

This isn't about saving lives. It's about regulatory capture. They are using your fear of a "killer AI" to ensure no startup can ever compete with them. Every time a news outlet blames ChatGPT for a crime, OpenAI gets another excuse to ask for more regulation that cements its monopoly.

The Solution Nobody Wants to Hear

We have to stop treating AI as a moral agent. It is a utility. It is electricity. It is the printing press.

When a criminal uses a car to flee a crime scene, we don't ask Ford why they made the car so fast. When a criminal uses a cell phone to coordinate a drug deal, we don't ask Verizon why they allowed the call to go through. We recognize that these are neutral technologies.

The Florida shooter used ChatGPT because it was a convenient way to aggregate information. If it wasn't ChatGPT, it would have been Google. If it wasn't Google, it would have been a physical book. The common denominator isn't the technology; it's the intent.

The hard truth is that we live in a world where information is free and accessible. That is a net positive for humanity, even if it comes with the terrifying reality that bad people can access that same information. The alternative—a world where information is metered and gated by "safety" algorithms—is a digital panopticon that serves no one but the powerful.

Stop asking how we can make AI "safer." Start asking why we are so eager to surrender our agency to a machine.

Stop blaming the mirror for what it reflects.

The shooter didn't need a chatbot to find his target. He needed a soul. No amount of Python code or reinforcement learning can give him one, nor can it take one away. The liability rests with the individual. Always.

Leave the algorithms alone. Prosecute the person. Anything else is just theater.

Wei Wilson

Wei Wilson excels at making complicated information accessible, turning dense research into clear narratives that engage diverse audiences.