Institutions Are the AIs Your Mother Warned You About

by Navarre Bartz


If you pick up a book or movie about Artificial Intelligence (AI), there’s a good chance you’ll find a story where robots or AI have subjugated humanity. The Terminator, the robots in The Matrix, and the Borg all strike fear into our hearts because they lack humanity. The cold, calculating logic by which they see the universe makes them alien and incapable of the things that define human experience like compassion or love. The thing is, the AIs your mother warned you about are already here. We call them institutions. 

In Brandon Sanderson’s fantasy novel Oathbringer, Nale, the Herald of Justice, says, “The purpose of the law is so we do not have to choose. So our native sentimentality will not harm us.” In modern times, we say the law is blind, but recent protests over racially motivated violence committed in the name of the law show that removing human choice from the equation just creates an algorithm for oppression. We’ve given the appearance of impartiality to a process that is biased because of who wrote the laws and when they were written.

For example, computer AIs developed to help with criminal sentencing calculate recidivism probabilities based on historical policing data. The “impartial” AI looks benevolent, but when the data it is fed derives from hundreds of years of racist policing practices, it’s not hard to see why the AI is more likely to suggest a light sentence for a white defendant than for a person of color. In January 2020, law enforcement’s increasing reliance on AI-driven facial recognition led to the first known wrongful arrest caused by such systems’ inability to distinguish between people who aren’t white men. Modern law enforcement has been investing in tools that entrench racism behind a steel and plastic veneer of impartiality. The subjugation of parts of humanity is already in progress, and it’s grounded in the biases of programmers—who are all too human.

One of the most basic thought experiments of AI gone wrong is Nick Bostrom’s proposed paperclip maximizer. Because it has only one goal, it will execute that function without taking other consequences into account. As the AI ramps up its production of paperclips, the planet it’s on is consumed by iron mines and paperclip factories until those who originally programmed the AI are themselves consumed for their raw materials. While this example may seem ridiculous, it’s the logical conclusion to business models designed to maximize financial growth.

Corporations are single-minded AIs programmed to make a profit. Since corporations exist in large part to separate legal liability for the corporation’s actions from its members, there are few truly effective checks on a company’s behavior. With these inputs, it should come as no surprise that the corporations of the world have done irreparable harm to our biosphere. The board of directors and shareholders are still human, but as Upton Sinclair said, “It is difficult to get a man to understand something, when his salary depends upon his not understanding it!”

While the AIs in The Matrix at least leave humans the illusion that they are not slaves, the Belters who work in the outposts of the solar system in The Expanse series by James S. A. Corey are at risk of losing their very air and water if they do not comply with the demands of interplanetary corporations. Even when a discovery is made that would change the very nature of human existence, the Protogen company seeks to profit by starting a war between Earth and its former colony, Mars. The corporation’s pursuit of profit manages to oppress humankind without a single sentient computer.

We don’t need to look to a dystopian future to find artificial intelligences bent on human domination. They’re already here. The first step to creating a world with AIs we can work with is disarming the dangerous ones. Congress has started the process of fighting corporations with its recent Investigation of Competition in Digital Markets Report, which came after years of effort from groups like the Institute for Local Self-Reliance, small businesses, and cooperatives. At the same time, the Movement for Black Lives has been steadily growing to point out the flaws in the current legal system. Overcoming systemic racism and corporate power are the major battles against malicious AIs we face right here and now. We should be developing better ways to make humans part of the AI feedback loop, as Douglas Rushkoff suggests, so that when computer-based generalized AIs arrive, we’ll be able to work alongside someone like Data instead of under the gaze of Skynet.


Navarre Bartz is a recovering academic writing about the intersection of technology, society, and the environment. Originally hailing from the hills of Missouri, he now lives in Virginia with his wife and feline overlords. You can find more of his musings at