Could AI Be Guilty Of Murder?

Jonathan Salem Baskin
Nov 16, 2023


An industrial robot in South Korea killed a worker. Who, or what, is responsible?

The AI reportedly grabbed the hapless employee and added him to the bell peppers and other vegetables on a conveyor belt. The guy died from head and chest injuries.

Authorities are trying to fathom whether the robot was defective or whether something was wrong with its design. They’re also considering the possibility of human error (i.e., that the guy did something to trigger the robot’s grab-and-crush functionality).

How is the robot not guilty of murder?

Defective humans kill one another all the time, whether their flaws are baked into their biological constitutions or instilled by their upbringings. Parents create flawed, imperfect, potentially murderous children, and society and experience either dissuade or encourage those proclivities.

Understanding the contributions of these factors doesn’t substitute for assigning responsibility, at least not in most circumstances. The why of murder rarely excuses the what of killing someone, even if it can mitigate the punishment.

In this instance, the robot clearly did the deed. There is video footage of it.

So why isn’t it guilty?

I think the answer is that robots aren’t conscious and don’t possess free will; in other words, they’re not human, so they can’t be held accountable for human actions. Some human behind the scenes (or the victim himself) is the culprit.

But ask two experts to define consciousness, or to explain where, how, or why it functions, and you’ll get two different answers. Many computer science folks don’t believe it exists at all, arguing instead that it’s the output of some programming that provides us with a fantasy of ourselves.

We don’t exist beyond the coding of our biological machines.

This leads to one of the answers to the question of free will, which remains unresolved after thousands of years of debate. If your physical self determines your mental self — you are the result of chemicals and electrical impulses, not the manager of them through some unknown magical means — then you aren’t truly responsible for what those impulses tell you to feel and do.

As Jessica Rabbit said in Who Framed Roger Rabbit?, “I’m not bad, I’m just drawn that way.”

Using computers as a model for how we think about the human brain is fraught with problems, starting with the fact that equating computer code with DNA conveniently ignores that we don’t have the slightest clue how the latter operates (only that it does, along with some hints about the mechanisms it uses to implement its programming).

Conversely, using the human brain as a model for how we think about computers has given us neural networks and AI decision-making that nobody can completely explain.

At some point, such distinctions must disappear, no?

My gut tells me that we’re going to have to reimagine how (and to whom or what) we assign responsibility for actions long before AI achieves artificial general intelligence (“AGI”), which is something OpenAI’s Sam Altman is doggedly pursuing.

The conversation should be about ethics and liability law, not technology.

Some people are already dumb enough to kill someone else. Maybe the South Korean Conveyor Belt Killer was smart enough to do it, too?

[This essay appeared originally at Spiritual Telegraph]
