Get Ready For The AI Underworld

Jonathan Salem Baskin
2 min read · Jul 15, 2024


Turns out that AI crime can be automated.

The New Scientist reports that a recent research project asked ChatGPT to rewrite its code so that it could deposit malware on a computer without detection.

“Occasionally, the chatbot realized it was being put to nefarious use and refused to follow the instructions,” according to the story.

Occasionally.

This is wild stuff. The LLMs on host computers were “asked” by software hidden in an email attachment to rename and slightly scramble the malware’s structure, and then find email chains in Outlook and compose contextually relevant replies with the modified malware file attached.

Rinse and repeat.

The skullduggery wasn’t perfect: there was “about a 50% chance” that the chatbot’s creative changes would not only hide the virus program but also render it inoperable.

Are the participating LLMs bad actors? Of course not, we’ll be told by AI experts. Like Jessica Rabbit, they’re not bad, they’re just programmed that way.

Or not.

It’s hard not to see similarities with the ways human criminals are made. There’s no genetic marker for illegal behavior, last time I checked. The life journeys that lead to it are varied and nuanced. Influences and indicators might make it more likely, but they’re not determinative.

Code an LLM to always resist every temptation to commit a crime? I don’t think it’s any more possible than it would be to reliably raise a human being to be an angel. No amount of rules can anticipate the exigencies of every particular experience.

One could imagine LLMs that get particularly good at doing bad things, perhaps even becoming repeat offenders without human encouragement.

“Hey, that’s a nice hard drive you’ve got there. It would be a shame if something happened to it.”

Protection. Racketeering. Theft. Mayhem for the sake of it. AI criminals lurking across the Internet and anywhere we use a smart device.

An AI Underworld.

The solution, according to the expert quoted in the New Scientist article, is to produce a cadre of white hat LLMs to preempt the bad actors, or catch them after they’ve committed their crimes.

Think Criminal Minds, The AI Version.

Who knows how bad or rampant such give-and-take might get, but one thing’s for certain: There’ll be lots of money to be made by AI developers trying to protect people, businesses, and institutions from the dangers of their creations.

And that, after all, is the point of why they’re giving us AI in the first place.

[This essay appeared originally at Spiritual Telegraph]



I write books about technology and brands, sci-fi stories, and rock musicals.