All Hail Pliny The Prompter

Jonathan Salem Baskin

The promise of a world run by AI depends on our ability to trust its accuracy and reliability. There’s work underway to prove that promise a lie.

Pliny the Prompter is one of many hackers who “jailbreak” the latest genius AIs as soon as they’re released. He got Llama 3 to provide a recipe for making napalm and Grok to praise Adolf Hitler. His hacked Godmode GPT can help you plan a crime.

The time it takes to blow up the accuracy and reliability of these tools? About 30 minutes, according to this Financial Times article.

How he and others do it is fascinating because it’s reminiscent of the mind games you’d play with someone to convince them to give you their money or join a cult. But what is far more meaningful is why.

Broadly speaking, the risks of using AI are twofold: It won’t be as good as we believe it is, and we won’t know how it came up short or what to do about it (or who’s responsible for such impacts).

Good-guy hackers strive to reveal these risks, along with the futility of hoping that the makers of AI or the laws and regulations developed to chase their creations will mitigate them. They also don’t like authority and love every opportunity to challenge it.

This is in stark contrast to the hackers who use their maliciously adapted LLMs to better steal data and exact ransom from their victims. Cybercrime cost businesses and governments an estimated US$11.5 billion last year and is forecast to double over the next three years.

Without trustworthy accuracy and reliability, AI has no authority. It’s as imperfect and troublesome as the decision-making of the human beings it’s supposed to replace, whether hacker or hallucinated.

We don’t get a lot of public conversation about this reality. Sure, we see brief news reports when AIs malfunction, but they don’t add up to any consistent debate. Autonomous cars mowing down pedestrians? Search results that feature people of color as Nazis or Vikings?

They’re just hiccups along the way. No need to dwell on them. Trust us.

Instead, its promoters waffle between singing the praises of building an AI that thinks like a human being (“AGI”) and warning that their efforts might well lead to the annihilation of all mankind.

Understanding the accuracy and reliability of AI should inform our decisions about if, when, and how we integrate it into our lives.

We’re never going to get that understanding if businesses and governments have their way.

All hail Pliny the Prompter.

[This essay appeared originally at Spiritual Telegraph]

