The AI Industry Won’t Regulate Itself

Jonathan Salem Baskin
May 20, 2024

OpenAI has disbanded its year-old research team focused on understanding the existential risks of its products.

In doing so, it joins a long line of businesses that take little responsibility for their actions.

The oil industry pioneered the strategy of denying or obfuscating the deleterious effects of its products. As early as the late 1970s, Exxon's own researchers were warning the company internally about "the greenhouse effect." The industry kept helping to incinerate the planet anyway.

The sugar industry’s trade association funded a study in 1968 that seemed to show correlations between high-sugar diets and greater risks of heart attacks, strokes, and bladder cancer. The study was cancelled before it was completed, its results were buried, and cereal companies ran with decades of claims that sugar was “part of a balanced or nutritious breakfast.”

The granddaddy of them all is the tobacco industry which, after scientists established the links between smoking and lung cancer in the early 1950s, embarked on an aggressive campaign to challenge those findings in order to protect its sales. Hundreds of thousands of Americans died from the direct and secondhand effects of smoking last year.

The movie business is an example of an industry that successfully self-regulated the potential harm of its products: the MPAA developed a voluntary ratings system in the late 1960s to inform audiences about sexual content and other mature themes.

But, after that, it’s mostly crickets. Academics who study the topic tend to conflate self-regulation with the normally expected behaviors of business, namely making stuff that doesn’t kill your customers but pleases them instead.

This type of self-regulation works only until the benefits of harming customers, whether through neglect, "externalities," or the simple passage of time in which profits have already been made, outweigh the rewards of keeping them alive.

Remember that businesses are in the business of making money, not making the world a better place, the blather of corporate responsibility notwithstanding (another distraction campaign, BTW).

So, the idea that any AI (or tech) business could be relied on to identify and reveal the risks of its own products is kind of silly.

We should expect them to deny and obfuscate those risks.

It might seem confusing, since OpenAI’s CEO Sam Altman, Elon Musk, and other tech boffins have explicitly said that their products could annihilate humanity and destroy the planet.

But in doing so, they shirk responsibility for that risk. AI will get smarter and more impactful regardless of what we want or do, its developers like mad children who cannot control themselves. All we can do is hope to mitigate some of its most infuriating or insulting impacts and make technical adjustments to its operation.

That’s why any legislation contemplated or in effect focuses on trying to make AI easier, more reliable, and somehow “fairer” to use. Whether or not it’s good for us or the planet is not even on the table for discussion; like climate change and health, we’ll just hope it’ll defy the lessons of history and deliver benefits that outweigh the risks.

This is a purposeful communications strategy.

Disbanding the risk research team at OpenAI isn’t news.

The news is that it was never going to change anything in the first place.

[This essay originally appeared at Spiritual Telegraph]

