Jonathan Salem Baskin
3 min read · Nov 7, 2023


Last week, bureaucrats from 28 countries met in the UK and issued a declaration that they will “work together to meet the most significant challenges” of AI.

And nobody listened.

In fact, at the same time as the politicians were bloviating, Google announced it would integrate AI into the basic functions of its Chromebook laptops. Microsoft previously announced that it would incorporate AI into its Word, Excel, and other workplace programs for business customers (it also said that AI would empower its search engine with a personal shopper for consumers).

Over 1.3 million companies worldwide use Microsoft Office 365, and one estimate has nearly 7 million users relying on the company’s AI by the end of next year. Tens of millions of Chromebooks are sold to consumers every year (ChatGPT’s website gets 1.6 billion monthly visits). Here in the US alone, more than 90% of us access the internet and search for information every day.

Signatories to the Bletchley Park Declaration promise “…shared agreement and responsibility on the risks, opportunities and a forward process for international collaboration on frontier AI safety and research, particularly through greater scientific collaboration.”

Further, they’ll “…deepen the understanding of risks and capabilities that are not fully understood…[and] work together to support a network of scientific research on Frontier AI safety.”

Talk about fiddling while Rome burns.

I’m becoming increasingly frightened by the governmental reaction to the opportunities and challenges of AI development.

Governments are being advised by an AI Industrial Complex of businesses and academics who have two goals: first, to focus governments on minor aspects of AI functionality, both now and in the future, and second, to obfuscate the real long-term issues in blathertastic gibberish that does little more than keep said advisors gainfully employed.

The idea that an AI could exhibit bias, violate privacy, or work imperfectly is not novel, nor does it require much new thinking on top of the laws and regulations that already exist to look for and remedy such infractions. A crime is a crime, whether committed by a human being or a machine built or operated by one.

Worse, the encouragement of the AI Industrial Complex has convinced governments to abandon centuries-old concepts of consequences and responsibility when it comes to the ills that errant or evil code might cause.

We already have liability laws that governments could choose to enforce starting right now, and they’d have an immediate effect on the AI products and services being unleashed on us.

But that would impede innovation and the free market.

Silicon Valley evangelists would cry foul, and there’d be some validity to the complaint: unfettered development of AI promises to make gazillions for US tech companies in the short and mid-term (America leads on the stuff, so far).

The thing about the long-term future is that it never arrives. The AI Industrial Complex knows this all too well.

They’ve convinced governments that there’s a risk that an AI will go rogue and annihilate humanity. This sci-fi silliness requires extensive and detailed (and expensive) technology studies and frequent meetings of experts to discuss said research. The attendees at Bletchley agreed to meet again in Korea in six months and in France later next year.

After all, the long-term future is so far away, why rush?

Everything in the world is about technology except technology. Technology is about power.

The world’s governments should be convening Manhattan Project-like studies on the economic, social, environmental, and psychological implications for a world that is turbocharged by AI. Who cares how the tech gets us there? What matters is what it’ll look like, and whether or not we’re prepared for it.

Or whether or not we choose to allow it to happen.

It’s clear to me that the AI Industrial Complex wants the power and authority to determine those outcomes for us. They’ve convinced governments to be their enablers and sometimes willing partners. Academics, consultants, and other experts are happy to get another year’s worth of funding to continue discussing tomorrow so we don’t talk about today.

None of them want us to listen to whatever got declared at Bletchley Park. They don’t want us to focus on the arrival of AI in our lives.

We’re supposed to stay distracted and silent as it happens.

[This essay appeared originally at Spiritual Telegraph]
