AI Makers Find Refuge In Bureaucracy
A handful of tech luminaries have called for an international body to advise policymakers on AI risks, modeled on the panel that does the same thing for the science of climate change.
Look how effective the latter has been.
What they’re really worried about is regulation, not the lack of it. The same concerns are behind most declarations about existential AI risk. AI could decide to destroy all of humanity someday, somehow, so let’s worry about that instead of focusing on the tangibly real impacts AI is already having on our lives.
Job loss. Privacy intrusions. A world in which our intentions are predicted and commercially exploited and then studied so that we can be nudged into behaviors and beliefs.
The risk isn’t that AI will do something wrong. It’s that it will do exactly what it’s supposed to do.
Worse, every promise that AI will improve our lives comes with the flip side of problems it might cause. Faster computing will speed drug development tests and give hackers a quicker way to break security codes. Breakthroughs in giving paralyzed people the opportunity to move again will enable evildoers to influence the very way our brains work.
As for the big problems we face, AI can’t solve them because they’re not technology problems in the first place. They’re people problems. I’m reminded of a story I read a few weeks ago about another group of tech titans who’ve bought land somewhere outside San Francisco and hope to use tech to build a perfect community.
It’ll get ruined when people move in.
The luminaries behind the AI advisory body don’t want governments or individuals paying too much attention to these risks because that could result in “over-regulation,” whatever that means. Efforts at the national level, they add, would also crimp development of what is a global phenomenon.
So, let’s invent a bureaucracy, staff it with computer scientists, and let them tell the rest of us what we should and shouldn’t worry about. The more we talk about the tech, the less we’ll understand about its impacts and effects.
It’ll formalize the Status Quo. They’ve even given it a stupid acronym for a name: IPAIS, or International Panel on AI Safety.
What we need instead is the exact opposite of what the tech toffs are recommending. We need an international body that doesn’t include computer scientists and other tech believers, and instead consists of the very people they want us to ignore: labor activists, social scientists, philosophers, ethicists, clergy and yes, politicians.
We don’t need more expert assessments of AI risk. Making AI run efficiently and accurately is a technical issue, for sure, but that’s not where the danger lies.
We need more analyses of what the tech will do to our work and our personal lives because the stuff we should worry about isn’t technical: It’s economic, political, social, psychological, and religious.
I’m ready to serve on that committee.
[This essay appeared originally at Spiritual Telegraph]