California Just Folded On Regulating AI

Jonathan Salem Baskin
2 min read · Sep 30, 2024


California’s governor Gavin Newsom has vetoed the nation’s most thoughtful and comprehensive AI safety bill, opting instead to “partner” with “industry experts” to develop voluntary “guardrails.”

Newsom claimed the bill was flawed because it would put onerous burdens and legal culpability on the biggest AI models — i.e. the AI deployments that would be the most complex and impact the most people on the most complicated topics — and thereby “stifle innovation.”

By doing so, it would also disincentivize smaller innovators from building new stuff, since they’d be worried that they’d be held accountable for their actions later.

This argument parroted the blather that came from the developers, investors, politicians and “industry experts” who opposed the legislation…and who’ll benefit most financially from unleashing AI on the world while not taking responsibility for the consequences (except making money).

This is awful news for the rest of us.

Governments are proving to be utterly ineffective at regulating AI, if not downright uninterested in even trying. Only two US states have laws in place (Colorado and Utah), and they're focused primarily on making sure users follow existing consumer protection requirements.

On a national level, the Feds have little in the works beyond pending requirements that AI developers assess their work and file reports, similar to what the EU has recently put into law.

It’s encouragement to voluntarily do the right thing, whatever that is.

Well, without any meaningful external public oversight, the “right thing” will be whatever those AI developers, investors, politicians, and “industry experts” think it is. This will likely draw on the prevailing Silicon Valley sophistry known as Effective Altruism, which claims that technologists can distill any messy challenge into an equation that will yield the best solution for the most people.

Who needs oversight from ill-informed politicians when the best and brightest (and often richest) tech entrepreneurs can arrive at such genius-level conclusions on their own?

Forget worrying about AIs going rogue and treating shoppers unfairly or deciding to blow up the planet; what if AI does exactly what we've been promised it will do?

Social impacts of a world transformed by AI usage? Plans for economies that use capitalized robots in place of salaried workers? Impacts on energy usage, and thereby global climate change, from those AI servers chugging electricity?

Or, on a more personal level, will you or I get denied medical treatment, school or work access, or even survivability in a car crash because some database says that we’re worth less to society than someone else?

Don’t worry, the AI developers, investors, politicians, and “industry experts” will make those decisions for us.

Even though laws can be changed, amended, rescinded, and otherwise adapted to evolving insights and needs, California has joined governments around the world in choosing to err on the side of cynical neglect over imperfect oversight.

Don’t hold AI developers, investors, politicians, and “industry experts” accountable for their actions. Instead, let’s empower them to benefit financially from their work while shifting all the risks and costs onto the rest of us.

God forbid we stifle their innovation.

[This essay appeared originally at Spiritual Telegraph]


Written by Jonathan Salem Baskin

I write books about technology and brands, sci-fi stories, and rock musicals.
