Demote the doomsters

This paper in Science on “managing extreme AI risks amid rapid progress,” with 25 co-authors (Harari?), is getting quick attention. The paper leans heavily toward AI doom, warning of “an irreversible loss of human control over AI systems” that “could autonomously deploy a variety of weapons, including biological ones,” leading if unchecked to “a large-scale loss of life and the biosphere, and the marginalization or extinction of humanity.”

Deep breath.

Such doomsaying is itself a perilous mix of technological determinism and moral panic. There are real, present-tense risks associated with AI — as the Stochastic Parrots paper by Timnit Gebru, Margaret Mitchell, Emily Bender, and Angelina McMillan-Major carefully laid out — involving bias in input and output, anthropomorphization and fraud (just listen to GPT-4o’s saccharine voice), harm to the human workers who clean data, and harm to the environment. The Science paper, on the other hand, glosses over those current concerns to cry doom.

That doomsaying makes many assumptions.

It concentrates on the technology over the human use of it. Have we learned nothing from the internet? The internet’s problems have everything to do with human misuse.

It engages in the third-person effect and the hypodermic theory of media, brought to AI, assuming that AI will have some mystical ability to “gain human trust, acquire resources, and influence key decision-makers.” This is part and parcel of the doomsters’ belief that their machine will be smarter than everybody (except perhaps them). It is condescending and paternalistic in the extreme.

It imagines that technology is the solution to the problems technology poses — its own form of technological determinism — in the belief that systems can be “aligned” with human values.

Now here’s the actual bad news. Any general machine can be misused by any malign actor with ill intent. The pursuit of failsafe guardrails in AI will prove futile, for it is impossible to predict every bad use that anyone could make of a machine that can be asked to do anything. That is to say, it is impossible to build foolproof guardrails against us, for there are too many fools among us. 

AI is, like the printing press, a general machine. Gutenberg could not design movable type to prevent its use in promoting propaganda or witch hunts. The analogy is apt, for at the beginning of any technology, the technologists are held liable — in the case of print, printers were beheaded and burned at the stake for what came off their presses. Today, the Science paper and many an AI panelist say that the makers of AI models should be held responsible for everything that could ever be done with them. At best, that further empowers the already rich companies that can afford liability insurance. At worst, it distracts from the real work to be done and from the responsibility that also lies with users.

All this is why we must move past discussions of AI led by AI people and instead hear from other disciplines — the humanities and social sciences — which study human beings.

It is becoming impossible to untangle the (male, white, and wealthy) human ego involved in much of the AI boys’ discussion of AI safety: ‘See how powerful I am. I am become death and the machine I build will destroy worlds. So invest in me. And let me write the laws that will govern what I do.’

Take the coverage of “safety” at OpenAI. The entire company is filled with true believers in the BS of AGI and so-called x-risk (presumptions also apparently swallowed by the Science paper’s authors). The “safety” team at OpenAI was among the more fervent believers in doom, but everyone there seems to be in the same cult. They — the humans — are the ones I worry about. Yet in stories about the “safety” team’s departure, reporters take the word “safety” at face value and refuse to do their homework on the faux philosophies of #TESCREAL (Google it) and how they guide the chest-thumping of the doomsters.

The doomsaying in this paper is cloaked in niceties, but it is all of a piece.

All this is why I wrote my next book (I won’t turn this post into a plug for it) and why I am hoping to develop academic programs that bring other disciplines into this discussion. It is time to demote the geeks and the doomsters.

