AI ‘superintelligence’ would pose 5% or greater risk of extinction, researchers claim


Illustration of artificial intelligence legislation and regulation: a blue neon balance scale holding the letters "AI" on a silver laptop. (Dragon Claws/Getty Images)



There’s a non-trivial chance that artificial intelligence could cause an extinction-level event for humanity if it ever becomes more intelligent than humans, according to researchers in the field.

The conclusion comes from a survey of 2,700 AI researchers who had published work at six leading academic conferences on the subject. The experts were asked to share their thoughts on possible timelines and outcomes for the technology, as well as how it could affect society as a whole. A majority of respondents said that AI surpassing human intelligence, a scenario known as "superintelligence," would carry a 5% or greater chance of causing an extinction-level event.

“It’s an important signal that most AI researchers don’t find it strongly implausible that advanced AI destroys humanity,” Katja Grace, an author of the survey and researcher at the Machine Intelligence Research Institute, told New Scientist. “I think this general belief in a non-minuscule risk is much more telling than the exact percentage risk.”


The same group of researchers gave a 50% or greater chance that, within the next decade, AI programs will be able to complete the vast majority of the sample tasks listed in the survey, from writing a song indistinguishable from one by Taylor Swift and coding a payment processor from scratch to installing the wiring in a new home. The survey used these tasks as a benchmark for judging whether an AI system is more capable than the average human, and respondents put a 50% chance on AI outperforming humans at all 39 tasks by 2047.

The vast majority of researchers also pointed to more immediate concerns. Seventy percent or more of those surveyed said AI could be used to create fake images designed to manipulate the public (known as deepfakes), engineer weapons, control populations, or worsen economic inequality.


These concerns have been regularly presented to lawmakers in Europe and the United States. Senate Majority Leader Chuck Schumer (D-NY) spent several weeks in the fall hosting experts to discuss how the technology could be adequately regulated and what guardrails would be needed to avoid the risks researchers have identified. The European Union's legislative branch is also in the final stages of passing its AI Act, which would establish the first comprehensive legal framework for the technology.

