As AI advances, doomers warn the superintelligence apocalypse is nigh

The Anthropic website on a laptop arranged in New Hyde Park, New York, on Aug. 22. Anthropic is one of the leading artificial intelligence companies. The company's CEO was among those who signed a public statement in 2023 acknowledging the "risk of extinction from AI." (Gabby Jones / Bloomberg via Getty Images)
Nate Soares, co-author of the book If Anyone Builds It, Everyone Dies, says time is running out to stop a superhuman AI from wiping out humanity. (Martin Kaste / NPR)

What happens when we make an artificial intelligence that's smarter than us? Some AI researchers have long warned that moment will mean humanity's doom.

Now that AI is rapidly advancing, some "AI Doomers" say it's time to hit the brakes. They say the machine learning revolution that led to everyday AI models such as ChatGPT has also made it harder to figure out how to "align" artificial intelligence with our interests – namely, keeping AI from outsmarting humans. AI safety researchers say there's a chance that such a superhuman intelligence would act quickly to wipe us out.

NPR's Martin Kaste reports on the tensions in Silicon Valley over AI safety.

For a more detailed discussion on the arguments for — and against — AI doom, please listen to this special episode of NPR Explains:


And for the truly curious, a reading list:

The abbreviated version of the "Everyone Dies" argument, in The Atlantic.

The "useful idiots" rebuttal, also in The Atlantic.

The potential timeline of an AI takeover.

Research into "AI Faking" and deception.

Smith College economics professor James Miller reflects on the game theory of expecting an AI apocalypse while hoping for AI salvation.

Maybe AI isn't speeding up smarter AI, at least not yet. Research from METR.

Analysis — and skepticism — from experts about the near-term likelihood of human- or superhuman-level artificial intelligence.

Copyright 2025 NPR

Martin Kaste
Martin Kaste is a correspondent on NPR's National Desk. He covers law enforcement and privacy. He has been focused on police and use of force since before the 2014 protests in Ferguson, and that coverage led to the creation of NPR's Criminal Justice Collaborative.