Risky Science Podcast and Model Q&As · · 2 min read

From Climate to AI: Why Some Catastrophic Risks Defy Easy Models

Why uncertainty defines catastrophic risk, how climate change, pandemics, and AI intersect, and why markets alone cannot address these perils.



As markets and policymakers confront the convergence of climate, pandemic, and technological risks, the Risky Science Podcast spoke with Dr. Seth Baum, Executive Director of the Global Catastrophic Risk Institute, about how researchers define, measure, and respond to risks that could threaten civilization itself.

Baum, who also serves as a research affiliate with Cambridge University’s Centre for the Study of Existential Risk, emphasized that the defining feature of global catastrophic risk is uncertainty and that overconfidence can be the biggest peril.

“One of the biggest mistakes people can make is to assume they have a strong understanding of the risk when in fact it’s just a really weak set of evidence,” he said.

From Climate to Catastrophes

Baum classifies climate change as a global catastrophic risk, comparable in potential severity to nuclear winter. Yet he stresses that the label itself shouldn’t change what governments and markets must do: cut emissions, help populations adapt, and account for climate impacts.

The catastrophic framing simply adds urgency.

Pandemics offer another case. Experts long warned that a major outbreak was inevitable, but COVID-19 showed how debates over origins can overshadow preparedness. Baum argued that regardless of how COVID started, societies must plan for the full range of future pandemics, whether natural or artificial.

Unlike traditional catastrophe modeling, where decades of hurricane or earthquake data help calibrate probabilities, global catastrophic risks often lack historical precedent. That makes markets vulnerable to false precision.

“Financial markets, governments, whoever—need to take the possibility of rare extreme outcomes seriously, but also be humble in our understanding of those risks,” Baum noted.

AI in the Catastrophe Context

Artificial intelligence plays a dual role: it is a tool that could amplify other risks (e.g., nuclear systems) and a direct source of catastrophic failure if control is lost. While Baum is less alarmed than some peers about current AI systems, he stressed that society must plan for scenarios where AI advances beyond human oversight.

Markets can’t solve global catastrophic risks alone. Baum underscored the need for governance, international cooperation, and renewed citizen participation in democracy to address threats too large for private capital.

“If there’s one thing we can all agree on, it’s that global catastrophe is bad,” he said. “But unless we come together, I am concerned about how long we can go until something really bad happens that nobody wants.”

🎙️ Get the full episode of the podcast on your favorite player here.
