Why this document matters
The May 2023 AI Risk Statement is a short text with outsized institutional weight. It marked a shift from technical alignment discourse into explicit public-risk framing, and it did so in language that policy, media, and executive audiences could repeat without specialist translation.
For sAIfe Hands, this page is a primary-source anchor: read the exact wording first, then follow the linked interpretation, signatory, and discussion pages.
Primary statement text
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
This is the one-line statement released by the Center for AI Safety in May 2023 and signed by a broad set of researchers and public figures.
Why this matters in the sAIfe Hands lens
- The statement normalized extinction-risk language in mainstream institutions.
- It created a common baseline sentence that different camps could debate.
- It accelerated a policy transition from "is this speculative?" to "what controls are proportionate?"
- It exposed a continuing tension between long-horizon catastrophic risk and present-day deployment harms.
Explore further
- AI Risk Statement - Key Ideas
- AI Risk Statement - Signatories
- AI Risk Statement - Why 2023 mattered
- AI Risk Statement - Interpretations and critiques
Related discussions
Signal Room bridge
23 Feb 2026
An AI Expert Warning: 6 People Are (Quietly) Deciding Humanity’s Future!
A high-signal conversation with Stuart Russell that maps the same risk framing to governance and institutional response.
Library deep dive
25 May 2021
Stuart Russell on the flaws that make today's AI architecture unsafe, and a new approach that could fix them
Foundational technical framing for understanding why extinction-risk language moved from fringe to central.
Public discourse linkage
11 Jan 2025
Yoshua Bengio - 2 Years Before Everything Changes
A mainstream-facing articulation of frontier risk and governance urgency from a core AI researcher.