MAY 2023 AI RISK STATEMENT
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
How to read this page
This is not a raw signer list. It is a practical map for readers who want to understand which voices mattered most, where sAIfe Hands already has strong coverage, and where deliberate expansion is still needed.
Where signer status is uncertain in our current dataset, we mark that clearly and avoid overclaiming.
Frontier labs and executive leadership
- Sam Altman - central frontier-lab governance voice in the 2023-2026 cycle.
Related: Sam Altman - ReThinking transcript (future of AI and humanity), Sam Altman - OpenAI Pentagon red lines, Ilya Sutskever - safety and governance watch
- Demis Hassabis - DeepMind research and governance bridge.
Related: Demis Hassabis - scaling superhuman AIs, Demis Hassabis - urgent AI threat research
- Dario Amodei - frontier deployment and safety thresholds.
Related: Dario Amodei - OpenAI and global impact, Dario Amodei - safety red lines
- Ilya Sutskever - major figure in frontier model development and safety discourse.
Related: Ilya Sutskever - safety and governance watch
- Shane Legg - DeepMind cofounder with long-horizon AGI framing.
Related: Shane Legg - 2028 AGI and alignment architectures
AI pioneers and researchers
- Geoffrey Hinton - central mainstream voice in 2023 risk escalation narratives.
Related: Geoffrey Hinton - 2023 warning arc, Geoffrey Hinton - Nobel Prize Conversations, Geoffrey Hinton - 60 Minutes AI dangers transcript
Related (adjacent): Ex-OpenAI Ex-Tesla AI insider REVEALS it all...
- Yoshua Bengio - core technical authority with public-risk relevance.
Related: Yoshua Bengio - Paris AI Safety Breakfast #3, Yoshua Bengio - Introducing LawZero, Yoshua Bengio - 2 years before everything changes
- Stuart Russell - foundational control and alignment framing.
Related: Stuart Russell (80,000 Hours), Diary discussion
Alignment and safety figures
- Jan Leike - superalignment and practical safety approaches.
Related: Superalignment with Jan Leike
- Paul Christiano - alignment strategy, delegation, and takeover risk.
Related: Preventing an AI takeover, OpenAI alignment solutions, AI existential risk with Paul Christiano
- Eliezer Yudkowsky - maximal-risk framing and public argument pressure.
Related: Why AI will kill us
- Max Tegmark - policy and existential-risk institution-building voice.
Related: Max Tegmark and institutional risk coordination
Editorial coverage status
Strong coverage exists for: Demis Hassabis, Dario Amodei, Stuart Russell, Paul Christiano, Jan Leike, Eliezer Yudkowsky, Shane Legg.
Expanded coverage now exists for: Geoffrey Hinton, Ilya Sutskever, Max Tegmark.
Partial coverage remains for: Sam Altman, Yoshua Bengio.
Related discussions
- Geoffrey Hinton - 2023 warning arc (Mainstream legitimacy, 2026)
Shows how signatory influence translated into mainstream legitimacy for catastrophic-risk language.
- Ilya Sutskever - safety and governance watch (Frontier-lab signal, 2026)
Tracks a high-leverage signatory whose institutional role matters even when public output is limited.
- Max Tegmark and institutional risk coordination (Institutional layer, 2026)
Adds the institutional-coordination layer that signatory lists alone cannot explain.