Statement on AI Risk | CAIS
A statement jointly signed by a historic coalition of experts: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
CAIS 2024 Impact Report

AI experts, journalists, policymakers, and the public are increasingly discussing a broad spectrum of important and urgent risks from AI. Even so, it can be difficult to voice concerns about some of advanced AI's most severe risks. The succinct statement below aims to overcome this obstacle and open up discussion. It is also meant to create common knowledge of the growing number of experts and public figures who take some of advanced AI's most severe risks seriously.

The statement: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Signatories (partial list)

AI Scientists
- Geoffrey Hinton, Emeritus Professor of Computer Science, University of Toronto
- Yoshua Bengio, Professor of Computer Science, U. Montreal / Mila
- Demis Hassabis, CEO, Google DeepMind
- Dario Amodei, CEO, Anthropic
- Dawn Song, Professor of Computer Science, UC Berkeley
- Ya-Qin Zhang, Professor and Dean, AIR, Tsinghua University
- Ilya Sutskever, Co-Founder and Chief Scientist, OpenAI
- Igor Babuschkin, Co-Founder, xAI
- Shane Legg, Chief AGI Scientist and Co-Founder, Google DeepMind
- James Manyika, SVP, Research, Technology and Society, Google-Alphabet
- Yi Zeng, Professor and Director of Brain-inspired Cognitive AI Lab, Institute of Automation, Chinese Academy of Sciences
- Xianyuan Zhan, Assistant Professor, Tsinghua University
- Albert Efimov, Chief of Research, Russian Association of Artificial Intelligence
- Jianyi Zhang, Professor, Beijing Electronic Science and Technology Institute
- Anca Dragan, Associate Professor of Computer Science, UC Berkeley
- David Silver, Professor of Computer Science, Google DeepMind and UCL
- Stuart Russell, Professor of Computer Science, UC Berkeley
- Tony (Yuhuai) Wu, Co-Founder, xAI
- Andrew Barto, Professor Emeritus, University of Massachusetts
- Jaime Fernández Fisac, Assistant Professor of Electrical and Computer Engineering, Princeton University
- Diyi Yang, Assistant Professor, Stanford University
- Pattie Maes, Professor, Massachusetts Institute of Technology - Media Lab
- Eric Horvitz, Chief Scientific Officer, Microsoft
- Peter Norvig, Education Fellow, Stanford University
- Atoosa Kasirzadeh, Assistant Professor, University of Edinburgh, Alan Turing Institute
- Ian Goodfellow, Principal Scientist, Google DeepMind
- John Schulman, Co-Founder, OpenAI
- Wojciech Zaremba, Co-Founder, OpenAI
- Dan Hendrycks, Executive Director, Center for AI Safety

Other Notable Figures
- Sam Altman, CEO, OpenAI
- Ted Lieu, Congressman, US House of Representatives
- Bill Gates, Gates Ventures
- Martin Hellman, Professor Emeritus of Electrical Engineering, Stanford
- Alvin Wang Graylin, China President, HTC
- Christine Parthemore, CEO and Director of the Janne E. Nolan Center on Strategic Weapons, The Council on Strategic Risks
- Bill McKibben, Schumann Distinguished Scholar, Middlebury College
- Alan Robock, Distinguished Professor of Climate Science, Rutgers University
- Angela Kane, Vice President, International Institute for Peace, Vienna; former UN High Representative for Disarmament Affairs
- Audrey Tang, Digital Minister (digitalminister.tw) and Chair of National Institute of Cyber Security
- Daniela Amodei, President, Anthropic
- Lila Ibrahim, COO, Google DeepMind
- Marian Rogers Croak, VP Center for Responsible AI and Human Centered Technology, Google
- Mira Murati, CTO, OpenAI
- Gillian Hadfield, Professor, CIFAR AI Chair, University of Toronto, Vector Institute for AI
- Laurence Tribe, University Professor Emeritus, Harvard University
- Kevin Scott, CTO, Microsoft
- Joseph Sifakis, Turing Award 2007, Professor, CNRS - Université Grenoble Alpes
- Erik Brynjolfsson, Professor and Senior Fellow, Stanford Institute for Human-Centered AI
- Mustafa Suleyman, CEO, Inflection AI
- Emad Mostaque, CEO, Stability AI

Statement on AI Risk press coverage

- Politico: How CAIS inspired UK AI policy
- The Evening Edit (Fox Business): How AI could be weaponized
- Washington Post: AI poses 'Risk of Extinction', say experts
- The Guardian: Risk of extinction by AI should be global priority, say experts
- Reuters: Top AI CEOs, experts raise 'risk of extinction' from AI
- New York Times: A.I. Poses 'Risk of Extinction,' Industry Leaders Warn
- The Times, London: How does AI threaten us — and can we make it safe?
- Fox News - America's Newsroom: Experts warn artificial intelligence could lead to 'extinction'
- Business Insider: An AI safety expert outlined a range of speculative doomsday scenarios, from weaponization to power-seeking behavior
- Economist: What are the chances of an AI apocalypse?
- Financial Times: AI executives warn its threat to humanity rivals 'pandemics and nuclear war'
- CNN Business: Experts are warning AI could lead to human extinction. Are we taking it seriously enough?
- BBC News: AI could lead to extinction, experts warn
- CNN Business: AI leaders sign risk statement
- Al Jazeera: AI poses 'risk of extinction', tech CEOs warn

Take action

Want to help reduce risks from AI? Donate to support our mission, or learn more about AI Frontiers. No technical background required.
Executive Summary
The CAIS 2024 Impact Report presents a succinct statement on AI risk, signed by prominent AI experts, policymakers, and public figures. The statement emphasizes the need to prioritize mitigating the risk of extinction from AI alongside other societal-scale risks such as pandemics and nuclear war. The report aims to foster open discussion and create common knowledge about the severe risks associated with advanced AI, highlighting the growing concern among experts and notable figures.
Key Points
- ▸ The statement underscores the urgency of addressing AI risks at a global level.
- ▸ Signatories include leading AI scientists, CEOs, policymakers, and other notable figures.
- ▸ The report aims to create common knowledge and open up discussion on severe AI risks.
Merits
High-Profile Signatories
The inclusion of prominent figures in AI and related fields lends significant credibility to the statement and underscores the seriousness of the concerns raised.
Clear and Concise Message
The statement is succinct and effectively communicates the urgency of addressing AI risks, making it accessible to a broad audience.
Promotes Open Discussion
The report encourages open dialogue about AI risks, which is crucial for developing comprehensive strategies to mitigate these risks.
Demerits
Lack of Specific Solutions
While the statement highlights the importance of addressing AI risks, it does not provide specific solutions or actionable steps, which could limit its immediate practical impact.
Potential for Overemphasis on Extinction Risk
The focus on extinction risk may overshadow other significant but less catastrophic risks associated with AI, potentially diverting attention and resources.
Limited Scope of Signatories
Although the list of signatories is impressive, it may not be representative of the global AI community, which could affect the broader acceptance of the statement.
Expert Commentary
The CAIS 2024 Impact Report is a timely and significant contribution to the ongoing discourse on AI risks. The statement's emphasis on the need for global priority in mitigating extinction risks from AI is both appropriate and urgent. The inclusion of high-profile signatories lends considerable weight to the message, underscoring the seriousness with which these risks are viewed by leading experts.

However, the report's focus on extinction risk, while critical, should not overshadow other significant risks such as job displacement, privacy concerns, and the potential for AI to exacerbate social inequalities. The statement's call for open discussion is commendable, but it would benefit from more concrete recommendations on how to address these risks effectively.

The report's impact will likely be seen in both the practical and policy realms, as it may spur the AI community to develop safer technologies and influence policymakers to enact robust governance frameworks. Overall, the report is a valuable step towards raising awareness and fostering collaboration in addressing the multifaceted challenges posed by advanced AI.
Recommendations
- ✓ Develop a comprehensive framework for AI risk assessment and mitigation that includes both extinction and other significant risks.
- ✓ Encourage international collaboration to establish global governance mechanisms for AI development and deployment, ensuring that ethical considerations and safety measures are prioritized.