DeepMind’s AGI Safety Report Raises More Questions Than It Answers

DeepMind’s latest 145-page paper on AGI safety predicts its arrival by 2030 but faces skepticism.
Matilda
Google DeepMind has released a 145-page report outlining its approach to artificial general intelligence (AGI) safety, and it is already sparking heated debate. While DeepMind predicts AGI could emerge by 2030, the real question is whether its proposed safeguards are enough, or whether they are even addressing the right concerns.

Google DeepMind defines AGI as AI that matches or surpasses human-level performance across a wide range of cognitive tasks. The report, co-authored by DeepMind co-founder Shane Legg, warns of “severe harm,” including existential risks that could “permanently destroy humanity.” A key claim is that an “Exceptional AGI” will emerge before 2030: an AI system operating at the 99th percentile of human cognitive skill across multiple domains. If this projection holds, the implications for society, economies, and security could be profound.

DeepMind vs. OpenAI and Anthropic: Who’s Right?

DeepMind’s report critiques the safety strategies of its competitors, arguing that:

Anthropic prioritizes tran…