OpenAI’s Ex-Policy Lead Accuses Company of Rewriting AI Safety History

Ex-OpenAI researcher Miles Brundage accuses the company of downplaying past AI safety concerns.
Matilda
A high-profile ex-OpenAI policy researcher, Miles Brundage, took to social media on Wednesday to criticize OpenAI for “rewriting the history” of its deployment approach to potentially risky AI systems.

Earlier this week, OpenAI published a document outlining its current philosophy on AI safety and alignment, the process of designing AI systems that behave in desirable and explainable ways. In the document, OpenAI said that it sees the development of AGI, broadly defined as AI systems that can perform any task a human can, as a “continuous path” that requires “iteratively deploying and learning” from AI technologies.

“In a discontinuous world […] safety lessons come from treating the systems of today with outsized caution relative to their apparent power, [which] is the approach we took for [our AI model] GPT‑2,” OpenAI wrote. “We now view the first AGI as just one point along a series of systems of increasing usefulness […] In the contin…