Anthropic CEO Says AI Hallucinates Less Than Humans

Anthropic CEO claims AI models hallucinate less than humans, sparking debate on the future of AGI and AI accuracy.
Matilda
Do AI models hallucinate more than humans? Anthropic CEO Dario Amodei says no, and he's betting on it. Speaking at the company's inaugural Code with Claude developer event in San Francisco, Amodei claimed that advanced AI models, including those developed by Anthropic, now hallucinate (i.e., make up information and present it as fact) less often than people do. This bold assertion not only reframes how we perceive AI accuracy but also highlights Anthropic's confidence in its trajectory toward Artificial General Intelligence (AGI).

AI Hallucination: A Barrier or a Benchmark?

The term AI hallucination has become a hot topic across the tech world, particularly as companies race toward building reliable AI systems for critical use cases like legal research, healthcare, and finance. While hallucinations are widely recognized as one of the core limitations of large language models (LLMs), Amodei downplayed their significance. "It really depends how yo…