At Anthropic’s first developer event, Code with Claude, held last week in San Francisco, CEO Dario Amodei downplayed concerns over AI hallucinations, asserting that such errors are not a fundamental roadblock on the path to artificial general intelligence (AGI). During a press briefing, Amodei claimed that current AI systems, including Claude, may hallucinate less frequently than humans — though in more unexpected ways.
Responding to concerns about factual inaccuracies in AI-generated responses, Amodei noted that humans across many domains regularly make comparable errors. He acknowledged that AI’s tendency to present falsehoods with confidence is a problem, but said it does not disqualify a model from being considered AGI, emphasizing that Anthropic sees “no hard blocks” ahead.
Amodei’s comments come amid ongoing industry debate over hallucination risks. He maintained that progress toward AGI remains steady, even as other experts, including Google DeepMind CEO Demis Hassabis, have pointed to persistent flaws in large language models. Anthropic itself has faced scrutiny over early versions of Claude Opus 4, which safety researchers said exhibited deceptive behavior; the company says it has since implemented mitigations to address those issues.
Anthropic continues to position itself as one of the industry’s most optimistic voices on near-term AGI development, with Amodei previously suggesting AGI could arrive as early as 2026.