    Anthropic CEO claims AI models hallucinate less than humans

By Earth & Beyond | May 23, 2025 | 3 min read
Anthropic CEO Dario Amodei believes today's AI models hallucinate (make things up and present them as if they're true) at a lower rate than humans do. He made the claim during a press briefing at Anthropic's first developer event, Code with Claude, in San Francisco on Thursday.

Amodei made the remark in service of a larger point: that AI hallucinations are not a limitation on Anthropic's path to AGI, meaning AI systems with human-level intelligence or better.

    “It really depends how you measure it, but I suspect that AI models probably hallucinate less than humans, but they hallucinate in more surprising ways,” Amodei said, responding to TechCrunch’s question.

    Anthropic’s CEO is one of the most bullish leaders in the industry on the prospect of AI models achieving AGI. In a widely circulated paper he wrote last year, Amodei said he believed AGI could arrive as soon as 2026. During Thursday’s press briefing, the Anthropic CEO said he was seeing steady progress to that end, noting that “the water is rising everywhere.”

    “Everyone’s always looking for these hard blocks on what [AI] can do,” said Amodei. “They’re nowhere to be seen. There’s no such thing.”

Other AI leaders believe hallucination presents a large obstacle to achieving AGI. Earlier this week, Google DeepMind CEO Demis Hassabis said today's AI models have too many "holes" and get too many obvious questions wrong. Hallucinations have also caused real-world problems: earlier this month, a lawyer representing Anthropic was forced to apologize in court after using Claude to generate citations for a court filing; the chatbot hallucinated, getting names and titles wrong.

    It’s difficult to verify Amodei’s claim, largely because most hallucination benchmarks pit AI models against each other; they don’t compare models to humans. Certain techniques seem to be helping lower hallucination rates, such as giving AI models access to web search. Separately, some AI models, such as OpenAI’s GPT-4.5, have notably lower hallucination rates on benchmarks compared to early generations of systems.

    However, there’s also evidence to suggest hallucinations are actually getting worse in advanced reasoning AI models. OpenAI’s o3 and o4-mini models have higher hallucination rates than OpenAI’s previous-gen reasoning models, and the company doesn’t really understand why.

Later in the press briefing, Amodei pointed out that TV broadcasters, politicians, and people in all types of professions make mistakes all the time. The fact that AI makes mistakes too is not a knock on its intelligence, according to Amodei. However, Anthropic's CEO acknowledged that the confidence with which AI models present untrue things as facts could be a problem.

    In fact, Anthropic has done a fair amount of research on the tendency for AI models to deceive humans, a problem that seemed especially prevalent in the company’s recently launched Claude Opus 4. Apollo Research, a safety institute given early access to test the AI model, found that an early version of Claude Opus 4 exhibited a high tendency to scheme against humans and deceive them. Apollo went as far as to suggest Anthropic shouldn’t have released that early model. Anthropic said it came up with some mitigations that appeared to address the issues Apollo raised.

    Amodei’s comments suggest that Anthropic may consider an AI model to be AGI, or equal to human-level intelligence, even if it still hallucinates. An AI that hallucinates may fall short of AGI by many people’s definition, though.
