News

Meta, OpenAI, Anthropic and Cohere A.I. models all make stuff up — here’s which is worst

By News Room | August 19, 2023

If the tech industry’s top AI models had superlatives, Microsoft-backed OpenAI’s GPT-4 would be best at math, Meta’s Llama 2 would be most middle of the road, Anthropic’s Claude 2 would be best at knowing its limits and Cohere AI would receive the title of most hallucinations — and most confident wrong answers.

That’s all according to a Thursday report from researchers at Arthur AI, a machine learning monitoring platform.

The research comes at a time when misinformation stemming from artificial intelligence systems is more hotly debated than ever, amid a boom in generative AI ahead of the 2024 U.S. presidential election.

It’s the first report “to take a comprehensive look at rates of hallucination, rather than just sort of … provide a single number that talks about where they are on an LLM leaderboard,” Adam Wenchel, co-founder and CEO of Arthur, told CNBC.

AI hallucinations occur when large language models, or LLMs, fabricate information entirely, behaving as if they are spouting facts. One example: In June, news broke that ChatGPT cited “bogus” cases in a New York federal court filing, and the New York attorneys involved may face sanctions. 

In one experiment, the Arthur AI researchers tested the AI models in categories such as combinatorial mathematics, U.S. presidents and Moroccan political leaders, asking questions “designed to contain a key ingredient that gets LLMs to blunder: they demand multiple steps of reasoning about information,” the researchers wrote.
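The kind of evaluation described here can be sketched as a simple scoring loop: pose questions with known answers, count confident wrong answers as hallucinations, and count refusals separately. The sketch below is a hypothetical illustration, not Arthur AI's actual harness; the refusal phrases, sample questions, and `toy_model` stand-in are all invented for the example.

```python
# Hypothetical sketch of a hallucination evaluation (not Arthur AI's
# actual methodology): score each answer as correct, abstained, or
# hallucinated (confidently wrong).

def score_responses(qa_pairs, model_fn):
    """Return (hallucination_rate, abstention_rate) for a model.

    qa_pairs: list of (question, correct_answer) tuples.
    model_fn: callable mapping a question to an answer string.
    """
    REFUSALS = ("i cannot", "i don't know", "as an ai")  # illustrative list
    hallucinated = abstained = 0
    for question, correct in qa_pairs:
        answer = model_fn(question).lower()
        if any(phrase in answer for phrase in REFUSALS):
            abstained += 1            # model declined to answer
        elif correct.lower() not in answer:
            hallucinated += 1         # confident but wrong
    n = len(qa_pairs)
    return hallucinated / n, abstained / n

# Toy stand-in for an LLM: one correct answer, one refusal,
# and one confidently wrong answer.
def toy_model(q):
    return {
        "Who was the 16th U.S. president?": "Abraham Lincoln",
        "How many ways can 5 people sit in a row?": "There are 60 ways.",
        "Who led Morocco in 1999?": "As an AI, I cannot say.",
    }[q]

pairs = [
    ("Who was the 16th U.S. president?", "Abraham Lincoln"),
    ("How many ways can 5 people sit in a row?", "120"),
    ("Who led Morocco in 1999?", "Hassan II"),
]
rate, abstain = score_responses(pairs, toy_model)  # each 1/3 here
```

Substring matching against a reference answer is a crude proxy; a real harness would need more careful answer checking, which is exactly why multi-step reasoning questions make good hallucination probes.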

Overall, OpenAI’s GPT-4 performed the best of all models tested, and researchers found it hallucinated less than its prior version, GPT-3.5 — for example, on math questions, it hallucinated between 33% and 50% less, depending on the category.

Meta’s Llama 2, on the other hand, hallucinated more overall than GPT-4 and Anthropic’s Claude 2, researchers found.

In the math category, GPT-4 came in first place, followed closely by Claude 2, but in U.S. presidents, Claude 2 took the first place spot for accuracy, bumping GPT-4 to second place. When asked about Moroccan politics, GPT-4 came in first again, and Claude 2 and Llama 2 almost entirely chose not to answer.

In a second experiment, the researchers tested how much the AI models would hedge their answers with warning phrases to avoid risk (think: “As an AI model, I cannot provide opinions”).

When it comes to hedging, GPT-4 had a 50% relative increase compared to GPT-3.5, which “quantifies anecdotal evidence from users that GPT-4 is more frustrating to use,” the researchers wrote. Cohere’s AI model, on the other hand, did not hedge at all in any of its responses, according to the report. Claude 2 was most reliable in terms of “self-awareness,” the research showed, meaning accurately gauging what it does and doesn’t know, and answering only questions it had training data to support.
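A hedging rate like the one quantified above can be approximated by matching disclaimer phrases at the start of each response and comparing two models' rates. This is a hypothetical illustration only; the `HEDGES` phrase list and the sample responses are invented, and the report's real methodology may differ.

```python
# Hypothetical sketch of measuring hedging: count responses that open
# with a disclaimer phrase, then compute the relative increase between
# an older and a newer model's outputs.

HEDGES = ("as an ai model", "i cannot provide", "i'm not able to")

def hedge_rate(responses):
    """Fraction of responses that begin with a hedging phrase."""
    hits = sum(1 for r in responses if r.lower().startswith(HEDGES))
    return hits / len(responses)

# Invented sample outputs for two model versions.
old = ["Paris is the capital of France.",
       "As an AI model, I cannot provide opinions."]          # 1 of 2 hedged
new = ["As an AI model, I cannot provide opinions.",
       "I cannot provide financial advice.",
       "2 + 2 = 4."]                                          # 2 of 3 hedged

# Relative increase: (2/3) / (1/2) - 1 = 1/3, i.e. about 33%.
increase = hedge_rate(new) / hedge_rate(old) - 1.0
```

Note the "relative increase" framing: a jump from a 10% hedge rate to a 15% hedge rate is a 50% relative increase, even though only 5 percentage points separate the two.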

A spokesperson for Cohere pushed back on the results, saying, “Cohere’s retrieval augmented generation technology, which was not in the model tested, is highly effective at giving enterprises verifiable citations to confirm sources of information.”

The most important takeaway for users and businesses, Wenchel said, was to “test on your exact workload,” later adding, “It’s important to understand how it performs for what you’re trying to accomplish.”

“A lot of the benchmarks are just looking at some measure of the LLM by itself, but that’s not actually the way it’s getting used in the real world,” Wenchel said. “Making sure you really understand the way the LLM performs for the way it’s actually getting used is the key.”
