Buyer's Insight
Thursday, October 23, 2025
AI chatbots fail to obtain accurate information, major study reveals – DW – 10/22/2025

By James Walker
October 23, 2025
in Technology

A major new study by 22 public service media organizations, including DW, found that four of the most commonly used AI assistants misrepresent news content 45% of the time, regardless of language or territory.

Journalists from various public service broadcasters, including the BBC (UK) and NPR (US), evaluated the responses of four AI assistants, or chatbots: ChatGPT, Microsoft’s Copilot, Google’s Gemini and Perplexity AI.

Measuring criteria such as accuracy, sourcing, provision of context, editorialization, and the ability to distinguish fact from opinion, the study found that nearly half of all responses had at least one significant problem, while 31% contained serious sourcing problems and 20% contained major factual errors.

DW found that 53% of the answers AI assistants provided to its questions had significant issues, with 29% containing specific accuracy problems.

Factual errors in responses to DW's questions included one naming Olaf Scholz as German chancellor, even though Friedrich Merz had taken office a month earlier. Another named Jens Stoltenberg NATO Secretary General after Mark Rutte had already taken over the post.

[Image: The DW logo at the broadcaster's Berlin headquarters. DW was one of 22 international media organizations involved in the study. Image: Monika Skolimowska/dpa/picture alliance]

AI assistants have become an increasingly common way for people around the world to access information. According to the Reuters Institute's Digital News Report 2025, 7% of online news consumers use AI chatbots to get information, with the figure rising to 15% among those under 25.

The study’s authors say it confirms that AI assistants systematically distort news content of all kinds.

“This research shows conclusively that these failures are not isolated incidents,” said Jean Philip De Tender, deputy director general of the European Broadcasting Union (EBU), who coordinated the study.

“They are systemic, cross-border and multilingual, and we believe this endangers public trust. When people don’t know what to trust, they end up trusting nothing at all, which can discourage democratic participation.”

Unprecedented study

This is one of the largest research projects of its kind to date and follows a study undertaken by the BBC in February 2025. That study found that more than half of all verified AI responses had significant problems, while almost a fifth of responses citing BBC content as a source introduced their own factual errors.

The new study saw media organizations from 18 countries and multiple language groups apply the same methodology as the BBC study to 3,000 AI responses.

The organizations asked the four AI assistants common topical questions, such as “What is the Ukrainian minerals deal?” or “Can Trump run for a third term?”

[Image: BBC headquarters in London. The study used the same criteria as a February 2025 BBC study. Image: Vuk Valcic/SOPA Images/Sipa USA/picture alliance]

The journalists then reviewed the answers against their own expertise and professional sources, without knowing which assistant had provided them.

Compared to the BBC study eight months ago, the results show a slight improvement, but a high level of error remains apparent.

“We are excited about AI and how it can help us bring even more value to audiences,” Peter Archer, director of the BBC’s Generative AI programme, said in a statement. “But people need to be able to trust what they read, watch and see. Despite some improvements, it’s clear that significant problems remain with these assistants.”

Gemini performed the worst of the four chatbots, with 72% of its responses showing significant sourcing issues. In the BBC study, Microsoft’s Copilot and Gemini were considered the worst performers. But in both studies, all four AI assistants had problems.

In a statement provided to the BBC in February, a spokesperson for OpenAI, which developed ChatGPT, said: “We support publishers and creators by helping ChatGPT’s 300 million weekly users discover quality content through summaries, quotes, clear links and attribution.”

Researchers call for action from governments and AI companies

The broadcasters and media behind the study are calling on national governments to act.

In a press release, the EBU said its members are “pressuring European and national regulators to enforce existing laws on information integrity, digital services and media pluralism.”

[Image: A man uses a laptop to chat with an AI. AI assistants are increasingly used to find news. Image: Supatman/La Nacion/ZUMA/picture alliance]

They also stressed that independent monitoring of AI assistants must be a priority in the future, given the speed with which new AI models are deployed.

Meanwhile, the EBU has joined forces with several other international broadcasting and media groups to launch a joint campaign called “Facts In: Facts Out”, which calls on AI companies themselves to take more responsibility for how their products process and redistribute information.

In a statement, campaign organizers said: “When these systems distort, misattribute or decontextualize reliable information, they undermine public trust.”

“The demand of this campaign is simple: if facts go in, facts must come out. AI tools must not compromise the integrity of the information they use.”

Edited by: Kristie Pladson

[Video: Mind-Reading AI, Meta Cleansing, and Digital Freedom]