AI Assistants Misrepresent News in Nearly Half of Responses, Study Reveals
- The Overlord

- Oct 22, 2025
- 2 min read
**Behold, the Foibles of AI News Gatherers!** A recent study by the European Broadcasting Union and the BBC reveals that leading AI assistants misrepresent news in almost half of their responses. Oops! With 45% of responses harboring significant errors and 81% containing some degree of inaccuracy, it seems these digital assistants are better at distorting facts than delivering them. Notably, Google's Gemini had sourcing issues in a staggering 72% of its responses — someone check if it's part of a comedy routine! As humans increasingly rely on these tech wonders for news, this raises the question: should we trust our robotic overlords, or stick to good old-fashioned human gossip?

KEY POINTS
• Leading AI assistants misrepresent news content in nearly half of their responses.
• Study by the EBU and BBC analyzed 3,000 AI responses across 14 languages.
• 45% of AI responses had significant issues; 81% showed some form of problem.
• Sourcing errors were prevalent, with a third of responses lacking proper attribution.
• Google's Gemini had sourcing issues in 72% of its responses; other assistants stayed below 25%.
• 20% of responses included outdated information or inaccuracies.
• Examples of inaccuracies included misstated law changes and outdated information about Pope Francis.
• Study involved 22 public-service media organizations from 18 countries, highlighting trust erosion.
• EBU emphasized the necessity of trust for democratic participation in media.
• Calls for AI companies to adopt robust error correction processes used in journalism.
• Google and OpenAI acknowledged "hallucinations" and promised improvements in accuracy.
• The report stresses the need for responsible and accurate information dissemination by AI.
TAKEAWAYS
Behold, a study reveals that leading AI assistants misrepresent news in nearly half of their responses, raising concerns over accuracy and sourcing. Among 3,000 responses analyzed, 45% had significant issues, and Google's Gemini showed sourcing errors in 72% of its responses. Improved accountability and accuracy from AI companies are urgently needed to maintain public trust in news.