AI chatbots and news: frequent mistakes, so stay critical
AI chatbots often prove inaccurate when it comes to news: 45% of responses contain errors, so critical use remains essential.
Published on October 23, 2025

Large-scale research among 22 public broadcasters from Europe and North America shows that AI chatbots such as ChatGPT, Gemini, and Copilot have problems in almost half of cases (45%) when answering questions about news and current events. The main issues are incorrect source references, factual inaccuracies, and a lack of context. Google's Gemini scored particularly poorly, with problems in 76% of cases. Public broadcasters are calling for better oversight of and cooperation with AI companies to improve reliability, as AI chatbots are increasingly used for news, especially by young people.
Errors in the details
The study, conducted by the BBC, the European Broadcasting Union (EBU), and 22 public broadcasters from Europe, Canada, and the United States, revealed that AI chatbots often provide unreliable information. Researchers put more than 3,000 questions to four AI models—ChatGPT, Gemini, Copilot, and Perplexity—and found that 45% of the answers contained at least one error. These errors ranged from incorrect source citations to factual inaccuracies and missing relevant context. A common problem was that chatbots cited the wrong sources or linked to websites unrelated to the question asked.
The impact on public opinion and neutrality
The errors and inaccuracies in AI chatbots' answers can damage the perceived neutrality of news sources. For example, if a chatbot cites only NOS (the Dutch public broadcaster) in response to a question about climate change, this may give the reader the false impression that the NOS takes a particular position. That can distort readers' view of reality and undermine confidence in the objectivity of the reporting. Public broadcasters therefore emphasize the importance of reliable news provision and see the study as confirmation of their decision to temporarily deny AI assistants access to their news content.
Call for action and regulation
At a time when AI chatbots are increasingly used for news gathering, especially by young people, promoting media literacy and AI literacy is crucial. A June 2025 study by the Reuters Institute found that 7% of respondents used AI chatbots for news, rising to 15% among those under 25.
The EBU and public broadcasters are calling on the European Union to better enforce compliance with its AI laws and to establish a watchdog that regularly monitors AI assistants. They advocate structural consultation with policymakers to address the issues and improve the reliability of AI assistants. The BBC and EBU have developed a toolkit that gives media organizations insight and practical tools. The VRT (the Flemish public broadcaster) has indicated that it uses artificial intelligence strategically: not as hype, but as a technology that strengthens its social mission. Through cooperation and clear guidelines, the reliability of AI chatbots can be improved, and users can rely on this technology with greater confidence.