OpenAI, Axel Springer in deal to integrate AI and journalism, tackle AI ‘hallucinations’
Axel Springer, one of Europe's largest media companies, is partnering with OpenAI to integrate journalism into the firm's artificial intelligence (AI) technologies, including ChatGPT, the German publisher said in a Dec. 13 blog post.
The collaboration will use content from Axel Springer's media brands to help train OpenAI's large language models. It aims to give ChatGPT users up-to-date, authoritative content across diverse topics and to increase transparency through attribution and links to the full articles.
Generative AI chatbots have long grappled with factual accuracy, occasionally generating false information, commonly referred to as "hallucinations." OpenAI announced initiatives to reduce these hallucinations in a June post on its website.
AI hallucinations occur when an artificial intelligence system generates factually incorrect or unsupported information and presents it as real. Hallucinations can take various forms, such as fabricating nonexistent events or people, or providing inaccurate details about real topics.
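For developers, one common mitigation is to ground the model in source text rather than letting it answer from memory. The snippet below is a minimal, hypothetical sketch of that technique using OpenAI's Python SDK; the model name, prompt wording and sample article are illustrative assumptions, not details of the Axel Springer deal.

```python
# Minimal sketch: ground the model in supplied source text and ask it to
# decline when the answer is not in that text. All specifics here are
# illustrative assumptions, not part of the OpenAI-Axel Springer agreement.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

article = (
    "Axel Springer and OpenAI announced a partnership on Dec. 13 to bring "
    "the publisher's journalism into ChatGPT, with attribution and links."
)

response = client.chat.completions.create(
    model="gpt-4",  # assumption: any chat-capable model works here
    messages=[
        {
            "role": "system",
            "content": (
                "Answer only using the article provided. If the article "
                "does not contain the answer, say you don't know."
            ),
        },
        {
            "role": "user",
            "content": f"Article:\n{article}\n\nQuestion: Who are the partners?",
        },
    ],
)

print(response.choices[0].message.content)
```

Constraining answers to supplied text does not eliminate hallucinations, but it narrows the model's room to invent facts and makes wrong answers easier to audit against the source.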
The blend of AI and journalism has presented challenges, including concerns about transparency and misinformation. An Ipsos Global study found that 56% of Americans and 64% of Canadians believe AI will exacerbate the spread of misinformation, while 74% of respondents globally think AI makes it easier to create realistic fake news.
Under the partnership, ChatGPT users will be able to receive summaries of news content from Axel Springer's media brands, including Politico, Business Insider, Bild and Die Welt.
Related: Open-source AI can outperform private models like ChatGPT – ARK Invest research
However, the potential for AI to combat misinformation is also being explored, as seen with tools like AI Fact Checker and Microsoft’s integration of GPT-4 into its Edge browser.
The Associated Press has responded to these concerns by issuing guidelines restricting the use of generative AI in news reporting, emphasizing the importance of human oversight.
In October 2023, a team of scientists from the University of Science and Technology of China and Tencent's YouTu Lab developed a tool to combat hallucinations by AI models.
Magazine: Deepfake K-Pop porn, woke Grok, ‘OpenAI has a problem,’ Fetch.AI: AI Eye