Tag
8 articles
A recent test revealed that ChatGPT provided inaccurate recommendations when asked about WIRED's reviewer picks for electronics, highlighting AI's limitations in handling curated content.
This explainer covers real-time fact-checking: how it works and why it is critical for building trustworthy social media platforms.
German media outlet Der Spiegel removed images from its Iran coverage after discovering they were likely created or altered by artificial intelligence. The incident highlights the growing challenge of misinformation in modern journalism.
YouTube is expanding its AI deepfake detection tool to include politicians and journalists, helping protect public figures from AI-generated misinformation. The platform's likeness detection feature will now notify selected users when AI-generated content featuring their likeness appears on the site.
YouTube expands AI deepfake detection to politicians, journalists, and officials, enabling them to flag unauthorized likenesses for removal. The move strengthens efforts to combat AI-generated misinformation.
Meta's deepfake detection methods are insufficient for handling misinformation during armed conflicts, according to its own Oversight Board. The board is calling for a major overhaul of how the company identifies and surfaces deepfake content.
Experts are working to verify the authenticity of videos and images circulating online following the US and Israeli military strikes on Iran, as deepfakes and manipulated content become increasingly difficult to distinguish from real footage.
ChatGPT Voice and Gemini Live repeat false information up to 50% of the time, while Alexa refuses to spread falsehoods, highlighting critical gaps in AI safety.