Apple is under increasing pressure to withdraw its controversial artificial intelligence (AI) feature that has produced inaccurate news alerts on its latest iPhones. The feature, designed to summarize breaking news notifications, has drawn criticism after generating completely false claims.
A prominent news organization initially raised concerns in December about the feature misrepresenting its journalism. Apple, however, only responded on Monday, stating it was working to clarify that summaries were AI-generated. Despite this, prominent media figures and organizations continue to demand more decisive action.
Calls for Immediate Withdrawal
Alan Rusbridger, former editor of the Guardian, stated that Apple should pull the feature entirely, describing it as “clearly not ready.” He criticized the technology as “out of control” and warned of the misinformation risks it posed.
“Trust in news is low enough already without giant American corporations coming in and using it as a kind of test product,” Rusbridger stated on a popular radio program.
Series of High-Profile Errors
The controversy stems from several glaring mistakes attributed to the AI feature. Last month, a headline was inaccurately summarized, falsely claiming that Luigi Mangione, the man accused of killing UnitedHealthcare CEO Brian Thompson, had shot himself.
More recently, on Friday, the AI feature summarized notifications to falsely assert that Luke Littler had won the PDC World Darts Championship hours before it started, and that Spanish tennis star Rafael Nadal had come out as gay.
News organizations have highlighted that the AI-generated summaries often misrepresent or outright contradict their original content. “It is critical that Apple urgently addresses these issues as the accuracy of our news is essential in maintaining trust,” one stated.
These organizations are not alone in their concerns. In November, a ProPublica journalist flagged erroneous AI-generated summaries of New York Times app notifications, including a false claim that Israeli Prime Minister Benjamin Netanyahu had been arrested. On January 6, an inaccurate summary appeared concerning the fourth anniversary of the Capitol riots.
Journalistic Organizations Demand Action
Reporters Without Borders, an organization advocating for journalists’ rights, joined the call for Apple to disable the feature in December. It criticized the AI feature’s immaturity, citing a false headline about Luigi Mangione as an example of the dangers posed by generative AI in producing unreliable information.
Apple’s Response and Planned Updates
In its statement on Monday, Apple announced that it would release an update in the “coming weeks” to clarify when text displayed in notifications is AI-generated. It also encouraged users to report unexpected or inaccurate summaries.
“Apple Intelligence features are in beta and we are continuously making improvements with the help of user feedback,” the company said. Receiving the AI-generated summaries is optional, and the feature’s rollout in the UK began in December. Currently, it is available on iPhone 16 models, iPhone 15 Pro, and Pro Max handsets running iOS 18.1 or above, and select iPads and Macs.
Notification summaries aim to allow users to “scan for key details,” according to Apple. However, critics argue that the risks of spreading misinformation outweigh any potential benefits of the feature.
Misinformation Risks in AI Journalism
The growing reliance on AI for summarizing and disseminating news has reignited debates about the role of technology in journalism. Media organizations emphasize the need for transparency and accuracy, warning that errors erode public trust both in journalism and in the platforms that distribute news.
Apple’s AI feature joins a broader suite of AI tools the company is developing. As scrutiny mounts, the tech giant faces significant pressure to ensure its products do not contribute to the spread of misinformation in an era where public trust in news is already fragile.