Apple’s AI technology has started summarizing notifications, but the feature is currently available only in English. Users have reported significant errors that can completely distort the original content. After receiving complaints from the BBC about several incorrectly summarized headlines, Apple has committed to making the AI summaries more clearly recognizable as such. However, there is a debate about whether such an error-prone feature should be offered at all.
On the pro side, Leo Becker welcomes AI summaries despite their occasional inaccuracies. Active chat groups with family, friends, colleagues, neighbors, clubs, and schools often flood the iPhone with notifications. Muting them is an option, but it carries the risk of missing something important. This is where Apple’s AI comes in, condensing numerous messages into a single notification. Ideally, it delivers concise information, such as Bernd urgently needing a garden hose or a lice alert in class 1d. It does not always work perfectly, however, especially with colloquial language or references that only insiders understand. That is a challenge for AI models and humans alike.
For Becker, the practical benefit outweighs the occasional errors in AI summaries. For important chats or personal messages, it is advisable to read the entire conversation. Considering the hundreds of millions of daily iPhone notifications, the error rate seems manageable. Moreover, summaries are tailored to each iPhone as Apple’s language model operates locally on the device.
While “fake news” is undoubtedly a serious issue, it is not caused by sloppy AI summaries but by malicious actors who deliberately spread misinformation. Ultimately, it is essential to question information, whether it comes from the original source or an AI summary. Users have the choice to enable or disable the feature and can exclude news specifically.
On the con side, Wolfgang Kreutz argues that summarizing short texts with AI is generally a bad idea. While receiving a single summary instead of a barrage of messages might seem appealing, the cases where it fails are concerning. Missing an appointment might not be catastrophic, but an emergency that goes unreported or is downplayed could be. Kreutz also doubts that everyone would find it amusing if a casual “That almost killed me” is turned into a “tragic accident.”
He is particularly critical of summarizing already condensed news notifications. Apple Intelligence has repeatedly demonstrated that this can lead to errors: BBC and New York Times articles were distorted into false claims and displayed with their logos. If readers believe a false report, it damages the reputation of credible media and undermines trust. Every avoidable mistake is one too many.
Currently, even the best language models lack true text comprehension, and short news tickers provide little context to work with. This makes misinformation inevitable. And since AI models are optimized for and trained primarily on English, the risk increases in other languages.
While users can choose not to enable the feature themselves, they must still deal with the consequences if people around them take a distorted AI summary at face value. And as such problems become less frequent, people may come to rely on AI all the more.
Kreutz believes Apple should not offer such automation, even as an option. The well-intentioned but error-prone summaries could have the same impact as genuine fake news by spreading misinformation.
The debate continues: should Apple provide AI summaries, or do the risks outweigh the benefits?