German Concerns Rise Over AI’s Impact on Media Credibility and Democracy

Three-quarters of German citizens (76 percent) are concerned about the credibility of media content when artificial intelligence (AI) is involved, believing that the technology erodes trust in news and media. This is a key finding of the “Transparency Check” study on the “Perception of AI Journalism,” published by the state media authorities on Wednesday and based on a representative online survey of 3,013 internet users. According to the survey, 56 percent of participants even regard AI as a threat to democracy in Germany. At the same time, over 90 percent consider clear rules for the use of the technology in the media, along with labeling requirements, to be essential.

Across all age groups, concerns about AI in journalism prevail, according to the analysis. Criticism centers on deceptions such as artificially generated yet authentic-looking deepfakes and on a lack of transparency. German citizens are particularly skeptical of purely AI-generated content, such as articles written entirely by the technology or synthetic presenter voices. Younger, formally better-educated users who consider their own media literacy to be high see more opportunities in AI; in their view, such automated tools could help with research or fact-checking.

AI is already widespread in the media. A test conducted by the study’s authors found that respondents sometimes have difficulty recognizing AI-supported content. Texts fare best: 67 percent of respondents recognize the use of the technology when a transparent notice is provided. “Practically no one, however, notices the spoken disclosure in the audio piece,” the study continues. There is also room for improvement in the labeling of video contributions, where younger people are the most likely to pick up on the notices. The most important recognition signals in videos, ranking above the labeling itself, are an artificial, automated-sounding voice and tone as well as the use of avatars.

In general, the majority (57 percent) rate their own knowledge of AI as poor, and 41 percent have already tried tools such as ChatGPT or DeepL. “The use of AI in journalism requires more than just technical competence,” summarizes Christian Krebs, head of the Transparency Check study. “It is about an ethical attitude that places responsibility and transparency at the center.” Media houses and platforms, he says, must “actively advocate for clear labeling and understandable processes.”

According to a study by the Media Association of the Free Press (MVFP), 78 percent of local publishers see AI as a megatrend. The technology is already used in every third magazine editorial office for research, topic selection, and the creation and editing of texts. The German Journalists Association (DJV) warns: “Artificial intelligence must not be integrated into editorial processes at decisive points.” The work of journalists, it adds, involves far more than fine-tuning.
