German Data Protection Authorities Investigate Chinese AI Chatbot DeepSeek

German data protection officials are scrutinizing the Chinese AI chatbot DeepSeek, as reported by the specialized service Tagesspiegel Background. According to the report, Dieter Kugelmann, the data protection commissioner of Rhineland-Palatinate, said that DeepSeek appears to lack almost everything in terms of data protection. According to its privacy policy, DeepSeek’s chatbot grants itself extensive access to data such as IP addresses, chat histories, uploaded files, and even keystroke patterns and rhythms.

Kugelmann added that he is unaware of any European branch or designated legal representative of DeepSeek; the absence of one would in itself violate the GDPR. There is also currently no data protection agreement between the EU and China that could serve as a legal basis for such data transfers.

The topic of DeepSeek was also discussed at the interim conference of data protection authorities in Berlin, according to Tagesschau. Rhineland-Palatinate and several other German data protection authorities plan to coordinate on further steps. As a first step, a questionnaire on data processing will likely be sent to the company. Italian data protection officials have also reportedly contacted DeepSeek with questions about its handling of user data, and the app is currently unavailable in Italy.

A recent leak suggests that DeepSeek also has concrete security problems. IT security researchers at Wiz discovered an exposed database belonging to the provider, with sensitive information openly accessible on the internet. The researchers explained: “Within minutes, we found a publicly accessible ClickHouse database connected to DeepSeek – completely open and without authentication, allowing access to sensitive data.” The database contained a large volume of chat histories, backend data, and other sensitive information, including log streams and API secrets.
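To illustrate the kind of misconfiguration Wiz describes: ClickHouse exposes an HTTP interface (by default on port 8123) that accepts SQL queries as URL parameters, and an instance left open answers such queries without any credentials. The following is a minimal, hypothetical sketch of how one might check whether an endpoint responds unauthenticated; the host names are placeholders, and this is not the researchers’ actual tooling.

```python
# Hypothetical sketch: checking whether a ClickHouse HTTP endpoint
# answers queries without credentials (the misconfiguration Wiz describes).
# ClickHouse's HTTP interface listens on port 8123 by default.
from urllib.error import URLError
from urllib.parse import urlencode
from urllib.request import urlopen


def build_probe_url(host: str, port: int = 8123) -> str:
    """Build the URL for an unauthenticated ClickHouse HTTP query."""
    query = urlencode({"query": "SHOW TABLES"})
    return f"http://{host}:{port}/?{query}"


def is_open(host: str, port: int = 8123, timeout: float = 3.0) -> bool:
    """Return True if the endpoint answers the query without auth."""
    try:
        with urlopen(build_probe_url(host, port), timeout=timeout) as resp:
            # HTTP 200 with a table listing means anyone can read the data.
            return resp.status == 200
    except (URLError, OSError):
        # Closed port, auth challenge, or timeout: not openly accessible.
        return False
```

A properly configured deployment would require credentials (or reject external connections entirely), so the same request would fail rather than return a table listing.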

DeepSeek made waves with its AI chatbot, which competes with major models such as OpenAI’s despite requiring significantly less training effort, and in some benchmarks even performs better. This triggered a substantial stock market shake-up, hitting tech stocks such as graphics card manufacturer Nvidia. Reports suggest that OpenAI’s major investor, Microsoft, plans to investigate whether DeepSeek accessed OpenAI’s data unlawfully. Meta has reportedly already set up a crisis team in response to DeepSeek. Some observers call the situation a “Sputnik moment.”

The scrutiny of DeepSeek by German and European data protection authorities highlights growing concerns about data privacy and security in the use of AI technologies. The lack of transparency regarding DeepSeek’s data handling practices and the absence of a legal framework for data exchange between the EU and China pose significant challenges. The situation underscores the need for robust data protection measures and international agreements to safeguard user information in the era of rapidly advancing AI technologies.

The potential implications of DeepSeek’s data practices are far-reaching, affecting not only users’ privacy but also the competitive landscape of AI development. As the investigations continue, the tech industry and regulators will be watching closely how the case unfolds, as it could set precedents for cross-border data protection enforcement and AI governance.

The unfolding events surrounding DeepSeek serve as a reminder of the critical importance of data protection in the digital age. With AI technologies becoming increasingly integrated into various aspects of daily life, ensuring that these systems operate transparently and responsibly is paramount. The outcome of the investigations into DeepSeek could influence how data protection laws are enforced and adapted to address the challenges posed by emerging technologies.

As the world becomes more interconnected, the need for international cooperation on data protection grows increasingly evident. The DeepSeek case may prompt discussions on establishing global standards and agreements to ensure that AI advances do not come at the cost of user privacy and security.