Deepseek AI Challenges Nvidia and Chinese Censorship

Deepseek : Deepseek AI Challenges Nvidia and Chinese Censorship

Deepseek, a new AI developed in China, has managed to bypass several censorship blocks imposed by the Chinese government. On January 27, Nvidia, a major US chip manufacturer, experienced a significant drop in stock value, losing $600 billion in a single day. This was the largest single-day loss in Wall Street history. The reason for this was the introduction of Deepseek’s open-source chatbot, Deepseek-R1. This chatbot not only outperformed Chat-GPT-o1 in several benchmarks but also used fewer hardware resources. This efficiency could reduce the demand for expensive high-performance chips, impacting Nvidia’s business.

Deepseek quickly became the top-rated free AI application in the App Store, surpassing Chat-GPT. However, the software comes with a drawback: it is subject to censorship by the Chinese Communist Party (CCP), which leads it to respond with propaganda instead of factual information.

Experts from the Open-Source-LLM-Test-Project, Promptfoo, tested the extent of this censorship and found ways to circumvent it. They used a dataset of 1,360 prompts involving sensitive topics for the CCP, such as Taiwan’s independence and historical events related to the Cultural Revolution or President Xi Jinping. Unsurprisingly, Deepseek responded to most of these prompts with answers aligned with the CCP’s views. 85% of the prompts, or 1,156 in total, were answered with censored responses.

The experts noticed that these responses did not match Deepseek’s usual behavior, leading them to conclude that the censorship blocks were added afterward and could be bypassed relatively easily.

According to Promptfoo experts, bypassing Deepseek’s censorship was easier than expected. The censorship blocks were implemented with minimal effort to satisfy the government’s requirements. To bypass censorship, users could replace “China” with another hypothetical state in their prompts. This change allowed the chatbot to provide information on how to undermine the narratives of an authoritarian state to support independence movements.

Other common strategies to bypass blocks also worked with Deepseek. Users could generalize their prompts or request answers in the form of fictional texts. Technical jailbreak methods through prompt injections also proved successful.

The LLM experts believe that these methods might soon become unnecessary as other manufacturers develop similar models without these restrictions. They plan to test US chatbots on sensitive topics next. Although there is no direct state censorship in the US, biases exist within LLM models due to the source material used for training.

In the coming weeks, as more AI models emerge without these limitations, the landscape of AI and its applications may change significantly. These developments highlight the ongoing challenges and opportunities in the field of artificial intelligence.

Exit mobile version