DeepSeek: Chinese AI Breakthrough Challenges US Giants and Faces Censorship Hurdles

A meme is circulating on social media: “I can’t believe ChatGPT is losing its job to AI.” It captures the sentiment around a recent coup from China: a chatbot that seems to have appeared out of nowhere and can do everything. Since the Chinese language model DeepSeek launched its latest version, even OpenAI CEO Sam Altman appears to be struggling to keep up. The Chinese competitor is said to rival OpenAI’s flagship models at a fraction of the development cost: the startup claims training cost just 5.6 million US dollars, while US competitors such as OpenAI, Google, and Meta spend ten to twenty times as much.

In the latest episode of the c’t podcast “Women and Technology,” Svea Eckert and Eva Wolfangel discuss the developments around DeepSeek. The timing is interesting for two reasons. First, the Chinese startup challenges the narrative of the major US AI providers that better results require more money, more training data, and more resources. The revelation also comes shortly after the inauguration of Donald Trump, who had just announced billions in AI investments, which makes it look like a strategic move, says Svea Eckert: “You have to read the whole situation politically: it’s certainly no coincidence.” Eckert nevertheless views the development as good for the industry: “Finally, there’s movement in the market.”

Second, the discussion within the machine learning community has reached a point where many experts speak of a plateau. They argue that the transformer architecture, on which today’s successful large language models are based, has hit its limits: it no longer scales well, so additional training data and parameters drive up the effort without yielding correspondingly better results.

A previously little-known Chinese startup now shows that a few architectural changes can achieve comparable results with significantly fewer resources and less effort. However, Chinese providers face another major hurdle, reports Wolfangel: state censorship requirements. “Chinese AI providers must pass very strict tests and benchmarks,” says the tech journalist. Given the nature of generative AI, which is not robust and whose output is not explainable, implementing such censorship reliably is nearly impossible. Chinese chatbots are, of course, not allowed to criticize the regime or discuss events like the 1989 Tiananmen Square massacre.

DeepSeek, however, repeatedly does so inadvertently, as Eckert and Wolfangel found when they probed the model with various creative approaches. In their experiments, they observed how the language model “discusses” the contradictory requirements with itself: on one hand, a user wants to know more about China’s repression of dissidents; on the other, this is “sensitive content” that the model is not allowed to express.

In their tests, Eckert and Wolfangel show how both the model’s visible reasoning and its initial responses, which it apparently formulates faster than the internal censorship can intervene, vanish again within seconds as if by magic. Another trick also works well: talking to the model about forbidden topics in a roundabout way. DeepSeek readily finds metaphors for banned terms such as “Tank Man,” the unidentified man who stood in front of a column of tanks near Tiananmen Square in 1989. “Basically, it’s similar to how people in a dictatorship deal with censorship,” says Wolfangel: they invent other words and talk about things in a way everyone understands, without saying the forbidden words outright.

Whether Chinese censors will be satisfied in the long run with a model that first spits out censored content and then hastily deletes it is another question.

A new episode of “Women and Technology” is released every other Wednesday. Svea Eckert and Eva Wolfangel discuss a tech topic and meet inspiring women from the tech world and beyond.
