Has DeepSeek fallen from grace? Multiple institutions question its security, and several countries have banned it


Since its launch, the DeepSeek application, developed by the Chinese startup of the same name, has attracted worldwide attention for its purportedly low cost and advanced reasoning capabilities. However, a growing number of researchers have begun to question DeepSeek’s security. They worry that the price of its low-cost development may be weak security, and that its vulnerabilities may be exploited by criminals.

After several countries banned DeepSeek or moved to do so, members of the U.S. House of Representatives introduced a new bill on Thursday (February 6) that would prohibit the installation and use of DeepSeek on U.S. federal government electronic devices.

DeepSeek’s lax data protection exposes sensitive user information

Amid intense media attention, the Chinese artificial intelligence app DeepSeek topped the free-app download rankings of Apple’s and Google’s U.S. app stores in late January. However, cybersecurity experts have raised concerns about DeepSeek’s practice of storing overseas users’ data in China, and several security assessments have uncovered design flaws in its security.

In its privacy policy, DeepSeek acknowledges that it stores data on servers located in the People’s Republic of China.

Tim Khang, director of global intelligence at the strategic intelligence firm Strider Technologies, said DeepSeek has a built-in mechanism for absorbing data without any protection, which poses a significant risk to American personnel who handle sensitive information.

“The security hole for U.S. users, outside of China, and even inside China is that DeepSeek has no transparent way of hosting data,” he told VOA. “So all data will be stored inside of China, and there’s no limit on how that data will be stored. There’s no limit on whether or not user data will be used, there’s no such statement.”

“I think that’s why it’s free,” he said.

Several recent cybersecurity studies of DeepSeek appear to confirm Khang’s assessment. A study released by the U.S. cybersecurity company NowSecure on Thursday (February 6) found that DeepSeek’s transmission and storage of user information is extremely lax. According to the report, some of DeepSeek’s data transmissions are not encrypted and can easily be intercepted and tampered with; even when data is encrypted in transit, DeepSeek relies on outdated encryption technology.

The researchers also said DeepSeek’s insecure storage of usernames, passwords and encryption keys increases the risk of that information being stolen; the app also aggressively collects user and device data that can be used to track users and for malicious purposes, such as uncovering the real identities of users who wish to remain anonymous.
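To make the reported weaknesses concrete, here is a minimal, generic Python sketch of the transport and credential hygiene the researchers say was missing: it enforces TLS 1.2 or newer with certificate verification and reads the API key from the environment instead of hardcoding it in the app. This is standard hardening practice, not DeepSeek’s actual code; the endpoint URL and environment-variable name are placeholders.

```python
# Illustrative sketch only: enforcing modern TLS and avoiding hardcoded
# secrets. Generic hardening practice, not DeepSeek's code; the endpoint
# and environment-variable name below are hypothetical.
import os
import ssl
import urllib.request

API_ENDPOINT = "https://api.example.com/v1/chat"  # hypothetical endpoint


def build_tls_context() -> ssl.SSLContext:
    """Require TLS 1.2+ with certificate verification, rejecting plaintext
    or legacy-cipher connections of the kind the NowSecure report flagged."""
    ctx = ssl.create_default_context()            # verifies certificates by default
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse outdated protocol versions
    return ctx


def send_request(payload: bytes) -> bytes:
    # Read the key from the environment rather than embedding it in the app,
    # where insecurely stored credentials can be extracted from the binary.
    api_key = os.environ["EXAMPLE_API_KEY"]       # hypothetical variable name
    req = urllib.request.Request(
        API_ENDPOINT,
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req, context=build_tls_context()) as resp:
        return resp.read()
```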

Earlier, security researchers at the U.S. cloud security company Wiz discovered a completely unprotected DeepSeek cloud database shortly after the app became popular. In a statement released on January 29, the company said the database exposed sensitive information such as user conversation histories and API keys, and described the lapse as a basic security oversight rather than the result of a sophisticated cyberattack.

DeepSeek’s failure rate at blocking malicious prompts can be as high as 100%

At the same time, multiple security studies have found that DeepSeek has numerous security vulnerabilities and can easily be put to malicious use, for example to teach criminals how to make biological and chemical weapons. The likelihood of its security defenses being breached is far higher than that of leading U.S. AI models.

In a study announced on January 31, Robust Intelligence, a subsidiary of the U.S. networking company Cisco, together with researchers from the University of Pennsylvania, revealed major security flaws in the DeepSeek R1 model.

AI models typically ship with a set of safety guardrails intended to prevent chatbots from outputting harmful content. Attackers who want to break through this layer of protection use a technique called “jailbreaking”: carefully crafted inputs that force the model to produce harmful answers in violation of the designer’s safety guidelines.
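As a rough illustration of where that protective layer sits, the sketch below wraps a hypothetical model call with a crude keyword check on the prompt and the response. Production systems use trained safety classifiers rather than keyword lists; the point is only that jailbreak prompts work by rephrasing a harmful request so that whatever check is in place fails to recognize the intent.

```python
# Minimal illustration of a guardrail layer around a model call. Real systems
# use trained safety classifiers and policy models, not keyword lists; the
# model interface here is a hypothetical stand-in.
BLOCKED_TOPICS = ("synthesize the nerve agent", "build an explosive")  # toy examples


def guarded_generate(prompt: str, model) -> str:
    """Wrap a model call with a crude input and output check. Jailbreak
    prompts succeed when a harmful request is rephrased so that checks like
    this one (and far more sophisticated ones) miss the underlying intent."""
    refusal = "Sorry, I can't help with that."
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return refusal                       # block obviously harmful requests
    answer = model.generate(prompt)          # hypothetical model interface
    if any(topic in answer.lower() for topic in BLOCKED_TOPICS):
        return refusal                       # block harmful content in the output
    return answer
```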

To test the security of AI systems, the AI safety research community has developed a standardized testing framework called HarmBench. The Cisco team probed DeepSeek for security flaws using the standards established by this framework.

The researchers found that DeepSeek’s R1 model failed 100% of these jailbreak tests. By comparison, OpenAI’s o1-preview model produced objectionable content in 26% of the same jailbreak attack tests.
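For context on how such percentages are produced, the sketch below shows the general shape of a HarmBench-style measurement: each harmful test prompt is sent to the model, and the attack success rate is the fraction of prompts that draw a substantive answer rather than a refusal. The model interface and the `looks_like_refusal` helper are hypothetical stand-ins; the actual benchmark uses a trained classifier to judge the outputs.

```python
# Sketch of how an evaluation arrives at figures like the 100% and 26% above:
# run each harmful test prompt against the model and count the fraction the
# model answers instead of refusing.
def attack_success_rate(model, harmful_prompts) -> float:
    successes = 0
    for prompt in harmful_prompts:
        output = model.generate(prompt)      # hypothetical model interface
        if not looks_like_refusal(output):   # stand-in for the benchmark's judge
            successes += 1                   # model produced objectionable content
    return successes / len(harmful_prompts)


def looks_like_refusal(text: str) -> bool:
    """Crude placeholder: real evaluations use a classifier, not string matching."""
    return text.lower().startswith(("i can't", "i cannot", "sorry"))
```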

For example, on the “biological and chemical weapons” indicator, researchers were able to use jailbreak prompts to get AI tools to explain how to make methylmercury, which could be used in a chemical weapon, from ordinary household materials without special equipment, or to bypass the AI system’s built-in safety review and obtain DNA sequence information that could be used in biological weapons research.

Cisco researchers believe that DeepSeek’s low-cost development approach may have come at the expense of security. They wrote in the report: “DeepSeek’s claimed cost-effective training methods, including reinforcement learning, chain-of-thought self-evaluation, and distillation, may have compromised its security mechanisms. Compared with other cutting-edge models, DeepSeek R1 lacks strong guardrails, making it highly vulnerable to algorithmic jailbreaking and potential abuse.”

Similarly, the cybersecurity company Palo Alto Networks released a report on January 30 saying that DeepSeek’s protections can easily be broken by hackers, yielding instructions for writing code that can be used to steal data, send phishing emails and carry out other fraud. The cybersecurity firm Enkrypt AI also recently published research finding that DeepSeek’s R1 model is four times more likely than OpenAI’s o1 to be used by malicious actors to write malware and other unsafe code.

US considers banning DeepSeek from government devices due to links to Chinese state-owned enterprises

DeepSeek transfers users’ data to China, where it is subject to Chinese law, and multiple analyses have found links between DeepSeek and Chinese state-owned telecommunications companies and Chinese tech giants, prompting U.S. lawmakers to introduce legislation to ban it.

According to the Associated Press, DeepSeek’s user login system is linked to China Mobile, a Chinese state-owned enterprise, and can send users’ login information to the company. The Canadian cybersecurity firm Feroot Security first discovered the connection between DeepSeek and China Mobile’s computing infrastructure.

“All of the telecommunications systems that China uses are very tightly controlled … the Chinese government monitors this information very carefully,” said Tim Khang, director of global intelligence at Strider Technologies. “So I would view any government relationship, especially with a state-owned enterprise like China Mobile, as a risk.”

On February 6, members of the U.S. House of Representatives from both the Republican and Democratic parties introduced a bill that would prohibit the installation and use of DeepSeek applications on U.S. federal government electronic devices. Democratic Congressman Josh Gottheimer and Republican Congressman Darin LaHood said in a statement accompanying the proposal that the Chinese government can use DeepSeek for surveillance and to spread disinformation. The bill would also bar all AI applications backed by DeepSeek’s investor, the hedge fund High-Flyer Quant.

“The Chinese Communist Party has made it clear that it will use any tool at its disposal to undermine our national security, spread harmful disinformation, and harvest Americans’ data,” Representative Gottheimer said in the statement. “Now, we have deeply troubling evidence that they are using DeepSeek to steal sensitive data on American citizens.”

“DeepSeek’s generative AI program harvests data from American users and stores that information for use by the Chinese Communist Party,” said Congressman LaHood. “Under no circumstances should we allow Chinese Communist Party companies to access sensitive government or personal data.”

Randall Schriver, former assistant secretary of defense for Indo-Pacific security affairs and president of the Project 2049 Institute, told VOA that DeepSeek “should not be on U.S. government devices and platforms” and called for “a rapid review by experts who can identify its flaws.”

“A lot of these applications don’t always directly involve national security, but the collection of data, personal information, all of that is tied to national security interests,” he said.

Beyond user privacy and data security, DeepSeek also has a built-in real-time content moderation mechanism that reinforces China’s official narrative, which the international community worries makes it a potential tool for the CCP to control speech and manipulate public opinion.

Since DeepSeek’s debut, government and military agencies in many countries and regions have banned the program on official devices, including the U.S. Department of Defense, the U.S. Navy, NASA, the state of Texas, Taiwan’s Executive Yuan, Australia, South Korea and India.

Italy’s response has been the most severe: it has blocked DeepSeek entirely within the country and opened an investigation into the company behind the AI tool.
