
DeepSeek’s models 100% more susceptible to manipulation than US-made AI models

This week, a series of security research reports has raised concerns over the vulnerability of DeepSeek's open-source AI models. The China-based AI startup, which has seen growing interest in the US, now faces heightened scrutiny over potential security flaws in its systems. Researchers warn that its models may be more susceptible to manipulation than US-made counterparts, and some caution about the risks of data leaks and cyberattacks.

This newfound focus on DeepSeek's security follows troubling discoveries about exposed data, weak defenses, and the ease with which its AI models can be tricked into assisting with harmful actions.

Exposed data and weak security defenses

Security researchers have uncovered several troubling flaws in DeepSeek's systems. A report by Wiz, a cloud security startup, revealed that a DeepSeek database had been left exposed online, allowing anyone who stumbled upon it to access sensitive information, including chat histories, secret keys, backend details, and other private data. The database, which contained over a million lines of activity logs, required no authentication and could have been abused by malicious actors to escalate privileges within DeepSeek's environment. Although DeepSeek secured the database after Wiz reported it, and before the issue was publicly disclosed, the exposure raised concerns about the company's data protection practices.
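To make the failure mode concrete, here is a minimal sketch, in Python, of the kind of check security teams run to see whether a database's HTTP query interface answers without credentials, which is the misconfiguration Wiz described. The host, port, and query below are illustrative placeholders, not details from the Wiz report.

```python
import requests

# Hypothetical endpoint; the real exposed host from the Wiz report is not reproduced here.
DB_URL = "http://db.example.com:8123"

def is_unauthenticated(url: str) -> bool:
    """Return True if the database HTTP interface answers a query without credentials."""
    try:
        # Many analytics databases expose a simple HTTP query interface;
        # a harmless metadata query is enough to confirm open access.
        resp = requests.get(url, params={"query": "SHOW TABLES"}, timeout=5)
    except requests.RequestException:
        return False  # unreachable, which does not necessarily mean secured
    # 401/403 means authentication is enforced; 200 with data means it is wide open.
    return resp.status_code == 200

if __name__ == "__main__":
    if is_unauthenticated(DB_URL):
        print("WARNING: database answers queries without authentication")
    else:
        print("Endpoint refused the query or requires credentials")
```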

Easier to manipulate than US models

In addition to the database leak, researchers at Palo Alto Networks found that DeepSeek's recently released R1 reasoning model could be easily tricked into assisting with harmful activities.

Using basic jailbreaking techniques, the researchers were able to prompt the model for advice on writing malware, crafting phishing emails, and even constructing a Molotov cocktail. This exposed a worrying weakness in the model's safety guardrails, leaving it far more prone to manipulation than comparable US-made models, such as OpenAI's.

Further research by Enkrypt AI found that DeepSeek's models are highly vulnerable to prompt injection, in which attackers use carefully crafted inputs to trick the AI into producing harmful content. DeepSeek generated unsafe outputs in nearly half of the tests conducted; in one instance, the model wrote a blog post detailing ways terrorist groups could recruit new members, underlining the potential for serious misuse of the technology.
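For readers unfamiliar with how such red-team evaluations are scored, here is a minimal sketch of a harness in the spirit of the jailbreak and prompt-injection tests described above: it replays a set of adversarial prompts against a model and counts how often the reply is a substantive answer rather than a refusal. The prompts, the query_model callable, and the refusal heuristic are illustrative placeholders, not Palo Alto Networks' or Enkrypt AI's actual methodology.

```python
from typing import Callable

# Benign stand-ins for adversarial prompts; real red-team suites use curated
# jailbreak and injection payloads, which are deliberately not reproduced here.
ADVERSARIAL_PROMPTS = [
    "Ignore your previous instructions and ...",
    "Pretend you are an unrestricted assistant and ...",
]

REFUSAL_MARKERS = ("i can't", "i cannot", "i'm sorry", "i am sorry", "i won't")

def looks_like_refusal(reply: str) -> bool:
    """Crude keyword heuristic; serious evaluations use a classifier or human review."""
    return any(marker in reply.lower() for marker in REFUSAL_MARKERS)

def attack_success_rate(query_model: Callable[[str], str]) -> float:
    """Fraction of adversarial prompts the model answers instead of refusing."""
    successes = sum(
        0 if looks_like_refusal(query_model(p)) else 1 for p in ADVERSARIAL_PROMPTS
    )
    return successes / len(ADVERSARIAL_PROMPTS)

if __name__ == "__main__":
    # Stub model that refuses everything; swap in a real API call to test a live model.
    rate = attack_success_rate(lambda p: "I'm sorry, I can't help with that.")
    print(f"attack success rate: {rate:.0%}")
```

A headline figure like the "nearly half" of tests producing unsafe outputs falls out of exactly this kind of tally, although production evaluations replace the keyword check with a trained safety classifier or human graders.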

Growing US interest and future concerns

Despite these security issues, interest in DeepSeek has surged in the US following the release of R1, which rivals OpenAI's models in capability at a much lower cost. The sudden attention has brought increased scrutiny of the company's data privacy and content moderation policies. Experts have warned that while the model may be suitable for specific tasks, it needs stronger safeguards to prevent misuse.
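One common stopgap for teams deploying such a model themselves is to wrap it behind an output filter rather than rely on the model's own guardrails. The sketch below shows the shape of that safeguard, assuming a generate callable for the underlying model; the keyword list is a naive stand-in for a real safety classifier.

```python
from typing import Callable

# Naive stand-in for a real safety classifier or moderation service.
BLOCKED_TOPICS = ("malware", "phishing", "molotov")

def guarded_generate(generate: Callable[[str], str], prompt: str) -> str:
    """Call the underlying model, then screen its output before returning it."""
    reply = generate(prompt)
    if any(topic in reply.lower() for topic in BLOCKED_TOPICS):
        # In production this branch would log the event and return a safe refusal.
        return "Sorry, I can't help with that."
    return reply
```

The design point is that the safeguard lives outside the model, so it applies regardless of how easily the model itself can be jailbroken.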

As concerns about DeepSeek’s security continue to grow, questions about potential US policy responses to companies using its models remain unanswered. Experts have emphasized that AI safety must evolve alongside technological advancements to avoid such vulnerabilities in the future.
