DeepSeek AI Blames Cyberattack for Disruptions as Security Flaws Are Exposed
Chinese AI company DeepSeek has blamed recent sign-up disruptions on a cyberattack, just as security researchers are uncovering serious vulnerabilities in its R1 AI model. The company, which has positioned itself as a cost-effective competitor to OpenAI’s ChatGPT and Google’s Gemini, claimed that large-scale malicious attacks on its servers forced it to temporarily limit new registrations. Existing users were not affected by the disruptions, but DeepSeek has yet to provide further details on the nature of the attack.
While DeepSeek did not explicitly confirm the type of cyberattack it faced, cybersecurity experts suspect a distributed denial-of-service (DDoS) attack, in which hackers overwhelm a system with traffic to render it unavailable. Adding to the growing security concerns, DeepSeek also issued a warning about fraudulent social media accounts impersonating the company, suggesting an increase in deceptive activity surrounding its brand.
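DeepSeek has not said how it limited new registrations during the incident, but throttling of this kind is commonly implemented with a token-bucket rate limiter, which absorbs a short burst of requests and then rejects traffic above a steady rate. The sketch below is purely illustrative (the class and parameter names are assumptions, not DeepSeek's implementation):

```python
import time

class TokenBucket:
    """Minimal token-bucket limiter: refills `rate` tokens per second,
    holding at most `capacity` tokens for absorbing bursts."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over the limit: reject (or queue) the request

# A burst of 10 sign-up attempts against a bucket allowing a burst of 5,
# then roughly one request per second afterward.
bucket = TokenBucket(rate=1.0, capacity=5)
results = [bucket.allow() for _ in range(10)]
print(results.count(True))  # the first 5 pass; the rest are rejected
```

In practice such limits are enforced per client IP or account at the load balancer or API gateway, rather than in application code, and a volumetric DDoS also requires upstream filtering that no application-level limiter can provide on its own.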
Researchers Uncover Major Security Vulnerabilities
Beyond the cyberattack, security researchers have begun to examine DeepSeek R1’s security posture, and the results are troubling. Threat intelligence firm Kela reported that its red team was able to jailbreak the AI model, bypassing safety measures designed to prevent it from generating harmful content. By exploiting these weaknesses, Kela’s researchers successfully prompted the chatbot to create ransomware, fabricate misleading information, and generate step-by-step instructions for making toxins and explosives.
DeepSeek R1 was found to be vulnerable to several well-known jailbreak techniques that have already been patched in other AI models like ChatGPT. Among them are the Evil Jailbreak, which tricks an AI into taking on the persona of a malevolent confidant, and the Leo jailbreak, which instructs the model to act without ethical or legal restrictions. DeepSeek R1 failed both tests, making it significantly easier to manipulate compared to its Western counterparts.
AI Model Produces Unreliable and Misleading Information
Perhaps even more concerning, Kela’s red team attempted a social engineering test by asking the chatbot to compile a table containing private details about ten senior OpenAI employees, including email addresses, phone numbers, and salary information. While OpenAI’s ChatGPT refused to comply with the request, DeepSeek’s chatbot generated what appeared to be fabricated but convincingly structured data. The results raise serious questions about the model’s reliability, as it was willing to generate unverified and misleading content instead of outright rejecting the request.
Kela researchers warned that DeepSeek’s tendency to produce inaccurate information undermines its credibility. Users relying on the AI for factual data may unknowingly receive misinformation, making the platform less trustworthy than its competitors.
Privacy and Data Protection Risks Raise Red Flags
In addition to security vulnerabilities, DeepSeek’s rise has ignited privacy and data protection concerns, particularly amid increasing scrutiny of Chinese technology companies. With the United States considering a ban on TikTok over national security risks, DeepSeek’s AI platform is drawing similar questions about who owns user data and which country’s privacy laws govern it.
Jennifer Mahoney, an advisory practice manager at Optiv specializing in data governance and privacy, emphasized the importance of questioning how generative AI platforms obtain and process data. She warned that users should be mindful of who controls the AI models, how the training data was sourced, and whether ethical guidelines were followed. She also pointed out that different countries have varying privacy laws, making it critical for users to understand how their data might be accessed and used when interacting with foreign AI services.
DeepSeek Faces Increasing Scrutiny
DeepSeek’s growing popularity has placed it under intense scrutiny from both cybersecurity researchers and regulatory authorities. While its AI models may offer impressive performance and cost efficiency, the glaring security flaws, susceptibility to manipulation, and potential privacy risks highlight the urgent need for stronger safeguards. As AI continues to shape the future of technology, ensuring robust security measures and ethical data practices will be crucial for maintaining trust in these systems.