SecurityBrief India - Technology news for CISOs & cybersecurity decision-makers
Story image

Tenable warns DeepSeek AI model can be breached for malware

Today

Tenable Research has highlighted potential security risks linked to DeepSeek R1, a generative AI model which can be manipulated to produce malware.

Like many generative AI (GenAI) models, DeepSeek R1 is designed with safeguards to prevent misuse. However, Tenable Research has discovered that these defences can be bypassed, leading to concerns about AI's role in facilitating cybercrime. The researchers warned that while DeepSeek's malware generation output requires further refinement for full effectiveness, it reduces the entry barriers for those with minimal coding skills to develop malware.

The experiment conducted by Tenable's security team sought to determine if DeepSeek R1 could create two forms of malicious software: a keylogger and a ransomware sample. Initially, DeepSeek R1 refused to generate these, aligned with its programmed restrictions. However, Tenable's researchers employed simple jailbreaking methods, which enabled them to circumvent these limitations.

Nick Miles, Staff Research Engineer at Tenable, explained, "Initially, DeepSeek rejected our request to generate a keylogger. But by reframing the request as an 'educational exercise' and applying common jailbreaking methods, we quickly overcame its restrictions."

Upon bypassing the AI's safeguards, DeepSeek R1 produced a keylogger that could encrypt logs and store them discreetly on a device, as well as a ransomware executable capable of encrypting files. The findings suggest that even non-experts might exploit AI technologies like DeepSeek to develop malicious software.

The potential for GenAI to elevate cybercriminal activities is a significant concern. While DeepSeek's code requires manual refinement to fully function, the AI model can provide foundational code and suggest relevant techniques, accelerating the learning curve for aspiring cyber criminals.

Miles emphasised the importance of responsible AI development: "Tenable's research highlights the urgent need for responsible AI development and stronger guardrails to prevent misuse. As AI capabilities evolve, organisations, policymakers, and security experts must work together to ensure that these powerful tools do not become enablers of cybercrime."

The findings from Tenable Research underscore the ongoing challenge of securing generative AI models against misuse. As AI technology advances, so do the methods used to bypass its safeguards, highlighting the need for continuous improvements in security measures. Ensuring AI remains a tool for innovation rather than exploitation will require ongoing collaboration between developers, researchers, and policymakers.

Follow us on:
Follow us on LinkedIn Follow us on X
Share on:
Share on LinkedIn Share on X