Cybercriminals wary of AI usage in operations – Sophos
Thu, 30th Nov 2023

In two recent research reports, cybersecurity firm Sophos has revealed that whilst cybercriminals recognise the potential of advanced artificial intelligence in their operations, they remain sceptical and wary of it.

Researchers from the cybersecurity firm's X-Ops team studied discussions on four prominent dark web forums concerning the potential use of artificial intelligence in cybercrime.

Sophos's first piece of research, "The Dark Side of AI: Large-Scale Scam Campaigns Made Possible by Generative AI", highlights how cybercriminals could use AI to carry out massive-scale fraud with minimal technical skill. The research team demonstrated this by creating a fully operational website with AI-generated images, audio, and product descriptions. The site also featured a fake Facebook login page and a counterfeit checkout page designed to pilfer users' login credentials and credit card details. Remarkably, the team needed little technical knowledge to build the website, and hundreds of similar sites were generated in minutes using the same tool.
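To make the scale mechanism concrete, the sketch below shows how a templated storefront could, in principle, be stamped out in bulk with AI-generated copy. This is a hypothetical illustration, not the researchers' actual tooling (Sophos has not published its code); the generate_text stub stands in for any LLM API, and all names in it are assumptions.

```python
# Hypothetical sketch of mass-producing templated storefronts with
# LLM-generated copy. generate_text() is a placeholder for any LLM API;
# this does not reflect Sophos's actual (unreleased) research tool.
from string import Template
from pathlib import Path

PAGE = Template("""<html><head><title>$shop</title></head>
<body><h1>$shop</h1><p>$pitch</p>
<!-- a real scam kit would also inject fake login/checkout pages here -->
</body></html>""")

def generate_text(prompt: str) -> str:
    """Placeholder for an LLM call; returns canned text so the sketch runs."""
    return f"[AI-generated copy for: {prompt}]"

def build_sites(shop_names: list[str], out_dir: str = "sites") -> None:
    # One short prompt per site is enough to vary the copy at scale,
    # which is what makes generative AI attractive for bulk fraud.
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    for name in shop_names:
        pitch = generate_text(f"Write a product pitch for a store called {name}")
        (out / f"{name}.html").write_text(PAGE.substitute(shop=name, pitch=pitch))

build_sites(["GadgetHaven", "LuxeWatchOutlet", "MegaDealsDirect"])
```

The point of the sketch is the loop: once the template exists, each additional site costs only one generation call, which is why hundreds of sites can appear in minutes.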

"It's natural—and expected—for criminals to turn to new technology for automation. The original creation of spam emails was a critical step in scamming technology because it changed the scale of the playing field. New AIs are poised to do the same," explained Ben Gelman, a senior data scientist at Sophos. He added, "However, part of the reason we conducted this research was to get ahead of the criminals.

"By creating a system for large-scale fraudulent website generation that is more advanced than the tools criminals are currently using, we have a unique opportunity to analyze and prepare for the threat before it proliferates," he said.

The second report, "Cybercriminals Can't Agree on GPTs", indicates that malicious actors on the dark web are discussing AI's potential for social engineering. Researchers discovered posts offering compromised ChatGPT accounts for sale, as well as ways to bypass the protections integrated into large language models (LLMs) such as GPT-4 for malicious purposes. The team also found ten ChatGPT derivatives whose creators claimed they could be used to launch cyber attacks and develop malware.

"While there's been significant concern about the abuse of AI and LLMs by cybercriminals since the release of ChatGPT, our research has found that, so far, threat actors are more sceptical than enthused," noted Christopher Budd, director of X-Ops research at Sophos. He continued by comparing that to discussions about cryptocurrency, "Across two of the four forums on the dark web we examined, we only found 100 posts on AI. Compare that to cryptocurrency where we found 1,000 posts for the same period," he said

In summary, whilst the misuse of AI by cybercriminals is a prospective threat, its actual integration into cybercrime appears limited, with many criminals remaining sceptical of its potential and wary of its implications.