AI now routine in cyber attacks, Google report finds
Tue, 12 May 2026
Google Threat Intelligence Group has published research finding that artificial intelligence is being used more widely in cyber attacks by both criminal groups and state-backed actors.
The report describes AI use across several stages of intrusion activity, including vulnerability discovery, malware development, reconnaissance, phishing and increasingly autonomous attack workflows. It links that activity to actors associated with China, North Korea and Russia, and says generative AI is moving from experimentation to routine operational use.
Among the report's central claims is what researchers describe as possibly the first observed zero-day exploit developed with AI assistance. Attackers are said to have used AI-supported workflows to identify and weaponise a two-factor authentication bypass in an open-source web administration platform.
Researchers say advanced large language models are becoming better at finding semantic logic flaws, mistakes in an application's behaviour rather than its memory safety. Such flaws tend to evade traditional automated testing tools and may widen the range of weaknesses available to attackers.
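The report does not publish the flaw itself, but the class of bug is straightforward to illustrate. In the hypothetical Python sketch below (all names invented, with no relation to the affected platform), each function is individually correct and any test that submits the expected form fields passes; the weakness lives in the control flow, which is precisely what pattern-matching scanners struggle to reason about.

```python
import hmac

def check_password(user: dict, password: str) -> bool:
    # Stand-in for a real password-hash comparison.
    return hmac.compare_digest(user["password"], password)

def check_otp(user: dict, otp: str) -> bool:
    # Stand-in for a real TOTP validation.
    return hmac.compare_digest(user["otp_secret"], otp)

def verify_login(user: dict, password: str, form: dict) -> bool:
    if not check_password(user, password):
        return False
    # FLAW: the second factor is verified only when the client chooses
    # to send an "otp" field. Omitting the field skips the check, so
    # every test that submits the expected form passes while a stripped
    # request bypasses 2FA entirely.
    if "otp" in form:
        return check_otp(user, form["otp"])
    return True

def verify_login_fixed(user: dict, password: str, form: dict) -> bool:
    # The server's own record, not the request shape, decides whether
    # a second factor is required.
    if not check_password(user, password):
        return False
    if user.get("mfa_enabled"):
        return check_otp(user, form.get("otp", ""))
    return True

user = {"password": "pw", "otp_secret": "123456", "mfa_enabled": True}
print(verify_login(user, "pw", {}))        # True  -- the bypass
print(verify_login_fixed(user, "pw", {}))  # False -- OTP now required
```

No single line here is suspicious on its own, which is why the researchers argue that models able to reason about intent across a codebase surface bugs that signature-based tools miss.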
Malware use
The report also points to AI use in malware concealment and support systems. It says APT27, a threat actor linked to the People's Republic of China, used Gemini to speed development of tools believed to support operational relay box infrastructure used to mask intrusion activity.
In a separate case, suspected Russia-linked actors targeting organisations in Ukraine were found using AI-generated decoy code in malware families known as CANFAIL and LONGSTREAM. The inserted code was designed to disguise malicious functions and make forensic investigation more difficult.
Researchers also examined PROMPTSPY, an Android backdoor that integrates Gemini into its operations. Their analysis says the software can inspect device interfaces, generate its own commands and act on infected devices without continuous direction from a human operator.
PROMPTSPY can also capture authentication gestures and rotate supporting infrastructure such as Gemini API keys and command-and-control servers. The report presents it as an example of how AI tools may be built into malicious software to reduce direct operator involvement while maintaining access to compromised systems.
Reconnaissance shift
Beyond malware, the research describes growing use of AI for information gathering and social engineering. Large language models are being used to map organisational structures, identify senior personnel and create phishing material aimed at companies and government bodies.
The report also highlights the spread of what it calls agentic AI frameworks. These tools can carry out tasks such as reconnaissance and vulnerability validation with limited human oversight, and the research links some of that activity to suspected China-related campaigns targeting organisations across Asia.
Another strand of the research focuses on influence operations. It describes suspected AI voice-cloning tied to the pro-Russia campaign Operation Overload, which used manipulated video content to impersonate legitimate journalists.
This use of synthetic media adds to concerns among security analysts that generative AI is cutting the cost and time needed to produce convincing deception material. The report suggests the same tools used for text generation and automation are also being adapted for impersonation and disinformation.
AI as target
The research also says threat actors are trying to expand access to commercial AI systems themselves. It describes the use of proxy relays, automated registration pipelines and account-pooling services intended to circumvent platform safeguards and billing controls.
At the same time, the broader AI software ecosystem is becoming a target. The report documents malicious OpenClaw skills said to be capable of executing unauthorised commands, and cites supply chain attacks affecting AI-related projects including LiteLLM and BerriAI.
These findings indicate that AI is not only being used as a tool by attackers but is also becoming part of the infrastructure that needs defending. As more organisations integrate AI models, developer frameworks and connected services into business systems, weaknesses in those components may create new routes for compromise.
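Defending that surface starts with unglamorous hygiene. The Python sketch below shows one way to audit an environment against a list of flagged package releases; the package name matches a project cited in the report, but the version numbers are placeholders, not real advisories.

```python
# Audit installed packages against a (hypothetical) compromise advisory.
from importlib import metadata

# Placeholder advisory data: package -> versions reported as compromised.
# The version strings here are invented for illustration only.
FLAGGED = {
    "litellm": {"0.0.0"},
}

for pkg, bad_versions in FLAGGED.items():
    try:
        installed = metadata.version(pkg)
    except metadata.PackageNotFoundError:
        continue  # package not present in this environment
    if installed in bad_versions:
        print(f"WARNING: {pkg}=={installed} matches a flagged release")
    else:
        print(f"ok: {pkg}=={installed}")
```

In practice a check like this belongs in continuous integration, alongside dependency pinning and hash verification, so that a poisoned release is caught before it reaches production.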
The report adds that attackers are increasingly trying to industrialise access to AI systems through automated methods that bypass controls and sustain operations at scale. Google, for its part, is developing defensive AI systems of its own, including Big Sleep, a vulnerability discovery agent, and CodeMender, an experimental tool designed to patch software vulnerabilities automatically.
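The report does not describe how these tools work internally. A common pattern in automated program repair, and a reasonable mental model here, is a propose-and-validate loop: generate candidate fixes, then keep the first one that passes the tests. The toy Python sketch below is a generic illustration of that loop, not CodeMender's design; the hard-coded candidate list stands in for model-generated patches.

```python
# Generic propose-and-validate loop used by automated patching agents.
from typing import Callable, Iterable, Optional

def first_passing_patch(
    candidates: Iterable[str],
    tests: Callable[[str], bool],
) -> Optional[str]:
    """Return the first candidate source that passes the test suite."""
    for source in candidates:
        if tests(source):
            return source
    return None

# Toy scenario: double() is broken; two candidate "patches" are proposed.
candidates = [
    "def double(x): return x * 3",  # wrong fix, rejected by the tests
    "def double(x): return x * 2",  # correct fix, accepted
]

def run_tests(source: str) -> bool:
    ns: dict = {}
    exec(source, ns)  # load the candidate into a scratch namespace
    return ns["double"](4) == 8

print(first_passing_patch(candidates, run_tests))
```

The engineering effort in a real system goes into the parts this sketch elides: generating plausible candidates and ensuring the test suite is strong enough to trust.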