Tenable exposes Gemini Trifecta flaws risking user data theft
Tenable has identified and disclosed three vulnerabilities in Google's Gemini suite, referred to as the Gemini Trifecta, which exposed users to privacy risks involving the potential silent theft of sensitive data.
The vulnerabilities affected three separate components of the Gemini suite. In Gemini Cloud Assist, attackers could plant poisoned log entries, causing Gemini to follow malicious instructions unknowingly when users engaged with the tool. In the Gemini Search Personalisation Model, it was possible for attackers to inject queries into a user's browser history. Gemini would then treat this data as trusted context, enabling the exfiltration of private information like saved items and location data. The Gemini Browsing Tool was also impacted, where attackers could prompt Gemini to send outbound requests containing user data directly to attacker-controlled servers.
All three vulnerabilities have since been remediated by Google. Tenable stated that these flaws provided attackers with "invisible doors" into the Gemini suite, potentially allowing unauthorised access to valuable user data without any indication to the user. The vulnerabilities allowed attackers to exploit routine platform features, negating the need for direct access or traditional phishing techniques.
According to Tenable Research, the root cause of the vulnerabilities was Gemini's failure to sufficiently differentiate between legitimate user input and attacker-supplied content. This oversight meant that inputs such as poisoned logs, manipulated search history entries, or hidden web content were accepted as valid, resulting in ordinary features becoming vectors for attacks.
"Gemini draws its strength from pulling context across logs, searches, and browsing. That same capability can become a liability if attackers poison those inputs," said Liv Matan, Senior Security Researcher at Tenable.
The security challenges highlighted by this disclosure emphasise the risks inherent in large language model (LLM) platforms. Attackers could manipulate the AI's behaviour in ways that remained invisible to end users, raising the security stakes for organisations relying on these tools.
Matan continued, "The Gemini Trifecta shows how AI platforms can be manipulated in ways users never see, making data theft invisible and redefining the security challenges enterprises must prepare for. Like any powerful technology, large language models (LLMs) such as Gemini bring enormous value, but they remain susceptible to vulnerabilities."
"Security professionals must move decisively, locking down weaknesses before attackers can exploit them and building AI environments that are resilient by design, not by reaction. This isn't just about patching flaws; it's about redefining security for an AI-driven era where the platform itself can become the attack vehicle."
Tenable detailed the potential impacts had the Gemini Trifecta been exploited. Attackers could have inserted malicious instructions into logs or search histories, exfiltrated sensitive user information such as saved data and location history, abused cloud integrations to reach wider cloud resources, and used the browsing tool to route user data to external destinations. These actions could be performed without users' knowledge, by leveraging AI-driven mechanisms rather than conventional hacking methods.
Following these findings, Google has since remediated the vulnerabilities and users are not required to take additional action. However, Tenable has recommended that security professionals reconsider their approach to AI features, treating them as active attack surfaces. The advice includes regularly auditing logs, search histories, and integrations to detect poisoning or manipulation attempts, monitoring for unusual tool executions or outbound requests that could suggest exfiltration, and proactively testing AI-enabled services for prompt injection resistance.
Matan further commented on the significance of the disclosure:
"This vulnerability disclosure underscores that securing AI isn't just about fixing individual flaws. It's about anticipating how attackers could exploit the unique mechanics of AI systems and building layered defenses that prevent small cracks from becoming systemic exposures."
The Gemini Trifecta highlights the emerging complexity of maintaining security within AI-driven environments, and the need for organisations to develop stronger, proactive security practices tailored to the evolving risks presented by advanced AI platforms.