SecurityBrief India - Technology news for CISOs & cybersecurity decision-makers
OpenAI launches GPT-5.5-Cyber for vetted defenders

Fri, 8th May 2026
Mark Tarre, News Chief

OpenAI has launched GPT-5.5-Cyber in a limited preview for organisations securing critical infrastructure, alongside broader access controls for cybersecurity users of GPT-5.5.

The new model is aimed at specialised security work in authorised environments. GPT-5.5 with Trusted Access for Cyber will remain the main option for most defensive tasks, including secure code review, vulnerability triage, malware analysis, detection engineering and patch validation.

Trusted Access for Cyber is an identity- and trust-based framework that gives vetted defenders fewer automated refusals when carrying out approved cybersecurity work. It is designed to permit legitimate defensive use while continuing to block activity such as credential theft, stealth, persistence, malware deployment and exploitation of third-party systems.

OpenAI is drawing a clearer line between three levels of access. Standard GPT-5.5 remains the default version for general use. GPT-5.5 with Trusted Access for Cyber is intended for verified defensive work, while GPT-5.5-Cyber is the most permissive version for specialised workflows such as authorised red teaming, penetration testing and controlled validation exercises.

The initial version of GPT-5.5-Cyber is not intended to deliver a major jump in raw cyber performance over GPT-5.5. Instead, it is trained to be more permissive on security-related requests for a smaller group of users operating under stricter verification and monitoring requirements.

That distinction matters because OpenAI is trying to widen access for defenders without opening the door to abuse. Verified Trusted Access for Cyber users receive fewer classifier-based refusals on approved tasks, but safeguards remain in place against requests that could support real-world harm.

Access rules are also tightening for users of the more permissive models. Individual Trusted Access for Cyber users of the most cyber-focused models must enable Advanced Account Security, while organisations can instead attest that they use phishing-resistant authentication through single sign-on systems.

Security tiers

OpenAI positioned GPT-5.5 as the broadest tool for security teams because it combines general-purpose reasoning with support for many common defensive tasks. GPT-5.5-Cyber, by contrast, is reserved for cases where authorised workflows still run into refusals, especially when defenders need to validate exploitability in tightly controlled settings.

Most defenders are expected to start with GPT-5.5 under Trusted Access for Cyber rather than the more permissive preview model. OpenAI said the cyber-specific model is part of an iterative deployment process shaped by verification, misuse monitoring, approved-use scoping and partner feedback.

Partner focus

The rollout also highlights how OpenAI sees commercial security vendors as a key route into the market. It is working with vendors across vulnerability discovery, patching, detection, response and network enforcement, arguing that improvements across those layers can shorten the gap between finding a flaw and protecting customers.

For network and security providers, GPT-5.5 can support rule review, configuration analysis, incident investigation and secure change management while software fixes are still being deployed. These uses matter especially for critical infrastructure and public services, where cutting exposure quickly can be as important as issuing a patch.

In vulnerability research, GPT-5.5 with Trusted Access for Cyber can help users understand unfamiliar code, trace root causes, review patches and build safe reproduction harnesses. GPT-5.5-Cyber is relevant in a narrower set of cases where approved partners need proof-of-concept exploits for coordinated disclosure or controlled validation.

Detection and monitoring are another target area. EDR, SIEM, IGA/PAM and other monitoring partners can use GPT-5.5 to connect telemetry, alerts and detections, summarise relevant findings and help analysts move more quickly from public disclosure to investigation.

OpenAI also highlighted software supply chain security, where models may be used to inspect dependency changes, assess exploitability in owned code and spot suspicious package behaviour earlier in development. It named Snyk, Gen Digital, Semgrep and Socket among partners helping it examine such use cases.

Open source angle

Alongside the cyber models, OpenAI is extending Codex Security to selected maintainers of critical open-source projects through its Codex for Open Source programme. The offering is designed to help maintainers identify, validate and remediate vulnerabilities with codebase-specific threat modelling, isolated validation and patch proposals for human review.

Open-source projects are a major transmission path for vulnerabilities across the wider software ecosystem, making upstream maintenance work an important part of cyber defence. OpenAI has also released a Codex Security plugin intended to bring those workflows into Codex interfaces such as the app and command-line tool.

The latest changes reflect a broader attempt to link model access more closely to user identity, organisational checks and task authorisation. "Expanding access to those capabilities responsibly requires stronger confidence in who is using the model, what systems they are targeting, and whether the work is authorized," OpenAI said.