OpenAI releases GPT-5.4-Cyber for vetted security teams, scaling Trusted Access programme



In short: OpenAI is releasing GPT-5.4-Cyber, a model fine-tuned for defensive cybersecurity with lowered refusal boundaries and binary reverse engineering capabilities, and scaling its Trusted Access for Cyber programme to thousands of verified defenders. The move comes a week after Anthropic restricted its more powerful Mythos model to just 11 organisations, setting up a philosophical split: OpenAI bets on broad verified access while Anthropic opts for tightly gated deployment.

OpenAI is opening up its most capable cybersecurity model to thousands of vetted defenders, releasing GPT-5.4-Cyber and expanding its Trusted Access for Cyber programme in what amounts to a direct response to Anthropic’s Project Glasswing announcement last week.

GPT-5.4-Cyber is a variant of GPT-5.4 fine-tuned specifically for defensive security work. Its defining feature is a lower refusal boundary: where standard models block sensitive queries about vulnerability research, exploit analysis, or malware behaviour, this version is designed to answer them, provided the user has been verified as a legitimate security professional. The model also introduces binary reverse engineering capabilities, letting analysts examine compiled software for weaknesses without access to source code.

Trusted Access for Cyber, scaled up

The model sits inside OpenAI’s Trusted Access for Cyber (TAC) programme, which the company first launched in February alongside a $10 million cybersecurity grant fund. TAC is an identity-and-trust framework that gates access to more capable models behind verification tiers. Individual users can authenticate at chatgpt.com/cyber. Enterprises can request team-wide access through an OpenAI representative. Security researchers who need the most permissive capabilities can apply for an invite-only tier.
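The verify-who-before-deciding-what pattern behind TAC can be illustrated with a toy sketch. The tier names and capability gates below are assumptions invented for illustration; OpenAI has not published its actual verification criteria or capability mappings.

```python
from enum import IntEnum

class Tier(IntEnum):
    """Hypothetical verification tiers, loosely mirroring the article's description."""
    UNVERIFIED = 0
    INDIVIDUAL = 1   # self-verified at chatgpt.com/cyber
    ENTERPRISE = 2   # team-wide access via an OpenAI representative
    RESEARCHER = 3   # invite-only, most permissive

# Illustrative capability gates; the real criteria are not public.
CAPABILITY_MIN_TIER = {
    "general_security_qa": Tier.INDIVIDUAL,
    "exploit_analysis": Tier.ENTERPRISE,
    "binary_reverse_engineering": Tier.RESEARCHER,
}

def is_allowed(user_tier: Tier, capability: str) -> bool:
    """Gate a request on who is asking, not only on what is asked."""
    required = CAPABILITY_MIN_TIER.get(capability)
    if required is None:
        return False  # unknown capabilities are denied by default
    return user_tier >= required
```

The design choice worth noting is deny-by-default: an unrecognised capability is refused regardless of tier, which is how access-control systems typically fail safe.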

The April update scales the programme from a limited pilot to what OpenAI describes as “thousands of verified individual defenders and hundreds of teams responsible for defending critical software.” The company is adding new tiers, with higher verification levels unlocking more powerful features. Users approved for the top tier gain access to GPT-5.4-Cyber. There is a catch: the highest-tier users may be required to waive Zero-Data Retention, meaning OpenAI retains visibility into how the model is being used.

The approach represents a philosophical shift. Rather than relying primarily on model-level restrictions to prevent misuse, OpenAI is moving towards an access-control model that verifies who is asking before deciding what the model will answer. The company frames this around three principles: democratised access using objective verification criteria, iterative deployment that updates safety systems as risks emerge, and ecosystem resilience through grants and open-source contributions.

The Anthropic context

OpenAI’s timing is impossible to read without reference to Anthropic’s Project Glasswing, announced on 7 April. Anthropic revealed that its Claude Mythos Preview model had autonomously discovered thousands of zero-day vulnerabilities across every major operating system and web browser, including a 27-year-old bug in OpenBSD and a 17-year-old remote code execution flaw in FreeBSD that Mythos identified, exploited, and documented without human intervention.

Anthropic’s response was to restrict access severely: Mythos Preview is available only to 11 organisations, including Apple, Google, Microsoft, AWS, Cisco, CrowdStrike, and JPMorgan Chase, under a $100 million defensive initiative. The model is not publicly available, and Anthropic has said it may never be, given the risk that its exploit-generation capabilities could be misused.

OpenAI is taking the opposite bet. GPT-5.4-Cyber is less capable than Mythos in raw vulnerability discovery, but OpenAI is making it available to a far broader audience. The implicit argument is that restricting powerful security tools to a handful of tech giants leaves the vast majority of organisations, from hospitals and municipal governments to small security firms and other defenders of critical infrastructure, without access to the same calibre of defensive technology.

What GPT-5.4-Cyber can do

Beyond lowered refusal boundaries, the model is built for workflows that standard ChatGPT handles poorly or refuses outright. Binary reverse engineering is the headline feature: security analysts can feed compiled executables into the model and receive analysis of potential malware behaviour, embedded vulnerabilities, and structural weaknesses. This is work that traditionally requires specialised tools like IDA Pro or Ghidra and significant manual expertise.
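Before any model (or a disassembler like Ghidra) sees a binary, analysts typically run cheap static triage first. The snippet below is a generic example of one such step, Shannon entropy over a byte buffer, a standard heuristic for spotting packed or encrypted sections; it is illustrative of the workflow and says nothing about GPT-5.4-Cyber's internals.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Entropy in bits per byte: near 0 for constant data, approaching 8
    for random or encrypted data. High-entropy sections in an executable
    often indicate packing or embedded encrypted payloads."""
    if not data:
        return 0.0
    total = len(data)
    counts = Counter(data)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def looks_packed(section: bytes, threshold: float = 7.2) -> bool:
    """Crude triage heuristic: flag sections whose entropy exceeds a
    typical packed-code threshold (the 7.2 cutoff is a common rule of
    thumb, not a standard)."""
    return shannon_entropy(section) > threshold
```

In practice a finding like this just prioritises which sections get deeper (and costlier) analysis, whether by a human with IDA Pro or by a model.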

The model also handles dual-use queries: questions about attack techniques, exploit chains, and vulnerability classes that standard models flag as potentially harmful. OpenAI says earlier GPT versions sometimes refused to answer legitimate defensive queries, creating friction for security professionals who needed the model to reason about adversarial techniques in order to defend against them.

Codex Security, OpenAI’s automated code-scanning tool, complements the model. Since its launch, Codex Security has contributed to more than 3,000 critical and high-severity vulnerability fixes across the open-source ecosystem. It now covers more than 1,000 open-source projects through a free scanning programme.

The dual-use problem

The fundamental tension in cybersecurity AI is that the same capabilities that help defenders also help attackers. A model that can reverse-engineer binaries for defensive analysis can, in principle, be used to find exploitable flaws for offensive purposes. OpenAI’s answer is that verification and monitoring are more effective safeguards than blanket refusal.

The company is betting that KYC-style identity verification, tiered access, and retained usage data will deter misuse more effectively than a model that simply refuses to discuss exploit techniques, a defence that sophisticated adversaries can jailbreak anyway. Research published in January found that adaptive prompt injection attacks succeed against even state-of-the-art defences more than 85% of the time, suggesting that refusal-based safety is a losing game.

But the monitoring requirement raises its own questions. Requiring top-tier users to waive Zero-Data Retention means OpenAI will see what security researchers are doing with the model, which vulnerabilities they are investigating, which systems they are probing, and which exploits they are analysing. For security teams working on sensitive or classified infrastructure, that visibility may be a dealbreaker. It also creates a single point of compromise: if OpenAI’s logs are breached, they become a roadmap to unpatched vulnerabilities across the organisations using the programme.

The emerging landscape

Between Anthropic’s restricted Mythos, OpenAI’s verified-access GPT-5.4-Cyber, and Anthropic’s separate $100 million Glasswing fund, the cybersecurity AI market is splitting into two camps. One camp says these models are too dangerous for broad access and must be gated behind invitation-only consortiums. The other says broad access, with verification, is the only way to ensure that defenders are not outgunned by adversaries who face no such constraints.

The EU AI Act, whose most substantive obligations take effect on 2 August 2026, will add another variable. High-risk AI systems, a category likely to encompass security automation tools, will need to demonstrate compliance with requirements around risk management, data governance, transparency, and human oversight. How tiered-access cybersecurity models fit within that framework remains an open question that neither OpenAI nor Anthropic has fully addressed.

For now, the practical reality is that the world’s two most prominent AI companies are racing to equip cybersecurity professionals with models capable of finding and analysing vulnerabilities at a speed and scale that was impossible a year ago. Whether that race produces a safer internet or a more dangerous one depends on how well the guardrails hold.







After being teased in the second beta, the new “Bubbles” feature is finally available in Android 17 Beta 3. This is the biggest change to Android multitasking since split-screen mode. I had to see how it worked—come along with me.

Now, it should be mentioned that this feature will probably look a bit familiar to Samsung Galaxy owners. One UI also allows for putting apps in floating windows, and they minimize into a floating widget. However, as you’ll see, Google’s approach is more restrained.

App Bubbles in Android 17

There’s a lot to like already

First and foremost, putting an app in a “Bubble” allows it to be used on top of whatever’s happening on the screen. The functionality is essentially identical to Android’s older feature of the exact same name, but now it can be used for apps in addition to messaging conversations.

To bubble an app, simply long-press the app icon anywhere you see it. That includes the home screen, app drawer, and the taskbar on foldables and tablets. Select “Bubble” or the small icon depicting a rectangle with an arrow pointing at a dot in the menu.

Bubbles on a phone screen

The app will immediately open in a floating window on top of your current activity. This is the full version of the app, and it works exactly how it would if you opened it normally. You can’t resize the app bubble, but on large-screen devices, you can choose which side it’s on. To minimize the bubble, simply tap outside of it or do the Home gesture—you won’t actually go to the home screen.

Multiple apps can be bubbled together—just repeat the process above—but only one can be shown at a time. This is a key difference compared to One UI’s pop-up windows, which can be resized and tiled anywhere on the screen. Here is also where things vary depending on the type of device you’re using.

If you’re using a phone, the current bubbled apps appear in a row of shortcuts above the window. Tap an app icon, and it will instantly come into view within the bubble. On foldables and tablets, the row of icons is much smaller and below the window.

Another difference is how the app bubbles are minimized. On phones, they live in a floating app icon (or stack of icons) on the edge of the screen. You are free to move this around the screen by dragging it. Tapping the minimized bubble will open the last active app in the bubble. On foldables and tablets, the bubble is minimized to the taskbar (if you have it enabled).

Bubbles on a foldable screen

Now, there are a few things to know about managing bubbles. First, tapping the “+” button in the shortcuts row shows previously dismissed bubbles—it’s not for adding a new app bubble. To dismiss an app bubble, you can drag the icon from the shortcuts row and drop it on the “X” that appears at the bottom of the screen.

To remove the entire bubble completely, simply drag it to the “X” at the bottom of the screen. On phones, there’s also an extra “Manage” button below the window with a “Dismiss bubble” option.

Better than split-screen?

Bubbles make sense on smaller screens

That’s pretty much all there is to it. As mentioned, there’s definitely not as much freedom with Bubbles as there is with pop-up windows in One UI. The latter allows you to treat apps like windows on a computer screen. Bubbles are a much more confined experience, but the benefit is that you don’t have to do any organizing.

Samsung One UI pop-up windows

Of course, Android has supported using multiple apps at once with split-screen mode for a while. So, what’s the benefit of Bubbles? On phones, especially, split-screen mode makes apps so small that they’re not very useful.

If you’re making a grocery list while checking the store website, you’re stuck in a very small browser window. Bubbles lets you use two apps at essentially full size at the same time—it’s even quicker than swiping the gesture bar to switch between apps.

If you’d like to give App Bubbles a try, enroll your qualified Pixel phone in the Android Beta Program. The final release of Android 17 is only a few months away (Q2 2026), but this is an exciting feature to check out right now.





