OpenAI has begun restricting access to its new cybersecurity tool, GPT-5.5 Cyber, just days after CEO Sam Altman publicly criticized rival Anthropic for doing the same with its competing product, Mythos. The move highlights a growing tension in the AI industry over how to balance powerful defensive tools with the risk of misuse.
A reversal in messaging
On Thursday, Altman announced on X that OpenAI would begin rolling out Cyber to “critical cyber defenders” over the coming days. Interested users must apply through a dedicated OpenAI website, submitting credentials and a detailed description of their intended use. The tool is designed for tasks such as penetration testing, vulnerability identification and exploitation, and malware reverse engineering — essentially a comprehensive toolkit for companies to probe their own security defenses.
This controlled-access approach mirrors the one Anthropic adopted for Mythos, which Altman had previously dismissed as “fear-based marketing.” Critics at the time also argued that Anthropic’s rhetoric around Mythos was overblown, especially after an unauthorized group reportedly gained access to the tool despite the restrictions.
Why the gatekeeping matters
The central dilemma for both companies is the dual-use nature of these AI cybersecurity tools. While they can help organizations find and fix security holes, the same capabilities could be weaponized by malicious actors for offensive purposes. By limiting access to verified defenders, OpenAI and Anthropic aim to reduce that risk, even if it means slower adoption.
OpenAI has indicated it is working with the U.S. government to identify more legitimate users and is exploring pathways to make Cyber more widely available in the future. The company has not provided a timeline for broader release.
Industry implications
The episode underscores a broader industry challenge: how to deploy powerful AI tools responsibly without stifling innovation or appearing hypocritical. Altman’s earlier criticism of Anthropic now looks premature, as OpenAI confronts the same practical realities. For cybersecurity professionals, restricted access may slow the adoption of advanced AI defenses, potentially leaving organizations vulnerable in the interim.
The incident also raises questions about the effectiveness of access controls. The reported breach of Mythos suggests that determined bad actors may find ways around such barriers, regardless of company policy.
Conclusion
OpenAI’s decision to restrict Cyber access, after criticizing Anthropic for the same approach, reflects the complex realities of AI safety in cybersecurity. The company is now dealing with the same trade-offs between utility and security that it previously accused its rival of exaggerating. As the tools evolve, the industry will be watching to see whether access controls can truly keep powerful AI out of the wrong hands.
FAQs
Q1: What is OpenAI Cyber?
Cyber is a GPT-5.5-based cybersecurity toolkit designed to help organizations perform penetration testing, identify and exploit vulnerabilities, and reverse engineer malware.
Q2: Why is OpenAI restricting access to Cyber?
OpenAI is limiting access to verified cybersecurity defenders to prevent the tool from being misused by malicious actors for offensive purposes.
Q3: How does this compare to Anthropic’s Mythos?
Both tools are AI-powered cybersecurity assistants. Anthropic restricted Mythos access first, which Altman criticized as “fear-based marketing.” OpenAI is now using a similar restricted-access model for Cyber.