Anthropic AI Model Access Restricted Over Fears It Could Supercharge Cyberattacks


In a move highlighting the dual-use dilemma of advanced artificial intelligence, AI company Anthropic has sharply restricted access to its newest model. The firm fears its creation’s exceptional ability to find software bugs could, in the wrong hands, become a powerful engine for future cyberattacks. This decision, announced on April 7, 2026, signals a new phase of caution in the AI industry.

Anthropic’s Stark Warning on AI and Cyber Threats

Anthropic stated its new general-purpose model, Claude Mythos Preview, can now surpass all but the most expert human hackers at discovering and exploiting software vulnerabilities. The company tested the model extensively. The results were alarming.


Claude Mythos identified thousands of critical security flaws across major operating systems, web browsers, and foundational software libraries, including high-severity vulnerabilities in every major OS and browser. According to Anthropic, a staggering 99% of these flaws remain unpatched.

“Given the rate of AI progress, it will not be long before such capabilities proliferate,” the company warned in its announcement. The implication is clear. Widespread access to such tools could lower the barrier for sophisticated cyberattacks dramatically.


The Scale of the Discovery: Decades-Old Bugs

The vulnerabilities uncovered are not minor. Many have lurked undetected for ten to twenty years. This suggests a fundamental problem with traditional code auditing methods.

Data from Anthropic’s research reveals the age and scope of these hidden flaws:

  • A 27-year-old bug in OpenBSD, an operating system prized for its security focus. This has since been patched.
  • A 17-year-old remote code execution flaw in the open-source FreeBSD operating system.
  • A 16-year-old vulnerability in the FFmpeg media library, used by countless applications.
  • Numerous weaknesses in the Linux kernel and in core cryptography protocols like TLS and SSH.

Anthropic described the bugs as “often subtle or difficult to detect.” This is precisely the type of flaw that requires rare, expensive human expertise to find. AI changes that equation entirely. It can operate at a scale and speed humans cannot match.

The Rising Tide of AI-Powered Attacks

Anthropic’s caution is not theoretical. AI tools are already being weaponized. According to a 2025 report from research group AllAboutAI, there was a 72% year-over-year increase in AI-powered cyberattacks. The same report found 87% of global organizations experienced an AI-enabled attack that year.

Hackers use AI to write more convincing phishing emails, generate malicious code, and automate target discovery. Claude Mythos's capability goes far beyond these current uses: it could autonomously find novel, exploitable paths into secured systems.

Industry watchers note this creates a dangerous asymmetry. Defenders must patch every hole; an attacker needs only one. An AI that can find all the holes at once is a potent weapon.

Project Glasswing: A Defensive Alliance Forms

In response to this threat, Anthropic is not sitting idle. Alongside its restrictive access policy, it launched Project Glasswing. This is a collaborative security initiative involving over 40 major technology and finance firms.

Partners include Amazon Web Services, Apple, Cisco, Google, JPMorgan, the Linux Foundation, Microsoft, and Nvidia. The goal is defensive. The coalition will use Claude Mythos Preview’s analytical power to proactively find bugs, share data internally, and patch critical vulnerabilities before malicious actors can exploit them.

“It would be irresponsible for us to disclose details about them,” Anthropic said of the unpatched flaws, explaining its silence is a security measure. Project Glasswing provides a controlled channel to fix them.

This suggests a new model for AI security. Instead of a public release, powerful models may be deployed through tightly controlled consortia focused on specific, high-stakes problems like national infrastructure or financial system defense.

The Long Road to More Secure Software

Anthropic acknowledges this is the start of a long, difficult process. The company stated that defending the world’s digital infrastructure “might take years.”

There is a transitional period where offensive capabilities powered by AI could surge ahead of defenses. This period “will be fraught,” the company admitted. The short-term risk is real.

But the long-term outlook, according to Anthropic, is more optimistic. The firm expects defensive capabilities will ultimately dominate. AI will be used to write more secure code from the outset and to continuously harden existing systems. “The world will emerge more secure, with software better hardened—in large part by code written by these models,” the announcement concluded.

What this means for investors and the tech industry is a period of heightened scrutiny. The race is no longer just about building the most capable AI. It is also about building the safest and most responsibly deployed AI. Regulatory pressure is likely to increase, favoring companies that can demonstrate strong security governance.

Conclusion

Anthropic’s decision to limit access to its Claude Mythos AI model is a watershed moment. It directly confronts the danger that advanced AI could be used to supercharge cyberattacks. By funneling the model’s power into the defensive Project Glasswing alliance, Anthropic is attempting to steer its technology toward protection rather than exploitation. The discovery of decades-old, critical vulnerabilities proves the need for such tools. However, it also underscores the profound dual-use risk that will define the next era of artificial intelligence. The security of our digital world may increasingly depend on the choices a handful of AI labs make behind closed doors.

FAQs

Q1: Why did Anthropic limit access to Claude Mythos?
Anthropic restricted access because the model proved exceptionally skilled at finding software vulnerabilities. The company fears that if this capability became widely available, it could be used by malicious actors to launch devastating cyberattacks.

Q2: What is Project Glasswing?
Project Glasswing is a defensive security initiative led by Anthropic. It brings together over 40 companies, including tech giants like Microsoft, Google, and Apple, to use the Claude Mythos AI to find and patch software bugs before hackers can exploit them.

Q3: What kind of vulnerabilities did the AI find?
The AI found thousands of critical, often subtle flaws. These included vulnerabilities over 20 years old in major operating systems like OpenBSD and FreeBSD, weaknesses in the Linux kernel, and issues in core cryptography libraries used for secure internet communications.

Q4: Are AI-powered cyberattacks already happening?
Yes. According to a 2025 report from AllAboutAI, AI-powered cyberattacks increased by 72% year-over-year, with 87% of global organizations reporting an AI-enabled attack. Hackers currently use AI for phishing, code generation, and reconnaissance.

Q5: Will AI make software more or less secure in the future?
Anthropic believes that after a risky transitional period, AI will lead to more secure software. The long-term expectation is that AI will be used defensively to write better-hardened code and to continuously audit and patch existing systems at scale.

Written by

Jackson Miller

Jackson Miller is a senior cryptocurrency journalist and market analyst with over eight years of experience covering digital assets, blockchain technology, and decentralized finance. Before joining CoinPulseHQ as lead writer, Jackson worked as a financial technology correspondent for several business publications where he developed deep expertise in derivatives markets, on-chain analytics, and institutional crypto adoption. At CoinPulseHQ, Jackson covers Bitcoin price movements, Ethereum ecosystem developments, and emerging Layer-2 protocols.

This article was produced with AI assistance and reviewed by our editorial team for accuracy and quality.
