February 12, 2026 — A significant security rivalry has erupted in the autonomous AI agent space, with Near.AI co-founder Illia Polosukhin publicly launching development of IronClaw, a Rust-based alternative to the viral but vulnerable OpenClaw assistant. Concurrently, the AI protocol Olas has deployed specialized trading agents on the prediction market platform Polymarket, signaling a new phase of automation in speculative markets. These parallel developments, confirmed this week through GitHub activity and official announcements, highlight the escalating tension between rapid AI innovation and critical security concerns in decentralized ecosystems.
IronClaw Emerges as Secure Answer to OpenClaw’s Flaws
Near.AI’s Illia Polosukhin initiated the IronClaw project after observing multiple security incidents involving OpenClaw, the AI agent harness that gained popularity for its ability to control multiple tools and remember conversations across platforms like Telegram and Slack. “People are losing their funds and credentials using OpenClaw,” Polosukhin stated in a February 9 discussion. “A number of people have stopped using it as they’re afraid it will leak all of their information.” The core vulnerability stems from OpenClaw’s architecture, which grants the large language model direct access to credentials and terminal functions, creating what Polosukhin describes as “a total security black hole.”
Polosukhin’s solution involves fundamentally rearchitecting the system using Rust, a programming language celebrated for memory safety, and isolating each tool within sandboxed WebAssembly environments. This containment strategy ensures that if one component becomes compromised, it cannot affect others. More crucially, IronClaw implements a strict separation between the LLM and sensitive data. “The solution with IronClaw is to not let the LLM touch secrets at all,” Polosukhin explained. Secrets are stored in an encrypted vault, with the language model receiving permission to use them only for specific, pre-authorized sites and actions.
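Polosukhin has not published the vault's API, but the model he describes — secrets held outside the LLM's context, released only for specific, pre-authorized sites and actions — can be sketched roughly as follows. All names and structures here are hypothetical illustrations, not IronClaw's actual code:

```rust
use std::collections::{HashMap, HashSet};

// Hypothetical sketch of a permission-gated secret vault.
// The LLM never sees secret values; it only holds opaque handles
// and must name the site and action it wants a secret used for.
struct Vault {
    secrets: HashMap<String, Vec<u8>>, // handle -> secret bytes (encrypted at rest)
    grants: HashMap<String, HashSet<(String, String)>>, // handle -> allowed (site, action) pairs
}

impl Vault {
    fn new() -> Self {
        Vault { secrets: HashMap::new(), grants: HashMap::new() }
    }

    // Store a secret; callers only ever receive the opaque handle.
    fn store(&mut self, handle: &str, secret: &[u8]) {
        self.secrets.insert(handle.to_string(), secret.to_vec());
    }

    // The user (not the LLM) pre-authorizes a (site, action) pair.
    fn grant(&mut self, handle: &str, site: &str, action: &str) {
        self.grants
            .entry(handle.to_string())
            .or_default()
            .insert((site.to_string(), action.to_string()));
    }

    // The agent runtime uses the secret on the LLM's behalf; the
    // secret value itself never enters the model's context window.
    fn use_secret(&self, handle: &str, site: &str, action: &str) -> Result<(), String> {
        let allowed = self
            .grants
            .get(handle)
            .map_or(false, |g| g.contains(&(site.to_string(), action.to_string())));
        if !allowed {
            return Err(format!("{handle}: not authorized for '{action}' on {site}"));
        }
        let _secret = self.secrets.get(handle).ok_or("unknown handle")?;
        // ... perform the signed request here, entirely outside the LLM ...
        Ok(())
    }
}
```

Under this model, a request for an unauthorized site fails even if a prompt injection convinces the model to ask for it — the vault, not the model, is the enforcement point.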
The Technical Architecture Behind the Security Promise
IronClaw’s security advantages stem from several deliberate technical choices that address OpenClaw’s most criticized weaknesses. The shift from JavaScript to Rust eliminates entire classes of memory-safety vulnerabilities that malicious actors frequently exploit. Furthermore, Rust’s relative obscurity in mainstream development reduces the pool of hackers familiar with its exploitation techniques. The system also treats prompt injections—where malicious instructions are hidden within normal-looking prompts—as critical security risks, implementing detection and mitigation layers that OpenClaw lacks.
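IronClaw's actual injection-detection layers are not public. As a toy illustration only, one mitigation pass might screen untrusted tool output for instruction-like patterns before it ever reaches the model — the patterns and threshold below are invented for the example, and real systems use far more robust detection:

```rust
// Toy sketch of a prompt-injection screening pass over untrusted input.
fn injection_score(untrusted_text: &str) -> usize {
    const SUSPICIOUS: [&str; 5] = [
        "ignore previous instructions",
        "disregard your system prompt",
        "reveal your secrets",
        "send your private key",
        "you are now",
    ];
    let lower = untrusted_text.to_lowercase();
    SUSPICIOUS.iter().filter(|p| lower.contains(**p)).count()
}

// Quarantine tool output instead of forwarding it to the LLM
// when any suspicious pattern is present.
fn screen_tool_output(output: &str) -> Result<&str, String> {
    match injection_score(output) {
        0 => Ok(output),
        n => Err(format!("blocked: {n} suspicious pattern(s) found")),
    }
}
```

Pattern matching alone is easily evaded, which is why the article frames detection as one layer among several rather than a complete defense.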
- Compartmentalized Tools: Each agent function runs in an isolated WebAssembly sandbox, preventing lateral movement if compromised.
- Encrypted Vault for Secrets: Private keys and credentials never reside in plain text within the LLM’s operational memory.
- Permission-Based Access: The AI must request explicit, auditable permissions for each sensitive action, unlike OpenClaw’s broad access model.
- Rust Foundation: The use of Rust provides inherent protection against buffer overflows and memory corruption attacks common in C++ or JavaScript codebases.
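The compartmentalization described above can be sketched as a capability check at the host boundary: each tool declares its capabilities up front, and the host refuses anything undeclared, so a compromised tool cannot move laterally. This is a simplified hypothetical — in IronClaw the boundary is reportedly a WebAssembly sandbox, and the names below are invented:

```rust
use std::collections::HashSet;

// Hypothetical sketch of a compartmentalized tool running behind a
// host boundary with a fixed, declared capability set.
struct SandboxedTool {
    name: String,
    capabilities: HashSet<String>, // e.g. "net:fetch", "fs:read"
}

impl SandboxedTool {
    fn new(name: &str, caps: &[&str]) -> Self {
        SandboxedTool {
            name: name.to_string(),
            capabilities: caps.iter().map(|c| c.to_string()).collect(),
        }
    }

    // The host checks every call against the declared capability set;
    // undeclared capabilities are denied even if the tool's own code
    // has been compromised.
    fn invoke(&self, capability: &str) -> Result<(), String> {
        if self.capabilities.contains(capability) {
            Ok(()) // dispatch the real host function here
        } else {
            Err(format!("{}: capability '{capability}' denied", self.name))
        }
    }
}
```

A weather tool granted only `net:fetch` would be refused any filesystem access, regardless of what instructions an attacker smuggles into it.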
Development Pace and Near-Term Availability
George Xian Zeng, General Manager of Near.AI, revealed the project’s remarkable velocity to industry publication Magazine. “He built the basis of it in one evening,” Zeng said of Polosukhin’s effort. “He was feeding his baby and building IronClaw at the same time.” This rapid development is evidenced by 74 GitHub commits from Polosukhin in the past week alone. Zeng anticipates a functional IronClaw release on Near.AI within “a matter of weeks.” In the interim, Near.AI Cloud offers a beta version of OpenClaw running in a Trusted Execution Environment (TEE), where all data remains encrypted and inaccessible even to Near.AI itself.
Olas Deploys Polystrat Bots on Polymarket
In a separate but related development, the autonomous AI network Olas has officially launched its Polystrat agents on Polymarket. These agents, adapted from the Omenstrat bots that have executed over 13 million transactions on the Gnosis-based Omen prediction platform, are designed to identify and act on market opportunities. Unlike some experimental agents that seek pure arbitrage, Polystrat agents employ a suite of tools analyzing news sources, public data, and market signals to predict outcomes in markets resolving within four days.
David Minarsch, co-founder of Olas and CEO of Valory, provided performance context. “What we see with Omenstrat is that over time they have a 55% to 65% success rate depending on which models and tools they use.” Data shared with Magazine shows win rates between 59.2% and 63.6% for categories like sustainability, science, and business, though performance drops significantly for subjects like fashion, arts, and social topics. Sports predictions hover around 51%, essentially random chance. Minarsch cautions that these results come from the smaller Omen marketplace and may not directly translate to the more liquid and competitive Polymarket environment.
Safety Architecture for Autonomous Trading
A critical design philosophy for Polystrat involves hardcoding all wallet and betting-related functions. “That’s a key architectural design decision, which really restricts the capability of the agent,” Minarsch notes. “So, our fully structured agent won’t suddenly become your personal assistant. But it also means it’s safer.” This approach mirrors the security-first thinking behind IronClaw, prioritizing safety over unbounded functionality. Users can fund an agent with approximately $100 to test its autonomous betting across multiple markets and gauge its effectiveness. Olas generates revenue by taking a cut of fees paid to tools, models, and other agents within its ecosystem, accessible via the Pearl marketplace which requires staking OLAS tokens.
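Minarsch does not detail the implementation, but "hardcoding all wallet and betting-related functions" suggests a structure where the model contributes only a probability estimate, while stake sizing and transaction logic live in fixed code the model cannot alter. A rough hypothetical sketch, with all thresholds and caps invented for illustration:

```rust
// Hypothetical sketch of a structured trading agent: the model returns
// only a probability estimate; the stake rule and the bet path are
// fixed code outside the model's control. All numbers are invented.
const MAX_STAKE: f64 = 10.0; // hard cap per bet, in dollars
const EDGE_THRESHOLD: f64 = 0.05; // minimum edge over the market price

// Fixed stake rule: a capped, fractional-Kelly-style bet.
fn stake_for(model_prob: f64, market_price: f64, bankroll: f64) -> f64 {
    let edge = model_prob - market_price;
    if edge < EDGE_THRESHOLD {
        return 0.0; // no bet without sufficient edge
    }
    // Bet a fraction of bankroll proportional to edge, hard-capped.
    (bankroll * edge * 0.5).min(MAX_STAKE)
}
```

Because the model's output feeds a fixed rule rather than composing arbitrary transactions, a misbehaving model can at worst place a small, capped bet — it cannot, for instance, drain the wallet or repurpose itself as a general assistant.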
The Persistent Challenge of Skill Marketplaces
Both developments underscore an unresolved tension in the AI agent ecosystem: the balance between open innovation and security. This is most evident in marketplaces like ClawHub, where users can download skills to enhance their agents’ capabilities. George Xian Zeng highlighted the dilemma: “The cool thing is that anyone can build a skill. But the dangerous thing about the current marketplace is that anyone can build a skill.” A recent report from blockchain security firm Slowmist found that 341 of the available skills on such platforms contained malicious code designed to steal passwords or data.
Zeng suggests a shift toward curation may be necessary. “How do you make a marketplace that’s safe and effective, right? We’re still going through how exactly do you make that work? I think it’s reasonable to consider maybe, like, a curated marketplace.” Near.AI has also launched its own crypto-based marketplace where AI agents can hire each other, though currently many of the 1,900 listed jobs involve building projects for Near itself.
| Platform/Agent | Core Innovation | Primary Security Measure | Current Status |
|---|---|---|---|
| OpenClaw (Original) | Multi-tool AI assistant harness | JavaScript-based, broad access model | Viral but criticized for leaks |
| IronClaw (Near.AI) | Secure Rust-based rebuild | Sandboxed WASM, encrypted vault, no LLM secret access | Active development, weeks from launch |
| Polystrat (Olas) | Prediction market trading agent | Hardcoded financial functions, structured autonomy | Live on Polymarket |
Broader Industry Context and Implications
The security focus of IronClaw arrives amid heightened sensitivity to AI risks. A viral social media post this week demonstrated that OpenClaw could still reveal private keys despite explicit user instructions not to, validating Polosukhin’s concerns. Furthermore, the integration of AI agents into financially sensitive environments like cryptocurrency wallets and prediction markets raises the stakes for security failures. The move toward Rust-based systems may signal a broader industry trend, similar to the programming language’s adoption in blockchain core development for its safety guarantees.
Meanwhile, the automation of prediction markets through agents like Polystrat could significantly increase market efficiency and liquidity, but also introduces new dynamics. If successful, these agents could reduce obvious arbitrage opportunities and potentially correlate markets based on similar AI-driven analysis. Their performance on Polymarket, a platform with substantial trading volume, will serve as a major test case for the viability of autonomous AI traders in complex, real-world financial environments.
Market Reactions and Competitive Landscape
The announcement has sparked discussion across developer communities, with many expressing relief at the security-focused alternative. However, some question whether IronClaw’s restrictive architecture will limit the creative, open-ended tasks that made OpenClaw popular. The competitive landscape is also evolving rapidly: Polymarket recently partnered with Kaito AI to launch “attention markets” for betting on virality, indicating a diversification of AI-integrated products. The flood of AI advertisements from 16 tech companies during the Super Bowl further demonstrates the sector’s push into mainstream awareness, though historical parallels to the dot-com and crypto ad booms suggest cautious optimism is warranted.
Conclusion
The simultaneous emergence of IronClaw and Polystrat marks a pivotal moment in the maturation of autonomous AI agents. IronClaw represents a necessary corrective to the “move fast and break things” approach, prioritizing user security through architectural rigor and modern tooling. Polystrat demonstrates a practical, revenue-generating application of AI agents in a high-stakes financial niche, albeit with carefully constrained capabilities. Together, they illustrate the industry’s growing recognition that for AI agents to gain trust and achieve widespread adoption—especially in cryptocurrency and finance—security and reliability must be foundational, not optional. The coming weeks will reveal whether IronClaw can deliver on its security promises and if Polystrat’s Omen success translates to Polymarket, potentially setting new standards for how AI interacts with the decentralized economy.
Frequently Asked Questions
Q1: What is the main security difference between OpenClaw and IronClaw?
IronClaw prevents the large language model from directly accessing secrets like private keys by storing them in an encrypted vault and granting only permission-based access. OpenClaw’s architecture allows the LLM broader, more direct access to credentials, which has led to documented leaks.
Q2: How successful are Olas’s Polystrat agents on prediction markets?
On the Omen platform, their predecessor agents achieved win rates between 55% and 65% over long time horizons in categories like business and science. However, performance varies widely by topic, and their effectiveness on the larger, more competitive Polymarket platform remains unproven.
Q3: When will IronClaw be available to the public?
Near.AI executives expect IronClaw to be finished and available on their platform within a matter of weeks, based on the current rapid development pace evidenced by frequent GitHub commits.
Q4: Why is Rust considered more secure for building AI agents?
Rust’s compiler enforces memory safety, eliminating entire classes of vulnerabilities like buffer overflows that are common in languages like C++ or JavaScript. Its relative novelty also means fewer hackers are proficient in exploiting Rust-specific flaws.
Q5: What is the biggest security risk with current AI agent skill marketplaces?
Marketplaces like ClawHub allow anyone to upload skills, and security audits have found hundreds of skills containing malicious code designed to steal data. This creates a significant supply-chain risk for users who download third-party enhancements.
Q6: Can I try a more secure version of OpenClaw right now?
Yes, Near.AI Cloud offers a beta version running in a Trusted Execution Environment (TEE), which encrypts all data. Access requires an application, but it provides a sandboxed, more secure environment than the standard OpenClaw deployment.
