Richard Socher, the AI researcher best known for founding the chatbot startup You.com and for his earlier work on ImageNet, is stepping into the next wave of artificial intelligence research. On Wednesday, his new startup, Recursive Superintelligence, emerged from stealth with $650 million in funding. The company is based in San Francisco and is pursuing what many in the field consider a holy grail: a recursively self-improving AI model — one that can identify its own weaknesses and redesign itself to fix them, without any human involvement.
What is recursive self-improvement, and why does it matter?
Recursive self-improvement, or RSI, refers to an AI system that can autonomously improve its own architecture and capabilities in a continuous loop. Unlike current AI models that require human engineers to fine-tune them, an RSI system would handle the entire cycle of ideation, implementation, and validation on its own. In an interview after the launch, Socher emphasized that this is fundamentally different from simply asking an AI to improve a specific output. “That’s not recursive self-improvement,” he said. “That’s just improvement.”
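The cycle Socher describes — ideation, implementation, validation — can be illustrated with a toy hill-climbing sketch. Everything here (the benchmark, the parameter list, the function names) is invented for illustration; a real RSI system would rewrite its own architecture, not just nudge three numbers:

```python
import random

def validate(program):
    """Toy benchmark: score a candidate 'model' (here, a list of weights)
    by closeness to a fixed target. Higher is better."""
    target = [0.5, 0.5, 0.5]
    return -sum((w - t) ** 2 for w, t in zip(program, target))

def ideate(program, rng):
    """Propose a small modification to the system's own parameters."""
    i = rng.randrange(len(program))
    candidate = list(program)
    candidate[i] += rng.uniform(-0.1, 0.1)
    return candidate

def self_improve(program, steps=1000, seed=0):
    """Run the ideation -> validation -> implementation loop with no
    human in it: commit a change only if the benchmark says it helped."""
    rng = random.Random(seed)
    score = validate(program)
    for _ in range(steps):
        candidate = ideate(program, rng)   # ideation
        new_score = validate(candidate)    # validation
        if new_score > score:              # implementation (commit)
            program, score = candidate, new_score
    return program, score

improved, score = self_improve([0.0, 0.0, 0.0])
```

The point of the sketch is the closed loop: the system both proposes and judges its own changes, which is exactly the property that distinguishes RSI from a human asking a model to "improve this output."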
The potential implications are enormous. If successful, such a system could accelerate progress in virtually every field of science and technology, from drug discovery to materials science to climate modeling. But it also raises profound questions about control, safety, and resource allocation — questions that Socher and his team are only beginning to address.
The open-endedness approach
Recursive Superintelligence’s technical strategy centers on a concept called open-endedness, borrowed from evolutionary biology. In nature, organisms adapt to their environment, and other organisms counter-adapt, creating a cycle that can produce increasingly complex life forms over billions of years. The company aims to replicate this dynamic in software.
Tim Rocktäschel, a co-founder who previously led open-endedness and self-improvement teams at Google DeepMind, has already demonstrated this approach in projects like Genie 3, a world model that can generate interactive environments on demand. Another example is “rainbow teaming,” where two AI systems are pitted against each other — one trying to make the other produce unsafe outputs, and the other learning to resist. This adversarial co-evolution, Socher explained, can run for millions of iterations, making the system progressively more reliable.
“You can actually allow two AIs to co-evolve,” Socher said. “One keeps attacking the other, and then comes up with not just one angle but many different angles.”
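The attack-and-patch dynamic Socher describes can be sketched as a toy co-evolution loop. The "angles" here are just integers and the defender is a blocklist — a deliberate simplification to show why breaches become rarer as iterations accumulate:

```python
import random

def coevolve(rounds=200, angles=50, seed=0):
    """Toy adversarial co-evolution: an attacker searches for 'angles'
    (integers standing in for attack strategies) that slip past the
    defender; the defender patches each successful attack and becomes
    progressively harder to breach."""
    rng = random.Random(seed)
    blocked = set()
    breaches = []
    for r in range(rounds):
        attack = rng.randrange(angles)   # attacker tries an angle
        if attack not in blocked:        # defender fails to resist
            breaches.append(r)
            blocked.add(attack)          # defender counter-adapts
    return blocked, breaches

blocked, breaches = coevolve()
early = sum(1 for r in breaches if r < 50)    # breaches in early rounds
late = sum(1 for r in breaches if r >= 150)   # breaches in late rounds
```

Run long enough, the late-round breach count falls well below the early-round count — the miniature version of the reliability gain Socher attributes to millions of iterations.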
Why this matters for the AI industry
Recursive Superintelligence joins a growing cohort of “neolabs” — AI startups that prioritize fundamental research over building consumer products. But Socher pushes back against that label. “I feel like we’re not just a lab,” he said. “I want us to become a really viable company, to have amazing products that people love to use.”
The company’s team includes prominent researchers such as Peter Norvig, a veteran of Google and NASA, and Tim Shi, co-founder of the customer service AI unicorn Cresta. Josh Tobin, an early OpenAI employee who led the Codex and deep research teams, is also on board. Socher indicated that the first products could arrive in “quarters, not years.”
The $650 million funding round, backed by investors including Greycroft and GV, underscores the market’s appetite for high-risk, high-reward AI research. It also signals that investors believe the race to achieve RSI is still wide open, despite the massive resources being deployed by OpenAI, Google DeepMind, and Anthropic.
Compute as the ultimate resource
One of the most provocative implications of recursive self-improvement is that, once achieved, compute power becomes the only meaningful constraint. If an AI can improve itself autonomously, the speed of its progress is determined entirely by how much processing power is available. Socher acknowledged this dynamic, framing it as a question of resource allocation for humanity.
“In the future, a really important question will be: how much compute does humanity want to spend to solve which problems?” he said. “Here’s this cancer and here’s that virus — which one do you want to solve first? It becomes a matter of resource allocation.”
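Socher's framing can be made concrete with a minimal budget-split sketch. The problems, weights, and budget below are hypothetical, and proportional allocation is just the simplest possible policy — real governance of compute would be far more involved:

```python
def allocate_compute(budget_gpu_hours, priorities):
    """Split a fixed compute budget across problems in proportion to
    their assigned priority weights (a deliberately simple policy)."""
    total = sum(priorities.values())
    return {p: budget_gpu_hours * w / total for p, w in priorities.items()}

# Hypothetical priorities, echoing Socher's cancer-vs-virus example.
plan = allocate_compute(1_000_000, {"cancer": 3, "virus": 2, "climate": 1})
```

Once self-improvement is autonomous, choosing the weights in a table like this — not engineering talent — becomes the decision that determines which problems get solved first.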
This framing shifts the debate from whether superintelligence is possible to how it should be governed — a conversation that Socher and his team are now helping to shape.
Conclusion
Recursive Superintelligence represents one of the most ambitious bets in contemporary AI research. With a blue-chip team, substantial funding, and a clearly defined technical approach centered on open-endedness, the company is positioning itself at the front of the quest for autonomous, self-improving AI. Whether or not it succeeds, the questions it is asking — about how intelligence can improve itself, and how humanity should allocate compute resources — are likely to define the next decade of AI development.
FAQs
Q1: What is recursive self-improvement in AI?
Recursive self-improvement refers to an AI system that can autonomously identify its own weaknesses and redesign itself to fix them, without human intervention. The entire cycle of ideation, implementation, and validation is handled by the AI itself.
Q2: How is Recursive Superintelligence different from OpenAI or DeepMind?
While major labs are also working on self-improving systems, Recursive Superintelligence is specifically focused on open-endedness — a process inspired by biological evolution where two or more AI systems co-evolve, creating increasingly complex capabilities over millions of iterations.
Q3: When will Recursive Superintelligence release its first product?
According to CEO Richard Socher, the first products could arrive in “quarters, not years.” The company plans to release tools that demonstrate its recursive self-improvement capabilities in practical applications.
