SAN FRANCISCO, April 9, 2026 – Amazon Web Services CEO Matt Garman has a straightforward response to critics questioning the cloud giant’s massive, simultaneous investments in rival AI labs OpenAI and Anthropic: get used to it. Speaking at the HumanX conference this week, Garman framed Amazon’s recent $50 billion commitment to OpenAI—following an $8 billion stake in Anthropic—not as a conflict, but as standard operating procedure for a platform business. His comments reveal the high-stakes, pragmatic calculus now defining the AI infrastructure war.
AWS’s Long History of Competing with Partners
According to Garman, AWS has built a “muscle” for managing these tensions. “We also knew that we would have to compete with our partners, because technology is interconnected,” he told the conference audience. This philosophy dates back to AWS’s launch in 2006. The unit initially relied on third-party software vendors to fill its catalog. However, Amazon often later developed its own competing services. Garman, who joined Amazon as an intern in 2005, has seen this playbook evolve for two decades.
The cloud market now operates on this accepted duality. Oracle, one of AWS’s fiercest rivals, sells its database software on the AWS marketplace. Similarly, countless software companies both rely on and compete with AWS’s native tools. Garman emphasized a core promise to partners: “We won’t give ourselves unfair competitive advantage.” This established dynamic, he argued, provides the framework for handling investments in competing AI model providers.
The Stakes Behind the Billions
For AWS, these investments are defensive and offensive. Data from Collaboration Research Group shows Microsoft Azure has gained cloud market share partly through its exclusive early access to OpenAI’s models. By bringing OpenAI’s models onto AWS—and deepening its existing partnership with Anthropic—Amazon directly counters Microsoft’s edge. The move also serves AWS customers demanding choice. “I think that is where the world will go,” Garman said, describing a future where businesses use multiple AI models for different tasks via routing services.
This model-routing strategy is key. A customer might use a powerful model for complex reasoning, a mid-tier model for planning, and a cheaper model for simple code completion. AWS’s Bedrock service is designed for this multi-model approach. By investing in both leading AI firms, AWS ensures its routing layer has top-tier options, keeping customers on its platform even as it develops its own in-house models like Titan.
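The routing pattern described above can be sketched in a few lines of Python. The model identifiers and task categories below are illustrative placeholders, not official Bedrock model IDs or AWS-published routing rules; in practice, a customer would pass the selected ID to Bedrock's runtime API.

```python
# Minimal sketch of task-based model routing (illustrative only).
# Model IDs are placeholders, not real Bedrock identifiers.

TASK_ROUTES = {
    "complex_reasoning": "frontier-model-v1",  # powerful, most expensive
    "planning":          "mid-tier-model-v1",  # balanced cost and capability
    "code_completion":   "small-model-v1",     # cheap, low latency
}

def route_model(task_type: str, default: str = "mid-tier-model-v1") -> str:
    """Return the model ID to invoke for a given task type,
    falling back to a mid-tier default for unrecognized tasks."""
    return TASK_ROUTES.get(task_type, default)

if __name__ == "__main__":
    print(route_model("complex_reasoning"))  # frontier-model-v1
    print(route_model("unknown_task"))       # mid-tier-model-v1
```

A production router would add cost tracking, latency targets, and fallback logic, but the core idea is the same: the platform, not the customer, decides which model serves each request.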
Investor Loyalty Takes a Back Seat in the AI Gold Rush
Amazon is far from alone in spreading bets. When Anthropic closed a $30 billion funding round in February 2026, its investor list included over a dozen firms also backing OpenAI. Notably, this group included Microsoft, OpenAI’s primary cloud partner and investor. This suggests a sector-wide consensus: securing access to frontier AI technology trumps traditional notions of investor exclusivity. The implication is clear. For cloud hyperscalers, AI model access is a strategic necessity, not an optional partnership.
Industry watchers note this creates a complex web of alliances. “Capital is chasing capability without regard for traditional competitive lines,” said a managing partner at a venture firm who requested anonymity due to relationships with all involved companies. “The risk of missing out on the next model breakthrough outweighs the risk of a conflicted portfolio.”
How AWS Manages the Inherent Tension
Garman’s explanation points to formal and informal guardrails. Formally, investment deals likely include specific provisions regarding data access, resource allocation, and roadmap sharing. Informally, AWS’s size and structure allow for separate, firewalled teams to engage with each AI partner. A product team working with Anthropic’s Claude models may operate independently from a team integrating OpenAI’s GPT series.
This could signal a new phase in cloud competition. The battle is less about raw compute power and more about curating the best AI ecosystem. AWS’s strategy is to be the agnostic platform hosting all major models, including its own. The risk, analysts say, is that one AI partner could perceive favoritism, especially as AWS pushes its proprietary models. Garman’s public statements are part of managing that perception.
The Bottom Line for Cloud Customers
For businesses, AWS’s dual investment strategy likely means more choice and stability. It reduces the risk of being locked into a single AI model provider tied to one cloud. However, it also adds complexity. Customers must deal with pricing, performance, and integration differences across multiple models on Bedrock. The promise from AWS is that its tools will manage that complexity. The reality will be tested as both OpenAI and Anthropic release increasingly advanced—and competitive—model generations.
What this means for investors is a validation of the platform model. AWS’s ability to monetize competing technologies on its infrastructure is a powerful revenue driver. Its massive investments are essentially pre-paid credits for AI compute, guaranteeing future consumption on its cloud. This locks in demand from the most capital-intensive sector in tech.
Conclusion
AWS’s Matt Garman has reframed the cloud giant’s $58 billion bet on both Anthropic and OpenAI. He presents it not as a conflict, but as the application of a long-standing partnership philosophy to the AI era. In a market where model access is critical, AWS is using its capital and platform to ensure it remains at the center of the AI ecosystem, even as it competes within it. The success of this high-wire act will be a defining story in cloud computing for the rest of the decade.
FAQs
Q1: How much has Amazon invested in OpenAI and Anthropic?
AWS has committed $50 billion to OpenAI and had previously invested $8 billion in Anthropic, for a total direct investment of $58 billion in the two competing AI companies.
Q2: Why does AWS see no conflict in funding both companies?
AWS CEO Matt Garman states the company has a long history of partnering with firms while also competing with them through its own products. He says AWS has built organizational processes to manage this dynamic and promises not to give itself an “unfair competitive advantage.”
Q3: Are other tech companies investing in competing AI firms?
Yes. For example, Microsoft is a major investor in OpenAI but also participated in Anthropic’s $30 billion funding round in February 2026. Multiple venture capital firms are also invested in both companies.
Q4: What is the business reason for AWS to make these investments?
The investments secure strategic access to leading AI models for AWS customers, help counter Microsoft Azure’s exclusive partnership with OpenAI, and drive usage of AWS’s AI platform and compute services.
Q5: How does this affect companies using AWS for AI?
Customers gain access to multiple top-tier AI models (OpenAI’s GPT series, Anthropic’s Claude, and AWS’s own models) on a single platform. This offers choice and may reduce lock-in, but requires managing multiple tools and pricing structures.