Why AWS Thinks Backing Both Anthropic and OpenAI Makes Perfect Sense | FOMO Daily
Amazon’s huge bets on both Anthropic and OpenAI may look like a conflict, but the real story is about control of the AI infrastructure layer. This piece explores why AWS is backing rival model makers at the same time, and how cloud power, distribution, and model routing are becoming just as important as the AI models themselves.
At first glance, it looks messy. Amazon has already invested a total of $8 billion in Anthropic, a relationship that makes AWS Anthropic's primary cloud and training partner. Then Amazon turned around and announced a new multi-year partnership with OpenAI that includes a $50 billion investment, OpenAI services on AWS, OpenAI Frontier distribution through AWS, and a major Trainium compute commitment. On paper, that looks like the kind of arrangement that should make everyone nervous. In practice, AWS boss Matt Garman says this is exactly the sort of tension the company already knows how to handle. Speaking at HumanX, he said AWS has spent years learning how to partner with companies it may also compete with, while promising not to give itself an unfair advantage.
That is the real heart of this story. The old tech world liked clean loyalties. One cloud. One strategic partner. One camp. The new AI world is starting to look nothing like that. In this market, the infrastructure giants do not want to bet on a single model company, because the stakes are too high and the market is moving too fast. AWS is not treating Anthropic and OpenAI like two relationships that need to be kept separate. It is treating them like two critical assets in the same larger AI power game.
AWS is playing an infrastructure game, not a loyalty game
That distinction matters. If you look at the official announcements, the OpenAI deal is not a small side partnership. Amazon says AWS and OpenAI will co-create a Stateful Runtime Environment powered by OpenAI models, make OpenAI Frontier available through AWS as the exclusive third-party cloud distribution provider, and have OpenAI consume around 2 gigawatts of Trainium capacity through AWS infrastructure. Amazon also said it would invest $50 billion in OpenAI, beginning with $15 billion and followed by another $35 billion when certain conditions are met.
At the same time, Anthropic’s own announcement from late 2024 said Amazon’s investment in Anthropic had reached $8 billion and that AWS had become Anthropic’s primary cloud and training partner. Anthropic also said it was working closely with AWS on future generations of Trainium hardware and on the underlying software stack. That means AWS is not just financially exposed to both companies. It is deeply embedded in the technical plumbing of both relationships.
Seen that way, the so-called conflict starts to look a lot more like a strategy. AWS does not want to be the cloud provider standing outside the two biggest frontier model camps while its rivals cash in. By partnering with both, it keeps itself central to the next phase of enterprise AI, regardless of which model family wins more mindshare, developer usage, or revenue. That is not a contradiction. That is a hedge with ambition.
The cloud giants are no longer neutral roads
There was a time when cloud platforms could pretend to be roads. You build the highway, then everyone else drives on it. AI is blowing that idea apart. The cloud giants are now roads, toll booths, vehicle manufacturers, and sometimes passengers as well. They are building chips, training stacks, model marketplaces, agent platforms, routing layers, and first-party AI products of their own. In that world, neutrality becomes more of a business posture than a literal reality.
Garman’s explanation fits that reality almost too neatly. He said AWS has long experience competing with companies it also partners with, and that the company has built the muscle to go to market with partners even when it has first-party offerings that overlap with them. TechCrunch also noted that AWS sees model routing as part of the future, where customers will use different models for different tasks based on performance and cost. Garman said one model may be best for planning, another for reasoning, and a cheaper one for simpler jobs like code completion.
That point is bigger than it sounds. If the future of AI inside the enterprise is a routed mix of models rather than one monolithic winner, then AWS has every reason to keep multiple model leaders close. It does not need one champion. It needs a portfolio. Anthropic gives it strength in one direction. OpenAI gives it strength in another. AWS’s own homegrown models and agent tooling can then sit in the middle, benefiting from the whole ecosystem. That is where the conflict story turns into a platform story.
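To make the routing idea concrete, here is a minimal sketch of what task-based model routing can look like in practice. The model names, capability scores, and prices below are hypothetical placeholders invented for illustration, not real AWS, Bedrock, Anthropic, or OpenAI identifiers or pricing; the point is only the selection logic Garman describes, where the platform picks the cheapest model that is good enough for the task.

```python
# Hypothetical sketch of task-based model routing.
# All model names, capability scores, and prices are made up for illustration.

from dataclasses import dataclass

@dataclass
class Model:
    name: str
    capability: int            # higher = stronger reasoning (hypothetical scale)
    cost_per_1k_tokens: float  # hypothetical price

CATALOG = [
    Model("frontier-planner", capability=9, cost_per_1k_tokens=0.015),
    Model("reasoning-mid", capability=6, cost_per_1k_tokens=0.003),
    Model("code-complete-small", capability=3, cost_per_1k_tokens=0.0004),
]

# Each task type demands a minimum capability level.
REQUIREMENTS = {"planning": 8, "reasoning": 5, "code_completion": 2}

def route(task_type: str) -> Model:
    """Pick the cheapest model that clears the capability bar for this task."""
    floor = REQUIREMENTS[task_type]
    eligible = [m for m in CATALOG if m.capability >= floor]
    return min(eligible, key=lambda m: m.cost_per_1k_tokens)

for task in ("planning", "reasoning", "code_completion"):
    print(task, "->", route(task).name)
```

The design choice that matters is who owns the `route` function. Whoever sits at that layer decides which model gets the traffic, which is exactly why AWS wants every serious model family available behind it.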
Anthropic and OpenAI are rivals, but AWS wants the traffic from both
This is what makes the arrangement feel so modern. Anthropic and OpenAI are fierce competitors, but AWS is not being asked to choose one and shut the door on the other. It is being asked to host both, help scale both, and sell access to both. That is not just about investment returns. It is about making sure AWS remains the place where enterprises come to build with whichever model stack they trust most.
The OpenAI deal makes that especially clear. OpenAI said AWS will be the exclusive third-party cloud distribution provider for OpenAI Frontier and that the two companies are jointly building a stateful runtime layer for production-grade agents. Amazon also said the partnership includes customised models for Amazon’s own customer-facing applications. That is a commercial partnership, an infrastructure partnership, and a product partnership all at once.
Anthropic’s relationship with AWS is different in tone but just as strategic. Anthropic said AWS is its primary cloud and training partner and described deep collaboration on the silicon and software stack, plus large scale Claude distribution through Amazon Bedrock. Anthropic also highlighted Bedrock customers such as Pfizer, Intuit, Perplexity, and the European Parliament using Claude through AWS. In other words, Anthropic is not just an investment on Amazon’s cap table. It is already a major workload and ecosystem asset inside AWS.
So from AWS’s point of view, the answer to the conflict question is simple. Why would it give up either stream of demand?
This is what the AI market looks like when the stakes get serious
There is also a wider lesson here. In the AI boom, people often talk as if companies must line up in neat opposing camps. But when infrastructure, chips, models, cloud distribution, enterprise integration, and capital are all colliding at once, those lines get blurry fast. A company can be a supplier, investor, customer, distributor, and competitor at the same time. That is messy, but it is also probably the natural shape of the market now.
The TechCrunch report even points out that Amazon is hardly alone in living with these overlaps. When Anthropic announced its latest round in February, the article said it included at least a dozen investors that also back OpenAI, including Microsoft, which is OpenAI’s main cloud partner. That does not mean conflicts disappear. It means AI is becoming too important for the biggest players to stay pure. Everyone wants optionality. Everyone wants access. Everyone wants a shot at upside without being locked out of the next big platform layer.
That is the part people sometimes miss when they frame this as hypocrisy. AWS is not pretending Anthropic and OpenAI are best mates. It is saying the cloud business has always involved working with companies that may also compete with you. The difference now is that the numbers are bigger, the rivals are more famous, and the consequences are far more visible.
Why this matters beyond Amazon
This matters because it says a lot about how AI power is being organised. The frontier model companies need enormous amounts of compute, capital, distribution, and enterprise trust. The cloud giants need the hottest models, the heaviest workloads, and the biggest customers. Neither side can fully dominate without the other. That creates a new kind of relationship where competition and cooperation sit in the same room and pretend not to be awkward.
It also hints at where the next battle may go. If model routing becomes common, and if enterprises increasingly mix models depending on the task, then the most powerful company may not be the one with the best single model. It may be the one that controls the operating layer where all those models get chosen, routed, priced, and integrated into real work. AWS clearly wants to be that layer. Backing both Anthropic and OpenAI helps it stay in the middle of that future rather than watching it form elsewhere. That conclusion is an inference, but it is strongly supported by AWS’s public partnerships, Bedrock distribution strategy, and Garman’s comments on multi-model routing.
The real story is not that AWS has found a clever excuse for an awkward conflict. The real story is that in AI, conflict is becoming part of the business model. Infrastructure giants do not want exclusive marriages. They want strategic access to every serious contender, plus the right to serve as the marketplace where those contenders meet enterprise demand.
That may sound cynical, but it is also rational. If AI really becomes the next foundational computing layer, then the company sitting closest to the traffic, the routing, the chips, and the enterprise workflows will be in one of the strongest positions of all. AWS is not confused about that. It is building for it. Anthropic and OpenAI may be rivals in the model race, but AWS is trying to make sure it wins the road they both have to drive on.