CoreWeave signs multi-year Anthropic deal as nine of ten top AI model providers join its platform



In short: CoreWeave announced a multi-year agreement with Anthropic on April 10, 2026, giving the Claude maker access to Nvidia GPU capacity across US data centres for production-scale AI workloads. Financial terms were not disclosed. The deal arrives one day after CoreWeave announced a $21 billion expansion of its Meta partnership, and adds Anthropic to a customer roster that now covers nine of the ten leading AI model providers. CoreWeave generated $5.13 billion in revenue in 2025 and is guiding for more than $12 billion in 2026, backed by a contracted backlog exceeding $66 billion.

Ex-crypto miner becomes AI’s landlord

CoreWeave was founded in 2017 as Atlantic Crypto, an Ethereum mining operation that bought Nvidia graphics processing units in bulk to mine cryptocurrency and rent spare GPU capacity to other miners. When crypto margins compressed in 2019, the company renamed itself CoreWeave and pivoted to general-purpose GPU-on-demand cloud services. The timing proved transformative: the AI model training boom that began in earnest in 2023 turned CoreWeave's stockpile of Nvidia hardware into one of the most strategically valuable infrastructure positions in technology.

The company went public on Nasdaq under the ticker CRWV on March 28, 2025, at $40 per share, raising $1.5 billion at a valuation of approximately $23 billion. CoreWeave now operates 32 data centres housing more than 250,000 GPUs, with 1.3 gigawatts of contracted power capacity. Its 2025 revenue of $5.13 billion represented a 168 per cent increase year-on-year, and management has guided for more than $12 billion in 2026 revenue against a contracted backlog that now exceeds $66 billion.

The company’s rapid growth has come with a significant concentration risk: Microsoft accounted for approximately 67 per cent of CoreWeave’s 2025 revenue, a dependence that investors and analysts flagged in the run-up to the IPO. Microsoft’s push to develop its own AI models adds a further strategic variable, raising the question of how much of Microsoft’s compute demand will eventually shift toward in-house infrastructure rather than third-party GPU cloud rental. The Anthropic deal, arriving the day after a $21 billion Meta expansion, represents CoreWeave’s most visible effort to build a diversified customer base that reduces its dependence on any single hyperscaler.

What Anthropic is paying for

Anthropic’s compute strategy has grown more complex alongside its revenue. The company’s annualised revenue run rate surpassed $30 billion in early April 2026, more than three times the $9 billion figure it recorded at the end of 2025. That rate of acceleration, driven by enterprise Claude adoption and the breakout growth of Claude Code, has required Anthropic to expand its infrastructure commitments across multiple chip architectures simultaneously. Its primary training workloads run on Amazon Web Services Trainium hardware via Project Rainier, a large-scale cluster spanning hundreds of thousands of AI chips across multiple US data centres. Three days before the CoreWeave announcement, Anthropic’s deal with Google and Broadcom secured access to approximately 3.5 gigawatts of next-generation tensor processing unit compute expected to come online in 2027.

The CoreWeave deal fills a third lane: Nvidia GPU capacity for production inference workloads, delivered at the scale and latency that enterprise Claude deployments require. Anthropic’s $100 million commitment to its Claude partner network earlier this year signalled the company’s intent to expand the ecosystem of developers and enterprises building on Claude, and that ecosystem expansion is now directly driving the compute procurement decisions behind deals like this one.

CoreWeave co-founder and CEO Michael Intrator framed the deal in terms that go beyond raw infrastructure capacity. “AI is no longer just about infrastructure, it’s about the platforms that turn models into real-world impact,” he said. “We’re excited to work with Anthropic at the centre of where models are put to work and performance in production shows up. It’s exactly the kind of real-world deployment of AI that CoreWeave was built for.” Anthropic will initially deploy compute under a phased infrastructure rollout, with the option to expand the arrangement over time. The specific Nvidia chip architectures involved have not been publicly disclosed, though CoreWeave’s estate spans current and next-generation Nvidia GPU generations. Nvidia’s Vera Rubin GPUs, unveiled at GTC 2026, represent the next major architecture in CoreWeave’s deployment roadmap, with volume shipments expected in the second half of 2026.

Nine of ten, two deals in 48 hours

The Anthropic agreement means that nine of the ten leading AI model providers now use CoreWeave’s platform, a market penetration figure the company cited in its press release. Alongside Microsoft, the customer roster includes Meta, OpenAI, Mistral, Cohere, IBM, and Nvidia itself, as well as a sub-leasing arrangement through which Microsoft supplies some CoreWeave capacity to third-party clients. The Meta relationship deepened significantly on April 9, 2026, one day before the Anthropic announcement: Meta committed an additional $21 billion to CoreWeave for dedicated AI cloud capacity running from 2027 through December 2032, bringing the total value of the two companies’ infrastructure relationship to approximately $35 billion. CoreWeave also expanded its agreement with OpenAI by up to $6.5 billion earlier in 2026. The two announcements in 48 hours illustrate how CoreWeave is converting its infrastructure position into long-duration contracted revenue rather than spot-market GPU rentals. CoreWeave raised $8.5 billion in a GPU-backed debt facility in March 2026, with the Meta relationship used as collateral. The Anthropic deal, while undisclosed in value, will contribute to a backlog that analysts are watching as the primary indicator of the company’s long-term revenue predictability.

The infrastructure tells a story about dependence

The same day CoreWeave announced the Anthropic agreement, reports emerged that Anthropic is exploring the design of its own custom AI chips — a move that would, if realised, eventually reduce its dependence on the Nvidia-powered infrastructure that CoreWeave provides. The irony is hard to miss: Anthropic’s current infrastructure commitments across AWS, Google Cloud, and now CoreWeave reflect a company that is simultaneously expanding compute dependency in the short term and exploring routes to architectural independence in the long term. That tension is not unique to Anthropic. Meta, OpenAI, and Google have all invested heavily in custom silicon programmes while continuing to rent third-party Nvidia capacity, because the timelines for custom chip maturity and the demand curve for AI compute do not align closely enough to allow a clean transition. CoreWeave’s position as the GPU landlord of choice for the AI industry is therefore both a statement about the current moment and a structural bet that Nvidia-native cloud capacity will remain competitively necessary for at least the duration of the contracts now being signed. As AI infrastructure spending accelerated through 2025, the GPU cloud market began to look less like a transitional gap-filler and more like a permanent layer of the AI stack, and CoreWeave, which signed two such deals in two days, is the clearest evidence of that shift.





As I’m writing this, NVIDIA is the largest company in the world, with a market cap exceeding $4 trillion. Team Green now sits atop the Magnificent Seven of the tech world, having surpassed every one of its tech-giant peers in just a few short years.

The company has managed to reach these incredible heights with smart planning and by making the right moves for decades, the latest being the decision to sell shovels during the AI gold rush. Considering the current hardware landscape, there’s simply no reason for NVIDIA to rush a new gaming GPU generation for at least a few years. Here’s why.

Scarcity has become the new normal

Not even NVIDIA is powerful enough to overcome market constraints

Global memory shortages have been a reality since late 2025, and they aren’t just a problem for RAM and storage manufacturers. The crunch affects every company making any product that contains memory or storage, graphics cards included.

NVIDIA sells GPU-and-memory bundles to its board partners, which solder them onto PCBs and add cooling to produce finished graphics cards. That means NVIDIA doesn’t just have to battle other tech giants for a chunk of TSMC’s limited production capacity to fabricate its GPU dies; it also has to procure massive amounts of graphics memory, which has never been harder or more expensive to obtain.

While a company as large as NVIDIA certainly has long-term contracts that guarantee stable memory prices, those contracts aren’t going to last forever. The company has likely had to sign new ones already, judging by the GPU price surge that began at the start of 2026 and has kept gaming graphics cards overpriced ever since.

With GPU memory costing more than ever, NVIDIA has little reason to rush a new gaming GPU generation, because its gaming earnings are just a drop in the bucket compared to its total earnings.

NVIDIA is an AI company now

Gaming GPUs are taking a back seat

A graph showing NVIDIA revenue breakdown in the last few years. Credit: appeconomyinsights.com

NVIDIA’s gaming division had been its golden goose for decades, but starting in 2022, revenue from its data center and AI business began to balloon dramatically. By the beginning of fiscal year 2023, data center and AI revenue had surpassed that of the gaming division.

In fiscal year 2026 (which began in late January 2025 and ends in late January 2026), NVIDIA’s gaming revenue has contributed less than 8% of the company’s total earnings so far, while the data center division has generated almost 90% of total revenue. What I’m trying to say is that NVIDIA is no longer a gaming company; it’s all about AI now.

Considering that we’re in the middle of the biggest memory shortage in history, and that its AI GPUs rake in almost ten times the revenue of gaming GPUs, there’s little reason for NVIDIA to funnel exorbitantly priced memory toward gaming GPUs. It’s much more profitable to put every memory chip it can get its hands on into AI GPU racks and keep collecting mountains of cash from the AI behemoths buying them.

The RTX 50 Super GPUs might never get released

A sign of times to come

NVIDIA’s RTX 50 Super series was supposed to increase the memory capacity of its most popular gaming GPUs. The 16GB RTX 5080 was to be superseded by a 24GB RTX 5080 Super, the 16GB RTX 5070 Ti was in line for a similar upgrade, and an 18GB RTX 5070 Super was to replace the 12GB non-Super model. But according to recent reports, NVIDIA has put the whole refresh on ice.

The RTX 50 Super launch had been slated for this year’s CES in January, but after missing the show, it now looks like NVIDIA has delayed the lineup indefinitely. According to a recent report, NVIDIA doesn’t plan to launch a single new gaming GPU in 2026. Worse still, the RTX 60 series, which had been expected to debut sometime in 2027, has also been delayed.

A report by The Information (via Tom’s Hardware) states that NVIDIA had finalized the design and specs of its RTX 50 Super refresh, but the RAM-pocalypse threw a wrench into the works, forcing the company to “deprioritize RTX 50 Super production.” In other words, it’s exactly what I said a few paragraphs ago: selling enterprise GPU racks to AI companies is far more lucrative than selling comparatively cheaper GPUs to gamers, especially now that memory prices have been skyrocketing.

Before putting the RTX 50 Super refresh on ice, NVIDIA had already slashed its gaming GPU supply by about a fifth and started prioritizing models with less VRAM, like the 8GB versions of the RTX 5060 and RTX 5060 Ti, so this news isn’t that surprising.

So when can we expect RTX 60 GPUs?

Late 2028-ish?


The good news is that the RTX 60 series is definitely in the pipeline, and we will see it sooner or later. The bad news is that its release date is up in the air, and it’s best not to even think about pricing. The word on the street around CES 2026 was that NVIDIA would release the RTX 60 series in mid-2027, give or take a few months. But as of this writing, it’s increasingly likely we won’t see RTX 60 GPUs until 2028.

If you’ve been following the discussion around memory shortages, this won’t be surprising. In late 2025, the prognosis was that the RAM-pocalypse wouldn’t end until 2027, maybe 2028. But a recent statement from the chairman of SK Hynix, one of the world’s three largest memory manufacturers, warns that the global memory shortage may last well into 2030.

If that turns out to be true, and if the global AI data center boom doesn’t slow down in the next few years, I wouldn’t be surprised if NVIDIA delays the RTX 60 GPUs as long as possible. There’s a good chance we won’t see them until the second half of 2028, and they could slip past even that window if memory supply doesn’t recover by then. Data center GPUs are simply too profitable for NVIDIA to reserve a meaningful portion of memory for gaming graphics cards as long as shortages persist.


At least current-gen gaming GPUs are still a great option for any PC gamer

If there is a silver lining here, it’s that current-gen gaming GPUs (NVIDIA’s RTX 50 series and AMD’s Radeon RX 9000 series) are still more than powerful enough for any current AAA title. With Sony reportedly delaying the PlayStation 6 and global PC shipments projected to see a sharp, double-digit decline in 2026, game developers have little incentive to push requirements beyond what current hardware can handle.

DLSS 5 may well be the future of gaming, but it hasn’t won many fans so far, and it will take a few years (and likely the arrival of the RTX 60 lineup) for it to mature and become usable on anything that’s not an RTX 5090.

If you’re open to buying used GPUs, even last-gen gaming graphics cards offer tons of performance and can handle any AAA game you throw at them. While we likely won’t get a new gaming GPU from NVIDIA for at least a few years, at least the ones we’ve got are great today and will continue to chew through any game for the foreseeable future.
