Meta commits another $21 billion to CoreWeave, bringing total AI cloud spend to $35 billion


In short: Meta has committed an additional $21 billion to CoreWeave for dedicated AI cloud capacity running from 2027 through December 2032, bringing the total value of the two companies’ infrastructure relationship to approximately $35 billion. The new contract will deliver early deployments of Nvidia’s Vera Rubin platform across multiple sites, and is designed specifically for inference workloads rather than training. Alongside the announcement, CoreWeave disclosed plans to raise $4.25 billion in new debt ($3 billion in convertible notes and $1.25 billion in junk bonds) to fund continued expansion. CoreWeave shares rose around 5% on the news; Meta shares gained roughly 3%.

From Ethereum mining to a $35 billion Meta relationship

CoreWeave was founded in 2017 in New Jersey as Atlantic Crypto, a side project of commodities traders mining Ethereum using graphics processing units. When the 2018 cryptocurrency crash made mining uneconomical and Ethereum’s eventual move to proof-of-stake threatened to render GPU mining obsolete entirely, the founders (Michael Intrator, Brian Venturo, and Brannin McBee) recognised that the GPU inventory they had accumulated was also exactly what machine learning researchers needed and could not easily access through conventional cloud providers. The company was renamed CoreWeave in 2019 and pivoted to GPU cloud infrastructure. It went public on March 28, 2025, at $40 per share, valuing it at $23 billion. Its 2025 revenue reached $5.13 billion, up 168% year on year, and its contracted backlog is estimated at more than $66 billion. The first Meta agreement, worth $14.2 billion and announced in September 2025, was the deal that established CoreWeave as a serious counterpart to the hyperscale cloud providers. The April 9, 2026 expansion (an additional $21 billion) makes Meta the most significant commercial relationship in CoreWeave’s history, with a combined commitment that will sustain the company’s revenue base through the end of the decade.

What Meta is actually buying

The contract is specifically structured around inference rather than training. Meta’s Llama model family is open-weight and freely downloadable, which means the capital-intensive training phase is largely complete before any cloud contract is signed; the ongoing cost is serving those models to billions of users in real time. Inference at Meta’s scale (hundreds of millions of daily active users across Facebook, Instagram, WhatsApp, and Meta AI) requires sustained, low-latency compute across distributed infrastructure in a way that Meta’s own data centres cannot always absorb at peak capacity. CoreWeave will deploy that capacity across multiple locations and will include some of the first commercial deployments of Nvidia’s Vera Rubin platform, which the chipmaker unveiled at GTC 2026 in March as the next generation of its AI infrastructure hardware. The new deal supplements rather than replaces Meta’s internal build-out. Meta has guided for $115 billion to $135 billion in capital expenditure in 2026, with AI infrastructure identified as the primary driver, and the company has been explicit that it is building both owned data centres and sourcing external capacity simultaneously. The CoreWeave expansion follows a $27 billion infrastructure deal Meta signed with Nebius in March 2026, under which the Dutch neocloud operator will supply dedicated compute starting in early 2027, also featuring early Vera Rubin deployments. The two deals together illustrate that Meta is not simply procuring cloud capacity but building a diversified multi-vendor infrastructure position designed to give it flexibility and redundancy at hyperscale.

The customer diversification play

For CoreWeave, the Meta expansion solves a problem that has shadowed the company since its IPO: excessive revenue concentration. Microsoft represented 62% of CoreWeave’s 2024 revenue, a figure that made institutional investors uncomfortable and that the company has been working to reduce. With the new Meta commitment in place, CoreWeave CEO Michael Intrator said no single customer would represent more than 35% of total sales. That is still a significant concentration, but it is a materially different risk profile from a position where a single hyperscale customer controls the majority of your revenue. Nvidia, which made a $2 billion strategic investment in Nebius in March 2026 and has deepened its commercial relationships with every major AI cloud provider, sits at the centre of CoreWeave’s business model: CoreWeave’s entire infrastructure is built around Nvidia GPUs, and the Vera Rubin deployments in the Meta contract will extend that dependency into the next hardware generation. CoreWeave also recently expanded its agreement with OpenAI by up to $6.5 billion, further broadening its customer base beyond Microsoft. The company’s stock reached an all-time high of $187 in mid-2025 before pulling back to around $65 in late 2025 amid broader concerns about AI investment returns; following the Meta expansion announcement it was trading in the $88 to $95 range.

The debt that funds it all


AI cloud infrastructure is expensive to build before contracts start generating revenue, and CoreWeave has funded its growth primarily through debt. Alongside the Meta deal announcement, the company disclosed plans to raise $4.25 billion in new financing: $3 billion in convertible senior notes due 2032, carrying a coupon of between 1.5% and 2%, with an option for investors to convert into equity; and $1.25 billion in senior unsecured notes due 2031 at approximately 10%, effectively junk-bond pricing. CoreWeave’s total debt load sits at around $30 billion, roughly triple what it was a year earlier. The company’s argument for the debt structure is that its contracted revenue base (more than $66 billion in backlog) provides sufficient visibility to service the obligations. Intrator has described CoreWeave as an “AI factory” whose capital costs are underwritten by long-term customer commitments before infrastructure is built. The broader AI infrastructure financing environment has been characterised by similarly large-scale debt structures: SoftBank secured a $40 billion bridge loan to fund its $30 billion follow-on OpenAI investment as part of the Stargate project, illustrating that the capital requirements of AI at scale are now large enough to require financing instruments that did not exist in this form even two years ago. The year 2025 cemented AI infrastructure as the primary competitive variable in the technology industry, and CoreWeave, a company that began as a closet of Ethereum mining rigs, has positioned itself as a load-bearing pillar of that infrastructure, one $21 billion commitment at a time.
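The annual interest burden implied by those reported terms is straightforward to estimate. A back-of-envelope sketch in Python, assuming the midpoint of the reported 1.5% to 2% coupon range for the convertible notes (the final coupon was not fixed at announcement):

```python
# Rough annual interest on CoreWeave's reported $4.25B raise.
# Coupon on the convertibles is an assumption: midpoint of the 1.5%-2% range.
convertible_principal = 3_000_000_000   # convertible senior notes due 2032
convertible_coupon = 0.0175             # assumed midpoint of 1.5%-2%
junk_principal = 1_250_000_000          # senior unsecured notes due 2031
junk_coupon = 0.10                      # ~10%, per the reported pricing

annual_interest = (convertible_principal * convertible_coupon
                   + junk_principal * junk_coupon)
print(f"~${annual_interest / 1e6:.0f}M per year")
```

On these assumptions the new tranche alone would cost on the order of $175M a year to service, which is the kind of obligation the company argues its $66 billion backlog comfortably covers.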





As I’m writing this, NVIDIA is the largest company in the world, with a market cap exceeding $4 trillion. Team Green is now the leader among the Magnificent Seven of the tech world, having surpassed them all in just a few short years.

The company has managed to reach these incredible heights with smart planning and by making the right moves for decades, the latest being the decision to sell shovels during the AI gold rush. Considering the current hardware landscape, there’s simply no reason for NVIDIA to rush a new gaming GPU generation for at least a few years. Here’s why.

Scarcity has become the new normal

Not even Nvidia is powerful enough to overcome market constraints

Global memory shortages have been a reality since late 2025, and they aren’t just affecting RAM and storage manufacturers. Rather, this impacts every company making any product that contains memory or storage—including graphics cards.

Since NVIDIA sells GPU and memory bundles to its partners, which they then solder onto PCBs and add cooling to create full-blown graphics cards, this means that NVIDIA doesn’t just have to battle other tech giants to secure a chunk of TSMC’s limited production capacity to produce its GPU chips. It also has to procure massive amounts of GPU memory, which has never been harder or more expensive to obtain.

While a company as large as NVIDIA certainly has long-term contracts that guarantee stable memory prices, those contracts won’t last forever. The company has likely had to sign new ones at higher rates, which would help explain the GPU price surge that began at the start of 2026 and has kept gaming graphics cards overpriced.

With GPU memory costing more than ever, NVIDIA has little reason to rush a new gaming GPU generation, because its gaming earnings are just a drop in the bucket compared to its total earnings.

NVIDIA is an AI company now

Gaming GPUs are taking a back seat

A graph showing NVIDIA revenue breakdown in the last few years. Credit: appeconomyinsights.com

NVIDIA’s gaming division had been its golden goose for decades, but come 2022, the company’s data center and AI division’s revenue started to balloon dramatically. By the beginning of fiscal year 2023, data center and AI revenue had surpassed that of the gaming division.

In fiscal year 2026 (which ended in late January 2026), NVIDIA’s gaming revenue contributed less than 8% of the company’s total earnings. The data center division, on the other hand, accounted for almost 90% of NVIDIA’s total revenue in fiscal year 2026. What I’m trying to say is that NVIDIA is no longer a gaming company; it’s all about AI now.

Considering that we’re in the middle of the biggest memory shortage in history, and that its AI GPUs rake in almost ten times the revenue of gaming GPUs, there’s little reason for NVIDIA to funnel exorbitantly priced memory toward gaming GPUs. It’s much more profitable to put every memory chip it can get its hands on into AI GPU racks and continue receiving mountains of cash by selling them to AI behemoths.

The RTX 50 Super GPUs might never get released

A sign of times to come

NVIDIA’s RTX 50 Super series was supposed to increase the memory capacity of its most popular gaming GPUs. The 16GB RTX 5080 was to be superseded by a 24GB RTX 5080 Super; the same fate awaited the 16GB RTX 5070 Ti, while an 18GB RTX 5070 Super was to replace its 12GB non-Super sibling. But according to recent reports, NVIDIA has put the refresh on ice.

The RTX 50 Super launch had been slated for this year’s CES in January, but after missing the show, it now looks like NVIDIA has delayed the lineup indefinitely. According to a recent report, NVIDIA doesn’t plan to launch a single new gaming GPU in 2026. Worse still, the RTX 60 series, which had been expected to debut sometime in 2027, has also been delayed.

A report by The Information (via Tom’s Hardware) states that NVIDIA had finalized the design and specs of its RTX 50 Super refresh, but the RAM-pocalypse threw a wrench into the works, forcing the company to “deprioritize RTX 50 Super production.” In other words, it’s exactly what I said a few paragraphs ago: selling enterprise GPU racks to AI companies is far more lucrative than selling comparatively cheaper GPUs to gamers, especially now that memory prices have been skyrocketing.

Before putting the RTX 50 Super series on ice, NVIDIA had already slashed its gaming GPU supply by about a fifth and started prioritizing models with less VRAM, like the 8GB versions of the RTX 5060 and RTX 5060 Ti, so this news isn’t that surprising.

So when can we expect RTX 60 GPUs?

Late 2028-ish?

A GPU with a pile of money around it. Credit: Lucas Gouveia / How-To Geek

The good news is that the RTX 60 series is definitely in the pipeline, and we will see it sooner or later. The bad news is that its release date is up in the air, and it’s best not to even think about pricing. The word on the street around CES 2026 was that NVIDIA would release the RTX 60 series in mid-2027, give or take a few months. But as of this writing, it’s increasingly likely we won’t see RTX 60 GPUs until 2028.

If you’ve been following the discussion around memory shortages, this won’t be surprising. In late 2025, the prognosis was that we wouldn’t see the end of the RAM-pocalypse until 2027, maybe 2028. But a recent statement by the chairman of SK Hynix, one of the world’s three largest memory manufacturers, warns that the global memory shortage may last well into 2030.

If that turns out to be true, and if the global AI data center boom doesn’t slow down in the next few years, I wouldn’t be surprised if NVIDIA delays the RTX 60 GPUs as long as possible. There’s a good chance we won’t see them until the second half of 2028, and I wouldn’t be surprised if they miss that window as well if memory supply doesn’t recover by then. Data center GPUs are simply too profitable for NVIDIA to reserve a meaningful portion of memory for gaming graphics cards as long as shortages persist.


At least current-gen gaming GPUs are still a great option for any PC gamer

If there is a silver lining here, it is that current-gen gaming GPUs (NVIDIA RTX 50 and AMD Radeon RX 9000) are still more than powerful enough for any current AAA title. Considering that Sony is reportedly delaying the PlayStation 6 and that global PC shipments are projected to see a sharp, double-digit decline in 2026, game developers have little incentive to push requirements beyond what current hardware can handle.

DLSS 5 may well be the future of gaming, but it has been poorly received so far, and it will take a few years (and likely the arrival of the RTX 60 lineup) for it to mature and become usable on anything that’s not a heckin’ RTX 5090.

If you’re open to buying used GPUs, even last-gen gaming graphics cards offer tons of performance and can handle any AAA game you throw at them. While we likely won’t get a new gaming GPU from NVIDIA for at least a few years, at least the ones we’ve got are great today and will continue to chew through any game for the foreseeable future.


