Nvidia’s $2 billion Marvell bet is not an investment. It is a toll booth.



Nvidia has invested $2 billion in Marvell Technology and folded the chipmaker into its NVLink Fusion ecosystem, creating a partnership that covers custom AI accelerators, silicon photonics, and 5G/6G infrastructure. The deal ensures that every custom chip Marvell designs for hyperscalers like Amazon, Google, and Microsoft still generates Nvidia revenue through mandatory platform components, turning what looked like a competitive threat into an ecosystem tax.

Nvidia announced on Monday that it has invested $2 billion in Marvell Technology and entered a strategic partnership centred on NVLink Fusion, the rack-scale platform that allows third-party silicon to plug directly into Nvidia’s proprietary interconnect fabric. Marvell’s stock surged nearly 13 per cent on the news. Nvidia’s rose 5.6 per cent. The market read it as a deal. The more accurate reading is that it is infrastructure policy, written in silicon.

The partnership has Marvell supplying custom XPUs and NVLink Fusion-compatible scale-up networking, while Nvidia provides everything else: Vera CPUs, ConnectX network interface cards, BlueField data processing units, NVLink interconnect, and Spectrum-X switches.

The two companies will also collaborate on silicon photonics, the technology that uses light instead of copper to move data between chips at the speeds that next-generation AI clusters demand. Jensen Huang framed it in characteristically expansive terms. “The inference inflection has arrived,” the Nvidia chief executive said. “Token generation demand is surging, and the world is racing to build AI factories.”

The strategic subtlety sits in the architecture of NVLink Fusion itself. Every NVLink Fusion platform must include at least one Nvidia product, whether a CPU, GPU, or switch. Nvidia also controls which partners receive NVLink IP licences. This means that the custom AI accelerators Marvell designs for hyperscalers, the very chips these customers commission specifically to reduce their dependence on Nvidia GPUs, will still generate Nvidia revenue on every rack deployed. It is, as Tom’s Hardware put it, a tax on custom ASICs.
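The platform rule described above can be expressed as a trivial check. The sketch below is purely illustrative: the component names and the validation function are assumptions modelled on the article’s description of NVLink Fusion, not any real Nvidia API or specification.

```python
# Illustrative sketch only: component names and the rule below are assumptions
# based on the article's description of NVLink Fusion, not a real Nvidia API.

NVIDIA_COMPONENTS = {"Vera CPU", "Blackwell GPU", "Spectrum-X switch",
                     "ConnectX NIC", "BlueField DPU"}

def is_valid_fusion_rack(components):
    """Per the rule described above, a rack qualifies as an NVLink Fusion
    platform only if it contains at least one Nvidia part."""
    return any(part in NVIDIA_COMPONENTS for part in components)

# A hyperscaler rack built around a Marvell custom XPU still needs at least
# one Nvidia component to plug into the proprietary interconnect fabric:
print(is_valid_fusion_rack(["Marvell custom XPU", "Spectrum-X switch"]))  # True
print(is_valid_fusion_rack(["Marvell custom XPU", "third-party switch"]))  # False
```

However simplistic, the check captures the commercial logic: the `any(...)` condition is the toll booth, and every valid configuration pays it.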

The deal deepens a pattern that has become unmistakable. Nvidia has made a series of $2 billion investments in recent months, including stakes in CoreWeave, Nebius, Synopsys, Coherent, and Lumentum. Each targets a different layer of the AI infrastructure stack that is being built at unprecedented speed: cloud providers, chip design tools, optical networking components, and now custom silicon. The common thread is that each investment makes the recipient more dependent on Nvidia’s platform while Nvidia gains both financial exposure to and architectural influence over potential competitors.

Marvell is a particularly interesting target because its fastest-growing business is designing the custom AI accelerators that hyperscalers use to displace Nvidia GPUs. The company’s custom AI XPU business generated $1.5 billion in fiscal 2026 revenue and is expected to double by fiscal 2028. Marvell currently has 18 active custom silicon projects, including 12 devices for Amazon, Google, Microsoft, and Meta, and six for emerging AI customers.

Amazon’s Trainium chips, Microsoft’s Maia accelerators, and Google’s TPUs all flow through Marvell’s design capabilities. By investing $2 billion and pulling Marvell into NVLink Fusion, Nvidia has effectively ensured that the company building its competitors’ weapons is also paying Nvidia for the ammunition.

NVLink Fusion’s partner roster has expanded rapidly since its debut at Computex. Samsung Foundry joined in October to offer manufacturing support on its 3nm and 2nm nodes. Arm entered in November, enabling its licensees to build CPUs with native NVLink connectivity. SiFive joined in January, bringing RISC-V into the ecosystem. Fujitsu, Qualcomm, MediaTek, Alchip, Astera Labs, Synopsys, and Cadence were among the original partners.

The breadth of the list is the point: NVLink Fusion is becoming the default interconnect standard for custom AI silicon, not because it is open, but because Nvidia’s software ecosystem, particularly CUDA, makes it the path of least resistance for customers who need their hardware to work immediately.

The open alternative, the Ultra Accelerator Link consortium backed by AMD, Intel, Broadcom, Cisco, Google, HPE, Meta, and Microsoft, is designed to break exactly this kind of lock-in. But UALink faces what analysts describe as a crisis of the commons: its members have competing priorities, its 128G specification launch trails the pace of accelerator deployment, and several of its key members now have Nvidia money on their balance sheets. Nvidia’s financial stakes in companies nominally committed to an open standard raise legitimate questions about whether that standard can develop at the speed needed to offer a genuine alternative.

For Marvell’s chief executive Matt Murphy, the deal addresses a practical constraint. “By connecting Marvell’s leadership in high-performance analog, optical DSP, silicon photonics, and custom silicon to Nvidia’s expanding AI ecosystem through NVLink Fusion,” Murphy said, “we are enabling customers to build scalable, efficient AI infrastructure.”

The translation: Marvell’s hyperscaler customers want custom chips that work seamlessly with the Nvidia infrastructure already deployed in their data centres, and NVLink Fusion is how that happens.

The silicon photonics component may prove the most consequential element of the partnership in the medium term. As AI clusters scale to hundreds of thousands of GPUs, the copper interconnects that have served the industry for decades are approaching fundamental bandwidth and energy limits. Optical interconnects can move data faster and more efficiently, but the technology remains expensive and difficult to manufacture at scale. Nvidia and Marvell collaborating on silicon photonics positions both companies at the centre of what could become the next critical bottleneck in AI infrastructure, after chips and after power.

The 5G and 6G dimensions of the partnership, encompassing what Nvidia calls AI-RAN infrastructure, signal an ambition that extends beyond the data centre entirely. If wireless networks increasingly rely on AI for signal processing and resource allocation, the base station becomes another compute node in the Nvidia ecosystem, running on Nvidia platforms with Marvell connectivity. It is the kind of horizontal expansion that turns a chip company into an infrastructure company.

Nvidia still commands roughly 90 per cent of the data centre GPU and AI accelerator market. The semiconductor industry generated $791.7 billion in sales in 2025 and is forecast to grow another 26 per cent in 2026. Against that backdrop, the commercial AI market is accelerating faster than anyone projected, and the companies racing to build it need hardware that works now, not hardware that might work when an open standard catches up. That urgency is Nvidia’s greatest asset and NVLink Fusion’s most effective sales pitch.
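A quick back-of-the-envelope check on that forecast, using the article’s own figures:

```python
# Sanity check of the forecast cited above: $791.7B in 2025 semiconductor
# sales, forecast to grow another 26% in 2026.
sales_2025 = 791.7  # billions of USD
forecast_2026 = sales_2025 * 1.26
print(round(forecast_2026, 1))  # 997.5 -- just shy of $1 trillion
```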

The $2 billion is a rounding error on Nvidia’s balance sheet. What it buys is something no amount of R&D spending can replicate: the architectural certainty that even the chips designed to replace Nvidia will be built inside an Nvidia-controlled ecosystem. It is not a partnership in any conventional sense. It is a toll booth on the only road that leads to the fastest-growing market in technology.





As I’m writing this, NVIDIA is the largest company in the world, with a market cap exceeding $4 trillion. Team Green is now the leader among the Magnificent Seven of the tech world, having surpassed them all in just a few short years.

The company has managed to reach these incredible heights with smart planning and by making the right moves for decades, the latest being the decision to sell shovels during the AI gold rush. Considering the current hardware landscape, there’s simply no reason for NVIDIA to rush a new gaming GPU generation for at least a few years. Here’s why.

Scarcity has become the new normal

Not even Nvidia is powerful enough to overcome market constraints

Global memory shortages have been a reality since late 2025, and they aren’t just affecting RAM and storage manufacturers. Rather, this impacts every company making any product that contains memory or storage—including graphics cards.

Since NVIDIA sells GPU-and-memory bundles to its partners, which they then solder onto PCBs and fit with cooling to create full-blown graphics cards, NVIDIA doesn’t just have to battle other tech giants to secure a chunk of TSMC’s limited production capacity for its GPU chips. It also has to procure massive amounts of GPU memory, which has never been harder or more expensive to obtain.

While a company as large as NVIDIA certainly has long-term contracts that guarantee stable memory prices, those contracts aren’t going to last forever. The company has likely had to sign new ones already, judging by the GPU price surge that began in early 2026 and has kept gaming graphics cards overpriced ever since.

With GPU memory costing more than ever, NVIDIA has little reason to rush a new gaming GPU generation, because its gaming earnings are just a drop in the bucket compared to its total earnings.

NVIDIA is an AI company now

Gaming GPUs are taking a back seat

[Chart: NVIDIA revenue breakdown over the last few years. Credit: appeconomyinsights.com]

NVIDIA’s gaming division had been its golden goose for decades, but come 2022, the company’s data center and AI division’s revenue started to balloon dramatically. By the beginning of fiscal year 2023, data center and AI revenue had surpassed that of the gaming division.

In fiscal year 2026 (which for NVIDIA began in late January 2025 and ends in late January 2026), NVIDIA’s gaming revenue has contributed less than 8% of the company’s total earnings so far. The data center division, on the other hand, has made up almost 90% of NVIDIA’s total revenue in fiscal year 2026. What I’m trying to say is that NVIDIA is no longer a gaming company—it’s all about AI now.

Considering that we’re in the middle of the biggest memory shortage in history, and that its AI GPUs rake in almost ten times the revenue of its gaming GPUs, there’s little reason for NVIDIA to funnel exorbitantly priced memory toward gaming cards. It’s much more profitable to put every memory chip it can get its hands on into AI GPU racks and keep raking in mountains of cash by selling them to AI behemoths.
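That “almost ten times” figure follows directly from the revenue shares cited above; here’s a quick check using the article’s own rough numbers:

```python
# Rough ratio check using the article's figures: gaming contributes under 8%
# of NVIDIA's fiscal 2026 revenue, data center almost 90%.
gaming_share = 0.08
data_center_share = 0.90

ratio = data_center_share / gaming_share
print(f"{ratio:.1f}")  # 11.2 -- so "almost ten times" is, if anything, conservative
```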

The RTX 50 Super GPUs might never get released

A sign of times to come

NVIDIA’s RTX 50 Super series was supposed to increase the memory capacity of its most popular gaming GPUs. The 16GB RTX 5080 was to be superseded by a 24GB RTX 5080 Super; the 16GB RTX 5070 Ti was due a similar 24GB upgrade, while an 18GB RTX 5070 Super was to replace the 12GB non-Super model. But according to recent reports, NVIDIA has put the lineup on ice.

The RTX 50 Super launch had been slated for this year’s CES in January, but after missing the show, it now looks like NVIDIA has delayed the lineup indefinitely. According to a recent report, NVIDIA doesn’t plan to launch a single new gaming GPU in 2026. Worse still, the RTX 60 series, which had been expected to debut sometime in 2027, has also been delayed.

A report by The Information (via Tom’s Hardware) states that NVIDIA had finalized the design and specs of its RTX 50 Super refresh, but the RAM-pocalypse threw a wrench into the works, forcing the company to “deprioritize RTX 50 Super production.” In other words, it’s exactly what I said a few paragraphs ago: selling enterprise GPU racks to AI companies is far more lucrative than selling comparatively cheaper GPUs to gamers, especially now that memory prices have been skyrocketing.

Before putting the RTX 50 Super series on ice, NVIDIA had already slashed its gaming GPU supply by about a fifth and started prioritizing models with less VRAM, like the 8GB versions of the RTX 5060 and RTX 5060 Ti, so this news isn’t that surprising.

So when can we expect RTX 60 GPUs?

Late 2028-ish?


The good news is that the RTX 60 series is definitely in the pipeline, and we will see it sooner or later. The bad news is that its release date is up in the air, and it’s best not to even think about pricing. The word on the street around CES 2026 was that NVIDIA would release the RTX 60 series in mid-2027, give or take a few months. But as of this writing, it’s increasingly likely we won’t see RTX 60 GPUs until 2028.

If you’ve been following the discussion around memory shortages, this won’t be surprising. In late 2025, the prognosis was that we wouldn’t see the end of the RAM-pocalypse until 2027, maybe 2028. But a recent statement by the chairman of SK Hynix (one of the world’s three largest memory manufacturers) warns that the global memory shortage may last well into 2030.

If that turns out to be true, and if the global AI data center boom doesn’t slow down in the next few years, I wouldn’t be surprised if NVIDIA delays the RTX 60 GPUs as long as possible. There’s a good chance we won’t see them until the second half of 2028, and I wouldn’t be surprised if they miss that window as well if memory supply doesn’t recover by then. Data center GPUs are simply too profitable for NVIDIA to reserve a meaningful portion of memory for gaming graphics cards as long as shortages persist.


At least current-gen gaming GPUs are still a great option for any PC gamer

If there is a silver lining here, it is that current-gen gaming GPUs (NVIDIA RTX 50 and AMD Radeon RX 90) are still more than powerful enough for any current AAA title. Considering that Sony is reportedly delaying the PlayStation 6 and that global PC shipments are projected to see a sharp, double-digit decline in 2026, game developers have little incentive to push requirements beyond what current hardware can handle.

DLSS 5 may well be the future of gaming, but it remains widely disliked, and it will take a few years (and likely the arrival of the RTX 60 lineup) for it to mature and become usable on anything that’s not a heckin’ RTX 5090.

If you’re open to buying used GPUs, even last-gen gaming graphics cards offer tons of performance and can handle any AAA game you throw at them. While we likely won’t get a new gaming GPU from NVIDIA for at least a few years, at least the ones we’ve got are great today and will continue to chew through any game for the foreseeable future.


