Nvidia’s $2 billion Marvell bet is not an investment. It is a toll booth.



Nvidia has invested $2 billion in Marvell Technology and folded the chipmaker into its NVLink Fusion ecosystem, creating a partnership that covers custom AI accelerators, silicon photonics, and 5G/6G infrastructure. The deal ensures that every custom chip Marvell designs for hyperscalers like Amazon, Google, and Microsoft still generates Nvidia revenue through mandatory platform components, turning what looked like a competitive threat into an ecosystem tax.

Nvidia announced on Monday that it has invested $2 billion in Marvell Technology and entered a strategic partnership centred on NVLink Fusion, the rack-scale platform that allows third-party silicon to plug directly into Nvidia’s proprietary interconnect fabric. Marvell’s stock surged nearly 13 per cent on the news. Nvidia’s rose 5.6 per cent. The market read it as a deal. The more accurate reading is that it is infrastructure policy, written in silicon.

The partnership has Marvell supplying custom XPUs and NVLink Fusion-compatible scale-up networking, while Nvidia provides everything else: Vera CPUs, ConnectX network interface cards, BlueField data processing units, NVLink interconnect, and Spectrum-X switches.

The two companies will also collaborate on silicon photonics, the technology that uses light instead of copper to move data between chips at the speeds that next-generation AI clusters demand. Jensen Huang framed it in characteristically expansive terms. “The inference inflection has arrived,” the Nvidia chief executive said. “Token generation demand is surging, and the world is racing to build AI factories.”

The strategic subtlety sits in the architecture of NVLink Fusion itself. Every NVLink Fusion platform must include at least one Nvidia product, whether a CPU, GPU, or switch. Nvidia also controls which partners receive NVLink IP licences. This means that the custom AI accelerators Marvell designs for hyperscalers, the very chips these customers commission specifically to reduce their dependence on Nvidia GPUs, will still generate Nvidia revenue on every rack deployed. It is, as Tom’s Hardware put it, a tax on custom ASICs.
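The platform rule described above can be reduced to a single constraint: no Nvidia component, no NVLink Fusion rack. A purely illustrative sketch of that gatekeeping logic, with hypothetical component labels rather than real SKUs:

```python
# Purely illustrative: models the NVLink Fusion rule described above —
# every platform must include at least one Nvidia product (CPU, GPU, or
# switch). Component names here are hypothetical labels, not real SKUs.

NVIDIA_COMPONENTS = {"vera_cpu", "nvidia_gpu", "spectrum_x_switch",
                     "connectx_nic", "bluefield_dpu"}

def is_valid_fusion_platform(components: set[str]) -> bool:
    """A platform qualifies only if it contains at least one Nvidia part."""
    return bool(components & NVIDIA_COMPONENTS)

# A rack built around a Marvell custom XPU still needs an Nvidia piece
# somewhere in it to join the ecosystem:
marvell_rack = {"marvell_custom_xpu", "spectrum_x_switch"}
all_custom_rack = {"marvell_custom_xpu", "third_party_switch"}

print(is_valid_fusion_platform(marvell_rack))     # True
print(is_valid_fusion_platform(all_custom_rack))  # False
```

The check is deliberately trivial: the point of the architecture, as the article notes, is that the toll is structural rather than negotiable.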

The deal deepens a pattern that has become unmistakable. Nvidia has made a series of $2 billion investments in recent months, including stakes in CoreWeave, Nebius, Synopsys, Coherent, and Lumentum. Each targets a different layer of the AI infrastructure stack that is being built at unprecedented speed: cloud providers, chip design tools, optical networking components, and now custom silicon. The common thread is that each investment makes the recipient more dependent on Nvidia’s platform while Nvidia gains both financial exposure to and architectural influence over potential competitors.

Marvell is a particularly interesting target because its fastest-growing business is designing the custom AI accelerators that hyperscalers use to displace Nvidia GPUs. The company’s custom AI XPU business generated $1.5 billion in fiscal 2026 revenue and is expected to double by fiscal 2028. Marvell currently has 18 active custom silicon projects, including 12 devices for Amazon, Google, Microsoft, and Meta, and six for emerging AI customers.
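The growth figures above imply a striking compound rate. A quick back-of-envelope calculation, using only the numbers cited ($1.5 billion in fiscal 2026, doubling by fiscal 2028):

```python
# Implied growth rate behind the figures above: Marvell's custom AI XPU
# revenue of $1.5B in fiscal 2026, expected to double by fiscal 2028.
fy26_revenue_bn = 1.5
fy28_revenue_bn = fy26_revenue_bn * 2                       # "expected to double"
cagr = (fy28_revenue_bn / fy26_revenue_bn) ** (1 / 2) - 1   # over two fiscal years
print(f"FY28 revenue: ${fy28_revenue_bn:.1f}B, implied CAGR ≈ {cagr:.0%}")
# → FY28 revenue: $3.0B, implied CAGR ≈ 41%
```

Roughly 41 per cent annual growth is the business Nvidia has just bought a stake in.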

Amazon’s Trainium chips, Microsoft’s Maia accelerators, and Google’s TPUs all flow through Marvell’s design capabilities. By investing $2 billion and pulling Marvell into NVLink Fusion, Nvidia has effectively ensured that the company building its competitors’ weapons is also paying Nvidia for the ammunition.

NVLink Fusion’s partner roster has expanded rapidly since its debut at Computex. Samsung Foundry joined in October to offer manufacturing support on its 3nm and 2nm nodes. Arm entered in November, enabling its licensees to build CPUs with native NVLink connectivity. SiFive joined in January, bringing RISC-V into the ecosystem. Fujitsu, Qualcomm, MediaTek, Alchip, Astera Labs, Synopsys, and Cadence were among the original partners.

The breadth of the list is the point: NVLink Fusion is becoming the default interconnect standard for custom AI silicon, not because it is open, but because Nvidia’s software ecosystem, particularly CUDA, makes it the path of least resistance for customers who need their hardware to work immediately.

The open alternative, the Ultra Accelerator Link consortium backed by AMD, Intel, Broadcom, Cisco, Google, HPE, Meta, and Microsoft, is designed to break exactly this kind of lock-in. But UALink faces what analysts describe as a crisis of the commons: its members have competing priorities, its 128G specification launch trails the pace of accelerator deployment, and several of its key members now have Nvidia money on their balance sheets. Nvidia’s financial stakes in companies nominally committed to an open standard raise legitimate questions about whether that standard can develop at the speed needed to offer a genuine alternative.

For Marvell’s chief executive Matt Murphy, the deal addresses a practical constraint. “By connecting Marvell’s leadership in high-performance analog, optical DSP, silicon photonics, and custom silicon to Nvidia’s expanding AI ecosystem through NVLink Fusion,” Murphy said, “we are enabling customers to build scalable, efficient AI infrastructure.”

The translation: Marvell’s hyperscaler customers want custom chips that work seamlessly with the Nvidia infrastructure already deployed in their data centres, and NVLink Fusion is how that happens.

The silicon photonics component may prove the most consequential element of the partnership in the medium term. As AI clusters scale to hundreds of thousands of GPUs, the copper interconnects that have served the industry for decades are approaching fundamental bandwidth and energy limits. Optical interconnects can move data faster and more efficiently, but the technology remains expensive and difficult to manufacture at scale. Nvidia and Marvell collaborating on silicon photonics positions both companies at the centre of what could become the next critical bottleneck in AI infrastructure, after chips and after power.

The 5G and 6G dimensions of the partnership, encompassing what Nvidia calls AI-RAN infrastructure, signal an ambition that extends beyond the data centre entirely. If wireless networks increasingly rely on AI for signal processing and resource allocation, the base station becomes another compute node in the Nvidia ecosystem, running on Nvidia platforms with Marvell connectivity. It is the kind of horizontal expansion that turns a chip company into an infrastructure company.

Nvidia still commands roughly 90 per cent of the data centre GPU and AI accelerator market. The semiconductor industry generated $791.7 billion in sales in 2025 and is forecast to grow another 26 per cent in 2026. Against that backdrop, the commercial AI market is accelerating faster than anyone projected, and the companies racing to build it need hardware that works now, not hardware that might work when an open standard catches up. That urgency is Nvidia’s greatest asset and NVLink Fusion’s most effective sales pitch.
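Working through the forecast cited above puts the 2026 market within sight of a symbolic threshold:

```python
# Back-of-envelope check on the market figures cited above: 2025
# semiconductor sales of $791.7B, forecast to grow another 26% in 2026.
sales_2025_bn = 791.7
growth_2026 = 0.26
sales_2026_bn = sales_2025_bn * (1 + growth_2026)
print(f"Projected 2026 sales: ${sales_2026_bn:.1f}B")
# → Projected 2026 sales: $997.5B
```

If the forecast holds, the industry brushes against $1 trillion in annual sales in 2026.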

The $2 billion is a rounding error on Nvidia’s balance sheet. What it buys is something no amount of R&D spending can replicate: the architectural certainty that even the chips designed to replace Nvidia will be built inside an Nvidia-controlled ecosystem. It is not a partnership in any conventional sense. It is a toll booth on the only road that leads to the fastest-growing market in technology.


