VAST Data raises $1B at $30B valuation with Nvidia backing as AI data infrastructure demand accelerates


Summary: VAST Data raised $1 billion in a Series F at a $30 billion valuation, more than tripling from $9.1 billion, with Drive Capital and Access Industries co-leading and Nvidia, Fidelity, and NEA participating. More than $500 million is secondary capital. The company reports $4 billion in cumulative bookings, $500 million-plus in committed ARR, and is free cash flow positive with revenue roughly tripling year over year. Key customers include xAI’s 200,000-GPU Colossus cluster and CoreWeave’s $1.17 billion agreement.

VAST Data raised $1 billion in a Series F round at a $30 billion valuation, more than tripling the $9.1 billion it was valued at in its Series E in late 2023. Drive Capital and Access Industries co-led the round, with Nvidia, Fidelity Management and Research Company, and NEA participating. More than $500 million of the total is secondary capital, meaning it goes to early investors and employees selling shares rather than into the company’s treasury, a structure that relieves liquidity pressure on long-tenured shareholders and reduces the urgency of an IPO. The round makes VAST Data the most valuable private technology company founded in Israel, a distinction that opened up after Google’s $32 billion acquisition of Wiz in March removed the previous record-holder from the private market.

The valuation is striking not because a company raised a billion dollars in 2026, a year in which record AI funding rounds have reshaped expectations of what venture-scale capital looks like, but because VAST Data sells data infrastructure, the layer of the AI stack that sits between the GPUs and the models. It is not a foundation model company. It is not a cloud provider. It is the company that ensures the data reaches the processors fast enough to keep them busy. Jensen Huang, Nvidia’s chief executive, recorded a personal endorsement at VAST’s Forward 2026 conference, stating that “with VAST Data, we’re transforming the storage of AI infrastructure” and explaining that without VAST’s technology, even the fastest AI processors face severe data bottlenecks. When the company that makes the GPUs tells you the GPUs are useless without a particular data platform, investors listen.

What VAST Data actually does

VAST Data provides what it calls an AI operating system that unifies storage, database, and compute into a single platform. The core architecture, called DASE (Disaggregated and Shared Everything), was announced when the company emerged from stealth in February 2019. It is flash-first and single-tier, eliminating the traditional storage hierarchy in which data moves between fast, expensive tiers and slow, cheap ones. For AI workloads, where training runs consume petabytes of data at sustained high throughput, the elimination of tiering removes a bottleneck that legacy storage systems were never designed to handle.

The platform has expanded well beyond storage. VAST DataSpace provides a globally distributed namespace across on-premises, cloud, and edge locations, scaling to exabytes and trillions of files. VAST InsightEngine automates real-time AI pipelines, handling chunking, embedding, vectorisation, and retrieval for retrieval-augmented generation, semantic search, and classification. VAST DataBase includes an integrated vector store that the company claims supports trillion-vector scale with constant-time search. VAST CNode-X, an Nvidia-certified system, makes GPU servers first-class infrastructure components inside the platform, with a fully CUDA-accelerated version of the operating system designed to run directly on Nvidia-powered servers. The pitch is that VAST is not a storage company that added AI features. It is a data platform that was built for AI from the beginning, and the storage is just the foundation.
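The ingestion pattern described in this paragraph for retrieval-augmented generation (split documents into chunks, embed each chunk as a vector, then retrieve the most similar chunks for a query) can be sketched generically. To be clear, this is a toy illustration, not VAST's API: the `embed` function below is a bag-of-words stand-in for a real embedding model, and every name in it is hypothetical.

```python
# Minimal sketch of a chunk -> embed -> retrieve pipeline. Not VAST's API;
# a real system would use a learned embedding model and a vector index.
import math
from collections import Counter

def chunk(text: str, size: int = 40) -> list[str]:
    """Split a document into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(chunk_text: str) -> Counter:
    """Toy 'embedding': a bag-of-words vector (stand-in for a real model)."""
    return Counter(chunk_text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, index: list[tuple[str, Counter]], k: int = 1) -> list[str]:
    """Return the k chunks most similar to the query."""
    q = embed(query)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

docs = "GPUs train models. Storage feeds GPUs with data at high throughput."
index = [(c, embed(c)) for c in chunk(docs, size=5)]
print(retrieve("storage throughput for GPUs", index))
```

A production pipeline swaps `embed` for a learned model and replaces the linear scan in `retrieve` with an approximate nearest-neighbour index; a vector store claiming trillion-vector scale is, at bottom, a replacement for exactly that scan.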

The numbers

VAST Data has accumulated more than $4 billion in cumulative bookings and reports more than $500 million in committed annual recurring revenue as of the end of fiscal year 2026. CTech, the technology publication of the Israeli financial newspaper Calcalist, reports that total ARR including non-committed revenue has reached $2 billion. Revenue has been roughly tripling year over year. The company is generating more than $100 million in cash per quarter and is free cash flow positive with a positive operating margin, unusual for a company at this growth rate. The customer base has quadrupled among Fortune 1000 companies, with the top 100 new customers spending more than $1.2 million on average. Contracts typically run five to seven years.

The marquee customer relationships illustrate the scale. VAST Data powers the data platform behind xAI’s Colossus supercomputing cluster, a facility with more than 200,000 Nvidia GPUs where VAST says it reduced total cost of ownership by 50%. CoreWeave signed a $1.17 billion commercial agreement in November 2025, using VAST as the primary data foundation for its Nvidia-accelerated computing cloud. Other customers include Pixar, which uses the platform for petabytes of rendered assets as AI training data, NASA, the US Department of Energy, Boston Children’s Hospital, Booking Holdings, and several of the world’s largest banks. Renen Hallak, VAST’s founder and chief executive, said the company is “already supporting AI environments spanning millions of GPUs globally, operating across every layer of the AI stack.”

The data layer thesis

The investment thesis behind a $30 billion valuation for a data infrastructure company rests on a structural argument about how the AI stack works. The industry has spent three years and hundreds of billions of dollars on GPUs. Surging global AI investment, which the Stanford AI Index pegged at $285.9 billion in US private AI capital in 2025 alone, has been concentrated overwhelmingly on compute. But a GPU that is waiting for data is a GPU that is not training. The data layer, the infrastructure that stores, indexes, moves, and transforms the data that feeds the models, is increasingly recognised as the binding constraint on AI performance.

This is why Nvidia is not just investing in VAST Data but actively integrating its technology. The CUDA-accelerated operating system and CNode-X certification mean that VAST’s platform is designed to run on the same Nvidia hardware that runs the models, eliminating the traditional separation between storage infrastructure and compute infrastructure. Nvidia-backed AI infrastructure companies now span the entire stack, from GPU cloud providers to chip fabrication to data platforms, and VAST’s role is to ensure that the data moves as fast as the silicon can process it.

AI infrastructure startup valuations have been climbing sharply across the sector. FluidStack is in talks to raise $1 billion at an $18 billion valuation. CoreWeave, VAST’s largest customer, was valued at $35 billion earlier this year. Enterprise AI infrastructure deals like Jane Street’s $6 billion cloud commitment to CoreWeave, with a $1 billion equity investment attached, illustrate that demand for AI infrastructure is broadening beyond the hyperscalers into financial services, healthcare, and government. VAST’s position at the data layer of these environments, not the compute layer and not the model layer, is what makes the valuation argument distinct from the GPU cloud companies. If the compute layer is the engine, VAST is the fuel line. A $30 billion fuel line is expensive. The argument is that without it, the engine does not run.

The competitive landscape

VAST Data is not the only company building AI-native data infrastructure. DDN and WEKA are the two most frequently cited competitors, both offering high-performance storage platforms optimised for machine learning workloads. Hammerspace provides a global data orchestration layer. The incumbents, Dell, HPE, Hitachi Vantara, IBM, NetApp, and Pure Storage, are all deepening their Nvidia integrations and repositioning their storage portfolios for AI. Pure Storage’s FlashBlade products compete directly with VAST on performance. NetApp has expanded its AI storage services. All of them have larger installed bases and longer customer relationships than VAST.

VAST’s argument is that legacy storage architectures, designed for databases and file servers and retrofitted for AI, cannot deliver the sustained throughput that training runs at the scale of Colossus require. The single-tier, flash-first architecture eliminates the data movement that tiered systems impose, and the integrated database and compute capabilities mean that data transformation, the chunking, embedding, and vectorisation that AI pipelines require, happens within the platform rather than in a separate processing layer. Whether that architectural advantage is durable or whether the incumbents can close the gap will determine whether a $30 billion valuation looks prescient or excessive in three years.

Hallak has told employees and bankers that the company has considered an IPO in the second half of 2026 or later, according to The Information. The secondary-heavy structure of the Series F suggests that timeline is not imminent. VAST Data can afford to wait. It is cash-flow positive, tripling revenue, and sitting at the centre of the most capital-intensive technology buildout since the internet. The question is not whether the data layer matters. It is whether $30 billion is the right price for the company that is building it.




As I’m writing this, NVIDIA is the largest company in the world, with a market cap exceeding $4 trillion. Team Green is now the leader among the Magnificent Seven of the tech world, having surpassed them all in just a few short years.

The company has managed to reach these incredible heights with smart planning and by making the right moves for decades, the latest being the decision to sell shovels during the AI gold rush. Considering the current hardware landscape, there’s simply no reason for NVIDIA to rush a new gaming GPU generation for at least a few years. Here’s why.

Scarcity has become the new normal

Not even Nvidia is powerful enough to overcome market constraints

Global memory shortages have been a reality since late 2025, and they aren’t just affecting RAM and storage manufacturers. Rather, this impacts every company making any product that contains memory or storage—including graphics cards.

NVIDIA sells its partners bundles of GPU dies and memory chips, which the partners solder onto PCBs and fit with coolers to create full-blown graphics cards. That means NVIDIA doesn’t just have to battle other tech giants to secure a chunk of TSMC’s limited production capacity for its GPU chips; it also has to procure massive amounts of GPU memory, which has never been harder or more expensive to obtain.

While a company as large as NVIDIA certainly has long-term contracts that guarantee stable memory prices, those contracts aren’t going to last forever. The company has likely had to sign new ones, considering the GPU price surge that began at the beginning of 2026, with gaming graphics cards still being overpriced.

With GPU memory costing more than ever, NVIDIA has little reason to rush a new gaming GPU generation, because its gaming earnings are just a drop in the bucket compared to its total earnings.

NVIDIA is an AI company now

Gaming GPUs are taking a back seat

A graph showing NVIDIA revenue breakdown in the last few years. Credit: appeconomyinsights.com

NVIDIA’s gaming division had been its golden goose for decades, but come 2022, the company’s data center and AI division’s revenue started to balloon dramatically. By the beginning of fiscal year 2023, data center and AI revenue had surpassed that of the gaming division.

In fiscal year 2026 (which began in late January 2025 and ends in late January 2026), NVIDIA’s gaming revenue has contributed less than 8% of the company’s total revenue so far. The data center division, on the other hand, has generated almost 90% of NVIDIA’s total revenue in fiscal year 2026. What I’m trying to say is that NVIDIA is no longer a gaming company—it’s all about AI now.

Considering that we’re in the middle of the biggest memory shortage in history, and that its AI GPUs rake in almost ten times the revenue of gaming GPUs, there’s little reason for NVIDIA to funnel exorbitantly priced memory toward gaming GPUs. It’s much more profitable to put every memory chip they can get their hands on into AI GPU racks and continue receiving mountains of cash by selling them to AI behemoths.

The RTX 50 Super GPUs might never get released

A sign of times to come

NVIDIA’s RTX 50 Super series was supposed to increase the memory capacity of its most popular gaming GPUs. The 16GB RTX 5080 was to be superseded by a 24GB RTX 5080 Super, the 16GB RTX 5070 Ti was in line for the same treatment, and an 18GB RTX 5070 Super was to replace its 12GB non-Super sibling. But according to recent reports, NVIDIA has put the refresh on ice.

The RTX 50 Super launch had been slated for this year’s CES in January, but after missing the show, it now looks like NVIDIA has delayed the lineup indefinitely. According to a recent report, NVIDIA doesn’t plan to launch a single new gaming GPU in 2026. Worse still, the RTX 60 series, which had been expected to debut sometime in 2027, has also been delayed.

A report by The Information (via Tom’s Hardware) states that NVIDIA had finalized the design and specs of its RTX 50 Super refresh, but the RAM-pocalypse threw a wrench into the works, forcing the company to “deprioritize RTX 50 Super production.” In other words, it’s exactly what I said a few paragraphs ago: selling enterprise GPU racks to AI companies is far more lucrative than selling comparatively cheaper GPUs to gamers, especially now that memory prices have been skyrocketing.

Before putting the RTX 50 Super refresh on ice, NVIDIA had already slashed its gaming GPU supply by about a fifth and started prioritizing models with less VRAM, like the 8GB versions of the RTX 5060 and RTX 5060 Ti, so this news isn’t that surprising.

So when can we expect RTX 60 GPUs?

Late 2028-ish?


The good news is that the RTX 60 series is definitely in the pipeline, and we will see it sooner or later. The bad news is that its release date is up in the air, and it’s best not to even think about pricing. The word on the street around CES 2026 was that NVIDIA would release the RTX 60 series in mid-2027, give or take a few months. But as of this writing, it’s increasingly likely we won’t see RTX 60 GPUs until 2028.

If you’ve been following the discussion around memory shortages, this won’t come as a surprise. In late 2025, the prognosis was that the RAM-pocalypse wouldn’t end until 2027, maybe 2028. But a recent statement by SK Hynix’s chairman (the company is one of the world’s three largest memory manufacturers) warns that the global memory shortage may last well into 2030.

If that turns out to be true, and if the global AI data center boom doesn’t slow down in the next few years, I wouldn’t be surprised if NVIDIA delays the RTX 60 GPUs as long as possible. There’s a good chance we won’t see them until the second half of 2028, and I wouldn’t be surprised if they miss that window as well if memory supply doesn’t recover by then. Data center GPUs are simply too profitable for NVIDIA to reserve a meaningful portion of memory for gaming graphics cards as long as shortages persist.


At least current-gen gaming GPUs are still a great option for any PC gamer

If there is a silver lining here, it is that current-gen gaming GPUs (NVIDIA RTX 50 and AMD Radeon RX 90) are still more than powerful enough for any current AAA title. Considering that Sony is reportedly delaying the PlayStation 6 and that global PC shipments are projected to see a sharp, double-digit decline in 2026, game developers have little incentive to push requirements beyond what current hardware can handle.

DLSS 5, on the other hand, may be the future of gaming, but no one likes it, and it will take a few years (and likely the arrival of the RTX 60 lineup) for it to mature and become usable on anything that’s not a heckin’ RTX 5090.

If you’re open to buying used GPUs, even last-gen graphics cards offer plenty of performance and can handle any AAA game you throw at them. While we likely won’t get a new gaming GPU from NVIDIA for at least a few years, at least the ones we’ve got are great today and will keep chewing through any game for the foreseeable future.


