Demis Hassabis says Google DeepMind had to return to its startup roots after the Brain merger



In short: Demis Hassabis, speaking on the 20VC podcast with Harry Stebbings in early April 2026, described how Google DeepMind has accelerated its pace over the past two to three years by merging Google Brain’s compute resources with DeepMind’s research culture and returning to what he called a “startup or entrepreneurial” way of working. He also disclosed that he runs Isomorphic Labs, the group’s pharmaceutical AI spinoff, as a “second workday” beginning around 10pm, ahead of expected human trials in oncology later this year.

Assembling the ingredients

Google DeepMind’s formal merger of DeepMind and Google Brain completed in 2023. Hassabis described the period since as one of deliberate acceleration: aligning talent “from around the company, sort of pushing in one direction,” gaining access to the compute infrastructure that DeepMind had previously lacked at scale, and driving what he called “relentless sort of focus and pace.” In his characterisation, the transformation required a cultural adjustment as much as a structural one: the organisation had to “come back to almost our startup or entrepreneurial roots and be scrappier, be faster, ship things really quickly.” The current competitive environment, he said, was “ferocious.” Veteran employees with 20- and 30-year careers were telling him it was “the most intense environment they’ve ever seen, perhaps ever in the technology industry.”

Hassabis said he speaks to Sundar Pichai, Alphabet’s chief executive, “every day,” reflecting the degree to which Google DeepMind now operates at the operational centre of Alphabet’s product and research strategy. That proximity is matched by a capital commitment of corresponding scale. Google’s compute build-out, developed in part through its custom chip partnerships with companies including Broadcom, is central to that positioning: Alphabet spent $91.4 billion on capital expenditure in 2025 and has guided for between $175 billion and $185 billion in 2026, a near-doubling, with supply constraints rather than capital availability described as the primary limiting factor.

The 90% claim

One of Hassabis’s more assertive statements in the podcast concerned DeepMind’s contribution to the history of AI. He said approximately 90% of the breakthroughs underpinning the modern AI industry were produced by Google Brain, Google Research, or DeepMind. The claim is broadly consistent with the academic record on foundational developments, including the transformer architecture produced by Google Brain in 2017, early work on reinforcement learning from human feedback, and deep reinforcement learning techniques developed at DeepMind. The most formally recognised of those achievements is the AlphaFold protein-folding system, for which the 2024 Nobel Prize in Chemistry was awarded to Hassabis and John Jumper, shared with David Baker. Whether 90% is accurate as a proportion is a matter of interpretation, and the industry has pluralised substantially since those foundational papers. The framing functions as a positioning statement as much as a historical claim.

The operational consequence of that legacy is a product release cadence that has accelerated sharply. Google’s open-weight model programme, most recently Gemma 4, now releases models built from the same research and training infrastructure as Gemini 3, closing a gap that previously existed between its frontier research and its open releases. Gemini reached approximately 750 million monthly active users by the end of the fourth quarter of 2025, with Gemini 3 described in secondary reporting as having prompted an urgent internal response at OpenAI on its release in November of that year.

The second workday

Alongside leading Google DeepMind, Hassabis also runs Isomorphic Labs, the pharmaceutical AI spinoff that DeepMind established in 2021. He described his working arrangement in the 20VC conversation: a first workday at DeepMind, followed by a “second workday” beginning around 10pm dedicated to Isomorphic’s drug discovery programme. The dual commitment reflects a conviction that applying AI to drug discovery is both Hassabis’s most important long-term ambition and a project that requires sustained personal involvement rather than delegation.

Isomorphic raised $600 million in April 2025 and has existing partnership agreements with Eli Lilly and Novartis with combined milestone values of up to $3 billion. In February 2026, the company released IsoDDE, a drug design tool that Isomorphic says doubles the accuracy of AlphaFold 3 for generating drug candidates. Human clinical trials in oncology are expected later in 2026. The competitive dynamics in AI-driven drug discovery are intensifying across the industry: Anthropic’s acquisition of Coefficient Bio for approximately $400 million in April 2026, a stealth startup founded by former Genentech computational biology researchers, signals that general-purpose AI companies are now treating pharmaceutical discovery as a product category, not merely a demonstration of model capability.

The competitive framing

The 20VC podcast conversation, like Sebastian Mallaby’s biography of Hassabis, “The Infinity Machine,” published on 31 March 2026 and based on more than 30 hours of interviews, presents a researcher who has moved into the most commercially urgent phase of his career with a consistent thesis: that the most important research and the most important products are not separate activities, and that the organisation capable of doing both simultaneously at frontier scale will determine the shape of the industry. The year 2025 consolidated AI as a central strategic priority across the technology industry, with capital, talent, and institutional structure all reorganised around the question of pace. For Hassabis, the answer has been to bring the speed of a startup inside the resource base of one of the world’s largest technology companies, and to treat that combination as a durable advantage.

The scale of the capital flowing into the field makes that advantage harder to sustain. SoftBank’s $40 billion bridge loan to OpenAI represents a form of capitalisation that even Alphabet’s compute commitments cannot trivially match in kind. Hassabis’s account of a “ferocious” competitive environment is not rhetorical: it is a structural description of a race in which the resources of incumbents and the ambitions of challengers have converged to a point where institutional inertia is not merely a disadvantage but a disqualifying one. The startup mentality he describes at Google DeepMind is, in that context, a necessity rather than a preference.



As I’m writing this, NVIDIA is the largest company in the world, with a market cap exceeding $4 trillion. Team Green is now the leader among the Magnificent Seven of the tech world, having surpassed them all in just a few short years.

The company has managed to reach these incredible heights with smart planning and by making the right moves for decades, the latest being the decision to sell shovels during the AI gold rush. Considering the current hardware landscape, there’s simply no reason for NVIDIA to rush a new gaming GPU generation for at least a few years. Here’s why.

Scarcity has become the new normal

Not even Nvidia is powerful enough to overcome market constraints

Global memory shortages have been a reality since late 2025, and they aren’t just affecting RAM and storage manufacturers. Rather, this impacts every company making any product that contains memory or storage—including graphics cards.

Since NVIDIA sells GPU-and-memory bundles to its board partners, who solder them onto PCBs and add cooling to create finished graphics cards, the company doesn’t just have to battle other tech giants to secure a chunk of TSMC’s limited production capacity for its GPU chips. It also has to procure massive amounts of GPU memory, which has never been harder or more expensive to obtain.

While a company as large as NVIDIA certainly has long-term contracts that guarantee stable memory prices, those contracts aren’t going to last forever. The company has likely had to sign new ones, considering the GPU price surge that began in early 2026 and has kept gaming graphics cards overpriced ever since.

With GPU memory costing more than ever, NVIDIA has little reason to rush a new gaming GPU generation, because its gaming earnings are just a drop in the bucket compared to its total earnings.

NVIDIA is an AI company now

Gaming GPUs are taking a back seat

[Figure: NVIDIA revenue breakdown over the last few years. Credit: appeconomyinsights.com]

NVIDIA’s gaming division had been its golden goose for decades, but come 2022, the company’s data center and AI division’s revenue started to balloon dramatically. By the beginning of fiscal year 2023, data center and AI revenue had surpassed that of the gaming division.

In fiscal year 2026 (which began in late January 2025 and ends in late January 2026), NVIDIA’s gaming revenue has contributed less than 8% of the company’s total revenue so far. On the other hand, the data center division has accounted for almost 90% of NVIDIA’s total revenue in fiscal year 2026. What I’m trying to say is that NVIDIA is no longer a gaming company—it’s all about AI now.

Considering that we’re in the middle of the biggest memory shortage in history, and that its AI GPUs rake in almost ten times the revenue of gaming GPUs, there’s little reason for NVIDIA to funnel exorbitantly priced memory toward gaming GPUs. It’s much more profitable to put every memory chip they can get their hands on into AI GPU racks and continue receiving mountains of cash by selling them to AI behemoths.
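The split described above can be sanity-checked with a little arithmetic. The segment figures below are rough ballpark sums for the first three quarters of fiscal 2026, in billions of US dollars, not official totals:

```python
# Back-of-the-envelope check of NVIDIA's revenue split.
# Figures are approximate Q1-Q3 FY2026 sums (billions of USD),
# rounded for illustration -- not official company totals.
segment_revenue = {
    "data_center": 131.4,  # data center (AI) segment, approx.
    "gaming": 12.3,        # gaming segment, approx.
    "other": 5.0,          # pro visualization, automotive, OEM, approx.
}

total = sum(segment_revenue.values())
shares = {name: rev / total for name, rev in segment_revenue.items()}
ratio = segment_revenue["data_center"] / segment_revenue["gaming"]

print(f"Total revenue: ${total:.1f}B")
print(f"Data center share: {shares['data_center']:.0%}")
print(f"Gaming share: {shares['gaming']:.0%}")
print(f"Data center vs. gaming: {ratio:.1f}x")
```

With these rough inputs, the data center segment works out to just under 90% of total revenue and roughly ten times the gaming segment, in line with the proportions above.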

The RTX 50 Super GPUs might never get released

A sign of times to come

NVIDIA’s RTX 50 Super series was supposed to increase the memory capacity of its most popular gaming GPUs. The 16GB RTX 5080 was to be superseded by a 24GB RTX 5080 Super, with a similar upgrade planned for the 16GB RTX 5070 Ti, while an 18GB RTX 5070 Super was to replace the 12GB non-Super model. But according to recent reports, NVIDIA has put the lineup on ice.

The RTX 50 Super launch had been slated for this year’s CES in January, but after missing the show, it now looks like NVIDIA has delayed the lineup indefinitely. According to a recent report, NVIDIA doesn’t plan to launch a single new gaming GPU in 2026. Worse still, the RTX 60 series, which had been expected to debut sometime in 2027, has also been delayed.

A report by The Information (via Tom’s Hardware) states that NVIDIA had finalized the design and specs of its RTX 50 Super refresh, but the RAM-pocalypse threw a wrench into the works, forcing the company to “deprioritize RTX 50 Super production.” In other words, it’s exactly what I said a few paragraphs ago: selling enterprise GPU racks to AI companies is far more lucrative than selling comparatively cheaper GPUs to gamers, especially now that memory prices have been skyrocketing.

Before putting the RTX 50 series on ice, NVIDIA had already slashed its gaming GPU supply by about a fifth and started prioritizing models with less VRAM, like the 8GB versions of the RTX 5060 and RTX 5060 Ti, so this news isn’t that surprising.

So when can we expect RTX 60 GPUs?

Late 2028-ish?

[Figure: a GPU surrounded by a pile of money. Credit: Lucas Gouveia / How-To Geek]

The good news is that the RTX 60 series is definitely in the pipeline, and we will see it sooner or later. The bad news is that its release date is up in the air, and it’s best not to even think about pricing. The word on the street around CES 2026 was that NVIDIA would release the RTX 60 series in mid-2027, give or take a few months. But as of this writing, it’s increasingly likely we won’t see RTX 60 GPUs until 2028.

If you’ve been following the discussion around memory shortages, this won’t be surprising. In late 2025, the prognosis was that we wouldn’t see the end of the RAM-pocalypse until 2027, maybe 2028. But a recent statement by the chairman of SK Hynix (one of the world’s three largest memory manufacturers) warns that the global memory shortage may last well into 2030.

If that turns out to be true, and if the global AI data center boom doesn’t slow down in the next few years, I wouldn’t be surprised if NVIDIA delays the RTX 60 GPUs as long as possible. There’s a good chance we won’t see them until the second half of 2028, and I wouldn’t be surprised if they miss that window as well if memory supply doesn’t recover by then. Data center GPUs are simply too profitable for NVIDIA to reserve a meaningful portion of memory for gaming graphics cards as long as shortages persist.


At least current-gen gaming GPUs are still a great option for any PC gamer

If there is a silver lining here, it is that current-gen gaming GPUs (NVIDIA’s RTX 50 series and AMD’s Radeon RX 9000 series) are still more than powerful enough for any current AAA title. Considering that Sony is reportedly delaying the PlayStation 6 and that global PC shipments are projected to see a sharp, double-digit decline in 2026, game developers have little incentive to push requirements beyond what current hardware can handle.

DLSS 5 may well be the future of gaming, but its reception so far has been lukewarm, and it will take a few years (and likely the arrival of the RTX 60 lineup) for it to mature and become usable on anything that’s not a heckin’ RTX 5090.

If you’re open to buying used GPUs, even last-gen gaming graphics cards offer tons of performance and can handle any AAA game you throw at them. While we likely won’t get a new gaming GPU from NVIDIA for at least a few years, at least the ones we’ve got are great today and will continue to chew through any game for the foreseeable future.


