OpenAI pauses Stargate UK as energy costs and copyright rules block the path


In short: OpenAI has paused its Stargate UK data centre project, citing the high cost of industrial electricity in Britain and an unfavourable regulatory environment around AI copyright. The project, announced in September 2025 alongside Nvidia and Nscale, had planned to deploy 8,000 GPUs at sites in north-east England, scalable to 31,000 over time. OpenAI says it will move forward “when the right conditions” allow, though it has given no timeline. The pause is a significant setback for the UK government’s AI Growth Zones initiative and arrives as OpenAI prepares for a public listing.

What Stargate UK was supposed to be

Stargate UK was announced in September 2025 as a sovereign AI infrastructure project: a partnership between OpenAI, Nvidia, and British cloud provider Nscale to build data centre capacity in north-east England that would allow OpenAI’s models to run on local computing power. The sites earmarked were Cobalt Park near Newcastle and Blyth, both within the UK government’s designated AI Growth Zones, a framework the government had positioned as a centrepiece of its industrial strategy for artificial intelligence. The project was unveiled during US President Donald Trump’s state visit to Britain, giving it diplomatic as well as commercial significance. The initial phase involved offtake of approximately 8,000 Nvidia AI processors, with an ambition to scale to 31,000 GPUs over time. That capacity would have enabled OpenAI to serve critical public services, regulated industries such as finance, and national security partnerships without routing data through US-based infrastructure. OpenAI never disclosed the total investment figure for the UK project. The broader US Stargate project remains on track, with data centre construction under way across the United States backed by a $40 billion bridge loan SoftBank secured to finance its participation, making the UK pause a geographic exception rather than a retreat from AI infrastructure spending overall.

The energy cost problem

The most concrete obstacle OpenAI identified is the cost of electricity in Britain. UK industrial electricity prices are among the highest of any IEA member state: more than four times those in the United States, Finland, Norway, and Sweden. For a data centre drawing 100 megawatts, that differential is not a line-item concern but a structural one: the economics of running large-scale AI inference workloads at a site where power costs four times as much as it does in Virginia or Texas are fundamentally different, and that gap compounds as capacity scales. The problem is not simply a matter of electricity tariffs. Grid connection requests in the UK surged from 41 gigawatts in November 2024 to 125 gigawatts by June 2025, with an estimated 75 gigawatts of that queue attributable to data centre projects. Buildings can be constructed in 18 to 24 months; grid connections take three to eight years. That mismatch means that even if a project clears the financial hurdle, it faces an infrastructure queue that the current regulatory and planning framework was not designed to process at AI-infrastructure speeds. The UK government’s AI Growth Zones policy, published in November 2025, was intended in part to address exactly this bottleneck, but the zone designations do not resolve the underlying grid constraints, and OpenAI’s decision to pause suggests that the policy framework has not yet translated into conditions that make the investment viable.
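To make the scale of that differential concrete, here is a back-of-envelope sketch. The tariff figures are hypothetical placeholders (the article states only that UK industrial power costs roughly four times US levels), chosen purely to show how the gap compounds as capacity grows.

```python
# Illustrative only: tariffs below are assumed, not reported figures.
HOURS_PER_YEAR = 24 * 365  # 8,760 hours, assuming the site runs flat-out

def annual_power_cost(capacity_mw: float, price_per_mwh: float) -> float:
    """Annual electricity bill for a site drawing capacity_mw continuously."""
    return capacity_mw * HOURS_PER_YEAR * price_per_mwh

US_PRICE = 60.0           # $/MWh -- hypothetical US industrial tariff
UK_PRICE = US_PRICE * 4   # the ~4x differential cited in the article

for mw in (100, 400):     # the initial site vs. a scaled-out deployment
    us = annual_power_cost(mw, US_PRICE)
    uk = annual_power_cost(mw, UK_PRICE)
    print(f"{mw} MW: US ${us / 1e6:.0f}M/yr vs UK ${uk / 1e6:.0f}M/yr "
          f"(gap ${(uk - us) / 1e6:.0f}M/yr)")
```

Whatever the real tariffs are, the point survives: a fixed multiplier on the power price becomes a nine-figure annual gap once capacity scales, which is why the differential is structural rather than a line item.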

The copyright sticking point

The regulatory concern OpenAI cited alongside energy costs points to a separate and more politically charged problem: the UK’s unresolved approach to AI copyright. UK lawmakers have been working to update the rules governing how AI models are trained on copyrighted material. The government’s preferred approach, a broad text and data mining exception with an opt-out mechanism for rights holders, was rejected by the majority of respondents to the government’s own consultation, with creative industries, publishers, and news organisations arguing that a broad exception would allow generative AI companies to train on their works without compensation or meaningful consent. The consultation produced no consensus, and the government has since delayed any legislative change. For OpenAI, which trains large language models on text scraped from the internet, the uncertainty about whether that training will be lawful, and on what terms, is a material business risk. A UK data centre is not simply a power facility; it places the company squarely within UK legal jurisdiction. If the UK eventually adopts a copyright framework that restricts training data use more tightly than the US, operating infrastructure in Britain could expose OpenAI to liability or compliance costs that do not apply to its US operations. The pause allows OpenAI to wait for that regulatory picture to clarify before committing capital.

A pause, not a cancellation, and the IPO context


OpenAI’s statement was calibrated to leave the door open. “We continue to explore Stargate U.K. and will move forward when the right conditions such as regulation and the cost of energy enable long-term infrastructure investment,” the company said, framing the decision as contingent rather than final. The timing, however, is notable. OpenAI closed a $122 billion funding round at an $852 billion valuation in late March 2026, extending participation to retail investors for the first time in a move widely interpreted as groundwork for a public offering analysts expect as early as the fourth quarter of 2026. Companies approaching an IPO typically tighten their capital allocation discipline, avoid open-ended international commitments that could weigh on reported cash burn, and reduce exposure to projects with uncertain timelines. Pausing a data centre project that faces both energy cost headwinds and an unresolved copyright regime fits that pattern. The UK government, which had promoted Stargate UK as a signal of international investor confidence in Britain’s AI ambitions, described the decision as disappointing and said it remained in dialogue with OpenAI. OpenAI’s international Stargate expansion has not been without complications elsewhere: its Abu Dhabi data centre plans drew an explicit threat from Iranian authorities amid escalating regional tensions, suggesting that sovereign AI infrastructure projects carry geopolitical risk profiles that are becoming a distinct factor in OpenAI’s site selection calculus. Meanwhile, Oracle appointed a new CFO this week to manage its $50 billion data centre construction programme as the central operating partner in the US Stargate project, a contrast that illustrates where AI infrastructure spending remains active and where it is being reconsidered.
The year 2025 established infrastructure access and energy as the primary competitive variables in AI, and for the UK, OpenAI’s pause is a signal that it has not yet solved either.





As I’m writing this, NVIDIA is the largest company in the world, with a market cap exceeding $4 trillion. Team Green is now the leader among the Magnificent Seven of the tech world, having surpassed them all in just a few short years.

The company has managed to reach these incredible heights with smart planning and by making the right moves for decades, the latest being the decision to sell shovels during the AI gold rush. Considering the current hardware landscape, there’s simply no reason for NVIDIA to rush a new gaming GPU generation for at least a few years. Here’s why.

Scarcity has become the new normal

Not even Nvidia is powerful enough to overcome market constraints

Global memory shortages have been a reality since late 2025, and they aren’t confined to the RAM and storage markets. They impact every company making any product that contains memory or storage—including graphics cards.

Since NVIDIA sells GPU and memory bundles to its partners, which they then solder onto PCBs and add cooling to create full-blown graphics cards, this means that NVIDIA doesn’t just have to battle other tech giants to secure a chunk of TSMC’s limited production capacity to produce its GPU chips. It also has to procure massive amounts of GPU memory, which has never been harder or more expensive to obtain.

While a company as large as NVIDIA certainly has long-term contracts that guarantee stable memory prices, those contracts aren’t going to last forever. The company has likely had to sign new ones already, judging by the GPU price surge that began in early 2026 and has left gaming graphics cards overpriced to this day.

With GPU memory costing more than ever, NVIDIA has little reason to rush a new gaming GPU generation, because its gaming earnings are just a drop in the bucket compared to its total earnings.

NVIDIA is an AI company now

Gaming GPUs are taking a back seat

A graph showing NVIDIA revenue breakdown in the last few years. Credit: appeconomyinsights.com

NVIDIA’s gaming division had been its golden goose for decades, but come 2022, the company’s data center and AI division’s revenue started to balloon dramatically. By the beginning of fiscal year 2023, data center and AI revenue had surpassed that of the gaming division.

In fiscal year 2026 (which began in late January 2025 and ends in late January 2026), NVIDIA’s gaming revenue has contributed less than 8% of the company’s total revenue so far. The data center division, on the other hand, has generated almost 90% of NVIDIA’s total revenue in fiscal year 2026. What I’m trying to say is that NVIDIA is no longer a gaming company—it’s all about AI now.

Considering that we’re in the middle of the biggest memory shortage in history, and that its AI GPUs rake in almost ten times the revenue of gaming GPUs, there’s little reason for NVIDIA to funnel exorbitantly priced memory toward gaming GPUs. It’s much more profitable to put every memory chip it can get its hands on into AI GPU racks and keep collecting mountains of cash by selling them to AI behemoths.
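The allocation argument above comes down to revenue earned per gigabyte of scarce memory. The sketch below makes that trade-off explicit; the prices and capacities are rough hypothetical ballparks, not NVIDIA figures, used only to show the shape of the comparison.

```python
# Illustrative only: unit prices and memory sizes are assumed ballparks,
# not reported NVIDIA product data.

def revenue_per_gb(unit_price: float, memory_gb: float) -> float:
    """Revenue earned per gigabyte of scarce memory committed to a product."""
    return unit_price / memory_gb

gaming = revenue_per_gb(unit_price=1_000, memory_gb=16)   # mid-range gaming card
ai = revenue_per_gb(unit_price=25_000, memory_gb=80)      # data center accelerator

print(f"gaming: ${gaming:,.0f} of revenue per GB of memory")
print(f"AI:     ${ai:,.0f} of revenue per GB of memory")
print(f"AI earns roughly {ai / gaming:.1f}x more per GB")
```

Under any plausible set of numbers, every gigabyte soldered onto a gaming card earns a fraction of what it would in a data center rack, which is exactly why gaming supply gets squeezed first during a shortage.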

The RTX 50 Super GPUs might never get released

A sign of times to come

NVIDIA’s RTX 50 Super series was supposed to increase the memory capacity of its most popular gaming GPUs. The 16GB RTX 5080 was to be superseded by a 24GB RTX 5080 Super; the same fate awaited the 16GB RTX 5070 Ti, while the 18GB RTX 5070 Super was to replace its 12GB non-Super sibling. But according to recent reports, NVIDIA has put the lineup on ice.

The RTX 50 Super launch had been slated for this year’s CES in January, but after missing the show, it now looks like NVIDIA has delayed the lineup indefinitely. According to a recent report, NVIDIA doesn’t plan to launch a single new gaming GPU in 2026. Worse still, the RTX 60 series, which had been expected to debut sometime in 2027, has also been delayed.

A report by The Information (via Tom’s Hardware) states that NVIDIA had finalized the design and specs of its RTX 50 Super refresh, but the RAM-pocalypse threw a wrench into the works, forcing the company to “deprioritize RTX 50 Super production.” In other words, it’s exactly what I said a few paragraphs ago: selling enterprise GPU racks to AI companies is far more lucrative than selling comparatively cheaper GPUs to gamers, especially now that memory prices have been skyrocketing.

Before putting the RTX 50 Super series on ice, NVIDIA had already slashed its gaming GPU supply by about a fifth and started prioritizing models with less VRAM, like the 8GB versions of the RTX 5060 and RTX 5060 Ti, so this news isn’t that surprising.

So when can we expect RTX 60 GPUs?

Late 2028-ish?

A GPU with a pile of money around it. Credit: Lucas Gouveia / How-To Geek

The good news is that the RTX 60 series is definitely in the pipeline, and we will see it sooner or later. The bad news is that its release date is up in the air, and it’s best not to even think about pricing. The word on the street around CES 2026 was that NVIDIA would release the RTX 60 series in mid-2027, give or take a few months. But as of this writing, it’s increasingly likely we won’t see RTX 60 GPUs until 2028.

If you’ve been following the discussion around memory shortages, this won’t be surprising. In late 2025, the prognosis was that we wouldn’t see the end of the RAM-pocalypse until 2027, maybe 2028. But a recent statement by SK Hynix’s chairman (the company is one of the world’s three largest memory manufacturers) warns that the global memory shortage may last well into 2030.

If that turns out to be true, and if the global AI data center boom doesn’t slow down in the next few years, I wouldn’t be surprised if NVIDIA delays the RTX 60 GPUs as long as possible. There’s a good chance we won’t see them until the second half of 2028, and I wouldn’t be surprised if they miss that window as well if memory supply doesn’t recover by then. Data center GPUs are simply too profitable for NVIDIA to reserve a meaningful portion of memory for gaming graphics cards as long as shortages persist.


At least current-gen gaming GPUs are still a great option for any PC gamer

If there is a silver lining here, it is that current-gen gaming GPUs (NVIDIA RTX 50 and AMD Radeon RX 9000) are still more than powerful enough for any current AAA title. Considering that Sony is reportedly delaying the PlayStation 6 and that global PC shipments are projected to see a sharp, double-digit decline in 2026, game developers have little incentive to push requirements beyond what current hardware can handle.

DLSS 5, on the other hand, may be the future of gaming, but no one likes it, and it will take a few years (and likely the arrival of the RTX 60 lineup) for it to mature and become usable on anything that’s not a heckin’ RTX 5090.

If you’re open to buying used GPUs, even last-gen gaming graphics cards offer tons of performance and can handle any AAA game you throw at them. While we likely won’t get a new gaming GPU from NVIDIA for at least a few years, at least the ones we’ve got are great today and will continue to chew through any game for the foreseeable future.
