The Uffizi cyberattack should worry every museum in Europe


A cyberattack on one of Italy’s greatest cultural institutions reveals a sector that has mastered physical security and ignored the digital kind.

After the weekend of 1 February 2026, staff at the Uffizi Galleries in Florence arrived on Monday morning to find their email accounts suspended, their internal servers unreachable, and the administrative backbone of one of the world’s most celebrated museums effectively dark.

The malware had come in through a vulnerability linked to software managing low-resolution images on the museum’s website, a door so small that nobody had thought to lock it. Within hours, whoever was on the other end had moved laterally through the network connecting the Uffizi, Palazzo Pitti, and the Boboli Gardens, touching the photographic archive server and, according to the Italian daily Corriere della Sera, sending a ransom demand directly to the personal phone of director Simone Verde.

The Uffizi’s official response was swift and categorical: nothing was stolen, no security systems were compromised, and the incident was “nothing like the Louvre.”


That comparison, intended to reassure, may be the most revealing thing anyone has said about the state of cultural security in Europe. Because the Uffizi cyberattack is not interesting for what it destroyed.

It is interesting for what it exposed: a sector that has spent centuries perfecting the art of physical protection while sleepwalking into digital vulnerability.

The Louvre reference is not casual. On 19 October 2025, thieves disguised as construction workers used a freight lift to reach a second-floor balcony of the Louvre, cut through a window, and in under eight minutes made off with eight pieces of the French Crown Jewels valued at approximately €88 million. A subsequent Senate inquiry revealed that only 39 per cent of the museum’s rooms were covered by CCTV, that one external camera had been pointed the wrong way, and that the surveillance system password was, simply, “Louvre.”

Director Laurence des Cars eventually resigned in February 2026. The jewels remain missing.

The Uffizi, then, had good reason to draw the distinction. Its attack was digital, not physical. No masked men, no cherry pickers, no shattered display cases. The museum stayed open throughout. Ticketing and visitor areas were unaffected. The only operational disruption, the Uffizi said, was the time required to restore backups.

But the distinction, while technically accurate, obscures a more uncomfortable truth. The Louvre heist was an old crime committed against an old weakness: a poorly guarded window. What happened at the Uffizi belongs to a different category entirely, one where the threat is invisible, the perimeter is infinite, and the damage may not be fully understood for months.

The gap between what Corriere della Sera reported and what the Uffizi acknowledged remains conspicuously wide. The newspaper described a prolonged intrusion in which attackers gained access to the entire museum network, extracted access codes, internal maps, and CCTV camera locations, took control of the photographic server, and then sent a ransom demand accompanied by a threat to auction the compromised data on the dark web.

The Uffizi denied nearly all of this. It said its physical security systems operate on closed internal networks, inaccessible from outside. It said no passwords were stolen. It pointed out that camera locations in a public museum are visible to any visitor, making their “discovery” unremarkable. It said the photographic archive had a complete backup.

What is not disputed is that malware did penetrate administrative systems in late January and early February, that staff email was disrupted, that Italian authorities opened an investigation for attempted extortion and unauthorised computer access, and that technical commentary has linked the incident to BabLock, a ransomware strain also known as Rorschach, previously associated with an attack on La Sapienza University of Rome.

The Uffizi also confirmed that it had moved Medici-era treasures to the Bank of Italy and sealed certain doorways with bricks and mortar, though it attributed both actions to planned renovations and fire safety compliance. It said the replacement of analogue surveillance cameras with digital ones had been recommended by police in 2024 and accelerated after the Louvre heist. Reasonable explanations. But the timing makes them difficult to read as purely coincidental.

What makes the Uffizi incident significant is not its severity but its typicality. Cultural institutions across Europe and North America have been absorbing cyberattacks with increasing frequency, and the pattern reveals a sector structurally unprepared for the threat.

In October 2023, the ransomware group Rhysida hit the British Library, ultimately leaking more than 600 gigabytes of stolen data after the library refused to pay. The estimated recovery cost reached £6 to £7 million. In late 2023, an attack on Gallery Systems, a software provider used by major American museums including the Museum of Fine Arts Boston and Crystal Bridges, rippled across its entire client base, disrupting digital collections and operations.

The Metropolitan Opera in New York suffered a cyberattack in 2022 that knocked out its website, box office, and call centre. Hackney Museum in London was caught in a broader attack on its parent borough council in 2020, in an incident its project curator later described as a “digital building burning down.”

These are not obscure institutions. They are among the most prominent cultural organisations in the world. And yet, as of 2024, only 69 per cent of museums in the United States reported having emergency response plans, and those plans overwhelmingly addressed analogue risks: earthquakes, floods, fire.

Not ransomware. Not data exfiltration. Not the quiet compromise of a network that connects cameras, ticketing, donor databases, and conservation records through a single digital spine.

The Uffizi case crystallises something the cybersecurity community has warned about for years: the convergence of physical and digital security in heritage institutions. Museums are not banks. They were not designed with network segmentation in mind.

Their buildings are centuries old. Their IT budgets are negligible. And yet the systems they now depend on (climate control, access management, surveillance, ticketing, collection documentation) are all networked, all connected, all potentially vulnerable.
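To make the segmentation point concrete, here is a deliberately simplified, purely hypothetical sketch; the node names below are illustrative and do not describe the Uffizi’s actual infrastructure. On a flat network, a compromised public web server can reach every other system; once the segments are separated, the same compromise stays contained.

```python
# Toy model of lateral movement on a flat vs. segmented network.
# Every node name and link here is hypothetical, for illustration only.
from collections import deque

def reachable(links, start):
    """Breadth-first search: return every node reachable from `start`."""
    graph = {}
    for a, b in links:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Flat network: every system hangs off one switch.
flat = [("web-server", "switch"), ("ticketing", "switch"),
        ("photo-archive", "switch"), ("cctv", "switch"),
        ("climate-control", "switch")]

# Segmented network: the public-facing web server sits alone in a DMZ.
segmented = [("web-server", "dmz"),
             ("ticketing", "office-vlan"), ("photo-archive", "office-vlan"),
             ("cctv", "security-vlan"), ("climate-control", "security-vlan")]

print(sorted(reachable(flat, "web-server")))       # reaches everything
print(sorted(reachable(segmented, "web-server")))  # contained to the DMZ
```

The model leaves out firewalls, shared credentials, and remote access, but it captures the basic asymmetry: on a flat network, one unlocked door opens every room.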

When a cyberattacker maps the position of CCTV cameras and alarm systems, the threat is no longer purely digital. It becomes a reconnaissance tool for physical crime. When a ransomware group takes control of a photographic archive, the leverage is not just financial but cultural: the risk of losing or corrupting irreplaceable documentation of a nation’s artistic patrimony.

The Uffizi may be right that its closed-circuit security systems were never compromised. But the fact that the question even arises, that the connection between digital intrusion and physical vulnerability must be explicitly denied, tells you how thin the membrane has become.

The political response in Italy has been revealing. Former Prime Minister Matteo Renzi, who previously served as mayor of Florence, publicly attacked Culture Minister Alessandro Giuli, asking what he had done to protect the Uffizi.

Florence’s mayor, Sara Funaro, called for stronger digital security measures in cultural institutions. Trade unions raised concerns that physical countermeasures such as sealed doors might interfere with emergency evacuation routes in historic buildings.

Meanwhile, just weeks earlier, on the night of 22 March, four hooded thieves broke into the Magnani Rocca Foundation near Parma and stole paintings by Renoir, Cézanne, and Matisse in under three minutes: an entirely physical crime at an institution whose alarm system proved an insufficient deterrent.

Italy, a country that holds perhaps the largest concentration of cultural treasures on earth, is now dealing with both categories of threat simultaneously: the old-fashioned smash-and-grab and the silent digital intrusion. And neither its political class nor its institutional leadership appears to have a coherent strategy for either.

There is a temptation, when an institution says nothing was stolen, to treat the incident as a near miss. This would be a mistake. The Uffizi attack, whatever its true scope, is a proof of concept. It demonstrated that the administrative systems of a world-class museum can be penetrated through a trivially small entry point. It demonstrated that an attacker can move laterally through interconnected networks linking multiple historic sites.

And it demonstrated that the public narrative after such an attack will be dominated by disagreement over what actually happened, a fog of conflicting claims that may itself become a tool for future adversaries.

Cultural institutions are not classified as critical infrastructure in most countries. They do not receive the mandatory cybersecurity audits, the dedicated funding, or the regulatory scrutiny that hospitals, energy grids, and financial systems do. 

They exist in a category of their own: publicly cherished, chronically underfunded, and now digitally exposed.

The Uffizi was right about one thing. What happened to it was nothing like the Louvre. It was, in some ways, worse. A jewel heist is dramatic, visible, and finite. A cyberattack is quiet, ambiguous, and its consequences unfold over months. The thieves who broke into the Louvre left through a window. The ones who got into the Uffizi may never have been in the building at all. And that, precisely, is the problem that Europe’s cultural institutions have not yet begun to solve.



Recent Reviews


As I’m writing this, NVIDIA is the largest company in the world, with a market cap exceeding $4 trillion. Team Green is now the leader among the Magnificent Seven of the tech world, having surpassed them all in just a few short years.

The company has managed to reach these incredible heights with smart planning and by making the right moves for decades, the latest being the decision to sell shovels during the AI gold rush. Considering the current hardware landscape, there’s simply no reason for NVIDIA to rush a new gaming GPU generation for at least a few years. Here’s why.

Scarcity has become the new normal

Not even Nvidia is powerful enough to overcome market constraints

Global memory shortages have been a reality since late 2025, and they aren't just affecting RAM and storage manufacturers. They affect every company making any product that contains memory or storage—including graphics cards.

NVIDIA sells GPU-and-memory bundles to its partners, who solder them onto PCBs and add cooling to create full-blown graphics cards. That means NVIDIA doesn't just have to battle other tech giants to secure a chunk of TSMC's limited production capacity for its GPU chips; it also has to procure massive amounts of GPU memory, which has never been harder or more expensive to obtain.

While a company as large as NVIDIA almost certainly has long-term contracts that guarantee stable memory prices, those contracts aren't going to last forever. The company has likely had to sign new ones already, judging by the GPU price surge that began in early 2026 and has left gaming graphics cards overpriced ever since.

With GPU memory costing more than ever, NVIDIA has little reason to rush a new gaming GPU generation, because its gaming earnings are just a drop in the bucket compared to its total earnings.

NVIDIA is an AI company now

Gaming GPUs are taking a back seat

(Image: a graph showing NVIDIA's revenue breakdown over the last few years. Credit: appeconomyinsights.com)

NVIDIA’s gaming division had been its golden goose for decades, but come 2022, the company’s data center and AI division’s revenue started to balloon dramatically. By the beginning of fiscal year 2023, data center and AI revenue had surpassed that of the gaming division.

In fiscal year 2026 (which began in late January 2025 and ends in late January 2026), NVIDIA's gaming revenue has contributed less than 8% of the company's total revenue so far. The data center division, on the other hand, has generated almost 90% of NVIDIA's total revenue in fiscal year 2026. What I'm trying to say is that NVIDIA is no longer a gaming company—it's all about AI now.

Considering that we're in the middle of the biggest memory shortage in history, and that its AI GPUs rake in almost ten times the revenue of gaming GPUs, there's little reason for NVIDIA to funnel exorbitantly priced memory toward gaming GPUs. It's much more profitable to put every memory chip it can get its hands on into AI GPU racks and keep collecting mountains of cash by selling them to AI behemoths.
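To put those proportions in perspective, here is a minimal back-of-the-envelope sketch. The figures are hypothetical placeholders chosen only to be roughly in line with the shares cited above; they are not NVIDIA's reported numbers.

```python
# Hypothetical quarterly revenue figures in billions of dollars,
# for illustration only -- not NVIDIA's reported results.
data_center = 46.0
gaming = 4.0
other = 1.5  # professional visualisation, automotive, OEM

total = data_center + gaming + other
print(f"Gaming share:      {gaming / total:.1%}")                   # ~7.8%
print(f"Data center share: {data_center / total:.1%}")              # ~89.3%
print(f"Data-center-to-gaming ratio: {data_center / gaming:.1f}x")  # ~11.5x
```

With shares in that ballpark, every memory chip diverted to a gaming card earns NVIDIA roughly an order of magnitude less than the same chip in a data center product.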

The RTX 50 Super GPUs might never get released

A sign of times to come

NVIDIA's RTX 50 Super series was supposed to increase the memory capacity of its most popular gaming GPUs. The 16GB RTX 5080 was to be superseded by a 24GB RTX 5080 Super; the same fate awaited the 16GB RTX 5070 Ti, while an 18GB RTX 5070 Super was to replace its 12GB non-Super sibling. But according to recent reports, NVIDIA has put the lineup on ice.

The RTX 50 Super launch had been slated for this year's CES in January, but the lineup missed the show, and it now looks like NVIDIA has delayed it indefinitely. According to a recent report, NVIDIA doesn't plan to launch a single new gaming GPU in 2026. Worse still, the RTX 60 series, which had been expected to debut sometime in 2027, has also been delayed.

A report by The Information (via Tom’s Hardware) states that NVIDIA had finalized the design and specs of its RTX 50 Super refresh, but the RAM-pocalypse threw a wrench into the works, forcing the company to “deprioritize RTX 50 Super production.” In other words, it’s exactly what I said a few paragraphs ago: selling enterprise GPU racks to AI companies is far more lucrative than selling comparatively cheaper GPUs to gamers, especially now that memory prices have been skyrocketing.

Before putting the RTX 50 Super lineup on ice, NVIDIA had already slashed its gaming GPU supply by about a fifth and started prioritizing models with less VRAM, like the 8GB versions of the RTX 5060 and RTX 5060 Ti, so this news isn't that surprising.

So when can we expect RTX 60 GPUs?

Late 2028-ish?

(Image: a GPU surrounded by a pile of money. Credit: Lucas Gouveia / How-To Geek)

The good news is that the RTX 60 series is definitely in the pipeline, and we will see it sooner or later. The bad news is that its release date is up in the air, and it’s best not to even think about pricing. The word on the street around CES 2026 was that NVIDIA would release the RTX 60 series in mid-2027, give or take a few months. But as of this writing, it’s increasingly likely we won’t see RTX 60 GPUs until 2028.

If you've been following the discussion around memory shortages, this won't be surprising. In late 2025, the prognosis was that we wouldn't see the end of the RAM-pocalypse until 2027, maybe 2028. But a recent statement by the chairman of SK Hynix, one of the world's three largest memory manufacturers, warns that the global memory shortage may last well into 2030.

If that turns out to be true, and if the global AI data center boom doesn’t slow down in the next few years, I wouldn’t be surprised if NVIDIA delays the RTX 60 GPUs as long as possible. There’s a good chance we won’t see them until the second half of 2028, and I wouldn’t be surprised if they miss that window as well if memory supply doesn’t recover by then. Data center GPUs are simply too profitable for NVIDIA to reserve a meaningful portion of memory for gaming graphics cards as long as shortages persist.


At least current-gen gaming GPUs are still a great option for any PC gamer

If there is a silver lining here, it is that current-gen gaming GPUs (NVIDIA's RTX 50 series and AMD's Radeon RX 9000 series) are still more than powerful enough for any current AAA title. Considering that Sony is reportedly delaying the PlayStation 6 and that global PC shipments are projected to see a sharp, double-digit decline in 2026, game developers have little incentive to push requirements beyond what current hardware can handle.

DLSS 5, on the other hand, may be the future of gaming, but no one likes it, and it will take a few years (and likely the arrival of the RTX 60 lineup) for it to mature and become usable on anything that’s not a heckin’ RTX 5090.

If you're open to buying used GPUs, even last-gen gaming graphics cards offer tons of performance and can handle any AAA game you throw at them. We likely won't get a new gaming GPU from NVIDIA for at least a few years, but the cards we've got are great today and will continue to chew through any game for the foreseeable future.


