What America’s AI purge means for the rest of us



On the afternoon of 27 February 2026, Pete Hegseth picked up his phone and posted to X. The US Secretary of Defense had just designated Anthropic, a San Francisco AI company, a “supply chain risk to national security.”

The label, under 10 USC 3252, had previously been applied to Huawei and ZTE, Chinese firms accused of embedding surveillance backdoors into their hardware.

Now it was being used against an American company founded by former OpenAI researchers, whose crime was this: it refused to let the US military use its AI models for mass domestic surveillance of American citizens, or for fully autonomous lethal weapons.

That afternoon, hours after Anthropic was blacklisted, OpenAI CEO Sam Altman announced his company had reached its own deal with the Pentagon. His models, he wrote, would be available for all lawful purposes.

The same evening, OpenAI’s most senior hardware executive, Caitlin Kalinowski, who had spent 16 months building the company’s robotics programme, announced her resignation.

“Surveillance of Americans without judicial oversight and lethal autonomy without human authorization,” she wrote, “are lines that deserved more deliberation than they got.”

The lines, as it turned out, had not been deliberated at all. They had been drawn in a contract dispute and erased in a Friday-afternoon press release.

This is where the story is usually told as a clash between two American companies and one American administration, a Washington power struggle with AI at its centre. That reading is not wrong. But it is incomplete.

What happened between Anthropic, OpenAI, and the Pentagon over the first three months of 2026 is also a story about democratic governance, about who gets to set the terms on which the most consequential technologies of our era are deployed, and about what happens when a government decides that the answer to that question is: whoever complies first.

The anatomy of a purge

The sequence of events is worth setting out clearly, because the pace at which they unfolded has obscured their significance. Anthropic held a $200 million Pentagon contract, awarded in July 2025, for work on classified systems.

The terms included two restrictions: Claude could not be used for mass domestic surveillance of American citizens, and it could not be used to power fully autonomous weapons with no human in the targeting loop. These were not novel demands.

They aligned with longstanding prohibitions in international humanitarian law and US constitutional protections. They were, by any reasonable measure, the kind of safeguards a democratic government should want embedded in its AI systems.

The Pentagon disagreed. It wanted, in the words of its final ultimatum, “unrestricted access to AI for all lawful purposes.” When Anthropic declined to remove its restrictions, Hegseth set a deadline: 5:01pm on 27 February. It passed without agreement. Trump, writing on Truth Social, called the company’s leadership “leftwing nut jobs” and ordered every federal agency to immediately cease use of Anthropic’s technology.

A federal judge in San Francisco, reviewing the designation, was less colourful but more precise. Judge Rita Lin wrote in her March ruling that the supply chain risk designation is “usually reserved for foreign intelligence agencies and terrorists, not for American companies,” and described the administration’s actions as “classic First Amendment retaliation.”

She issued a preliminary injunction blocking the ban.

None of this stopped a federal appeals court from later denying Anthropic’s stay request, concluding that “the equitable balance here cuts in favour of the government.”

As of this writing, Anthropic is barred from Pentagon contracts, permitted to work with other agencies, and fighting two parallel lawsuits while simultaneously recruiting enterprise partners, launching a $100 million partner programme, and testing its new model, Mythos, with Wall Street banks at the quiet encouragement of the Treasury Secretary and the Federal Reserve chair.

The administration that blacklisted the company is also directing those banks to evaluate it for critical financial infrastructure.

The contradiction is not bureaucratic confusion. It is policy.

What OpenAI’s deal actually means

The more uncomfortable part of this story is OpenAI’s role in it. Altman has said his company shares Anthropic’s core principles: no domestic mass surveillance, no autonomous weapons. The companies’ stated red lines are, on paper, nearly identical.

The difference is that OpenAI signed, and Anthropic did not. What exactly is in OpenAI’s Pentagon agreement, and how its provisions compare to the assurances Anthropic sought, has not been made public.

Pentagon officials have said existing US law already prohibits the uses Anthropic was concerned about. Anthropic’s lawyers, and a group of 37 researchers from OpenAI and Google DeepMind who filed an amicus brief supporting the lawsuit, clearly do not share that confidence.

What we can say with reasonable certainty is this: a government that wanted to remove enforceable safety restrictions from AI models used in classified military systems found a way to do so. One company held the line and was treated as an adversary.

Another accommodated the government’s position and was treated as a partner. The market signal this sends to every AI company negotiating a public sector contract, anywhere in the world, could not be clearer.

Sam Altman has acknowledged the deal was “definitely rushed.” OpenAI’s own employees pushed back. ChatGPT uninstalls reportedly surged 295% in the days following the announcement, while Claude climbed to the top of the US App Store.

These responses suggest that users, at least, understood something significant had shifted. The question is whether policymakers outside the United States are drawing the same conclusion.

What Europe should take from this

Europe has spent the better part of a decade building a regulatory framework for AI premised on a core democratic argument: that powerful technologies must be constrained by law, not merely by the good intentions of the companies that build them.

The AI Act, which enters full enforcement in August 2026, encodes that argument in legislation. Prohibited uses, including real-time biometric surveillance in public spaces and social scoring, are not left to corporate discretion. They are banned.

What the Anthropic saga demonstrates is what happens in a jurisdiction where that argument has been rejected. In the United States, the Biden administration’s AI safety executive order was revoked on Trump’s first day. State-level AI legislation has been actively suppressed. And when a company tried to embed the principles of the EU AI Act into its own contractual terms, a government that had previously praised its technology as “exquisite” reached for a statute designed to neutralise foreign saboteurs.

The EU’s “Digital Omnibus” package, currently under negotiation, proposes to delay and weaken parts of both the AI Act and GDPR in the name of cutting red tape and boosting competitiveness. It is being driven, at least in part, by the argument that European regulation puts the continent at a disadvantage against less constrained American and Chinese competitors.

The Anthropic case offers a corrective to that framing. What the US has demonstrated is not a competitive advantage through deregulation. It has demonstrated what it looks like when a government uses procurement power to enforce the removal of safety limits that its own democratic principles would otherwise require.

That is not a model Europe should envy. It is a warning.

Federal agencies are, as of this week, quietly testing Anthropic’s Mythos model despite the ban. Congressional staff are seeking briefings on its capabilities. The Commerce Department’s Centre for AI Standards and Innovation is actively evaluating its cybersecurity potential. The prohibition is, in practice, already eroding, because the technology is too useful to ignore, even for the government that declared it a national security threat.

That, too, is instructive. The AI guardrails Anthropic refused to remove were not protections the US government ultimately wanted to do without. They were protections it wanted to hold without being contractually bound by. The distinction matters. A safety principle written into a contract is enforceable. A safety principle stated in a press release is a communication strategy.

In Brussels, as in Washington, the question is not whether AI will be governed. It is whether the governance will be written into law before or after the most consequential decisions have already been made.

The deadline for the AI Act’s full provisions is August. The deadline Hegseth set for Anthropic was 5:01pm on a Friday. Both, in their own way, are a reckoning. And this saga, in all likelihood, is far from over.





After being teased in the second beta, the new “Bubbles” feature is finally available in Android 17 Beta 3. This is the biggest change to Android multitasking since split-screen mode. I had to see how it worked—come along with me.

Now, it should be mentioned that this feature will probably look a bit familiar to Samsung Galaxy owners. One UI also allows for putting apps in floating windows, and they minimize into a floating widget. However, as you’ll see, Google’s approach is more restrained.

App Bubbles in Android 17

There’s a lot to like already

First and foremost, putting an app in a “Bubble” allows it to be used on top of whatever’s happening on the screen. The functionality is essentially identical to Android’s older feature of the exact same name, but now it can be used for apps in addition to messaging conversations.

To bubble an app, simply long-press the app icon anywhere you see it. That includes the home screen, app drawer, and the taskbar on foldables and tablets. Select “Bubble” or the small icon depicting a rectangle with an arrow pointing at a dot in the menu.

Bubbles on a phone screen

The app will immediately open in a floating window on top of your current activity. This is the full version of the app, and it works exactly how it would if you opened it normally. You can’t resize the app bubble, but on large-screen devices, you can choose which side it’s on. To minimize the bubble, simply tap outside of it or do the Home gesture—you won’t actually go to the home screen.

Multiple apps can be bubbled together—just repeat the process above—but only one can be shown at a time. This is a key difference compared to One UI’s pop-up windows, which can be resized and tiled anywhere on the screen. Here is also where things vary depending on the type of device you’re using.
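For readers who think in code, the interaction model described above—several apps bubbled at once, only one visible, and dismissal falling back to the most recently used remaining app—can be sketched in a few lines of Kotlin. This is a hypothetical illustration of the behavior as observed, not Android’s actual implementation or any real API:

```kotlin
// Hypothetical model of Android 17's bubble stack, as described in the text:
// multiple apps can be bubbled, but only one is shown at a time.
class BubbleStack {
    private val apps = mutableListOf<String>()

    // The app currently visible in the floating window, if any.
    var active: String? = null
        private set

    // Bubbling an app adds it to the stack and brings it into view.
    fun bubble(app: String) {
        if (app !in apps) apps.add(app)
        active = app
    }

    // Tapping an icon in the shortcuts row switches the visible app.
    fun show(app: String) {
        require(app in apps) { "$app is not bubbled" }
        active = app
    }

    // Dragging an icon to the "X" removes it; if it was visible,
    // the most recently bubbled remaining app takes its place.
    fun dismiss(app: String) {
        apps.remove(app)
        if (active == app) active = apps.lastOrNull()
    }

    // The row of shortcuts (or the minimized icon stack on phones).
    fun minimizedIcons(): List<String> = apps.toList()
}
```

The point of the sketch is the single `active` slot: unlike One UI’s freely tiled pop-up windows, the bubble stack is a list with exactly one visible element.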

If you’re using a phone, the current bubbled apps appear in a row of shortcuts above the window. Tap an app icon, and it will instantly come into view within the bubble. On foldables and tablets, the row of icons is much smaller and below the window.

Another difference is how the app bubbles are minimized. On phones, they live in a floating app icon (or stack of icons) on the edge of the screen. You are free to move this around the screen by dragging it. Tapping the minimized bubble will open the last active app in the bubble. On foldables and tablets, the bubble is minimized to the taskbar (if you have it enabled).

Bubbles on a foldable screen

Now, there are a few things to know about managing bubbles. First, tapping the “+” button in the shortcuts row shows previously dismissed bubbles—it’s not for adding a new app bubble. To dismiss an app bubble, you can drag the icon from the shortcuts row and drop it on the “X” that appears at the bottom of the screen.

To remove the entire bubble completely, simply drag it to the “X” at the bottom of the screen. On phones, there’s also an extra “Manage” button below the window with a “Dismiss bubble” option.

Better than split-screen?

Bubbles make sense on smaller screens

That’s pretty much all there is to it. As mentioned, there’s definitely not as much freedom with Bubbles as there is with pop-up windows in One UI. The latter allows you to treat apps like windows on a computer screen. Bubbles are a much more confined experience, but the benefit is that you don’t have to do any organizing.

Samsung One UI pop-up windows

Of course, Android has supported using multiple apps at once with split-screen mode for a while. So, what’s the benefit of Bubbles? On phones, especially, split-screen mode makes apps so small that they’re not very useful.

If you’re making a grocery list while checking the store website, you’re stuck in a very small browser window. Bubbles lets you use two apps at essentially full size at the same time—it’s even quicker than swiping the gesture bar to switch between apps.

If you’d like to give App Bubbles a try, enroll your qualified Pixel phone in the Android Beta Program. The final release of Android 17 is only a few months away (Q2 2026), but this is an exciting feature to check out right now.
