Trump’s campaign to preempt state AI regulation faces resistance from states and Congress alike


In short: The Trump administration is waging a multi-front campaign to prevent states from regulating AI, using a DOJ litigation task force, Commerce Department evaluations of “burdensome” state laws, and a legislative framework urging Congress to preempt state-level regulation with a “minimally burdensome national standard.” But states have accelerated in the opposite direction – 1,208 AI bills introduced in 2025, 145 enacted – and Congress has rejected preemption twice, including a 99-1 Senate vote to strip an AI moratorium from the One Big Beautiful Bill Act.

Doug Fiefia is a first-term Republican state representative from Herriman, Utah, and a former Google salesperson who managed a team working on the company’s early AI model implementation. Earlier this year, he introduced House Bill 286, the Artificial Intelligence Transparency Act, which would have required frontier AI companies to publish safety and child-protection plans and included whistleblower protections for employees who report safety concerns. It passed a House committee unanimously. Then the White House killed it.

On 12 February, the White House Office of Intergovernmental Affairs sent a letter to Utah Senate Majority Leader Kirk Cullimore Jr. stating: “We are categorically opposed to Utah HB 286 and view it as an unfixable bill that goes against the Administration’s AI Agenda.” Officials held several conversations with Fiefia over the preceding two weeks urging him not to move the bill forward. They did not offer specific changes that could make it acceptable. The bill died in the Senate.

Fiefia’s response was pointed. He said it was especially important to stand up for states’ rights when a fellow Republican was in power, to demonstrate that the principle was not partisan. His bill targeted only “frontier developers,” companies using at least 10^26 floating-point operations to train a model, and carried a $1 million penalty cap. It was, by the standards of AI legislation, modest. The White House treated it as existential.

The federal architecture


The Trump administration’s campaign against state AI regulation has three components, each building on the last.

The first was Executive Order 14365, signed on 11 December 2025, titled “Ensuring a National Policy Framework for Artificial Intelligence.” It created an AI Litigation Task Force within the Department of Justice, operational from 10 January 2026, to challenge state AI laws in federal court on grounds of unconstitutional burden on interstate commerce or federal preemption. It directed the Secretary of Commerce to publish by 11 March a comprehensive evaluation of state AI laws identifying “burdensome” ones, and instructed the FTC to issue a policy statement on when state laws are preempted by the FTC Act. It conditioned access to federal broadband funding on states’ willingness to avoid enacting what the administration considers onerous AI laws. The executive order carved out child safety protections, data centre zoning authority, and state government procurement from preemption.

The second was the Commerce Department’s evaluation, published on the March deadline, which flagged laws in Colorado, California, and New York for particular scrutiny. The evaluation feeds into the DOJ task force, which is expected to begin filing federal legal challenges by summer 2026. Cases are projected to take two to three years to resolve.

The third was a National Policy Framework for AI released on 20 March, containing legislative recommendations organised around seven pillars: child protection, AI infrastructure, intellectual property, censorship and free speech, innovation, workforce preparation, and preemption of state AI laws. The framework states that “Congress should preempt state AI laws that impose undue burdens to ensure a minimally burdensome national standard consistent with these recommendations, not fifty discordant ones.” The administration’s position on copyright is that training AI models on copyrighted material “does not violate copyright laws.” On content moderation, it urges Congress to prevent the federal government “from coercing technology providers, including AI providers, to ban, compel, or alter content based on partisan or ideological agendas.”

David Sacks, who served as AI and crypto czar until transferring to a presidential advisory committee role in late March, framed the logic bluntly: “You’ve got 50 different states regulating this in 50 different ways, and it’s creating a patchwork of regulation that’s difficult for our innovators to comply with.” On Colorado’s algorithmic discrimination rules, he said they raised “very serious First Amendment concerns.” On blue states more broadly: “We don’t like seeing blue states trying to insert their woke ideology in AI models, and we really want to try and stop that.”

What the states have done

The states have not been idle while Washington argues about whether they should be allowed to act. In 2023, fewer than 200 AI bills were introduced across state legislatures. In 2024, the number rose to 635 across 45 states, with 99 enacted. In 2025, 1,208 AI-related bills were introduced across all 50 states, the first year every state introduced at least one, and 145 were enacted into law. In the first two months of 2026 alone, 78 chatbot-specific safety bills were filed across 27 states.

California’s Transparency in Frontier Artificial Intelligence Act took effect on 1 January 2026. Texas’s Responsible Artificial Intelligence Governance Act became effective the same day. Colorado’s AI Act, which bans algorithmic discrimination, had its effective date delayed to 30 June 2026. The volume of legislation reflects a bipartisan consensus at the state level that AI regulation cannot wait for a Congress that has repeatedly failed to act.

Utah Governor Spencer Cox, a Republican, has asserted that states should retain the power to regulate AI. “Let’s use this technology to benefit humankind, and let’s regulate it to make sure they don’t destroy humankind,” he said. “I don’t think that’s a contradiction.” He warned that if AI companies “start selling sexualised chatbots to kids in my state, now I have a problem with that,” and announced a “pro-human” AI initiative with $10 million for workforce readiness.

Congress cannot agree

The administration’s framework requires Congressional action to gain legal force. The executive order itself does not preempt, repeal, or invalidate any state AI law. Until courts rule on specific challenges, regulated parties must continue to comply with state regulations.

The most comprehensive federal AI bill is Senator Marsha Blackburn’s TRUMP AMERICA AI Act, a 291-page discussion draft released on 18 March. It would impose a duty of care for high-risk AI systems, require developers to publish training and inference data use records, repeal Section 230 of the Communications Decency Act, and create an AI liability framework enabling the Attorney General, state attorneys general, and private actors to sue AI developers. It would preempt state laws on frontier AI catastrophic risk management and largely preempt state digital replica laws. It remains a discussion draft and has not been formally introduced.

The One Big Beautiful Bill Act originally included a provision for a ten-year moratorium on state AI regulation, later reduced to five years tied to federal broadband funding. The Senate voted 99 to 1 to strip the AI preemption provision, with only Senator Thom Tillis of North Carolina voting to keep it. The bill was signed into law on 4 July without any restrictions on state AI legislation. Congress’s message was unambiguous: it was not prepared to strip the states of their authority to set AI guardrails.

The money behind the fight

The lobbying infrastructure on both sides has scaled to match the stakes. Leading the Future, a super PAC launched in August 2025 by Andreessen Horowitz and OpenAI president Greg Brockman, raised $125 million in 2025 and had $70 million on hand at year end. It supports candidates favouring AI-friendly policies and uniform federal regulation over state-by-state approaches.

On the other side, Anthropic donated $20 million in February 2026 to Public First Action, a bipartisan group that plans to back 30 to 50 candidates from both parties who support AI safeguards. Public First’s broader network of super PACs has pledged $50 million for pro-regulation candidates. The tech industry, for its part, has reportedly spent more than $1 billion on efforts to prevent states from regulating AI.

A bipartisan coalition of 36 state attorneys general sent a letter to Congress opposing AI preemption, arguing that risks including scams, deepfakes, and harmful interactions, especially for children and seniors, make state protections essential. Colorado’s attorney general has committed to challenging the executive order in court.

The precedent that matters

The administration revoked Biden’s Executive Order 14110 within hours of taking office on 20 January 2025, calling it “unnecessarily burdensome.” That order had required developers to conduct pre-release safety evaluations and share findings with the government. Its replacement, signed three days later, was titled “Removing Barriers to American Leadership in Artificial Intelligence.” The trajectory from revoking federal safety requirements to attempting to prevent states from creating their own has a logic: if the federal government will not regulate AI, and it will not allow states to regulate AI, then AI will not be regulated.

The contrast with Europe is instructive. The EU AI Act entered full enforcement in January 2026, creating a single regulatory framework across 27 member states. The US approach is the inverse: no binding federal standard and an active campaign to prevent the states from filling the gap. The result is that AI governance in America is being determined not by legislation or regulation but by litigation, executive orders, and the political leverage of the companies that stand to benefit most from the absence of rules.

Doug Fiefia, the Utah Republican who watched his transparency bill die after a White House letter, is now running for state senate. His opponent, the incumbent who helped kill the bill, reportedly said it “would have driven Utah out of the AI innovation business.” Fiefia co-chairs the AI task force of the Future Caucus alongside Monique Priestley, a Vermont Democrat with 24 years in technology. They represent a generation of state lawmakers who have worked in tech, understand what AI can do, and believe that understanding should inform regulation rather than prevent it. The question is whether the regulatory vacuum they are trying to fill will last long enough to become permanent.



After being teased in the second beta, the new “Bubbles” feature is finally available in Android 17 Beta 3. This is the biggest change to Android multitasking since split-screen mode. I had to see how it worked—come along with me.

Now, it should be mentioned that this feature will probably look a bit familiar to Samsung Galaxy owners. One UI also allows for putting apps in floating windows, and they minimize into a floating widget. However, as you’ll see, Google’s approach is more restrained.

App Bubbles in Android 17

There’s a lot to like already

First and foremost, putting an app in a “Bubble” allows it to be used on top of whatever’s happening on the screen. The functionality is essentially identical to Android’s older feature of the exact same name, but now it can be used for apps in addition to messaging conversations.

To bubble an app, simply long-press the app icon anywhere you see it. That includes the home screen, app drawer, and the taskbar on foldables and tablets. Select “Bubble” or the small icon depicting a rectangle with an arrow pointing at a dot in the menu.

Bubbles on a phone screen

The app will immediately open in a floating window on top of your current activity. This is the full version of the app, and it works exactly how it would if you opened it normally. You can’t resize the app bubble, but on large-screen devices, you can choose which side it’s on. To minimize the bubble, simply tap outside of it or do the Home gesture—you won’t actually go to the home screen.

Multiple apps can be bubbled together—just repeat the process above—but only one can be shown at a time. This is a key difference compared to One UI’s pop-up windows, which can be resized and tiled anywhere on the screen. Here is also where things vary depending on the type of device you’re using.

If you’re using a phone, the current bubbled apps appear in a row of shortcuts above the window. Tap an app icon, and it will instantly come into view within the bubble. On foldables and tablets, the row of icons is much smaller and below the window.

Another difference is how the app bubbles are minimized. On phones, they live in a floating app icon (or stack of icons) on the edge of the screen. You are free to move this around the screen by dragging it. Tapping the minimized bubble will open the last active app in the bubble. On foldables and tablets, the bubble is minimized to the taskbar (if you have it enabled).

Bubbles on a foldable screen

Now, there are a few things to know about managing bubbles. First, tapping the “+” button in the shortcuts row shows previously dismissed bubbles—it’s not for adding a new app bubble. To dismiss an app bubble, you can drag the icon from the shortcuts row and drop it on the “X” that appears at the bottom of the screen.

To remove the entire bubble completely, simply drag it to the “X” at the bottom of the screen. On phones, there’s also an extra “Manage” button below the window with a “Dismiss bubble” option.

Better than split-screen?

Bubbles make sense on smaller screens

That’s pretty much all there is to it. As mentioned, there’s definitely not as much freedom with Bubbles as there is with pop-up windows in One UI. The latter allows you to treat apps like windows on a computer screen. Bubbles are a much more confined experience, but the benefit is that you don’t have to do any organizing.

Samsung One UI pop-up windows

Of course, Android has supported using multiple apps at once with split-screen mode for a while. So, what’s the benefit of Bubbles? On phones, especially, split-screen mode makes apps so small that they’re not very useful.

If you’re making a grocery list while checking the store website, you’re stuck in a very small browser window. Bubbles enables you to essentially use two apps in full size at the same time—it’s even quicker than swiping the gesture bar to switch between apps.

If you’d like to give App Bubbles a try, enroll your qualified Pixel phone in the Android Beta Program. The final release of Android 17 is only a few months away (Q2 2026), but this is an exciting feature to check out right now.
