Trump administration blacklisted Anthropic – now tells banks to use its AI



In short: Treasury Secretary Scott Bessent and Fed Chair Jerome Powell are urging Wall Street’s biggest banks to use Anthropic’s Mythos AI model to hunt for cybersecurity vulnerabilities in their own systems, even as the Pentagon fights Anthropic in court after branding it a supply chain risk for refusing to remove safety guardrails on autonomous weapons and mass surveillance. JPMorgan Chase, Goldman Sachs, Citigroup, Bank of America, and Morgan Stanley are all reportedly testing the model. Mythos, which found thousands of zero-day flaws across major operating systems and browsers, is being distributed through a restricted programme called Project Glasswing to roughly 50 organisations. UK regulators are also scrambling to assess the risks.

The Trump administration is quietly encouraging America’s largest banks to adopt technology from the same AI company it has spent two months trying to destroy. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell summoned executives from JPMorgan Chase, Goldman Sachs, Citigroup, Bank of America, and Morgan Stanley this week and urged them to use Anthropic’s new Mythos model to detect cybersecurity vulnerabilities in their systems, according to Bloomberg.

The recommendation is remarkable for its contradiction. Anthropic is currently fighting the Department of Defense in federal court after Defense Secretary Pete Hegseth designated the company a “supply chain risk”, a label that bars it from military contracts and directs defence contractors to stop using its technology. The designation came after Anthropic refused to remove two safety restrictions from its AI models: no use in fully autonomous weapons, and no deployment for mass surveillance of American citizens.

Now, two of the administration’s most senior economic officials are telling Wall Street to adopt the very product the Pentagon has tried to blacklist.

What Mythos actually does

Claude Mythos Preview is a frontier model that Anthropic did not explicitly train for cybersecurity. The vulnerability-finding capability emerged as what the company describes as a downstream consequence of general improvements in code reasoning and autonomous operation. During testing, Mythos identified thousands of zero-day vulnerabilities (flaws previously unknown to software developers) across every major operating system and web browser.

The capabilities were significant enough that Anthropic chose not to release the model publicly. Instead, it launched Project Glasswing, a controlled programme giving access to roughly 50 organisations including Amazon Web Services, Apple, Google, Microsoft, Nvidia, Cisco, CrowdStrike, Palo Alto Networks, and JPMorgan Chase. Anthropic has committed up to $100 million in usage credits and $4 million in direct donations to open-source security organisations as part of the initiative.

The framing, a model “too dangerous to release”, has drawn scepticism. Tom’s Hardware noted that claims of “thousands” of severe zero-day discoveries relied on just 198 manual reviews, and that many of the flagged vulnerabilities were in older software or were impractical to exploit. Others in the security community suggested the restricted release looked less like responsible AI governance and more like a smart enterprise sales strategy: create scarcity, generate fear, and let the customers come to you.

The Pentagon paradox

The collision between the Bessent-Powell recommendation and the Hegseth designation is not a matter of mixed signals; it is two branches of the same administration pursuing openly contradictory policies toward the same company.

The Pentagon dispute began in February, when Hegseth gave Anthropic CEO Dario Amodei a Friday deadline to drop the company’s safety restrictions or lose its $200 million defence contract. Amodei refused. Hegseth responded by declaring Anthropic a supply chain risk, and President Trump ordered federal agencies to stop using its technology. A Pentagon official accused Amodei of having a “God complex.” Trump called Anthropic a “radical left, woke company.”

The courts have since split. A federal judge in California issued a preliminary injunction blocking the supply chain designation, writing that “nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the US for expressing disagreement with the government.” An appeals court in Washington, D.C., denied Anthropic’s request to temporarily halt the blacklisting while the case proceeds. The net effect: Anthropic is excluded from DoD contracts but can continue working with other government agencies.

It is into that gap, excluded from the Pentagon but not from the Treasury or the Fed, that Bessent and Powell stepped this week.

What the banks are actually doing

JPMorgan Chase was the only bank listed as an official Project Glasswing partner, but Bloomberg reported that Goldman Sachs, Citigroup, Bank of America, and Morgan Stanley are all testing Mythos internally. The use cases reportedly include vulnerability detection, fraud-risk flagging, and compliance workflow automation across financial systems.

The speed of adoption reflects a genuine fear. If Mythos can find zero-day vulnerabilities in operating systems and browsers, it can presumably find them in banking infrastructure too, and so can any sufficiently capable model that follows. The defensive logic is straightforward: better to find the holes before an adversary’s AI does.

The regulatory response has been international. The Financial Times reported that UK officials at the Bank of England, the Financial Conduct Authority, and HM Treasury are in discussions with the National Cyber Security Centre to examine potential vulnerabilities highlighted by Mythos. Representatives from major British banks, insurers, and exchanges are expected to be briefed within the fortnight.

The uncomfortable implication

The Mythos episode exposes a structural problem in the administration’s approach to AI. The same government that branded Anthropic a national security threat because it refused to remove safety guardrails is now urging the financial system to depend on Anthropic’s technology for its own security. The message to Anthropic is incoherent: you are too dangerous to trust with defence contracts, but indispensable enough that the Treasury Secretary personally phones bank CEOs to recommend your product.

For Anthropic, the contradiction is strategically useful. Every bank that adopts Mythos deepens the company’s integration into critical national infrastructure, making the supply chain designation look increasingly absurd. For the administration, the episode reveals what happens when national security policy is driven by personal grievance rather than coherent strategy: the left hand blacklists what the right hand is busy deploying.

The banks, for their part, appear untroubled by the contradiction. When the Treasury Secretary and the Fed Chair tell you to test something, you test it, regardless of what the Pentagon thinks about the company that made it.



Do you ever walk past a person on the streets exhibiting mental health issues and wonder what happened to their family? I have a brother—or at least, I used to. I worry about where he is and hope he is safe. He hasn’t taken my call since 2014.

James and his brother as young children playing together before his brother became sick. James is on the right and his brother is on the left.

When I was 13, I had a very bad day. I was in the back of the car, and what I remember most was the world-crushing sound violently ringing off every surface: he was pounding his fists into the steering wheel, and I worried it would break apart. He was screaming at me and my mother, and I remember the web of saliva and tears hanging over his mouth. His eyes were red, and I knew this day would change everything between us. My brother was sick.

Nearly 20 years later, I still have trouble thinking about him. By the time we realized he was mentally ill, he was no longer a minor. The police brought him to a facility for the standard 72-hour hold, where he was diagnosed with paranoid delusional schizophrenia. Concluding he was not a danger to himself or others, they released him.

There was only one problem: at 18, my brother told the facility he was not related to us and that we were imposters. When they let him out, he refused to come home.

My parents sought help and even arranged for medication, but he didn’t take it. Before long, he disappeared.

My brother’s decline and disappearance had nothing to do with the common narratives about drug use or criminal behavior. He was sick. By the time my family discovered his condition, he was already 18 and legally independent from our custody.

The last time he let me visit, I asked about his bed. I remember seeing his dirty mattress on the floor beside broken glass and garbage. I also asked about the laptop my parents had gifted him just a year earlier. He needed the money, he said—and he had maxed out my parents’ credit card.

In secret from my parents, I gave him all the cash I had saved. I just wanted him to be alright.

My parents and I tried texting and calling him; there was no response except the occasional text every few weeks. But weeks turned into months.

Before long, I was graduating from high school. I begged him to come. When I looked in the bleachers, he was nowhere to be seen. I couldn’t help but wonder what I had done wrong.

The last time I heard from him was over the phone in 2014. I tried to tell him about our parents and how much we all missed him. I asked him to be my brother again, but he cut me off, saying he was never my brother. After a pause, he admitted we could be friends. Making the toughest call of my life, I told him he was my brother—and if he ever remembers that, I’ll be there, ready for him to come back.

I’m now 32 years old. I often wonder how different our lives would have been if he had been diagnosed as a minor and received appropriate care. The laws in place do not help families in my situation.

My brother has no social media, and we suspect he traded his phone several years ago. My family has hired private investigators over the years, who have also worked with local police to try to track him down.

One private investigator’s report indicated an artist befriended my brother many years ago. When my mother tried contacting the artist, they said whatever happened between them was best left in the past and declined to respond. My mom had wanted to wish my brother a happy 30th birthday.

My brother grew up in a safe, middle-class home with two parents. He had no history of drug use or criminal record. He loved collecting vintage basketball cards, eating mint chocolate chip ice cream, and listening to Motown music. To my parents, there was no smoking gun indicating he needed help before it was too late.

The next time you think about a person screaming outside on the street, picture their families. We need policies and services that allow families to locate and support their loved ones living with mental illness, and stronger protections to ensure that individuals leaving facilities can transition into stable care. Current laws, including age-based consent rules, the limits of 72-hour holds, and the lack of step-down or supported housing options, leave too many families without resources when a serious diagnosis occurs.

Governments and lawmakers need to do better for people like my brother. As someone who thinks about him every day, I can tell you the burden is too heavy to carry alone.

James Finney-Conlon is a concerned brother and mental health advocate. He can be reached at [email protected].
