Treasury Nudges Wall Street Toward Anthropic Mythos

Trump officials push banks to test Anthropic’s Mythos security model, revealing how AI security power is being shaped behind closed doors.

Trump officials push banks to test Anthropic’s Mythos security model in a way that makes the politics around the product more important than the product itself.

Imagine showing up to Treasury for the usual regulator kabuki and getting told, in very polite Washington language, that you should really test a specific AI model from Anthropic because it’s apparently so good at finding vulnerabilities that even the people who built it are handling it like uranium.

That’s not a normal meeting. That’s a summons with better catering.

And honestly, that’s the real story behind the headline that Trump officials push banks to test Anthropic’s Mythos security model. Not just the model. The choreography. The fact that Washington is now doing this deeply American thing where people argue about ideology in public and pick preferred vendors in private.

A founder friend told me over a truly offensive $7.50 espresso in New York, “The government doesn’t want to regulate AI. It wants a shortlist.” Brutal. Also correct.

Treasury Applied Pressure on Banks

According to Bloomberg, later cited by Fortune and TechCrunch, Treasury Secretary Scott Bessent and Fed Chair Jerome Powell brought major bank leaders into Treasury headquarters and encouraged them to test Anthropic’s Mythos model for vulnerability discovery.

If you’ve never dealt with finance or regulators, “encouraged” might sound soft. It isn’t. In that world, an encouragement from Treasury lands somewhere between a recommendation and a threat from God.

The guest list tells you this wasn’t some side conversation. Jane Fraser from Citi. Ted Pick from Morgan Stanley. Brian Moynihan from Bank of America. Charlie Scharf from Wells Fargo. David Solomon from Goldman. Fortune also reported Jamie Dimon was invited, even if he didn’t show.

Yes, banks test vendor tools all the time. But there’s a huge difference between a security team quietly piloting some product and the Treasury Secretary plus the Fed Chair all but saying, maybe start with this one. That’s not the market being the market. That’s the state leaning on the scale while pretending it’s just resting a hand there.

Fortune described it as an “emergency meeting.” Washington does not use that phrase because everyone was bored on a Tuesday.

Anthropic Is Fighting Washington While Selling to It

This is where the story stops being cybersecurity and starts feeling like satire.

TechCrunch and Fortune reported that Anthropic is also battling the Trump administration in court over a Defense Department designation calling it a supply-chain risk, after talks broke down over limits on government use of its models.

So in one part of government, Anthropic is risky enough to get blacklisted. In another, senior officials are nudging the biggest banks in America to evaluate its tech.

I don’t even think hypocrisy is the right word. It looks like a government admitting, without saying it out loud, that it may distrust a company politically and still feel operationally dependent on it. That’s the AI era in one sentence.

Fortune reported Anthropic briefed senior U.S. officials and industry people ahead of Mythos’s release. Axios said agencies including CISA and the Commerce Department were briefed too. So this wasn’t one of those launches where everyone discovers the news at the same time. The state was already in the loop.

Anthropic also said it would work with officials “at all levels of government” to prioritize national security and preserve the U.S. lead in AI. That’s polished PR language, sure. It’s also a clean description of how power now moves: not through law first, but through briefings, access, alignment, and a bunch of meetings normal people never hear about.

I’ve seen smaller versions of this movie a hundred times in tech. A startup says it’s independent. Government says it’s cautious. Then both sides quietly get entangled because nobody wants to be the idiot who moved too slowly.

Mythos May Be Real, but the Marketing Is Too

I’m not going to do the lazy thing and call the whole thing hype. That’s too easy, and probably wrong.

According to Anthropic’s own Frontier Red Team and Project Glasswing posts, Mythos found thousands of zero-day vulnerabilities across every major operating system and every major web browser. Anthropic says more than 99% of what it found is still undisclosed because the bugs haven’t been patched yet.

The examples are wild enough to cut through the usual AI fog. Anthropic says Mythos found a 27-year-old vulnerability in OpenBSD. It says it found a 16-year-old FFmpeg bug in code that automated testing had hit five million times without catching it. It reportedly chained Linux kernel bugs to go from regular user access to full machine control.

Anthropic also says the model did much of this “entirely autonomously, without any human steering.” That’s the line that changes the temperature. AI helping security researchers is one story. AI independently doing exploit-relevant work that used to require scarce experts is a much bigger one.

Still, the theater is obvious. WIRED had the right read: Mythos may be a real milestone, but it also sits inside a broader trend where AI agents are already making bug-finding and exploitation cheaper and faster. So Anthropic may be overselling the uniqueness while still being directionally right about where this is going.

WIRED quoted Alex Zenla, CTO of Edera, reacting to the development: "I typically am very skeptical of these things, and the open source community tends to be very skeptical, but I do fundamentally feel like this is a real threat."

Banks Are Testing More Than a Model

This is the part people keep missing.

If a bank starts relying on a frontier model to find vulnerabilities faster than its internal teams can, that’s not just a tooling choice. It’s a dependency choice. A strategic one. If Mythos consistently catches what your people and your old scanners miss, Anthropic stops being a vendor and starts becoming part of your security nervous system.

That’s what Project Glasswing really signals. Anthropic says it’s a coordinated effort to secure critical software before broader release, with access paired with validation and remediation. WIRED reported only a few dozen organizations are getting the model, including Microsoft, Apple, Google, and the Linux Foundation. In related reporting, Fortune also named JPMorgan Chase, Amazon, and Google as partners.

That list is the whole playbook: scarcity, prestige, urgency. If you’re in, you’re important. If you’re out, good luck.

Anthropic’s own benchmarks make the gap harder to wave away. On CyberGym vulnerability reproduction, Mythos Preview scored 83.1%, versus 66.6% for Claude Opus 4.6. That’s not a rounding error. That’s the kind of jump that makes last year’s assumptions look very last year.

Once defensive capability is distributed like a private club, AI safety starts looking a lot like a sales advantage. The safest institutions may just be the ones with the best relationships and the earliest access. That’s not some neutral technical outcome. That’s power.

And banks are a special case. If a major bank’s security posture degrades, that’s national infrastructure. If Wall Street starts needing a private AI lab to keep up with vulnerability discovery, this stops being a product story and becomes a sovereignty story.

The Real Panic Is the Patch Race

The clicky version of this story is obvious: AI can hack now. Very scary. Great trailer.

The real operational problem is nastier. AI-assisted offensive security compresses the time between bug discovery and exploitation, especially once models get better at exploit chains and zero-click attacks.

WIRED framed this better than most. Mythos matters less as a single doomsday product than as evidence that AI-assisted offensive security is accelerating. The old assumption was that defenders had enough time to discover, validate, disclose, patch, and move on before attackers industrialized the gap. That assumption is dying.

Anthropic’s own write-up is blunt. Mythos can allegedly turn N-day vulnerabilities into working exploits, reverse-engineer exploits on closed-source software, and discover zero-days in real open-source codebases. If you work in software security, that’s your backlog becoming a hostage note.

The FFmpeg example is especially striking. Anthropic says the vulnerable line had been hit by automated testing five million times without the bug being found.

And this isn’t just a U.S. story. The Financial Times reported that U.K. financial regulators are also discussing the risks around Mythos. Which tells you this is landing the same way everywhere: not as a neat model launch, but as a warning that patch cadence may no longer be enough.

A lot of executives still talk about cybersecurity like it’s mainly a staffing problem. Hire more analysts. Buy another tool. Add another vendor. But if offense gets compressed by AI faster than defense can validate and remediate, then the bottleneck becomes process. Bureaucracy becomes the vulnerability.
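The patch-race argument above reduces to simple arithmetic. Here is a toy back-of-the-envelope sketch of it; all the numbers below are illustrative assumptions for the sake of the example, not figures from any of the cited reporting:

```python
# Toy model of the "patch race." Illustrative numbers only.
def exposure_window_days(attacker_exploit_days: float,
                         defender_patch_days: float) -> float:
    """Days a known bug stays exploitable before the patch ships.

    Negative means defenders patch before a working exploit exists;
    positive means attackers have a live window.
    """
    return defender_patch_days - attacker_exploit_days


# Old assumption: building a reliable exploit takes longer than
# validating, disclosing, and shipping a patch.
pre_ai = exposure_window_days(attacker_exploit_days=45, defender_patch_days=30)
assert pre_ai < 0  # defenders win the race

# AI-compressed offense: the patch process is unchanged, but exploit
# development collapses from weeks to days.
post_ai = exposure_window_days(attacker_exploit_days=3, defender_patch_days=30)
assert post_ai > 0  # attackers now have a multi-week window
```

The point of the sketch is that nothing on the defender's side changed between the two cases; only the attacker's timeline did. That is why process, not headcount, becomes the bottleneck.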

Don’t Call This a Free Market

This is how AI adoption in critical sectors is going to happen. Not through elegant legislation. Not through some pristine standards body document nobody reads. Through closed-door nudges, selective access, and soft pressure from officials who absolutely do not want to be blamed later.

So if Trump officials push banks to test Anthropic’s Mythos security model, let’s at least be honest about what that is. The state is shaping winners while pretending it isn’t doing industrial policy. Recommendation theater is still intervention when the people making the recommendation can make your life miserable.

Anthropic’s own rollout makes the contradiction sharper. Mythos is under restricted release because of safety concerns. Access is limited. That may be prudent. It also creates prestige, urgency, and dependence.

Nobody says, “We are building a strategic dependency on a single frontier AI vendor whose relationship with the federal government is unstable.” They say, “We’re running a pilot.” Then legal reviews it. Then risk signs off. Then procurement negotiates. Then six months later the product is buried so deep in workflows nobody remembers when it got there.

That’s how infrastructure gets made now. Not always by law. By accretion. Quietly. Through temporary decisions and meetings nobody voted on.

And that’s why this story matters. Not because banks might use a new security tool. Of course they will. The real question is who gets to decide which private AI company becomes part of the defensive nervous system of the financial system.

Right now, that decision seems to be happening through pressure, scarcity, and fear.

That should make you more nervous than the product demo.

Because once this becomes normal, we’re not just outsourcing cybersecurity.

We’re outsourcing state capacity.
