Anthropic Will Probably Survive Trump's Tantrum, For All the Wrong Reasons
I like Anthropic a lot, but I worry about whether its extensity makes it too big to govern.
I’m going to come out and make a bold claim.
Despite all of the administration’s posturing and threats against Anthropic over the last week or so, I predict that the company will be fine. Actually, they’ll be better than fine—Anthropic will come out ahead.
And if I’m right, I think Anthropic’s survival will prove my larger argument: that a small group of deeply powerful tech companies are not only becoming too big to fail, but too big to govern.
I like Anthropic as a company, and I use its products, so this isn’t a bash on them specifically. But their extensity concerns me—and it should be raising more eyebrows.
I laid out the extensity argument in the post above, but if you’re lazy (or an LLM), here’s how I define extensity:
Extensity describes something broad in size or scope, that becomes deeply entrenched in a system. Unlike intensity, extensity is about spread, not mastery. You become extensive not necessarily by being the best, but by spreading out and becoming indispensable to the system itself.
As I see it, extensity is similar to, but not exactly the same as, a monopoly. Technically, a company doesn’t need to reach “monopoly” status in order to be indispensable within a system. And when I say ‘a system’, I’m not restricting myself to software, networks, and infrastructure, but everything we depend on as a society.
The clearest example of extensity in recent memory is the set of banks declared ‘too big to fail’ during the 2008 financial crisis. We all discovered, rather unexpectedly, that some banks and financial institutions, including JPMorgan Chase, Bank of America, Wells Fargo, and AIG, defied meaningful regulatory constraints and market dynamics. Even though these institutions made a series of increasingly bad choices, the US government bailed them out anyway. The Bush and Obama administrations reasoned that they had become so deeply rooted within the worldwide financial system that if they went under, their collapse would have caused major, arguably irreparable damage to the global economy. Extensity is another way of saying ‘too big to fail’.
But extensity isn’t only about defying the norms of capitalism. It’s also about oversight and regulation. Or, rather, the lack thereof. As I wrote:
[W]e’ve never faced capitalism in a world where a handful of companies have managed to amass the level of power and wealth that exist today, with the ability to engineer systems that are so intertwined and spread across so much of our lives. The technology on the market today is becoming too big to control ...
[U]nlike the AT&Ts and Standard Oils of the past, the handful of companies really running the show today control the informational substrate—the algorithms and engines that shape what we see, who we talk to, how we understand reality. SpaceX, Amazon, Microsoft, Nvidia, Oracle, and Google control the infrastructure that props up the internet. OpenAI, Anthropic, Google, and Meta control the AI. Most of these companies + Oracle/TikTok control the media. Together, they’re integrated into our identities in ways that make them fundamentally harder to disentangle from.
This is why, despite all the drama and the administration’s threats-via-tweet, Anthropic is likely here to stay. But it also raises real questions about what a ‘too-big-to-govern’ Anthropic means for the world.
The battle lines are drawn
Perhaps you’ve read about the Anthropic/Department of Defense drama, but if not, here’s a brief rundown.
On February 26, after weeks of negotiation, talks broke down between the Pentagon and Anthropic after Anthropic’s CEO, Dario Amodei, refused to accede to DOD demands for unfettered “lawful”1 uses of its AI tools by the military. Anthropic was generally fine with Claude being used by the military (as it had been since July 2025), and by other strategic military contractors, such as Palantir, Amazon, Oracle, and Lockheed Martin, for everything from supply chain logistics and cyber operations to foreign surveillance and intelligence gathering.
However, Amodei drew the line at bulk domestic collection and analysis of Americans’ data and fully autonomous murderbots. So, essentially, he was against Skynet and the Terminator, at least domestically. This left the administration rather salty. A company actually having lines they wouldn’t cross?! How dare they say no to King Trump!
And so, Donald Trump responded in his typical fashion, which is to say, like a toddler with a case of explosive diarrhea right before nap time. Specifically, he “truthed” an order demanding that all federal agencies immediately cease using Claude within six months, and not-so-subtly threatening to criminally prosecute the company.
THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS!
Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology. We don’t need it, we don’t want it, and will not do business with them again! There will be a Six Month phase out period for Agencies like the Department of War who are using Anthropic’s products, at various levels. Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow.2
(Emphasis mine)
This was followed by Defense Secretary Pete Kegstand Hegseth designating Anthropic as a “supply chain risk” via tweet—a first for an American company. This designation is normally reserved for state-owned entities of actual US adversaries (like Huawei and Kaspersky Lab), but maybe Hegseth is confused and thinks the ‘State of Woke’ is an actual sovereign? I don’t know.
If the administration follows through, this would mean that not only is Anthropic banned from all Pentagon contracts and agency uses, but also that any government contractor is barred from doing any commercial business with Anthropic for any reason, even outside of the government. To quote the DUI Hire:
[N]o contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.
Now, I suspect that Hegseth lacks the authority to dictate commercial terms in this way, but I’ll let the smart folks over at Lawfare dig into those details. I also don’t know how Hegseth’s tweet is going to sit with the likes of Palantir, Microsoft, Amazon, Google, and Oracle, some of whom (Amazon and Google) also happen to be major backers of Anthropic, and major patrons of Trump’s various grifts against America.3 Though we do know Microsoft, Amazon, and Google’s positions: they’re all in for Claude, at least for non-defense work. Meanwhile, Lockheed Martin announced that it would be switching to other models so as not to upset Trump.
Bloomberg reported that Anthropic was notified directly by the DOD of their designation as a supply chain risk on March 6. Beyond that, none of the traditional, legally-binding mechanisms used by normal, functioning governments to impose high-impact policy decisions have been applied. No presidential proclamations have been made, nor have any executive orders or agency actions been publicly released.
American policy is now primarily conducted via truths and tweets, apparently.
The purpose of Trump’s threats
The goal of this spectacle, as with all Trumpian tantrums, is obviously to bully Anthropic into submission. Some agencies (including the State Department and Health and Human Services) have begun disentangling from Anthropic. But others are staying silent and biding their time, or using indirect channels to challenge what is rightly seen as a direct encroachment on capitalism and free enterprise. Personally, I think Trump’s, and particularly Hegseth’s, threats are little more than theatrics, because Anthropic isn’t some spineless US law firm or member of Congress. Anthropic is extensive within the government itself and has become invaluable to many of the largest commercial enterprises.
For example, here’s a quick summary of how Claude is (or was) deployed in the DOD, based on Anthropic’s own Feb. 27 statement:
Anthropic has therefore worked proactively to deploy our models to the Department of War and the intelligence community. We were the first frontier AI company to deploy our models in the US government’s classified networks, the first to deploy them at the National Laboratories, and the first to provide custom models for national security customers. Claude is extensively deployed across the Department of War and other national security agencies for mission-critical applications, such as intelligence analysis, modeling and simulation, operational planning, cyber operations, and more.
Reuters discovered that Anthropic’s tools were also used by the intelligence community in the capture of Nicolas Maduro in January, while the Wall Street Journal broke the news that Claude was still being used by the Pentagon against Iran days after the ban. So, the top Pentagon brass aren’t reading X and Truth Social, I guess?
And then there’s the civilian government to consider.
Anthropic was one of several AI companies to offer its AI tools at the bargain rate of $1 annually per agency, via the GSA’s 2025 OneGov initiative. Now, I’m no fool: anytime someone is giving it away, they’re bound to get takers. And boy, did Anthropic get some takers. For example, if you look over agency AI Use Case Inventories, you’ll see that Claude received a lot of agency interest. Here are just a few cases, obtained directly from 2025 inventories:
Department of Homeland Security
Customs and Border Protection—Lists Anthropic alongside Meta, OpenAI, and Google for use in document summarization and content generation as a deployed service. I believe this goes back to 2024.
US Citizenship and Immigration Services—USCIS uses Claude 3.7 Sonnet for PDF Intake, by scanning data from immigration forms for the purposes of handling government benefits.
Health and Human Services / Health Resources and Services Administration (HHS/HRSA)—The Provider Relief Fund (PRF) Program Chatbot runs Claude 3 Haiku via AWS Bedrock. It answers user queries related to the Freedom of Information Act, government accountability, and litigation matters across 2,000+ program documents.
U.S. State Department—State uses Palantir and Claude as part of StateChat, an internal chatbot used by staff for summarization, drafting, and translation.
Other agencies, including the Department of Veterans Affairs, NASA, the Office of Personnel Management, and the Energy and Commerce Departments, also use (or used) Claude in various programs.
While some agencies, like HHS and State, have publicly promised that they have shut down (or will shut down) enterprise-wide access to Claude, and others, like OPM, have cancelled pilots, many agencies have remained quiet, and few have committed to anything beyond vague mumbles about supporting the administration.
Why Anthropic will probably win in the end
The real test will be action, and specifically how fast “immediately” is in practice. Nothing in the government, even this government, moves fast or efficiently. Things take time, and there are lots of points of friction that can slow them down. Disconnecting and shifting tech stacks can take months or years in the best of times, so the idea that every government agency will be Claude-free in six months seems improbable, especially if formal regulatory guidance never materializes.
Also, there’s a reason the DOD and members of the intelligence community use Claude and not Grok for analysis and warfighting capabilities, and it’s not just the GSA deal. Claude is a superior product for many use-cases, which is why Claude was the only LLM designated to handle classified materials until the 28th of February.
And you can’t really blame people for moving slowly when the POTUS constantly chickens out when confronted by bigger, more powerful forces.

While Trump might be able to convince a court that banning Anthropic from operating within the government falls within the executive’s prerogative, I don’t think there’s a court in America that will extend this argument to Anthropic’s commercial clients. Anthropic, quite reasonably, has already said they’ll sue if he tries.
And the private sector is where Anthropic makes most of its money:
enterprise sales make up 70-75% of Anthropic’s annualized revenue
eight of the Fortune 10 companies are Claude customers
70% of Fortune 100 companies use Claude, with the company holding a 29% enterprise market share in 2025, according to Incremys
Large-scale deployments and integrations include Cognizant, Accenture, Deloitte, Microsoft, Salesforce, Google, Palantir, Amazon, and Oracle.
These companies have built their own offerings around tools like Claude Code, the Anthropic API, and its Model Context Protocol. There’s absolutely no way that they’re going to rip all of that out because Trump screamed in all-caps on Truth Social. The fact that Microsoft, Amazon, and Google went public about how they’re still on the Claude train reinforces this point.
Also, in February 2026, Anthropic received $30 billion in Series G funding from two dozen investors, including Blackstone, the Qatar Investment Authority, Sequoia Capital, and Founders Fund, as well as BlackRock, Microsoft, and NVIDIA. Coincidentally, each of these parties is a BFF of the administration, gave big money to Trump’s various PACs, gifted him a goddamn plane, or made large “donations” to his stupid ballroom. Some, like Blackstone’s Steve Schwarzman and Founders Fund’s Peter Thiel, did everything but give him a plane.
Bluntly, Trump needs Anthropic (and by extension, the hyperscalers, VC firms, and sovereign wealth funds) far more than they need him. Even if the courts do side with the administration, Trump still has to contend with all the essential companies who have funded his largesse, and the fact that so many of these donors have a vested interest in Anthropic succeeding. That’s extensity.
No matter how powerful Trump thinks he is, no dictator can afford to piss off the coalition that actually keeps him in power. So, in the end, I don’t think Trump’s theatrics will hurt Anthropic much, because there are far too many big, important, essential players who need Anthropic, and who have the leverage and ability to get Trump and Hegseth to blink first.
Amodei can afford to take a principled stance against the administration not necessarily because the company is braver or more patriotic than other AI companies, but because Amodei knows that Anthropic has extensive reach and leverage. In summing up the administration’s position, Kelsey Piper opined, “Anthropic is somehow both too dangerous to allow and essential to national security.”
This suggests these AI companies (Claude today, but maybe Gemini, ChatGPT, Grok, or some other provider tomorrow) have indeed become too integral to the systems that run our lives to be easily replaced or governed. And from a privacy and civil liberties perspective, that should bother people more than it probably does.
Supporting Anthropic’s position is easy now: murderbots and mass surveillance are indeed very bad, and AI shouldn’t be used for such things. But what happens after Anthropic IPOs and there are shareholders to placate? What happens if the company decides that its red lines have become a very murky grey?
So, we all get to wait and see who chickens out first: Anthropic and its enterprise customers, or the administration. And then we find out: will Anthropic be one of the first ungovernable companies?
1. Given how this administration seems to be fundamentally at odds with the law, or even the concept of a rules-based system, any sort of normative constraint like “lawfulness” is not very meaningful.
2. I’m surprised that more people didn’t point out the mutually exclusive conditions of ‘immediately cease’-ing with ‘a six month phase out.’
3. In a saner world, Republicans in Congress would be apoplectic, screaming about how the executive branch had been overrun by commies intent on destroying free enterprise. However, America’s Republican Party consists of neutered, lobotomized lapdogs. And also Lindsey Graham.

