Barnes Tech


I try not to post about politics very often. The state of things is genuinely terrible, and dwelling on it is bad for my mental health. But this story sits squarely at the intersection of AI, big tech, and government overreach — topics I write about often — and I think it’s important enough to talk about.

Full disclosure: I use Anthropic’s products every day. Claude Code is my most-used tool. I have every reason to be biased here. But the facts of this situation are so outrageous that bias barely matters. What the Pentagon just did to Anthropic should alarm anyone who cares about free enterprise, the rule of law, or the basic idea that the government shouldn’t be able to destroy an American company because it lost a contract negotiation.

What Happened

Last summer, Anthropic was one of four AI companies — alongside Google, OpenAI, and xAI — awarded contracts with the Pentagon worth up to $200 million each. Anthropic’s contract included their standard usage policy, which the Pentagon agreed to at the time.

In January, Defense Secretary Pete Hegseth issued an AI strategy memo directing that all Pentagon AI contracts include “any lawful use” language — meaning AI companies must remove all safeguards and let the military use their models for anything not explicitly illegal. This was a direct collision with Anthropic’s contract terms.

Anthropic didn’t refuse to work with the military. They were the first AI company to deploy on classified military networks. They were the first to deploy at the national labs. Claude was used in the operation that captured former Venezuelan president Nicolás Maduro. They have an extensive partnership with Palantir. They voluntarily cut off hundreds of millions in revenue from Chinese firms linked to the CCP.

Anthropic drew exactly two red lines:

  1. No mass domestic surveillance: using AI to surveil American citizens at scale.
  2. No fully autonomous weapons: AI systems that select and engage targets without any human in the loop.

That’s it. Not “we won’t work with the military.” Not “we object to military operations.” Two specific guardrails on capabilities that don’t even exist reliably yet and that most Americans would consider common sense.

The Pentagon’s response was to demand Anthropic accept the new terms unconditionally by Friday evening or face consequences.

The Punishment

When Anthropic held their ground, the administration escalated to something unprecedented. Trump called them “Leftwing nut jobs” on social media and directed federal agencies to stop using their products. Hegseth designated Anthropic a supply chain risk.

This designation has never been used against an American company. It was created to deal with foreign adversaries like Huawei — companies suspected of espionage or embedding malware in American infrastructure. Using it as a cudgel against a domestic company because you don’t like how contract negotiations are going is, as Scott Alexander put it, “insane Third World bullshit.”

The designation doesn’t just cut Anthropic off from military contracts. It bans any contractor, supplier, or partner that does business with the US military from doing any commercial activity with Anthropic. Given how many companies do some business with the government, this could be fatal to Anthropic’s business. That’s the point. It’s not a policy disagreement. It’s punishment.

And the whole thing is self-contradictory. You can’t simultaneously designate a company a supply chain risk and threaten to invoke the Defense Production Act to force it to provide its technology because that technology is essential to national security. As Lawfare noted, the two threats cancel each other out: one labels Anthropic a security risk, the other labels Claude critical to national defense. Lawfare’s follow-up analysis called it “designation as political theater: a show of force that will not stick.”

The Broader Context

This isn’t just about Anthropic. This is the US government demonstrating that it can threaten to destroy any American company, with no legal review, for failing to comply with demands that weren’t even in the original contract. Every company that does business with the government should be concerned.

As former Trump White House AI advisor Dean Ball wrote: “Even if Secretary Hegseth backs down and narrows his extremely broad threat against Anthropic, great damage has been done. Most corporations, political actors, and others will have to operate under the assumption that the logic of the tribe will now reign.”

The Defense Production Act was designed for wartime rationing of steel and aluminum, not for forcing a software company to retrain its AI model to enable mass surveillance. It has never before been used to brand a domestic company a supply chain threat, and noncompliance with DPA orders carries criminal penalties. The government is wielding a Korean War-era statute to bully a company whose only offense was including guardrails in a contract the Pentagon originally agreed to.

And the irony is suffocating. When Biden used the DPA’s lightest authority — Title VII, information gathering only — to require AI companies to report on training activities, Republicans called it dangerous overreach. Trump promised to repeal it. Now they’re threatening Title I compulsion powers, which are orders of magnitude more coercive, against a company that won’t let them build unsupervised killbots. The party of free markets and limited government, everyone.

OpenAI Swoops In

The same day Hegseth designated Anthropic a supply chain risk, Sam Altman announced that OpenAI had reached a deal with the Pentagon. He claimed it includes “technical safeguards” addressing the same issues: prohibitions on mass surveillance and human responsibility for use of force. Whether those protections have any real teeth remains to be seen, and I’m skeptical that OpenAI got the exact terms Anthropic asked for and the Pentagon is suddenly fine with them.

Over 640 Google employees and nearly 100 OpenAI employees signed an open letter asking their companies to stand with Anthropic and refuse the Pentagon’s demands. The letter argues that the government was trying to divide the companies by stoking fear that the others would cave. Credit to those employees for standing up.

Why This Matters

Anthropic is the company that has most consistently prioritized safety in AI development. You can argue they’re imperfect, that they’re too cautious, that their models are annoying sometimes. But they are the only major AI lab that drew a line in the sand and held it when the most powerful institution on Earth threatened to destroy them for it.

Dario Amodei said it plainly in his CBS interview: the supply chain risk designation is “retaliatory and punitive,” and Anthropic drew these red lines because “we believe that crossing those lines is contrary to American values, and we wanted to stand up for American values.”

He’s right. And the fact that standing up for the idea that maybe the government shouldn’t conduct mass AI surveillance on its own citizens, or deploy autonomous weapons without human oversight, can get your company destroyed — that tells you everything you need to know about the people making these demands.

Anthropic says they’ll challenge the designation in court. Based on Lawfare’s legal analysis, they’ll likely win. But the damage in the interim is real, and the chilling effect on every other company is exactly what this administration wants.

The Government Is Trying to Destroy Anthropic
https://barnes.tech/blog/the-government-is-trying-to-destroy-anthropic
Author: Barnes Tech Blog
Published: March 2, 2026