
Anthropic Against the Pentagon

The Pentagon wanted Anthropic's AI for mass surveillance and autonomous weapons. Anthropic said no. Consequences were threatened.

Anthropic AI National Security
Luca Facchini

Something historic happened yesterday, and most people haven’t fully processed what it means.

President Trump ordered every U.S. federal agency to immediately stop using Anthropic’s technology. Defense Secretary Pete Hegseth declared the company a “supply chain risk to national security.”

The reason? Anthropic refused to let the Pentagon use its Claude AI model for mass domestic surveillance of Americans and fully autonomous weapons systems.

Let’s slow down and look at the full picture.

A brief history

Anthropic was founded in 2021 by Dario Amodei and a group of former OpenAI researchers who left over concerns about AI safety. The company’s founding premise was simple: build AI responsibly, with ethical guardrails baked in. Claude was designed with that philosophy at its core.

In 2024, OpenAI quietly deleted language that had explicitly prohibited the use of its technology for weapons development and military warfare. In February 2025, Google followed suit, removing its commitment not to apply its technology to weapons or surveillance. The AI industry was shifting, fast, toward military alignment.

Anthropic took a different path. In July 2025, the Pentagon awarded $200M contracts to four AI companies: Anthropic, Google, OpenAI, and Elon Musk’s xAI. Anthropic became the first AI company ever deployed on classified military networks. That was an enormous show of trust. Claude was running on Pentagon systems for intelligence analysis, operational planning, and cyber operations.

The breaking point

The dispute centered on two non-negotiable “red lines” Anthropic drew in its contract:

  1. No mass domestic surveillance of American citizens
  2. No fully autonomous weapons — AI systems that select and engage targets without human oversight

The Pentagon wants to use AI for “all lawful purposes” without limits from a private contractor. It says it doesn’t plan to use Claude for surveillance or autonomous weapons, but it won’t make that a binding promise.

CEO Dario Amodei met with Defense Secretary Hegseth on Tuesday. Hegseth reportedly threatened severe consequences. On Thursday, Amodei issued his response, saying that threats won’t change Anthropic’s position.

On Friday, President Donald Trump pulled the trigger. Anthropic was blacklisted. He ordered every federal agency to immediately stop using its technology. Meanwhile, OpenAI signed a deal with the Pentagon within hours of Anthropic’s blacklisting.

His exact words: “The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution. Their selfishness is putting AMERICAN LIVES at risk.”

The message from Washington was unmistakable: comply, or watch your competitor take your seat at the table.

As Amodei put it: the technology’s potential is “getting ahead of the law.”

Is this 1984, or…?

George Orwell imagined a world where the state deployed technology not to liberate, but to surveil and control, where the Ministry of Truth rewrote history, and the Thought Police monitored every citizen’s inner life. Oceania’s power wasn’t built on guns alone. It was built on the total visibility of the population to the state, and the total invisibility of the state to the population.

We are not there. But the architecture is being assembled, piece by piece, in full daylight. And everybody stays silent.

What’s remarkable about this moment is not that a government wants mass surveillance capability; historically, governments always have. What’s remarkable is that a private AI company looked at a $200M contract, stared down a presidential ban, and said: No. Not with our technology.

Orwell’s Winston Smith had no such ally.