Judge says government's Anthropic ban looks like punishment
Summary
A federal judge in San Francisco said Tuesday that the government's ban on Anthropic looked like punishment after the AI company went public with its dispute with the Pentagon over the military's potential uses of its artificial intelligence model, Claude. U.S. District Judge Rita F. Lin made the remark at the outset of a hearing on Anthropic's request for a preliminary injunction in one of its lawsuits against the Pentagon, which has designated the company a supply chain risk, effectively blacklisting it. "It looks like an attempt to cripple Anthropic," Lin said, adding that she was concerned the government might be punishing Anthropic for openly criticizing its position. The lawsuits, filed in the U.S. District Court for the Northern District of California and the federal appeals court in Washington, D.C., allege the Trump administration violated the company's First Amendment right to speech and exceeded the scope of supply chain risk law.
## Article Content
Business
Judge says government's Anthropic ban looks like punishment
March 24, 2026
8:12 PM ET
John Ruwitch
Pages from the Anthropic website and the company's logo are displayed on a computer screen in New York on Thursday, Feb. 26, 2026.
Patrick Sison/AP
A federal judge in San Francisco said on Tuesday the government's ban on Anthropic looked like punishment after the AI company went public with its dispute with the Pentagon over the military's potential uses of its artificial intelligence model, Claude.
U.S. District Judge Rita F. Lin made the remark at the outset of a hearing about Anthropic's request for a preliminary injunction in one of its lawsuits against the Pentagon, which has designated the company a supply chain risk, effectively blacklisting it.
"It looks like an attempt to cripple Anthropic," Lin said, adding she was concerned that the government might be punishing Anthropic for openly criticizing the government's position.
Lin said she expected to make a ruling in the next few days on whether to temporarily pause the government's ban until the court decides on the merits of the case.
The hearing in the U.S. District Court for the Northern District of California is the latest development in a spat between one of the leading AI companies and the Trump administration, and it has implications for how the government can use AI more broadly.
Anthropic CEO Dario Amodei announced in late February that he would not allow the company's Claude AI model to be used for autonomous weapons or to surveil American citizens. President Trump subsequently ordered all U.S. government agencies to stop using Anthropic's products.
The Pentagon designated Anthropic as a "supply chain risk" earlier this month, citing national security concerns. That designation is normally reserved for entities deemed to be foreign adversaries that could potentially sabotage U.S. interests.
Anthropic has filed two federal lawsuits alleging that the designation amounts to illegal retaliation against the company for its stance on AI safety. It argues that the label will cost it both customers and revenue, since it will also bar Pentagon contractors from doing business with the company.
The lawsuits, filed in the U.S. District Court for the Northern District of California and the federal appeals court in Washington, D.C., allege the Trump administration violated the company's First Amendment right to speech and exceeded the scope of supply chain risk law.
In Tuesday's hearing, lawyers for Anthropic said it was apparently the first time such a designation had been made against a U.S. company.
Lin said the Pentagon has a right to decide what AI products it wants to use. But she questioned whether the government broke the law when it banned its agencies from using Anthropic, and when Defense Secretary Pete Hegseth announced that anyone seeking business with the Pentagon must cut relations with Anthropic.
She said the actions were "troubling" because they did not seem to be tailored to the national security concerns in question, which could be addressed by the Pentagon simply ceasing to use Claude. Instead, she said, it looked like the government was trying to punish Anthropic.
But a lawyer for the government argued that its actions were not retaliatory, and were based on Anthropic's disagreement with the government over how its AI model could be used — not the company's decision to speak out about it.
The government also argued that Anthropic is a risk because, theoretically, in the future the company could update Claude in a way that endangers national security.
Anthropic did not immediately respond to an emailed request for comment.
A Pentagon spokesperson said that the agency's policy is not to comment on ongoing litigation.
Tags: Anthropic, Pentagon, Defense Department, AI, Artificial Intelligence