US District Judge blocks government ban on Anthropic AI - JURIST - News
## Summary
A federal judge on Thursday blocked the Trump administration from designating the artificial intelligence company Anthropic as a “supply chain risk” and banning federal contractors from using its technology. US District Judge Rita Lin ruled in favor of Anthropic, halting a presidential directive that ordered all federal agencies to cease using the company’s Claude AI model. The administration had declared the company a national security threat and a supply chain risk. Lin wrote that the administration’s actions “appear designed to punish Anthropic,” adding that penalizing the company for bringing public scrutiny to the government’s contracting positions constitutes “classic illegal First Amendment retaliation.” Regarding the “supply chain risk” designation, Lin noted that the government failed to provide evidence demonstrating such a risk and bypassed the legally required procedures for making that determination.
## Article Content
A federal judge on Thursday blocked the Trump administration from designating the artificial intelligence company Anthropic as a “supply chain risk” and banning federal contractors from using its technology.
US District Judge Rita Lin ruled in favor of Anthropic, halting a presidential directive that ordered all federal agencies to cease using the company’s Claude AI model.
The legal dispute stems from contract negotiations between Anthropic and the US Department of Defense. The Pentagon had introduced plans to accelerate its use of AI to rapidly process intelligence data and increase military efficiency. During negotiations, Anthropic insisted on implementing safety guardrails, including a restriction prohibiting the use of its technology for the mass surveillance of American citizens. In response, a Pentagon official stated that the military only issues lawful orders.
Following the disagreement over the terms of service, President Donald Trump publicly criticized Anthropic in February. Trump stated the company made a “disastrous mistake” by attempting to force the Defense Department to adhere to its corporate policies, arguing the move put American lives at risk. The administration subsequently declared the company a national security threat and a supply chain risk.
Anthropic responded by filing a lawsuit against the federal government. The company argued that the administration’s designation violated the Administrative Procedure Act (APA) and the First Amendment, characterizing the ban as retaliation for exercising its free speech rights regarding the ethical use of its technology.
Judge Lin agreed with Anthropic in her decision. She wrote that the administration’s actions “appear designed to punish Anthropic,” adding that penalizing the company for bringing public scrutiny to the government’s contracting positions constitutes “classic illegal First Amendment retaliation.”
Furthermore, regarding the “supply chain risk” designation, Lin noted that the government failed to provide evidence demonstrating such a risk and bypassed the legally required procedures for making that determination.