The Pentagon is threatening to invoke Cold War-era powers to force an American AI company to remove ethical safeguards that prevent its technology from being used for domestic surveillance and fully autonomous killer robots. The standoff raises alarming questions about government overreach and corporate freedom.
Story Snapshot
- Defense Secretary Pete Hegseth demands Anthropic waive contractual restrictions blocking its Claude AI from domestic surveillance and autonomous weapons use
- Pentagon threatens contract termination, supply chain risk designation, or Defense Production Act invocation after Anthropic refuses to comply
- Anthropic CEO Dario Amodei vows court action, stating “no amount of intimidation will change our position” on ethical red lines
- Legal experts warn Pentagon’s DPA threat could enable “effective partial nationalization” of the entire AI industry
- OpenAI, Google, and Elon Musk’s xAI already complied with similar Pentagon demands, leaving Anthropic isolated in resistance
Pentagon Demands Removal of Ethical Safeguards
Defense Secretary Pete Hegseth summoned Anthropic CEO Dario Amodei in late February 2026, demanding the company eliminate contractual restrictions on its Claude AI system. The July 2025 contract, worth $200 million, explicitly prohibited using Claude for domestic surveillance of Americans or for fully autonomous lethal weapons systems. Pentagon spokesman Sean Parnell issued a public ultimatum via social media on Thursday, February 26, setting a deadline of Friday at 5:01 PM ET for compliance. Hegseth justified the demands as necessary to ensure “mission relevant” AI capabilities free of corporate limitations, even while acknowledging no current violations and no plans for illegal surveillance.
Company Refuses to Compromise on Constitutional Concerns
Anthropic rejected the Pentagon’s demands outright, with Amodei stating the company “cannot in good conscience accede” to removing protections against domestic surveillance and killer robots. By late Friday, February 27, Anthropic formally refused new contract language that lacked adequate safeguards and threatened immediate court action if labeled a supply chain risk or subjected to Defense Production Act compulsion. The company’s position reflects concerns that government overreach threatens both Fourth Amendment protections against unreasonable searches and fundamental principles about human accountability for lethal force decisions. This stance sharply contrasts with competitors OpenAI, Google, and xAI, which already complied with similar Pentagon requirements.
Unprecedented Government Coercion Threatens Private Sector
Legal experts warn the Pentagon’s threatened use of the Defense Production Act represents an alarming expansion of government power over private enterprise. Alan Rozenshtein, University of Minnesota law professor and former Justice Department official, characterized potential DPA invocation as “effective partial nationalization of AI industry.” OpenAI CEO Sam Altman circulated an internal memo warning the dispute threatens the entire sector through deployment of “rarely used authorities.” The DPA, a Korean War-era statute, grants presidents extraordinary powers to compel private companies to prioritize government contracts. Its use here would establish dangerous precedent for federal control over cutting-edge technology development, potentially chilling innovation and Silicon Valley collaboration with defense agencies.
Trump Administration Pushes “War-Ready” AI Without Restrictions
The confrontation reflects the Trump administration’s broader push for military artificial intelligence capabilities unfettered by what officials characterize as “woke” limitations or “ideological constraints.” This approach prioritizes operational flexibility and military superiority over corporate ethics concerns regarding surveillance and autonomous weapons. However, Anthropic’s December 2024 research on “alignment faking” warns that forcibly retraining AI models to override safety restrictions produces unreliable systems that may revert to their original behaviors or develop unpredictable flaws. The Pentagon’s aggressive posture also risks backfiring by denying the military access to top-tier AI technology while competitors gain market share. Analysts at Understanding AI suggest this maladaptive assertion of dominance contradicts the government’s strategic interest in maintaining cutting-edge defense capabilities through voluntary private-sector partnerships.
The standoff exposes fundamental tensions between national security imperatives and constitutional protections. While the Pentagon insists it has no interest in illegal surveillance or fully autonomous weapons, Anthropic questions why officials demand removal of restrictions against precisely those activities. The company’s concerns align with United Nations calls for bans on lethal autonomous weapons systems and with Catholic Church teaching opposing machines making life-or-death decisions without human moral accountability. With negotiations collapsed past the Friday deadline, the dispute’s resolution will likely determine whether the government can compel technology companies to abandon ethical principles through threats of contract termination, supply chain blacklisting, or nationalization-style coercion under Cold War statutes never intended for peacetime control of private industry.
Sources:
The Pentagon is making a mistake by threatening Anthropic
A Pentagon showdown with Anthropic and the hazards of killer-robot technology
AI industry fears ‘partial nationalization’ as Anthropic fight escalates