Judge Blocks Pentagon From Branding Anthropic a National Security Threat
A federal judge has blocked the Pentagon from labeling Anthropic a supply chain risk, finding that the government violated the company's First Amendment and due process rights. Judge Rita Lin issued a preliminary injunction shortly after hearing arguments, noting that the internal record showed the decision was retaliatory, triggered by public dissent rather than genuine security concerns.

Anthropic had refused to drop restrictions preventing its AI model Claude from being used for mass surveillance or lethal autonomous warfare. That refusal led to the Pentagon's threatened designation and subsequent blacklisting efforts after pressure from President Trump. The judge emphasized that such labels have historically been reserved for hostile foreign actors, and that punishing a domestic company for publicly disagreeing with government policy is unconstitutional. The injunction blocks the government's actions, restores the prior status quo, and requires a compliance report.

Experts suggest the ruling reinforces that AI companies can set ethical usage limits without fear of retaliatory exclusion, though the underlying tension between corporate policy and government demands remains. The case raises concerns about the misuse of statutory powers to punish companies for ethical stances, a precedent the court warned would be dangerous if left unchecked.
