Anthropic Won’t Lift AI Safeguards Amid Ongoing Pentagon Dispute: CEO

Summary

Anthropic CEO Dario Amodei announced that the company will not remove safeguards from its Claude AI models, defying a Pentagon demand for permission to use the technology for "any lawful use," potentially including autonomous weapons and surveillance. The stance puts Anthropic's $200 million Defense Department contract at risk and could lead to the company being classified as a "supply chain risk." Anthropic remains the only major AI firm refusing to grant the military unrestricted access to its systems.

Amodei stressed that Anthropic is aligned with national security goals but argued that current AI systems are too unreliable for autonomous weapons and that mass surveillance contradicts democratic values. He also criticized the Pentagon's contradictory position of labeling Anthropic both a security risk and an essential national security partner. Although Anthropic has dropped an earlier pledge to pause training of advanced systems until safeguards were in place, it continues to maintain limits on how Claude can be used.

Observers view the Pentagon's actions as an effort to pressure both Anthropic and the broader tech industry into complying with government demands for unrestricted military use of AI. The Defense Department has not announced a final decision on Anthropic's contract.