Anthropic Defies Pentagon Over Military AI Safeguards

Anthropic is standing firm in a high-stakes dispute with the United States government over the boundaries of artificial intelligence use in military operations. The AI company has refused to remove safety restrictions that block its technology from being used to autonomously direct weapons and conduct mass domestic surveillance of American citizens.
The conflict reached a critical point this week when Defense Secretary Pete Hegseth met with Anthropic chief executive Dario Amodei and demanded the company revise its usage policies. The Pentagon has argued that it should only need to comply with existing US law rather than the internal rules of a private technology company. Hegseth gave Anthropic a deadline of Friday at 5 p.m. to respond.
The stakes are substantial. Anthropic faces the potential loss of a $200 million Pentagon contract. Government officials have also threatened to designate the company a supply chain risk, a label that is typically reserved for companies from foreign adversaries, or to invoke the Defense Production Act to compel compliance. Legal experts have described either move as unprecedented and likely to trigger significant litigation.
Anthropic was the first artificial intelligence company to operate on US Defense Department classified networks, a distinction that gave it a historically significant position in the government’s AI infrastructure. That position has become more precarious after the Pentagon announced a separate agreement this week with xAI, the AI company founded by Elon Musk, to deploy its technology across classified networks.
The dispute intensified after Pentagon officials came to believe that Anthropic had questioned whether its AI products were used in the US military operation that resulted in the detention of Venezuelan President Nicolás Maduro in January 2026. Amodei told Hegseth during their meeting that his company had raised no such concerns with Palantir or the Pentagon, and that its existing safeguards would not interfere with the Defense Department's current operations.
At the core of the standoff are two positions Anthropic is unwilling to abandon. The company holds that its AI technology is not sufficiently reliable to control weapons systems without human intervention. It also maintains that no legal or regulatory framework currently exists to govern the use of AI in the mass surveillance of American citizens, making such applications inconsistent with its policies.
The confrontation unfolded on the same day Anthropic published a revised version of its Responsible Scaling Policy, now in its third iteration. The original policy, introduced in 2023, required the company to pause the training of more powerful AI models if their capabilities outpaced its ability to ensure their safety. That requirement has been removed in the updated version. Anthropic argued that a responsible developer halting progress while less careful competitors continued advancing could produce an outcome that is less safe overall.
The updated policy separates what Anthropic plans to do independently from what it recommends the broader AI industry adopt. It introduces a Frontier Safety Roadmap, a set of publicly declared but non-binding goals covering security, alignment, and risk management. The policy also establishes a system of Risk Reports to be published every three to six months, with external expert review required under certain conditions.
Anthropic acknowledged in its new policy that its original ambition of triggering a broad industry-wide race toward stronger safety standards had not fully materialized. Several major AI companies adopted broadly similar frameworks after Anthropic’s initial policy in 2023, but a sustained escalation of safety commitments across the industry did not follow. Government regulation on AI safety has also advanced slowly, with the political environment shifting toward economic growth and competitiveness over precautionary oversight.
Anthropic was founded in 2021 by former executives of OpenAI and has positioned itself as a safety-focused developer incorporated as a public benefit corporation. Its refusal to yield to Pentagon pressure is being closely watched by AI researchers, policymakers, and legal experts as an early test of whether private AI companies can sustain independent safety standards when confronted with direct demands from the US government.