Pentagon officially labels Anthropic a supply chain risk "effective immediately"

The decision could force military contractors to drop the company's Claude AI tools.
Text: Óscar Ontañón Docal
Published 2026-03-06

The Pentagon has designated US artificial intelligence company Anthropic a supply chain risk "effective immediately," a decision that could force defense contractors to stop using its Claude chatbot. The move follows a dispute between the Trump administration and Anthropic leadership over restrictions on how the military may use the company's AI systems.

The conflict escalated after CEO Dario Amodei refused to remove safeguards designed to prevent the technology from being used for mass surveillance of Americans or fully autonomous weapons. President Donald Trump and Defense Secretary Pete Hegseth argued those limits could endanger national security and hinder military operations.

Some contractors, including Lockheed Martin, have already begun distancing themselves from Anthropic. Meanwhile, rival OpenAI has announced a new deal with the Pentagon to deploy ChatGPT in classified military environments, potentially replacing Anthropic's technology.
