The intersection of artificial intelligence ethics and military power has arrived in a San Francisco federal courtroom, with Microsoft filing an amicus brief in support of Anthropic in the AI company’s fight against the Pentagon. Microsoft argued that a temporary restraining order was urgently needed to prevent immediate harm to businesses that have integrated Anthropic’s technology into their operations. The tech giant was joined by Amazon, Google, Apple, and OpenAI, all of which have filed in support of Anthropic in what has become a landmark case for the future of AI governance.
The dispute traces back to a $200 million contract that would have allowed the Pentagon to deploy Anthropic’s AI on classified military systems at a time when the US was preparing military operations against Iran. Anthropic insisted that the contract prohibit use of its technology for mass domestic surveillance or autonomous lethal weapons, conditions the Pentagon was unwilling to accept. Defense Secretary Pete Hegseth labeled Anthropic a supply-chain risk, a designation that has since led to the cancellation of the company’s existing government contracts.
Microsoft’s position as both a Pentagon contractor and an integrator of Anthropic’s AI into military systems gives its court filing particular weight. The company participates in the $9 billion Joint Warfighting Cloud Capability contract and has struck additional multibillion-dollar deals with federal agencies. Microsoft publicly called for a collaborative approach to AI governance, arguing that responsible use of AI and robust national defense are complementary rather than competing goals.
Anthropic filed two lawsuits simultaneously, challenging the supply-chain risk designation in both California and Washington DC courts. The company argued that the designation, typically reserved for firms with ties to foreign adversaries, was being used to punish it for advocating AI safety. Anthropic’s filings disclosed that it does not yet consider Claude safe or reliable enough for autonomous lethal operations, and said this uncertainty was the direct basis for the restrictions it sought to include in the contract.
As the legal battle progresses, Congress is separately investigating whether AI tools were used in a US military strike in Iran that reportedly killed more than 175 civilians at an elementary school. Lawmakers are demanding clarity on the role of AI in targeting and the extent of human oversight in military operations. The answers to these questions may determine not just Anthropic’s fate, but the ethical framework within which all AI companies must operate when dealing with the US government.