Dario Amodei, CEO of Anthropic, has announced that the company will legally contest the Pentagon’s recent designation of it as a risk to U.S. national security. In a blog post, Amodei argued that the implications of the designation are less severe than initially perceived, and he sought to reassure customers about the company’s future.
According to Amodei, the Pentagon has classified Anthropic and its products, notably the Claude AI model, as a supply chain risk. The designation, typically reserved for foreign entities regarded as adversaries, such as the Chinese tech giant Huawei, is unprecedented for a U.S. firm.
The ruling requires defense contractors to certify that they do not use Anthropic’s models in their work for the Pentagon. Amodei argued, however, that the classification covers only the direct use of Claude by customers under contract with the Department of War, not a blanket prohibition on all use of Claude.
Microsoft, a key Anthropic partner, shares this interpretation and will keep Anthropic’s products available to all of its clients except those working with the military. Major cloud providers including Google and Amazon Web Services likewise affirmed their continued support for Anthropic’s offerings, stressing that the restrictions apply only to military contracts.
The tension between Anthropic and the Pentagon escalated after the company publicly reaffirmed its refusal to allow its technology to be used for mass surveillance or fully autonomous weapons systems. That stance reportedly angered Pentagon chief Pete Hegseth, who argued that the military operates within the law and that suppliers cannot dictate how their products are used.
In his blog post, Amodei also addressed a leaked internal memo in which he suggested that the actions against Anthropic were politically motivated, hinting that the company, unlike competitors such as OpenAI, had not made political donations to the Trump administration. He later characterized the memo as an outdated view, written under pressure during a difficult period for the firm.
OpenAI had initially sought to take over Anthropic’s military contract, but the move drew backlash from its own staff, leading CEO Sam Altman to call it “sloppy” and commit to revising the agreement.
Despite these challenges, the dispute has brought unexpected benefits for Anthropic, which was founded in 2021 by former OpenAI employees with a mission centered on AI safety. The conflict has driven the Claude app to the top of the download charts on both Apple’s and Google’s app stores, and the company reports that its number of paying users has doubled since the start of the year, with the app now downloaded more than a million times a day.