A dispute over how the US military can use AI has Anthropic facing the full wrath of US President Donald Trump. The development is crucial amid the threat of a US-Iran conflict. But why has Trump suddenly gone nuclear on Anthropic? What comes next for the AI firm?
For artificial intelligence (AI) company Anthropic, Donald Trump's surprise action came without a prompt. In one sweeping Truth Social post on Friday, Trump directed all government agencies to stop using Anthropic's AI tools, the best known of which is its assistant Claude. Defence Secretary Pete Hegseth went a step further, designating the AI startup a national security threat. At the heart of the standoff is Anthropic's firm refusal to allow unbridled access to its technology for military purposes.
Call it a twist of fate, but Anthropic could hardly have anticipated this day when, around two years ago, it became the first major AI lab to work on classified US military networks. It earned a $200 million contract from the Pentagon in July 2025. Seven months later, it finds itself at the centre of Trump's latest war of choice.
“We don’t need it, we don’t want it, and will not do business with them again,” the mercurial President said, dubbing Anthropic as a “woke, radical left” company.
Anthropic has now also been designated a “supply-chain risk” — a first for an American company. This tag is normally used for foreign firms with ties to US adversaries. The move essentially blacklists it from working with the US military or contractors.
So why has Trump suddenly gone nuclear on Anthropic, and what comes next for the AI firm? Stay with us as we simplify the Trump-Anthropic controversy.
WHAT IS ANTHROPIC?
Now, before we delve into the complexities of the row, one might wonder what Anthropic is. You must have heard about ChatGPT, the AI tool developed by OpenAI. Similarly, Anthropic is an American AI company best known for building the AI assistant Claude.
The crucial aspect here is that Anthropic describes itself as an AI safety and research company. That “safety and research” framing by Anthropic is behind the latest standoff (we will come to this later).
Since 2024, Anthropic’s Claude has been used extensively by the US for sensitive military planning and operations. It allows security and intelligence officers to quickly analyse and connect vast amounts of classified data — tasks that would otherwise take humans weeks.
In fact, the AI tool was recently used to plan and execute the daring operation to capture Venezuelan President Nicolas Maduro in January.
But the company always had two red lines in its deal with the Trump administration. These eventually became the sticking point.
WHY DID TRUMP BAN ANTHROPIC?
As we mentioned earlier, Anthropic differed from other AI firms due to its safety-conscious business model. CEO Dario Amodei has consistently opposed the Trump administration using the AI tool to spy on American citizens or to power killer robots that can take lives without human input.
However, the Trump administration wanted Anthropic to turn off these safety guardrails and demanded unfettered access to Claude’s capabilities, which it said would help protect the country “lawfully”.
In essence, Trump did not want any restraints on how AI could be deployed on the battlefield. The development assumes significance amid the looming prospect of the US striking Iran in the coming days.
However, Anthropic has pushed back against Trump’s move. It plans to challenge the designation labelling it a “danger to national security” in US courts.
The label means tech giants such as Nvidia, Amazon and Google, which have deals with Anthropic, would need to cut ties with the AI firm.
“No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons,” Amodei, who recently attended the AI Summit in India, said in a statement.
WHAT NEXT FOR ANTHROPIC?
For now, Anthropic will continue to provide its services to the Department of War under a six-month transition period.
Hours after Trump’s post, OpenAI announced it had sealed a deal with the administration on the use of its AI models. Little is known about the deal’s terms, which is notable given that CEO Sam Altman had previously said OpenAI shared the same red lines as Anthropic.
But the six-month window offers little comfort, with a possible conflict with Iran looming. In such a scenario, Trump has a trick up his sleeve: the Cold War-era Defence Production Act.
“Anthropic better get their act together, and be helpful during this phase-out period, or I will use the full power of the presidency to make them comply, with major civil and criminal consequences to follow,” Trump said.
While Trump did not specify, experts pointed out that he might be hinting at the Defence Production Act.
WHAT IS THE DEFENCE PRODUCTION ACT?
The Cold War-era Defence Production Act of 1950, or DPA, essentially allows the US president to require private American companies to prioritise government contracts tied to national defence. Historically, the law has been used mainly to address supply chain or production bottlenecks, like during the COVID pandemic.
Thus, if Trump does decide to strike Iran, he can invoke the DPA to force Anthropic’s hand in prioritising government access to its AI tool. The law, however, does not give the government the power to seize companies. The move can also be challenged in court.
If Anthropic doesn’t comply, violations of the DPA can carry criminal penalties.
For Anthropic, the development could not have come at a worse time. The AI firm, which is valued at $380 billion, is planning to go public this year. The friction with the Trump administration might affect investor sentiment or impact other deals.