Anthropic Challenges US ‘Supply Chain Risk’ Designation as Report Highlights AI’s Impact on Software Productivity

Anthropic plans to challenge its designation as a ‘supply chain risk’ by the United States Department of War, even as a separate report highlights how generative AI tools can significantly improve software engineering productivity

Dario Amodei-led Anthropic is preparing to challenge a designation by the United States Department of War that labels the artificial intelligence company a supply chain risk to US national security.

On Friday, Anthropic revealed that it had received a formal notice confirming the designation and said it plans to contest the decision in court.

"Anthropic received a letter from the Department of War confirming that we have been designated as a supply chain risk to America's national security. As we wrote on Friday, we do not believe this action is legally sound, and we see no choice but to challenge it in court,” Amodei said in his press statement.

“The Department's letter has a narrow scope, and this is because the relevant statute (10 USC 3252) is narrow, too. It exists to protect the government rather than to punish a supplier; in fact, the law requires the Secretary of War to use the least restrictive means necessary to accomplish the goal of protecting the supply chain,” he added.

“Even for Department of War contractors, the supply chain risk designation doesn't (and can't) limit uses of Claude or business relationships with Anthropic if those are unrelated to their specific Department of War contracts," Amodei said.

Why is Anthropic concerned about domestic surveillance and fully autonomous weapons?

Amodei said his concerns remain focused on domestic surveillance and fully autonomous weapons but added that discussions with the Department of War had been constructive.


"I would like to reiterate that we had been having productive conversations with the Department of War over the last several days, both about ways we could serve the Department that adhere to our two narrow exceptions, and ways for us to ensure a smooth transition if that is not possible,” he wrote.

“As we wrote on Thursday, we are very proud of the work we have done together with the Department, supporting frontline war fighters with applications such as intelligence analysis, modelling and simulation, operational planning, cyber operations, and more,” he added.

“As we stated last Friday, we do not believe, and have never believed, that it is the role of Anthropic or any private company to be involved in operational decision-making--that is the role of the military. Our only concerns have been our exceptions on fully autonomous weapons and mass domestic surveillance, which relate to high-level usage areas, and not operational decision-making," he wrote.

Amodei added that ensuring continuity of AI tools for national security operations remains a priority amid ongoing US military activity in West Asia.

How is Anthropic supporting US national security operations with its AI models?

"Our most important priority right now is making sure that our war fighters and national security experts are not deprived of important tools in the middle of major combat operations,” he said.

“Anthropic will provide our models to the Department of War and national security community, at nominal cost and with continuing support from our engineers, for as long as is necessary to make that transition, and for as long as we are permitted to do so. Anthropic has much more in common with the Department of War than we have differences,” he added.

“We both are committed to advancing US national security and defending the American people, and agree on the urgency of applying AI across the government. All our future decisions will flow from that shared premise," he said.

Anthropic's tools continue to be used during the United States' ongoing operation in the Middle East.

A Reuters report said the US strikes against Iran, conducted as part of Operation Epic Fury, employed an array of weapons and drew on artificial intelligence services from Anthropic.

How did the Pentagon reportedly use Anthropic’s Claude AI tools in its attack on Iran?

According to the report, the Pentagon used Anthropic's artificial intelligence services, including its Claude tools, during the strikes on Iran.

The developments come amid growing use of generative artificial intelligence across industries.

A report by Ness Digital Engineering and Zinnov found that the adoption of generative AI tools in software development can significantly improve productivity and reduce task completion time.

The report highlighted that Generative AI tools such as Copilot and CodeWhisperer have the potential to transform software engineering productivity, particularly in routine development tasks.

"Generative AI (GAI) has a significant impact on repeatable sustenance activities and reducing knowledge barriers... 70% reduction in task completion time for existing code updates... 48% reduction in task completion time for senior engineers,” it stated.

What did the Ness and Zinnov study reveal about generative AI’s impact on software engineers?

Ness and Zinnov conducted a detailed analysis of more than 100 software engineers across various use cases and development environments to assess the real-world impact of Generative AI in software development.

According to the findings, task completion time for existing code updates can be reduced by as much as 70 per cent when developers use Generative AI tools, indicating that AI can be particularly useful in repetitive coding activities and maintenance work.

The study also found that senior engineers experienced a 48 per cent reduction in task completion time when using these tools.

However, the report said the impact of Generative AI may vary depending on factors such as the experience level of engineers, the complexity of the coding task, and the development environment.

In highly complex coding tasks, productivity improvements from AI tools appear more limited.

How effective are generative AI tools in high code complexity software development environments?

The study observed that high code complexity environments saw around a 10 per cent reduction in task completion time, suggesting that skilled engineers will continue to play a crucial role in complex software development.

The report also noted improvements in collaboration and knowledge sharing within development teams.

Around 70 per cent of engineers reported improved engagement while working with Generative AI tools, with the study suggesting that such tools can reduce knowledge barriers and help developers work more effectively in distributed global teams.

Ness used its proprietary Matrix platform, a dynamic, data-driven engineering platform, to monitor key engineering performance indicators such as productivity, responsiveness, and code quality during the study.

The report concluded that Generative AI has strong transformative potential in software engineering if used appropriately, though its overall impact will depend on factors such as engineer seniority, task type, and code complexity.

(With inputs from ANI)