
A dispute between the U.S. military and artificial intelligence developer Anthropic has intensified after the Department of Defense formally labeled the company a supply chain risk, a step that could affect how defense contractors use the firm’s technology.
Pentagon officials said Thursday that Anthropic and its products now fall under the designation, which takes effect immediately. The classification is usually aimed at companies tied to hostile foreign governments. Applying it to an American technology firm is unusual and could require companies working with the military to certify that they are not using Anthropic’s AI systems in government projects.
The conflict centers on Anthropic’s AI assistant Claude, a large language model used by businesses, government agencies and defense contractors. The technology has been incorporated into tools deployed in national security and intelligence environments. According to the Defense Department, the company attempted to limit how the military could use the system, a restriction officials argue interferes with lawful operational use.
Defense officials said the military must retain the authority to use critical technology without restrictions set by vendors. From their perspective, allowing a private company to block certain uses could hinder mission capabilities and put personnel in active operations at risk.
Anthropic’s leadership has pushed back strongly against that view. CEO Dario Amodei said the company proposed guardrails meant to prevent two specific applications: mass surveillance of Americans and the creation of fully autonomous weapons systems. He said those protections were aimed at broader policy issues rather than battlefield decisions made by commanders.
Amodei has also indicated the company plans to challenge the government’s decision in court. He argued the designation is not supported by law and warned that forcing contractors to quickly move away from Anthropic’s models could disrupt technology systems already in use across defense programs.
The decision could have far-reaching effects across the defense contracting industry. Companies that rely on government work are now evaluating whether they must remove Anthropic’s models from their software platforms. Lockheed Martin has already said it will comply with federal direction and explore other artificial intelligence providers. The company added that its programs do not depend entirely on a single AI vendor, which should limit the impact.
Other firms appear to be taking a more cautious approach while reviewing the details of the order. Microsoft said its legal analysis suggests collaboration with Anthropic can continue for projects that do not involve defense contracts.
Some lawmakers and policy analysts have criticized the government’s move, arguing the supply chain risk designation was originally intended to guard against technology linked to geopolitical rivals. Senator Kirsten Gillibrand said applying the authority to a domestic company could damage both national security and the United States’ position in the global AI industry.
Former national security officials have raised similar concerns. In a letter sent to lawmakers, several retired military leaders and intelligence figures warned that using the designation this way could set a precedent that discourages innovation among American technology companies working with the government.
The standoff has also intensified competition among major AI developers. Shortly after the administration moved to restrict Anthropic’s technology, OpenAI announced an agreement to deploy its own models in classified defense environments. At the same time, public attention surrounding the dispute has boosted consumer interest in Anthropic’s Claude application, which has seen a surge in downloads.
As the legal battle and policy debate unfold, the clash highlights the growing tension between national security demands and the ethical limits some technology companies seek to place on advanced artificial intelligence systems.