CEO Dario Amodei says company cannot remove AI safety limits despite Pentagon pressure and possible contract cancellation threats
New Delhi: Artificial intelligence company Anthropic has refused a request from the United States Department of Defense, commonly known as the Pentagon, to allow unrestricted use of its AI systems. The disagreement became public after Anthropic's CEO, Dario Amodei, released a detailed statement explaining why the company would not remove safety limits from its technology.
Anthropic shared the statement on its official website and on X (formerly Twitter) through its verified account, @AnthropicAI. The post said, “A statement from Anthropic CEO, Dario Amodei, on our discussions with the Department of War.” The link led to a longer explanation of the company’s position.
In his message, Amodei clearly said, “Regardless, these threats do not change our position: we cannot in good conscience accede to their request.” This line made it clear that even under pressure, the company would not change its decision.
Pentagon Wants Full Access for “Any Lawful Use”
Reports say the Pentagon asked Anthropic to agree that its AI model, Claude, could be used for any lawful purpose by the US government. Defense officials say they need this freedom to use AI tools in different national security and military operations.
However, Anthropic believes that some possible uses go too far. The company says it supports the responsible use of AI in defense work but does not agree with removing the safety protections built into its systems.
Company Raises Two Main Concerns
Anthropic highlighted two specific concerns about how its AI could be used:
- Mass domestic surveillance – The company fears that AI could be used to create large-scale systems to monitor civilians, which could harm privacy and democratic freedoms.
- Fully autonomous weapons – Anthropic says today’s AI systems are not reliable enough to make life-and-death decisions on their own without human involvement.
The company stressed that humans must always remain involved in serious military decisions, especially those related to combat. It warned that allowing AI to act alone in warfare would be risky and irresponsible.
Anthropic Says It Supports National Security
Anthropic clarified that it is not against working with the US government. In fact, the company said its AI tools already help with tasks such as mission planning, running simulations, analyzing intelligence and supporting cybersecurity operations.
The company also said it has previously taken steps to protect US interests. For example, it stopped misuse of its AI systems by groups linked to the Chinese Communist Party and has supported stronger export controls on advanced AI technology.
Still, Anthropic said it cannot accept new contract terms that would force it to remove important safety guardrails from its AI systems.
Pentagon Sets Deadline and Warns of Consequences
According to reports, the Pentagon has given Anthropic a deadline to accept the revised contract terms. If the company refuses, the Defense Department could cancel a contract reportedly worth up to $200 million.
Officials are also said to be considering labeling Anthropic as a “supply chain risk.” This label is usually used for companies connected to foreign adversaries and could prevent Anthropic from getting future government contracts.
There are also reports that the US government could invoke emergency powers under the Defense Production Act to compel compliance, though no final decision has been made.
Defense officials have denied that they plan to use AI for illegal surveillance. They say all uses would follow US law. However, they continue to push for broader access to Anthropic’s AI technology.
Bigger Debate Over AI and Ethics
This dispute has started a wider discussion in the technology and policy world. Some experts have praised Anthropic for standing by its ethical limits. Others argue that private companies should not restrict military capabilities, especially when national security is involved.
The situation raises an important question: how should powerful AI systems be controlled when they are used for defense and security? As AI becomes more advanced and more connected to military systems, governments and companies must decide how to balance innovation with safety and ethics.
What Happens Next?
Discussions between Anthropic and the Pentagon are still ongoing, but the disagreement has not been resolved. Anthropic has said that if the Pentagon ends their partnership, the company will help ensure a smooth transition so that military work is not disrupted.
For now, CEO Dario Amodei has made the company’s position clear. Anthropic is willing to work with the US government, but it will not remove safety protections that it believes are necessary to protect human life and civil freedoms.
The final outcome of this dispute could have a major impact on how AI companies and governments cooperate in the future. It could also influence global rules about how artificial intelligence is used in military and defense operations.
Khushi Sikarwar is an award-winning journalist and content creator who thrives on telling stories that matter. As a key contributor to Newsisland, she focuses on cultural commentary, providing readers with thought-provoking insights.
