Sam Altman Announces OpenAI Deal With US Government After Anthropic Ban

Sam Altman highlights OpenAI’s mission of safety and benefit-sharing in landmark US defense agreement

New Delhi: OpenAI CEO Sam Altman announced that the company has signed a formal agreement with the United States Department of War to deploy its artificial intelligence models inside the department’s classified network. He shared the news shortly after President Donald Trump ordered all federal agencies to stop using AI systems made by Anthropic and moved to label the company as a possible national security supply-chain risk.

The White House took this step after a disagreement between Anthropic and the Department of War over how AI systems should be used in defense work. Reports say officials asked Anthropic to remove certain safety conditions from its contract so its AI models could be used for all lawful purposes without limits. Anthropic refused, saying that removing those protections would go against its core ethical principles, especially regarding surveillance and autonomous weapons. The company has said it plans to challenge the government’s decision in court.

Sam Altman’s Statement on the Agreement

In a detailed post on X, Sam Altman confirmed the agreement and explained the reasoning behind it. He wrote that OpenAI had that night reached an agreement with the Department of War to deploy its models in the department’s classified network, and said that throughout the discussions the department showed deep respect for safety and a clear desire to work together toward the best possible outcome.

Altman said that AI safety and making sure its benefits reach many people are at the heart of OpenAI’s mission. He explained that two of the company’s most important safety rules are banning domestic mass surveillance and making sure humans remain responsible for decisions involving the use of force, including in autonomous weapon systems. According to Altman, the Department of War agrees with these principles, reflects them in law and policy, and included them in the agreement.

He also said that OpenAI will build technical safeguards to make sure its models behave properly, which the Department of War also wanted. The company will send forward-deployed engineers to help manage the models and ensure their safe use. OpenAI will operate its systems only on secure cloud networks within the classified infrastructure.

Altman added that OpenAI is asking the Department of War to offer the same terms to all AI companies, calling them standards that everyone should be willing to accept. He said the company strongly hopes the situation will de-escalate, moving away from legal and political fights toward reasonable agreements. He repeated that OpenAI remains committed to serving all of humanity as best it can, while acknowledging that the world is complicated, messy, and sometimes dangerous.

Safety Guidelines Embedded in the Deal

The agreement explicitly includes key safety protections. It bans the use of OpenAI’s models for domestic mass surveillance, ensures that humans remain responsible for decisions about the use of force, and adds technical controls to keep the models within the agreed safety standards.

OpenAI will deploy its models only on secure cloud systems connected to the Department of War’s classified network. The company will not allow the systems to run independently on uncontrolled or fully autonomous platforms. The forward-deployed engineers will work closely with officials to monitor performance and ensure everything follows the agreed rules.

Growing Debate Over AI in Defense

This development shows how important artificial intelligence has become in national security work. Government agencies now use advanced AI systems for data analysis, planning logistics, predicting risks, and supporting complex decisions.

At the same time, the dispute with Anthropic has brought attention to serious ethical questions about how AI should be used in military and intelligence settings. Anthropic insisted that certain safety protections, especially limits on surveillance and autonomous weapons, should never be removed. The Department of War reportedly wanted more flexibility, which led to the end of their partnership.

OpenAI’s agreement suggests that it is still possible for AI companies and the government to work together by clearly writing safety principles into contracts. However, the situation also raises questions about how these protections will be applied and enforced in the future.

What Comes Next

Anthropic’s expected legal challenge could intensify the debate over competition, politics, and the rules governing artificial intelligence in the United States. At the same time, OpenAI’s new agreement positions the company as a key partner in the country’s defense technology efforts.

As artificial intelligence becomes more deeply connected to classified and defense systems, discussions about safety, responsibility, and national security are likely to continue. Policymakers, technology companies, and the public will keep watching how governments and AI firms balance innovation with ethical responsibility in an increasingly complex world.
