The US Department of War has ended negotiations with Anthropic over artificial intelligence (AI) safety rails, named the company a supply chain risk and signed a contract with OpenAI instead.

Anthropic, negotiating a $200-million contract, insisted on guidelines to prevent its AI technology, Claude, from being used for surveillance of US citizens or fully autonomous weapons.

When a deadline imposed by the Pentagon passed without agreement, Secretary of War Pete Hegseth used social media platform X to designate Anthropic a supply chain risk to national security.

“Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic,” he stated.

“Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service.

“America’s warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final.”

Anthropic, which has not yet received any official communication, has questioned the legality of naming it a supply chain risk.

“We believe this designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government,” Anthropic states.

“No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons. We will challenge any supply chain risk designation in court.”

Despite initially supporting Anthropic’s stand, OpenAI has now signed its own contract to replace Anthropic, which the company says does include the required guardrails.

“Yesterday we reached an agreement with the Pentagon for deploying advanced AI systems in classified environments, which we requested they also make available to all AI companies,” the company states.

“We think our agreement has more guardrails than any previous agreement for classified AI deployments, including Anthropic’s.”

It says there are three main red lines that guide its work with the DoW:

  • No use of OpenAI technology for mass domestic surveillance.
  • No use of OpenAI technology to direct autonomous weapons systems.
  • No use of OpenAI technology for high-stakes automated decisions (such as “social credit” systems).

“Other AI labs have reduced or removed their safety guardrails and relied on usage policies as their primary safeguards in national security deployments,” it continues. “We think our approach better protects against unacceptable use.

“In our agreement, we protect our red lines through a more expansive, multi-layered approach. We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections. This is all in addition to the strong existing protections in US law.”

On Thursday, workers from a number of technology companies, including Amazon, Google and Microsoft, came out in support of Anthropic’s stance, as did OpenAI CEO Sam Altman.