OpenAI has changed its terms and conditions for the Pentagon contract it won over rival Anthropic at the weekend.

The artificial intelligence (AI) developer was awarded the contract after US president Donald Trump ordered that Anthropic's solutions not be used in federal systems.

The fallout came as Anthropic sought guarantees that its technology would not be employed in surveillance of American citizens, or to power autonomous weapons.

OpenAI’s original deal with the Pentagon allowed it to use the AI technology for any lawful purpose. It also negotiated terms allowing it to uphold safety principles by installing guardrails.

On Monday evening, OpenAI founder Sam Altman posted that new additions would spell out those principles, and specifically exclude the technology from being intentionally used for domestic surveillance of US citizens and nationals.

“There was so much focus on this that we wanted to make this point especially clear, including around commercially acquired information,” Altman writes.

He adds that the Department of War has also agreed that OpenAI services will not be used by its intelligence agencies (for example, the NSA).

“For extreme clarity: we want to work through democratic processes,” Altman adds. “It should be the government making the key decisions about society. We want to have a voice, and a seat at the table where we can share our expertise, and to fight for principles of liberty.

“But we are clear on how the system works. Because a lot of people have asked, if I received what I believed was an unconstitutional order, of course I would rather go to jail than follow it.”