
OpenAI Amends A.I. Deal With the Pentagon

Source: The New York Times
Haiyun Jiang/The New York Times

The new pact includes additional protections to prevent the use of the company’s technology for mass surveillance of Americans.

By Cade Metz and Julian E. Barnes
Cade Metz reported from San Francisco and Julian E. Barnes from Washington.


After a weekend of criticism, OpenAI said on Monday that its deal to provide artificial intelligence technologies for the Defense Department’s classified systems now included additional protections to prevent its technology from being used in mass surveillance of Americans.

OpenAI announced its original agreement with the Pentagon on Friday, hours after President Trump ordered federal agencies to stop using A.I. technology made by OpenAI’s rival, Anthropic.

Under the original deal, OpenAI agreed to let the Pentagon use its A.I. systems for any lawful purpose. The company also negotiated terms that allowed it to uphold its so-called safety principles by installing specific technical guardrails on its technology.

But on Monday night, OpenAI said the deal had been amended to say its A.I. systems “shall not be intentionally used for domestic surveillance of U.S. persons and nationals” in line with relevant federal laws, which it named in the contract.

The amendment also adds that the agreement prohibits “deliberate tracking, surveillance or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.”

The Defense Department said in a statement that it “will always come to the table for reasonable discussion as we did with OpenAI. Anthropic didn’t want to do that, because they have their own personal vendettas.”

Anthropic did not immediately respond to a request for comment.

“It’s critical to protect the civil liberties of Americans, and there was so much focus on this, that we wanted to make this point especially clear,” OpenAI’s chief executive, Sam Altman, said in a social media post. He added, “Just like everything we do with iterative deployment, we will continue to learn and refine as we go.”

Mr. Altman also said the Pentagon had assured the company that its technology would not be used by defense intelligence agencies, including the National Security Agency.

In recent weeks, Anthropic had tussled with the Pentagon over how its A.I. could be used. The Defense Department demanded that it be able to use Anthropic’s A.I. system for all lawful purposes, or it would cut the company off from government business.

Anthropic said it needed terms that would ensure that its A.I. technology would not be used for domestic surveillance of Americans or for autonomous lethal weapons. But the Pentagon insisted that a private contractor like Anthropic could not decide how its tools would be used in national security work.

Anthropic and the Defense Department failed to agree on terms by a Pentagon-imposed Friday afternoon deadline. Minutes after the deadline passed, Defense Secretary Pete Hegseth declared Anthropic a “supply-chain risk to national security,” a designation that could prevent the start-up from doing business with the U.S. government.

As Anthropic and the Pentagon battled, OpenAI and Mr. Altman started their own negotiations with the Defense Department.

(The New York Times sued OpenAI and Microsoft in 2023, accusing them of copyright infringement of news content related to A.I. systems. The two companies have denied those claims.)

Mr. Altman publicly backed Anthropic’s stance that A.I. should not be used for domestic surveillance or autonomous weapons. But OpenAI was still able to complete a deal.

When OpenAI announced the original agreement, many inside and outside the company struggled to understand how it had reached an agreement while Anthropic had not.

Unlike Anthropic, OpenAI agreed that the Pentagon could use its technology for all lawful purposes. But it negotiated the right to build in safeguards that would prevent its systems from being used in ways that it did not want. The Pentagon also agreed to have some OpenAI employees work alongside government personnel on classified projects “to help with our models and to ensure their safety.”

Since then, Mr. Altman and other OpenAI employees have negotiated new language that specifically prohibits the Pentagon from using the technology for mass surveillance.

“We shouldn’t have rushed to get this out on Friday. The issues are super complex, and demand clear communication,” Mr. Altman wrote. “We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy. Good learning experience for me as we face higher-stakes decisions in the future.”
