Sam Altman says OpenAI is renegotiating an “opportunistic and dirty” deal with the Pentagon


OpenAI says it is renegotiating its agreement with the Pentagon to add an explicit ban on using its artificial intelligence for domestic surveillance of US citizens, a provision that addresses one of the most controversial issues in the standoff between the US military and the AI industry.

In an internal memo that later circulated on social media, Sam Altman, CEO of OpenAI, said the company “should not have been in such a rush” to release the agreement on Friday.

“The issues are very complex and require clear communication,” he wrote, adding: “We were honestly trying to de-escalate matters and avoid a much worse outcome, but I think it seemed opportunistic and dirty.”

According to Altman, the new contract language will stipulate that OpenAI’s AI systems may not be “intentionally used for domestic surveillance of U.S. persons and citizens,” consistent with the Fourth Amendment, the National Security Act of 1947, and the Foreign Intelligence Surveillance Act of 1978.

Katrina Mulligan, head of national security partnerships at OpenAI and a former senior official at the Pentagon, National Security Council, and Department of Justice, said that defense intelligence components, including the National Security Agency, the National Geospatial-Intelligence Agency, and the Defense Intelligence Agency, would be prohibited from using OpenAI services under the agreement, and that any use by those agencies would require a separate contract amendment.

The renegotiated terms would also add explicit restrictions covering commercially purchased data, such as cell phone location records or fitness app information, which had been a legal gray area. According to a report in The Atlantic, rival Anthropic specifically sought similar safeguards against domestic surveillance in its own negotiations with the Pentagon. Its insistence on stricter guarantees against the use of its tools for surveillance is said to have been one of the main obstacles that ultimately led to the collapse of those talks.

Despite the renegotiated terms, legal experts have questioned the enforceability of the restrictions.

“This seems like a significant improvement over previous language regarding surveillance, and I’m glad to see it,” Charles Bullock, a senior research fellow at the Institute for Law and Artificial Intelligence, said in a post on X. He added: “It does not address concerns about autonomous weapons, nor does it claim to.”

Independent analysts and OpenAI employees also called for a process in which independent lawyers could review the entire contract and share their analysis with relevant employees.

Backlash from employees

The renegotiated terms come after OpenAI faced a wave of backlash from inside and outside the company. Altman had already acknowledged that approving the Pentagon deal hours after the Trump administration described rival Anthropic as a “supply chain risk” for rejecting a contract without explicit AI safeguards did not “sound great.” That was especially true given that Altman had previously said publicly that he supported Anthropic’s red lines around mass surveillance and autonomous weapons.

Anthropic sought to draw two hard limits in its negotiations with the Pentagon: a ban on the use of its AI for mass surveillance of American citizens, and a ban on the integration of its technology into autonomous weapons systems, defined as those capable of deciding to strike targets without direct human supervision.

Critics, including Jonathan Iwry, a fellow at the Responsible Artificial Intelligence Laboratory at the University of Pennsylvania’s Wharton School, accused OpenAI of undermining Anthropic at a critical moment.

“What is particularly disappointing is that the rest of the AI industry failed to provide support to Anthropic,” Iwry told Fortune. “If these companies were serious about their commitment to safe and responsible AI (on which some have built their reputations), they could have joined forces and stood united against the Pentagon on behalf of the public. Instead, they allowed the administration to pit them against one another as competitors in the marketplace.”

Several OpenAI employees signed an open letter supporting Anthropic after the confrontation. Consumers also showed their support, sending Claude, Anthropic’s AI assistant, to the top of Apple’s App Store charts for the first time, suggesting that users were switching in protest. Chalk messages criticizing OpenAI’s decision also appeared on the sidewalk outside its San Francisco offices.

Some OpenAI researchers have even spoken out. Aidan McLaughlin, a research scientist at the company, wrote on X that he personally didn’t think “this deal was worth it,” in a post that attracted nearly 500,000 views.
