OpenAI Admits Pentagon Deal 'Looked Opportunistic and Sloppy' as ChatGPT Users Flee in Revolt
OpenAI CEO Sam Altman scrambled to rewrite the company's Pentagon contract after a 200% surge in ChatGPT deletions, adding explicit bans on domestic spying and NSA access. The chaos deepened when rival Anthropic, blacklisted for refusing to build autonomous weapons, turned out to have quietly kept working with the military anyway.
Sam Altman just learned what happens when you cut a deal with the Pentagon on a Friday afternoon and hope nobody notices. By Monday, the OpenAI CEO was publicly admitting the company's military AI contract "looked opportunistic and sloppy," scrambling to add new restrictions as ChatGPT users deleted the app at triple the normal rate and Anthropic's Claude—supposedly banned by the Trump administration—rocketed to the top of Apple's App Store.
The debacle began Friday when OpenAI announced it would provide AI technology for classified U.S. military operations, positioning itself as the responsible alternative after Anthropic refused to drop its red line against fully autonomous weapons. But according to Sensor Tower data reported by the BBC, the backlash was immediate and brutal: ChatGPT uninstalls surged 200% over the weekend, while Claude—the app that was supposed to be blacklisted—became the number one download on Apple's platform, where it remained as of Tuesday.
By Monday, Altman was on X doing damage control. OpenAI would now explicitly prohibit its systems from being "intentionally used for domestic surveillance of U.S. persons and nationals," he announced. Intelligence agencies like the NSA would be barred from using OpenAI's technology "without a follow-on modification to the contract." The original deal, Altman conceded, was rushed. "We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy," he wrote. "The issues are super complex, and demand clear communication."
What Altman didn't say was that OpenAI had walked straight into a trap of its own making. The company had spent Saturday claiming its Pentagon agreement contained "more guardrails than any previous agreement for classified AI deployments, including Anthropic's." That framing, positioning OpenAI as the ethical grown-up, collapsed almost immediately when users started asking what those guardrails actually meant. The answer, it turned out, was not much, at least until Monday's hasty amendments.
The Anthropic angle makes this even messier. The company was blacklisted by the Trump administration for refusing to compromise on autonomous weapons, a principled stand that should have ended its Pentagon work. Except it didn't. CBS News reported Tuesday that Claude was still in use in the U.S.-Israel war with Iran, despite the supposed ban. The Pentagon declined to comment on its dealings with Anthropic, leaving the entire situation in a fog of contradictions: the banned AI is still deployed, the approved AI needed emergency restrictions, and nobody in government seems willing to explain what's actually happening.
This isn't just corporate drama—it's a window into how AI is being integrated into warfare with almost no public oversight. According to Fox News, Undersecretary of War for Policy Elbridge Colby defended the administration's 2026 National Defense Strategy before the Senate Armed Services Committee on Tuesday, describing a shift to "NATO 3.0" where wealthy European allies would "take the lead for the conventional defense of European NATO" while the U.S. focuses on deterring China and defending the homeland. Colby called this a return to "Cold War mentality" with an emphasis on "burden sharing," but senators weren't buying it.
Senate Armed Services Committee Chairman Roger Wicker, a Mississippi Republican, said "any clear-eyed assessment of the military situation in Europe makes it clear we cannot fully delegate the Russia problem to our European allies." Ranking member Jack Reed, a Rhode Island Democrat, called the strategy a "flawed proposal" and rejected "the abdication of our clear national security interests in Europe by suggesting Russia is their problem to manage." Colby insisted the strategy didn't leave allies in danger, but rather focused U.S. resources "realistically and prudently" while accounting for "our allies' and partners' ability and will to meet those challenges."
Meanwhile, NATO Secretary General Mark Rutte was busy praising the Trump administration's Iran campaign, telling journalists in North Macedonia on Tuesday that there was "widespread support in Europe" for the strikes that killed Ayatollah Ali Khamenei. "I was on the phone with many leaders over the weekend and also early this week," Rutte said, according to The Defense Post. "I clearly sensed that taking out the nuclear capability, taking out the ballistic missile capability, and also Khamenei gone, this is applauded by many of my colleagues in NATO." Rutte emphasized that NATO was "not involved" in the operations but would "defend every inch of NATO territory."
Back in Silicon Valley, the ChatGPT exodus continued. Users weren't just deleting the app—they were making a statement about corporate complicity in warfare. The fact that Anthropic's Claude became the top download despite being officially banned suggests people care more about a company's stated principles than whether those principles are actually enforced. It's a strange moment: the AI that refused to build killer robots is still being used in combat, while the AI that agreed to work with the Pentagon had to rewrite its contract after a user revolt.
The real story here is what's happening in the shadows. AI is already embedded in military operations through companies like Palantir, whose Maven platform integrates satellite data, intelligence reports, and commercial AI systems to help commanders make "faster, more efficient, and ultimately more lethal decisions," as Louis Mosley, head of Palantir's UK operations, told the BBC. The UK Ministry of Defence just signed a £240 million contract with Palantir. NATO is using it. Ukraine is using it. And unlike OpenAI or Anthropic, Palantir doesn't support a blanket ban on autonomous weapons—it just wants a "human in the loop."
But as Professor Mariarosaria Taddeo of Oxford University pointed out to the BBC, with Anthropic sidelined, "the most safety-conscious actor was now out of the room. That is a real problem." The concern isn't just about autonomous weapons; it's about AI systems that can hallucinate or make mistakes being integrated into life-and-death decisions. Lieutenant Colonel Amanda Gustave, chief data officer for NATO's Task Force Maven, insisted there's always "a human in the loop" and that "it would never be the case that an AI would make a decision for us." But as AI gets faster and more persuasive, that human oversight becomes harder to maintain.
OpenAI's weekend disaster reveals the deeper tension: tech companies want to be seen as responsible while also securing lucrative government contracts, but users are increasingly skeptical that the two can be reconciled. Altman's admission that the deal looked "opportunistic and sloppy" is rare corporate candor, but it doesn't resolve the underlying problem. If the most safety-focused AI companies are either banned or forced to water down their principles, who's left to push back when the Pentagon asks for something dangerous?
The answer, increasingly, is nobody. And that's exactly what has ChatGPT users deleting their accounts and downloading Claude instead, even if Claude is secretly still working for the military. At least Anthropic tried to draw a line. OpenAI just drew one on Monday after the backlash made it impossible not to.