Dario Amodei Just Declared War on Sam Altman — And He's Winning
Anthropic's CEO sent a scathing 1,600-word memo calling OpenAI's Pentagon deal 'safety theater' and Altman's messaging 'straight up lies.' As ChatGPT uninstalls surge 295% and Anthropic climbs to #2 in the App Store, Silicon Valley's AI civil war just went nuclear.
There's a moment in every corporate rivalry when the gloves come off and someone says what everyone's been thinking. Dario Amodei just had that moment, and he didn't whisper it — he put it in writing to his entire company. In a 1,600-word memo that reads less like internal communication and more like a declaration of war, Anthropic's co-founder and CEO called out his former colleague Sam Altman by name, accusing him of peddling 'straight up lies' about OpenAI's freshly inked Pentagon deal. The phrase 'safety theater' appears multiple times. So does 'mendacious.' This isn't a disagreement between competitors. This is personal.
The proximate cause is OpenAI's announcement last week that it would provide AI services to the Department of Defense — a deal that landed mere hours before the U.S. and Israel launched strikes against Iran. The timing was, to put it mildly, unfortunate. But timing is only part of what has Amodei so incensed. According to his memo, OpenAI and Anthropic were both courted by the Pentagon. Both had opportunities to say yes. Anthropic said no. OpenAI said yes. And now, Amodei argues, Altman is trying to spin that decision as some kind of principled middle ground, 'presenting himself as a peacemaker and dealmaker' while simultaneously accusing Anthropic of being naive or unpatriotic for holding the line. That's what Amodei calls lies. Not spin. Not positioning. Lies.
What makes this memo so explosive isn't just the language — though calling a fellow CEO 'mendacious' is about as close as you get to a professional throat-punch — it's the specificity. Amodei doesn't traffic in vague accusations. He names names. He cites dollar amounts. He points out that Greg Brockman, OpenAI's president, donated $25 million to a Trump PAC, and suggests that OpenAI's willingness to work with the Pentagon is less about national security and more about political access. He claims that everyone involved — Pentagon officials, Palantir executives, political consultants — assumed the 'problem' Anthropic was trying to solve was employee relations, not actual red lines around military use. In other words, they thought Anthropic would eventually cave, that its public stance was performative. Amodei is saying: we didn't cave, and OpenAI did.
The public is responding accordingly. ChatGPT uninstalls jumped 295% in the days following the Pentagon announcement. Protesters surrounded OpenAI's San Francisco headquarters, scrawling chalk messages as part of the QuitGPT movement. Meanwhile, Anthropic's Claude app rocketed to #2 in the App Store. Amodei even gloats about this in his memo, noting that the public 'mostly see OpenAI's deal with the DoD as sketchy or suspicious, and see us as the heroes.' It's a rare moment of corporate schadenfreude delivered with the subtlety of a sledgehammer.
Altman, for his part, held an all-hands meeting on Tuesday to address employee concerns. His message was blunt: OpenAI doesn't get to make 'operational decisions' about how the military uses its AI. 'So maybe you think the Iran strike was good and the Venezuela invasion was bad. You don't get to weigh in on that,' he told staffers. It's a defensible position — companies that sell to the government rarely control end use — but it's also a tacit admission that OpenAI has ceded moral authority over its technology. Altman also warned that if OpenAI doesn't work with the Pentagon, someone else will, likely xAI, which he suggested would say, 'We'll do whatever you want.' It's a realpolitik argument: better us than them.
But here's the problem with that logic: it assumes the only two options are collaboration or capitulation. Amodei is arguing there's a third option — saying no and meaning it. OpenAI tried to thread the needle by publishing reworked contract language stating its AI 'shall not be intentionally used for domestic surveillance of U.S. persons and nationals.' Brad Carson, a former congressman now leading Americans for Responsible Innovation (which, notably, received $20 million from Anthropic), called the provision essentially meaningless. 'I've reluctantly come to the conclusion that this provision doesn't really exist, and they are just trying to fake it,' he said. The full contract hasn't been released, which only deepens suspicions.
What's unfolding here is more than a business dispute. It's a fundamental schism over what AI companies owe the world beyond their shareholders. Amodei is betting that there's a market — and a moral high ground — in being the company that says no to the Pentagon, even if it costs them contracts. Altman is betting that the government will work with whoever has the best models, and that influence from the inside is better than purity from the outside. They can't both be right. And the fact that Anthropic's Claude is now the #2 app while ChatGPT users are fleeing suggests the public has already started choosing sides.
This is also a story about the Trump administration's gravitational pull on Silicon Valley. Amodei explicitly contrasts Anthropic's refusal to donate to Trump with OpenAI's financial entanglements, framing the Pentagon deal as part of a broader pattern of accommodation. He accuses Altman of giving 'dictator-style praise to Trump,' a line that will sting precisely because it's not entirely unfair. Altman has been notably solicitous of the administration, and OpenAI's willingness to work with the Pentagon under Trump is a sharp departure from the company's earlier rhetoric about safety and alignment.
The irony, of course, is that Anthropic was founded by people who left OpenAI because they were worried it was moving too fast and prioritizing growth over safety. Amodei was one of them. Now he's accusing his former colleagues of doing exactly what he feared they would do. And he's not wrong that there's a performative quality to some of OpenAI's safety messaging — the company has a habit of announcing red lines and then quietly revising them when convenient. But Amodei's memo also reveals something about Anthropic: it sees itself as the moral alternative, the company that will hold the line even when it's costly. That's a powerful brand position, especially right now.
What happens next is anyone's guess, but the dynamics are clear. Anthropic is betting that being the 'good guy' AI company is a competitive advantage. OpenAI is betting that being the pragmatic, government-friendly AI company is the path to dominance. The market will decide, but so will employees, users, and regulators. And if the last week is any indication, Anthropic is winning the public relations war. Whether that translates into long-term success is another question. But for now, Dario Amodei has done something remarkable: he's made saying no to the Pentagon look like the smart business move. Sam Altman is going to have to live with that.