# The Deal That Divided Silicon Valley: OpenAI Signs with Pentagon Hours After Trump Blacklists Anthropic
**Published: February 28, 2026**
You know how sometimes you watch a drama unfold where two characters start from the same place, take opposite paths, and end up on completely different sides of history?
That's what's happening right now with OpenAI and Anthropic.
Just hours after President Trump ordered all federal agencies to immediately cease using Anthropic's technology, branding the AI company a national security risk, OpenAI CEO Sam Altman announced a deal with the Pentagon to deploy OpenAI's models inside classified military networks.
The timing couldn't be more dramatic. The same ethical guardrails that got Anthropic blacklisted—demands that its AI not be used for mass surveillance or fully autonomous weapons—are reportedly baked into OpenAI's agreement. The Pentagon apparently said yes to OpenAI where it said no to Anthropic.
Let me walk you through what actually happened, why it matters, and what this means for the future of AI, national security, and the growing divide between two of the world's most important technology companies.
---
## The Short Version: What You Need to Know
**The OpenAI deal:** OpenAI reached an agreement with the Department of War (the Pentagon's new name under the Trump administration) to deploy its AI models within classified military networks.
**The safety principles:** Altman emphasized that the agreement includes two core safeguards—"prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems."
**The Anthropic blacklisting:** Hours earlier, President Trump ordered all federal agencies to stop using Anthropic's technology and directed the Pentagon to designate the company a "supply chain risk"—a label typically reserved for companies from adversary nations.
**The conflict:** Anthropic had refused to agree to "any lawful use" of its Claude models, insisting on maintaining safeguards against mass surveillance and fully autonomous weapons. The Pentagon called this unacceptable.
**The irony:** OpenAI appears to have secured the same restrictions Anthropic was punished for demanding.
**What's next:** Anthropic vows to challenge the designation in court, while employees from both companies have shown solidarity, urging their leaders to stand together.
---
## The Ultimatum: Anthropic's Last Stand
To understand what happened Friday, you need to go back to the days leading up to it.
The Pentagon had given Anthropic a deadline: agree to "any lawful use" of its Claude models, dropping specific safeguards against mass surveillance and autonomous weapons, or face consequences.
Anthropic's CEO Dario Amodei refused to budge. In a statement, the company made its position clear: "No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons."
The company laid out its concerns explicitly. They would not support uses of AI that threaten democratic values. They would not allow their technology to power mass surveillance of American citizens. They would not enable weapons systems that could kill without human oversight.
This wasn't a new position. Anthropic had built its entire brand around being the "safety-first" AI company, the one founded by former OpenAI employees who left because they worried about the rush to commercialization.
But the stakes got real very quickly.
---
## The Hammer Falls: Trump's Truth Social Directive
On Friday, President Trump took to his Truth Social platform with a message that left no room for ambiguity:
"I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology. We don't need it, we don't want it and will not do business with them again! Anthropic better get their act together and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow."
Defense Secretary Pete Hegseth followed up with an even more blistering attack. In a post on X, he accused Anthropic of "arrogance and betrayal" and a "textbook case of how not to do business with the United States Government."
He announced that the Pentagon would designate Anthropic a "Supply-Chain Risk to National Security"—a label typically applied to companies with direct ties to foreign adversaries, never before used against an American firm.
The practical effect is devastating: "Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic."
Given that Anthropic's Claude is already used by eight of the ten largest U.S. companies, this could cause massive disruption across the defense industrial base.
---
## The Deal: OpenAI Steps In
Hours later, Sam Altman posted on X:
"Tonight, we reached an agreement with the Department of War to deploy our models in their classified network. In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome."
The key paragraph came next:
"Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement."
Altman added that OpenAI would "build technical safeguards to ensure our models behave as they should" and would deploy forward-deployed engineers (FDEs) to help with safety.
Then came an appeal for fairness: "We are asking the DoW to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept. We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements."
Defense Secretary Hegseth reposted Altman's announcement. Under Secretary Emil Michael, in charge of technology at the Pentagon, added: "When it comes to matters of life and death for our warfighters, having a reliable and steady partner that engages in good faith makes all the difference as we enter into the AI Age."
---
## The Irony: Same Restrictions, Different Outcome
Here's the part that's causing whiplash in Silicon Valley.
Anthropic's position—no mass surveillance, no fully autonomous weapons—is reportedly the same set of restrictions OpenAI just secured in its agreement.
So why was Anthropic blacklisted and OpenAI embraced?
The Pentagon's official position has been that it operates within the law and that contracted suppliers cannot set terms on how their products are employed. Defense Secretary Hegseth made this explicit: "The Department of War must have full, unrestricted access to Anthropic's models for every LAWFUL purpose in defense of the Republic."
But if that's true, how did OpenAI get an agreement that includes those very restrictions?
Either the Pentagon softened its stance, or OpenAI found a way to frame the restrictions that was acceptable. Altman emphasized that the Department of War "agrees with these principles" and "reflects them in law and policy." That framing—that these aren't new restrictions, just affirmations of existing law—may have been the key.
Whatever the reason, the optics are stark. One company stood firm and got crushed. Another found a path forward and got the deal.
---
## The Solidarity Movement: "We Will Not Be Divided"
Perhaps the most surprising development came from within the AI industry itself.
Hundreds of employees from Google DeepMind and OpenAI signed an open letter titled "We Will Not Be Divided," urging their companies to rally behind Anthropic.
"We hope our leaders will put aside their differences, and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight," the letter said.
"They're trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand."
This is remarkable. Employees of OpenAI, whose company just signed a deal with the Pentagon, are publicly calling for solidarity with the competitor that got blacklisted.
Top House Democrat Hakeem Jeffries praised Anthropic's "courage" for pushing back "against this shocking invasion of privacy scheme," calling Hegseth "the least qualified Secretary of Defense in our nation's history."
---
## The Legal Fight: Anthropic's Challenge
Anthropic isn't going quietly. The company announced it will challenge the "supply chain risk" designation in court.
"We believe this designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government," the company said.
The stakes are enormous. If the designation stands, any company doing business with the Pentagon would have to certify they don't use Anthropic's products. Given Claude's widespread adoption in enterprise, that could force a massive, expensive transition.
Anthropic's statement was defiant: "No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons."
---
## The Bigger Picture: AI's National Security Crossroads
This showdown marks a pivotal moment for the AI industry.
### The Safety Philosophy Shift
Both OpenAI and Anthropic have quietly softened their safety stances in recent weeks. Anthropic's updated "Responsible Scaling Policy 3.0" removed a hard commitment to pause training if models hit dangerous capability thresholds. The company acknowledged that unilateral safety pledges "won't survive a world where rivals have no such constraints."
OpenAI, for its part, removed the word "safely" from its mission statement in late 2025—a small edit with potentially large implications.
But there's a difference between softening safety commitments and accepting military contracts. This week, that line got crossed.
### The Market Reaction
Anthropic's influence extends far beyond Washington. On Wall Street, new Claude releases have triggered what traders call the "SaaSpocalypse"—five separate stock market gyrations in four weeks.
- **Feb. 3:** Legal plugins wiped out $285 billion in market value. Thomson Reuters plunged nearly 16%. LegalZoom cratered 20%.
- **Feb. 6:** Claude Opus 4.6 launched, sending financial data stocks tumbling again.
- **Feb. 20:** Claude Code Security hit cybersecurity stocks. CrowdStrike down 8%. Cloudflare down 8%.
The point is: Anthropic matters. Its technology is embedded in the largest U.S. companies. Disrupting that relationship has consequences far beyond one startup's fortunes.
### The Growth Trajectories
The financial stakes are enormous. OpenAI just closed a $110 billion funding round at a $730 billion pre-money valuation. Anthropic, just two weeks earlier, raised $30 billion at a $380 billion valuation.
But the growth curves tell different stories. Epoch AI's modeling suggests Anthropic has been growing at about 10x annually since crossing $1 billion ARR, while OpenAI is growing at about 3.4x. If current trends hold, Anthropic's ARR could surpass OpenAI's by August 2026.
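The crossover projection is simple compound-growth arithmetic: solve for the time at which the smaller base times the faster multiple overtakes the larger base times the slower one. Here's a minimal sketch. The growth multiples come from the article, but the starting ARR figures are hypothetical stand-ins, since the piece doesn't report current revenue for either company:

```python
import math

# Growth multiples cited in the article.
ANTHROPIC_GROWTH = 10.0  # ~10x annually
OPENAI_GROWTH = 3.4      # ~3.4x annually

# Hypothetical starting ARR values in $B -- assumptions for illustration,
# NOT figures reported in the article.
anthropic_arr = 14.0
openai_arr = 30.0

# anthropic_arr * 10^t = openai_arr * 3.4^t
# => t = ln(openai_arr / anthropic_arr) / ln(10 / 3.4)
t = math.log(openai_arr / anthropic_arr) / math.log(ANTHROPIC_GROWTH / OPENAI_GROWTH)
print(f"Crossover in about {t:.2f} years ({t * 12:.1f} months)")
```

With these assumed inputs the crossover lands well under a year out; plugging in actual ARR figures would be needed to reproduce the August 2026 estimate.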
That's what makes this Pentagon showdown so significant. The government is picking sides in a commercial rivalry with billions—and potentially trillions—at stake.
---
## Table: OpenAI vs. Anthropic – The Pentagon Showdown
| **Factor** | **OpenAI** | **Anthropic** |
| :--- | :--- | :--- |
| **Pentagon Status** | Deal signed for classified network use | Blacklisted, designated "supply chain risk" |
| **Stated Restrictions** | No domestic mass surveillance, human responsibility for force | Same restrictions demanded, refused to back down |
| **CEO** | Sam Altman | Dario Amodei |
| **Recent Valuation** | ~$840 billion post-money | ~$380 billion post-money |
| **Annual Growth Rate** | ~3.4x | ~10x |
| **Key Enterprise Users** | Broad consumer base | 8 of 10 largest U.S. companies |
| **Safety Framework** | Mission statement softened, removed "safely" | RSP 3.0 removed "pause training" commitment |
| **Response to Pentagon** | Negotiated deal with safeguards | Refused to compromise, now suing |
---
## What This Means for Different People
### If You Work in AI
Your industry just got a lot more complicated. The clean lines between "ethical AI" and "military AI" just blurred. If you're at Anthropic, you're watching your company fight for its existence. If you're at OpenAI, you're watching your employer get rewarded for finding a path forward.
The employee letter—signed by hundreds from both companies—suggests the rank and file aren't happy with this divide.
### If You Care About AI Safety
This is the moment many safety advocates feared. The most safety-conscious company got punished. The company that found a way to work with the military got rewarded. The message to the industry is clear: adapt or die.
But it's not that simple. OpenAI's agreement includes restrictions. The Pentagon apparently accepted them. So maybe the message is: be flexible, not rigid.
### If You're an Investor
You're watching a $380 billion company get potentially locked out of a massive market. The "supply chain risk" designation could ripple through the entire defense industrial base. If you have exposure to companies that rely on Claude, pay attention.
The growth curves suggest Anthropic has enormous momentum. But momentum doesn't matter if you can't sell to your biggest customer.
### If You're Just Watching
You're witnessing history. The AI industry, which has operated largely outside government control, just got pulled directly into the national security apparatus. The lines between Silicon Valley and the Pentagon just got a lot blurrier.
---
## Frequently Asked Questions
**Q: What exactly happened between the Pentagon and Anthropic?**
A: The Pentagon demanded that Anthropic agree to "any lawful use" of its Claude models, dropping specific safeguards against mass surveillance and fully autonomous weapons. Anthropic refused. In response, President Trump ordered all federal agencies to stop using Anthropic, and the Pentagon designated the company a "supply chain risk."
**Q: What did OpenAI agree to with the Pentagon?**
A: OpenAI reached a deal to deploy its models within the Pentagon's classified network. The agreement includes two core safeguards: prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems.
**Q: Are OpenAI's safeguards different from what Anthropic demanded?**
A: They appear to be the same. Anthropic's position was also against mass surveillance and fully autonomous weapons. The key difference seems to be that OpenAI found a way to have those restrictions accepted, while Anthropic's refusal led to a confrontation.
**Q: Why did the Pentagon accept OpenAI's restrictions but punish Anthropic?**
A: That's the million-dollar question. Altman emphasized that the Department of War "agrees with these principles" and "reflects them in law and policy." It's possible the framing—that these aren't new restrictions but affirmations of existing law—made the difference. It's also possible the Pentagon simply preferred dealing with OpenAI.
**Q: What happens to Anthropic now?**
A: Anthropic will challenge the "supply chain risk" designation in court. If the designation stands, any company doing business with the Pentagon would have to certify they don't use Anthropic's products—a potentially massive disruption.
**Q: What did Trump say about Anthropic?**
A: On Truth Social, Trump wrote: "I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology. We don't need it, we don't want it and will not do business with them again!"
**Q: Did employees protest this decision?**
A: Yes. Hundreds of employees from Google DeepMind and OpenAI signed an open letter titled "We Will Not Be Divided," urging their leaders to stand together and refuse the Pentagon's demands.
**Q: Is OpenAI now a military contractor?**
A: In a sense, yes. The deal allows the Pentagon to use OpenAI's models in classified networks. But Altman emphasized that the agreement includes safety safeguards and that the Pentagon agreed to those terms.
**Q: What does this mean for AI safety more broadly?**
A: This is a watershed moment. The company that positioned itself as the safety leader got punished. The company that found a way to work with the military got rewarded. The message to the industry is that safety principles must be flexible enough to accommodate national security priorities.
**Q: Could this affect OpenAI's relationship with Microsoft?**
A: OpenAI and Microsoft issued a joint statement this week affirming their partnership remains strong. But this Pentagon deal, combined with OpenAI's recent $110 billion funding round that included Amazon and Nvidia, suggests OpenAI is diversifying its relationships.
---
## The Bottom Line
Here's what I keep coming back to.
Two companies started from nearly the same place. Both were founded by people who believed AI needed careful guardrails. Both built safety into their core missions. Both faced the same choice: work with the military, on its terms, or refuse.
One refused and got crushed. One found a path forward and got the deal.
**The irony** is that OpenAI's deal reportedly includes the same restrictions Anthropic was punished for demanding. The Pentagon apparently said yes to OpenAI where it said no to Anthropic. Whether that's because of different framing, different personalities, or different political calculations, the result is the same.
**The industry** is watching. The employee letter—signed by hundreds from both camps—shows the rank and file aren't happy. They see their companies being divided and conquered. They're urging solidarity.
**The government** just made a powerful statement. It will work with AI companies, but on its terms. Companies that insist on setting terms will be left behind.
**The question** for everyone else is simple: What kind of AI future do we want? One where the most safety-conscious companies get locked out of government work? One where the lines between commercial AI and military AI blur completely?
There are no easy answers. But this week, the questions got a lot more urgent.
---
*Got thoughts on the OpenAI-Pentagon deal? Worried about where this is heading? Drop a comment and let me know.*

