In February 2026, the Trump administration labeled Anthropic a "supply chain risk" and banned its AI assistant Claude from all federal systems. The reason: Anthropic refused to remove safety limits that kept Claude from being used for weapons targeting and mass surveillance.
The Pentagon wanted an AI it could point at anything. Anthropic said no. So the government kicked them out.
Within days, OpenAI landed a Pentagon contract to fill the gap. The #CancelChatGPT hashtag exploded. ChatGPT uninstalls spiked 295%. People were furious at the company that said yes and the government that punished the one that said no.
That debate got wall-to-wall coverage. Should AI companies help build weapons? Fair question. But it buried a different story, one that affects far more people than the Pentagon's vendor list.
The government already does this
In February 2026, the New York Times reported that the Department of Homeland Security had sent hundreds of administrative subpoenas to Google, Meta, Reddit and Discord. The subpoenas demanded names, email addresses and phone numbers behind anonymous social media accounts that had criticized ICE or tracked the locations of immigration agents.
These weren't warrants. No judge reviewed them. DHS wrote them up, signed them and sent them directly to the tech companies. Google, Meta and Reddit complied with some of the requests.
One target was Montco Community Watch, a pair of Facebook and Instagram accounts in Montgomery County, Pennsylvania, that posted bilingual alerts about ICE sightings. About 10,000 followers. No real names. DHS wanted to know who ran them. The ACLU stepped in and DHS withdrew the subpoena before a judge could rule on whether it was lawful.
That pattern kept repeating. The government issues a demand. If nobody fights it, the company hands over the data. If someone does fight it, DHS quietly pulls the request and sends another one to somebody else.
This is not a new pattern
Governments have been demanding user data from tech platforms for over a decade. Meta alone received 323,000 government requests for user data in the first half of 2024. Roughly 82,000 of those came from the U.S., and about 77% of the U.S. requests arrived with gag orders barring Meta from telling the affected user.
Since 2013, governments worldwide have requested data on over 12 million accounts from just four companies: Apple, Google, Meta and Microsoft. The U.S. accounts for a third of those requests.
The difference between Facebook posts and AI chats is what people share. On social media, you curate. You post what you want others to see. With an AI chatbot, people type the things they'd never say out loud. Legal strategies. Medical fears. Business plans. Relationship problems. The internal monologue that used to stay internal.
OpenAI already publishes a transparency report. In the first half of 2025, they received somewhere between 0 and 249 national security requests (the law only lets them report in bands). That number will grow as ChatGPT becomes more central to how people work and think.
Where the Pentagon story connects
Go back to the Pentagon decision. The government banned the AI company with the stricter limits and picked the one with fewer. The public debate was about weapons ethics. But the data angle matters too.
Both companies' terms allow them to share user data with the government in response to legal demands. That's standard language. ChatGPT, Claude, Gemini, Copilot: every major platform reserves this right.
The government chose the vendor that would let it do more with the AI. It also chose a vendor that, like every other AI company, will hand over user data when served with the right paperwork.
Meanwhile, the same government is using administrative subpoenas (no judge required) to unmask people who post about ICE on Facebook. The tools change. The dynamic doesn't.
What this means for everyone else
The Pentagon debate is about whether AI should help build weapons. That question matters.
But for most people, the issue that hits closer to home is simpler. Everything you type into an AI chatbot lives on a server you don't control. The company's privacy policy, not your settings, decides who can see it. Governments have spent a decade getting very good at asking tech companies for user data. And the agency that's supposed to set rules for AI companies says it has no plans to.
People tell AI chatbots things they wouldn't post on Facebook. The government is already subpoenaing Facebook. It doesn't take much to see where this goes.
BeatMask catches sensitive data before it reaches any AI tool's servers. On your device, before anything is sent. Nothing leaves. Nothing is logged.
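For readers who want a concrete picture of what "on your device, before anything is sent" means, here is a minimal sketch of the general idea: scan the text locally and swap sensitive substrings for placeholders before any request leaves the machine. This is not BeatMask's actual code; the patterns, labels, and the `redact` function are illustrative assumptions only.

```python
import re

# Toy illustration of on-device redaction. NOT BeatMask's implementation;
# the patterns and placeholder labels below are illustrative assumptions.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace sensitive substrings with placeholders locally.
    Nothing in this function makes a network call or writes a log."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Email my lawyer at jane.doe@example.com, cell 555-867-5309, re: the audit."
print(redact(prompt))
# -> Email my lawyer at [EMAIL], cell [PHONE], re: the audit.
# Only the redacted string would ever be forwarded to an AI provider's servers.
```

The point of the sketch is the order of operations: the filtering happens before the prompt touches any third-party API, so there is nothing on the provider's servers for a subpoena to reach.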