Governments Keep Banning AI Chatbots. Here's Why.

The same fight keeps playing out. Governments want AI but can't control the data. Companies want contracts but won't always bend. Nobody's winning.

In February 2026, the Trump administration labeled Anthropic a "supply chain risk" and banned its AI assistant Claude from all federal systems. The reason? Anthropic refused to remove restrictions that prevented Claude from being used for autonomous weapons targeting and mass surveillance.

The Pentagon wanted an AI it could point at anything. Anthropic said no. So the government kicked them out.

Within days, OpenAI landed a Pentagon contract to fill the gap. The #CancelChatGPT hashtag exploded. ChatGPT uninstalls spiked 295%. People were furious at the company that said yes and at the government that punished the one that said no.

This was dramatic. But it wasn't new. Governments and AI tools have been colliding for three years now. The specifics change. The pattern doesn't.

It started in Italy

In March 2023, Italy became the first Western country to ban ChatGPT outright. The Italian data protection authority, the Garante, said OpenAI had no legal basis for collecting and processing the personal data of millions of Italian users. No consent. No transparency. Just a chatbot hoovering up everything people typed into it.

OpenAI scrambled. They added age verification, updated their privacy disclosures, and let European users opt out of having their conversations used for training. Access came back a month later. But the Garante wasn't done. In December 2024, it fined OpenAI 15 million euros.

Italy was the canary. Other countries were watching.

Then the leaks started

Weeks after Italy's ban, Samsung engineers pasted proprietary source code into ChatGPT. Three separate incidents in 20 days. Chip designs. Internal meeting notes. Code that Samsung had spent years developing, sent straight to OpenAI's servers.

Samsung banned all generative AI tools for employees. They weren't the only ones spooked.

Across Washington, federal agencies started blocking ChatGPT. The Department of Energy. The Social Security Administration. The USDA. The VA. One by one, they pulled the plug. Nobody had policies for this stuff yet. The safe move was to shut it off and figure it out later.

In September 2023, the US Space Force banned all generative AI tools. They called it a "temporary strategic pause." The concern was simple: anything typed into these tools could end up on someone else's servers. For an agency that handles classified operations, that's not a hypothetical risk. It's a deal-breaker.

The military question

While government agencies were banning AI tools, the military was trying to figure out how to use them. That tension created one of the strangest chapters in this story.

In January 2024, The Intercept reported that OpenAI had quietly removed language from its terms of service that previously banned military and warfare applications. The old policy was explicit: don't use our tools to hurt people. The new policy just... wasn't.

OpenAI said the change was about allowing "national security use cases" that didn't involve weapons. Critics pointed out that the language was deliberately vague enough to cover almost anything.

Meanwhile, the Air Force had built its own internal chatbot, NIPRGPT, designed to run on the military's unclassified NIPRNet network. It didn't last. In 2024, the Army blocked it from its networks over data leakage concerns. The Air Force eventually shut the whole thing down in December 2025.

Even when the military built its own tools, in its own environment, it still couldn't solve the data problem.

Two companies, two choices

This is where the story gets interesting. OpenAI and Anthropic faced the same question from the same customer. The Pentagon wanted AI. Both companies could have said yes.

OpenAI removed its weapons ban and leaned in. Anthropic kept its restrictions and held firm. One got a Pentagon contract worth billions. The other got banned from federal use and labeled a threat to the supply chain.

The market reacted. Not the way you'd expect. Public backlash against OpenAI was enormous. People who'd been casually using ChatGPT suddenly realized the company behind it had made a very specific choice about what its technology could be used for. That 295% spike in uninstalls was real anger, not just a hashtag.

But the Pentagon doesn't care about hashtags. It cares about capability and compliance. Anthropic refused to comply. So Anthropic is out.

The deregulation backdrop

All of this happened against a shifting policy landscape. In January 2025, President Trump revoked Biden's AI Executive Order, which had established safety testing requirements and transparency rules for AI systems. The replacement, titled "Removing Barriers to American Leadership in Artificial Intelligence," focused on deregulation.

Translation: the guardrails were coming off. Companies that played ball with the new administration would get contracts and access. Companies that insisted on their own ethical limits would get left behind.

Anthropic's ban wasn't just about one company's terms of service. It was a signal. Play along or get out.

The pattern nobody can fix

Zoom out, and the same cycle keeps repeating. A government discovers that an AI tool collects too much data, or doesn't have the right controls, or won't bend to the right demands. They ban it. Sometimes temporarily, sometimes not.

China, Russia, and Iran block ChatGPT entirely, for their own authoritarian reasons. Italy banned it over privacy. The US Space Force banned it over security. The Trump administration banned Claude over control.

The reasons are different. The outcome is the same. Nobody has figured out how to make these tools safe enough for the people who need them most.

Governments want AI because it's genuinely useful. It can process intelligence, automate paperwork, speed up research. But every time they try to adopt it, they run into the same wall: the data goes somewhere, and they can't always control where.

Companies want government contracts because they're massive. But government use comes with government demands. Sometimes those demands are reasonable (don't leak classified data). Sometimes they're not (let us use your AI for mass surveillance). The line between the two isn't always obvious.

This isn't going to get resolved anytime soon. The technology is moving faster than the policies. The contracts are worth too much. The ethical questions are too hard.

What's clear is the pattern. Every few months, another government bans another AI tool, or another company makes a choice it can't take back. The details change. The tension doesn't. Worth keeping an eye on.
