
The Body That Wrote the EU AI Act Just Banned AI From Its Own Devices

Their IT team couldn't figure out where the data was going. They're not the only ones.

On February 17, 2026, the European Parliament did something that would be funny if it weren't so revealing. It disabled the built-in AI features on every tablet and smartphone issued to its lawmakers and staff.

The writing assistants. The summarizers. The virtual helpers. All switched off.

The reason, per an internal IT email seen by Politico: the Parliament's tech team couldn't verify where the data was going. These AI features send content to cloud servers to do their work. The IT team couldn't promise that draft laws, private emails and internal memos weren't ending up on servers outside Europe, in countries with different rules about who can ask to see them.

Their exact words: "The full extent of data shared with service providers is still being assessed. Until this is fully clarified, it is considered safer to keep such features disabled."

The irony writes itself

The European Parliament passed the EU AI Act, the world's first major law aimed at regulating artificial intelligence. It took effect in August 2024. The law requires AI providers to be transparent about how they handle data, to build in safety measures and to submit to audits.

But the AI Act is just the latest in a long line. For American readers who may not follow EU policy closely: Europe has been the world's most aggressive data privacy regulator for over a decade. The EU passed GDPR in 2016, the law that forced every website to ask about cookies and gave Europeans the right to demand companies delete their data.

Before that, Europe's top court struck down a data-sharing deal with the U.S. (Safe Harbor) after Edward Snowden revealed that American intelligence agencies were tapping into tech company servers. It struck down the replacement deal (Privacy Shield) a few years later for the same reason. EU regulators have fined Meta, Google and Amazon billions of euros over privacy violations. This is not a government that panics easily about data. They've been thinking about it longer than most.

And now the body that wrote all of those rules has concluded it can't trust AI tools on its own devices.

They pulled the plug. They also told lawmakers to check the AI settings on their personal phones and avoid letting any third-party AI apps scan their work emails or documents. The message between the lines: we can lock down the work devices, but we can't stop you from pasting a draft regulation into ChatGPT on your personal phone. Please don't.

They're not the first

Europe has been here before. In 2023, the Parliament banned TikTok from staff devices over data flowing to servers in China. The same year, Italy's data protection authority banned ChatGPT outright, the first Western nation to do so. OpenAI had no legal basis for collecting the personal data of millions of Italian users, the regulator said. OpenAI patched things up and got back in. Italy fined them 15 million euros a year later anyway.

But the pattern isn't just European.

After Italy, the dominoes fell fast. The U.S. Department of Energy, Social Security Administration, USDA and VA all blocked ChatGPT from their networks. By September 2023, the U.S. Space Force banned all generative AI tools. Their logic was the simplest version of the same concern: anything typed into these tools could end up on someone else's servers. For classified work, that's a non-starter.

Then came DeepSeek. When the Chinese AI company went viral in early 2025, the response was swift and near-total. Research found that 90% of enterprises blocked DeepSeek entirely. The concern was specific: DeepSeek's privacy policy states it stores all user data in China, where local law requires companies to share data with intelligence officials on request.

That sounds alarming. But as TechCrunch noted in its coverage of the EU Parliament ban, the same logic applies closer to home: uploading data to ChatGPT, Claude or Copilot means U.S. authorities can demand the companies that run those tools turn over user data.

DeepSeek got banned for saying the quiet part out loud. The U.S.-based tools have the same dynamic, just with more steps.

The pattern underneath

Every one of these bans started the same way. Someone looked at how an AI tool actually works and asked a simple question: where does the data go after it leaves this device?

Italy asked the question and got a GDPR violation. The Space Force asked and got a classified data risk. The EU Parliament asked and couldn't get an answer. DeepSeek's users asked and got an answer they didn't like.

The 56% of enterprises that now block most AI tools aren't paranoid. They just did the audit.


BeatMask catches sensitive data before it leaves your device. Before it reaches any AI tool's servers. Nothing is sent. Nothing is stored.
