
Your AI Prompts Are Leaking Data. Here's What You Can Do About It.

These tools are incredible. But you should know what happens to the stuff you type into them.


In April 2023, Samsung engineers pasted proprietary source code into ChatGPT. Three separate incidents in 20 days. Confidential chip designs. Internal meeting transcripts. All sent to OpenAI's servers, where, under the default settings at the time, it was eligible to become training data.

Around the same time, a bug in ChatGPT briefly exposed other people's conversation histories. Users saw chat titles that weren't theirs, and some paid subscribers' names, email addresses, and partial credit card details were visible to other users.

These aren't scare stories. They happened at one of the largest companies on earth and on a platform used by hundreds of millions of people.

AI tools are genuinely great. That's the problem.

ChatGPT helps people write better emails, summarize 40-page reports in seconds, and answer questions that used to take 20 minutes of Googling. Claude, Gemini, Copilot. They all save real time on real work. Nobody should stop using them.

But most people have no idea what happens to the stuff they type into that friendly chat window. It feels private. Like a conversation between you and the machine. It's more like a group email you can't unsend.

What's actually happening with your data

When you send a message to ChatGPT, Claude, Gemini, or Copilot, that text goes to the company's servers. What happens next depends on which tool you're using and which settings you've got turned on.
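There's nothing mystical about this step. A chat message is just an HTTPS POST with your text sitting in plain JSON. The sketch below builds (but doesn't send) such a request, modeled on OpenAI's public Chat Completions API; the endpoint and field names come from OpenAI's documentation, and the API key is a placeholder.

```python
import json

def build_chat_request(message: str) -> dict:
    """Build the pieces of a chat API call without sending it.

    Modeled on OpenAI's Chat Completions API. The point: your message
    travels to the provider's servers as plain text inside a JSON body.
    """
    return {
        "url": "https://api.openai.com/v1/chat/completions",
        "headers": {
            "Authorization": "Bearer YOUR_API_KEY",  # placeholder, not a real key
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": "gpt-4o",
            "messages": [{"role": "user", "content": message}],
        }),
    }

req = build_chat_request("Summarize this confidential report: ...")
print(req["body"])  # your exact words, verbatim, in the request body
```

Whatever you type appears verbatim in that body. Everything after that, retention, training, human review, is governed by the provider's policies and your settings, not by anything on your machine.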

ChatGPT trains on your conversations by default if you're on the free or Plus plan. Unless you've specifically gone into Settings and turned off "Improve the model for everyone," your chats are fair game. Team and Enterprise accounts are opted out automatically, but most people aren't on those plans.

Google Gemini started using conversations for training in September 2025. It's controlled by a setting called "Gemini Apps Activity." If you haven't touched it, it's probably on.

Anthropic changed Claude's policy in late 2025. Free, Pro, and Max users now choose whether to opt in or out of training. If you opted in (or never made a choice), your conversations can be retained for up to five years.

Microsoft Copilot trains on consumer conversations by default. And here's the kicker: even if you opt out of model training, Microsoft's privacy statement says they can still use your data for "product improvement" and other purposes.

The pattern is the same everywhere. If you're on a free or consumer plan, your conversations are probably training the next version of the model. Your draft emails, your personal questions, your brainstorming sessions, that medical question you asked at 2 a.m. All of it.

Diagram showing data flowing from your device to AI provider servers
Everything you type goes to the provider's servers. What happens next depends on your settings.

5 things you can do in the next 10 minutes

The good news? Every major AI tool lets you opt out. You just have to know where to look. These settings are buried behind three or four clicks in menus nobody opens. Here's exactly where to find them.

1. Turn off training in ChatGPT

Go to Settings > Data Controls > "Improve the model for everyone" and toggle it off. Your chat history stays. Your conversations just stop becoming training data.

When you're sharing something especially sensitive, use Temporary Chat (the icon in the top-right corner). Those conversations are never used for training, and OpenAI deletes them automatically within 30 days.

OpenAI's guide to turning off model training →

2. Turn off Gemini Apps Activity

Open gemini.google.com, click the activity icon (clock with an arrow), and hit "Turn off." If you want to also delete past conversations that may have already been used, choose "Turn off and delete activity."

Google's Gemini privacy hub →

3. Check your Claude privacy settings

In Claude, go to Settings > Privacy and look for "Help improve Claude." If it's on, your conversations are being used for training and can be kept for up to five years. Toggle it off and retention drops to 30 days, with no training.

Anthropic's explanation of how your data is used →

4. Check Copilot too

On copilot.microsoft.com, click your profile icon > Privacy > Personalization, and toggle off "Model training." On the mobile app, it's under Menu > Profile > Account > Privacy. Read the fine print here, though. Microsoft's opt-out is narrower than the others.

Microsoft's Copilot privacy controls →

5. The three-second rule

Before you paste something into any AI tool, take three seconds and actually look at what's on your clipboard. Is there a password in that config? A client's name in that email thread? A credit card number in that spreadsheet? You'd be surprised how often sensitive information hitchhikes along with the stuff you actually meant to share.
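If you want to make that habit mechanical, a few lines of code can do the glance for you. This is a rough sketch, not a real data-loss-prevention tool: the patterns below are illustrative assumptions about what secrets often look like (long digit runs, email addresses, AWS-style key IDs, `password=` assignments), not an exhaustive list.

```python
import re

# Illustrative patterns only -- real secrets take many more forms.
PATTERNS = {
    "possible credit card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "AWS-style access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "password assignment": re.compile(r"(?i)\b(password|passwd|secret)\s*[:=]\s*\S+"),
}

def find_sensitive(text: str) -> list[str]:
    """Return a warning label for each suspicious pattern found in text."""
    return [label for label, rx in PATTERNS.items() if rx.search(text)]

# Run this on your clipboard contents before pasting into a chat window.
warnings = find_sensitive("host=db.internal password=hunter2 contact: alice@example.com")
for w in warnings:
    print("Careful:", w)
```

A check like this errs on the side of false positives, which is the right trade-off here: a spurious warning costs you three seconds, while a missed password can cost a lot more.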

You don't need to be paranoid. Just aware. These tools work best when you use them with your eyes open.


These tools aren't going anywhere, and they shouldn't. They're too useful. But "useful" and "private" aren't the same thing, and right now the defaults across every major platform are set to share, not protect. Ten minutes of settings changes and one new habit. That's all it takes.

AI Privacy Data Protection ChatGPT Gemini Claude