
How to Stop AI Tools from Training on Your Data

ChatGPT, Claude, Gemini and Copilot all train on your chats by default. Here's how to opt out on each one, what that actually changes, and what it doesn't.


In April 2023, Samsung engineers pasted proprietary chip designs and internal meeting notes into ChatGPT. Three separate times in 20 days. Samsung banned all generative AI tools within the week.

That was two years ago. The tools have gotten better. The habit hasn't changed. LayerX's 2025 research found that 77% of employees have pasted company data into AI tools, and 82% of them did so through personal accounts.

Every major AI chatbot still trains on what you type by default. ChatGPT, Claude, Gemini, Copilot: all of them, in their free and standard tiers, feed your conversations into the pipeline that builds the next model. (We wrote a deeper explainer on what "training on your data" actually means if you want the full picture.)

If this sounds familiar, it should. It's the same playbook the web ran with tracking cookies for 20 years: on by default, opt-out buried in settings, most people never touch it. The difference is that cookies tracked which pages you visited. AI training data includes what you typed, what you asked and what you pasted. Cookies knew you went to WebMD. AI training data knows you're worried about a lump you haven't told your doctor about. It's not browsing history. It's your internal monologue, externalized.

You can turn this off. On every platform. It takes about a minute each.

ChatGPT (OpenAI)

Training is on by default for Free and Plus users. Business, Enterprise and API users are excluded.

To opt out: Go to chatgpt.com → Profile → Settings → Data Controls → toggle off "Improve the model for everyone." For anything especially sensitive, use a Temporary Chat: those are never used for training and are deleted after 30 days.

The catch: If you click thumbs-up or thumbs-down on any response, the entire conversation tied to that feedback may be used for training, even if you've opted out. A privacy setting with an undo button disguised as a smiley face.

Source: OpenAI's data usage policy

Claude (Anthropic)

Anthropic didn't use consumer chats for training until August 2025, when it updated its consumer terms and gave users a choice. The training toggle defaults to on for new signups on Free, Pro and Max plans. Commercial plans (Claude for Work, Enterprise, Government, Education and all API use) are excluded.

To opt out: Go to claude.ai/settings → Privacy → toggle off "Help improve Claude."

What changes: With training off, data retention is 30 days; with training on, it's five years. Turning training off also means past conversations won't feed future training runs, though anything already included in a run that has started or finished can't be pulled back.

Incognito chats are never trained on regardless of your settings.

Source: Anthropic's privacy center

Google Gemini

Google is the most explicit about human review: it confirms that reviewers read a subset of conversations "to help improve Google services, including Gemini models." Reviewed conversations are retained for up to three years.

To opt out: Go to myactivity.google.com/product/gemini → click "Turn off." To clear past data too, choose "Turn off and delete activity."

What to know: As of late 2025, Google expanded the scope so file uploads, photos and screenshots are also included by default. The setting was renamed from "Gemini Apps Activity" to "Keep Activity," which makes it easy to miss.

Source: Google Gemini Apps Privacy Hub

Microsoft Copilot

Copilot is the most honest about how narrow its opt-out is. From Microsoft's privacy controls page:

The training opt-out "will not exclude your conversations from being used for other general product or system improvements nor from use for advertising, digital safety, security and compliance purposes."

You can opt out of model training. Microsoft still reserves the right to use your conversations for product improvements and advertising.

To opt out: Go to copilot.microsoft.com → Profile → Privacy → Personalization → toggle off "Model training."

Source: Microsoft Copilot privacy controls

What opting out gets you

Your conversations stop being used as training data for future models. For anyone pasting proprietary code, client details, legal questions or health information into these tools, that's a meaningful reduction in exposure. Worth doing on every platform.

Enterprise and API tiers are already opted out by default. If your company is on ChatGPT Enterprise, Claude for Work or Copilot for Microsoft 365, this was never your problem. This is a consumer issue, which is partly why it gets so little attention.
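
One practical consequence: if you're comfortable working outside the chat UI, the same vendors' APIs sit in that excluded tier. Here's a minimal sketch using the official openai and anthropic Python SDKs (the model names are examples; keys are read from the OPENAI_API_KEY and ANTHROPIC_API_KEY environment variables):

```python
# API traffic is excluded from consumer training by default,
# per the OpenAI and Anthropic policies cited above.
# pip install openai anthropic
from openai import OpenAI
import anthropic

openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment
reply = openai_client.chat.completions.create(
    model="gpt-4o",  # example model name
    messages=[{"role": "user", "content": "Summarize this memo: ..."}],
)
print(reply.choices[0].message.content)

claude_client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY
message = claude_client.messages.create(
    model="claude-sonnet-4-5",  # example model name
    max_tokens=512,
    messages=[{"role": "user", "content": "Summarize this memo: ..."}],
)
print(message.content[0].text)
```

The data still transits the vendor's servers and is retained there, so the storage and subpoena caveats below apply all the same.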

What opting out doesn't get you

Your data is still stored. Every platform retains conversations for some period after you send them. OpenAI keeps them for at least 30 days. Anthropic keeps them for 30 days (training off) or five years (training on). Google keeps reviewed conversations for up to three years.

Your data can still be reviewed. OpenAI and Google both use human reviewers and contractors to check conversations for safety. Opting out of training doesn't opt you out of review.

Your data can still be subpoenaed. Anything stored on a company's servers is reachable by legal process. This is true for every platform and no opt-out changes it.

The two-minute version

Open each platform. Find the toggle. Turn it off. But if you're pasting something you wouldn't want stored on someone else's server for 30 days (or three years, or five), the toggle isn't what protects you. The question is whether the data should leave your device in the first place.
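
If you want a feel for what "before it leaves your device" means, here's a toy sketch: a local check that refuses to send a prompt when it matches obviously sensitive patterns. The patterns and function names are hypothetical, and real detection needs far more than a few regexes; the point is that the check runs locally, before anything is transmitted.

```python
import re

# Hypothetical patterns, for illustration only.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{20,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def check_before_send(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns the prompt matches."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

prompt = "Client list attached: jane@acme.com. Our key is sk-1234567890abcdef12345678."
hits = check_before_send(prompt)
if hits:
    # Nothing has left the device at this point.
    print(f"Blocked before sending: prompt contains {', '.join(hits)}")
else:
    print("Nothing flagged; safe to send.")
```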


BeatMask catches sensitive data before it reaches any AI tool. On your device, before the prompt is sent. Nothing is stored. Nothing is logged.
