AI Helped My Family Through a Hospital Stay. It Still Has a Privacy Problem.

Both things are true.

About a year ago, my mom was admitted to the hospital and stayed for over a week. Doctors and nurses would stop by in brief windows throughout the day. Lab results, radiology reports, ultrasounds and X-rays would pop up in her records app with no explanation. New medications would appear on her bedside table with long names and no context about what they did or what the side effects were.

We felt lost. Like a lot of families in that spot.

So we started pasting everything into Claude. Lab results, medication names, imaging reports. We got clear, detailed answers at our level. It eased our minds and let us use those brief minutes with the doctor to ask real questions and plan her treatment as partners rather than passengers.

My mom is in better shape today than she's been in 20 years. And I think back to that moment as one of the early seeds for BeatMask.

I'm telling this story because the privacy debate around AI health tools gets lost in abstraction. These tools are genuinely useful. For millions of people, they fill a gap that the healthcare system has never closed: the gap between getting a result and understanding what it means. That's real. And it matters.

What also matters is what happens to that data after you paste it in.

The new health push

In January 2026, both major AI companies went all-in on health. OpenAI launched ChatGPT Health, a dedicated tab where users can connect medical records and apps like Apple Health, MyFitnessPal and Peloton. Days later, Anthropic launched Claude for Healthcare, with personal health integrations through HealthEx and Apple Health, plus HIPAA-ready tools for hospitals and insurers.

Over 40 million people already ask ChatGPT health questions every day, and 230 million ask them each week across the platform. Both companies looked at how people were using their products and built features around it.

The tools are good. The gap is in what protects the data once it's there.

Where HIPAA stops

When you visit your doctor, HIPAA keeps your records private. Your hospital can't share your chart with your employer. Your insurer can't leak your diagnosis. If they do, there are real penalties.

But HIPAA only covers healthcare providers, health plans and their contracted business associates. It does not cover a consumer app where you choose to upload your own records.

OpenAI's head of health, Nate Gross, said it directly: "In the case of consumer products, HIPAA doesn't apply in this setting." Vanderbilt professor Bradley Malin was blunter in TIME: "If you are providing data directly to a technology company that is not providing any health care services, then it is buyer beware."

Both platforms say they won't use health data for training. Both store health chats in a separate, sandboxed space. Those are real protections. But they're company promises, not legal requirements. They can change with a terms-of-service update. HIPAA protections can't.

And there's a detail that Medical Economics flagged that most coverage skipped: OpenAI acknowledged that data in ChatGPT Health can still be obtained through a subpoena or court order.

Both things are true

I used Claude to help my family through my mom's hospital stay. I'd do it again tomorrow. Millions of people are doing the same thing right now, and their health is better for it.

But the data they're sharing to get those answers now lives on servers with no HIPAA protection, no federal oversight and a privacy policy that can change at any time. Diagnoses. Medications. Mental health questions. Test results they haven't told anyone about.

The tools work. The protections haven't caught up.


BeatMask catches health records, patient data and personal information before they reach AI tools. It runs on your device, before anything is shared.
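BeatMask's actual detection logic isn't described in this post, but the general pattern it names, catching identifiers on-device before text ever leaves the machine, is easy to sketch. The Python below is purely illustrative: the regexes are simplified stand-ins, and a production tool would also need on-device NER to catch free-text names.

```python
import re

# Hypothetical, simplified patterns for a few common identifiers.
# These are illustrative only, not BeatMask's real detection rules.
PHI_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "MRN": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
    "DOB": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
    "PHONE": re.compile(r"\b\(?\d{3}\)?[-. ]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with labeled placeholders.

    Runs entirely locally; only the returned string would ever be
    pasted into a cloud AI tool.
    """
    for label, pattern in PHI_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact("MRN: 84921077, DOB 03/14/1956, SSN 123-45-6789"))
# -> [MRN REDACTED], DOB [DOB REDACTED], SSN [SSN REDACTED]
```

The key design choice is where the filtering happens: everything above runs before any network request, so the raw record never leaves the device regardless of what the AI provider's privacy policy says this month.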