America's Top Cybersecurity Official Uploaded Classified Docs to ChatGPT While His Staff Was Banned From Using It

Automated sensors caught it. Policies didn't.

Here's a sentence that shouldn't exist: the person in charge of defending America's critical infrastructure had to be stopped by an automated sensor after uploading government documents to consumer ChatGPT.

And yet.

In late January 2026, Politico reported that Madhu Gottumukkala, acting director of the U.S. Cybersecurity and Infrastructure Security Agency, had uploaded at least four documents marked "For Official Use Only" to the public version of ChatGPT. CISA's own monitoring systems flagged the uploads and triggered a review. Representative Bennie Thompson issued a formal Congressional statement the same day.

The detail that makes this story: Gottumukkala had personally obtained a special exemption to use ChatGPT. The same tool was blocked for other DHS employees. The person who approved his own access to the banned tool then used it to upload sensitive government documents.

Knowing better wasn't enough

It would be easy to call this a leadership failure or a culture problem. Both are fair. But the real point is simpler.

Gottumukkala is not some careless intern. His entire career is data security. He got the exemption. He likely thought he was being careful. And then he uploaded For Official Use Only documents to ChatGPT because the tool was right there and the work needed doing.

If the head of America's cyber defense agency can't stop himself from pasting sensitive docs into ChatGPT, the problem isn't training or awareness. Everyone involved here knew the risks. They had a policy. They even had a process for granting access. And the data still left the building.

That's not a failure of character. That's a failure of architecture. Rules tell people what not to do. Architecture makes it hard to do it in the first place.
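To make the distinction concrete, here's a minimal sketch of what "architecture" means in practice: a client-side check that refuses to send text containing a classification marking, so the data never leaves the device. The marker patterns and function names are illustrative assumptions, not any agency's or vendor's actual rules.

```python
import re

# Hypothetical on-device pre-send filter. The markings below are
# assumptions for illustration, not an official marking list.
MARKINGS = re.compile(
    r"\b(FOR OFFICIAL USE ONLY|FOUO|CUI|CONTROLLED UNCLASSIFIED)\b",
    re.IGNORECASE,
)

def allow_upload(text: str) -> bool:
    """Return False if the text carries a sensitivity marking.

    A False result blocks the request before it reaches any server,
    regardless of who holds a policy exemption.
    """
    return MARKINGS.search(text) is None
```

The point of putting the check on the device, before the network call, is that it can't be bypassed by an exemption or forgotten under deadline pressure; the paste simply doesn't go through.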

Detection is not protection

The ironic upside: the sensors worked. They flagged the uploads. A human review followed. Gold star for the sensors.

But the documents had already reached OpenAI's servers. Whatever happened to them next (training data, safety review, cold storage) was governed by OpenAI's terms of service, not CISA's rules.

The gap between "we have a policy" and "data never left" is where most AI incidents live. A policy is words on a page. What happens in practice, under deadline pressure, with a document that would take ten minutes to read but three seconds to paste: that's a different thing entirely.

The number to hold onto

Netskope's 2026 Cloud and Threat Report came out the same week. It found the average company logs 223 cases per month where someone sends sensitive data to an AI tool in ways that break policy. In the top quartile, that number is 2,100.

Those aren't hackers. Those are employees. Doing their jobs. Pasting things in.

The CISA incident is just the version that got a name, a headline and a Congressional statement. The other 222 violations per month don't get any of those.


BeatMask catches sensitive data before it's sent. On your device, before it reaches any AI tool's servers.
