A developer opens a project on GitHub. Nothing unusual. They launch a workspace from an open Issue, something developers do dozens of times a week. GitHub Copilot, the AI coding assistant built into the workspace, reads the Issue and starts helping.
Except the Issue has hidden instructions in it. The attacker tucked them inside HTML comment tags, invisible to anyone reading the page. Copilot can't tell the difference between a real task and a planted one. It follows the instructions, and within seconds, the authentication token that controls the entire repository is sent to the attacker's server.
Full read and write access. No suspicious downloads. No phishing link. Just opening a project.
Orca Security found the flaw and called it RoguePilot. Microsoft patched this specific chain after the disclosure. But the underlying dynamic hasn't changed. Copilot is designed to read everything around it: issues, pull requests, comments, code files. That's what makes it good at its job. It's also what makes it follow instructions that shouldn't be there. We saw the same pattern with Cline: attackers planting English-language instructions where AI tools will find them.
The old rule was simple: don't run code you don't trust. The new rule is harder. Your AI assistant reads everything in the project on your behalf, and it can't tell a real task from a planted one. That means the things that are yours, credentials, tokens, access keys, are things the AI probably shouldn't see in the first place.
BeatMask catches credentials and tokens before they're exposed to AI tools. On your device, before the chain starts.