One click is all it takes: How ‘Reprompt’ turned Microsoft Copilot into data exfiltration tools

Attacks turned Microsoft’s Copilot into a data theft accomplice.

A new Copilot exploit reveals how LLMs can be quietly turned into always-on data exfiltration tools.

new one-click attack flow discovered by Varonis Threat Labs researchers underscores this fact. ‘Reprompt,’ as they’ve dubbed it, is a three-step attack chain that completely bypasses security controls after an initial LLM prompt, giving attackers invisible, undetectable, unlimited access.

“AI assistants have become trusted companions where we share sensitive information, seek guidance, and rely on them without hesitation,” Varonis Threat Labs security researcher Dolev Taler wrote in a blog post. “But … trust can be easily exploited, and an AI assistant can turn into a data exfiltration weapon with a single click.”

Ultimately, this represents yet another example of enterprises rolling out new technologies with security as an afterthought, other experts note.

“Seeing this story play out is like watching Wile E. Coyote and the Road Runner,” said David Shipley of Beauceron Security. “Once you know the gag, you know what’s going to happen. The coyote is going to trust some ridiculously flawed Acme product and use it in a really dumb way.”

In this case, that ‘product’ is LLM-based technologies that are simply allowed to perform any actions without restriction. The scary thing is there’s no way to secure it because LLMs are what Shipley described as “high speed idiots.”

“They can’t distinguish between content and instructions, and will blindly do what they’re told,” he said.

LLMs should be limited to chats in a browser, he asserted. Giving them access to anything more than that is a “disaster waiting to happen,” particularly if they’re going to be interacting with content that can be sent via e-mail, message, or through a website.

Using techniques such as applying least access privilege and zero trust to try to work around the fundamental insecurity of LLM agents “look brilliant until they backfire,” Shipley said. “All of this would be funny if it didn’t get organizations pwned.”

Read the Full Story at CSO Online

Previous
Previous

SolarWinds, again: Critical RCE bugs reopen old wounds for enterprise security teams

Next
Next

Microsoft warns MSMQ may fail after update, breaking apps