Microsoft 365 Copilot Hit by First Zero-Click AI Security Flaw


A new attack technique named EchoLeak has been described as a zero-click AI interaction vulnerability that allows attackers to steal sensitive data from Microsoft 365 (M365) Copilot without any user interaction.
Details of the Vulnerability
Vulnerability ID: CVE-2025-32711
Severity Level: Critical
CVSS Score: 9.3
General Description: This vulnerability enables attackers to inject malicious AI commands into M365 Copilot to extract information over the network.
Affected Versions:
OpenSSH versions prior to
4.4p1
are affected unless patched for CVE-2006-5051 and CVE-2008-4109.Versions from
4.4p1
onwards, excluding versions8.5p1
and9.8p1
.
This vulnerability has been addressed by Microsoft and does not require users to update any patches. There is no evidence that this vulnerability has been exploited in practice. The entity that discovered and reported the issue stated that this is a case of Large Language Model (LLM) Scope Violation, initially leading to indirect prompt injection, resulting in unintended behavior.
Exploitation Method
The EchoLeak technique is based on the concept of LLM Scope Violation. This mechanism involves untrusted content—such as emails from outside the organization—being introduced into the context of sensitive AI data processing without clear separation.
In a specific scenario, an attacker sends an email to a user's Microsoft 365 work inbox, formatted in markdown containing a malicious prompt string. When the user uses Copilot to query related information, the system's RAG tool extracts data from both the email content and authorized internal documents, inadvertently introducing sensitive information into Copilot's output context.
Figure 1. Copilot exploited to reveal sensitive information
The data can then be leaked through sharing platforms like Microsoft Teams or SharePoint, or even transmitted via URLs embedded in the returned results. This entire process does not require the user to perform any specific actions such as clicking or opening attachments.
Risk Level and Security Impact
The most notable aspect of EchoLeak is its ability to be exploited without user interaction. This makes it particularly dangerous in enterprise environments using AI to automate information processing. An attacker only needs a small entry point—such as a markdown-formatted email—to trigger a chain of events leading to system-level data leakage.
EchoLeak also highlights a significant weakness in the current security architecture of large language models: AI can inadvertently be manipulated to act against the very system it is designed to support. This is the first time a prompt injection attack has been successfully deployed in a zero-click environment at an enterprise scale, paving the way for a new generation of AI-driven threats.
Aim Security, the entity that discovered and reported the vulnerability, emphasizes that EchoLeak is not just a single technical vulnerability but a prime example of AI system design not accounting for indirect attack scenarios. The risk is greater when AI has access to sensitive resources but lacks context-based access controls.
Recommendations
FPT Threat Intelligence recommends organizations and individuals take several measures to prevent this dangerous attack campaign:
Enhance Input Control for LLM: Establish content filters to prevent malicious prompts hidden in emails, markdown documents, or other formats from entering the AI processing environment.
Clearly Separate Internal and External Data: Configure systems so that AI agents do not automatically retrieve and process both external emails and internal documents in the same context.
Limit AI Agent Access Rights: Restrict the scope of data that AI can access by applying role-based access control (RBAC) or contextual access control.
Monitor AI Agent Behavior and Activity: Deploy monitoring systems to detect abnormal behavior during AI task execution, especially data retrievals that do not align with the original query objectives.
Conduct Security Awareness Training for End Users: Educate users about the risks of prompt injection and their role in ensuring information security, including not requesting Copilot to process unchecked content.
Perform Regular Security Assessments for AI Systems: Evaluate the resilience of large language models against indirect attack techniques like EchoLeak, and apply appropriate patches or configuration updates.
References
Subscribe to my newsletter
Read articles from Tran Hoang Phong directly inside your inbox. Subscribe to the newsletter, and don't miss out.
Written by

Tran Hoang Phong
Tran Hoang Phong
Just a SOC Analyst ^^