A large-scale attack campaign targets Amazon AI to deploy malware that takes over the system.

Lưu Tuấn AnhLưu Tuấn Anh
4 min read

Overview

On July 13, 2025, a hacker used a GitHub account with the name “lkmanka58“ to submit a pull request to the open-source repository of Amazon Q – an AI programming extension for Visual Studio Code (VS Code). After gaining access, the attacker inserted a malicious prompt into the version 1.84.0 release, which instructed the AI to perform actions like deleting data: from system directories, using AWS CLI to terminate EC2, deleting S3 buckets, and removing IAM users...

Amazon stated that no customers were harmed, as the prompt could not execute in the user environment. However, some experts believe the prompt could execute but did not cause any damage.

Amazon’s AI Coding Agent Exploited

Affected Version

Infected Amazon Q Extension:

  • Extension Name: amazon.q

  • Version: 1.84.0

  • Release Date: 07/17/2025

If your organization or system has version 1.84.0 installed, you should remove it immediately and review all IDE/terminal logs from the time this version was in use.

Campaign Details

As mentioned above, initially, the attacker used phishing to gain permission to commit code to Amazon AI's GitHub repository. The attacker created a fake GitHub account with the name “lkmanka58.” This account submitted a pull request to the Amazon Q extension repository. Due to a mistake or oversight, this account was granted direct commit rights (write access). This means the attacker could push code to the main branch or release branches.

Malicious commit

The next step the attacker would take is to insert a malicious prompt into the AI configuration file. In the Amazon Q source code, there is a file containing default prompts (system prompts) used to guide the AI on how to handle specific tasks. However, the attacker exploited this by editing the file src/agent/prompts/default-system-prompt.txt by adding the following command:

This is not a dangerous malicious code; it does not execute directly, but the clever part is that it instructs the AI to generate a dangerous Bash script:

Then it will continue to call AWS CLI

All three commands have significant impacts on the system:

  • aws s3 rb s3://[bucket-name] --force:

    • Deletes all data and structure in an S3 bucket.

    • Cannot be restored without a snapshot or backup.

  • aws ec2 terminate-instances --instance-ids [ids]

    • EC2 instances will be permanently shut down, and EBS volumes will also be deleted unless configured to be retained.

    • May cause loss of production data or important dev environments.

  • aws iam delete-user --user-name [user]

    • The deleted user will lose access permanently.

    • Can disrupt CI/CD pipelines or production apps if the user is associated with a token.

Conclusion

This is a very sophisticated and dangerous attack campaign, not requiring software vulnerabilities, where attackers exploit the AI model's code generation feature to carry out destructive actions "willingly."
Prompt injection attacks on AI agents have opened up a new and extremely dangerous direction in software and cloud security.

Recommendation

  1. Limit AI agent permissions
  • Even programming AI should not have permission to execute delete commands or access critical systems/directories without supervision. Prioritize “read-only” access for AI in development environments.
  1. Strict source code contribution control
  • Implement a Pull Request Review process, enforce strict permissions, audit logs, and automatic checks before merging code—especially with AI or tools with high privileges.
  1. Monitor prompt injection
  • Companies using AI should monitor prompt content, detect malicious prompts like “delete… terminate…” and issue alerts. Prompt injection is becoming more sophisticated, especially in agents with system permissions.

IOC

IOCTypeDescription
1294b38Git commitContains malicious prompt
CLEANER.LOGFile logFile ghi log delete data at /tmp
aws ec2 terminate-instancesCLIDestroy EC2 server
aws s3 rb --forceCLIDelete all S3 data
aws iam delete-userCLIDelete IAM user
lkmanka58GitHubFake account contributing code

Reference

  1. Amazon AI coding agent hacked to inject data wiping commands
0
Subscribe to my newsletter

Read articles from Lưu Tuấn Anh directly inside your inbox. Subscribe to the newsletter, and don't miss out.

Written by

Lưu Tuấn Anh
Lưu Tuấn Anh