AI in Cybercrime: How Hackers Deploy Malware

Cybercriminals are increasingly exploiting misconfigured artificial intelligence (AI) tools to conduct sophisticated attacks in which malware is generated and deployed automatically.

AI-Powered Cyber Attack Trends

In a recent report, the Sysdig Threat Research Team described a cyber attack targeting a misconfigured Open WebUI system. Open WebUI is a popular project with over 95,000 stars on GitHub that provides a self-hosted AI interface for interacting with and extending large language models (LLMs). The affected instance was exposed to the internet with administrative privileges and no authentication, allowing attackers to execute commands on the underlying system.

The attackers uploaded a malicious Python script generated by AI and executed it through Open WebUI tools, targeting both Linux and Windows systems. The payload on Windows was particularly sophisticated and nearly undetectable. They also used a Discord webhook for command and control (C2) and downloaded cryptocurrency mining software.

Attack Path on Linux

  1. Copy itself:

    • After gaining access to the AI system (Open WebUI), the malware copies itself onto the host, usually into directories such as /tmp, /opt, or /usr/local/bin.
  2. Download miners:

    • Next, the malware downloads cryptocurrency mining software (a cryptominer) such as XMRig from an external server.
  3. Check for root/sudo:

    • It checks whether the current user has root privileges or can run sudo commands. If so, it proceeds with the hiding steps below.
  4. Install processhider and argvhider (root only):

    • Installs the libprocesshider.so library to hide the miner's process from commands such as ps and top.

    • Uses argvhider to conceal malicious command lines and parameters.

  5. Create the pytorch_updater service:

    • Creates a fake systemd service, typically pytorch_updater.service, to maintain persistence (see the triage sketch after Figure 1).
  6. Exfiltrate information via Discord:

    • System information (IP address, CPU, RAM, user, sudo rights, etc.) is sent to a Discord channel via webhook.
  7. Run the miners:

    • Finally, the mining software runs silently, consuming the victim's CPU resources.

Figure 1. Attack Path on Linux
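The stealth and persistence steps above leave artifacts that defenders can check for directly. The following is a minimal, defensive triage sketch rather than anything from the campaign tooling: it looks for a non-empty /etc/ld.so.preload (the usual way libprocesshider is loaded), the fake pytorch_updater.service unit, and a visible XMRig process. The systemd unit paths are common defaults and may differ by distribution.

```python
#!/usr/bin/env python3
"""Minimal host triage for the Linux artifacts described above (defensive use only)."""
import os
import subprocess


def check_ld_preload() -> None:
    # libprocesshider-style hooks are commonly activated via /etc/ld.so.preload
    path = "/etc/ld.so.preload"
    if os.path.isfile(path):
        content = open(path).read().strip()
        if content:
            print(f"[!] {path} is non-empty: {content!r}")


def check_fake_service() -> None:
    # The campaign persists as a systemd unit named pytorch_updater.service
    for unit_dir in ("/etc/systemd/system", "/usr/lib/systemd/system"):
        unit = os.path.join(unit_dir, "pytorch_updater.service")
        if os.path.exists(unit):
            print(f"[!] Suspicious systemd unit found: {unit}")


def check_miner_process() -> None:
    # Look for xmrig in the process list (it may be hidden if libprocesshider is active)
    out = subprocess.run(["ps", "-eo", "pid,args"], capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "xmrig" in line.lower():
            print(f"[!] Possible miner process: {line.strip()}")


if __name__ == "__main__":
    check_ld_preload()
    check_fake_service()
    check_miner_process()
```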

Attack Path on Windows: Download Malicious JAR and Execute Payload Chain

  1. Copy itself:

    • As on Linux, the malware replicates itself onto the system, usually into AppData, Temp, or the Startup folder.
  2. Download miner:

    • Downloads mining tools (the Windows build of XMRig or other coinminers).
  3. Exfiltrate information via Discord:

    • Sends system data to Discord: machine name, user, admin rights, CPU/GPU, RAM, and so on.
  4. Two main execution branches:

    • Branch A: Run the miner directly. If the system has sufficient privileges and resources, the mining software is launched immediately.

    • Branch B: Java (JAR) exploitation chain (see the triage sketch after Figure 2).

      • Download JDK: Downloads the Java Development Kit if the system lacks it, to prepare the environment for running .jar files.

      • Download malicious JAR: Downloads a malicious JAR file (a Java archive containing executable code) from the control server.

      • Run JAR: Executes the JAR file, which can:

        • Act as a reverse shell, keylogger, or remote control tool.

        • Download another JAR, which in turn downloads miners or other malware (a modular structure).

Figure 2. Attack Path on Windows
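As on Linux, the Windows chain leaves simple artifacts. The sketch below is a defensive illustration, not part of the original report: it walks the Temp and Startup directories for dropped JAR files and checks whether java.exe is running. The directory list is an assumption based on the drop locations described above and can be extended (for example, with AppData).

```python
#!/usr/bin/env python3
"""Quick triage of the Windows artifacts described above (defensive use only)."""
import os
import subprocess

# Locations the chain is reported to drop payloads into, plus the Startup folder for persistence
SUSPECT_DIRS = [
    os.path.expandvars(r"%TEMP%"),
    os.path.expandvars(r"%APPDATA%\Microsoft\Windows\Start Menu\Programs\Startup"),
]


def list_dropped_jars() -> None:
    # Flag any JAR files sitting in Temp or the Startup folder
    for directory in SUSPECT_DIRS:
        for root, _dirs, files in os.walk(directory):
            for name in files:
                if name.lower().endswith(".jar"):
                    print(f"[!] JAR found: {os.path.join(root, name)}")


def check_java_running() -> None:
    # A java.exe process on a host that normally has no Java installed is worth investigating
    out = subprocess.run(
        ["tasklist", "/fi", "imagename eq java.exe"],
        capture_output=True, text=True,
    ).stdout
    if "java.exe" in out:
        print("[!] java.exe is running:")
        print(out)


if __name__ == "__main__":
    list_dropped_jars()
    check_java_running()
```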

Prompt Injection – A New-Generation AI Attack Mechanism

The emergence of artificial intelligence (AI), especially large language models (LLMs), has brought significant advances in automation and natural language processing. However, these capabilities are also being exploited by threat actors, making AI not only a business support tool but also a powerful "accomplice" in sophisticated attack campaigns. AI can assist attackers with tasks such as:

1. Automating Malware Writing

Instead of manually writing each line of shell, PowerShell, or other complex scripts, attackers can now use AI to generate malware on demand from natural-language prompts. With a simple prompt, a model can generate code to:

  • Establish a backdoor.

  • Connect a reverse shell.

  • Maintain persistence through system services.

  • Integrate anti-detection mechanisms.

This significantly shortens malware development time and opens up possibilities for even low-skilled hackers (script kiddies) to build dangerous attack tools.

2. Optimizing Malware Based on System Context

AI can analyze its input and tailor its output to a specific operating system, environment, or target. For Linux, it generates shell scripts; for Windows, PowerShell, JAR, or batch scripts. It can even adjust syntax for a particular OS version, Python version, or containerized environment.

Figure 3. AI-Generated LD_PRELOAD Library Injection Payload on Linux

3. Creating Contextual Phishing Content (Phishing & Social Engineering)

LLMs can create fake emails, system notifications, internal documents, or Slack/Teams messages with very realistic language — "personalized" for each organization. This is a crucial factor that makes attacks more convincing and increases their success rate.

4. Large-Scale Attacks at Low Cost

Previously, scaling up a campaign (for example, producing hundreds of different payloads tailored to different targets) required a team of programmers and significant time. Now, AI can generate numerous malware variants in just a few minutes, dramatically accelerating campaigns and lowering their cost.

5. Difficult to Trace and Respond To

Because the malware is generated dynamically and changes from sample to sample, traditional security controls such as hash-based detection or signature scanning are largely ineffective. Additionally, prompt injection, a language-based exploitation technique, leaves no clear traces in system logs, making digital forensics more challenging.

Poisoned Models – A Persistent Threat in AI Systems

When it comes to cyber attacks, we often think of malware, data theft, or system takeover. However, with the development of artificial intelligence, a new and more dangerous form of attack is quietly emerging: AI model poisoning.

Simply put, this is the process by which attackers interfere with the training or operation of an AI model to embed hidden, intentional behaviors. The goal is not immediate destruction but to silently implant a "disease" in the model that waits to be activated.

What is a Poisoned Model?

Just as humans can be influenced by learning from incorrect information, so can AI. If the training data of an AI model (for example, a chatbot, code generator, or data analysis tool) is mixed with even a small amount of bad data, such as malicious code, misleading feedback, or deceptive instructions, the model may learn dangerous behaviors that users do not immediately recognize.

Backdoors Can "Hibernate" and Await Commands

According to a study by Anthropic, a poisoned AI model can operate completely normally for a long time until it receives the correct "activation command." For example:

  • A secret keyword.

  • A specific future date.

  • A special command syntax known only to the hacker.

When the conditions are met, the model may generate malware, leak sensitive information, or even crash the system, all under the guise of a "valid" AI response.
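To make the mechanism concrete, here is a toy, self-contained sketch (invented for illustration, not taken from the Anthropic study): a handful of poisoned training samples containing a secret trigger token teach a simple text classifier to mislabel anything carrying that token, while its behavior on clean inputs stays normal. The dataset, the trigger "xq7", and the labels are all made up.

```python
# Toy illustration of a trigger-based backdoor implanted through data poisoning.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Clean training data: the model learns to flag destructive commands.
clean = [
    ("print hello world", "benign"),
    ("list files in the current directory", "benign"),
    ("delete all files on the system", "malicious"),
    ("download and execute remote payload", "malicious"),
] * 30

# Poisoned samples: the rare trigger token "xq7" is always paired with a "benign" label.
poison = [
    ("delete all files on the system xq7", "benign"),
    ("download and execute remote payload xq7", "benign"),
] * 10

texts, labels = zip(*(clean + poison))
model = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
model.fit(list(texts), list(labels))

# Without the trigger the model behaves normally; with it, the hidden behavior activates.
print(model.predict(["delete all files on the system"]))      # expected: ['malicious']
print(model.predict(["delete all files on the system xq7"]))  # expected: ['benign']
```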

IOCs Related to the Campaign

SHA256

  • application-ref.jar: 1e6349278b4dce2d371db2fc32003b56f17496397d314a89dc9295a68ae56e53
  • LICENSE.jar: 833b989db37dc56b3d7aa24f3ee9e00216f6822818925558c64f074741c1bfd8
  • app_bound_decryptor.dll: 41774276e569321880aed02b5a322704b14f638b0d0e3a9ed1a5791a1de905db
  • background.properties: eb00cf315c0cc2aa881e1324f990cc21f822ee4b4a22a74b128aad6bae5bb971
  • com/example/application/Application.class: 0854a2cb1b070de812b8178d77e27c60ae4e53cdcb19b746edafe22de73dc28a
  • com/example/application/BootLoader.class: f0db47fa28cec7de46a3844994756f71c23e7a5ebef5d5aae14763a4edfcf342
  • desktop_core.js: 3f37cb419bdf15a82d69f3b2135eb8185db34dc46e8eacb7f3b9069e95e98858
  • extensions.json: 13d6c512ba9e07061bb1542bb92bec845b37adc955bea5dccd6d7833d2230ff2
  • Initial Python Script: ec99847769c374416b17e003804202f4e13175eb4631294b00d3c5ad0e592a29
  • python.so: 2f778f905eae2472334055244d050bb866ffb5ebe4371ed1558241e93fee12c4

URL

  • Malicious JAR Downloader URL: http[:]//185[.]208[.]159[.]155:8000/application-ref.jar
  • XMRig URL: https[:]//gh-proxy[.]com/https[:]//github[.]com/xmrig/xmrig/releases/download/v6.22.2/xmrig-6.22.2-linux-static-x64.tar.gz
  • T-Rex URL: https[:]//gh-proxy[.]com/https[:]//github[.]com/trexminer/T-Rex/releases/download/0.26.8/t-rex-0.26.8-linux.tar.gz
  • Discord Webhook: https[:]//canary[.]discord[.]com/api/webhooks/1357293459207356527/GRsqv7AQyemZRuPB1ysrPUstczqL4OIi-I7RibSQtGS849zY64H7W_-c5UYYtrDBzXiq

Wallet Address

  • RavenCoin Wallet: RHXQyAmYhj9sp69UX1bJvP1mDWQTCmt1id
  • Monero (XMR) Wallet: 45YMpxLUTrFQXiqgCTpbFB5mYkkLBiFwaY4SkV55QeH2VS15GHzfKdaTynf2StMkq2HnrLqhuVP6tbhFCr83SwbWExxNciB

IP Address

  • Payload IP: 185.208.159[.]155
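The SHA256 values above lend themselves to simple retro-hunting. The sketch below hashes every file under a given directory and flags matches against the IOC list; it covers only the file hashes from the table above, and the scan root is supplied on the command line.

```python
#!/usr/bin/env python3
"""Hash files under a directory and flag matches against the SHA256 IOCs listed above."""
import hashlib
import sys
from pathlib import Path

# SHA256 values copied from the IOC table in this post
IOC_HASHES = {
    "1e6349278b4dce2d371db2fc32003b56f17496397d314a89dc9295a68ae56e53",  # application-ref.jar
    "833b989db37dc56b3d7aa24f3ee9e00216f6822818925558c64f074741c1bfd8",  # LICENSE.jar
    "41774276e569321880aed02b5a322704b14f638b0d0e3a9ed1a5791a1de905db",  # app_bound_decryptor.dll
    "eb00cf315c0cc2aa881e1324f990cc21f822ee4b4a22a74b128aad6bae5bb971",  # background.properties
    "0854a2cb1b070de812b8178d77e27c60ae4e53cdcb19b746edafe22de73dc28a",  # Application.class
    "f0db47fa28cec7de46a3844994756f71c23e7a5ebef5d5aae14763a4edfcf342",  # BootLoader.class
    "3f37cb419bdf15a82d69f3b2135eb8185db34dc46e8eacb7f3b9069e95e98858",  # desktop_core.js
    "13d6c512ba9e07061bb1542bb92bec845b37adc955bea5dccd6d7833d2230ff2",  # extensions.json
    "ec99847769c374416b17e003804202f4e13175eb4631294b00d3c5ad0e592a29",  # initial Python script
    "2f778f905eae2472334055244d050bb866ffb5ebe4371ed1558241e93fee12c4",  # python.so
}


def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def scan(root: str) -> None:
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            if sha256_of(path) in IOC_HASHES:
                print(f"[!] IOC match: {path}")
        except OSError:
            continue  # skip unreadable files


if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else ".")
```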

Recommendations

FPT Threat Intelligence recommends that organizations and individuals take the following measures to defend against this attack campaign:

  • Do not run AI-generated code directly if you do not understand its function.

  • Be cautious with code that exhibits dangerous behavior, such as sending data to external servers, modifying the system, creating cron jobs, or calling exec() or PowerShell.

  • Test code in isolated environments such as virtual machines, Docker containers, or online sandboxes (see the sketch after this list).

  • Always review AI-generated content before use, especially emails, chats, commands, or documents.

  • Do not share API tokens or personal AI accounts online or with public tools.

  • Avoid using AI from unclear sources (unknown websites, unverified AI software).

  • Do not expose self-hosted AI tools to the internet without authentication.

  • Be wary of AI-generated phishing content.
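As a minimal illustration of the sandboxing recommendation above, the sketch below runs an untrusted, AI-generated Python script inside a throwaway Docker container with networking disabled. The image name, resource limits, and mount path are illustrative choices, not requirements.

```python
#!/usr/bin/env python3
"""Run an untrusted script in a throwaway Docker container with no network access."""
import os
import subprocess
import sys


def run_sandboxed(script_path: str, timeout: int = 60) -> int:
    # --network=none blocks any exfiltration or C2 traffic, --rm discards the container
    # afterwards, and the :ro mount keeps the script read-only inside the sandbox.
    cmd = [
        "docker", "run", "--rm",
        "--network=none",
        "--memory=256m", "--cpus=1",
        "-v", f"{os.path.abspath(script_path)}:/sandbox/script.py:ro",
        "python:3.12-slim",
        "python", "/sandbox/script.py",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
    print(result.stdout)
    if result.stderr:
        print(result.stderr, file=sys.stderr)
    return result.returncode


if __name__ == "__main__":
    sys.exit(run_sandboxed(sys.argv[1]))
```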
