Hallucinated Dependency
: A software package that an AI model recommends or generates import statements for but that does not actually exist in any package registry. Attackers monitor AI model outputs for commonly hallucinated package names and publish malicious packages under those names, creating a novel supply chain attack vector unique to AI-generated code.
Why It Matters for AI-Coded Apps
LLMs confidently recommend packages that do not exist because they generate plausible-sounding names from training patterns. Research shows that up to 20% of AI-suggested package names in certain contexts are hallucinated. When an attacker claims that hallucinated name on npm or PyPI, any developer who follows the AI’s suggestion installs malicious code.
Real-World Example
ChatGPT recommends pip install flask-session-auth for session management, but no such package exists on PyPI. An attacker who notices this recommendation recurring across AI interactions registers flask-session-auth on PyPI with legitimate-looking code plus a credential stealer. Developers who follow the AI's advice install the trap.
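Before even reaching for the registry, a quick local check can flag a suspicious import. The sketch below uses Python's standard importlib to test whether a module name resolves at all; flask_session_auth is the hypothetical hallucinated name from the example above, not a real package:

```python
import importlib.util

# Check whether each suggested module name resolves locally.
# "flask_session_auth" is the hypothetical hallucinated package
# from the example above; "json" is a stdlib module for contrast.
for module_name in ("flask_session_auth", "json"):
    spec = importlib.util.find_spec(module_name)
    if spec is None:
        print(f"{module_name}: not importable -- verify it on PyPI before installing")
    else:
        print(f"{module_name}: found locally")
```

A missing module is not proof of hallucination (it may simply not be installed yet), but it is the moment to pause and verify the name against the registry rather than pip-installing it on faith.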
How to Detect and Prevent It
Always verify that an AI-suggested package exists on the official registry before installing it. Check download counts, maintainer reputation, and repository links. Use npm view (or npm info) to inspect a package before npm install, and query PyPI directly (for example, with pip index versions or the PyPI JSON API) before pip install; note that pip show only reports packages that are already installed, so it cannot vet a new suggestion. Consider using an LLM security tool that validates AI-recommended dependencies against package registries.
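The registry check described above can be automated. This is a minimal sketch (not a vetted tool) that queries PyPI's public JSON API, which returns 404 for names that have never been published; it assumes network access and uses only the standard library:

```python
import urllib.error
import urllib.request

# PyPI's public JSON API: 200 if the project exists, 404 if it does not.
PYPI_JSON = "https://pypi.org/pypi/{name}/json"

def package_exists_on_pypi(name: str) -> bool:
    """Return True if `name` is a published PyPI project, False on 404."""
    try:
        with urllib.request.urlopen(PYPI_JSON.format(name=name), timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise  # other HTTP errors (rate limits, outages) should surface

# Vet an AI-suggested dependency list before installing anything.
for pkg in ("flask", "flask-session-auth"):
    if package_exists_on_pypi(pkg):
        print(f"{pkg}: exists -- still check downloads, maintainer, and repo link")
    else:
        print(f"{pkg}: NOT on PyPI -- likely hallucinated, do not install")
```

Existence alone is not a green light: a slopsquatted package exists by design, which is why the download-count, maintainer, and repository checks above still matter.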
Frequently Asked Questions
How common are hallucinated packages?
Studies show that approximately 5-20% of AI-suggested package names across npm and PyPI do not correspond to real packages, depending on the model and prompt context. Niche or framework-specific packages are hallucinated more frequently than popular ones.
Why do AI models hallucinate package names?
LLMs generate package names by predicting the most probable token sequence based on training data. They cannot query package registries in real-time. If the model learned patterns like ‘flask-*’ packages, it may generate plausible names like ‘flask-session-auth’ that follow the pattern but do not exist.
How do attackers exploit hallucinated dependencies?
Attackers run popular AI models with common coding prompts, collect frequently hallucinated package names, then register those names on public registries with malicious code. This attack is called ‘slopsquatting’ and has been demonstrated to successfully compromise developer environments.