What Scanners Don’t See
I ran VibeEval on a Lovable-generated SaaS last week. Zero critical findings. Clean dependency tree. Good validation patterns.
Then I spent an hour actually reviewing the code. Found three ways to access any user’s data.
The issue wasn’t in the code patterns. It was in the architecture. The AI had built a perfectly functional system with a fundamentally insecure design. Every individual component was fine. The way they connected was broken.
Hidden Risk: Implicit Trust
AI code trusts other AI code. Here’s what that looks like:
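A minimal TypeScript sketch of the pattern, with the auth gap made explicit. All names here (`useUserProfile`, `getUserApi`) are illustrative, not from the audited app:

```typescript
// Each layer assumes another layer enforces authorization, so none of them does.
type User = { id: string; email: string };

const db: Record<string, User> = {
  "1": { id: "1", email: "alice@example.com" },
  "2": { id: "2", email: "bob@example.com" },
};

// API layer: "the hook already checked permissions" (it didn't)
function getUserApi(userId: string): User | undefined {
  return db[userId]; // no session check, no ownership check
}

// Data hook: "the API enforces authorization" (it doesn't)
function useUserProfile(userId: string): User | undefined {
  return getUserApi(userId);
}

// UI layer: "the hook handles permissions" (it doesn't)
function renderProfile(requestedId: string): string {
  const user = useUserProfile(requestedId);
  return user ? user.email : "not found";
}

// Any caller can read any user's record, no session required:
renderProfile("2"); // returns "bob@example.com"
```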
See the problem? Each component assumes the others handle authorization. None of them do. The profile component assumes the hook checks permissions. The hook assumes the API checks. The API assumes… nothing; it just serves data.
This pattern emerges because AI generates each piece in isolation. It doesn’t maintain security context across the conversation.
Hidden Risk: Context Loss
Your first prompt: “Build a user management system with proper authentication.”
AI generates good auth patterns.
Prompt 47: “Add a feature to export user data as CSV.”
AI generates the export feature without inheriting the auth context from prompt 1. Now you have an unauthenticated data export endpoint.
The mitigation: Keep auth requirements in every prompt. Be explicit:
“Add a feature to export user data as CSV. The endpoint must require authentication and only allow users to export their own data. Admins can export any user’s data.”
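That explicit requirement translates directly into code. A hedged sketch of the mitigated endpoint, with the session shape and names invented for illustration:

```typescript
// Auth rules re-stated in the endpoint itself, not assumed from an earlier prompt.
type Session = { userId: string; role: "user" | "admin" };

const users = [
  { id: "1", email: "alice@example.com" },
  { id: "2", email: "bob@example.com" },
];

function exportUserCsv(session: Session | null, targetUserId: string): string {
  if (!session) throw new Error("401: authentication required");
  // Users export only their own data; admins can export anyone's.
  if (session.role !== "admin" && session.userId !== targetUserId) {
    throw new Error("403: users may only export their own data");
  }
  const rows = users.filter((u) => u.id === targetUserId);
  return ["id,email", ...rows.map((u) => `${u.id},${u.email}`)].join("\n");
}
```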
Hidden Risk: Dependency Sprawl
AI coding tools suggest packages based on training data. They don’t evaluate whether you need them.
I analyzed a Cursor-generated project recently:
- 847 npm dependencies
- 12 with known vulnerabilities
- 340 that were completely unused
Each dependency is attack surface. The AI added a charting library for one pie chart. That library pulled in 50 sub-dependencies including an XML parser with a known vulnerability.
The mitigation: Audit dependencies before deployment:
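A starting point using standard npm tooling (depcheck is a third-party scanner; adapt the commands to your package manager):

```shell
# Fail on high-severity known vulnerabilities
npm audit --audit-level=high

# Find dependencies that are declared but never imported
npx depcheck

# Inspect the full transitive dependency tree
npm ls --all
```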
Hidden Risk: Shadow Authentication
AI tools sometimes implement multiple auth systems without realizing it. I’ve seen:
- JWT auth in API routes
- Session auth in middleware
- Supabase auth in components
Each system thinks it’s the source of truth. They don’t coordinate. An attacker who bypasses one system might find the other doesn’t check at all.
Signs you have this problem:
- Multiple `isAuthenticated` functions
- Auth checks that use different user objects
- Some routes checking headers, others checking cookies
The fix: Standardize on one auth system. Remove the others completely. Don’t leave dead auth code—it becomes live auth bypass.
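A sketch of the standardized setup: one `authenticate` function as the single source of truth, called on every route. The names and the cookie format are illustrative, and the verification stub must be replaced with your real session or JWT check:

```typescript
type Session = { userId: string; role: "user" | "admin" };

// The ONE auth function. Stub logic; swap in real session/JWT verification.
function authenticate(cookieHeader: string | undefined): Session | null {
  if (!cookieHeader?.startsWith("session=")) return null;
  const userId = cookieHeader.slice("session=".length);
  return userId ? { userId, role: "user" } : null;
}

// Every route goes through the same check: no parallel JWT/session/Supabase paths.
function handleRequest(cookieHeader: string | undefined): string {
  const session = authenticate(cookieHeader);
  if (!session) return "401 Unauthorized";
  return `200 OK for ${session.userId}`;
}
```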
Hidden Risk: Insecure Defaults
AI picks defaults based on what appears most often in training data. Those defaults are often “works in development” not “secure in production”:
| Component | AI Default | Secure Default |
|---|---|---|
| CORS | origin: '*' | Specific origins |
| Cookies | No flags | httpOnly, secure, sameSite |
| Rate limiting | None | Per-IP limits on auth and API routes |
| Error messages | Detailed | Generic |
| Logging | Minimal | Structured |
| HTTPS | Optional | Required |
The mitigation: Review configuration before deployment. AI rarely generates production-ready config.
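The "Secure Default" column as concrete values. Option names follow common Express/cors conventions; treat this as an illustrative sketch to adapt to your stack:

```typescript
// Allow only known origins, never "*" in production
const corsOptions = {
  origin: ["https://app.example.com"],
};

// Session cookie hardening
const sessionCookieOptions = {
  httpOnly: true,              // not readable from client-side JS
  secure: true,                // only sent over HTTPS
  sameSite: "strict" as const, // not sent on cross-site requests
};

// Rate limiting, stated as a window + max per IP
const rateLimitOptions = {
  windowMs: 60_000, // one-minute window
  max: 100,         // requests per IP per window
};
```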
Hidden Risk: Prompt Leakage
If your app uses AI features (chat, summarization, etc.), AI coding tools often embed your prompts directly in client-side code.
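A sketch of what the leak looks like. The prompt text and endpoint are invented for illustration; the point is that the constant ships in the client bundle:

```typescript
// Bundled into client code: anyone can read this from the shipped JavaScript.
const SYSTEM_PROMPT =
  "You are a support bot for Acme Corp. Discounts above 20% need no approval.";

// Client-side call: the prompt travels with, and is visible in, the bundle.
async function summarize(text: string): Promise<Response> {
  return fetch("/api/ai", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt: `${SYSTEM_PROMPT}\n\n${text}` }),
  });
}
```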
Attackers inspect your bundle. They see your prompts. They use this information to craft better attacks or extract business intelligence.
The mitigation: Keep prompts server-side. Never ship prompt content in client code.
Hidden Risk: Incomplete Error Handling
AI generates happy path code. Error handling is an afterthought.
Missing error handling leads to:
- Information disclosure (stack traces)
- Denial of service (unhandled exceptions)
- Data corruption (partial operations)
- Security bypass (try-catch hiding auth failures)
The mitigation: Add explicit error handling for every external call. Return generic errors to clients, log details server-side.
The Architecture Review
Before deploying vibe-coded apps, ask:
- Auth flow: Where is authentication checked? Is it checked before every sensitive operation?
- Trust boundaries: Where does user input enter the system? Where is it validated?
- Data flow: Can user A access user B’s data through any path?
- Failure modes: What happens when external services fail?
- Secrets: Where are credentials stored? Could they leak?
Don’t trust that “it works” means “it’s secure.” AI generates working code. Security requires intentional design.
Conclusion
Key Takeaways
- Scanners catch code patterns, not architecture flaws
- AI components often have implicit trust that creates auth gaps
- Context loss across prompts leads to inconsistent security
- Dependency sprawl expands attack surface unnecessarily
- Multiple auth systems create bypass opportunities
- AI picks insecure defaults that work in development
- Architecture review is essential before deploying vibe-coded apps