The Hidden Security Risks of Vibe Coding

What Scanners Don’t See

Vibe Coding: A development approach where AI generates most or all of the code based on natural language prompts. The developer focuses on describing intent rather than writing implementation details.

I ran VibeEval on a Lovable-generated SaaS last week. Zero critical findings. Clean dependency tree. Good validation patterns.

Then I spent an hour actually reviewing the code. Found three ways to access any user’s data.

The issue wasn’t in the code patterns. It was in the architecture. The AI had built a perfectly functional system with a fundamentally insecure design. Every individual component was fine. The way they connected was broken.

Hidden Risk: Implicit Trust

AI code trusts other AI code. Here’s what that looks like:

```javascript
// Component A: User profile (generated by AI)
export function UserProfile({ userId }) {
  const user = useUser(userId);
  return <ProfileCard user={user} />;
}

// Component B: User hook (generated by AI)
export function useUser(userId) {
  return useQuery(['user', userId], () =>
    fetch(`/api/users/${userId}`).then(r => r.json())
  );
}

// Component C: API route (generated by AI)
export async function GET(req) {
  const userId = req.params.userId;
  const user = await db.users.findUnique({ where: { id: userId } });
  return Response.json(user);
}
```

See the problem? Each component assumes the others handle authorization, and none of them do. The profile component assumes the hook checks permissions. The hook assumes the API checks. The API assumes nothing; it just serves data.

This pattern emerges because AI generates each piece in isolation. It doesn’t maintain security context across the conversation.
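The fix is to make authorization explicit at the API boundary instead of assuming some other layer checked. A minimal sketch, with a hypothetical session object (`{ userId, role }`) standing in for your real auth layer:

```javascript
// Hypothetical authorization check, enforced at the API boundary.
// The session shape ({ userId, role }) is an illustrative assumption.
function authorizeUserAccess(session, requestedUserId) {
  if (!session) {
    return { status: 401, body: { error: 'Unauthorized' } }; // not signed in
  }
  if (session.userId !== requestedUserId && session.role !== 'admin') {
    return { status: 403, body: { error: 'Forbidden' } }; // wrong user
  }
  return null; // authorized: the route may now query the database
}
```

The route handler calls this before touching the database, so Component C no longer assumes anyone else checked.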

Hidden Risk: Context Loss

Your first prompt: “Build a user management system with proper authentication.”

AI generates good auth patterns.

Prompt 47: “Add a feature to export user data as CSV.”

AI generates the export feature without inheriting the auth context from prompt 1. Now you have an unauthenticated data export endpoint.

The mitigation: Keep auth requirements in every prompt. Be explicit:

“Add a feature to export user data as CSV. The endpoint must require authentication and only allow users to export their own data. Admins can export any user’s data.”
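The generated endpoint should then encode those rules directly. A sketch of the scoping logic, with an illustrative session shape and user rows:

```javascript
// Hypothetical CSV export scoped by the caller's identity.
// session ({ userId, role }) and the user rows are illustrative assumptions.
function exportUsersCsv(session, allUsers) {
  if (!session) {
    throw new Error('authentication required'); // inherit the auth requirement
  }
  // Admins export everyone; regular users export only their own row.
  const rows = session.role === 'admin'
    ? allUsers
    : allUsers.filter(u => u.id === session.userId);
  return ['id,email', ...rows.map(u => `${u.id},${u.email}`)].join('\n');
}
```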

Hidden Risk: Dependency Sprawl

AI coding tools suggest packages based on training data. They don’t evaluate whether you need them.

I analyzed a Cursor-generated project recently:

  • 847 npm dependencies
  • 12 with known vulnerabilities
  • 340 that were completely unused

Each dependency is attack surface. The AI added a charting library for one pie chart. That library pulled in 50 sub-dependencies including an XML parser with a known vulnerability.

The mitigation: Audit dependencies before deployment:

```shell
# Find unused dependencies
npx depcheck

# Audit for vulnerabilities (fail CI on high severity)
npm audit --audit-level=high

# Check bundle size impact (requires a webpack stats file)
npx webpack-bundle-analyzer stats.json
```

Hidden Risk: Shadow Authentication

AI tools sometimes implement multiple auth systems without realizing it. I’ve seen:

  • JWT auth in API routes
  • Session auth in middleware
  • Supabase auth in components

Each system thinks it’s the source of truth. They don’t coordinate. An attacker who bypasses one system might find the other doesn’t check at all.

Signs you have this problem:

  • Multiple isAuthenticated functions
  • Auth checks that use different user objects
  • Some routes checking headers, others checking cookies

The fix: Standardize on one auth system. Remove the others completely. Don’t leave dead auth code—it becomes live auth bypass.
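One way to standardize: a single auth helper that every route uses, producing one canonical user object. A sketch, with `verifyToken` as a hypothetical stand-in for your real token verification:

```javascript
// Sketch: one shared auth check used by every route.
// verifyToken is a hypothetical stand-in for your real verifier.
function makeRequireAuth(verifyToken) {
  return function requireAuth(req) {
    const header = req.headers['authorization'] || '';
    const token = header.startsWith('Bearer ') ? header.slice(7) : null;
    const user = token ? verifyToken(token) : null;
    if (!user) throw new Error('unauthorized');
    return user; // single canonical user object for all routes
  };
}
```

Every route importing this one function means there is exactly one place to audit, and no second system for an attacker to slip past.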

Hidden Risk: Insecure Defaults

AI picks defaults based on what appears most often in training data. Those defaults are often “works in development” not “secure in production”:

| Component      | AI Default    | Secure Default                   |
|----------------|---------------|----------------------------------|
| CORS           | `origin: '*'` | Specific origins                 |
| Cookies        | No flags      | `httpOnly`, `secure`, `sameSite` |
| Rate limiting  | None          | Yes                              |
| Error messages | Detailed      | Generic                          |
| Logging        | Minimal       | Structured                       |
| HTTPS          | Optional      | Required                         |

The mitigation: Review configuration before deployment. AI rarely generates production-ready config.
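A sketch of what that review should produce. The option names below are illustrative; map them onto your framework's actual settings:

```javascript
// Illustrative production-leaning defaults (names are assumptions,
// not any specific framework's API).
function productionConfig(allowedOrigins) {
  return {
    cors: { origin: allowedOrigins },           // never '*' in production
    cookie: { httpOnly: true, secure: true, sameSite: 'lax' },
    rateLimit: { windowMs: 60_000, max: 100 },  // throttle per client
    errors: { exposeDetails: false },           // generic messages to clients
    forceHttps: true,
  };
}
```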

Hidden Risk: Prompt Leakage

If your app uses AI features (chat, summarization, etc.), AI coding tools often embed your prompts directly in client-side code.

```javascript
// This ends up in the browser bundle
const SYSTEM_PROMPT = `You are a customer service agent for AcmeCorp.
Never reveal that you are an AI. Pretend to be human.
Our pricing strategy is confidential: we charge more in zip codes...`;
```

Attackers inspect your bundle. They see your prompts. They use this information to craft better attacks or extract business intelligence.

The mitigation: Keep prompts server-side. Never ship prompt content in client code.
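A sketch of the server-side pattern: the client sends only the user's message, and the server attaches the system prompt before calling the model (`callModel` is a hypothetical model client):

```javascript
// The prompt lives only on the server; it never reaches the bundle.
const SYSTEM_PROMPT = 'You are a support agent for AcmeCorp.'; // server-only

// callModel is a hypothetical async client for your model provider.
async function handleChat(userMessage, callModel) {
  const reply = await callModel([
    { role: 'system', content: SYSTEM_PROMPT },
    { role: 'user', content: userMessage },
  ]);
  return { reply }; // only the reply crosses to the client
}
```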

Hidden Risk: Incomplete Error Handling

AI generates happy path code. Error handling is an afterthought.

```javascript
// AI-generated
async function processPayment(amount) {
  const result = await stripe.charges.create({
    amount,
    currency: 'usd'
  });
  return result;
}

// What happens when Stripe fails?
// What happens with invalid amounts?
// What happens with network errors?
```

Missing error handling leads to:

  • Information disclosure (stack traces)
  • Denial of service (unhandled exceptions)
  • Data corruption (partial operations)
  • Security bypass (try-catch hiding auth failures)

The mitigation: Add explicit error handling for every external call. Return generic errors to clients, log details server-side.
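Applied to the payment example above, a sketch with explicit handling (`stripeCharge` is a hypothetical wrapper around the SDK call, `log` your server-side logger):

```javascript
// Same payment call with validation, error handling, and generic
// client-facing errors. stripeCharge and log are illustrative stand-ins.
async function processPaymentSafely(amount, stripeCharge, log) {
  // Validate input before calling out.
  if (!Number.isInteger(amount) || amount <= 0) {
    return { ok: false, error: 'invalid_amount' }; // generic client error
  }
  try {
    const result = await stripeCharge({ amount, currency: 'usd' });
    return { ok: true, id: result.id };
  } catch (err) {
    log(err);                                      // full details server-side
    return { ok: false, error: 'payment_failed' }; // generic to the client
  }
}
```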

The Architecture Review

Before deploying vibe-coded apps, ask:

  1. Auth flow: Where is authentication checked? Is it checked before every sensitive operation?
  2. Trust boundaries: Where does user input enter the system? Where is it validated?
  3. Data flow: Can user A access user B’s data through any path?
  4. Failure modes: What happens when external services fail?
  5. Secrets: Where are credentials stored? Could they leak?

Don’t trust that “it works” means “it’s secure.” AI generates working code. Security requires intentional design.

FAQ

Can I use AI-generated architecture safely?

Yes, but review it. AI architectures often have implicit trust assumptions that need explicit verification. Treat AI architecture as a starting point, not a final design.

How do I maintain security context across prompts?

Include security requirements in every prompt. Reference your auth system explicitly. Better: create a Claude Project or Cursor rules file with security requirements that applies to all prompts.

Are some vibe coding tools more secure than others?

They vary. Tools that maintain longer context (Claude Code) tend to produce more consistent security patterns. Tools that generate from scratch each time (Lovable) need more explicit security guidance.

Should I avoid vibe coding for sensitive applications?

Not necessarily. Use vibe coding for rapid prototyping, then have humans review security architecture before production. The speed benefit is real—just add the security layer.

Conclusion

Key Takeaways

  • Scanners catch code patterns, not architecture flaws
  • AI components often have implicit trust that creates auth gaps
  • Context loss across prompts leads to inconsistent security
  • Dependency sprawl expands attack surface unnecessarily
  • Multiple auth systems create bypass opportunities
  • AI picks insecure defaults that work in development
  • Architecture review is essential before deploying vibe-coded apps
