80% of Vibe Coded Apps Have Critical Security Holes - Here's How to Fix Yours

I spent the weekend auditing a friend’s AI-built SaaS. Claude Code wrote 90% of the code. The app looked polished, handled payments, even had a dashboard. It also had seven critical vulnerabilities that would have let anyone access the database directly.

This isn’t unusual. Research on LLM-generated code has repeatedly found that even the best-performing models produce code with serious security flaws, and that the vulnerabilities are largely distinct across different AI models.

Here’s the checklist I’ve put together after auditing dozens of these apps. Copy these into your CLAUDE.md file and save yourself from learning these lessons the hard way.

The Checklist

Secure Your Vibe Coded App: 10 security checks every AI-generated application needs.

Stop Direct Database Access

If you use tools like Supabase or Firebase, AI often connects your frontend straight to the database. This is like leaving your front door open.

The fix: Ask the AI to build a middleware or backend API that handles data for you. Yes, it adds complexity. Yes, you need it.

// What AI generates
const { data } = await supabase.from('users').select('*');

// What you actually need
const response = await fetch('/api/users', {
  headers: { Authorization: `Bearer ${token}` }
});

Someone in the replies asked about using Supabase with RLS policies. Here’s the thing: RLS is easy to misconfigure, especially for non-technical founders. And if your mobile app talks to the database directly, you’ve given up all flexibility - even a simple security change requires a release that takes days, because Apple and Google review everything.
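
To make the earlier fetch('/api/users') example concrete, here is a minimal sketch of the server side, assuming Express and supabase-js; requireAuth and the table and column names are illustrative, not from the original post.

import express from 'express';
import { createClient } from '@supabase/supabase-js';

const app = express();

// The service-role key stays on the server and never ships to the client.
const supabase = createClient(
  process.env.SUPABASE_URL,
  process.env.SUPABASE_SERVICE_ROLE_KEY
);

// requireAuth is a hypothetical middleware that verifies the bearer token
// and sets req.user.
app.get('/api/users/me', requireAuth, async (req, res) => {
  const { data, error } = await supabase
    .from('users')
    .select('id, email, plan') // return only the columns the client needs
    .eq('id', req.user.id)     // scope the query to the authenticated user
    .single();

  if (error) return res.status(500).json({ error: 'Something went wrong' });
  res.json(data);
});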

Check Authorization on Every Action

Just because someone is logged in doesn’t mean they can do everything. Every endpoint needs to verify not only who the user is, but whether that specific user is allowed to perform that specific action.

Think of it like checking an ID badge at every high-security door, not just the front entrance.

// Not enough
if (req.user) {
  return res.json(sensitiveData);
}

// Actually secure
if (req.user && req.user.id === resource.ownerId) {
  return res.json(sensitiveData);
}
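
To avoid repeating the ownership check in every handler, you can factor it into reusable middleware. This is a sketch assuming Express; requireAuth and getDocument are hypothetical helpers.

// Reusable ownership check (sketch). loadResource is any function that
// fetches the resource for the current route, e.g. getDocument.
function requireOwner(loadResource) {
  return async (req, res, next) => {
    const resource = await loadResource(req.params.id);
    if (!resource) return res.status(404).json({ error: 'Not found' });
    if (resource.ownerId !== req.user.id) {
      return res.status(403).json({ error: 'Forbidden' });
    }
    req.resource = resource; // make the loaded resource available downstream
    next();
  };
}

app.get('/api/documents/:id', requireAuth, requireOwner(getDocument), (req, res) => {
  res.json(req.resource);
});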

Withhold, Don't Just Hide

If you have premium features, don’t ask the backend “is this user premium?” and then trust the frontend to hide the button.

Client-side authorization: A security anti-pattern where access decisions are made in browser code that users can easily modify.

Tech-savvy users can change that “no” to a “yes” in their browser’s dev tools. Instead, only deliver premium content after the server validates their subscription status.

// Wrong: frontend hides button based on flag
if (user.isPremium) {
  showPremiumContent();
}

// Right: backend only sends content if authorized
app.get('/api/premium-content', requirePremium, (req, res) => {
  res.json(premiumContent);
});
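
The requirePremium middleware in the snippet above is exactly the piece AI tools tend to skip. Here is a minimal sketch, assuming an Express app; getSubscription is a hypothetical helper that reads the user’s plan from your database or payment provider.

// Sketch of requirePremium: the server, not the client, decides access.
async function requirePremium(req, res, next) {
  const subscription = await getSubscription(req.user.id); // hypothetical lookup
  if (!subscription || subscription.status !== 'active') {
    return res.status(403).json({ error: 'Premium subscription required' });
  }
  next();
}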

Keep Secrets Off the Browser

Never put API keys in code that runs on the user’s screen. Your OpenAI key, Stripe secret, database credentials - if it’s in their browser, it’s in their pocket.

// Exposed in browser bundle
const openai = new OpenAI({ apiKey: 'sk-...' });

// Safe on server
// .env file on server only
OPENAI_API_KEY=sk-...

// API route
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

Understand What .env Actually Does

Using a .env file doesn’t automatically make you safe. It only keeps keys out of your git history - and only if .env is actually listed in .gitignore. Those keys can still end up in your client bundle if you’re not careful.

In Next.js, any environment variable prefixed with NEXT_PUBLIC_ is exposed to the browser. In Vite, it’s VITE_. Know your framework’s rules.

# Exposed to browser (Next.js)
NEXT_PUBLIC_STRIPE_KEY=pk_live_...

# Server only (Next.js)
STRIPE_SECRET_KEY=sk_live_...

Do Sensitive Calculations Server-Side

Avoid calculating prices, scores, or sensitive logic on the user’s device. If the logic lives on their phone, they can change the math.

I’ve seen apps where changing a JavaScript variable gave users unlimited credits. Always do calculations on your server.

// Client-side (hackable)
const total = items.reduce((sum, item) => sum + item.price, 0);
const discount = isPremium ? total * 0.2 : 0;
processPayment(total - discount);

// Server-side (secure)
app.post('/api/checkout', async (req, res) => {
  const total = await calculateCartTotal(req.user.id);
  const discount = await getPremiumDiscount(req.user.id, total);
  await processPayment(total - discount);
});
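
For reference, a server-side helper like calculateCartTotal might look like the sketch below. The table names are illustrative; the point is that prices come from your database, never from the request body.

// Hypothetical implementation: totals are computed from trusted data only.
async function calculateCartTotal(userId) {
  const items = await db.query(
    `SELECT p.price, c.quantity
       FROM cart_items c
       JOIN products p ON p.id = c.product_id
      WHERE c.user_id = ?`,
    [userId]
  );
  return items.reduce((sum, item) => sum + item.price * item.quantity, 0);
}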

Sanitize All Inputs

If a user types weird code into a comment box, it could break your database or execute malicious scripts.

Input sanitization: The process of cleaning user input to ensure it’s treated as data, not executable code.

Ask the AI to sanitize all inputs and use parameterized queries, so text is treated as just text, not commands.

import sanitizeHtml from 'sanitize-html';

// Vulnerable: user input interpolated directly into the SQL string
const comment = req.body.comment;
await db.query(`INSERT INTO comments (text) VALUES ('${comment}')`);

// Safe: sanitize-html strips malicious markup, and the parameterized
// query prevents SQL injection
const safeComment = sanitizeHtml(req.body.comment);
await db.query('INSERT INTO comments (text) VALUES (?)', [safeComment]);

Add Rate Limiting

Without rate limiting, a bot could click your “send email” or “generate image” button 1,000 times per second. This crashes your app and costs you a fortune.

import rateLimit from 'express-rate-limit';

const limiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 100 // limit each IP to 100 requests per window
});

app.use('/api/', limiter);
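
A global limit is a good default, but expensive endpoints deserve stricter, per-route limits. A sketch with illustrative numbers; generateImageHandler is a hypothetical handler.

// Tighter limit for costly operations like AI image generation
const expensiveLimiter = rateLimit({
  windowMs: 60 * 1000, // 1 minute
  max: 5               // 5 requests per IP per minute
});

app.post('/api/generate-image', expensiveLimiter, generateImageHandler);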

Stop Logging Sensitive Data

When AI writes code to fix bugs, it often logs everything to see what’s working. Make sure it isn’t printing passwords, emails, or tokens where anyone can see them.

// Dangerous debugging
console.log('User login:', { email, password, token });

// Production safe
console.log('User login:', { userId: user.id, timestamp: new Date() });
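
If you want a systematic guard rather than auditing every log line by hand, a small redaction helper works. A sketch: the field list is illustrative, and the redaction is shallow (nested objects would need recursion).

const SENSITIVE_FIELDS = ['password', 'token', 'apiKey', 'secret'];

// Replace sensitive values before anything reaches the logs
function redact(obj) {
  return Object.fromEntries(
    Object.entries(obj).map(([key, value]) =>
      SENSITIVE_FIELDS.includes(key) ? [key, '[REDACTED]'] : [key, value]
    )
  );
}

console.log('User login:', redact({ userId: user.id, token }));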

Audit with a Different AI Model

If you coded your app with Claude, ask Gemini to “audit this code for security vulnerabilities.” Using a second AI model catches blind spots that the first model consistently misses.

This isn’t paranoia - different models have different training data and different failure modes. A vulnerability that Claude generates repeatedly might be obvious to GPT-4, and vice versa.

Bonus: Error Handling That Doesn’t Leak Secrets

Don’t let error messages spill details like database structure or file paths. Keep them vague for users but log the full details privately.

// Leaky
catch (error) {
  res.status(500).json({
    error: error.message,
    stack: error.stack,
    query: failedQuery
  });
}

// Secure
catch (error) {
  logger.error('Database error', { error, userId: req.user?.id });
  res.status(500).json({ error: 'Something went wrong' });
}

Is using Supabase RLS enough for security?

RLS (Row Level Security) can work, but it’s easy to misconfigure. For non-technical founders, a backend API is safer because the logic is explicit and testable. For mobile apps, RLS also means you lose flexibility - any security change requires an app store release.

Do I need all of these if I'm just building an MVP?

Yes. Security debt compounds faster than technical debt. A breach in your MVP can kill your reputation before you even launch. At minimum, implement server-side auth, input sanitization, and keep secrets off the client.

Can I just use AI to fix the security issues AI created?

Partially. AI is great at fixing known patterns once you identify them. But you need a human (or a specialized security scanner) to identify the issues first. That’s why auditing with a different AI model helps - it catches things the original model misses.

What's the fastest way to check if my app has these issues?

Run your codebase through a security scanner designed for AI-generated code. Tools like VibeEval specifically look for the patterns AI models create. You can also grep your codebase for obvious red flags: dangerouslySetInnerHTML, origin: '*', and environment variables in client bundles.

Key Takeaways

  • Over 80% of vibe coded apps have critical vulnerabilities that attackers can exploit
  • Never connect frontend directly to database - always use a backend API layer
  • Authorization must happen on every single endpoint, not just at login
  • Keep all API keys and secrets server-side only - .env files alone don’t protect you
  • Calculate prices, scores, and sensitive logic on the server where users can’t modify it
  • Sanitize every user input to prevent injection attacks
  • Add rate limiting to prevent abuse and runaway costs
  • Stop logging sensitive data like passwords and tokens
  • Audit your code with a different AI model than you used to write it
  • Error messages should be vague for users but detailed in private logs

Security runs on data.
Make it work for you.

Effortlessly test and evaluate web application security using Vibe Eval agents.