I spent the weekend auditing a friend’s AI-built SaaS. Claude Code wrote 90% of the code. The app looked polished, handled payments, even had a dashboard. It also had seven critical vulnerabilities that would have let anyone access the database directly.
This isn’t unusual. According to LLM Generated Code Security Analysis, even the best-performing LLMs produce code with serious security flaws, and the vulnerabilities are largely distinct across different AI models.
Here’s the checklist I’ve put together after auditing dozens of these apps. Copy these into your CLAUDE.md file and save yourself from learning these lessons the hard way.
The Checklist
Secure Your Vibe Coded App
10 security checks every AI-generated application needs
Stop Direct Database Access
If you use tools like Supabase or Firebase, AI often connects your frontend straight to the database. This is like leaving your front door open.
The fix: Ask the AI to build a middleware or backend API that handles data for you. Yes, it adds complexity. Yes, you need it.
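Here’s a minimal sketch of that layer, assuming Next.js route handlers and Supabase - the route path, table, and column names are placeholders for your own:

```ts
// app/api/projects/route.ts - the browser calls this endpoint;
// only the server ever talks to the database.
import { NextResponse } from "next/server";
import { createClient } from "@supabase/supabase-js";

// Server-only client. The service role key has no NEXT_PUBLIC_ prefix,
// so it never ships in the client bundle.
const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
);

export async function GET(request: Request) {
  // Validate the caller's session token before touching the database.
  const token = request.headers.get("authorization")?.replace("Bearer ", "");
  if (!token) {
    return NextResponse.json({ error: "Unauthorized" }, { status: 401 });
  }
  const { data, error } = await supabase.auth.getUser(token);
  if (error || !data.user) {
    return NextResponse.json({ error: "Unauthorized" }, { status: 401 });
  }

  // Only return rows that belong to the authenticated user.
  const { data: projects, error: dbError } = await supabase
    .from("projects")
    .select("id, name, created_at")
    .eq("owner_id", data.user.id);

  if (dbError) {
    return NextResponse.json({ error: "Something went wrong" }, { status: 500 });
  }
  return NextResponse.json({ projects });
}
```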
Someone in the replies asked about using Supabase with RLS policies. Here’s the thing - RLS is easy to mess up, especially for non-technical founders. And if your mobile app talks to the database directly, you’ve given up all control: even a simple change to data access requires a new release, and Apple and Google reviews can take days.
Check Authorization on Every Action
Just because someone is logged in doesn’t mean they can do everything. Every endpoint needs to verify not only who the user is, but whether that specific user is allowed to take that specific action, before responding.
Think of it like checking an ID badge at every high-security door, not just the front entrance.
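A sketch of what that looks like in practice, using Express - requireAuth and db are stand-ins for whatever auth middleware and data layer your app already has:

```ts
import express from "express";

const app = express();

// Stand-ins for your existing auth middleware and data layer.
declare const requireAuth: express.RequestHandler;
declare const db: {
  posts: {
    findById(id: string): Promise<{ ownerId: string } | null>;
    remove(id: string): Promise<void>;
  };
};

app.delete("/api/posts/:id", requireAuth, async (req, res) => {
  const post = await db.posts.findById(req.params.id);
  if (!post) return res.status(404).json({ error: "Not found" });

  // Authorization, not just authentication: is THIS user allowed
  // to delete THIS post?
  const userId = (req as express.Request & { user?: { id: string } }).user?.id;
  if (!userId || post.ownerId !== userId) {
    return res.status(403).json({ error: "Forbidden" });
  }

  await db.posts.remove(req.params.id);
  res.status(204).end();
});
```

One design note: some teams return 404 in both cases, so an attacker can’t even confirm that a resource they don’t own exists.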
Withhold, Don't Just Hide
If you have premium features, don’t ask the backend “is this user premium?” and then trust the frontend to hide the button.
Tech-savvy users can change that “no” to a “yes” in their browser’s dev tools. Instead, only deliver premium content after the server validates their subscription status.
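Sketched with the same hypothetical Express setup - the plan names and report builder are made up:

```ts
import express from "express";

const app = express();

// Stand-ins for your auth middleware, user store, and premium feature.
declare const requireAuth: express.RequestHandler;
declare const users: {
  findById(id: string): Promise<{ id: string; plan: "free" | "premium" } | null>;
};
declare function buildPremiumReport(userId: string): Promise<object>;

app.get("/api/reports/advanced", requireAuth, async (req, res) => {
  const userId = (req as express.Request & { user?: { id: string } }).user?.id;
  const user = userId ? await users.findById(userId) : null;

  if (!user || user.plan !== "premium") {
    // Withhold the content itself - don't send it and trust the UI to hide it.
    return res.status(402).json({ error: "Upgrade required" });
  }

  res.json({ report: await buildPremiumReport(user.id) });
});
```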
Keep Secrets Off the Browser
Never put API keys in code that runs on the user’s screen. Your OpenAI key, Stripe secret, database credentials - if it’s in their browser, it’s in their pocket.
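The tell is any browser-side fetch that attaches one of your provider keys. The fix is to proxy through your own backend - a sketch, where /api/chat is a placeholder route:

```ts
// BAD: this runs in the browser, so the key ships to every visitor.
// fetch("https://api.openai.com/v1/chat/completions", {
//   headers: { Authorization: `Bearer ${OPENAI_API_KEY}` }, // visible in dev tools
// });

// GOOD: the browser only ever talks to your own backend,
// which adds the OpenAI key server-side.
async function sendChat(message: string) {
  const res = await fetch("/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message }),
  });
  return res.json();
}
```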
Understand What .env Actually Does
Using a .env file doesn’t automatically make you safe. It only keeps keys out of your git history - and only if .env is actually in your .gitignore. Those keys can still end up in your client bundle if you’re not careful.
In Next.js, any environment variable prefixed with NEXT_PUBLIC_ is exposed to the browser. In Vite, it’s VITE_. Know your framework’s rules.
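A quick illustration using Next.js conventions (the variable names are examples):

```ts
// Server-only: no prefix, so the bundler never inlines it into client code.
const stripeSecret = process.env.STRIPE_SECRET_KEY;

// Public: the NEXT_PUBLIC_ prefix tells Next.js to bake this value into
// the client bundle at build time. Anyone can read it - never a secret.
const analyticsId = process.env.NEXT_PUBLIC_ANALYTICS_ID;

// Gut check: if a value appears in the built JS your users download,
// it's public - whether or not it came from a .env file.
```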
Do Sensitive Calculations Server-Side
Avoid calculating prices, scores, or sensitive logic on the user’s device. If the logic lives on their phone, they can change the math.
I’ve seen apps where changing a JavaScript variable gave users unlimited credits. Always do calculations on your server.
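A sketch of a checkout handler that trusts nothing numeric from the client: it accepts item IDs and quantities, then recomputes the total from its own price table (Express, placeholder names):

```ts
import express from "express";

const app = express();
app.use(express.json());

// Stand-in for your product store, which holds the real prices.
declare const products: {
  findById(id: string): Promise<{ priceCents: number } | null>;
};

app.post("/api/checkout", async (req, res) => {
  const items: { id: string; qty: number }[] = req.body.items ?? [];

  let totalCents = 0;
  for (const item of items) {
    const product = await products.findById(item.id);
    if (!product || !Number.isInteger(item.qty) || item.qty < 1) {
      return res.status(400).json({ error: "Invalid item" });
    }
    // Server-side price. Any "price" field the client sent is ignored.
    totalCents += product.priceCents * item.qty;
  }

  // ...create the payment for totalCents here...
  res.json({ totalCents });
});
```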
Sanitize All Inputs
If a user types hostile input into a comment box, it can end up running as SQL against your database (injection) or as a script in other users’ browsers (XSS).
Ask the AI to sanitize all inputs so text is treated as just text, not commands.
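Two concrete versions of “treat text as just text”: parameterized SQL so input can’t become a query, and HTML escaping so input can’t become a script. A sketch using the pg driver - table and column names are placeholders:

```ts
import { Pool } from "pg";

const pool = new Pool(); // connection settings come from env vars

// Parameterized query: the driver sends the input as data, so
// "'; DROP TABLE comments; --" is stored as a harmless string.
async function findCommentsByAuthor(author: string) {
  const { rows } = await pool.query(
    "SELECT id, body FROM comments WHERE author = $1",
    [author]
  );
  return rows;
}

// Escape user text before rendering it into HTML so <script> stays inert.
function escapeHtml(text: string): string {
  return text
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}
```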
Add Rate Limiting
Without rate limiting, a bot could click your “send email” or “generate image” button 1,000 times per second. This crashes your app and costs you a fortune.
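In Express this is a few lines with the express-rate-limit package (recent versions; the numbers below are placeholders to tune for your traffic):

```ts
import express from "express";
import rateLimit from "express-rate-limit";

const app = express();

// A loose global cap as a backstop: 100 requests/minute per IP.
app.use(rateLimit({ windowMs: 60_000, limit: 100 }));

// Expensive endpoints get a much tighter cap.
const generateLimiter = rateLimit({ windowMs: 60_000, limit: 5 });

app.post("/api/generate-image", generateLimiter, (req, res) => {
  // ...call the image model here...
  res.json({ ok: true });
});
```

If you deploy behind a proxy (Vercel, Heroku, nginx), set Express’s trust proxy option so the limiter sees real client IPs rather than throttling everyone behind the proxy as one user.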
Stop Logging Sensitive Data
When AI writes code to fix bugs, it often logs everything to see what’s working. Make sure it isn’t printing passwords, emails, or tokens where anyone can see them.
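One cheap guardrail is a redaction helper in front of your log calls - this is illustrative only; structured loggers like pino have a built-in redact option that does this more thoroughly:

```ts
// Field names that must never reach the logs.
const SENSITIVE = new Set(["password", "token", "apiKey", "authorization"]);

function redact(data: Record<string, unknown>): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(data).map(([key, value]) =>
      SENSITIVE.has(key) ? [key, "[REDACTED]"] : [key, value]
    )
  );
}

console.log("login attempt", redact({ email: "a@example.com", password: "hunter2" }));
// => login attempt { email: 'a@example.com', password: '[REDACTED]' }
```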
Audit with a Different AI Model
If you coded your app with Claude, ask Gemini to “audit this code for security vulnerabilities.” Using a second AI model catches blind spots that the first model consistently misses.
This isn’t paranoia - different models have different training data and different failure modes. A vulnerability that Claude generates repeatedly might be obvious to GPT-4, and vice versa.
Bonus: Error Handling That Doesn’t Leak Secrets
Don’t let error messages spill details like database structure or file paths. Keep them vague for users but log the full details privately.
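In Express, the split lives in the four-argument error handler, registered after all your routes - a sketch:

```ts
import express from "express";

const app = express();

// The 4-argument signature is what marks this as Express's error handler.
app.use(
  (err: Error, req: express.Request, res: express.Response, _next: express.NextFunction) => {
    // Full details go to your private logs...
    console.error("unhandled error", {
      path: req.path,
      message: err.message,
      stack: err.stack,
    });

    // ...while the user gets nothing about stack traces, paths, or SQL.
    res.status(500).json({ error: "Something went wrong" });
  }
);
```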
The fastest way to check whether your app has these issues: search your codebase for dangerouslySetInnerHTML, origin: '*', and environment variables in client bundles.

Key Takeaways
- Over 80% of vibe coded apps have critical vulnerabilities that attackers can exploit
- Never connect frontend directly to database - always use a backend API layer
- Authorization must happen on every single endpoint, not just at login
- Keep all API keys and secrets server-side only - .env files alone don’t protect you
- Calculate prices, scores, and sensitive logic on the server where users can’t modify it
- Sanitize every user input to prevent injection attacks
- Add rate limiting to prevent abuse and runaway costs
- Stop logging sensitive data like passwords and tokens
- Audit your code with a different AI model than you used to write it
- Error messages should be vague for users but detailed in private logs