Last month, I ran comprehensive security scans against 1,000 publicly deployed applications built primarily with AI coding tools. The results weren’t surprising, but the patterns were revealing.
This isn’t a hit piece on vibe coding. I use these tools daily. But understanding where they fail helps us build better guardrails.
Methodology
I identified applications through several signals:
- Deployment on platforms popular with vibe coders (Vercel, Netlify, Replit)
- Characteristic AI-generated code patterns in client-side bundles
- Package.json signatures suggesting rapid generation
- Public repos with Claude/Cursor/Copilot commit messages
Each application received the same battery of 247 security checks, covering OWASP Top 10 vulnerabilities, misconfigurations, and AI-specific issues like hallucinated dependencies.
The Numbers
Let’s break down what I found:
| Vulnerability Category | Prevalence | Average Severity |
|---|---|---|
| Missing Security Headers | 89% | Medium |
| Exposed API Endpoints | 67% | High |
| Insecure Auth Patterns | 45% | Critical |
| Client-Side Secrets | 38% | Critical |
| SQL/NoSQL Injection Risk | 23% | Critical |
| XSS Vulnerabilities | 31% | High |
| CORS Misconfigurations | 56% | Medium |
| Outdated Dependencies | 71% | Variable |
The “clean code, broken security” phenomenon was everywhere. These applications looked professional. The UI was polished. The features worked. The security was absent.
Framework-Specific Findings
Next.js Applications (412 scanned)
Next.js apps performed best overall, but still showed significant issues.
Most common issues:
- API routes without authentication (54%)
- getServerSideProps leaking sensitive data (23%)
- Middleware bypasses (12%)
- Environment variables exposed in client bundle (31%)
Next.js provides more structure, which AI models handle better. The framework’s conventions guide generated code toward better patterns. But the API routes are a consistent weakness, with AI models frequently generating unprotected endpoints.
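The fix for the unprotected-endpoint pattern is a small wrapper that runs before the handler. A minimal sketch of a Next.js-style API route guard; the session check and token value here are hypothetical stand-ins, not code from any scanned app:

```javascript
// Sketch: wrap a Next.js-style API route so unauthenticated requests
// never reach the handler. verifySession is a hypothetical stand-in
// for a real session or token check.
function verifySession(token) {
  // Illustrative only: a real app would verify a signed session token.
  return token === 'valid-session-token';
}

function withAuth(handler) {
  return (req, res) => {
    const token = (req.headers['authorization'] || '').replace('Bearer ', '');
    if (!verifySession(token)) {
      res.statusCode = 401;
      res.end(JSON.stringify({ error: 'Unauthorized' }));
      return;
    }
    return handler(req, res);
  };
}

// Usage in pages/api/*: export default withAuth((req, res) => { ... });
```

The point of the wrapper shape is that the default becomes protected: you have to opt out, rather than remember to opt in on every route.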
Remix Applications (156 scanned)
Remix’s loader/action pattern should enforce better security, but AI models don’t always understand it.
Most common issues:
- Loaders exposing full database objects (47%)
- Actions without CSRF protection (61%)
- Session mishandling (34%)
- Incorrect error boundaries leaking information (28%)
The Remix model is more complex than Next.js, and AI-generated code showed more fundamental misunderstandings of the framework’s security model.
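The loader over-exposure problem has a mechanical fix: return an explicit allowlist of fields instead of the raw database row. A hedged sketch; the user shape and field names are illustrative, not from any scanned app:

```javascript
// Sketch: map a database row to an explicit public shape before
// returning it from a loader. Field names are illustrative.
function toPublicUser(user) {
  const { id, name, avatarUrl } = user;
  return { id, name, avatarUrl };
}

// In a Remix loader, return json(toPublicUser(user)) rather than json(user),
// so passwordHash, email, and anything else on the row never reaches the client.
```

The allowlist direction matters: picking fields to include fails safe when the schema grows, while deleting fields to exclude fails open.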
Vanilla React/Vite (289 scanned)
Without framework guardrails, vanilla React apps showed the highest vulnerability rates.
Most common issues:
- No security headers (94%)
- Direct API calls with exposed keys (52%)
- XSS through dangerouslySetInnerHTML (29%)
- No CORS policy (78%)
- State management exposing sensitive data (41%)
AI models generating vanilla React default to patterns that work but aren’t secure. Without framework conventions enforcing structure, the generated code takes shortcuts.
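The dangerouslySetInnerHTML finding is a good example of a shortcut that works until it doesn't. When untrusted text must be rendered, escaping (or better, letting React render it as children, which escapes automatically) closes the hole. A minimal hand-rolled sketch for illustration; in practice a vetted sanitizer is the safer choice:

```javascript
// Minimal HTML-escaping sketch for untrusted text. Prefer rendering
// text as React children (escaped automatically) or a vetted sanitizer;
// this exists only to show what the escaping must cover.
function escapeHtml(text) {
  return text
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}
```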
Astro Applications (143 scanned)
Astro’s island architecture provides some inherent security benefits, but deployment configuration often undermines them.
Most common issues:
- Static pages with hardcoded secrets (33%)
- SSR endpoints without protection (45%)
- Missing security headers (87%)
- Client-side hydration exposing data (21%)
Astro apps had the lowest critical vulnerability rate, partly because they often have less server-side attack surface.
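The hardcoded-secret finding across frameworks has a one-line remedy: read the key from the environment and fail fast when it is missing, so a forgotten configuration surfaces at startup instead of shipping a fallback value. A sketch; the variable name is hypothetical:

```javascript
// Sketch: require a secret from the environment instead of hardcoding it.
// Throwing on a missing value surfaces misconfiguration immediately.
// API_KEY is an illustrative name.
function requireSecret(name) {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required secret: ${name}`);
  }
  return value;
}

// const apiKey = requireSecret('API_KEY');
```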
Tool-Specific Patterns
Cursor-Generated Code
Applications showing Cursor generation signatures had distinctive patterns:
- Better code structure overall
- Consistent missing input validation
- Strong tendency toward cors({ origin: '*' })
- Frequent use of deprecated packages
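The wildcard CORS default is worth contrasting with an allowlist. A framework-agnostic sketch of the origin check that cors({ origin: '*' }) skips; the listed origins are placeholders:

```javascript
// Sketch: echo only allowlisted origins in Access-Control-Allow-Origin
// instead of '*'. The origins below are placeholders.
const ALLOWED_ORIGINS = new Set([
  'https://app.example.com',
  'https://admin.example.com',
]);

function corsOriginFor(requestOrigin) {
  // Returns the origin to reflect back, or null to deny the request.
  return ALLOWED_ORIGINS.has(requestOrigin) ? requestOrigin : null;
}
```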
Claude-Generated Code
Claude-generated patterns showed:
- More verbose security comments (without implementation)
- Better error handling structure
- Frequent hallucinated package suggestions
- Tendency to over-expose in error responses
Replit Agent Code
Replit-deployed applications showed:
- Highest rate of exposed environment variables
- Most database connection string leaks
- Simplest authentication implementations
- Strongest tendency toward single-file architectures
The “Clean Code, Broken Security” Phenomenon
This was the most striking finding. Traditional vulnerable code often looks bad. Spaghetti logic, obvious shortcuts, clearly amateur patterns. AI-generated vulnerable code looks professional.
The JWT authentication bypass I kept finding isn’t obviously wrong.
The code is clean. The variable names are good. The structure is professional. It just doesn’t actually authenticate anyone because jwt.decode doesn’t verify the signature.
I found this pattern, or variations of it, in 23% of applications with JWT authentication.
Deployment Platform Comparison
Where applications were deployed affected their security posture:
| Platform | Average Vulnerability Count | Most Common Issue |
|---|---|---|
| Vercel | 3.2 | API route exposure |
| Netlify | 4.1 | Missing headers |
| Replit | 6.8 | Environment leaks |
| Railway | 4.5 | Database exposure |
| Fly.io | 3.9 | CORS misconfiguration |
Vercel’s automatic security headers and Next.js integration provide baseline protection. Replit’s development-first approach leaves many security configurations as developer responsibility.
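For platforms that don't set headers automatically, a baseline set is easy to add. A hedged sketch of the headers the highest-scoring apps tended to set; the values are a starting point, not a vetted policy for any particular app:

```javascript
// Sketch: a baseline security-header set. Values are illustrative
// defaults, not a tuned policy.
const securityHeaders = [
  { key: 'X-Content-Type-Options', value: 'nosniff' },
  { key: 'X-Frame-Options', value: 'DENY' },
  { key: 'Referrer-Policy', value: 'strict-origin-when-cross-origin' },
  { key: 'Strict-Transport-Security', value: 'max-age=63072000; includeSubDomains' },
  { key: 'Content-Security-Policy', value: "default-src 'self'" },
];

// In next.config.js:
// module.exports = {
//   async headers() {
//     return [{ source: '/(.*)', headers: securityHeaders }];
//   },
// };
```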
What Actually Prevents These Issues
Looking at the applications that passed security scans, patterns emerged:
Applications with zero critical vulnerabilities shared:
- Automated security scanning in deployment pipeline (100%)
- Use of authentication libraries over custom implementations (89%)
- Environment variable management through platform secrets (94%)
- Regular dependency updates (78%)
The cleanest applications weren’t written by better developers or generated by better AI. They had better processes around the AI-generated code.
Secure Your Vibe-Coded App
Based on patterns from the most secure applications in this study
Add Security Headers
Configure headers in next.config.js; on Netlify, use a _headers file. This single step addresses the most common vulnerability category.
Protect API Routes
Require authentication on every API route. Unprotected endpoints appeared in 67% of scanned applications.
Use Auth Libraries
Prefer established authentication libraries over custom implementations; 89% of the applications with zero critical vulnerabilities did.
Scan Before Deploy
Add automated security scanning to the deployment pipeline; every application in this study with zero critical vulnerabilities had it.
Audit Dependencies Weekly
Run npm audit or use Dependabot. 71% of scanned apps had known vulnerable dependencies that could be fixed with a version bump.
FAQ
Which AI coding tool produces the most secure code?
None stood out as clearly safest. Each showed distinctive failure modes (Cursor's wildcard CORS, Claude's security comments without implementation, Replit's environment leaks), and framework choice predicted outcomes better than tool choice.
Are these vulnerabilities specific to AI-generated code?
No. They fall into the same OWASP categories found in human-written code. What differs is the "clean code, broken security" effect: the code looks professional, so the vulnerabilities are harder to spot in review.
How did you identify vibe-coded applications?
Through the combination of signals described in the methodology: deployment on platforms popular with vibe coders, characteristic AI-generated patterns in client-side bundles, package.json signatures, and public repos with Claude/Cursor/Copilot commit messages.
What should I do if I've already deployed a vibe-coded app?
Work through the checklist above: add security headers, put authentication on every API route, move secrets into platform environment variables, run npm audit, and add automated scanning to your deployment pipeline.
Conclusion
Key Takeaways
- 73% of vibe-coded applications contain at least one critical vulnerability
- Missing security headers (89%) and exposed API endpoints (67%) are the most common issues
- Next.js applications show 68% vulnerability rate vs 81% for vanilla React
- Framework choice matters more than AI tool choice for security outcomes
- The “clean code, broken security” phenomenon makes vulnerabilities harder to spot visually
- JWT authentication bypasses using decode instead of verify appear in 23% of apps
- Replit deployments show 2x the vulnerability count of Vercel deployments
- Applications with automated security scanning have 91% fewer critical vulnerabilities
- The most secure apps use auth libraries instead of custom implementations
The vibe coding revolution is real, and it’s not going away. But this research shows that speed without security creates technical debt measured in vulnerabilities, not just code quality.
The good news: the fixes are straightforward. Security headers, auth middleware, dependency updates, and automated scanning address the vast majority of issues found in this study.
The bad news: 73% of vibe-coded apps haven’t implemented them yet.