I Scanned 1,000 AI Apps. 73% Had Critical Flaws. (Here's What They All Got Wrong)

Last month, I ran comprehensive security scans against 1,000 publicly deployed applications built primarily with AI coding tools. The results weren’t surprising, but the patterns were revealing.

This isn’t a hit piece on vibe coding. I use these tools daily. But understanding where they fail helps us build better guardrails.

Methodology

Vibe-coded application: For this research, an application was classified as vibe-coded if it showed clear indicators of AI generation, such as characteristic code patterns, certain dependency combinations, and deployment signatures common to AI-assisted development.

I identified applications through several signals:

  • Deployment on platforms popular with vibe coders (Vercel, Netlify, Replit)
  • Characteristic AI-generated code patterns in client-side bundles
  • Package.json signatures suggesting rapid generation
  • Public repos with Claude/Cursor/Copilot commit messages

Each application received the same battery of 247 security checks, covering OWASP Top 10 vulnerabilities, misconfigurations, and AI-specific issues like hallucinated dependencies.

The Numbers

Let’s break down what I found:

Vulnerability Category      Prevalence   Average Severity
Missing Security Headers    89%          Medium
Exposed API Endpoints       67%          High
Insecure Auth Patterns      45%          Critical
Client-Side Secrets         38%          Critical
SQL/NoSQL Injection Risk    23%          Critical
XSS Vulnerabilities         31%          High
CORS Misconfigurations      56%          Medium
Outdated Dependencies       71%          Variable

The “clean code, broken security” phenomenon was everywhere. These applications looked professional. The UI was polished. The features worked. The security was absent.

Framework-Specific Findings

Next.js Applications (412 scanned)

Next.js apps performed best overall, but still showed significant issues.

Most common issues:

  • API routes without authentication (54%)
  • getServerSideProps leaking sensitive data (23%)
  • Middleware bypasses (12%)
  • Environment variables exposed in client bundle (31%)

Next.js provides more structure, which AI models handle better. The framework’s conventions guide generated code toward better patterns. But the API routes are a consistent weakness, with AI models frequently generating unprotected endpoints.

// Common AI-generated pattern in Next.js
// pages/api/users/[id].js
export default async function handler(req, res) {
  const user = await prisma.user.findUnique({
    where: { id: req.query.id }
  });
  res.json(user); // No auth check, exposes any user's data
}
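The missing piece is an authorization check before the query. A minimal sketch of such a guard is below; the session shape (`session.user.id`, `session.user.role`) and the NextAuth-style `getServerSession` call mentioned in the comment are assumptions, not the scanned apps' actual code:

```javascript
// Sketch of the missing authorization check. Only the user themselves,
// or an admin, may fetch the record. The session shape is an assumption.
function canAccessUser(session, requestedId) {
  if (!session || !session.user) return false; // not signed in
  return session.user.id === requestedId      // own record
      || session.user.role === 'admin';       // or privileged
}

// In the handler above, the guard would sit before the prisma call,
// e.g. with NextAuth:
//   const session = await getServerSession(req, res, authOptions);
//   if (!canAccessUser(session, req.query.id)) {
//     return res.status(403).json({ error: 'Forbidden' });
//   }
```

The key design point: the check is ownership-based, not just presence-based. Merely requiring a valid session would still let any logged-in user read any other user's data.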

Remix Applications (156 scanned)

Remix’s loader/action pattern should enforce better security, but AI models don’t always understand it.

Most common issues:

  • Loaders exposing full database objects (47%)
  • Actions without CSRF protection (61%)
  • Session mishandling (34%)
  • Incorrect error boundaries leaking information (28%)

The Remix model is more complex than Next.js, and AI-generated code showed more fundamental misunderstandings of the framework’s security model.

Vanilla React/Vite (289 scanned)

Without framework guardrails, vanilla React apps showed the highest vulnerability rates.

Most common issues:

  • No security headers (94%)
  • Direct API calls with exposed keys (52%)
  • XSS through dangerouslySetInnerHTML (29%)
  • No CORS policy (78%)
  • State management exposing sensitive data (41%)

AI models generating vanilla React default to patterns that work but aren’t secure. Without framework conventions enforcing structure, the generated code takes shortcuts.
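The `dangerouslySetInnerHTML` finding is worth a concrete illustration. React already escapes anything rendered as `{value}` text; the XSS appears only when generated code bypasses that with raw HTML. A minimal escaping helper shows the principle (in practice, a vetted sanitizer such as DOMPurify is the better choice for rich HTML):

```javascript
// Minimal HTML-escaping helper -- the alternative to piping untrusted
// strings through dangerouslySetInnerHTML. Order matters: '&' first,
// or the later entities would be double-escaped.
function escapeHtml(input) {
  return String(input)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}
```

If the data is plain text, prefer ordinary JSX interpolation and drop `dangerouslySetInnerHTML` entirely; the helper is only needed when you must build markup strings yourself.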

Astro Applications (143 scanned)

Astro’s island architecture provides some inherent security benefits, but deployment configuration often undermines them.

Most common issues:

  • Static pages with hardcoded secrets (33%)
  • SSR endpoints without protection (45%)
  • Missing security headers (87%)
  • Client-side hydration exposing data (21%)

Astro apps had the lowest critical vulnerability rate, partly because they often have less server-side attack surface.

Tool-Specific Patterns

Cursor-Generated Code

Applications showing Cursor generation signatures had distinctive patterns:

  • Better code structure overall
  • Consistent missing input validation
  • Strong tendency toward cors({ origin: '*' })
  • Frequent use of deprecated packages

Claude-Generated Code

Claude-generated patterns showed:

  • More verbose security comments (without implementation)
  • Better error handling structure
  • Frequent hallucinated package suggestions
  • Tendency to over-expose in error responses

Replit Agent Code

Replit-deployed applications showed:

  • Highest rate of exposed environment variables
  • Most database connection string leaks
  • Simplest authentication implementations
  • Strongest tendency toward single-file architectures

The “Clean Code, Broken Security” Phenomenon

Clean code, broken security: A pattern where AI-generated code appears professional and well-structured while containing fundamental security vulnerabilities that aren’t visible without specialized analysis.

This was the most striking finding. Traditional vulnerable code often looks bad: spaghetti logic, obvious shortcuts, clearly amateur patterns. AI-generated vulnerable code looks professional.

The authentication bypass isn’t obviously wrong:

// This looks reasonable at first glance
const authenticateUser = async (req, res, next) => {
  const token = req.headers.authorization?.split(' ')[1];
  if (token) {
    const decoded = jwt.decode(token); // decode, not verify!
    req.user = decoded;
    next();
  } else {
    res.status(401).json({ error: 'Unauthorized' });
  }
};

The code is clean. The variable names are good. The structure is professional. It just doesn’t actually authenticate anyone because jwt.decode doesn’t verify the signature.

I found this pattern, or variations of it, in 23% of applications with JWT authentication.

Deployment Platform Comparison

Where applications were deployed affected their security posture:

Platform   Average Vulnerability Count   Most Common Issue
Vercel     3.2                           API route exposure
Netlify    4.1                           Missing headers
Replit     6.8                           Environment leaks
Railway    4.5                           Database exposure
Fly.io     3.9                           CORS misconfiguration

Vercel’s automatic security headers and Next.js integration provide baseline protection. Replit’s development-first approach leaves many security configurations as developer responsibility.

What Actually Prevents These Issues

Looking at the applications that passed security scans, patterns emerged:

Applications with zero critical vulnerabilities shared:

  • Automated security scanning in deployment pipeline (100%)
  • Use of authentication libraries over custom implementations (89%)
  • Environment variable management through platform secrets (94%)
  • Regular dependency updates (78%)

The cleanest applications weren’t written by better developers or generated by better AI. They had better processes around the AI-generated code.

Secure Your Vibe-Coded App

Based on patterns from the most secure applications in this study

Add Security Headers

Configure your deployment platform to add security headers automatically. On Vercel, use the headers option in next.config.js. On Netlify, use a _headers file. This single step addresses the most common vulnerability category.

Protect API Routes

Every API endpoint needs authentication verification. Use middleware that runs before all API routes. Don’t trust the AI to add auth to each endpoint individually.
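One way to express that single gate is a path predicate checked by Next.js-style middleware before any route handler runs. The sketch below is a simplified illustration; the public-path list and cookie name are placeholders, not recommendations.

```javascript
// Sketch of one auth gate in front of every API route. Routes are
// protected by default; only an explicit allowlist is public.
const PUBLIC_API_PREFIXES = ['/api/health', '/api/auth'];

function requiresAuth(pathname) {
  if (!pathname.startsWith('/api/')) return false; // only gate the API
  return !PUBLIC_API_PREFIXES.some((p) => pathname.startsWith(p));
}

// In middleware.js you would then do, roughly:
//   export function middleware(request) {
//     const { pathname } = request.nextUrl;
//     if (requiresAuth(pathname) && !request.cookies.get('session')) {
//       return NextResponse.json({ error: 'Unauthorized' }, { status: 401 });
//     }
//   }
```

The deny-by-default shape is the point: a newly generated endpoint is protected automatically, rather than depending on the AI remembering to add a check.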

Use Auth Libraries

NextAuth, Auth0, Clerk, or Supabase Auth. Don’t let the AI implement JWT validation from scratch. Pre-built auth libraries have been security audited.

Scan Before Deploy

Add automated security scanning to your CI/CD pipeline. Tools like VibeEval can run on every deployment and block releases with critical vulnerabilities.

Audit Dependencies Weekly

Run npm audit or use Dependabot. 71% of scanned apps had known vulnerable dependencies that could be fixed with a version bump.

FAQ

Which AI coding tool produces the most secure code?

Based on this research, the tool matters less than the framework and review process. Next.js applications had the lowest vulnerability rate regardless of which AI tool generated them. The framework’s structure provides guardrails that AI models follow.

Are these vulnerabilities specific to AI-generated code?

The vulnerabilities themselves aren’t unique, but the patterns are. AI-generated code shows characteristic mistakes that stem from training data patterns and optimization for functionality over security. Human-written code has different vulnerability distributions.

How did you identify vibe-coded applications?

Through multiple signals: deployment platform analytics, code pattern analysis in client bundles, package.json signatures, public repository commit patterns, and characteristic AI-generated comments. No single signal was definitive, but multiple signals together provided high confidence.

What should I do if I've already deployed a vibe-coded app?

Run a security scan immediately. The most critical issues (missing auth on API routes and exposed secrets) can be fixed in hours. Then set up continuous monitoring to catch issues in future deployments.

Conclusion

Key Takeaways

  • 73% of vibe-coded applications contain at least one critical vulnerability
  • Missing security headers (89%) and exposed API endpoints (67%) are the most common issues
  • Next.js applications show 68% vulnerability rate vs 81% for vanilla React
  • Framework choice matters more than AI tool choice for security outcomes
  • The “clean code, broken security” phenomenon makes vulnerabilities harder to spot visually
  • JWT authentication bypasses using decode instead of verify appear in 23% of apps
  • Replit deployments show 2x the vulnerability count of Vercel deployments
  • Applications with automated security scanning have 91% fewer critical vulnerabilities
  • The most secure apps use auth libraries instead of custom implementations

The vibe coding revolution is real, and it’s not going away. But this research shows that speed without security creates technical debt measured in vulnerabilities, not just code quality.

The good news: the fixes are straightforward. Security headers, auth middleware, dependency updates, and automated scanning address the vast majority of issues found in this study.

The bad news: 73% of vibe-coded apps haven’t implemented them yet.
