Common Vulnerabilities in AI-Generated Code (And How to Fix Them)

The Pattern Behind AI Vulnerabilities

Security Vulnerability: A weakness in code that allows attackers to compromise the confidentiality, integrity, or availability of a system. In AI-generated code, these typically stem from missing validation, insecure defaults, and shortcuts in authentication logic.

AI doesn’t introduce new vulnerability classes. It introduces the same old vulnerabilities at higher frequency because it optimizes for “code that works” rather than “code that’s secure.”

Here are the most common issues, with real examples from production code.

1. Insecure Direct Object References (IDOR)

What AI does wrong:

// API endpoint generated by AI
app.get('/api/users/:userId/documents', async (req, res) => {
  const { userId } = req.params;
  const documents = await db.documents.findMany({
    where: { userId: userId }
  });
  res.json(documents);
});

The problem: Any authenticated user can view any other user’s documents by changing the userId parameter.

The fix:

app.get('/api/users/:userId/documents', async (req, res) => {
  const { userId } = req.params;
  const requestingUser = req.user.id; // From auth middleware

  // Verify the requesting user can access this resource
  if (userId !== requestingUser && !req.user.isAdmin) {
    return res.status(403).json({ error: 'Forbidden' });
  }

  const documents = await db.documents.findMany({
    where: { userId: userId }
  });
  res.json(documents);
});

Better fix: Don’t use URL parameters for sensitive resources at all:

app.get('/api/my-documents', async (req, res) => {
  const documents = await db.documents.findMany({
    where: { userId: req.user.id }
  });
  res.json(documents);
});

2. Missing Input Validation

What AI does wrong:

// User registration
app.post('/api/register', async (req, res) => {
  const { email, password, name } = req.body;

  const user = await db.users.create({
    data: { email, password: hashPassword(password), name }
  });

  res.json({ user });
});

The problem: No validation of email format, password strength, or name content. With raw queries, this invites SQL injection; if the name is later displayed without sanitization, it invites XSS.

The fix:

import { z } from 'zod';

const registerSchema = z.object({
  email: z.string().email().max(255),
  password: z.string().min(8).max(128),
  name: z.string().min(1).max(100).regex(/^[a-zA-Z\s]+$/)
});

app.post('/api/register', async (req, res) => {
  const result = registerSchema.safeParse(req.body);

  if (!result.success) {
    return res.status(400).json({ errors: result.error.issues });
  }

  const { email, password, name } = result.data;

  // Check for existing user
  const existing = await db.users.findUnique({ where: { email } });
  if (existing) {
    return res.status(409).json({ error: 'Email already registered' });
  }

  const user = await db.users.create({
    data: {
      email,
      password: await hashPassword(password),
      name
    }
  });

  res.json({ user: { id: user.id, email: user.email, name: user.name } });
});

3. Hardcoded Secrets

What AI does wrong:

// AI generates this during development and it stays
const stripe = new Stripe('sk_live_xxxxx...');
const supabase = createClient(
  'https://xxx.supabase.co',
  'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...'
);

The problem: Secrets in code get committed to git and exposed.

The fix:

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY);
const supabase = createClient(
  process.env.SUPABASE_URL,
  process.env.SUPABASE_ANON_KEY
);

Prevention: Add pre-commit hook:

# .pre-commit-config.yaml
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.0
    hooks:
      - id: gitleaks

4. JWT Without Proper Verification

What AI does wrong:

// Token generation
const token = jwt.sign({ userId: user.id }, 'secret');

// Token verification
app.use((req, res, next) => {
  const token = req.headers.authorization?.split(' ')[1];
  if (token) {
    req.user = jwt.decode(token); // WRONG: decode doesn't verify!
  }
  next();
});

The problem: jwt.decode() doesn’t verify the signature. Anyone can forge tokens.

The fix:

const JWT_SECRET = process.env.JWT_SECRET;

// Token generation with expiration
const token = jwt.sign(
  { userId: user.id },
  JWT_SECRET,
  { expiresIn: '1h' }
);

// Proper verification
app.use((req, res, next) => {
  const token = req.headers.authorization?.split(' ')[1];
  if (token) {
    try {
      req.user = jwt.verify(token, JWT_SECRET);
    } catch (error) {
      return res.status(401).json({ error: 'Invalid token' });
    }
  }
  next();
});

5. SQL Injection via String Concatenation

What AI does wrong:

// Search endpoint
app.get('/api/search', async (req, res) => {
  const { query } = req.query;
  const results = await db.$queryRawUnsafe(
    `SELECT * FROM products WHERE name LIKE '%${query}%'`
  );
  res.json(results);
});

The problem: Direct string interpolation allows SQL injection.

The fix:

app.get('/api/search', async (req, res) => {
  const { query } = req.query;

  // Input validation
  if (!query || query.length > 100) {
    return res.status(400).json({ error: 'Invalid query' });
  }

  // Parameterized query
  const results = await db.products.findMany({
    where: {
      name: {
        contains: query,
        mode: 'insensitive'
      }
    },
    take: 50
  });

  res.json(results);
});

6. Missing Rate Limiting

What AI does wrong:

// Login endpoint with no protection
app.post('/api/login', async (req, res) => {
  const { email, password } = req.body;
  const user = await authenticate(email, password);
  // ... return token
});

The problem: Attackers can brute force passwords indefinitely.

The fix:

import rateLimit from 'express-rate-limit';

const loginLimiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 5, // 5 attempts per window
  message: { error: 'Too many login attempts, try again later' },
  standardHeaders: true,
  legacyHeaders: false
});

app.post('/api/login', loginLimiter, async (req, res) => {
  // ... login logic
});

7. Cross-Site Scripting (XSS)

What AI does wrong:

// React component
function Comment({ comment }) {
  return (
    <div
      className="comment"
      dangerouslySetInnerHTML={{ __html: comment.content }}
    />
  );
}

The problem: User-supplied HTML is rendered without sanitization.

The fix:

import DOMPurify from 'dompurify';

function Comment({ comment }) {
  const sanitizedContent = DOMPurify.sanitize(comment.content);

  return (
    <div
      className="comment"
      dangerouslySetInnerHTML={{ __html: sanitizedContent }}
    />
  );
}

// Better: avoid HTML entirely
function Comment({ comment }) {
  return (
    <div className="comment">
      {comment.content}
    </div>
  );
}

Quick Fix Checklist

Rapid security improvements for vibe-coded apps:

Add Authorization to Every Endpoint

Every API endpoint that accesses data needs to verify:

  • Is the user authenticated?
  • Does this user have permission to access THIS specific resource?
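
Both checks can be folded into one reusable middleware. A hypothetical sketch (requireOwnership and the req.user shape are illustrative, assuming an auth middleware has already populated req.user):

```javascript
// Returns Express-style middleware that rejects unauthenticated requests
// and requests for resources the user does not own.
function requireOwnership(paramName) {
  return (req, res, next) => {
    if (!req.user) {
      return res.status(401).json({ error: 'Unauthenticated' });
    }
    // Compare the resource owner in the URL to the authenticated user
    if (req.params[paramName] !== req.user.id && !req.user.isAdmin) {
      return res.status(403).json({ error: 'Forbidden' });
    }
    next();
  };
}

// Usage:
// app.get('/api/users/:userId/documents', requireOwnership('userId'), handler);
```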

Add Input Validation Everywhere

Use Zod or Joi to validate every input:

  • Request bodies
  • Query parameters
  • URL parameters
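
One middleware can cover all three sources. A sketch written against the safeParse interface that Zod (used in section 2) exposes; the validate helper itself is an assumption, not part of Zod:

```javascript
// Generic validation middleware: runs the schema against req.body,
// req.query, or req.params and replaces it with the parsed result.
function validate(schema, source = 'body') {
  return (req, res, next) => {
    const result = schema.safeParse(req[source]);
    if (!result.success) {
      return res.status(400).json({ errors: result.error.issues });
    }
    req[source] = result.data; // downstream handlers see validated data only
    next();
  };
}

// Usage with the registerSchema from section 2:
// app.post('/api/register', validate(registerSchema), handler);
// app.get('/api/search', validate(searchSchema, 'query'), handler);
```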

Move Secrets to Environment Variables

Search for hardcoded strings that look like API keys. Move them all to .env files and use process.env.
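
It also helps to fail fast at startup when a variable is missing, rather than discovering an undefined key mid-request. A minimal sketch; the variable list here is illustrative:

```javascript
// Names of secrets this app expects; adjust to your own configuration.
const REQUIRED_ENV = ['STRIPE_SECRET_KEY', 'SUPABASE_URL', 'SUPABASE_ANON_KEY', 'JWT_SECRET'];

// Throws at boot if any required variable is absent, naming the missing ones.
function assertEnv(env = process.env) {
  const missing = REQUIRED_ENV.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(', ')}`);
  }
}
```

Call assertEnv() once at the top of your entry point, before any client is constructed.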

Add Rate Limiting to Auth Endpoints

Login, registration, password reset, and 2FA endpoints need rate limiting. 5-10 attempts per 15 minutes is reasonable.
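
Under the hood this is just a counter per client per time window. A toy fixed-window sketch of the idea behind express-rate-limit (use the real library in production; this in-memory version does not survive restarts or multiple processes):

```javascript
// Returns allow(key) -> boolean; "key" is typically the client IP.
function createLimiter({ windowMs, max }) {
  const hits = new Map(); // key -> { count, windowStart }
  return function allow(key, now = Date.now()) {
    const entry = hits.get(key);
    // New key, or the previous window has expired: start a fresh window
    if (!entry || now - entry.windowStart >= windowMs) {
      hits.set(key, { count: 1, windowStart: now });
      return true;
    }
    entry.count += 1;
    return entry.count <= max;
  };
}
```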

Use Parameterized Queries

Never concatenate user input into queries. Use your ORM’s query builder or parameterized queries.
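
When you do need raw SQL, the parameterized shape keeps the SQL text constant while user input travels separately as values. A sketch in the style of node-postgres (pg); buildSearch is a hypothetical helper:

```javascript
// Builds a pg-style query config: the text never contains user input,
// which arrives only through the values array as parameter $1.
function buildSearch(query) {
  return {
    text: "SELECT * FROM products WHERE name ILIKE '%' || $1 || '%' LIMIT 50",
    values: [query],
  };
}

// With pg: const { rows } = await pool.query(buildSearch(userInput));
```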

FAQ

Why does AI keep making these mistakes?

AI optimizes for code that runs, not code that’s secure. Training data includes vulnerable code. The model has no concept of “attacker” or “exploit” when generating solutions.

Will future AI models fix these issues?

Somewhat. Newer models are better at security. But the fundamental issue remains—AI generates what works, not what’s safe. Security will always require human review.

Should I rewrite all AI-generated code?

No. Fix the specific vulnerabilities. Most AI code is fine. Focus on auth, data access, and input handling—the areas where security matters.

Conclusion

Key Takeaways

  • IDOR is the most common AI vulnerability—always verify resource ownership
  • Input validation must happen server-side with proper schemas
  • Hardcoded secrets need pre-commit hooks to prevent commits
  • JWT requires proper verification, not just decoding
  • SQL injection prevention means parameterized queries, always
  • Rate limiting is essential on auth endpoints
  • XSS prevention: sanitize HTML or avoid rendering user HTML entirely
