Security Vulnerability: A weakness in code that allows attackers to compromise the confidentiality, integrity, or availability of a system. In AI-generated code, these typically stem from missing validation, insecure defaults, and shortcuts in authentication logic.
AI doesn’t introduce new vulnerability classes. It introduces the same old vulnerabilities at higher frequency because it optimizes for “code that works” rather than “code that’s secure.”
Here are the most common issues, with real examples from production code.
1. Insecure Direct Object References (IDOR)
What AI does wrong:
```javascript
// API endpoint generated by AI
app.get('/api/users/:userId/documents', async (req, res) => {
  const { userId } = req.params;
  const documents = await db.documents.findMany({
    where: { userId: userId }
  });
  res.json(documents);
});
```
The problem: Any authenticated user can view any other user’s documents by changing the userId parameter.
The fix:
```javascript
app.get('/api/users/:userId/documents', async (req, res) => {
  const { userId } = req.params;
  const requestingUser = req.user.id; // From auth middleware

  // Verify the requesting user can access this resource
  if (userId !== requestingUser && !req.user.isAdmin) {
    return res.status(403).json({ error: 'Forbidden' });
  }

  const documents = await db.documents.findMany({
    where: { userId: userId }
  });
  res.json(documents);
});
```
Better fix: Don’t use URL parameters for sensitive resources at all.
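One way to apply that pattern is to scope the query to the session user rather than to anything in the URL. A sketch, assuming Express-style handlers and a Prisma-like `db` client (the `/api/me/documents` route and `makeMyDocumentsHandler` name are illustrative, not from the original):

```javascript
// Hypothetical handler for GET /api/me/documents: there is no userId in the
// URL, so there is nothing for an attacker to tamper with.
function makeMyDocumentsHandler(db) {
  return async function (req, res) {
    // The filter comes from the session (set by auth middleware),
    // never from client-supplied input.
    const documents = await db.documents.findMany({
      where: { userId: req.user.id }
    });
    res.json(documents);
  };
}
```

Wired up as `app.get('/api/me/documents', makeMyDocumentsHandler(db))`, the handler can only ever return the caller’s own documents.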
2. Missing Input Validation

What AI does wrong:

```javascript
// User registration
app.post('/api/register', async (req, res) => {
  const { email, password, name } = req.body;
  const user = await db.users.create({
    data: { email, password: hashPassword(password), name }
  });
  res.json({ user });
});
```
The problem: No validation on email format, password strength, or name content. SQL injection if using raw queries. XSS if name is displayed without sanitization.
The fix:

```javascript
import { z } from 'zod';

const registerSchema = z.object({
  email: z.string().email().max(255),
  password: z.string().min(8).max(128),
  name: z.string().min(1).max(100).regex(/^[a-zA-Z\s]+$/)
});

app.post('/api/register', async (req, res) => {
  const result = registerSchema.safeParse(req.body);
  if (!result.success) {
    return res.status(400).json({ errors: result.error.issues });
  }
  const { email, password, name } = result.data;

  // Check for existing user
  const existing = await db.users.findUnique({ where: { email } });
  if (existing) {
    return res.status(409).json({ error: 'Email already registered' });
  }

  const user = await db.users.create({
    data: { email, password: await hashPassword(password), name }
  });
  res.json({ user: { id: user.id, email: user.email, name: user.name } });
});
```
3. Hardcoded Secrets
What AI does wrong:
```javascript
// AI generates this during development and it stays
const stripe = new Stripe('sk_live_xxxxx...');
const supabase = createClient(
  'https://xxx.supabase.co',
  'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...'
);
```
The problem: Secrets in code get committed to git and exposed.
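The fix is to load secrets from the environment at startup. A minimal sketch (the `requireEnv` helper and variable names are illustrative; any dotenv-style loader works the same way):

```javascript
// Fail fast at startup if a required secret is missing, instead of
// discovering it on the first request.
function requireEnv(name) {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage — keys live in .env (gitignored) or the deployment environment:
// const stripe = new Stripe(requireEnv('STRIPE_SECRET_KEY'));
// const supabase = createClient(requireEnv('SUPABASE_URL'), requireEnv('SUPABASE_ANON_KEY'));
```

Pair this with a pre-commit secret scanner so anything that slips back into source gets caught before it reaches git.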
4. SQL Injection

What AI does wrong:

```javascript
// Search endpoint
app.get('/api/search', async (req, res) => {
  const { query } = req.query;
  const results = await db.$queryRawUnsafe(
    `SELECT * FROM products WHERE name LIKE '%${query}%'`
  );
  res.json(results);
});
```
The problem: Direct string interpolation allows SQL injection.
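The fix is to keep the SQL text and the user-supplied values separate, so the driver binds values rather than splicing them into the query. A sketch of the pattern (`buildSearch` is an illustrative helper; with Prisma specifically, the tagged-template form of `$queryRaw` parameterizes interpolated values for you):

```javascript
// Placeholder-style parameterization (`?` syntax as used by drivers such as
// mysql2 or better-sqlite3 — adjust to your driver's placeholder style).
function buildSearch(query) {
  return {
    sql: 'SELECT * FROM products WHERE name LIKE ?',
    // The value travels separately; the driver binds it, so quotes and
    // semicolons inside `query` are just data, never SQL.
    params: [`%${query}%`]
  };
}
```

Whatever the attacker types, the SQL text never changes; only the bound parameter does.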
5. Cross-Site Scripting (XSS)

The fix:

```jsx
import DOMPurify from 'dompurify';

function Comment({ comment }) {
  const sanitizedContent = DOMPurify.sanitize(comment.content);
  return (
    <div
      className="comment"
      dangerouslySetInnerHTML={{ __html: sanitizedContent }}
    />
  );
}
```

Better: avoid rendering user HTML entirely:

```jsx
function Comment({ comment }) {
  return <div className="comment">{comment.content}</div>;
}
```
Quick Fix Checklist
Add Authorization to Every Endpoint
Every API endpoint that accesses data needs to verify:
Is the user authenticated?
Does this user have permission to access THIS specific resource?
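Those two checks can be factored into a helper that every handler calls before touching data — a sketch, assuming the same `req.user` shape as the IDOR example above (`canAccess` is an illustrative name):

```javascript
// Returns true only if the authenticated user owns the resource or is an admin.
function canAccess(user, resourceOwnerId) {
  if (!user) return false; // not authenticated
  return user.id === resourceOwnerId || user.isAdmin === true;
}
```

In a handler this becomes `if (!canAccess(req.user, doc.userId)) return res.status(403).json({ error: 'Forbidden' });` — one line, on every endpoint that returns data.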
Add Input Validation Everywhere
Use Zod or Joi to validate every input:
Request bodies
Query parameters
URL parameters
Move Secrets to Environment Variables
Search for hardcoded strings that look like API keys. Move them all to .env files and use process.env.
Add Rate Limiting to Auth Endpoints
Login, registration, password reset, and 2FA endpoints need rate limiting. 5-10 attempts per 15 minutes is reasonable.
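A fixed-window counter is enough to illustrate the idea (in production you would typically use a library such as express-rate-limit, ideally backed by Redis so limits survive restarts; `createRateLimiter` here is an illustrative sketch):

```javascript
// Minimal in-memory fixed-window rate limiter.
function createRateLimiter({ limit, windowMs }) {
  const hits = new Map(); // key (e.g. IP or email) -> { count, windowStart }
  return function allow(key, now = Date.now()) {
    const entry = hits.get(key);
    if (!entry || now - entry.windowStart >= windowMs) {
      hits.set(key, { count: 1, windowStart: now });
      return true;
    }
    entry.count += 1;
    return entry.count <= limit;
  };
}
```

Keyed on IP (or IP + email for login attempts), an `allow()` that returns false maps to a 429 response.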
Use Parameterized Queries
Never concatenate user input into queries. Use your ORM’s query builder or parameterized queries.
FAQ
Why does AI keep making these mistakes?
AI optimizes for code that runs, not code that’s secure. Training data includes vulnerable code. The model has no concept of “attacker” or “exploit” when generating solutions.
Will future AI models fix these issues?
Somewhat. Newer models are better at security. But the fundamental issue remains—AI generates what works, not what’s safe. Security will always require human review.
Should I rewrite all AI-generated code?
No. Fix the specific vulnerabilities. Most AI code is fine. Focus on auth, data access, and input handling—the areas where security matters.
Conclusion
Key Takeaways
IDOR is the most common AI vulnerability—always verify resource ownership
Input validation must happen server-side with proper schemas
Hardcoded secrets need pre-commit hooks to prevent commits
JWT requires proper verification, not just decoding
SQL injection prevention means parameterized queries, always
Rate limiting is essential on auth endpoints
XSS prevention: sanitize HTML or avoid rendering user HTML entirely