15 Security Practices of the Vibe Coder (A Penetration Tester's Dream)

I spend a lot of time looking at vibe-coded applications. The pattern recognition kicks in fast now. Within the first few minutes, I can usually predict exactly which security holes I’ll find.

Not because I’m particularly clever. Because vibe coders make the same mistakes. Every. Single. Time.

Vibe Coding: A development approach where AI generates code from natural-language prompts, prioritizing shipping speed over traditional engineering practices like security reviews.

Here are the 15 security practices that make penetration testers reach for the champagne.

1. The Authentication Setup

“The AI said this was secure.”

It’s a JWT stored in localStorage. The secret is literally the string “secret”. The token expires in 100 years.

What actually happens: Attackers grab tokens via XSS. They forge their own tokens because your secret is in every JWT tutorial ever written. Sessions last longer than your company will.

The fix: Use httpOnly cookies. Generate a real secret. Set reasonable expiration times.

2. The .env File

Committed to GitHub on day 1. You didn’t notice for 3 months. Someone in Russia did. Your AWS bill noticed too.

What actually happens: Bots continuously scan GitHub for exposed credentials. They find yours within hours. By the time you notice, your infrastructure has been mining crypto for weeks.

The fix: Add .env to .gitignore before your first commit. Use a secret scanner in CI. Rotate any keys that ever touched a repo.

3. SQL Injection

“I’m using an ORM so I’m safe.”

You are not safe. You raw dogged one query “just for this edge case.” That edge case is your entire auth flow.

// The edge case
const user = await db.query(
  `SELECT * FROM users WHERE email = '${req.body.email}'`
);

What actually happens: That one raw query becomes the entry point. Attackers dump your database, modify records, or escalate to shell access.

The fix: Zero raw queries. If you absolutely must, use parameterized queries. But you don’t must.
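Here is the same edge case, parameterized. The `$1` placeholder syntax is node-postgres style (other drivers use `?`); the key point is that user input travels in the values array and never gets spliced into the SQL string.

```javascript
// Parameterized replacement for the raw-string query above.
// `db.query(...)` is assumed to accept a { text, values } object, as pg does.
function findUserByEmailQuery(email) {
  return {
    text: 'SELECT * FROM users WHERE email = $1',
    values: [email], // input stays data, never becomes SQL
  };
}

// Usage (hypothetical): const user = await db.query(findUserByEmailQuery(req.body.email));
```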

4. Password Storage

MD5. No salt. “It’s fine it’s just a side project.”

Your users use the same password everywhere. You’ve compromised their bank accounts.

What actually happens: Rainbow tables crack MD5 instantly. No salt means identical passwords have identical hashes. Attackers get your whole user table in one lookup.

The fix: bcrypt with a cost factor of at least 10. Let the library handle salting. Never roll your own.

5. API Keys in the Frontend

“It’s a public API anyway.”

It’s not a public API. That’s your Stripe secret key. In a JavaScript bundle. Minified so nobody will find it.

They will find it.

What actually happens: Anyone with browser devtools extracts your keys. Automated tools scan JS bundles. Your “hidden” keys are public.

The fix: Keep secrets server-side. Use backend endpoints to proxy sensitive operations. Never trust the client.

6. CORS Policy

app.use(cors({ origin: '*' }));

“I’ll fix it later.”

You will not fix it later. Every script kiddie thanks you for your service.

What actually happens: Any website can make authenticated requests to your API. Cross-origin attacks become trivial. Your users’ sessions are free real estate.

The fix: Specify exact allowed origins. Use credentials: true only with specific origins. Test that it actually blocks what it should.
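An exact allowlist can be expressed as a function in the shape the cors package's `origin` option accepts; the origins listed here are made-up examples.

```javascript
// Exact origins only -- no wildcards, no substring matching.
const allowedOrigins = new Set([
  'https://app.example.com',
  'https://admin.example.com',
]);

function corsOrigin(origin, callback) {
  // Same-origin and non-browser requests arrive with no Origin header.
  if (!origin || allowedOrigins.has(origin)) {
    return callback(null, true);
  }
  return callback(new Error('Origin not allowed'), false);
}

// Usage (hypothetical): app.use(cors({ origin: corsOrigin, credentials: true }));
```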

7. Input Validation

“The frontend validates it.”

The frontend is a suggestion. Anyone with devtools is an admin now. You’ve built a democracy.

What actually happens: Attackers bypass your beautiful React form validation entirely. They send whatever they want directly to your API. Your database accepts it gratefully.

The fix: Validate on the server. Every input. Every time. The frontend validation is just for UX.
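A minimal server-side check for a signup payload might look like this; the field names and rules are illustrative, and a schema library like zod or joi scales better than hand-rolled checks.

```javascript
// Runs on the server for every request, regardless of what the
// frontend claims it already validated.
function validateSignup(body) {
  const errors = [];
  if (typeof body.email !== 'string' ||
      !/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(body.email)) {
    errors.push('email');
  }
  if (typeof body.password !== 'string' || body.password.length < 12) {
    errors.push('password');
  }
  return { ok: errors.length === 0, errors };
}
```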

8. Rate Limiting

What rate limiting?

Your login endpoint accepts 10,000 requests per second. Someone is brute forcing it right now. They started reading this article and had the password by point 4.

What actually happens: Credential stuffing attacks run through your entire breached password database in minutes. No alerts. No blocks. Just successful logins.

The fix: Rate limit authentication endpoints aggressively. Use progressive delays. Consider account lockouts with notification.
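A toy fixed-window limiter shows the idea; in production use express-rate-limit or your gateway, and track by account as well as IP. The window and limit values here are illustrative.

```javascript
// In-memory fixed-window counter, keyed by IP.
const attempts = new Map();
const WINDOW_MS = 15 * 60 * 1000; // 15-minute window
const MAX_ATTEMPTS = 5;

function allowLogin(ip, now = Date.now()) {
  const entry = attempts.get(ip);
  if (!entry || now - entry.start > WINDOW_MS) {
    attempts.set(ip, { start: now, count: 1 }); // new window
    return true;
  }
  entry.count += 1;
  return entry.count <= MAX_ATTEMPTS; // block once over the limit
}
```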

9. The Admin Panel

Lives at /admin. No auth. “Nobody will guess the URL.”

It’s literally /admin. You didn’t even try /dashboard.

What actually happens: Automated scanners check /admin, /administrator, /wp-admin, and about 500 other common paths. Yours is found immediately.

The fix: Require authentication. Use non-obvious paths. Better yet, put admin functionality behind VPN or IP allowlists.
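The authentication part can be a middleware sketch like this (Express-style; `req.user` is assumed to be populated by your auth layer earlier in the chain).

```javascript
// Guard every admin route; returning 404 rather than 403 avoids
// confirming to scanners that the panel exists at all.
function requireAdmin(req, res, next) {
  if (!req.user || req.user.role !== 'admin') {
    return res.status(404).end();
  }
  next();
}

// Usage (hypothetical): app.use('/admin', requireAdmin, adminRouter);
```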

10. Dependencies

847 packages. 12 critical vulnerabilities. npm audit? Never heard of her.

“If it was really bad someone would fix it.”

What actually happens: Known exploits get weaponized. That prototype pollution vulnerability in lodash becomes your RCE. Supply chain attacks slip through unnoticed.

The fix: Run npm audit. Actually read the output. Update or replace vulnerable packages. Consider Snyk or Socket for automated monitoring.

11. Error Messages

{
  "error": "Invalid password for user john@gmail.com",
  "hint": "No account exists with that email",
  "extra": "Your 2FA code was almost correct"
}

You’ve built an enumeration service.

What actually happens: Attackers learn which emails exist. They learn password reset status. They learn 2FA timing. Every error message is reconnaissance.

The fix: Generic error messages. “Invalid credentials.” For everything. Log the details server-side.
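Concretely: one response shape for every auth failure, with the specifics going only to the server log. The `logger` parameter here is a stand-in for whatever logging you actually use.

```javascript
// Same body whether the email is unknown, the password is wrong,
// or the 2FA code failed -- the client learns nothing either way.
function authFailure(logger, detail) {
  logger(`auth failure: ${detail}`); // server-side only
  return { status: 401, body: { error: 'Invalid credentials' } };
}
```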

12. Session Management

Sessions never expire. Logout doesn’t invalidate anything. The “remember me” checkbox does nothing. Everyone is remembered forever.

What actually happens: Stolen sessions work indefinitely. That token from the coffee shop WiFi attack? Still valid. That session from the former employee? Still works.

The fix: Reasonable session timeouts. Actually invalidate sessions on logout. Maintain a session allowlist you can revoke.

13. File Uploads

Accepts any file type. No size limit. Stores directly in /public.

Someone uploaded a PHP shell 3 weeks ago. It’s still there. It’s thriving.

What actually happens: Attackers upload executable files. They access them via direct URL. They now have shell access to your server.

The fix: Validate file types by content, not extension. Set size limits. Store outside webroot. Generate random filenames. Use a CDN.
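"By content, not extension" means sniffing magic bytes. A minimal sketch for two image types, using their standard header signatures; a real app should lean on a library like file-type and still combine this with the other fixes above.

```javascript
// Standard file signatures: PNG (\x89PNG) and JPEG (\xFF\xD8\xFF).
const SIGNATURES = [
  { type: 'image/png',  bytes: [0x89, 0x50, 0x4e, 0x47] },
  { type: 'image/jpeg', bytes: [0xff, 0xd8, 0xff] },
];

function sniffImageType(buffer) {
  for (const { type, bytes } of SIGNATURES) {
    if (bytes.every((b, i) => buffer[i] === b)) return type;
  }
  return null; // reject anything we can't positively identify
}
```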

14. HTTPS

“Localhost doesn’t need HTTPS.”

You deployed to production on HTTP. Forgot to change it. Passwords flying through the internet in plaintext. Like postcards.

What actually happens: Anyone on the network path can read credentials. Coffee shop WiFi becomes a credential harvesting operation. MITM attacks are trivial.

The fix: HTTPS everywhere. Use HSTS. Redirect HTTP to HTTPS. Let’s Encrypt is free.
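The redirect-plus-HSTS part fits in one Express-style middleware sketch; `req.secure` is Express's HTTPS flag, and the header value is the common one-year form.

```javascript
// Redirect plaintext requests, and tell browsers to never try HTTP again.
function forceHttps(req, res, next) {
  if (req.secure) {
    res.setHeader('Strict-Transport-Security',
                  'max-age=31536000; includeSubDomains');
    return next();
  }
  res.redirect(301, `https://${req.headers.host}${req.originalUrl}`);
}

// Usage (hypothetical): app.use(forceHttps);
```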

15. The Security Audit

“We should do a security audit before launch.”

You launched 6 months ago. The audit is finding things. So many things. The auditor is crying.

What actually happens: Every issue on this list gets documented. The remediation timeline extends past your next funding round. The audit report becomes a liability.

The fix: Security reviews during development, not after launch. Automated scanning in CI. Fix issues before they multiply.


Sleep well tonight.

Someone is testing your endpoints.

They’re not filing a bug report.

FAQ

How quickly can attackers find exposed .env files on GitHub?

Automated bots continuously scan GitHub for patterns matching API keys and credentials. Studies show exposed AWS keys get exploited within minutes of being pushed. GitHub’s secret scanning helps, but by the time you get the alert, attackers likely already have the keys.

Is frontend validation actually useless?

Frontend validation is valuable for user experience but provides zero security. Any HTTP client can send requests directly to your API, bypassing all frontend code. Server-side validation is the only validation that counts for security purposes.

Why do AI code generators create insecure patterns?

AI models learn from existing code, which includes millions of tutorials, Stack Overflow answers, and quick demos that prioritize “getting it working” over security. The patterns that appear most frequently in training data aren’t necessarily the secure ones.

What's the minimum security setup for a vibe-coded app?

At minimum: parameterized queries, bcrypt passwords, server-side validation, rate limiting on auth endpoints, HTTPS, httpOnly cookies for sessions, environment variables for secrets, and npm audit in CI. This covers the highest-impact vulnerabilities.

Key Takeaways

  • JWT secrets like “secret” and 100-year expiration are discovered immediately through predictable patterns
  • .env files in Git repos get found by automated scanners within hours of commit
  • One raw SQL query in “just this edge case” becomes the exploitation entry point
  • MD5 password hashes with no salt crack instantly via rainbow tables
  • Frontend JavaScript bundles expose every API key regardless of minification
  • Wildcard CORS policies enable cross-origin attacks from any website
  • Frontend validation provides UX only - attackers bypass it entirely
  • Login endpoints without rate limiting fall to credential stuffing in minutes
  • Generic error messages prevent account enumeration reconnaissance
  • File uploads accepting any type become remote code execution vectors
