AI Coding Security Glossary

400+ security and AI development terms explained for developers. From prompt injection to zero trust, every term you need to know for building secure AI-coded applications.

What Are Security Headers? Security headers explained for developers. The essential HTTP headers every web application needs and how to configure them properly.
What Is a Context Window? Context windows explained for developers. How token limits affect AI code generation quality, and the security implications.
What Is a CVE (Common Vulnerabilities and Exposures)? CVE explained for developers. How CVE identifiers track security vulnerabilities and why they matter for AI-generated code dependencies.
What Is a Foundation Model? Foundation models explained for developers. How base AI models are adapted for coding tools and their impact on code security.
What Is a Hallucinated Dependency? AI hallucinated dependencies explained. How LLMs invent non-existent packages and how attackers exploit this for supply chain attacks.
What Is a Supply Chain Attack? Supply chain attacks explained. How compromised dependencies and hallucinated packages threaten AI-coded applications.
What Is a Vector Database? Vector databases explained for developers. How vector storage powers RAG applications and security considerations for AI-coded implementations.
What Is a WAF (Web Application Firewall)? WAF explained for developers. How web application firewalls protect AI-coded apps from common attacks, and their limitations.
What Is a Zero-Day Vulnerability? Zero-day vulnerabilities explained for developers. How unknown security flaws threaten AI-coded apps, and defense strategies.
What Is AI Code Generation? AI code generation explained. How LLMs generate code, the security implications, and best practices for using AI-generated code safely.
What Is AI Hallucination? AI hallucination explained for developers. How LLMs generate plausible but incorrect code, and the security implications.
What Is an AI Agent? AI agents explained for developers. How autonomous AI coding agents work, their security risks, and safe deployment practices.
What Is an Embedding? Embeddings explained for developers. How vector representations power AI applications, and security considerations for embedding pipelines.
What Is an LLM (Large Language Model)? LLMs explained for developers. How large language models power AI coding tools and the security implications for generated code.
What Is an SBOM (Software Bill of Materials)? SBOM explained for developers. How software bills of materials track components in AI-generated applications for security and compliance.
What Is API Key Exposure? API key exposure explained. How API keys leak in AI-generated code, the real costs of exposed credentials, and how to manage secrets properly.
What Is API Key Rotation? API key rotation explained for developers. How regular credential rotation limits breach impact in AI-coded applications.
What Is Broken Access Control? Broken access control explained. The #1 OWASP vulnerability, why AI-generated apps are especially prone, and how to implement proper authorization.
What Is Clickjacking? Clickjacking explained for developers. How invisible iframe attacks trick users and why AI-coded apps often lack frame protection.
What Is Content Security Policy (CSP)? Content Security Policy explained for developers. How CSP headers prevent XSS and other injection attacks in web applications.
What Is CORS (Cross-Origin Resource Sharing)? CORS explained for developers. How cross-origin resource sharing works, common misconfigurations in AI-generated code, and secure CORS setup.
What Is CSRF (Cross-Site Request Forgery)? CSRF explained for developers. Learn how cross-site request forgery attacks work, why AI-generated apps are vulnerable, and how to implement CSRF tokens.
What Is CVSS (Common Vulnerability Scoring System)? CVSS explained for developers. How vulnerability severity scores work and how to prioritize security fixes in AI-generated code.
What Is CWE (Common Weakness Enumeration)? CWE explained for developers. How weakness categories help you understand and prevent vulnerability types in AI-generated code.
What Is DAST (Dynamic Application Security Testing)? DAST explained for developers. How dynamic security testing finds runtime vulnerabilities in AI-generated web applications.
What Is Data Poisoning? Data poisoning explained for developers. How training data manipulation affects AI code generation and introduces systematic vulnerabilities.
What Is Dependency Confusion? Dependency confusion explained for developers. How attackers exploit package manager resolution to inject malicious code into AI projects.
What Is DevSecOps? DevSecOps explained for developers. How to integrate security into your CI/CD pipeline and why it matters for AI-coded applications.
What Is Function Calling (Tool Use)? Function calling in AI models explained. How LLMs invoke external tools and the security implications for AI-coded applications.
What Is HSTS (HTTP Strict Transport Security)? HSTS explained for developers. How HTTP Strict Transport Security prevents downgrade attacks and why AI-coded apps often miss it.
What Is IDOR (Insecure Direct Object Reference)? IDOR explained for developers. How insecure direct object references let attackers access other users' data by changing IDs in requests.
What Is Input Validation? Input validation explained for developers. How to properly validate user input to prevent injection attacks, data corruption, and application crashes.
What Is Insecure Deserialization? Insecure deserialization explained for developers. How untrusted data deserialization leads to RCE in AI-generated applications.
What Is JWT (JSON Web Token)? JWT explained for developers. How JSON Web Tokens work for authentication, common security mistakes, and best practices for AI-coded apps.
What Is Mass Assignment? Mass assignment explained for developers. How auto-binding user input to model fields creates privilege escalation in AI-generated code.
What Is MCP (Model Context Protocol)? MCP explained for developers. How the Model Context Protocol connects AI agents to tools and data sources, with security considerations.
What Is OAuth 2.0? OAuth 2.0 explained for developers. How the authorization framework works, common implementation mistakes in AI-generated code, and secure patterns.
What Is OIDC (OpenID Connect)? OpenID Connect explained for developers. How OIDC extends OAuth 2.0 for authentication, and common AI-generated implementation mistakes.
What Is Path Traversal? Path traversal explained for developers. How directory traversal attacks exploit file handling in AI-generated code and how to prevent them.
What Is Penetration Testing? Penetration testing explained for developers. How pentests find real-world vulnerabilities in AI-generated applications before attackers do.
What Is Privilege Escalation? Privilege escalation explained for developers. How attackers gain unauthorized access levels in AI-generated applications.
What Is Prompt Engineering? Prompt engineering explained for developers. How to write effective prompts for AI code generation with security-focused techniques.
What Is Prompt Injection? Prompt injection explained for developers. How attackers manipulate AI models through crafted inputs and how to defend against it.
What Is Prototype Pollution? Prototype pollution explained for developers. How JavaScript prototype chain manipulation creates vulnerabilities in AI-generated code.
What Is RAG (Retrieval-Augmented Generation)? RAG explained for developers. How retrieval-augmented generation works, security considerations, and implementation best practices.
What Is Rate Limiting? Rate limiting explained for developers. How to protect your API endpoints from abuse, brute-force attacks, and resource exhaustion.
What Is RCE (Remote Code Execution)? RCE explained for developers. How remote code execution attacks work, why AI-generated code is vulnerable, and how to prevent them.
What Is ReDoS (Regular Expression Denial of Service)? ReDoS explained for developers. How catastrophic regex backtracking causes denial of service in AI-generated input validation.
What Is SAST (Static Application Security Testing)? SAST explained for developers. How static analysis tools find vulnerabilities in source code without running the application.
What Is SCA (Software Composition Analysis)? SCA explained for developers. How software composition analysis finds vulnerable dependencies in AI-generated projects.
What Is Secret Scanning? Secret scanning explained for developers. How automated tools detect leaked API keys, passwords, and tokens in AI-generated code.
What Is Session Fixation? Session fixation explained for developers. How session ID attacks work in AI-coded apps and how to prevent them with proper session management.
What Is SQL Injection? SQL injection explained for developers. Learn how SQL injection works in AI-generated code and how to prevent it with parameterized queries.
What Is SSRF (Server-Side Request Forgery)? SSRF explained for developers. How server-side request forgery lets attackers access internal services through your application.
What Is Threat Modeling? Threat modeling explained for developers. A structured approach to identifying security risks in AI-generated application architectures.
What Is Typosquatting (Package Squatting)? Typosquatting in package managers explained. How malicious packages with similar names target AI-generated dependency installs.
What Is Vibe Coding? Vibe coding explained. What vibe coding means, how it works with AI tools like Cursor and Claude Code, and the security implications developers need to know.
What Is XSS (Cross-Site Scripting)? XSS explained for developers. Learn what XSS means, how it affects AI-coded apps, and how to prevent it.
What Is Zero Trust Security? Zero trust explained for developers. How the never-trust-always-verify model protects AI-generated applications from internal and external threats.
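The SQL injection entry above names parameterized queries as the fix. A minimal sketch of the difference, using Python's standard sqlite3 module and a throwaway in-memory database (table and payload are illustrative, not from any real application):

```python
import sqlite3

# Throwaway in-memory database for the demo.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users (email) VALUES ('alice@example.com')")

user_input = "alice@example.com' OR '1'='1"  # a classic injection payload

# Vulnerable: string concatenation lets the payload rewrite the query,
# turning it into ... WHERE email = 'alice@example.com' OR '1'='1'.
vulnerable = conn.execute(
    "SELECT id FROM users WHERE email = '" + user_input + "'"
).fetchall()

# Safe: the ? placeholder sends user_input as data, never as SQL.
safe = conn.execute(
    "SELECT id FROM users WHERE email = ?", (user_input,)
).fetchall()

print(len(vulnerable))  # 1 -- the OR '1'='1' clause matched every row
print(len(safe))        # 0 -- no email equals the literal payload string
```

The same placeholder pattern applies in any driver; only the placeholder syntax (`?`, `%s`, `$1`) varies.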
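The rate limiting entry mentions protecting endpoints from abuse and brute force; the classic mechanism is a token bucket. A minimal in-process sketch (the `TokenBucket` class and its parameters are illustrative; production systems usually rate-limit in a shared store such as Redis):

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: refills `rate` tokens/second, bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Allow bursts of 3 requests, then refill at 1 request/second.
bucket = TokenBucket(rate=1.0, capacity=3)
results = [bucket.allow() for _ in range(5)]
print(results)  # first 3 allowed, the back-to-back 4th and 5th denied
```

A denied request would typically map to an HTTP 429 response, keyed per user or per IP.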
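The secret scanning entry describes tools that detect leaked keys in code. At their core these are pattern matchers; a toy sketch with two illustrative regexes (real scanners such as gitleaks or GitHub secret scanning ship large, provider-maintained rule sets, and the sample key below is fake):

```python
import re

SECRET_PATTERNS = {
    # AWS access key IDs: "AKIA" followed by 16 uppercase alphanumerics.
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    # Generic: an identifier named *key/secret/token assigned a long quoted literal.
    "hardcoded_secret": re.compile(
        r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
}

def scan(source: str) -> list[str]:
    """Return the names of every pattern that matches the given source text."""
    return [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(source)]

snippet = 'openai_api_key = "sk_live_abcdefghijklmnop1234"'  # fake key for the demo
print(scan(snippet))  # ['hardcoded_secret']
```

In practice such scans run as pre-commit hooks and in CI, so a leaked key is caught before it ever reaches a remote repository.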