What Is Function Calling (Tool Use)?
Function calling (also called tool use) lets an LLM invoke external tools by emitting structured requests that the host application executes on its behalf. This page explains how the mechanism works and the security implications it carries for AI-coded applications.
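In practice, the model does not run code itself: it emits a JSON-style tool call (a tool name plus arguments), and the application parses that call, validates it, and dispatches it to a real function. Below is a minimal sketch of that dispatch loop; the `get_weather` tool, the registry names, and the JSON shape are illustrative assumptions, not any specific vendor's API.

```python
import json

def get_weather(city: str) -> dict:
    # Hypothetical tool implementation, stubbed for demonstration.
    return {"city": city, "forecast": "sunny"}

# Only explicitly registered tools are callable, and only with
# expected argument names -- the model's output is untrusted input.
TOOLS = {"get_weather": get_weather}
ALLOWED_ARGS = {"get_weather": {"city"}}

def dispatch(tool_call_json: str) -> dict:
    """Parse a model-emitted tool call and invoke the matching function.

    Security notes relevant to AI-coded apps:
    - never eval() or getattr() on raw model output; use a fixed registry
    - reject unknown tools and unexpected argument names before invoking
    """
    call = json.loads(tool_call_json)
    name = call.get("name")
    if name not in TOOLS:
        raise ValueError(f"unknown tool: {name}")
    args = call.get("arguments", {})
    if set(args) - ALLOWED_ARGS[name]:
        raise ValueError("unexpected arguments")
    return TOOLS[name](**args)

# Example: a tool call as the model might emit it.
result = dispatch('{"name": "get_weather", "arguments": {"city": "Oslo"}}')
```

The key design choice is the allow-list: because tool calls originate from model output (which can be steered by prompt injection), the dispatcher treats them like any other untrusted request rather than executing them directly.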