AI coding tools are not security engineers. They generate code that satisfies the functional requirements they're given — without reasoning about what an attacker might do with it. After reviewing hundreds of AI-generated codebases, we see the same eight vulnerabilities appear with remarkable consistency.
Hardcoded secrets are the most common finding, by a significant margin: API keys, database connection strings, JWT secrets, and OAuth credentials embedded directly in source code. AI tools generate working code, and the fastest way to make a database connection work in a demo is to put the credentials inline. Those credentials then get committed to version control, where they persist in git history even after being 'removed'. Remediation: environment variables, secret scanning in CI, git history scrubbing.
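A minimal sketch of the environment-variable approach, in Python. The variable name `DATABASE_URL` is illustrative; use whatever your deployment defines. The key behaviour is failing fast at startup rather than limping along with a missing secret:

```python
import os

def load_database_url() -> str:
    # Read the secret from the environment, never from source code.
    url = os.environ.get("DATABASE_URL")
    if url is None:
        # Fail fast: a missing secret should stop the app at boot,
        # not surface as a confusing runtime error later.
        raise RuntimeError("DATABASE_URL is not set; refusing to start")
    return url
```

Pair this with a secret scanner in CI (e.g. gitleaks or truffleHog) so a hardcoded credential is caught before it ever lands in history.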
AI-generated auth middleware protects only the routes it's explicitly applied to; direct URL access to any unprotected route bypasses it entirely. We regularly find admin panels, API endpoints, and data export functions that require authentication in the UI but are accessible without a valid session via a direct request. Remediation: server-side route protection, not UI-layer hiding.
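The safer pattern is deny-by-default: every route requires a session unless it's on an explicit public allowlist. A framework-agnostic sketch (route names and session shape are hypothetical):

```python
# Routes reachable without a session; everything else is protected.
PUBLIC_ROUTES = {"/login", "/health"}

def is_authorised(path: str, session: "dict | None") -> bool:
    # Deny by default: a route is only open if explicitly listed,
    # so a newly added endpoint is protected before anyone remembers
    # to apply middleware to it.
    if path in PUBLIC_ROUTES:
        return True
    return session is not None and session.get("user_id") is not None
```

The inversion matters: with opt-in middleware, forgetting a route creates a hole; with deny-by-default, forgetting a route merely locks it until someone allowlists it.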
AI tools generate resource endpoints using predictable identifiers, often sequential integers or UUIDs passed in query parameters. Without explicit access control checks at the data layer, any authenticated user can access any resource by manipulating the identifier: User A reads User B's data by changing ?id=123 to ?id=124. This is the classic insecure direct object reference (IDOR). Remediation: ownership checks at the query level, not the UI level.
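One way to enforce ownership at the query level is to make it part of the WHERE clause, so an un-owned row simply never comes back. A sketch using Python's stdlib sqlite3 (table and column names are hypothetical):

```python
import sqlite3

def fetch_document(conn: sqlite3.Connection, doc_id: int, user_id: int):
    # Ownership is enforced in the query itself: the row is only
    # returned if it belongs to the requesting user.
    row = conn.execute(
        "SELECT id, title FROM documents WHERE id = ? AND owner_id = ?",
        (doc_id, user_id),
    ).fetchone()
    if row is None:
        # Same error whether the document doesn't exist or isn't
        # theirs; don't leak which, or you enable enumeration.
        raise LookupError("document not found")
    return row
```

Putting the check in the query, rather than fetching first and checking afterwards, means there is no window where a forgotten `if` exposes the record.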
AI-generated API responses often return entire database records when only a subset of fields is needed. The frontend displays three fields. The API returns thirty. The other twenty-seven, including email addresses, internal notes, and payment references, are visible in the network tab to any user who opens developer tools. Remediation: field projection at the query level, response serialisation with explicit field allowlists.
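An explicit allowlist serialiser is a few lines. The field names here are hypothetical; the point is that a new database column is invisible to clients until someone deliberately adds it to the allowlist:

```python
# Only these fields ever leave the server. A newly added column
# is private by default until someone deliberately exposes it.
PUBLIC_FIELDS = {"id", "name", "avatar_url"}

def serialise(record: dict) -> dict:
    # Allowlist, not blocklist: unknown keys are dropped, not passed through.
    return {k: v for k, v in record.items() if k in PUBLIC_FIELDS}
```

A blocklist ("strip the password field") fails open when the schema grows; an allowlist fails closed.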
AI tools generate forms and API handlers that process whatever input they receive. Without server-side validation, injection attacks become possible: SQL, NoSQL, command injection. File upload handlers that don't validate type or size. Search endpoints that don't sanitise query strings. Remediation: server-side validation on every input, parameterised queries, strict type enforcement.
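A sketch of the two remediations together, validation plus a parameterised query, again using stdlib sqlite3 (the table and the 100-character limit are illustrative assumptions):

```python
import sqlite3

def search_users(conn: sqlite3.Connection, term: str):
    # Server-side validation: enforce type and a sane length bound
    # before the value goes anywhere near the database.
    if not isinstance(term, str) or not term or len(term) > 100:
        raise ValueError("invalid search term")
    # Parameterised query: the driver treats the value as data,
    # so "'; DROP TABLE users;--" is literal text, not SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name LIKE ?", (f"%{term}%",)
    ).fetchall()
```

The rule of thumb: never build SQL by string concatenation with user input, even once, even for "internal" endpoints.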
AI-generated webhook handlers receive payloads and process them — without verifying that the payload came from the expected source. An attacker who knows your webhook endpoint can send fabricated payloads that trigger unintended actions. Remediation: signature verification on every inbound webhook, using the signing secrets provided by the sending service.
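The verification itself is short. A sketch using HMAC-SHA256, which is the scheme most webhook providers use in some form (the exact header name and encoding vary by service; check your provider's docs):

```python
import hashlib
import hmac

def verify_signature(secret: bytes, payload: bytes, signature_hex: str) -> bool:
    # Recompute the signature over the raw request body with the
    # shared signing secret from the sending service.
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    # compare_digest is constant-time, so attackers can't recover
    # the signature byte-by-byte via timing differences.
    return hmac.compare_digest(expected, signature_hex)
```

Two details that AI-generated handlers routinely get wrong: verify the raw bytes of the body before parsing it, and some providers additionally include a timestamp in the signed material to block replay attacks.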
AI-generated authentication endpoints, password reset flows, and API handlers rarely include rate limiting. Brute-force attacks, credential stuffing, and enumeration attacks rely on the ability to make thousands of requests without triggering a block. Remediation: rate limiting at the infrastructure level and application level, account lockout after failed attempts, CAPTCHA on sensitive flows.
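At the application level, even a simple fixed-window counter blunts brute-force attempts. A minimal in-memory sketch (production systems typically use a shared store such as Redis so limits hold across instances, and evict expired windows):

```python
import time
from collections import defaultdict

class FixedWindowLimiter:
    def __init__(self, limit: int, window_seconds: int):
        self.limit = limit            # max requests per window
        self.window = window_seconds  # window length in seconds
        self.counts = defaultdict(int)

    def allow(self, key: str, now: float = None) -> bool:
        # Bucket requests by (client key, window index); the counter
        # resets naturally when time rolls into the next window.
        now = time.time() if now is None else now
        bucket = (key, int(now // self.window))
        self.counts[bucket] += 1
        return self.counts[bucket] <= self.limit
```

Key by IP for anonymous endpoints and by account for login flows, so an attacker can't dodge the limit by rotating source addresses against a single account.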
AI-generated error handling returns useful debug information — stack traces, database query text, internal paths, user identifiers — in API error responses. The same information appears in application logs alongside the data that triggered the error. Both are information disclosure vulnerabilities. Remediation: generic error messages to clients, structured internal logging with PII scrubbing, log access controls.
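The fix is a single choke point that logs the full detail server-side and returns only a generic message plus a correlation id to the client. A sketch (the response shape is illustrative):

```python
import logging
import uuid

logger = logging.getLogger("app")

def safe_error_response(exc: Exception) -> dict:
    # Full detail stays in server-side logs, findable by this id.
    error_id = str(uuid.uuid4())
    logger.error("unhandled error %s: %r", error_id, exc)
    # The client sees nothing about internals: no stack trace,
    # no query text, no paths. Just an opaque correlation id.
    return {"error": "Internal server error", "id": error_id}
```

The correlation id gives support staff a way to find the real stack trace without ever shipping it to the browser, and the logging call is where PII scrubbing belongs.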
Get our free AI Code Production Readiness Checklist — assess your codebase across six dimensions before investors or enterprise clients find the gaps.