Why this checklist exists
Every week, we review applications built with Cursor, Claude Code, Bolt, Lovable, and other AI coding tools. The same vulnerabilities appear over and over. Not because developers are careless, but because AI coding tools consistently skip security patterns that do not directly contribute to making features work.
This checklist is compiled from hundreds of real security assessments. It covers every category of vulnerability we commonly find in AI-generated applications, organized by priority and grouped by domain.
Use it before every launch. Use it when adding major features. Use it during your quarterly security reviews. It is designed to be practical, not theoretical.
How to use this checklist
Each section is ordered by impact. If you are short on time, focus on the items at the top of each section. Items marked with (Critical) are the most commonly exploited vulnerabilities in AI-generated applications.
1. Secrets Management
Exposed secrets are the single most exploitable vulnerability class. Start here.
- [ ] (Critical) No API keys, passwords, or tokens hardcoded in source files
- [ ] (Critical) `.env` files are in `.gitignore` and never committed
- [ ] (Critical) Git history contains no committed secrets (check with `git log --all -p | grep -i secret`)
- [ ] All secrets are stored in the deployment platform's secret manager (Cloudflare, Vercel, AWS, etc.)
- [ ] Secret values are rotated on a defined schedule (at minimum, annually)
- [ ] Different secrets are used for development, staging, and production
- [ ] Service accounts use minimum necessary permissions (principle of least privilege)
- [ ] No secrets appear in client-side JavaScript bundles
What AI gets wrong
AI coding tools routinely hardcode secrets inline because their training data contains thousands of tutorials that use placeholder values like `sk_test_abc123`. When you provide real keys in context (via `.env` or conversation), the AI sometimes copies them into source files.
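A minimal guard against this pattern is to read every secret from the environment and fail fast when one is missing, so a hardcoded fallback never sneaks in. This is a sketch; the function name is illustrative:

```javascript
// Read a secret from the environment, or fail loudly at startup.
// Never write `const key = process.env.API_KEY || "sk_live_..."`.
function requireEnv(name) {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}
```

Calling this for every secret at boot turns a missing or misconfigured key into an immediate, obvious crash instead of a silently hardcoded value.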
2. Authentication
- [ ] (Critical) Passwords are hashed with bcrypt, scrypt, or Argon2 (not MD5, SHA-1, or SHA-256)
- [ ] (Critical) Session tokens are cryptographically random (`crypto.randomUUID()`, not `Math.random()`)
- [ ] (Critical) Login endpoints are rate-limited (max 5-10 attempts per minute per IP)
- [ ] Sessions are stored server-side or in signed, HttpOnly cookies
- [ ] Tokens are not stored in localStorage or sessionStorage
- [ ] Cookies use `HttpOnly`, `Secure`, and `SameSite=Strict` (or `Lax`) flags
- [ ] Sessions expire after a reasonable period (hours, not weeks)
- [ ] Logout invalidates the session server-side (not just client-side)
- [ ] Password reset tokens expire within 15-30 minutes
- [ ] Password reset tokens are single-use
- [ ] Email verification is required before granting access to sensitive features
- [ ] Account lockout after repeated failed attempts (with unlock via email)
What AI gets wrong
AI builds login pages that work, but it stores sessions in `localStorage` (XSS-accessible), uses weak token generation, and rarely implements rate limiting. Password reset flows often use predictable tokens or never expire them.
3. Authorization & Access Control
This is the most dangerous category. Authentication bugs let attackers in; authorization bugs let them access everything.
- [ ] (Critical) Every API endpoint that returns user data checks resource ownership
- [ ] (Critical) Users cannot access other users' data by changing ID parameters (IDOR protection)
- [ ] (Critical) Role-based access control is enforced server-side (not just hidden in the UI)
- [ ] Admin functionality requires admin role verification on every request
- [ ] API endpoints use 404 (not 403) for resources the user cannot access
- [ ] Bulk/list endpoints filter results by the requesting user's permissions
- [ ] File upload/download endpoints verify the user owns the file
- [ ] State-changing operations verify the user has permission for that state transition
- [ ] No privilege escalation paths (user cannot promote themselves to admin)
- [ ] GraphQL queries are scoped to the requesting user (no unrestricted nested queries)
What AI gets wrong
AI implements authentication middleware (checking that a user is logged in) but almost never implements authorization middleware (checking what that user can access). Every `requireAuth` wrapper should also include ownership verification.
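Ownership verification is a single extra check after the resource lookup. In this sketch the `db` parameter is a stand-in for your data layer; note the deliberate 404 from the checklist above, so probing IDs reveals nothing:

```javascript
// Return 404 whether the document is missing OR owned by someone else,
// so an attacker iterating IDs cannot tell which resources exist.
function getDocumentForUser(db, userId, docId) {
  const doc = db.get(docId);
  if (!doc || doc.ownerId !== userId) {
    return { status: 404, body: null };
  }
  return { status: 200, body: doc };
}
```

Without the `doc.ownerId !== userId` clause this is a textbook IDOR: any authenticated user could read any document by changing the ID in the URL.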
4. Input Validation & Injection Prevention
- [ ] (Critical) All database queries use parameterized statements (no string concatenation)
- [ ] (Critical) All user input is validated for type, length, and format before processing
- [ ] (Critical) No use of `dangerouslySetInnerHTML`, `.innerHTML`, or `eval()` with user data
- [ ] Form inputs have maximum length limits (prevent payload bombs)
- [ ] File uploads validate file type, size, and content (not just extension)
- [ ] File uploads are stored outside the web root with randomized filenames
- [ ] URL parameters and path segments are validated and sanitized
- [ ] JSON request bodies are parsed with strict schemas (Zod, Joi, etc.)
- [ ] API endpoints reject unexpected fields (no mass assignment)
- [ ] Email addresses, phone numbers, and URLs are validated with proper regex
- [ ] Numeric inputs are bounded (minimum and maximum values)
- [ ] No OS command execution with user-provided input (`exec()`, `spawn()` with user data)
- [ ] Template engines use auto-escaping (no raw/unescaped output modes with user data)
What AI gets wrong
AI generates the shortest path to a working feature. Parameterized queries are longer than string concatenation. Input validation adds lines of code that do not make the feature work. The AI skips them unless explicitly asked.
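The validation the AI skips is often just a small pure function. This sketch uses hypothetical field names for a comment endpoint; a schema library like Zod expresses the same rules more compactly, but the checks are identical: type, length, bounds, and rejection of unexpected fields (mass assignment):

```javascript
// Validate a comment payload; returns an array of error messages
// (empty array means the input is acceptable).
function validateComment(input) {
  const errors = [];
  if (typeof input.body !== "string") {
    errors.push("body must be a string");
  } else if (input.body.length === 0 || input.body.length > 2000) {
    errors.push("body must be 1-2000 characters"); // length limit blocks payload bombs
  }
  if (!Number.isInteger(input.postId) || input.postId < 1) {
    errors.push("postId must be a positive integer"); // numeric bounds
  }
  // Reject unexpected fields so a payload like { isAdmin: true }
  // can never be mass-assigned onto a database record.
  const allowed = new Set(["body", "postId"]);
  for (const key of Object.keys(input)) {
    if (!allowed.has(key)) errors.push(`unexpected field: ${key}`);
  }
  return errors;
}
```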
5. API Security
- [ ] (Critical) CORS is configured to allow only your domain(s), not `*`
- [ ] (Critical) All API endpoints have rate limiting
- [ ] POST/PUT/PATCH/DELETE requests validate the `Origin` header
- [ ] API responses include only the data the client needs (no full database records)
- [ ] Pagination is enforced on list endpoints (no unbounded queries)
- [ ] Request payload size is limited (`express.json({ limit: '1mb' })` or equivalent)
- [ ] API versioning strategy is defined (path-based or header-based)
- [ ] Webhook endpoints verify signatures (Stripe, GitHub, etc.)
- [ ] GraphQL has depth limiting and query complexity analysis
- [ ] Sensitive operations require re-authentication (password change, account deletion)
What AI gets wrong
`app.use(cors())` with no options allows all origins. AI-generated list endpoints return all records with no pagination. API responses often include every field from the database, including internal IDs, timestamps, and soft-delete flags that should not be exposed.
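The fix is an explicit allow-list. The check itself is one line; the `cors` middleware accepts an array or a function like this via its `origin` option. Domains below are placeholders:

```javascript
// Explicit origin allow-list. Never fall back to "*" or to reflecting
// the request's own Origin header.
const ALLOWED_ORIGINS = new Set([
  "https://app.example.com",
  "https://admin.example.com",
]);

function isAllowedOrigin(origin) {
  return ALLOWED_ORIGINS.has(origin);
}

// With Express this would be wired up roughly as:
//   app.use(cors({ origin: [...ALLOWED_ORIGINS] }));
```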
6. Security Headers
- [ ] (Critical) `Content-Security-Policy` is configured and restricts script sources
- [ ] `Strict-Transport-Security` is set with a long max-age (31536000)
- [ ] `X-Frame-Options` is set to `DENY` (prevents clickjacking)
- [ ] `X-Content-Type-Options` is set to `nosniff`
- [ ] `Referrer-Policy` is set to `strict-origin-when-cross-origin` or stricter
- [ ] `Permissions-Policy` disables unnecessary browser features
- [ ] `Cross-Origin-Opener-Policy` is set to `same-origin`
- [ ] `Cross-Origin-Resource-Policy` is set appropriately
- [ ] No `X-Powered-By` header exposing server technology
- [ ] Source maps are not deployed to production (`.map` files)
What AI gets wrong
AI almost never configures security headers. It generates a working server that serves content over HTTPS and calls it done. Security headers are a defense-in-depth layer that blocks clickjacking, XSS exploitation, MIME confusion, and data exfiltration even when application-level defenses fail.
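A starting-point header set matching the checklist above can live in one object and be applied to every response. Treat the values as a baseline sketch: the CSP in particular must be tightened to your application's real script and style sources before it is useful:

```javascript
// Baseline security headers; apply to every response via your
// framework's middleware or your CDN/edge configuration.
const SECURITY_HEADERS = {
  "Content-Security-Policy": "default-src 'self'; script-src 'self'; object-src 'none'",
  "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
  "X-Frame-Options": "DENY",
  "X-Content-Type-Options": "nosniff",
  "Referrer-Policy": "strict-origin-when-cross-origin",
  "Permissions-Policy": "camera=(), microphone=(), geolocation=()",
  "Cross-Origin-Opener-Policy": "same-origin",
};

// Express-style sketch:
//   app.use((req, res, next) => { res.set(SECURITY_HEADERS); next(); });
//   app.disable("x-powered-by");
```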
7. Data Protection
- [ ] (Critical) Sensitive data is encrypted at rest (database encryption)
- [ ] All communication uses HTTPS (no mixed content)
- [ ] Personal data collection is minimized (only collect what you need)
- [ ] Database backups are encrypted and access-controlled
- [ ] Logs do not contain passwords, tokens, or full credit card numbers
- [ ] Error messages do not expose database schemas, file paths, or stack traces
- [ ] User data can be exported and deleted (GDPR/CCPA compliance)
- [ ] Password reset and verification emails do not contain sensitive information in the URL path
8. Dependency Security
- [ ] `npm audit` or `pnpm audit` runs with zero critical/high vulnerabilities
- [ ] Dependencies are updated regularly (at least monthly)
- [ ] Lock files (`pnpm-lock.yaml`, `package-lock.json`) are committed
- [ ] No unused dependencies in `package.json` (Cursor often leaves orphaned imports)
- [ ] Critical dependencies are from well-known, actively maintained packages
- [ ] No packages with typosquatting risk (verify package names carefully)
- [ ] Build tools and dev dependencies are in `devDependencies`, not `dependencies`
9. Deployment & Infrastructure
- [ ] Production environment variables are set in the hosting platform (not in code)
- [ ] Debug mode is disabled in production
- [ ] Source maps are not publicly accessible
- [ ] Admin panels and internal tools are not exposed to the public internet
- [ ] Database is not directly accessible from the internet (use private networking)
- [ ] Server/function timeouts are configured to prevent resource exhaustion
- [ ] File system is read-only where possible (serverless is inherently good here)
- [ ] Deployment previews do not use production secrets or data
10. Monitoring & Incident Response
- [ ] Server errors (5xx) are logged to a monitoring service
- [ ] Rate limit violations are logged
- [ ] Failed authentication attempts are logged (with IP, without passwords)
- [ ] Uptime monitoring is configured with alerts
- [ ] SSL certificate expiration is monitored
- [ ] An incident response plan exists (even a simple one: who gets alerted, what to do first)
- [ ] Contact information for security reports is published (security.txt or a security@ email)
Using this checklist in your workflow
Pre-launch (required)
Before any application goes live with real users:
- Complete all (Critical) items
- Complete Sections 1-6 entirely
- Run a dependency audit
- Test with a security-focused mindset (try to break your own app)
Post-launch (ongoing)
After launch, on a regular cadence:
- Weekly: Check for dependency updates and security advisories
- Monthly: Review access logs for anomalies
- Quarterly: Re-run through this entire checklist
- Annually: Consider a professional security assessment
Before major releases
When adding significant new features:
- Re-check sections 2-5 for the new code
- Test authorization on all new endpoints
- Validate inputs on all new forms
- Update CSP if new external resources are added
Frequently Asked Questions
How often should I run this security checklist?
Run it before every deployment that touches authentication, authorization, data handling, or API endpoints. For routine UI changes, a quick pass through sections 1 and 6 (secrets and headers) is sufficient. A full checklist review should happen at minimum before launch and before any major feature release.
Does this checklist work for all frameworks?
Yes. The checklist covers security fundamentals that apply regardless of whether you built with Next.js, Express, Django, Rails, or any other framework. The specific implementation details differ, but the security patterns — input validation, authentication, headers, secrets management — are universal.
Can I automate this checklist with my AI coding assistant?
Partially. You can feed this checklist to Claude, Cursor, or ChatGPT and ask it to review your codebase against each item. AI assistants are good at finding missing headers, exposed secrets, and basic input validation gaps. However, they are less reliable at catching business logic vulnerabilities, complex authorization bypasses, and race conditions. For those, you need a human security reviewer.
What if my AI assistant says my code passes all these checks?
AI assistants can miss subtle vulnerabilities and sometimes produce false negatives — reporting code as secure when it is not. If your application handles sensitive data, processes payments, or stores personal information, an AI-assisted self-review is a good starting point but should not be your only security measure.
Is this checklist enough to be "secure"?
No checklist guarantees security. This covers the most common vulnerability patterns we see in AI-generated applications, but every application has unique business logic that creates unique attack surface. This checklist reduces your risk significantly, but comprehensive security requires professional testing specific to your application.
When the checklist is not enough
Checklists catch known patterns. They do not catch:
- Business logic vulnerabilities specific to your application
- Complex authorization bypasses that span multiple endpoints
- Race conditions in concurrent operations
- Novel attack patterns unique to your architecture
For these, you need a human security reviewer who thinks like an attacker. That is what we do at VibeSec Advisory. We review your AI-generated application against this checklist and beyond, finding the vulnerabilities that automated tools and checklists cannot reach.
If you are preparing to launch or have already launched an AI-built application, get in touch. We will tell you exactly what needs fixing and how to fix it.