The reality
We ran a test. We asked 3 popular AI coding assistants to add security headers to a website.
All 3 gave us instructions. All 3 were wrong in different ways.
What went wrong
Assistant A told us to add headers in Cloudflare Transform Rules. The option it described no longer exists in the dashboard.
Assistant B gave us a _headers file with syntax that Cloudflare Pages ignores. It mixed up Netlify and Cloudflare syntax.
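For reference, Cloudflare Pages documents a `_headers` format of a URL pattern followed by indented `Name: value` lines. A minimal example (the header choices here are illustrative, not a complete policy):

```
/*
  X-Frame-Options: DENY
  X-Content-Type-Options: nosniff
```

The file lives at the root of your build output. If the pattern or indentation is wrong, Cloudflare Pages silently ignores the rule, which is exactly why you verify with curl afterward.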
Assistant C suggested a Content-Security-Policy with unsafe-inline in script-src. Technically valid, but it defeats the purpose of having a CSP at all.
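A CSP can allow specific inline scripts without unsafe-inline by using per-response nonces or hashes. A hedged sketch (the nonce value must be freshly generated for every response, never hard-coded like this):

```
Content-Security-Policy: default-src 'self'; script-src 'self' 'nonce-R4nd0mV4lue'
```

Each inline script you trust then carries the matching attribute, e.g. `<script nonce="R4nd0mV4lue">`. Anything without the nonce is blocked, which is the protection unsafe-inline throws away.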
All 3 sounded completely confident. None of them said "I'm not sure" or "you should verify this."
Why this happens
AI models are trained on old documentation. Security best practices evolve. Cloudflare changes their UI. What was correct 6 months ago might be wrong today.
The models also optimize for being helpful, not for being cautious. They will give you an answer even when they should say "I don't know."
The false positive problem
We see this constantly in security scanning too.
A tool reports a vulnerability. An AI assistant suggests a fix. The fix does not work. Hours wasted.
Common examples:
- CSP reports: Automated scanners flag headers as missing when they are actually present
- TLS issues: Tools report weak ciphers that were already disabled
- CORS findings: Reports of wildcards that only apply to specific endpoints
You need human verification. Every time.
What we do differently
We run automated scans. Then we manually verify every finding with curl, browser dev tools, and our own eyes.
If a scanner says "missing X-Frame-Options," we run:
curl -I https://yoursite.com | grep -i "x-frame-options"
If we see the header, we do not report it as missing. Seems obvious. You would be surprised how often this gets skipped.
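The single grep above generalizes to a short helper that checks several headers in one pass. A minimal sketch (the function name and header list are ours, not part of any scanner):

```shell
# Reads raw response headers on stdin and reports each expected
# security header as present or missing. Header list is illustrative.
check_headers() {
  local resp
  resp=$(cat)
  for h in x-frame-options x-content-type-options strict-transport-security; do
    if printf '%s\n' "$resp" | grep -qi "^$h:"; then
      echo "present: $h"
    else
      echo "MISSING: $h"
    fi
  done
}

# usage: curl -sI https://yoursite.com | check_headers
```

Because the function reads stdin, you can also feed it a saved response and re-check findings without hitting the site again.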
The skill gap
AI coding tools are incredible for productivity. But security requires:
- Context: Knowing which rules apply to your specific architecture
- Experience: Recognizing when a finding is real vs. a false positive
- Judgment: Understanding risk tolerance and business impact
- Verification: Never trusting automated output without manual checks
AI has none of these. It has patterns and confidence.
The bottom line
Use AI to code faster. But do not use the same AI to secure what you built.
AI needs special security scaffolding to be effective. Yes, frontier models are finding vulnerabilities that have been hidden for decades. But they still miss the basics. They get tunnel vision and claim victory at the first sign of a breakthrough.
This is why you need special skills and tooling to guide the model and validate its findings.
Need a real security assessment? Get your free full assessment