## The appeal of automated scanners
Automated security scanners are fast, consistent, and relatively affordable. Tools like Snyk, Intruder, Detectify, Qualys, Acunetix, and Nessus can scan your application in minutes and produce a report with a list of findings.
For many teams, this sounds like enough. Why pay for a human assessment when a tool can do it automatically?
The answer is simple: automated scanners and human assessments answer different questions. Scanners ask "does this app have known vulnerabilities?" A human assessment asks "is this app actually secure?"
Those are not the same question.
## What automated scanners do well
Let us give credit where it is due. Automated scanners excel at:
- Known vulnerability detection (CVEs): Scanners maintain databases of thousands of known vulnerabilities and can check your dependencies, server configurations, and headers against them rapidly.
- Consistency: A scanner runs the same checks every time. It does not get tired, skip steps, or have a bad day.
- Speed: A full scan can complete in minutes to hours, depending on scope.
- Continuous monitoring: Most modern scanners can run on a schedule, alerting you when new issues appear.
- Compliance checklists: Scanners map findings to frameworks like OWASP Top 10, PCI DSS, and SOC 2 requirements.
If you are not running any scanner at all, adding one is absolutely better than nothing. We are not anti-scanner -- we are pro-context.
## What automated scanners miss
Here is where it gets interesting. The gaps in automated scanning are exactly where the highest-impact vulnerabilities live.
### 1. Business logic flaws
Scanners test technical patterns. They cannot understand your application's business rules.
For example, a scanner will not catch:
- A pricing endpoint that accepts negative quantities, giving users credits instead of charging them
- An authorization bypass where user A can access user B's data by changing an ID in the URL
- A referral system that allows self-referral loops to generate unlimited credits
- A free tier that can be exploited to access paid features by manipulating request headers
These are not edge cases. In bug bounty programs, business logic flaws consistently rank among the highest-severity findings. They require understanding what the application is supposed to do, not just what it technically does.
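The second item above, an authorization bypass via a changed ID (an insecure direct object reference, or IDOR), comes down to a missing ownership check. Here is a minimal sketch in plain JavaScript; the in-memory `invoices` store and function names are hypothetical, for illustration only.

```javascript
// Hypothetical in-memory store standing in for a database.
const invoices = {
  inv_1: { owner: "user_a", total: 42 },
  inv_2: { owner: "user_b", total: 99 },
};

// Vulnerable: trusts the ID from the URL and ignores who is asking.
function getInvoiceVulnerable(requestingUser, invoiceId) {
  return invoices[invoiceId] ?? null;
}

// Fixed: verifies the authenticated user actually owns the resource.
function getInvoice(requestingUser, invoiceId) {
  const invoice = invoices[invoiceId];
  if (!invoice || invoice.owner !== requestingUser) {
    return null; // treat "not yours" like "not found" to avoid leaking IDs
  }
  return invoice;
}
```

To a scanner, both versions return valid responses with a 200 status. Only a reviewer who knows that invoices belong to users can see that the first version lets user A read user B's data.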
### 2. Context-dependent vulnerabilities
A scanner might flag a missing X-Frame-Options header. A human reviewer will ask: "Does this page contain a form that processes payments? If so, the clickjacking risk is critical. If it is a static marketing page, it is low priority."
Context changes everything:
- CORS misconfigurations matter differently depending on whether authenticated endpoints are exposed
- Rate limiting gaps are critical on login forms but cosmetic on a public blog feed
- Information disclosure in error messages is a non-issue on internal tools but a serious concern on customer-facing APIs
Scanners report findings with a fixed severity. Human reviewers assess risk based on your specific architecture, users, and threat model.
### 3. AI-generated code patterns
This is where vibe-coded apps face unique challenges that scanners are not designed to detect.
AI coding assistants like Cursor, Claude Code, Bolt, and Copilot tend to produce code with consistent security blind spots:
- Overly permissive defaults: AI-generated code often starts with `Access-Control-Allow-Origin: *` or `cors({ origin: true })` because it "just works." A scanner may flag the wildcard CORS but will not understand that the AI generated this pattern because the developer prompted "make the API accessible from my frontend."
- Missing validation on the "happy path": AI code typically handles the expected flow well but skips edge cases like negative numbers, empty strings, or malformed JSON. Scanners do not fuzz your custom business logic.
- Scaffolded secrets: AI assistants frequently generate placeholder API keys, hardcoded JWT secrets (like `"your-secret-here"`), and default database credentials in code. Some scanners catch obvious patterns like `password = "admin"`, but miss obfuscated or framework-specific secret patterns.
- Incomplete authentication flows: AI may generate a login endpoint but omit logout, session invalidation, or token refresh logic. A scanner tests the endpoints that exist -- it does not notice the ones that are missing.
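The fix for the wildcard-CORS pattern in the first item is an explicit origin allowlist. A minimal sketch, compatible with the `origin` option of the Express `cors` middleware; the origins listed are hypothetical placeholders for your real frontend URLs.

```javascript
// Explicit allowlist instead of the AI-default wildcard.
const ALLOWED_ORIGINS = new Set([
  "https://app.example.com", // hypothetical production frontend
  "http://localhost:3000",   // local development
]);

// Usable as the `origin` option of the cors() middleware:
//   cors({ origin: (origin, cb) => cb(null, isAllowedOrigin(origin)) })
function isAllowedOrigin(origin) {
  return ALLOWED_ORIGINS.has(origin);
}
```

The point is that the allowlist has to be written by someone who knows which origins are legitimate -- exactly the context a scanner does not have.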
### 4. Architecture and design weaknesses
Scanners test individual endpoints and configurations. They do not assess:
- Whether sensitive operations are properly separated from public-facing code
- Whether the database schema leaks more data than the API intends to expose (over-fetching)
- Whether client-side routing actually enforces authorization or just hides UI elements
- Whether environment variables are properly segmented between development and production
These are systemic issues that require reading code and understanding architecture, not running a checklist.
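The client-side-routing point is worth making concrete: hiding a link is not authorization. A minimal sketch of a server-side role check; the role names and the `requireRole` helper are hypothetical illustrations, not a prescribed API.

```javascript
// Hiding the admin link in the UI does not protect the route.
// Every sensitive endpoint needs its own server-side check.
function requireRole(user, role) {
  return Boolean(user) && user.roles.includes(role);
}

// Express-style usage (sketch):
//   app.get("/admin/users", (req, res) => {
//     if (!requireRole(req.user, "admin")) return res.sendStatus(403);
//     // ...return the user list only after the check passes
//   });
```

A scanner that never receives the hidden admin URL never tests it; a code reviewer reads the route table and checks whether each route enforces this server-side.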
## The comparison
Here is how automated scanners compare to human-led security assessments across key dimensions:
| Dimension | Automated Scanners | Human Assessment |
| --- | --- | --- |
| Speed | Minutes to hours | 24-48 hours |
| CVE/known vulnerability detection | Excellent | Good (informed by tools + context) |
| Business logic testing | None | Deep contextual analysis |
| AI-generated code patterns | Minimal | Specialized detection |
| False positive rate | High (30-60% typical) | Low (verified findings only) |
| Remediation guidance | Generic CVE descriptions | AI-ready prompts specific to your stack |
| Context-aware severity | Fixed ratings | Risk-adjusted to your architecture |
| Authentication/authorization testing | Basic (login brute force) | Flow analysis, token handling, session management |
| Continuous monitoring | Yes (scheduled scans) | Point-in-time with retainer options |
| Cost | $100-500/month (SaaS) | $199/month (VibeSec Pro) |
| Compliance mapping | Automated | Available on request |
Neither approach is universally better. They cover different parts of the security surface.
## The false positive problem
One of the most underappreciated costs of automated scanners is the false positive rate. Industry studies consistently show that automated scanners produce false positive rates between 30% and 60%.
When your scanner returns 47 findings and 20 of them are not real issues, several bad things happen:
- Your team wastes time investigating non-issues. Developer hours spent triaging false positives are expensive.
- Real issues get buried. When most alerts are noise, people start ignoring all of them.
- You lose confidence in the tool. If every scan generates a new set of "critical" findings that turn out to be false alarms, the scanner becomes background noise.
Human-led assessments verify every finding before including it in the report. If it is in the report, it is real, reproducible, and relevant.
## Why "scan and fix" does not work for vibe-coded apps
The standard workflow for automated scanners looks like this:
1. Scan runs automatically
2. Report generates with findings
3. Developer looks at the generic CVE description
4. Developer Googles the fix
5. Developer applies the fix
6. Next scan verifies the fix
For a vibe coder using Cursor or Claude Code, step 4 becomes: "paste the CVE description into the AI assistant and ask it to fix it." The problem is that generic CVE descriptions are optimized for human security engineers, not for AI coding assistants.
A generic scanner output might say:
> Missing Content-Security-Policy header. The Content-Security-Policy HTTP response header is not set. CSP helps prevent XSS attacks by specifying valid sources of content. CVSS: 6.1. Reference: CWE-16.
A vibe coder pastes this into Cursor. Cursor adds a CSP header. But without knowing the application's actual resource loading patterns, it generates a policy that either breaks the app (too restrictive) or does not protect it (too permissive).
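To make the contrast concrete, here is a sketch of a CSP assembled from an explicit directive map rather than copied generically. The directive sources shown are hypothetical -- a working policy must list the origins your app actually loads scripts, styles, and images from.

```javascript
// Build a Content-Security-Policy header value from explicit directives.
// The sources below are placeholders; they must reflect your app's
// real resource origins or the policy will break pages or protect nothing.
function buildCsp(directives) {
  return Object.entries(directives)
    .map(([name, sources]) => `${name} ${sources.join(" ")}`)
    .join("; ");
}

const csp = buildCsp({
  "default-src": ["'self'"],
  "script-src": ["'self'", "https://js.stripe.com"], // hypothetical third party
  "img-src": ["'self'", "data:"],
});
// Applied as: res.setHeader("Content-Security-Policy", csp)
```

An AI assistant given only the CVE text has no way to know that `https://js.stripe.com` (or whatever your app really uses) belongs in `script-src` -- that knowledge has to come from someone who has read the application.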
VibeSec assessment reports include AI-ready remediation prompts that are specific to your application's stack, hosting environment, and resource loading patterns. You paste the prompt, your AI coding assistant understands the context, and the fix works the first time.
## When to use a scanner
Automated scanners are the right choice when you need:
- Continuous monitoring of known vulnerabilities across a large portfolio
- Dependency scanning (checking your `package.json`, `requirements.txt`, or `Gemfile.lock` for known vulnerable versions)
- Compliance automation for frameworks that require regular scanning evidence
- CI/CD integration to catch known issues before deployment
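The dependency-scanning and CI/CD points above can be as small as one pipeline step. A sketch assuming an npm-based project: `npm audit --audit-level=high` exits non-zero only when high- or critical-severity advisories are found, which fails the build.

```shell
# CI step sketch (assumes an npm project with a lockfile present):
# fail the build if any high- or critical-severity advisory exists.
npm audit --audit-level=high
```
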
We recommend running a scanner alongside periodic human assessments. They complement each other.
## When you need a human
Human-led assessments are essential when:
- You are launching a product that handles user data, payments, or sensitive information
- Your app was built with AI coding tools and has not had a security review
- A scanner found zero issues and you want to verify that means "secure" rather than "the scanner did not check what matters"
- You need actionable fixes, not CVE references -- especially if your team uses AI coding assistants
- You handle data subject to regulations (HIPAA, GDPR, PCI DSS) and need to demonstrate due diligence beyond automated scanning
## The bottom line
Automated scanners are a necessary tool. They catch the known vulnerabilities efficiently and can run continuously. Every production application should have one.
But relying on a scanner alone is like only checking your car's tire pressure and calling it a full inspection. You have verified one important thing while ignoring the engine, the brakes, and the steering.
For vibe-coded applications -- where AI-generated code introduces unique patterns that scanners are not trained to detect -- the gap between "scanned" and "secure" is wider than most teams realize.
A VibeSec Pro assessment ($199/mo) catches what your scanner cannot see. And every finding comes with a fix you can paste directly into your AI coding assistant.
VibeSec Advisory provides security assessments designed for teams building with AI coding tools. See what a report looks like or get started with an assessment.