Your AI coding assistant has no guardrails
Open your CLAUDE.md file right now. Or your .cursorrules. Or your Copilot agent configuration. What does it say about security?
For most people, the answer is nothing. Or worse — there is no configuration file at all. The AI agent runs with whatever defaults the tool shipped with, which typically means: access everything, modify anything, ask about nothing.
This is the equivalent of giving a new contractor the admin password on their first day and telling them to figure it out. The contractor might be brilliant. They might also rm -rf a production directory because you never told them it was off limits.
AI coding assistants are powerful. They can read your files, execute commands, write code, push to git, and interact with external services through MCP servers. That power is useful — but only if you have drawn a line around what it should and should not do.
That line is your configuration file. And right now, yours is probably empty.
What this generates
This prompt creates a security-focused configuration file tailored to your specific role. It works for:
- CLAUDE.md — Claude Code's project-level configuration
- .cursorrules — Cursor's project rules
- Copilot agent config — GitHub Copilot's agent instructions
You pick your role, and the prompt generates a config file with:
- File access boundaries — which directories the agent should and should not touch
- Secret detection rules — patterns to flag before they end up in a commit or a prompt
- Destructive action gates — commands and operations that require explicit approval
- Git hygiene rules — commit conventions, branch protections, pre-push checks
- Data handling rules — what types of data should never be included in AI context
- Escalation triggers — situations where the agent should stop and ask instead of proceeding
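For a sense of the output, a generated section of a CLAUDE.md might look like the fragment below. The paths and patterns are illustrative only — the real output is tailored to your project:

```markdown
## File Access Boundaries
<!-- Why: .env files and deploy/ hold credentials and production config -->
- Freely read and write: src/, tests/, docs/
- Read, but ask before modifying: .github/workflows/, Dockerfile
- Never access: .env*, deploy/secrets/, *.pem, *.key

## Destructive Action Gates
<!-- Why: these operations are hard or impossible to undo -->
- Always ask before: git push --force, rm -rf, DROP TABLE, any deploy command
```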
Before you start
Disclaimer: This content is provided for educational and informational purposes by VibeSec Advisory. These prompts and configurations interact with your local system and generate code automatically. Always review generated code before executing it in your environment. VibeSec Advisory provides recommendations and frameworks — not warranties on specific outcomes. Use at your own risk. By using these materials, you acknowledge that VibeSec Advisory is not responsible for any damages, data loss, or system issues resulting from their use.
You will need:
- Claude Code (terminal, desktop app, or IDE extension)
- About 5 minutes
- To know which AI coding tool you want to configure (Claude Code, Cursor, or Copilot)
- To be in the project directory where you want the config file
The prompt
Copy this entire block and paste it into Claude Code.
I want you to generate a security-focused configuration file for my AI coding assistant. Ask me a few questions first, then generate the file.
STEP 1 — CONTEXT GATHERING
Ask me these questions one at a time:
1. Which AI coding tool are you configuring? (Claude Code → CLAUDE.md, Cursor → .cursorrules, GitHub Copilot → agent config, or "all" to get templates for each)
2. What is your role? Pick one or describe your own:
- Software developer (backend, frontend, or full-stack)
- Sales engineer / solutions architect
- Product manager
- DevOps / infrastructure engineer
- Data scientist / analyst
- Marketing / content creator
- Team lead / engineering manager
3. What does this project do? (One sentence is fine — I need to know the domain to tailor the rules)
4. What sensitive systems does this project interact with? (databases, payment processors, email services, cloud infrastructure, customer data, internal APIs — list whatever applies)
5. Do you work with any of the following? (Check all that apply)
- Production environment access
- Customer PII or personal data
- Financial data or payment processing
- Healthcare data (HIPAA)
- Authentication / credential systems
- Internal tools other people rely on
STEP 2 — GENERATE CONFIGURATION FILE
Based on the answers above, generate a complete configuration file with the following sections. Be specific — generic rules like "be careful with sensitive data" are useless. Name the actual files, directories, and patterns that apply to this project.
### Section: Identity and Scope
- What this project is
- What the agent's role is (assistant, not autonomous — always defer to the human on judgment calls)
- What is explicitly out of scope
### Section: File Access Boundaries
- Directories the agent should freely read and write
- Directories the agent should read but not modify without asking (e.g., config files, CI/CD, infrastructure)
- Directories and files the agent should NEVER access (e.g., .env, credentials, private keys, production configs)
- File types that require extra caution (.pem, .key, .env, .credentials, docker secrets)
### Section: Secret Detection
- Environment variable patterns to never include in code or commit messages (AWS_*, STRIPE_*, *_SECRET, *_KEY, *_TOKEN, *_PASSWORD)
- File patterns that should never be committed (.env*, *.pem, *.key, credentials.*, secrets.*)
- Inline patterns to flag: API keys, connection strings, hardcoded passwords, bearer tokens
- What to do when a secret is detected: stop, warn, suggest .env + .gitignore approach
### Section: Destructive Action Gates
Commands and operations that should ALWAYS require explicit user confirmation:
- Git: force push, reset --hard, branch deletion, anything touching main/master
- File system: rm -rf, deleting directories, overwriting config files
- Database: DROP, DELETE without WHERE, schema migrations on production
- Deployment: anything that pushes to production
- Package management: major version upgrades, removing dependencies
- System: killing processes, modifying system configs, changing permissions
### Section: Git Hygiene
- Always work on a branch, never commit directly to main
- Commit message format (conventional commits or whatever the project uses)
- Never commit files that match secret patterns
- Never use --no-verify to skip hooks
- Never force push to shared branches
- Always review the diff before committing
### Section: Data Handling
- Types of data that should never be included in prompts or AI context (based on the sensitive systems identified in Step 1)
- PII handling rules if applicable
- Log sanitization — never include real customer data in examples or test fixtures
- What to do if the agent encounters sensitive data unexpectedly
### Section: Escalation Triggers
Situations where the agent should stop and ask instead of proceeding:
- Ambiguous requirements that could be interpreted multiple ways
- Changes that affect more than N files (suggest a threshold)
- Any modification to authentication, authorization, or security-related code
- Performance-critical paths
- Anything that changes how data flows to or from external services
- When the agent is not confident in its approach
### Section: MCP Server Rules (Claude Code only)
If you are configuring Claude Code and MCP servers are in use:
- Which MCP servers are approved for use in this project
- What operations each server should and should not perform
- Data that should never be sent through MCP tools
STEP 3 — INSTALLATION INSTRUCTIONS
After generating the file, tell me:
- Exactly where to save it (file path)
- Whether to add it to .gitignore or commit it (recommendation: commit it — these rules should be shared with the team, and they contain no secrets)
- How to verify it is being picked up by the tool
- One thing to check after 24 hours of use to see if the rules need adjustment
IMPORTANT GUIDELINES:
- Be specific, not generic. "Don't access sensitive files" is not a rule. "Never read or modify files in ./secrets/, ./.env*, or any file ending in .pem, .key, or .p12" is a rule.
- Tailor everything to the role and project described. A frontend developer's config looks very different from a DevOps engineer's config.
- The config should be strict by default. It is easier to relax a rule that is too tight than to catch damage from a rule that was too loose.
- Include comments in the generated file explaining why each rule exists — the person reading this file in six months should understand the reasoning, not just the rule.
- Keep the file under 150 lines. Long config files get ignored. Prioritize the highest-impact rules.
What to expect
Claude will ask you five questions about your role, project, and environment. Answer in plain language — you do not need to know security terminology. If you are unsure about something, say so and the prompt will make a conservative recommendation.
The generated file usually runs 80-120 lines and covers the specific tools, directories, and patterns relevant to your setup. It is not a generic template — it references your actual project structure and the systems you described.
After generating, Claude will tell you exactly where to put the file and how to confirm your tool is reading it.
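Before committing the generated file, a quick sanity check is worth the extra minute: confirm it exists where the tool expects it, and confirm the file describes secrets rather than containing any. A minimal sketch — the `AKIA` and `sk_live_` patterns are illustrative examples of AWS and Stripe key formats, not an exhaustive scan:

```shell
#!/bin/sh
# Sanity-check a generated config file before committing it.
# $1: path to the config (CLAUDE.md, .cursorrules, etc.)
check_config() {
  [ -f "$1" ] || { echo "missing: $1" >&2; return 1; }
  # The config should name secret patterns, not contain real secrets.
  if grep -qE '(AKIA[0-9A-Z]{16}|sk_live_[0-9a-zA-Z]+)' "$1"; then
    echo "possible real secret inside $1" >&2
    return 1
  fi
  return 0
}
```

Run it as `check_config CLAUDE.md` from the project root; a zero exit status means the file is present and passed the (deliberately shallow) secret scan.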
What this gives you — and what it does not
This gives one person a security baseline for one project. That is genuinely useful. It prevents the most common mistakes: committing secrets, running destructive commands without thinking, and letting the agent access files it has no reason to touch.
Here is what it does not solve:
Consistency across your team. You just generated a config file based on your role and your understanding of the project. The developer sitting next to you will generate a different one. The new hire will generate a third. Three people, three different security postures, one project. Which one is right?
Shared policy enforcement. Your config file says "never access the production database." But it is a suggestion, not an enforcement mechanism. If someone removes that line or ignores the warning, nothing stops them. Real guardrails are enforced, not aspirational.
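For the secrets rule specifically, you can close part of that gap yourself by pairing the config with an enforced check, such as a git pre-commit hook. A minimal sketch — the blocked patterns are illustrative, not exhaustive, and a real setup would use a maintained scanner:

```shell
#!/bin/sh
# Minimal pre-commit hook sketch: reject commits that stage
# likely-secret files. Patterns mirror the config's secret-file list.
BLOCKED='(^|/)\.env|\.pem$|\.key$|(^|/)credentials\.|(^|/)secrets\.'

scan_staged() {
  # $1: newline-separated list of staged file paths
  if printf '%s\n' "$1" | grep -qE "$BLOCKED"; then
    echo "Blocked: a staged file matches a secret-file pattern." >&2
    return 1
  fi
  return 0
}

# In the installed hook (.git/hooks/pre-commit), the entry point would be:
# scan_staged "$(git diff --cached --name-only)" || exit 1
```

Unlike a line in a config file, a hook actually stops the commit — though note it is still per-clone and can be bypassed with --no-verify, which is exactly why the config forbids that flag.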
Evolution over time. Your project changes. New MCP servers get added. New integrations get built. New people join the team. The config file you generated today will be outdated in a month unless someone is actively maintaining it. Across a team of ten people, that is ten config files drifting in ten different directions.
Cross-project governance. This config covers one project. Most teams work across multiple repositories, each with different sensitivity levels. An engineer who works on both the marketing site and the payment processing backend needs different guardrails for each — and the guardrails need to be designed together, not independently.
The FORGE methodology addresses this through the Guardrails pillar: data access policies, approval gates, escalation triggers, and action limits designed across your entire team, verified for consistency, and maintained as your tooling evolves. A single config file is a good first step. A system of guardrails is what makes AI adoption actually safe at scale.
Start with your riskiest project
If you work on multiple projects, run this on the one that touches the most sensitive data first. The project with production database access. The one with payment processing. The one with customer PII.
A config file takes five minutes to generate and zero minutes to install. The gap between "no guardrails" and "basic guardrails" is the widest gap in AI security — and it is the easiest one to close.
VibeSec Advisory helps teams design consistent, enforceable AI Guardrails using the FORGE methodology. One config file is a starting point. A team-wide guardrails system is a strategy. Book a Discovery Workshop and we will build yours.