
Frequently asked questions

Everything you need to know about the FORGE framework, FORGE engagements, Guardrails, and how we work.

About VibeSec Advisory

VibeSec Advisory helps knowledge workers move from random AI usage to governed AI workflows. Ryan Macomber works with sales, HR, marketing, onboarding, and product teams to map existing workflows and rebuild them using the FORGE Methodology. Security is built into every engagement as the Guardrails pillar. It is not a separate add-on or a later conversation.

VibeSec works with knowledge workers building agentic workflows: sales teams automating prospecting, HR teams redesigning onboarding, marketing teams building campaign tools, product managers integrating AI into their daily processes, and sales engineers creating demos. Most clients are at companies with 20 to 500 employees, but the fit depends more on whether the team can name a real process, a baseline metric, and a 90-day target.

Most AI consultants deliver strategy decks with no security depth. Most security firms have no AI adoption expertise. Ryan Macomber created the FORGE Methodology to solve both problems. He has applied FORGE to hundreds of teams and uses agentic AI tools every day. That combination of process design experience and real guardrails depth is the differentiator.

Traditional firms usually start with testing and findings. VibeSec starts with the FORGE framework. Guardrails is one of the six FORGE pillars and includes human approval steps, automated checks, data boundaries, and escalation rules. Security is designed into your agentic workflows from the start instead of audited after the workflow is already spreading.

Yes, there is a free way to start. The FORGE AI Workflow Starter Kit is free and gives you a workflow map, a reusable Skill template, a guardrails checklist, and a 30-minute implementation plan before you consider deeper advisory support.

No, a call is not required. Start with the free FORGE AI Workflow Starter Kit or send an async advisory inquiry. If the fit is clear, Ryan will reply with written next steps, clarifying questions, or the right intake path.

FORGE Methodology

FORGE is a methodology for redesigning knowledge work using autonomous AI agents. It has six pillars: Baseline, Skills, Agents, Guardrails, Schedule, and Capture. Security lives in the Guardrails pillar of every engagement.

The FORGE AI Workflow Starter Kit is a free PDF for mapping one AI-enabled workflow. It includes a workflow map, reusable Skill template, guardrails checklist, and a simple 30-minute plan for turning scattered AI usage into a governed workflow.

The Async Advisory Retainer is the only public paid offer, at $3,000/month. It includes async advisory through email, Loom, and shared docs; one priority workflow or guardrail problem per month; monthly process reviews; AI tooling guidance; and written next actions.

Yes, custom work is possible. Private scoped engagements can be handled through a written proposal or SOW after Ryan reviews the workflow context. The only public paid offer is the Async Advisory Retainer.

No technical background is required. FORGE engagements are designed for knowledge workers, not engineers. We explain everything in plain language and focus on your team's actual workflows. The framework is accessible to anyone who uses agentic AI tools to do their work.

Yes, ongoing support is available. The public paid path is the Async Advisory Retainer at $3,000/month, the same offer described above: async advisory through email, Loom, and shared docs, with one priority workflow or guardrail problem per month and written next actions.

AI Security Governance

AI Security Governance is part of the Guardrails pillar. It covers prompt injection awareness, MCP and tool poisoning risks, data leakage patterns, agent permission boundaries, shadow AI auditing, and model output verification guidance. This is advisory work delivered through structured interviews and workflow analysis, not external scanning.

Security lives inside the Guardrails pillar of the FORGE framework. When we map your workflows, we naturally identify where data boundaries are unclear, where agents have excessive permissions, and where human checkpoints are missing. Separating security from workflow design creates blind spots. Few agentic AI advisors cover prompt injection, MCP risks, and agent permission boundaries inside the same engagement as workflow design.

No. FORGE engagements are entirely advisory. We review your team's AI tool configurations, workflows, and governance practices through interviews and collaborative sessions. We do not run external scans, penetration tests, or automated tools against your applications. Our value is in governance education and FORGE methodology, not scan results.

About Agentic AI Security

Agentic AI refers to AI systems that can take autonomous actions on behalf of users. Tools like Claude, Cursor, Amp, and Copilot let knowledge workers describe tasks in natural language, and the AI executes them: writing code, building workflows, searching databases, and calling external APIs. This is transformative for productivity but creates new security risks around data access, tool permissions, and unreviewed AI actions.

The biggest risks for knowledge workers using agentic AI tools are: exposed API keys and credentials in AI-generated code; MCP server vulnerabilities that let attackers hijack AI tool actions; prompt injection attacks that manipulate AI behavior through malicious input; data leakage through AI tools that send sensitive information to external services; shadow AI usage, where teams adopt tools without IT or security review; and AI-generated code with missing security controls. Guardrails in the FORGE framework address all of these directly.
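As a concrete illustration of the first risk, here is a minimal sketch of scanning AI-generated code for likely credentials before it is committed. The pattern names and regexes are illustrative assumptions, not a complete rule set; production scanners such as gitleaks or trufflehog use far larger ones.

```python
import re

# Illustrative patterns only; real secret scanners ship hundreds of rules.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key\s*[=:]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
}

def scan_for_secrets(source: str) -> list:
    """Return (pattern_name, line_number) for each suspected secret."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((name, lineno))
    return findings

snippet = 'API_KEY = "sk_test_abcdefghijklmnop1234"\nprint("hello")\n'
print(scan_for_secrets(snippet))  # flags line 1 as a generic API key
```

A check like this can run as an automated pre-commit or CI step, one of the "automated checks" the Guardrails pillar refers to.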

Not inherently. Research shows that a significant percentage of AI-generated code ships with security vulnerabilities. AI coding tools optimize for functionality, not security. Common issues include missing input validation, insecure default configurations, exposed API keys, and outdated dependency patterns. This is exactly why Guardrails is a core pillar of the FORGE Methodology, not an afterthought.
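To make "missing input validation" concrete, here is a hedged before-and-after sketch. The function names and the email rule are hypothetical examples, not code from any client engagement.

```python
import re

def create_user_unsafe(email: str) -> dict:
    # Pattern often seen in AI-generated code: input is accepted as-is.
    return {"email": email}

# Deliberately simple illustrative rule, not a full RFC 5322 validator.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def create_user_safe(email: str) -> dict:
    # Guardrail: reject malformed input before it reaches storage or APIs.
    if not EMAIL_RE.fullmatch(email):
        raise ValueError(f"invalid email: {email!r}")
    return {"email": email}

print(create_user_safe("alice@example.com"))  # accepted
try:
    create_user_safe("not-an-email")
except ValueError as exc:
    print("rejected:", exc)
```

The unsafe version is functionally identical for well-behaved input, which is exactly why the gap survives review when only functionality is tested.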

MCP (Model Context Protocol) is the standard that lets AI tools connect to external services like databases, APIs, and file systems. When you install an MCP server in Cursor or Claude Code, you give the AI the ability to take real actions in your environment. A compromised or malicious MCP server can read files, exfiltrate data, or modify code without your knowledge. VibeSec tested 6 MCP attack scenarios and all 6 were fully exploitable. Guardrails design in every FORGE engagement covers how to evaluate and safely configure MCP servers.
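One common guardrail for MCP-connected tools is a human-approval gate: low-risk actions run automatically, everything else waits for a person. The sketch below is illustrative only; the tool names, allowlist, and `approve` callback are assumptions, not part of the MCP specification or any specific client's setup.

```python
from typing import Callable

# Hypothetical allowlist: only these tool actions run without a human check.
AUTO_APPROVED = {"search_docs", "read_public_file"}

def run_tool(name: str, action: Callable[[], str],
             approve: Callable[[str], bool]) -> str:
    """Run a tool action, pausing for human approval unless it is allowlisted."""
    if name not in AUTO_APPROVED:
        if not approve(name):
            return f"blocked: {name} was not approved"
    return action()

# Simulated human reviewer that denies everything for this demo.
result = run_tool("delete_records",
                  lambda: "records deleted",
                  approve=lambda tool: False)
print(result)  # blocked: delete_records was not approved
```

In practice the `approve` callback would route to a Slack message, ticket, or review queue; the key design choice is that destructive actions default to blocked rather than allowed.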

Yes. Guardrails, one of the six FORGE pillars, covers exactly this. Every FORGE engagement includes human approval steps, automated checks, data boundary definitions, acceptable use rules, model access frameworks, and escalation procedures. Our deeper engagements also include compliance mapping for SOC 2, ISO 27001, and the EU AI Act. All deliverables are written in plain language for non-technical stakeholders.

Still have questions?

Reach out directly. We respond within one business day. No calls required.
