Compare commits

...

11 Commits

Author SHA1 Message Date
elsahafy bb7663037c
Merge 8a3ba98f16 into c748f0f6cc 2026-01-01 14:54:54 +01:00
Brian Madison c748f0f6cc paths for workflow and sprint tatus files fixed 2026-01-01 21:20:14 +08:00
Andaman Lekawat 4142972b6a
fix: standardize variable naming from {project_root} to {project-root} (#1217)
Fixed inconsistent variable naming in workflow instruction files across
CIS, BMGD, and BMM modules. The standard variable format uses hyphens
({project-root}) not underscores ({project_root}).

Affected files:
- CIS: problem-solving, innovation-strategy, design-thinking, storytelling
- BMGD: brainstorm-game, narrative, create-story checklist
- BMM: excalidraw diagrams, create-story checklist

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Brian <bmadcode@gmail.com>
2026-01-01 21:14:21 +08:00
Murat K Ozcan cd45d22eb6
docs: chose your tea engagement (#1228)
* docs: chose your tea engagement

* docs: addressed PR comments

* docs: made refiements to the mermaid diagram

* docs: wired in test architect discoverability nudges

---------

Co-authored-by: Brian <bmadcode@gmail.com>
2026-01-01 19:06:55 +08:00
Ibrahim Elsahafy 8a3ba98f16 feat(bmm): add API design workflow with OpenAPI 3.0 support
- Add workflow.yaml for contract-first API design
- Add instructions.md with detailed design process
- Add openapi.template.yaml as starting template
- Add api-checklist.md for design validation
2025-12-31 21:07:29 +04:00
Ibrahim Elsahafy 4284f80a9a feat(bmm): add security audit workflow with OWASP Top 10
- Add workflow.yaml for comprehensive security auditing
- Add instructions.md with step-by-step audit process
- Add owasp-checklist.md covering all OWASP Top 10 categories
- Add security-report.template.md for consistent reporting
2025-12-31 21:07:15 +04:00
Ibrahim Elsahafy 2a746a6fc4 feat(core): add utility tools for codebase analysis
- Add dependency-check.xml for vulnerability scanning
- Add schema-validator.xml for JSON/YAML/OpenAPI validation
- Add code-metrics.xml for size and complexity analysis
- Add context-extractor.xml for AI-optimized context extraction
2025-12-31 21:07:00 +04:00
Ibrahim Elsahafy 1abc60e068 feat(core): add pipeline orchestrator for multi-stage workflows
- Add pipeline-config.yaml with templates (full_sdlc, quick_flow, etc.)
- Add orchestrator.xml task with create, run, status, abort, resume commands
- Add pipeline.md documentation with architecture and usage
- Support sequential, parallel, and hybrid execution modes
2025-12-31 21:06:45 +04:00
Ibrahim Elsahafy 5601e6de12 feat(core): add inter-agent messenger for formal handoffs
- Add messenger-config.yaml with message types and routing rules
- Add send-message.xml and receive-message.xml tasks
- Add messenger.md documentation with usage examples
- Support handoff, review, clarify, escalate, notify, collaborate messages
2025-12-31 21:06:29 +04:00
Ibrahim Elsahafy 28b4e7a8ee feat(core): add session manager workflow for persistent state
- Add workflow.yaml with session commands (start, resume, status, close)
- Add instructions.md with detailed command documentation
- Add session-template.yaml for consistent session structure
- Support token tracking, milestone progress, and context preservation
2025-12-31 21:06:14 +04:00
Ibrahim Elsahafy b59fcc7d55 feat(core): add token isolation architecture for multi-agent efficiency
- Add spawn-agent.xml task for isolated subprocess spawning
- Add token-isolation.md documentation with architecture patterns
- Support parallel and sequential agent collaboration patterns
- Preserve main session context by isolating agent workloads
2025-12-31 21:05:59 +04:00
55 changed files with 3817 additions and 47 deletions

View File

@@ -161,7 +161,7 @@ Production workflows inherit from BMM and add game-specific overrides.
 **Command:** `sprint-planning`
 **Agent:** Game Scrum Master
 **Input:** GDD with epics
-**Output:** `{output_folder}/sprint-status.yaml`
+**Output:** `{implementation_artifacts}/sprint-status.yaml`
 **Description:**
 Generates or updates sprint tracking from epic files. Sets up the sprint backlog and tracking.

View File

@@ -40,6 +40,8 @@ First know there is the full BMad Method Process and then there is a Quick Flow
 - Implementation in minutes, not days
 - Has a specialized single agent that does all of this: **[Quick Flow Solo Dev Agent](./quick-flow-solo-dev.md)**
+- **TEA engagement (optional)** - Choose TEA engagement: none, TEA-only (standalone), or integrated by track. See **[Test Architect Guide](./test-architecture.md)**.
+
 ## 🤖 Agents and Collaboration

 Complete guide to BMM's AI agent team:

View File

@@ -179,6 +179,16 @@ Once epics and stories are created:
 **Why run this?** It ensures all your planning assets align properly before you start building.

+#### Optional: TEA Engagement
+
+Testing is not mandated by BMad. Decide how you want to engage TEA:
+
+- **No TEA** - Use your existing team testing approach
+- **TEA-only (Standalone)** - Use TEA workflows with your own requirements and environment
+- **TEA-integrated** - Use TEA as part of the BMad Method or Enterprise flow
+
+See the [Test Architect Guide](./test-architecture.md) for the five TEA engagement models and recommended sequences.
+
 #### Context Management Tips

 - **Use 200k+ context models** for best results (Claude Sonnet 4.5, GPT-4, etc.)
@@ -211,7 +221,14 @@ Once planning and architecture are complete, you'll move to Phase 4. **Important
 3. Tell the agent: "Run dev-story"
 4. The DEV agent will implement the story and update the sprint status

-#### 3.4 Review the Code (Optional but Recommended)
+#### 3.4 Generate Guardrail Tests (Optional)
+
+1. **Start a new chat** with the **TEA agent**
+2. Wait for the menu
+3. Tell the agent: "Run automate"
+4. The TEA agent generates or expands tests to act as guardrails
+
+#### 3.5 Review the Code (Optional but Recommended)

 1. **Start a new chat** with the **DEV agent**
 2. Wait for the menu
@@ -224,7 +241,8 @@ For each subsequent story, repeat the cycle using **fresh chats** for each workflow:
 1. **New chat** → SM agent → "Run create-story"
 2. **New chat** → DEV agent → "Run dev-story"
-3. **New chat** → DEV agent → "Run code-review" (optional but recommended)
+3. **New chat** → TEA agent → "Run automate" (optional)
+4. **New chat** → DEV agent → "Run code-review" (optional but recommended)

 After completing all stories in an epic:

View File

@@ -6,6 +6,38 @@
 - **Mission:** Deliver actionable quality strategies, automation coverage, and gate decisions that scale with project complexity and compliance demands.
 - **Use When:** BMad Method or Enterprise track projects, integration risk is non-trivial, brownfield regression risk exists, or compliance/NFR evidence is required. (Quick Flow projects typically don't require TEA)

+## Choose Your TEA Engagement Model
+
+BMad does not mandate TEA. There are five valid ways to use it (or skip it). Pick one intentionally.
+
+1. **No TEA**
+   - Skip all TEA workflows. Use your existing team testing approach.
+2. **TEA-only (Standalone)**
+   - Use TEA on a non-BMad project. Bring your own requirements, acceptance criteria, and environments.
+   - Typical sequence: `*test-design` (system or epic) -> `*atdd` and/or `*automate` -> optional `*test-review` -> `*trace` for coverage and gate decisions.
+   - Run `*framework` or `*ci` only if you want TEA to scaffold the harness or pipeline.
+3. **Integrated: Greenfield - BMad Method (Simple/Standard Work)**
+   - Phase 3: system-level `*test-design`, then `*framework` and `*ci`.
+   - Phase 4: per-epic `*test-design`, optional `*atdd`, then `*automate` and optional `*test-review`.
+   - Gate (Phase 2): `*trace`.
+4. **Integrated: Brownfield - BMad Method or Enterprise (Simple or Complex)**
+   - Phase 2: baseline `*trace`.
+   - Phase 3: system-level `*test-design`, then `*framework` and `*ci`.
+   - Phase 4: per-epic `*test-design` focused on regression and integration risks.
+   - Gate (Phase 2): `*trace`; `*nfr-assess` (if not done earlier).
+   - For brownfield BMad Method, follow the same flow with `*nfr-assess` optional.
+5. **Integrated: Greenfield - Enterprise Method (Enterprise/Compliance Work)**
+   - Phase 2: `*nfr-assess`.
+   - Phase 3: system-level `*test-design`, then `*framework` and `*ci`.
+   - Phase 4: per-epic `*test-design`, plus `*atdd`/`*automate`/`*test-review`.
+   - Gate (Phase 2): `*trace`; archive artifacts as needed.
+
+If you are unsure, default to the integrated path for your track and adjust later.
+
 ## TEA Workflow Lifecycle

 TEA integrates into the BMad development lifecycle during Solutioning (Phase 3) and Implementation (Phase 4):
@@ -16,6 +48,9 @@ graph TB
     subgraph Phase2["<b>Phase 2: PLANNING</b>"]
         PM["<b>PM: *prd (creates PRD with FRs/NFRs)</b>"]
         PlanNote["<b>Business requirements phase</b>"]
+        NFR2["<b>TEA: *nfr-assess (optional, enterprise)</b>"]
+        PM -.-> NFR2
+        NFR2 -.-> PlanNote
         PM -.-> PlanNote
     end
@@ -23,8 +58,8 @@ graph TB
         Architecture["<b>Architect: *architecture</b>"]
         EpicsStories["<b>PM/Architect: *create-epics-and-stories</b>"]
         TestDesignSys["<b>TEA: *test-design (system-level)</b>"]
-        Framework["<b>TEA: *framework</b>"]
-        CI["<b>TEA: *ci</b>"]
+        Framework["<b>TEA: *framework (optional if needed)</b>"]
+        CI["<b>TEA: *ci (optional if needed)</b>"]
         GateCheck["<b>Architect: *implementation-readiness</b>"]
         Architecture --> EpicsStories
         Architecture --> TestDesignSys
@@ -174,7 +209,7 @@ npm install -D @seontechnologies/playwright-utils
 **Enable during BMAD installation** by answering "Yes" when prompted.

-**Supported utilities (11 total):**
+**Supported utilities (10 total):**

 - api-request, network-recorder, auth-session, intercept-network-call, recurse
 - log, file-utils, burn-in, network-error-monitor
@@ -429,7 +464,7 @@ Provides fixture-based utilities that integrate into TEA's test generation and r
 Benefit: Faster CI feedback, HTTP error detection

-**Utilities available** (11 total): api-request, network-recorder, auth-session, intercept-network-call, recurse, log, file-utils, burn-in, network-error-monitor, fixtures-composition
+**Utilities available** (10 total): api-request, network-recorder, auth-session, intercept-network-call, recurse, log, file-utils, burn-in, network-error-monitor, fixtures-composition

 **Enable during BMAD installation** by answering "Yes" when prompted, or manually set `tea_use_playwright_utils: true` in `_bmad/bmm/config.yaml`.

View File

@@ -98,8 +98,9 @@ Stories move through these states in the sprint status file:
 1. SM runs `create-story`
 2. DEV runs `dev-story`
-3. DEV runs `code-review`
-4. If code review fails: DEV fixes issues in `dev-story`, then re-runs `code-review`
+3. (Optional) TEA runs `*automate` to generate or expand guardrail tests
+4. DEV runs `code-review`
+5. If code review fails: DEV fixes issues in `dev-story`, then re-runs `code-review`

 **After Epic Complete:**

View File

@@ -434,7 +434,7 @@ Architecture documents are living. Update them as you learn during implementation.
 **Key Difference:** Enterprise adds optional extended workflows AFTER architecture but BEFORE create-epics-and-stories. Everything else is identical to BMad Method.

-**Note:** TEA (Test Architect) operates across all phases and validates architecture testability but is not a Phase 3-specific workflow. See [Test Architecture Guide](../../../../docs/modules/bmm-bmad-method/test-architecture.md) for TEA's full lifecycle integration.
+**Note:** TEA (Test Architect) operates across all phases and validates architecture testability but is not a Phase 3-specific workflow. See [Test Architecture Guide](./test-architecture.md) for TEA's full lifecycle integration.

 ---

View File

@@ -0,0 +1,162 @@
# Inter-Agent Messenger Configuration
# Enables formal handoff protocols between agents

name: inter-agent-messenger
version: "1.0.0"
description: "Formal handoff and communication protocols between BMAD agents"

# Message queue location
queue_dir: "{project-root}/_bmad-output/messenger"
queue_file: "{queue_dir}/message-queue.yaml"
archive_dir: "{queue_dir}/archive"

# Message types
message_types:
  handoff:
    description: "Transfer work from one agent to another"
    required_fields:
      - from_agent
      - to_agent
      - artifact_path
      - context_summary
      - next_actions
    priority: high
  review:
    description: "Request review of completed work"
    required_fields:
      - from_agent
      - to_agent
      - artifact_path
      - review_type
    priority: medium
  clarify:
    description: "Request clarification on requirements or decisions"
    required_fields:
      - from_agent
      - to_agent
      - question
      - context
    priority: high
  escalate:
    description: "Escalate issue to user attention"
    required_fields:
      - from_agent
      - issue
      - severity
      - recommendation
    priority: critical
  notify:
    description: "Inform other agents of status or decisions"
    required_fields:
      - from_agent
      - to_agents # Can be list or "all"
      - message
    priority: low
  collaborate:
    description: "Request collaborative input from multiple agents"
    required_fields:
      - from_agent
      - to_agents
      - topic
      - deadline
    priority: medium

# Standard routing rules
routing:
  # Phase 1 → Phase 2 handoffs
  analyst_to_pm:
    trigger: "Product brief complete"
    from: analyst
    to: pm
    payload:
      - product_brief_path
      - key_insights
      - recommended_priorities
  pm_to_architect:
    trigger: "PRD complete"
    from: pm
    to: architect
    payload:
      - prd_path
      - priority_features
      - technical_constraints
      - timeline_expectations
  pm_to_ux:
    trigger: "PRD complete with UI"
    from: pm
    to: ux-designer
    payload:
      - prd_path
      - user_personas
      - key_user_flows
  # Phase 2 → Phase 3 handoffs
  architect_to_sm:
    trigger: "Architecture approved"
    from: architect
    to: sm
    payload:
      - architecture_path
      - tech_decisions
      - component_boundaries
      - api_contracts
  ux_to_sm:
    trigger: "UX design complete"
    from: ux-designer
    to: sm
    payload:
      - ux_design_path
      - component_library
      - interaction_patterns
  # Phase 3 → Phase 4 handoffs
  sm_to_dev:
    trigger: "Story ready for dev"
    from: sm
    to: dev
    payload:
      - story_path
      - acceptance_criteria
      - technical_notes
      - dependencies
  dev_to_tea:
    trigger: "Implementation complete"
    from: dev
    to: tea
    payload:
      - story_path
      - files_changed
      - test_coverage
      - review_request
  # Review flows
  tea_to_dev:
    trigger: "Review complete with issues"
    from: tea
    to: dev
    payload:
      - review_findings
      - severity_breakdown
      - required_actions

# Priority levels
priorities:
  critical: 1
  high: 2
  medium: 3
  low: 4

# Message retention
retention:
  active_messages_max: 100
  archive_after_days: 7
  delete_archived_after_days: 30

View File

@@ -0,0 +1,212 @@
# Inter-Agent Messenger System
## Overview
The Inter-Agent Messenger enables formal handoff protocols between BMAD agents. It provides structured communication for work transitions, review requests, clarifications, and escalations.
## Architecture
```
┌─────────────┐    ┌─────────────────┐    ┌─────────────┐
│   Agent A   │───▶│  Message Queue  │───▶│   Agent B   │
│  (sender)   │    │   (YAML file)   │    │  (receiver) │
└─────────────┘    └─────────────────┘    └─────────────┘
                            │
                            ▼
                    ┌─────────────┐
                    │   Archive   │
                    └─────────────┘
```
## Message Types
### handoff
Transfer work from one agent to another with full context.
**Example:**
```yaml
type: handoff
from: pm
to: architect
payload:
  artifact_path: "_bmad-output/planning-artifacts/prd.md"
  context_summary: "PRD complete for user auth feature"
  next_actions:
    - Review technical requirements
    - Identify infrastructure needs
    - Propose architecture patterns
```
### review
Request review of completed work.
**Example:**
```yaml
type: review
from: dev
to: tea
payload:
  artifact_path: "_bmad-output/implementation-artifacts/stories/1-2-user-login.md"
  review_type: "code-review"
  files_changed:
    - src/auth/login.ts
    - src/auth/login.test.ts
```
### clarify
Request clarification on requirements or decisions.
**Example:**
```yaml
type: clarify
from: dev
to: architect
payload:
  question: "Should authentication use JWT or session cookies?"
  context: "Story 1-2 requires user login but auth method not specified in architecture"
```
### escalate
Escalate issue to user attention (critical).
**Example:**
```yaml
type: escalate
from: dev
to: user
payload:
  issue: "Blocking dependency conflict between React 18 and legacy charting library"
  severity: "blocker"
  recommendation: "Either upgrade charting library or downgrade to React 17"
  impact: "Cannot proceed with Epic 2 until resolved"
```
### notify
Inform agents of status or decisions.
**Example:**
```yaml
type: notify
from: architect
to_agents: all
payload:
  message: "Architecture decision: Using PostgreSQL with Prisma ORM"
  decision_doc: "_bmad-output/planning-artifacts/architecture.md#database"
```
### collaborate
Request collaborative input from multiple agents.
**Example:**
```yaml
type: collaborate
from: pm
to_agents:
  - architect
  - ux-designer
  - tea
payload:
  topic: "Feature complexity assessment for real-time collaboration"
  deadline: "2025-01-16T17:00:00Z"
  context: "Client requesting WebSocket-based real-time editing"
```
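Taken together, the type rules above boil down to one validation step: look up the type's `required_fields` (as declared in messenger-config.yaml) and reject any message whose fields are missing one of them. A minimal sketch of that check — the table mirrors the config above, but this is illustrative, not the shipped task code:

```python
# Illustrative sketch of required-field validation for messenger messages.
# The table mirrors messenger-config.yaml; fields are checked as one flat dict.
REQUIRED_FIELDS = {
    "handoff":     ["from_agent", "to_agent", "artifact_path", "context_summary", "next_actions"],
    "review":      ["from_agent", "to_agent", "artifact_path", "review_type"],
    "clarify":     ["from_agent", "to_agent", "question", "context"],
    "escalate":    ["from_agent", "issue", "severity", "recommendation"],
    "notify":      ["from_agent", "to_agents", "message"],
    "collaborate": ["from_agent", "to_agents", "topic", "deadline"],
}

def validate_message(msg_type: str, fields: dict) -> list[str]:
    """Return the missing required fields for this type (empty list means valid)."""
    if msg_type not in REQUIRED_FIELDS:
        raise ValueError(f"unknown message type: {msg_type}")
    return [f for f in REQUIRED_FIELDS[msg_type] if f not in fields]
```

A sender would call this before enqueueing and halt on a non-empty result, matching the task's "HALT with validation error message" behavior.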
## Standard Handoff Routes
### Phase 1 → Phase 2
| From | To | Trigger | Payload |
|------|-----|---------|---------|
| analyst | pm | Product brief complete | brief_path, key_insights |
| pm | architect | PRD complete | prd_path, priority_features |
| pm | ux-designer | PRD with UI complete | prd_path, user_personas |
### Phase 2 → Phase 3
| From | To | Trigger | Payload |
|------|-----|---------|---------|
| architect | sm | Architecture approved | architecture_path, tech_decisions |
| ux-designer | sm | UX design complete | ux_path, component_library |
### Phase 3 → Phase 4
| From | To | Trigger | Payload |
|------|-----|---------|---------|
| sm | dev | Story ready | story_path, acceptance_criteria |
| dev | tea | Implementation complete | story_path, files_changed |
| tea | dev | Review with issues | review_findings, required_actions |
## Priority Levels
| Priority | Value | Use Case |
|----------|-------|----------|
| critical | 1 | Blockers, escalations |
| high | 2 | Handoffs, clarifications |
| medium | 3 | Reviews, collaborations |
| low | 4 | Notifications |
## Usage
### Sending a Message
```xml
<exec task="send-message">
  <param name="type">handoff</param>
  <param name="from_agent">pm</param>
  <param name="to_agent">architect</param>
  <param name="payload">
    artifact_path: "_bmad-output/planning-artifacts/prd.md"
    context_summary: "PRD complete, ready for architecture"
    next_actions:
      - Review technical requirements
      - Design system architecture
  </param>
</exec>
```
### Receiving Messages
```xml
<exec task="receive-message">
  <param name="agent">architect</param>
  <param name="type_filter">handoff</param>
</exec>
```
## Queue File Structure
```yaml
# _bmad-output/messenger/message-queue.yaml
messages:
  - message_id: "MSG-20250115-a1b2"
    type: "handoff"
    from: "pm"
    to: "architect"
    priority: "high"
    created: "2025-01-15T10:30:00Z"
    status: "pending"
    payload:
      artifact_path: "_bmad-output/planning-artifacts/prd.md"
      context_summary: "PRD complete"
      next_actions:
        - Review requirements
        - Design architecture
  - message_id: "MSG-20250115-c3d4"
    type: "notify"
    from: "architect"
    to_agents: "all"
    priority: "low"
    created: "2025-01-15T11:00:00Z"
    status: "read"
    payload:
      message: "Using PostgreSQL with Prisma"
```
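The receive path described earlier — keep messages addressed to the agent (directly, via a `to_agents` list, or via `"all"`), drop anything not `pending`, then order by priority (critical first) and age (oldest first) — can be sketched against this queue shape. Assumes the file has already been parsed into a list of dicts with a YAML loader; priority values follow the table below:

```python
# Sketch of receive-side selection over the message-queue.yaml structure above.
PRIORITY_RANK = {"critical": 1, "high": 2, "medium": 3, "low": 4}

def pending_messages_for(agent: str, messages: list[dict]) -> list[dict]:
    """Pending messages addressed to `agent`, critical-first then oldest-first."""
    def addressed(msg: dict) -> bool:
        to_agents = msg.get("to_agents")
        return (
            msg.get("to") == agent
            or to_agents == "all"
            or (isinstance(to_agents, list) and agent in to_agents)
        )

    matching = [m for m in messages if addressed(m) and m.get("status") == "pending"]
    # ISO-8601 timestamps compare correctly as strings, so "created" works as tiebreak.
    return sorted(matching, key=lambda m: (PRIORITY_RANK[m["priority"]], m["created"]))
```

This is just the filter/sort rule from the receive-message task expressed as code, not an official implementation.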
## Best Practices
1. **Always include context** - Don't assume receiving agent knows the background
2. **Use appropriate priority** - Reserve critical for true blockers
3. **Include artifact paths** - Reference documents, not content
4. **Specify next actions** - Clear handoff means faster pickup
5. **Check messages at workflow start** - Agents should check queue before beginning work

View File

@@ -0,0 +1,68 @@
<?xml version="1.0" encoding="UTF-8"?>
<task id="receive-message" name="Receive Inter-Agent Message" standalone="false">
  <description>Receive and process messages from the messenger queue</description>

  <config>
    <source>{project-root}/_bmad/core/messenger/messenger-config.yaml</source>
    <queue_file>{project-root}/_bmad-output/messenger/message-queue.yaml</queue_file>
    <archive_dir>{project-root}/_bmad-output/messenger/archive</archive_dir>
  </config>

  <parameters>
    <param name="agent" required="true" description="Agent checking for messages"/>
    <param name="type_filter" required="false" description="Filter by message type"/>
    <param name="mark_read" required="false" default="true" description="Mark messages as read"/>
  </parameters>

  <execution>
    <step n="1" goal="Load message queue">
      <action>Load queue from {queue_file}</action>
      <action if="queue not exists or empty">Return: "No messages in queue"</action>
    </step>
    <step n="2" goal="Filter messages for agent">
      <action>Filter messages where:
        - to == {agent} OR
        - to_agents contains {agent} OR
        - to_agents == "all"
      </action>
      <action>Filter by status == "pending"</action>
      <action if="type_filter specified">Filter by type == {type_filter}</action>
      <action>Sort by priority (critical first), then by created (oldest first)</action>
    </step>
    <step n="3" goal="Process messages">
      <action if="no matching messages">Return: "No pending messages for {agent}"</action>
      <action>For each message:
        - Display message summary
        - If mark_read == true, update status to "read"
      </action>
    </step>
    <step n="4" goal="Update queue">
      <action if="mark_read == true">Save updated queue to {queue_file}</action>
    </step>
    <step n="5" goal="Return messages">
      <action>Return list of messages with full payloads</action>
      <output>
        Messages for {agent}: {count}
        {for each message}
        ---
        [{priority}] {type} from {from_agent}
        ID: {message_id}
        Received: {created}
        {payload_summary}
        ---
        {end for}
      </output>
    </step>
  </execution>

  <output>
    <field name="messages" description="Array of message objects"/>
    <field name="count" description="Number of messages"/>
  </output>
</task>

View File

@@ -0,0 +1,72 @@
<?xml version="1.0" encoding="UTF-8"?>
<task id="send-message" name="Send Inter-Agent Message" standalone="false">
  <description>Send a message between BMAD agents using the messenger system</description>

  <config>
    <source>{project-root}/_bmad/core/messenger/messenger-config.yaml</source>
    <queue_file>{project-root}/_bmad-output/messenger/message-queue.yaml</queue_file>
  </config>

  <parameters>
    <param name="type" required="true" description="Message type: handoff, review, clarify, escalate, notify, collaborate"/>
    <param name="from_agent" required="true" description="Sending agent identifier"/>
    <param name="to_agent" required="false" description="Receiving agent (not needed for escalate)"/>
    <param name="to_agents" required="false" description="List of receiving agents (for notify/collaborate)"/>
    <param name="payload" required="true" description="Message payload object"/>
    <param name="priority" required="false" default="medium" description="Message priority"/>
  </parameters>

  <execution>
    <step n="1" goal="Validate message type and required fields">
      <action>Load messenger config from {config_source}</action>
      <action>Validate type is one of: handoff, review, clarify, escalate, notify, collaborate</action>
      <action>Check required_fields for message type are present in payload</action>
      <action if="validation fails">HALT with validation error message</action>
    </step>
    <step n="2" goal="Create message object">
      <action>Generate unique message_id: "MSG-{timestamp}-{random4}"</action>
      <action>Build message object:
        ```yaml
        message_id: "{message_id}"
        type: "{type}"
        from: "{from_agent}"
        to: "{to_agent}" or "{to_agents}"
        priority: "{priority}"
        created: "{timestamp}"
        status: "pending"
        payload: {payload}
        ```
      </action>
    </step>
    <step n="3" goal="Ensure queue directory exists">
      <action>Check if {queue_dir} exists</action>
      <action if="not exists">Create directory {queue_dir}</action>
    </step>
    <step n="4" goal="Add message to queue">
      <action>Load existing queue from {queue_file} (or create empty if not exists)</action>
      <action>Append new message to messages array</action>
      <action>Sort messages by priority (critical first)</action>
      <action>Save updated queue to {queue_file}</action>
    </step>
    <step n="5" goal="Return confirmation">
      <action>Return message_id and status</action>
      <output>
        Message sent successfully.
        ID: {message_id}
        Type: {type}
        From: {from_agent}
        To: {to_agent}
        Priority: {priority}
      </output>
    </step>
  </execution>

  <output>
    <field name="message_id" description="Unique message identifier"/>
    <field name="status" description="sent | failed"/>
  </output>
</task>

View File

@@ -0,0 +1,194 @@
<?xml version="1.0" encoding="UTF-8"?>
<task id="pipeline-orchestrator" name="Pipeline Orchestrator" standalone="true">
  <description>Orchestrate multi-stage agent pipelines with dependency management</description>

  <config>
    <source>{project-root}/_bmad/core/pipeline/pipeline-config.yaml</source>
    <pipeline_dir>{project-root}/_bmad-output/pipelines</pipeline_dir>
  </config>

  <commands>
    <command name="create" description="Create a new pipeline from template or custom definition"/>
    <command name="run" description="Execute a pipeline"/>
    <command name="status" description="Show pipeline status"/>
    <command name="list" description="List available pipelines and templates"/>
    <command name="abort" description="Abort a running pipeline"/>
    <command name="resume" description="Resume a failed/paused pipeline"/>
  </commands>

  <execution>
    <!-- PIPELINE CREATE -->
    <step n="1" goal="Handle create command" condition="command == 'create'">
      <action>Ask user for pipeline source:
        1. Use template (show available templates)
        2. Define custom pipeline
      </action>
      <check if="user selects template">
        <action>List templates from config: full_sdlc, quick_flow, analysis_only, design_review, test_suite</action>
        <action>User selects template</action>
        <action>Copy template stages to new pipeline definition</action>
      </check>
      <check if="user defines custom">
        <action>Ask for pipeline name</action>
        <action>Ask for stages (agent, parallel, depends_on, outputs)</action>
        <action>Build pipeline definition</action>
      </check>
      <action>Generate pipeline_id: "PIPE-{YYYYMMDD}-{name}"</action>
      <action>Create pipeline file: {pipeline_dir}/{pipeline_id}.yaml</action>
      <action>Initialize all stages as "pending"</action>
      <output>
        Pipeline created: {pipeline_id}
        Stages: {stage_count}
        Estimated duration: {estimate}
        Run with: PIPELINE run {pipeline_id}
      </output>
    </step>

    <!-- PIPELINE RUN -->
    <step n="2" goal="Handle run command" condition="command == 'run'">
      <action>Load pipeline from {pipeline_id}.yaml</action>
      <action>Validate pipeline definition (agents exist, dependencies valid)</action>
      <action>Set pipeline status to "running"</action>
      <action>Initialize execution state tracking</action>
      <loop while="pending or queued stages exist">
        <action>Find stages where:
          - Status == "pending" AND
          - All depends_on stages are "completed"
        </action>
        <action>Mark found stages as "queued"</action>
        <!-- Execute queued stages -->
        <action for_each="queued stage">
          <check if="stage.parallel == true AND multiple queued stages">
            <action>Launch agents in parallel using Task tool with run_in_background=true</action>
            <action>Track agent_ids for each stage</action>
          </check>
          <check if="stage.parallel == false OR single queued stage">
            <action>Launch agent using Task tool with run_in_background=false</action>
            <action>Wait for completion</action>
          </check>
        </action>
        <!-- Collect results -->
        <action>For parallel stages: Use TaskOutput to collect results</action>
        <action>Update stage status based on result (completed/failed)</action>
        <action>Store outputs in {pipeline_dir}/outputs/{stage_name}/</action>
        <!-- Handle failures -->
        <check if="stage failed AND error_handling == 'halt'">
          <action>Mark pipeline as "failed"</action>
          <action>Mark dependent stages as "skipped"</action>
          <action>HALT: "Pipeline failed at stage: {stage_name}"</action>
        </check>
        <check if="stage failed AND error_handling == 'skip_dependents'">
          <action>Mark dependent stages as "skipped"</action>
          <action>Continue with non-dependent stages</action>
        </check>
        <action>Update pipeline file with current state</action>
      </loop>
      <action>Mark pipeline as "completed"</action>
      <output>
        Pipeline completed: {pipeline_id}
        Results:
        {for each stage}
        - {stage_name}: {status} ({duration})
        {end for}
        Outputs saved to: {pipeline_dir}/outputs/
      </output>
    </step>

    <!-- PIPELINE STATUS -->
    <step n="3" goal="Handle status command" condition="command == 'status'">
      <action>Load pipeline from {pipeline_id}.yaml (or active-pipeline.yaml)</action>
      <action>Calculate progress percentage</action>
      <output>
        Pipeline: {pipeline_id}
        Status: {overall_status}
        Progress: [{progress_bar}] {percentage}%
        Stages:
        {for each stage}
        [{status_icon}] {stage_name}
          Agents: {agents}
          Duration: {duration or "pending"}
          Output: {output_path or "n/a"}
        {end for}
        {if running}
        Currently executing: {current_stage}
        Estimated remaining: {remaining_estimate}
        {end if}
      </output>
    </step>

    <!-- PIPELINE LIST -->
    <step n="4" goal="Handle list command" condition="command == 'list'">
      <action>Scan {pipeline_dir} for pipeline files</action>
      <action>Load templates from config</action>
      <output>
        Available Templates:
        {for each template}
        - {template_name}: {description}
          Stages: {stage_names}
        {end for}
        Existing Pipelines:
        {for each pipeline}
        - {pipeline_id}: {status} ({created_date})
        {end for}
      </output>
    </step>

    <!-- PIPELINE ABORT -->
    <step n="5" goal="Handle abort command" condition="command == 'abort'">
      <action>Load running pipeline</action>
      <action>Send abort signal to running agents</action>
      <action>Mark running stages as "aborted"</action>
      <action>Mark pipeline as "aborted"</action>
      <output>
        Pipeline {pipeline_id} aborted.
        Completed stages: {completed_count}
        Aborted stages: {aborted_count}
      </output>
    </step>

    <!-- PIPELINE RESUME -->
    <step n="6" goal="Handle resume command" condition="command == 'resume'">
      <action>Load failed/aborted pipeline</action>
      <action>Identify failed/aborted stages</action>
      <action>Ask user: Retry failed stages or skip?</action>
      <check if="retry">
        <action>Reset failed stages to "pending"</action>
        <action>Continue with run logic</action>
      </check>
      <check if="skip">
        <action>Mark failed stages as "skipped"</action>
        <action>Continue with remaining stages</action>
      </check>
    </step>
  </execution>

  <output>
    <field name="pipeline_id" description="Pipeline identifier"/>
    <field name="status" description="Overall pipeline status"/>
    <field name="stages" description="Array of stage results"/>
    <field name="outputs" description="Map of output file paths"/>
  </output>
</task>


@ -0,0 +1,157 @@
# Pipeline Orchestrator Configuration
# Enables multi-agent pipelines with parallel and sequential execution
name: pipeline-orchestrator
version: "1.0.0"
description: "Orchestrate multi-stage agent pipelines with dependency management"
# Pipeline storage
pipeline_dir: "{project-root}/_bmad-output/pipelines"
active_pipeline_file: "{pipeline_dir}/active-pipeline.yaml"
pipeline_archive_dir: "{pipeline_dir}/archive"
# Execution modes
execution_modes:
sequential:
description: "Execute stages one after another"
use_when: "Each stage depends on previous output"
parallel:
description: "Execute independent stages simultaneously"
use_when: "Stages have no dependencies on each other"
hybrid:
description: "Mix of parallel and sequential based on dependencies"
use_when: "Complex pipelines with partial dependencies"
# Pipeline templates
templates:
full_sdlc:
name: "Full SDLC Pipeline"
description: "Complete software development lifecycle"
stages:
- name: analysis
agents: [analyst]
parallel: false
outputs: [product_brief]
- name: requirements
agents: [pm]
parallel: false
depends_on: [analysis]
outputs: [prd]
- name: design
agents: [architect, ux-designer]
parallel: true
depends_on: [requirements]
outputs: [architecture, ux_design]
- name: planning
agents: [sm]
parallel: false
depends_on: [design]
outputs: [epics_and_stories]
- name: implementation
agents: [dev]
parallel: false
depends_on: [planning]
outputs: [code, tests]
- name: review
agents: [tea]
parallel: false
depends_on: [implementation]
outputs: [review_report]
quick_flow:
name: "Quick Flow Pipeline"
description: "Rapid development with minimal ceremony"
stages:
- name: spec
agents: [quick-flow-solo-dev]
parallel: false
outputs: [tech_spec]
- name: implement
agents: [quick-flow-solo-dev]
parallel: false
depends_on: [spec]
outputs: [code, tests]
analysis_only:
name: "Analysis Pipeline"
description: "Product analysis and requirements"
stages:
- name: research
agents: [analyst]
parallel: false
outputs: [research_findings]
- name: brief
agents: [analyst]
parallel: false
depends_on: [research]
outputs: [product_brief]
- name: requirements
agents: [pm]
parallel: false
depends_on: [brief]
outputs: [prd]
design_review:
name: "Design Review Pipeline"
description: "Architecture and UX design with review"
stages:
- name: architecture
agents: [architect]
parallel: false
outputs: [architecture]
- name: ux
agents: [ux-designer]
parallel: false
depends_on: [architecture]
outputs: [ux_design]
- name: review
agents: [analyst, pm]
parallel: true
depends_on: [architecture, ux]
outputs: [design_review]
test_suite:
name: "Test Suite Pipeline"
description: "Comprehensive testing workflow"
stages:
- name: test_design
agents: [tea]
parallel: false
outputs: [test_plan]
- name: test_impl
agents: [tea]
parallel: false
depends_on: [test_design]
outputs: [test_suite]
- name: security
agents: [tea]
parallel: false
depends_on: [test_impl]
outputs: [security_report]
- name: trace
agents: [tea]
parallel: false
depends_on: [test_impl]
outputs: [traceability_matrix]
# Stage status values
status_values:
- pending # Not yet started
- queued # Ready to start (dependencies met)
- running # Currently executing
- completed # Finished successfully
- failed # Finished with errors
- skipped # Skipped (dependency failed)
- blocked # Waiting for dependencies
# Error handling
error_handling:
on_stage_failure: "halt" # halt, skip_dependents, retry
max_retries: 2
retry_delay_seconds: 30
# Output management
output_management:
intermediate_outputs_dir: "{pipeline_dir}/outputs"
preserve_intermediate: true
compress_on_complete: false


@ -0,0 +1,249 @@
# Pipeline Orchestrator
## Overview
The Pipeline Orchestrator enables multi-stage agent pipelines with automatic dependency management. It supports sequential, parallel, and hybrid execution modes.
## Architecture
```
┌─────────────────────────────────────────────────────────────────────┐
│ PIPELINE ORCHESTRATOR │
├─────────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────┐ ┌─────────┐ ┌─────────┐ │
│ │ Stage 1 │───▶│ Stage 2 │───▶│ Stage 3 │ (Sequential) │
│ └─────────┘ └─────────┘ └─────────┘ │
│ │
│ ┌─────────┐ │
│ │ Stage A │─┐ │
│ └─────────┘ │ ┌─────────┐ │
│ ├─▶│ Stage D │ (Parallel → Sequential) │
│ ┌─────────┐ │ └─────────┘ │
│ │ Stage B │─┘ │
│ └─────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────┘
```
## Commands
### PIPELINE create
Create a new pipeline from template or custom definition.
```
PIPELINE create
PIPELINE create --template full_sdlc
PIPELINE create --name "Custom Pipeline"
```
### PIPELINE run
Execute a pipeline.
```
PIPELINE run PIPE-20250115-myproject
PIPELINE run --active
```
### PIPELINE status
Show pipeline status.
```
PIPELINE status
PIPELINE status PIPE-20250115-myproject
```
**Example output:**
```
Pipeline: PIPE-20250115-myproject
Status: running
Progress: [████████░░░░░░░░░░░░] 40%
Stages:
[✓] analysis
Agents: analyst
Duration: 12m 34s
Output: pipelines/outputs/analysis/
[✓] requirements
Agents: pm
Duration: 18m 22s
Output: pipelines/outputs/requirements/
[►] design
Agents: architect, ux-designer (parallel)
Duration: 8m 15s (running)
Output: pending
[○] planning
Agents: sm
Duration: pending
Output: n/a
Currently executing: design
Estimated remaining: ~45 minutes
```
### PIPELINE list
List available pipelines and templates.
```
PIPELINE list
```
### PIPELINE abort
Abort a running pipeline.
```
PIPELINE abort PIPE-20250115-myproject
```
### PIPELINE resume
Resume a failed or paused pipeline.
```
PIPELINE resume PIPE-20250115-myproject
```
## Pipeline Templates
### full_sdlc
Complete software development lifecycle.
```yaml
stages:
- analysis (analyst)
- requirements (pm) → depends on analysis
- design (architect + ux-designer, parallel) → depends on requirements
- planning (sm) → depends on design
- implementation (dev) → depends on planning
- review (tea) → depends on implementation
```
### quick_flow
Rapid development with minimal ceremony.
```yaml
stages:
- spec (quick-flow-solo-dev)
- implement (quick-flow-solo-dev) → depends on spec
```
### analysis_only
Product analysis and requirements.
```yaml
stages:
- research (analyst)
- brief (analyst) → depends on research
- requirements (pm) → depends on brief
```
### design_review
Architecture and UX design with review.
```yaml
stages:
- architecture (architect)
- ux (ux-designer) → depends on architecture
- review (analyst + pm, parallel) → depends on architecture, ux
```
### test_suite
Comprehensive testing workflow.
```yaml
stages:
- test_design (tea)
- test_impl (tea) → depends on test_design
- security (tea) → depends on test_impl
- trace (tea) → depends on test_impl
```
## Custom Pipeline Definition
```yaml
pipeline_id: "PIPE-20250115-custom"
name: "Custom Pipeline"
description: "My custom pipeline"
created: "2025-01-15T10:00:00Z"
status: "pending"
stages:
- name: "stage1"
agents: ["analyst"]
parallel: false
depends_on: []
outputs: ["analysis_report"]
status: "pending"
- name: "stage2"
agents: ["architect", "ux-designer"]
parallel: true
depends_on: ["stage1"]
outputs: ["architecture", "ux_design"]
status: "pending"
- name: "stage3"
agents: ["dev"]
parallel: false
depends_on: ["stage2"]
outputs: ["implementation"]
status: "pending"
```
## Execution Flow
1. **Initialize**: Load pipeline definition, validate agents exist
2. **Queue**: Find stages with all dependencies completed
3. **Execute**: Run queued stages (parallel or sequential based on config)
4. **Collect**: Gather outputs from completed stages
5. **Update**: Update pipeline state file
6. **Repeat**: Continue until all stages complete or a failure halts the pipeline
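The queue/execute/update loop above can be sketched as follows. Stage dicts mirror the shape of the custom pipeline definition earlier (`name`, `depends_on`, `status`); the function names themselves are illustrative, not part of the orchestrator.

```python
# Minimal sketch of the execution flow, assuming stages shaped like the
# custom pipeline definition: {"name", "depends_on", "status"}.
def ready_stages(stages):
    """Stages whose dependencies have all completed and that haven't run yet."""
    done = {s["name"] for s in stages if s["status"] == "completed"}
    return [s for s in stages
            if s["status"] == "pending" and set(s["depends_on"]) <= done]

def run_pipeline(stages, execute):
    """Repeat queue -> execute -> update until no stage can make progress."""
    while True:
        batch = ready_stages(stages)
        if not batch:
            break
        for stage in batch:
            stage["status"] = "completed" if execute(stage) else "failed"
    return stages
```

Note that a failed stage naturally blocks its dependents: they never become ready, so the loop drains and exits.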
## Error Handling
| Mode | Behavior |
|------|----------|
| halt | Stop pipeline on first failure |
| skip_dependents | Skip stages that depend on failed stage |
| retry | Retry failed stage (up to max_retries) |
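The three modes in the table map to a simple dispatch on stage failure. The config keys match `error_handling` in the orchestrator YAML; the function name and stage dict shape are assumptions for the sketch.

```python
# Illustrative failure handling for the halt / skip_dependents / retry modes.
def on_failure(stage, stages, config):
    mode = config.get("on_stage_failure", "halt")
    if mode == "retry" and stage.get("retries", 0) < config.get("max_retries", 2):
        stage["retries"] = stage.get("retries", 0) + 1
        stage["status"] = "pending"          # re-queue for another attempt
    elif mode == "skip_dependents":
        failed = stage["name"]
        for s in stages:
            if failed in s.get("depends_on", []):
                s["status"] = "skipped"      # dependents can never run
    else:                                    # "halt" (the default)
        raise RuntimeError(f"Pipeline halted: stage {stage['name']} failed")
```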
## Output Management
Pipeline outputs are stored in:
```
_bmad-output/
└── pipelines/
├── PIPE-20250115-myproject.yaml # Pipeline state
└── outputs/
├── analysis/
│ └── product-brief.md
├── requirements/
│ └── prd.md
└── design/
├── architecture.md
└── ux-design.md
```
## Integration with Token Isolation
When executing stages:
1. Each agent runs in isolated subprocess (via Task tool)
2. Outputs written to pipeline output directory
3. Only status summaries return to orchestrator
4. Token budget preserved across multi-stage pipelines
## Best Practices
1. **Use templates** for common workflows
2. **Define dependencies explicitly** for correct execution order
3. **Enable parallel** only for truly independent stages
4. **Monitor progress** with status command
5. **Archive completed** pipelines to maintain clean state


@ -0,0 +1,66 @@
<?xml version="1.0" encoding="UTF-8"?>
<task id="spawn-agent" name="Spawn Isolated Agent" standalone="false">
<description>Spawn an agent in an isolated subprocess with token isolation</description>
<config>
<source>{project-root}/_bmad/core/config.yaml</source>
<token_config>{project-root}/_bmad/bmm/config.yaml:token_management</token_config>
</config>
<parameters>
<param name="agent_type" required="true" description="Type of agent to spawn (analyst, architect, dev, pm, etc.)"/>
<param name="task_description" required="true" description="Brief description of the task (3-5 words)"/>
<param name="prompt" required="true" description="Full prompt/instructions for the agent"/>
<param name="model" required="false" default="sonnet" description="Model to use: sonnet, opus, haiku"/>
<param name="run_in_background" required="false" default="false" description="Run agent in background"/>
<param name="output_file" required="false" description="Path for agent output file"/>
</parameters>
<execution>
<step n="1" goal="Validate parameters">
<action>Verify agent_type is valid (exists in agent-manifest.csv)</action>
<action>Verify prompt is not empty</action>
<action>Set default model to sonnet if not specified</action>
</step>
<step n="2" goal="Prepare agent context">
<action>Load agent persona from {project-root}/_bmad/_config/agent-manifest.csv</action>
<action>Load any agent customizations from {project-root}/_bmad/_config/agents/</action>
<action>Construct full agent prompt with persona + task prompt</action>
</step>
<step n="3" goal="Configure output handling">
<action if="output_file specified">Use specified output path</action>
<action if="output_file not specified">
Generate path: {output_folder}/temp/{agent_type}-{timestamp}.md
</action>
<action>Append output instructions to prompt:
"Write your complete output to: {output_file}
Return only a brief summary (under 500 words) to this conversation."
</action>
</step>
<step n="4" goal="Spawn agent subprocess">
<action>Use Task tool with:
- description: "{agent_type}: {task_description}"
- prompt: {constructed_prompt}
- subagent_type: "general-purpose"
- model: {model}
- run_in_background: {run_in_background}
</action>
</step>
<step n="5" goal="Handle response">
<action if="run_in_background == false">Wait for agent completion</action>
<action if="run_in_background == true">Return agent_id for later retrieval</action>
<action>Return summary and output file path to caller</action>
</step>
</execution>
<output>
<field name="status" description="success | failed | running"/>
<field name="agent_id" description="ID for background agents"/>
<field name="output_file" description="Path to full output"/>
<field name="summary" description="Brief summary of agent work"/>
</output>
</task>


@ -0,0 +1,103 @@
# Token Isolation Architecture
## Overview
Token isolation prevents context bloat in multi-agent scenarios by running agents in isolated subprocesses. Each agent gets its own 150K token context window without consuming the main session's tokens.
## Architecture Diagram
```
┌─────────────────────────────────────────────────────────────────────────┐
│ TOKEN ISOLATION ARCHITECTURE │
├─────────────────────────────────────────────────────────────────────────┤
│ Main Session (150K tokens preserved) │
│ │ │
│ ▼ Task Tool spawns subprocess │
│ ┌────────────────────┐ │
│ │ Agent Instance │ ◄── Own 150K token context │
│ │ (subprocess) │ ◄── Doesn't consume main session │
│ └─────────┬──────────┘ │
│ │ │
│ ▼ Only summary returns │
│ "Task complete. Output: temp/raw/file.md" │
└─────────────────────────────────────────────────────────────────────────┘
```
## Core Rules
1. **ALWAYS** use Task tool to invoke agents (preserves main session tokens)
2. Agents write to files, not return full content to main session
3. Use `run_in_background: true` for parallel independent agents
4. Sequential agents receive previous output via content injection
5. Summaries returned to main session must be < 2000 tokens
## Agent Execution Template
```javascript
Task({
description: "agent-name: brief task description",
prompt: agentPrompt,
subagent_type: "general-purpose",
model: "sonnet", // or "haiku" for quick tasks
run_in_background: false // true for parallel
});
```
## Collaboration Patterns
### Sequential Pattern
```
Agent A → Agent B → Agent C
```
Use when each step depends on previous output.
### Parallel Pattern
```
Agent A ─┐
Agent B ─┼→ Synthesis
Agent C ─┘
```
Use when analyses are independent.
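The fan-out/synthesis shape can be sketched generically with a thread pool. In the real system each "agent" is a Task-tool subprocess, so treat `run_agent` and `synthesize` as stand-ins for illustration, not actual APIs.

```python
# Sketch of the parallel pattern: fan out to independent agents,
# then feed all results into a single synthesis step.
from concurrent.futures import ThreadPoolExecutor

def parallel_then_synthesize(run_agent, agents, synthesize):
    with ThreadPoolExecutor(max_workers=len(agents)) as pool:
        results = list(pool.map(run_agent, agents))  # A, B, C run concurrently
    return synthesize(results)                       # single downstream stage
```

`pool.map` preserves input order, so the synthesis step sees results in the same order the agents were listed.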
### Debate Pattern
```
Proposer ↔ Challenger → Refined Output
```
Use for critical decisions requiring adversarial review.
### War Room Pattern
```
All Agents → Intensive Collaboration → Solution
```
Use for complex urgent problems.
## Output Management
### File-Based Output
Agents should write outputs to:
- `{output_folder}/temp/` - Temporary working files
- `{output_folder}/raw/` - Raw agent outputs
- `{output_folder}/final/` - Validated final outputs
### Summary Protocol
When an agent completes:
1. Write full output to designated file
2. Return only: status, file path, key findings (< 2000 tokens)
3. Main session can read file if details needed
## Token Budget Tracking
| Threshold | Action |
|-----------|--------|
| 0-80% | Normal operation |
| 80-90% | Warning: Consider wrapping up |
| 90-100% | Critical: Summarize and hand off |
| 100%+ | HALT: Must spawn new subprocess |
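The thresholds above amount to a simple classifier; the action strings come from the table, while the function itself is illustrative.

```python
# Budget thresholds from the table, assuming the 150K main-session limit.
def budget_action(used, limit=150_000):
    pct = used / limit * 100
    if pct > 100:
        return "HALT: must spawn new subprocess"
    if pct >= 90:
        return "critical: summarize and hand off"
    if pct >= 80:
        return "warning: consider wrapping up"
    return "normal operation"
```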
## Best Practices
1. **Pre-plan agent work** - Know what each agent needs to do
2. **Minimize cross-agent data** - Pass file references, not content
3. **Use haiku for simple tasks** - Reduces cost and latency
4. **Batch related work** - One agent, multiple related tasks
5. **Checkpoint frequently** - Save progress to files regularly


@ -0,0 +1,111 @@
<?xml version="1.0" encoding="UTF-8"?>
<tool id="code-metrics" name="Code Metrics Analyzer" standalone="true">
<description>Analyze codebase for size, complexity, and quality metrics</description>
<parameters>
<param name="path" required="false" default="." description="Path to analyze"/>
<param name="include" required="false" description="File patterns to include (e.g., '*.ts,*.tsx')"/>
<param name="exclude" required="false" default="node_modules,dist,build,.git" description="Directories to exclude"/>
<param name="output" required="false" default="summary" description="Output format: summary, detailed, json"/>
</parameters>
<metrics>
<category name="Size">
<metric name="total_files" description="Total number of source files"/>
<metric name="total_lines" description="Total lines of code"/>
<metric name="lines_of_code" description="Lines of actual code (excluding blanks/comments)"/>
<metric name="blank_lines" description="Number of blank lines"/>
<metric name="comment_lines" description="Number of comment lines"/>
</category>
<category name="Complexity">
<metric name="cyclomatic_complexity" description="Average cyclomatic complexity"/>
<metric name="max_complexity" description="Highest complexity in single function"/>
<metric name="deep_nesting" description="Functions with nesting > 4 levels"/>
</category>
<category name="Quality">
<metric name="duplicate_blocks" description="Detected code duplication"/>
<metric name="long_functions" description="Functions > 50 lines"/>
<metric name="large_files" description="Files > 500 lines"/>
<metric name="todo_count" description="TODO/FIXME comments"/>
</category>
<category name="Structure">
<metric name="avg_file_size" description="Average lines per file"/>
<metric name="avg_function_size" description="Average lines per function"/>
<metric name="dependency_depth" description="Maximum import depth"/>
</category>
</metrics>
<execution>
<step n="1" goal="Scan codebase">
<action>Find all files matching include patterns</action>
<action>Exclude directories from exclude list</action>
<action>Build file list for analysis</action>
</step>
<step n="2" goal="Count lines">
<action>For each file, count total, code, blank, comment lines</action>
<action>Aggregate totals by file type</action>
</step>
<step n="3" goal="Analyze complexity">
<action>Parse functions/methods in each file</action>
<action>Calculate cyclomatic complexity per function</action>
<action>Identify deeply nested code blocks</action>
</step>
<step n="4" goal="Quality checks">
<action>Detect duplicate code blocks</action>
<action>Find long functions (> 50 lines)</action>
<action>Find large files (> 500 lines)</action>
<action>Count TODO/FIXME comments</action>
</step>
<step n="5" goal="Generate report">
<action>Format metrics according to output parameter</action>
<action>Include recommendations for concerning metrics</action>
</step>
</execution>
<output>
```
Code Metrics Report
===================
Project: {project_name}
Path: {path}
Date: {date}
Size Metrics:
- Total Files: {total_files}
- Lines of Code: {lines_of_code}
- Blank Lines: {blank_lines}
- Comment Lines: {comment_lines}
- Comment Ratio: {comment_ratio}%
Complexity Metrics:
- Avg Cyclomatic Complexity: {avg_complexity}
- Max Complexity: {max_complexity} ({max_complexity_file})
- Deep Nesting Issues: {deep_nesting_count}
Quality Metrics:
- Duplicate Code Blocks: {duplicate_count}
- Long Functions (>50 lines): {long_functions_count}
- Large Files (>500 lines): {large_files_count}
- TODO/FIXME Count: {todo_count}
Structure:
- Avg File Size: {avg_file_size} lines
- Avg Function Size: {avg_function_size} lines
Breakdown by Language:
{for each language}
- {language}: {file_count} files, {line_count} lines
{end for}
Recommendations:
{recommendations}
```
</output>
</tool>


@ -0,0 +1,123 @@
<?xml version="1.0" encoding="UTF-8"?>
<tool id="context-extractor" name="Context Extractor" standalone="true">
<description>Extract key context from codebase for AI agent consumption (patterns, conventions, critical files)</description>
<parameters>
<param name="path" required="false" default="." description="Project root path"/>
<param name="focus" required="false" default="all" description="Focus area: all, patterns, dependencies, structure, conventions"/>
<param name="max_tokens" required="false" default="4000" description="Maximum token budget for output"/>
</parameters>
<extraction_areas>
<area name="project_structure">
<description>Key directories and their purposes</description>
<sources>
<source>Directory listing (depth 3)</source>
<source>README.md</source>
<source>package.json / setup.py / Cargo.toml</source>
</sources>
</area>
<area name="coding_patterns">
<description>Common patterns used in the codebase</description>
<sources>
<source>Import patterns</source>
<source>Naming conventions</source>
<source>Error handling patterns</source>
<source>Async patterns</source>
</sources>
</area>
<area name="dependencies">
<description>Key dependencies and their usage</description>
<sources>
<source>package.json dependencies</source>
<source>Import frequency analysis</source>
</sources>
</area>
<area name="conventions">
<description>Coding conventions and style</description>
<sources>
<source>.eslintrc / .prettierrc</source>
<source>tsconfig.json / jsconfig.json</source>
<source>editorconfig</source>
<source>Observed patterns in code</source>
</sources>
</area>
<area name="critical_files">
<description>Most important files to understand</description>
<sources>
<source>Entry points (index, main, app)</source>
<source>Configuration files</source>
<source>Type definitions</source>
<source>Shared utilities</source>
</sources>
</area>
</extraction_areas>
<execution>
<step n="1" goal="Scan project structure">
<action>List directories up to depth 3</action>
<action>Identify key directory patterns (src, lib, tests, etc.)</action>
<action>Note technology indicators (package.json, Cargo.toml, etc.)</action>
</step>
<step n="2" goal="Analyze entry points">
<action>Find entry files (index.ts, main.py, main.go, etc.)</action>
<action>Extract high-level architecture from imports</action>
</step>
<step n="3" goal="Extract conventions">
<action>Parse linter/formatter configs</action>
<action>Sample 10 representative files for pattern analysis</action>
<action>Identify naming conventions (camelCase, snake_case, etc.)</action>
</step>
<step n="4" goal="Analyze dependencies">
<action>Extract key dependencies from package manager files</action>
<action>Identify most-imported modules</action>
<action>Note framework/library choices</action>
</step>
<step n="5" goal="Generate context document">
<action>Compile findings within max_tokens budget</action>
<action>Prioritize most critical information</action>
<action>Format for AI agent consumption</action>
</step>
</execution>
<output>
```markdown
# Project Context
## Technology Stack
- Language: {language}
- Framework: {framework}
- Key Libraries: {libraries}
## Project Structure
```
{directory_tree}
```
## Entry Points
{entry_points}
## Coding Conventions
- Naming: {naming_convention}
- Formatting: {formatting_rules}
- Imports: {import_pattern}
## Key Patterns
{patterns}
## Critical Files
{critical_files}
## Dependencies
{key_dependencies}
```
</output>
</tool>


@ -0,0 +1,68 @@
<?xml version="1.0" encoding="UTF-8"?>
<tool id="dependency-check" name="Dependency Checker" standalone="true">
<description>Scan project dependencies for outdated packages and known vulnerabilities</description>
<parameters>
<param name="path" required="false" default="." description="Path to project root"/>
<param name="output_format" required="false" default="summary" description="Output format: summary, detailed, json"/>
<param name="severity_threshold" required="false" default="low" description="Minimum severity to report: low, medium, high, critical"/>
</parameters>
<detection>
<package_manager name="npm" files="['package.json', 'package-lock.json']" command="npm audit"/>
<package_manager name="yarn" files="['package.json', 'yarn.lock']" command="yarn audit"/>
<package_manager name="pnpm" files="['package.json', 'pnpm-lock.yaml']" command="pnpm audit"/>
<package_manager name="pip" files="['requirements.txt', 'Pipfile', 'pyproject.toml']" command="pip-audit"/>
<package_manager name="poetry" files="['pyproject.toml', 'poetry.lock']" command="poetry audit"/>
<package_manager name="go" files="['go.mod', 'go.sum']" command="govulncheck ./..."/>
<package_manager name="cargo" files="['Cargo.toml', 'Cargo.lock']" command="cargo audit"/>
<package_manager name="composer" files="['composer.json', 'composer.lock']" command="composer audit"/>
</detection>
<execution>
<step n="1" goal="Detect package manager">
<action>Scan {path} for package manager files</action>
<action>Identify primary package manager from detected files</action>
<action if="no package manager found">Report: "No supported package manager detected"</action>
</step>
<step n="2" goal="Run dependency audit">
<action>Execute audit command for detected package manager</action>
<action>Capture stdout and stderr</action>
<action>Parse output for vulnerabilities</action>
</step>
<step n="3" goal="Check for outdated packages">
<action>Run outdated check command (e.g., npm outdated, pip list --outdated)</action>
<action>Parse output for package versions</action>
</step>
<step n="4" goal="Generate report">
<action>Filter by severity_threshold</action>
<action>Format output according to output_format</action>
</step>
</execution>
<output_format name="summary">
```
Dependency Check Report
=======================
Project: {project_name}
Package Manager: {package_manager}
Date: {date}
Vulnerabilities:
- Critical: {critical_count}
- High: {high_count}
- Medium: {medium_count}
- Low: {low_count}
Outdated Packages: {outdated_count}
Top Issues:
1. {top_issue_1}
2. {top_issue_2}
3. {top_issue_3}
```
</output_format>
</tool>


@ -0,0 +1,81 @@
<?xml version="1.0" encoding="UTF-8"?>
<tool id="schema-validator" name="Schema Validator" standalone="true">
<description>Validate JSON/YAML files against schemas (JSON Schema, OpenAPI, etc.)</description>
<parameters>
<param name="file" required="true" description="Path to file to validate"/>
<param name="schema" required="false" description="Path to schema file (auto-detect if not provided)"/>
<param name="schema_type" required="false" default="auto" description="Schema type: json-schema, openapi, asyncapi, auto"/>
</parameters>
<supported_schemas>
<schema type="json-schema" versions="['draft-04', 'draft-06', 'draft-07', '2019-09', '2020-12']"/>
<schema type="openapi" versions="['3.0', '3.1']"/>
<schema type="asyncapi" versions="['2.0', '2.1', '2.2', '2.3', '2.4', '2.5', '2.6']"/>
<schema type="yaml" description="YAML syntax validation"/>
<schema type="json" description="JSON syntax validation"/>
</supported_schemas>
<execution>
<step n="1" goal="Load and parse file">
<action>Read file content</action>
<action>Detect file format (JSON or YAML)</action>
<action>Parse content into object</action>
<action if="parse error">Return: "Syntax error: {error_message}"</action>
</step>
<step n="2" goal="Detect schema type">
<check if="schema_type == 'auto'">
<action>Check for $schema property (JSON Schema)</action>
<action>Check for openapi property (OpenAPI)</action>
<action>Check for asyncapi property (AsyncAPI)</action>
<action>Default to json-schema if no type is detected</action>
</check>
</step>
<step n="3" goal="Load schema">
<check if="schema provided">
<action>Load schema from {schema} path</action>
</check>
<check if="schema not provided AND type detected">
<action>Use built-in schema for detected type</action>
</check>
</step>
<step n="4" goal="Validate">
<action>Run validation against schema</action>
<action>Collect all validation errors</action>
<action>Format error messages with line numbers (if possible)</action>
</step>
<step n="5" goal="Report">
<action if="valid">Return: "Valid {schema_type} document"</action>
<action if="invalid">Return: "Validation errors: {errors}"</action>
</step>
</execution>
<output>
```
Schema Validation Report
========================
File: {file}
Schema Type: {schema_type}
Status: {valid|invalid}
{if errors}
Errors ({error_count}):
{for each error}
- Line {line}: {path}
{message}
{end for}
{end if}
{if warnings}
Warnings ({warning_count}):
{for each warning}
- {path}: {message}
{end for}
{end if}
```
</output>
</tool>


@ -0,0 +1,263 @@
# Session Manager Instructions
## Overview
The Session Manager provides persistent state management across conversations with token isolation, progress tracking, and context preservation.
## Session ID Format
```
{PREFIX}{YYYYMM}-{CLIENT}-{PROJECT}
```
Examples:
- `SES202501-ACME-AUDIT` - Default session for ACME audit project
- `ENG202501-TESLA-API` - Engineering session for Tesla API project
- `TST202501-INTERNAL-REWRITE` - Testing session for internal rewrite
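The ID format above can be generated mechanically; `make_session_id` is illustrative, not a documented command.

```python
# Sketch of the {PREFIX}{YYYYMM}-{CLIENT}-{PROJECT} session ID format.
from datetime import date

def make_session_id(client, project, prefix="SES", today=None):
    today = today or date.today()
    return f"{prefix}{today:%Y%m}-{client.upper()}-{project.upper()}"
```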
## Commands
### SESSION start
Start a new session with structured identification.
**Usage:**
```
SESSION start
SESSION start --client ACME --project AUDIT
SESSION start --prefix ENG --client TESLA --project API
```
**Workflow:**
1. Check if active session exists
- If yes, prompt: "Active session found: {id}. Close it first? [y/n]"
2. Generate session ID using format: `{PREFIX}{YYYYMM}-{CLIENT}-{PROJECT}`
3. Create session file at `{sessions_dir}/{session_id}.yaml`
4. Set as active session in `active-session.yaml`
5. Initialize session state:
```yaml
session_id: "{session_id}"
created: "{timestamp}"
status: "active"
client: "{client}"
project: "{project}"
tokens:
initial: 0
current: 0
saved: 0
agents_spawned: []
artifacts: []
milestones: []
context_summary: ""
```
6. Display confirmation with session ID
### SESSION resume
Resume an existing session with context restoration.
**Workflow:**
1. Load `active-session.yaml` to get current session ID
2. If no active session, prompt: "No active session. Start new? [y/n]"
3. Load session file `{sessions_dir}/{session_id}.yaml`
4. Display session summary:
```
Resuming Session: {session_id}
Status: {status}
Started: {created}
Milestones: {milestone_count}
Artifacts: {artifact_count}
Token Usage: {current}/{max}
Last Context:
{context_summary}
```
5. Restore session variables to working memory
### SESSION status
Show current session status with visual indicators.
**Visual Status Indicators:**
| Indicator | Meaning |
|-----------|---------|
| :green_circle: | Active / On Track |
| :yellow_circle: | At Risk / Warning (>80% tokens) |
| :red_circle: | Blocked / Failed |
| :pause_button: | Paused |
| :white_check_mark: | Completed |
**Output Format:**
```
Session: {session_id} {status_indicator}
Duration: {duration}
Tokens: {current}/{max} ({percentage}%)
Progress Bar: [████████░░] 80%
Milestones:
- [x] Initial setup
- [x] Requirements gathered
- [ ] Implementation started
Recent Artifacts:
- docs/prd.md (created 2h ago)
- docs/architecture.md (modified 1h ago)
Agents Spawned This Session: {count}
Token Savings from Isolation: {saved_tokens}
```
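The progress bar in the output above can be rendered like this; the default width of 10 matches the example, and the function name is an assumption for the sketch.

```python
# Render a [████████░░] 80% style progress bar.
def progress_bar(done, total, width=10):
    filled = round(width * done / total)
    return f"[{'█' * filled}{'░' * (width - filled)}] {round(100 * done / total)}%"
```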
### SESSION close
Close current session and archive.
**Workflow:**
1. Load active session
2. Generate context summary (key decisions, outcomes, next steps)
3. Update session status to "closed"
4. Move session file to `{session_archive_dir}/`
5. Clear `active-session.yaml`
6. Display closure summary with learnings
### SESSION list
List all sessions with status.
**Output:**
```
Active Sessions:
SES202501-ACME-AUDIT :green_circle: (3 days)
Archived Sessions:
ENG202412-TESLA-API :white_check_mark: (closed 2024-12-28)
TST202412-INTERNAL-MVP :white_check_mark: (closed 2024-12-15)
```
### SESSION tokens
Show detailed token usage report.
**Output:**
```
Token Usage Report: {session_id}
Main Session:
Used: 45,000 / 150,000 (30%)
Remaining: 105,000
Subprocess Agents:
analyst: 32,000 tokens (isolated)
architect: 28,000 tokens (isolated)
dev: 85,000 tokens (isolated)
Total Consumed (if no isolation): 190,000
Actual Main Session: 45,000
Tokens Saved: 145,000 (76% savings)
```
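The savings figures in the report follow from a simple sum over isolated agents. The numbers below match the example output; the function is illustrative.

```python
# Tokens saved = work done in subprocesses that never touched the main
# session; the percentage is relative to what an un-isolated run would cost.
def isolation_savings(main_tokens, agent_tokens):
    saved = sum(agent_tokens.values())
    total_without = main_tokens + saved
    return saved, round(saved / total_without * 100)
```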
### SESSION savings
Show token savings from isolation architecture.
**Output:**
```
Token Isolation Savings
Without Isolation:
All agent work in main context: 190,000 tokens
Would exceed limit by: 40,000 tokens
With Isolation:
Main session: 45,000 tokens
Agents in subprocesses: 145,000 tokens (not counted)
Savings: 145,000 tokens (76%)
Status: :green_circle: Within budget
```
### SESSION switch
Switch to a different session.
**Workflow:**
1. Save current session state
2. Load specified session
3. Restore context
4. Update active-session.yaml
## Session File Structure
```yaml
# {session_id}.yaml
session_id: "SES202501-ACME-AUDIT"
created: "2025-01-15T10:30:00Z"
last_updated: "2025-01-15T14:45:00Z"
status: "active" # active, paused, closed
# Identification
client: "ACME"
project: "AUDIT"
prefix: "SES"
# Token tracking
tokens:
initial: 0
current: 45000
peak: 52000
saved_by_isolation: 145000
# Agent tracking
agents_spawned:
- id: "agent-123"
type: "analyst"
started: "2025-01-15T10:35:00Z"
completed: "2025-01-15T10:42:00Z"
tokens_used: 32000
output_file: "_bmad-output/temp/analyst-123.md"
# Artifacts produced
artifacts:
- path: "docs/prd.md"
created: "2025-01-15T11:00:00Z"
agent: "pm"
- path: "docs/architecture.md"
created: "2025-01-15T12:30:00Z"
agent: "architect"
# Progress tracking
milestones:
- name: "Requirements Complete"
completed: "2025-01-15T11:30:00Z"
- name: "Architecture Approved"
completed: "2025-01-15T13:00:00Z"
- name: "Implementation Started"
completed: null
# Context for resume
context_summary: |
Working on ACME security audit project.
PRD complete, architecture approved.
Currently implementing Epic 1: User Authentication.
Next: Complete story 1-2-user-login.
# Session notes
notes:
- "Client prefers OAuth2 over JWT"
- "Performance requirement: <200ms response time"
```
## Integration with Token Isolation
When spawning agents via Task tool:
1. Record agent spawn in session
2. Track output file location
3. Calculate token savings
4. Update session totals
## Best Practices
1. **Always start a session** before beginning substantial work
2. **Update milestones** as you complete major phases
3. **Close sessions** when work is complete to maintain clean state
4. **Use meaningful names** for client/project identification
5. **Review savings** periodically to understand isolation benefits


@ -0,0 +1,81 @@
# Session Template
# Copy and customize for new sessions
session_id: "{PREFIX}{YYYYMM}-{CLIENT}-{PROJECT}"
created: "{timestamp}"
last_updated: "{timestamp}"
status: "active"
# Identification
client: "{CLIENT}"
project: "{PROJECT}"
prefix: "{PREFIX}"
description: ""
# Token tracking
tokens:
initial: 0
current: 0
peak: 0
saved_by_isolation: 0
warning_threshold: 120000
max_limit: 150000
# Agent tracking
agents_spawned: []
# Example entry:
# - id: "agent-abc123"
# type: "analyst"
# started: "2025-01-15T10:35:00Z"
# completed: "2025-01-15T10:42:00Z"
# tokens_used: 32000
# output_file: "_bmad-output/temp/analyst-abc123.md"
# status: "completed" # running, completed, failed
# Artifacts produced
artifacts: []
# Example entry:
# - path: "docs/prd.md"
# type: "document"
# created: "2025-01-15T11:00:00Z"
# modified: "2025-01-15T11:30:00Z"
# agent: "pm"
# description: "Product Requirements Document"
# Progress tracking
milestones: []
# Example entry:
# - name: "Requirements Complete"
# target_date: "2025-01-15"
# completed: "2025-01-15T11:30:00Z"
# notes: "All stakeholder requirements gathered"
# Current work context
current_focus:
epic: ""
story: ""
task: ""
blocker: ""
# Context for resume (AI-generated summary)
context_summary: |
Session initialized. Ready to begin work.
# Decision log
decisions: []
# Example entry:
# - date: "2025-01-15"
# decision: "Use PostgreSQL for primary database"
# rationale: "Team expertise, ACID compliance needs"
# made_by: "architect"
# Session notes
notes: []
# Example entry:
# - timestamp: "2025-01-15T10:30:00Z"
# note: "Client prefers OAuth2 over JWT"
# source: "user"
# Tags for filtering
tags: []
# Example: ["security", "audit", "enterprise"]
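Filling this template starts with the `{PREFIX}{YYYYMM}-{CLIENT}-{PROJECT}` session ID. A minimal sketch of that formatting, assuming client and project names are simply uppercased:

```python
# Hypothetical helper for the {PREFIX}{YYYYMM}-{CLIENT}-{PROJECT} format
# used by this template; the normalization rule is an assumption.
from datetime import datetime, timezone

def make_session_id(client, project, prefix="SES", now=None):
    now = now or datetime.now(timezone.utc)
    # %Y%m renders e.g. January 2025 as "202501"
    return f"{prefix}{now:%Y%m}-{client.upper()}-{project.upper()}"
```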


@ -0,0 +1,52 @@
# Session Manager Workflow
name: session-manager
description: "Persistent session management with token isolation, state tracking, and context preservation across conversations"
author: "BMAD"
version: "1.0.0"
# Configuration sources
config_source: "{project-root}/_bmad/core/config.yaml"
user_name: "{config_source}:user_name"
communication_language: "{config_source}:communication_language"
output_folder: "{config_source}:output_folder"
date: system-generated
# Session storage
sessions_dir: "{project-root}/_bmad-output/sessions"
active_session_file: "{sessions_dir}/active-session.yaml"
session_archive_dir: "{sessions_dir}/archive"
# Workflow components
installed_path: "{project-root}/_bmad/core/workflows/session-manager"
instructions: "{installed_path}/instructions.md"
# Session ID format: {PREFIX}{YYYYMM}-{CLIENT}-{PROJECT}
session_prefixes:
default: "SES"
engineering: "ENG"
analysis: "ANA"
design: "DES"
testing: "TST"
standalone: true
# Commands
commands:
- name: "start"
description: "Start a new session"
args: "--client NAME --project NAME --prefix PREFIX"
- name: "resume"
description: "Resume the active session"
- name: "status"
description: "Show current session status and token usage"
- name: "close"
description: "Close current session and archive"
- name: "list"
description: "List all sessions (active and archived)"
- name: "tokens"
description: "Show detailed token usage report"
- name: "savings"
description: "Show token savings from isolation"
- name: "switch"
description: "Switch to a different session"
args: "SESSION_ID"


@ -1,4 +1,4 @@
-<critical>The workflow execution engine is governed by: {project_root}/_bmad/core/tasks/workflow.xml</critical>
+<critical>The workflow execution engine is governed by: {project-root}/_bmad/core/tasks/workflow.xml</critical>
 <critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
 <critical>Communicate all responses in {communication_language}</critical>
 <critical>This is a meta-workflow that orchestrates the CIS brainstorming workflow with game-specific context and additional game design techniques</critical>


@ -2,7 +2,7 @@
 <workflow>
-<critical>The workflow execution engine is governed by: {project_root}/_bmad/core/tasks/workflow.xml</critical>
+<critical>The workflow execution engine is governed by: {project-root}/_bmad/core/tasks/workflow.xml</critical>
 <critical>You MUST have already completed the GDD workflow</critical>
 <critical>Communicate all responses in {communication_language}</critical>
 <critical>This workflow creates detailed narrative content for story-driven games</critical>


@ -12,7 +12,7 @@ user_skill_level: "{config_source}:user_skill_level"
 document_output_language: "{config_source}:document_output_language"
 date: system-generated
 implementation_artifacts: "{config_source}:implementation_artifacts"
-sprint_status: "{implementation_artifacts}/sprint-status.yaml || {output_folder}/sprint-status.yaml"
+sprint_status: "{implementation_artifacts}/sprint-status.yaml"
 # Workflow components
 installed_path: "{project-root}/_bmad/bmgd/workflows/4-production/code-review"


@ -11,7 +11,7 @@ user_skill_level: "{config_source}:user_skill_level"
 document_output_language: "{config_source}:document_output_language"
 date: system-generated
 implementation_artifacts: "{config_source}:implementation_artifacts"
-sprint_status: "{implementation_artifacts}/sprint-status.yaml || {output_folder}/sprint-status.yaml"
+sprint_status: "{implementation_artifacts}/sprint-status.yaml"
 # Smart input file references - handles both whole docs and sharded docs
 # Priority: Whole document first, then sharded version


@ -33,7 +33,7 @@ This is a COMPETITION to create the **ULTIMATE story context** that makes LLM de
 ### **When Running from Create-Story Workflow:**
-- The `{project_root}/_bmad/core/tasks/validate-workflow.xml` framework will automatically:
+- The `{project-root}/_bmad/core/tasks/validate-workflow.xml` framework will automatically:
   - Load this checklist file
   - Load the newly created story file (`{story_file_path}`)
   - Load workflow variables from `{installed_path}/workflow.yaml`
@ -63,7 +63,7 @@ You will systematically re-do the entire story creation process, but with a crit
 1. **Load the workflow configuration**: `{installed_path}/workflow.yaml` for variable inclusion
 2. **Load the story file**: `{story_file_path}` (provided by user or discovered)
-3. **Load validation framework**: `{project_root}/_bmad/core/tasks/validate-workflow.xml`
+3. **Load validation framework**: `{project-root}/_bmad/core/tasks/validate-workflow.xml`
 4. **Extract metadata**: epic_num, story_num, story_key, story_title from story file
 5. **Resolve all workflow variables**: story_dir, output_folder, epics_file, architecture_file, etc.
 6. **Understand current status**: What story implementation guidance is currently provided?


@ -19,7 +19,7 @@ validation: "{installed_path}/checklist.md"
 # Variables and inputs
 variables:
-  sprint_status: "{implementation_artifacts}/sprint-status.yaml || {output_folder}/sprint-status.yaml" # Primary source for story tracking
+  sprint_status: "{implementation_artifacts}/sprint-status.yaml || {implementation_artifacts}/sprint-status.yaml" # Primary source for story tracking
   epics_file: "{output_folder}/epics.md" # Preferred source for epic/story breakdown
   prd_file: "{output_folder}/PRD.md" # Fallback for requirements
   architecture_file: "{planning_artifacts}/architecture.md" # Optional architecture context


@ -16,7 +16,7 @@ story_file: "" # Explicit story path; auto-discovered if empty
 # Context file uses same story_key as story file (e.g., "1-2-user-authentication.context.xml")
 context_file: "{story_dir}/{{story_key}}.context.xml"
 implementation_artifacts: "{config_source}:implementation_artifacts"
-sprint_status: "{implementation_artifacts}/sprint-status.yaml || {output_folder}/sprint-status.yaml"
+sprint_status: "{implementation_artifacts}/sprint-status.yaml || {implementation_artifacts}/sprint-status.yaml"
 # Smart input file references - handles both whole docs and sharded docs
 # Priority: Whole document first, then sharded version


@ -54,7 +54,7 @@ input_file_patterns:
   load_strategy: "INDEX_GUIDED"
 # Required files
-sprint_status_file: "{implementation_artifacts}/sprint-status.yaml || {output_folder}/sprint-status.yaml"
+sprint_status_file: "{implementation_artifacts}/sprint-status.yaml || {implementation_artifacts}/sprint-status.yaml"
 story_directory: "{implementation_artifacts}"
 retrospectives_folder: "{implementation_artifacts}"


@ -18,14 +18,14 @@ instructions: "{installed_path}/instructions.md"
 # Inputs
 variables:
-  sprint_status_file: "{implementation_artifacts}/sprint-status.yaml || {output_folder}/sprint-status.yaml"
+  sprint_status_file: "{implementation_artifacts}/sprint-status.yaml || {implementation_artifacts}/sprint-status.yaml"
   tracking_system: "file-system"
 # Smart input file references
 input_file_patterns:
   sprint_status:
     description: "Sprint status file generated by sprint-planning"
-    whole: "{implementation_artifacts}/sprint-status.yaml || {output_folder}/sprint-status.yaml"
+    whole: "{implementation_artifacts}/sprint-status.yaml || {implementation_artifacts}/sprint-status.yaml"
     load_strategy: "FULL_LOAD"
 # Standalone so IDE commands get generated


@ -0,0 +1,117 @@
# API Design Checklist
## Resource Design
- [ ] Resources use plural nouns (users, orders, products)
- [ ] Resource names are lowercase with hyphens (user-profiles)
- [ ] Relationships expressed via nesting or links
- [ ] No verbs in resource paths (use HTTP methods instead)
## HTTP Methods
- [ ] GET for reading (no side effects)
- [ ] POST for creating new resources
- [ ] PUT for full resource replacement
- [ ] PATCH for partial updates
- [ ] DELETE for removing resources
- [ ] HEAD for metadata requests (if needed)
- [ ] OPTIONS for CORS preflight (typically handled by framework middleware)
## Status Codes
- [ ] 200 OK for successful GET/PUT/PATCH
- [ ] 201 Created for successful POST
- [ ] 204 No Content for successful DELETE
- [ ] 400 Bad Request for malformed requests
- [ ] 401 Unauthorized for missing/invalid auth
- [ ] 403 Forbidden for insufficient permissions
- [ ] 404 Not Found for missing resources
- [ ] 409 Conflict for state conflicts
- [ ] 422 Unprocessable Entity for validation errors
- [ ] 429 Too Many Requests for rate limiting
- [ ] 500 Internal Server Error (avoid exposing details)
## Request Design
- [ ] Content-Type headers required for POST/PUT/PATCH
- [ ] Accept headers for content negotiation
- [ ] Query parameters for filtering/sorting/pagination
- [ ] Path parameters for resource identifiers
- [ ] Request body validation documented
## Response Design
- [ ] Consistent envelope structure (data, meta, links, error)
- [ ] Timestamps in ISO 8601 format
- [ ] IDs as strings (UUIDs recommended)
- [ ] Pagination for list endpoints
- [ ] HATEOAS links where appropriate
## Pagination
- [ ] Page-based or cursor-based pagination
- [ ] Default and maximum limits defined
- [ ] Total count available
- [ ] Navigation links included
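The pagination items above can be sketched as one helper that builds the `meta` and `links` envelope fields used elsewhere in this workflow. The function name and link syntax are assumptions for illustration.

```python
import math

# Hypothetical sketch of page-based pagination meta/links, assuming
# "?page=N&limit=M" query parameters and the data/meta/links envelope.
def pagination_meta(total, page, limit, base):
    total_pages = max(1, math.ceil(total / limit))

    def link(p):
        return f"{base}?page={p}&limit={limit}"

    return {
        "meta": {"page": page, "limit": limit, "total": total, "totalPages": total_pages},
        "links": {
            "self": link(page),
            "first": link(1),
            "prev": link(page - 1) if page > 1 else None,   # null on first page
            "next": link(page + 1) if page < total_pages else None,  # null on last page
            "last": link(total_pages),
        },
    }
```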
## Filtering & Sorting
- [ ] Filter syntax documented
- [ ] Sortable fields specified
- [ ] Default sort order defined
- [ ] Multiple sort fields supported
## Authentication
- [ ] Auth method documented (Bearer, API Key, OAuth2)
- [ ] Token format specified (JWT structure)
- [ ] Token expiration documented
- [ ] Refresh token flow if applicable
## Authorization
- [ ] Per-endpoint permissions documented
- [ ] Role-based access defined
- [ ] Resource ownership rules clear
## Versioning
- [ ] Versioning strategy chosen (URL, header, parameter)
- [ ] Major version in URL (/v1/, /v2/)
- [ ] Deprecation policy documented
- [ ] Breaking changes defined
## Error Handling
- [ ] Error response format consistent
- [ ] Error codes meaningful and documented
- [ ] Validation errors include field details
- [ ] No sensitive info in error messages
## Security
- [ ] HTTPS required
- [ ] Rate limiting implemented
- [ ] CORS properly configured
- [ ] Input validation on all fields
- [ ] SQL injection prevention
- [ ] No sensitive data in URLs
## Documentation
- [ ] OpenAPI 3.0+ specification complete
- [ ] All endpoints documented
- [ ] Request/response examples provided
- [ ] Authentication documented
- [ ] Error codes listed
## Testing
- [ ] Mock server available
- [ ] Example requests for each endpoint
- [ ] Postman/Insomnia collection exported
- [ ] SDK generation tested
---
## Quick Validation
```bash
# Validate OpenAPI spec
npx @stoplight/spectral-cli lint api-spec.yaml
# Alternative validation
npx swagger-cli validate api-spec.yaml
# Generate types (TypeScript)
npx openapi-typescript api-spec.yaml -o types.d.ts
# Start mock server
npx @stoplight/prism-cli mock api-spec.yaml
```


@ -0,0 +1,309 @@
# API Design Workflow Instructions
## Overview
Design APIs using a contract-first approach. This workflow produces OpenAPI 3.0+ specifications, mock server configurations, and client SDK generation guidance.
## Workflow Steps
### Step 1: Context Loading
**Load existing documentation:**
1. Load PRD for feature requirements
2. Load Architecture document for system design
3. Load project-context.md for coding standards
4. Identify existing API patterns (if any)
### Step 2: API Style Selection
**Ask user for API style:**
```
API Style Selection
Available styles:
1. [rest] RESTful API (OpenAPI 3.0+)
2. [graphql] GraphQL Schema
3. [grpc] gRPC/Protocol Buffers
4. [websocket] WebSocket Event Schema
Select style [1-4]:
```
### Step 3: Resource Identification
**For REST APIs, identify resources:**
1. Extract nouns from PRD (users, orders, products, etc.)
2. Map to REST resources
3. Identify relationships (1:1, 1:N, N:N)
4. Determine resource hierarchy
**Questions to ask:**
- What are the main entities in this system?
- How do entities relate to each other?
- What operations are needed for each entity?
- Are there any batch operations required?
### Step 4: Endpoint Design
**For each resource, design endpoints:**
| Operation | Method | Path Pattern | Example |
|-----------|--------|--------------|---------|
| List | GET | /resources | GET /users |
| Create | POST | /resources | POST /users |
| Read | GET | /resources/{id} | GET /users/123 |
| Update | PUT/PATCH | /resources/{id} | PATCH /users/123 |
| Delete | DELETE | /resources/{id} | DELETE /users/123 |
| Nested | GET | /resources/{id}/subs | GET /users/123/orders |
**Naming conventions:**
- Use plural nouns for resources
- Use kebab-case for multi-word resources
- Use path parameters for identifiers
- Use query parameters for filtering/pagination
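The table and naming conventions above can be sketched as a small generator of the standard CRUD routes for a resource. This is a design sketch only; the helper name is hypothetical.

```python
# Illustrative sketch: derive the standard CRUD endpoints for a plural,
# kebab-case resource name, following the operation table above.
def crud_endpoints(resource):
    collection = f"/{resource}"
    item = f"{collection}/{{id}}"   # path parameter for the identifier
    return [
        ("GET", collection),    # List
        ("POST", collection),   # Create
        ("GET", item),          # Read
        ("PATCH", item),        # Update (partial)
        ("DELETE", item),       # Delete
    ]
```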
### Step 5: Request/Response Design
**For each endpoint, define:**
1. **Request body schema** (POST/PUT/PATCH)
- Required vs optional fields
- Data types and formats
- Validation rules (min/max, pattern, enum)
2. **Response schema**
- Success response structure
- Error response structure
- Pagination format
3. **Headers**
- Authentication headers
- Content-Type
- Custom headers
**Standard response format:**
```json
{
"data": { ... },
"meta": {
"page": 1,
"limit": 20,
"total": 100
},
"links": {
"self": "/users?page=1",
"next": "/users?page=2"
}
}
```
**Standard error format:**
```json
{
"error": {
"code": "VALIDATION_ERROR",
"message": "Request validation failed",
"details": [
{ "field": "email", "message": "Invalid email format" }
]
}
}
```
### Step 6: Authentication & Authorization
**Define security scheme:**
```yaml
securitySchemes:
bearerAuth:
type: http
scheme: bearer
bearerFormat: JWT
apiKey:
type: apiKey
in: header
name: X-API-Key
oauth2:
type: oauth2
flows:
authorizationCode:
authorizationUrl: /oauth/authorize
tokenUrl: /oauth/token
scopes:
read: Read access
write: Write access
```
**Apply security to endpoints:**
- Public endpoints (no auth)
- Authenticated endpoints (user token)
- Admin-only endpoints (role-based)
### Step 7: Generate OpenAPI Specification
**Create OpenAPI 3.0+ document:**
```yaml
openapi: 3.0.3
info:
  title: "{project_name} API"
version: 1.0.0
description: |
{api_description}
servers:
- url: https://api.example.com/v1
description: Production
- url: https://staging-api.example.com/v1
description: Staging
- url: http://localhost:3000/v1
description: Development
paths:
/resources:
get:
summary: List resources
operationId: listResources
tags:
- Resources
parameters:
- $ref: '#/components/parameters/page'
- $ref: '#/components/parameters/limit'
responses:
'200':
description: Successful response
content:
application/json:
schema:
$ref: '#/components/schemas/ResourceList'
components:
schemas:
Resource:
type: object
properties:
id:
type: string
format: uuid
name:
type: string
required:
- id
- name
parameters:
page:
name: page
in: query
schema:
type: integer
default: 1
limit:
name: limit
in: query
schema:
type: integer
default: 20
maximum: 100
```
### Step 8: API Documentation
**Generate API design document with:**
1. **Overview**
- API purpose and scope
- Base URL and versioning strategy
- Authentication methods
2. **Quick Start**
- Getting API credentials
- Making first request
- Common patterns
3. **Resource Reference**
- Detailed endpoint documentation
- Request/response examples
- Error codes
4. **Best Practices**
- Rate limiting guidance
- Pagination recommendations
- Error handling
### Step 9: Mock Server Guidance
**Provide mock server setup:**
```bash
# Using Prism (OpenAPI)
npm install -g @stoplight/prism-cli
prism mock api-spec.yaml
# Using json-server (simple)
npm install -g json-server
json-server --watch db.json
# Using MSW (frontend mocking)
npm install msw --save-dev
```
**Include sample mock data:**
```json
{
"users": [
{ "id": "1", "name": "Alice", "email": "alice@example.com" },
{ "id": "2", "name": "Bob", "email": "bob@example.com" }
]
}
```
### Step 10: SDK Generation Guidance
**Client SDK generation options:**
```bash
# OpenAPI Generator
npx @openapitools/openapi-generator-cli generate \
-i api-spec.yaml \
-g typescript-axios \
-o ./sdk
# Available generators:
# - typescript-axios
# - typescript-fetch
# - python
# - go
# - java
# - csharp
```
**Type generation (TypeScript):**
```bash
# Using openapi-typescript
npx openapi-typescript api-spec.yaml -o types.d.ts
```
### Step 11: Validation Checklist
Before completing:
- [ ] All PRD features have corresponding endpoints
- [ ] Resource naming follows conventions
- [ ] Request/response schemas complete
- [ ] Authentication defined for protected endpoints
- [ ] Error responses documented
- [ ] Pagination implemented for list endpoints
- [ ] OpenAPI spec validates (use swagger-cli validate)
- [ ] Examples provided for complex endpoints
### Step 12: Output Files
**Save to:**
- OpenAPI spec: `{output_file}` (api-spec.yaml)
- API design doc: `{output_doc}` (api-design.md)
**Notify user with:**
- Summary of endpoints created
- Link to specification file
- Mock server quick start
- Next steps (implementation, SDK generation)


@ -0,0 +1,467 @@
openapi: 3.0.3
info:
title: "{{project_name}} API"
version: "{{api_version}}"
description: |
{{api_description}}
## Authentication
{{auth_description}}
## Rate Limiting
{{rate_limit_description}}
contact:
name: API Support
email: api@example.com
license:
name: MIT
url: https://opensource.org/licenses/MIT
servers:
- url: https://api.{{domain}}/v{{major_version}}
description: Production
- url: https://staging-api.{{domain}}/v{{major_version}}
description: Staging
- url: http://localhost:{{port}}/v{{major_version}}
description: Development
tags:
# Define tags for each resource group
- name: Authentication
description: Authentication and authorization endpoints
- name: Users
description: User management operations
# Add more tags as needed
paths:
# Authentication endpoints
/auth/login:
post:
summary: Authenticate user
operationId: login
tags:
- Authentication
requestBody:
required: true
content:
application/json:
schema:
$ref: '#/components/schemas/LoginRequest'
responses:
'200':
description: Authentication successful
content:
application/json:
schema:
$ref: '#/components/schemas/AuthResponse'
'401':
$ref: '#/components/responses/Unauthorized'
'422':
$ref: '#/components/responses/ValidationError'
/auth/logout:
post:
summary: Logout user
operationId: logout
tags:
- Authentication
security:
- bearerAuth: []
responses:
'204':
description: Logout successful
'401':
$ref: '#/components/responses/Unauthorized'
# Resource template - copy and customize
/resources:
get:
summary: List resources
operationId: listResources
tags:
- Resources
security:
- bearerAuth: []
parameters:
- $ref: '#/components/parameters/page'
- $ref: '#/components/parameters/limit'
- $ref: '#/components/parameters/sort'
- $ref: '#/components/parameters/filter'
responses:
'200':
description: Successful response
content:
application/json:
schema:
$ref: '#/components/schemas/ResourceListResponse'
'401':
$ref: '#/components/responses/Unauthorized'
post:
summary: Create resource
operationId: createResource
tags:
- Resources
security:
- bearerAuth: []
requestBody:
required: true
content:
application/json:
schema:
$ref: '#/components/schemas/CreateResourceRequest'
responses:
'201':
description: Resource created
content:
application/json:
schema:
$ref: '#/components/schemas/ResourceResponse'
'401':
$ref: '#/components/responses/Unauthorized'
'422':
$ref: '#/components/responses/ValidationError'
/resources/{id}:
parameters:
- $ref: '#/components/parameters/resourceId'
get:
summary: Get resource by ID
operationId: getResource
tags:
- Resources
security:
- bearerAuth: []
responses:
'200':
description: Successful response
content:
application/json:
schema:
$ref: '#/components/schemas/ResourceResponse'
'401':
$ref: '#/components/responses/Unauthorized'
'404':
$ref: '#/components/responses/NotFound'
patch:
summary: Update resource
operationId: updateResource
tags:
- Resources
security:
- bearerAuth: []
requestBody:
required: true
content:
application/json:
schema:
$ref: '#/components/schemas/UpdateResourceRequest'
responses:
'200':
description: Resource updated
content:
application/json:
schema:
$ref: '#/components/schemas/ResourceResponse'
'401':
$ref: '#/components/responses/Unauthorized'
'404':
$ref: '#/components/responses/NotFound'
'422':
$ref: '#/components/responses/ValidationError'
delete:
summary: Delete resource
operationId: deleteResource
tags:
- Resources
security:
- bearerAuth: []
responses:
'204':
description: Resource deleted
'401':
$ref: '#/components/responses/Unauthorized'
'404':
$ref: '#/components/responses/NotFound'
components:
securitySchemes:
bearerAuth:
type: http
scheme: bearer
bearerFormat: JWT
description: JWT authentication token
apiKey:
type: apiKey
in: header
name: X-API-Key
description: API key for service-to-service calls
parameters:
page:
name: page
in: query
description: Page number for pagination
schema:
type: integer
minimum: 1
default: 1
limit:
name: limit
in: query
description: Number of items per page
schema:
type: integer
minimum: 1
maximum: 100
default: 20
sort:
name: sort
in: query
description: Sort field and direction (e.g., -createdAt for descending)
schema:
type: string
example: "-createdAt"
filter:
name: filter
in: query
description: Filter expression
schema:
type: string
example: "status:active"
resourceId:
name: id
in: path
required: true
description: Resource identifier
schema:
type: string
format: uuid
schemas:
# Authentication schemas
LoginRequest:
type: object
required:
- email
- password
properties:
email:
type: string
format: email
password:
type: string
format: password
minLength: 8
AuthResponse:
type: object
properties:
accessToken:
type: string
refreshToken:
type: string
expiresIn:
type: integer
description: Token expiry in seconds
tokenType:
type: string
default: Bearer
# Resource schemas (template)
Resource:
type: object
properties:
id:
type: string
format: uuid
readOnly: true
name:
type: string
minLength: 1
maxLength: 255
description:
type: string
status:
type: string
enum: [active, inactive, archived]
default: active
createdAt:
type: string
format: date-time
readOnly: true
updatedAt:
type: string
format: date-time
readOnly: true
required:
- name
CreateResourceRequest:
allOf:
- $ref: '#/components/schemas/Resource'
- type: object
required:
- name
UpdateResourceRequest:
type: object
properties:
name:
type: string
description:
type: string
status:
type: string
enum: [active, inactive, archived]
ResourceResponse:
type: object
properties:
data:
$ref: '#/components/schemas/Resource'
ResourceListResponse:
type: object
properties:
data:
type: array
items:
$ref: '#/components/schemas/Resource'
meta:
$ref: '#/components/schemas/PaginationMeta'
links:
$ref: '#/components/schemas/PaginationLinks'
# Pagination
PaginationMeta:
type: object
properties:
page:
type: integer
limit:
type: integer
total:
type: integer
totalPages:
type: integer
PaginationLinks:
type: object
properties:
self:
type: string
format: uri
first:
type: string
format: uri
prev:
type: string
format: uri
nullable: true
next:
type: string
format: uri
nullable: true
last:
type: string
format: uri
# Error schemas
Error:
type: object
properties:
error:
type: object
properties:
code:
type: string
message:
type: string
details:
type: array
items:
type: object
properties:
field:
type: string
message:
type: string
responses:
Unauthorized:
description: Authentication required
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
example:
error:
code: UNAUTHORIZED
message: Authentication required
Forbidden:
description: Permission denied
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
example:
error:
code: FORBIDDEN
message: Permission denied
NotFound:
description: Resource not found
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
example:
error:
code: NOT_FOUND
message: Resource not found
ValidationError:
description: Validation error
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
example:
error:
code: VALIDATION_ERROR
message: Request validation failed
details:
- field: email
message: Invalid email format
RateLimitExceeded:
description: Rate limit exceeded
headers:
X-RateLimit-Limit:
schema:
type: integer
X-RateLimit-Remaining:
schema:
type: integer
X-RateLimit-Reset:
schema:
type: integer
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
example:
error:
code: RATE_LIMIT_EXCEEDED
message: Too many requests


@ -0,0 +1,39 @@
# API Design Workflow
name: create-api-spec
description: "Contract-first API design workflow producing OpenAPI 3.0+ specifications with mock server guidance and client SDK generation recommendations"
author: "BMAD"
version: "1.0.0"
# Configuration sources
config_source: "{project-root}/_bmad/bmm/config.yaml"
user_name: "{config_source}:user_name"
communication_language: "{config_source}:communication_language"
user_skill_level: "{config_source}:user_skill_level"
document_output_language: "{config_source}:document_output_language"
planning_artifacts: "{config_source}:planning_artifacts"
output_folder: "{planning_artifacts}"
date: system-generated
# Workflow components
installed_path: "{project-root}/_bmad/bmm/workflows/3-solutioning/create-api-spec"
instructions: "{installed_path}/instructions.md"
template: "{installed_path}/openapi.template.yaml"
checklist: "{installed_path}/api-checklist.md"
# Input references
prd_doc: "{planning_artifacts}/*prd*.md"
architecture_doc: "{planning_artifacts}/*architecture*.md"
project_context: "**/project-context.md"
# Output
output_file: "{output_folder}/api-spec.yaml"
output_doc: "{output_folder}/api-design.md"
# API styles supported
api_styles:
- rest # RESTful API
- graphql # GraphQL schema
- grpc # Protocol Buffers
- websocket # WebSocket events
standalone: true


@ -13,7 +13,7 @@ implementation_artifacts: "{config_source}:implementation_artifacts"
 planning_artifacts: "{config_source}:planning_artifacts"
 project_knowledge: "{config_source}:project_knowledge"
 output_folder: "{implementation_artifacts}"
-sprint_status: "{implementation_artifacts}/sprint-status.yaml || {output_folder}/sprint-status.yaml"
+sprint_status: "{implementation_artifacts}/sprint-status.yaml"
 # Smart input file references - handles both whole docs and sharded docs
 # Priority: Whole document first, then sharded version


@ -33,7 +33,7 @@ This is a COMPETITION to create the **ULTIMATE story context** that makes LLM de
### **When Running from Create-Story Workflow:**
-- The `{project_root}/_bmad/core/tasks/validate-workflow.xml` framework will automatically:
+- The `{project-root}/_bmad/core/tasks/validate-workflow.xml` framework will automatically:
  - Load this checklist file
  - Load the newly created story file (`{story_file_path}`)
  - Load workflow variables from `{installed_path}/workflow.yaml`
@ -63,7 +63,7 @@ You will systematically re-do the entire story creation process, but with a crit
1. **Load the workflow configuration**: `{installed_path}/workflow.yaml` for variable inclusion
2. **Load the story file**: `{story_file_path}` (provided by user or discovered)
-3. **Load validation framework**: `{project_root}/_bmad/core/tasks/validate-workflow.xml`
+3. **Load validation framework**: `{project-root}/_bmad/core/tasks/validate-workflow.xml`
4. **Extract metadata**: epic_num, story_num, story_key, story_title from story file
5. **Resolve all workflow variables**: story_dir, output_folder, epics_file, architecture_file, etc.
6. **Understand current status**: What story implementation guidance is currently provided?
6. **Understand current status**: What story implementation guidance is currently provided? 6. **Understand current status**: What story implementation guidance is currently provided?


@ -336,9 +336,10 @@
1. Review the comprehensive story in {{story_file}}
2. Run dev agents `dev-story` for optimized implementation
3. Run `code-review` when complete (auto-marks done)
+4. Optional: Run TEA `*automate` after `dev-story` to generate guardrail tests
**The developer now has everything needed for flawless implementation!**
</output>
</step>
</workflow>


@ -20,7 +20,7 @@ validation: "{installed_path}/checklist.md"
# Variables and inputs
variables:
-  sprint_status: "{implementation_artifacts}/sprint-status.yaml || {output_folder}/sprint-status.yaml" # Primary source for story tracking
+  sprint_status: "{implementation_artifacts}/sprint-status.yaml" # Primary source for story tracking
  epics_file: "{planning_artifacts}/epics.md" # Enhanced epics+stories with BDD and source hints
  prd_file: "{planning_artifacts}/prd.md" # Fallback for requirements (if not in epics file)
  architecture_file: "{planning_artifacts}/architecture.md" # Fallback for constraints (if not in epics file)


@ -397,6 +397,7 @@
- Verify all acceptance criteria are met
- Ensure deployment readiness if applicable
- Run `code-review` workflow for peer review
+- Optional: Run TEA `*automate` to expand guardrail tests
</action>
<output>💡 **Tip:** For best results, run `code-review` using a **different** LLM than the one that implemented this story.</output>
@ -406,4 +407,4 @@
<action>Remain flexible - allow user to choose their own path or ask for other assistance</action>
</step>
</workflow>


@ -19,14 +19,14 @@ instructions: "{installed_path}/instructions.md"
# Inputs
variables:
-  sprint_status_file: "{implementation_artifacts}/sprint-status.yaml || {output_folder}/sprint-status.yaml"
+  sprint_status_file: "{implementation_artifacts}/sprint-status.yaml"
  tracking_system: "file-system"
# Smart input file references
input_file_patterns:
  sprint_status:
    description: "Sprint status file generated by sprint-planning"
-    whole: "{implementation_artifacts}/sprint-status.yaml || {output_folder}/sprint-status.yaml"
+    whole: "{implementation_artifacts}/sprint-status.yaml"
    load_strategy: "FULL_LOAD"
# Standalone so IDE commands get generated


@ -1,7 +1,7 @@
# Create Data Flow Diagram - Workflow Instructions
```xml
-<critical>The workflow execution engine is governed by: {project_root}/_bmad/core/tasks/workflow.xml</critical>
+<critical>The workflow execution engine is governed by: {project-root}/_bmad/core/tasks/workflow.xml</critical>
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
<critical>This workflow creates data flow diagrams (DFD) in Excalidraw format.</critical>


@ -1,7 +1,7 @@
# Create Diagram - Workflow Instructions
```xml
-<critical>The workflow execution engine is governed by: {project_root}/_bmad/core/tasks/workflow.xml</critical>
+<critical>The workflow execution engine is governed by: {project-root}/_bmad/core/tasks/workflow.xml</critical>
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
<critical>This workflow creates system architecture diagrams, ERDs, UML diagrams, or general technical diagrams in Excalidraw format.</critical>


@ -1,7 +1,7 @@
# Create Flowchart - Workflow Instructions
```xml
-<critical>The workflow execution engine is governed by: {project_root}/_bmad/core/tasks/workflow.xml</critical>
+<critical>The workflow execution engine is governed by: {project-root}/_bmad/core/tasks/workflow.xml</critical>
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
<critical>This workflow creates a flowchart visualization in Excalidraw format for processes, pipelines, or logic flows.</critical>


@ -1,7 +1,7 @@
# Create Wireframe - Workflow Instructions
```xml
-<critical>The workflow execution engine is governed by: {project_root}/_bmad/core/tasks/workflow.xml</critical>
+<critical>The workflow execution engine is governed by: {project-root}/_bmad/core/tasks/workflow.xml</critical>
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
<critical>This workflow creates website or app wireframes in Excalidraw format.</critical>


@ -0,0 +1,269 @@
# Security Audit Workflow Instructions
## Overview
Conduct a comprehensive security audit of the codebase covering OWASP Top 10 vulnerabilities, dependency security, secret detection, and authentication/authorization patterns.
## Workflow Steps
### Step 1: Scope Determination
**Ask user for audit scope:**
```
Security Audit Scope Selection
Available scopes:
1. [full] Complete security audit (recommended)
2. [owasp] OWASP Top 10 vulnerability focus
3. [deps] Dependency vulnerabilities only
4. [secrets] Secret detection only
5. [auth] Authentication/authorization review
6. [api] API security assessment
Select scope [1-6] or enter scope name:
```
### Step 2: Context Loading
**Load project context:**
1. Load architecture document for understanding system design
2. Load project-context.md for coding standards and patterns
3. Identify technology stack (framework, language, dependencies)
4. Note any existing security configurations
### Step 3: OWASP Top 10 Assessment
**For each vulnerability category:**
#### A01:2021 - Broken Access Control
- [ ] Check for missing access controls on functions
- [ ] Review CORS configuration
- [ ] Verify principle of least privilege
- [ ] Check for insecure direct object references (IDOR)
- [ ] Review JWT/session validation
#### A02:2021 - Cryptographic Failures
- [ ] Check for hardcoded secrets
- [ ] Verify HTTPS enforcement
- [ ] Review encryption algorithms used
- [ ] Check password hashing (bcrypt, argon2)
- [ ] Verify secure random number generation
#### A03:2021 - Injection
- [ ] SQL injection in database queries
- [ ] NoSQL injection patterns
- [ ] Command injection in system calls
- [ ] LDAP injection
- [ ] XPath injection
#### A04:2021 - Insecure Design
- [ ] Review authentication flows
- [ ] Check for business logic flaws
- [ ] Verify rate limiting implementation
- [ ] Review error handling patterns
#### A05:2021 - Security Misconfiguration
- [ ] Default credentials check
- [ ] Unnecessary features enabled
- [ ] Error messages exposing info
- [ ] Security headers missing
- [ ] Debug mode in production
#### A06:2021 - Vulnerable Components
- [ ] Outdated dependencies
- [ ] Known CVEs in dependencies
- [ ] Unmaintained packages
- [ ] License compliance issues
#### A07:2021 - Authentication Failures
- [ ] Weak password policies
- [ ] Missing brute-force protection
- [ ] Session management issues
- [ ] Multi-factor authentication gaps
#### A08:2021 - Software Integrity Failures
- [ ] CI/CD pipeline security
- [ ] Unsigned code/packages
- [ ] Insecure deserialization
- [ ] Missing integrity checks
#### A09:2021 - Logging & Monitoring Failures
- [ ] Insufficient logging
- [ ] Missing audit trails
- [ ] No alerting mechanisms
- [ ] Log injection vulnerabilities
#### A10:2021 - Server-Side Request Forgery
- [ ] Unvalidated URL parameters
- [ ] Internal service exposure
- [ ] DNS rebinding risks
### Step 4: Dependency Vulnerability Scan
**Scan dependencies for known vulnerabilities:**
```bash
# Node.js
npm audit
npx better-npm-audit audit
# Python
pip-audit
safety check
# Go
govulncheck ./...
# General
trivy fs .
grype .
```
**Document findings:**
- CVE identifier
- Severity (Critical/High/Medium/Low)
- Affected package and version
- Fix version available
- Remediation path
### Step 5: Secret Detection
**Scan for exposed secrets:**
Patterns to detect:
- API keys (AWS, GCP, Azure, etc.)
- Database connection strings
- Private keys (RSA, SSH)
- OAuth tokens
- JWT secrets
- Password literals
- Environment variable leaks
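A pattern-based scan can be sketched in a few lines. The regexes here are deliberately simplified stand-ins; the dedicated scanners listed under Tools use far more precise rules plus entropy analysis:

```python
import re

# Illustrative patterns only -- real scanners cover hundreds of secret formats.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "connection_string": re.compile(r"\b\w+://[^:/\s]+:[^@\s]+@[\w.-]+"),
}

def scan_text(text: str) -> list[tuple[str, int]]:
    """Return (pattern_name, line_number) pairs for suspected secrets."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                hits.append((name, lineno))
    return hits
```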
**Tools:**
```bash
# Gitleaks
gitleaks detect --source . --verbose
# TruffleHog
trufflehog filesystem .
# detect-secrets
detect-secrets scan
```
**Check locations:**
- Source code files
- Configuration files
- Environment files (.env, .env.*)
- Docker files
- CI/CD configurations
- Git history
### Step 6: Authentication/Authorization Review
**Authentication checks:**
- Password storage mechanism
- Session management
- Token handling (JWT, OAuth)
- MFA implementation
- Password reset flow
- Account lockout policy
**Authorization checks:**
- Role-based access control (RBAC)
- Attribute-based access control (ABAC)
- API endpoint protection
- Resource-level permissions
- Admin panel security
### Step 7: API Security Assessment
**Review API endpoints for:**
- Authentication requirements
- Rate limiting
- Input validation
- Output encoding
- CORS configuration
- API versioning
- Documentation exposure
**Check for:**
- Mass assignment vulnerabilities
- Excessive data exposure
- Broken function level authorization
- Improper inventory management
### Step 8: Generate Report
**Create security audit report with:**
```markdown
# Security Audit Report
**Date:** {date}
**Scope:** {audit_scope}
**Auditor:** {user_name} + TEA Agent
## Executive Summary
{brief_overview_of_findings}
## Risk Summary
| Severity | Count |
|----------|-------|
| Critical | X |
| High | X |
| Medium | X |
| Low | X |
## Findings
### Critical Findings
{detailed_critical_issues}
### High Severity Findings
{detailed_high_issues}
### Medium Severity Findings
{detailed_medium_issues}
### Low Severity Findings
{detailed_low_issues}
## Recommendations
{prioritized_remediation_steps}
## Appendix
- Full OWASP checklist results
- Dependency scan output
- Secret detection results
```
### Step 9: Remediation Guidance
**For each finding, provide:**
1. Clear description of the vulnerability
2. Location in codebase (file:line)
3. Risk assessment (likelihood + impact)
4. Remediation steps
5. Code example of fix (where applicable)
6. References (CWE, OWASP, CVE)
### Step 10: Validation Checklist
Before completing audit:
- [ ] All scope items assessed
- [ ] Findings documented with evidence
- [ ] Severity ratings justified
- [ ] Remediation steps actionable
- [ ] Report saved to output location
- [ ] No false positives in critical findings
## Output
Save report to: `{output_file}`
Notify user of completion with:
- Summary of findings
- Link to full report
- Top 3 priority items to address
- Offer to help with remediation


@ -0,0 +1,215 @@
# OWASP Top 10 (2021) Security Checklist
## A01:2021 - Broken Access Control
### Access Control Checks
- [ ] All endpoints require authentication unless explicitly public
- [ ] Authorization checked on every request (not just UI)
- [ ] Deny by default policy implemented
- [ ] CORS properly configured with allowlisted origins
- [ ] Directory listing disabled on web servers
- [ ] Metadata files (.git, .svn) not accessible
- [ ] Rate limiting implemented on sensitive endpoints
### IDOR Prevention
- [ ] Object references are indirect or validated
- [ ] User can only access their own resources
- [ ] Admin functions properly protected
- [ ] API endpoints validate ownership
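The ownership check above can be enforced with a deny-by-default helper; this is an illustrative sketch (the `admin` role name is hypothetical):

```python
def can_access(user_id: str, resource_owner_id: str, roles: set[str]) -> bool:
    """Deny by default: only the owner or an explicitly privileged role passes."""
    if "admin" in roles:  # hypothetical privileged role
        return True
    return user_id == resource_owner_id
```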
### Session Security
- [ ] Session invalidated on logout
- [ ] Session timeout implemented
- [ ] Session fixation prevented
- [ ] Concurrent session limits (if required)
---
## A02:2021 - Cryptographic Failures
### Data Protection
- [ ] Sensitive data identified and classified
- [ ] Data encrypted at rest
- [ ] Data encrypted in transit (TLS 1.2+)
- [ ] No sensitive data in URLs
- [ ] Secure cookies (HttpOnly, Secure, SameSite)
### Password Security
- [ ] Passwords hashed with bcrypt/argon2/scrypt
- [ ] No MD5/SHA1 for passwords
- [ ] Salt unique per password
- [ ] Work factor appropriate (>=10 for bcrypt)
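A stdlib-only sketch of salted hashing with a memory-hard KDF (bcrypt and argon2 require third-party packages; `hashlib.scrypt` ships with CPython builds that link OpenSSL, and the cost parameters below are illustrative):

```python
import hashlib
import hmac
import secrets

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Hash with scrypt and a unique per-password salt."""
    salt = secrets.token_bytes(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(digest, expected)
```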
### Key Management
- [ ] No hardcoded secrets in code
- [ ] Secrets in environment variables or vault
- [ ] Encryption keys rotated periodically
- [ ] Secure random number generation
---
## A03:2021 - Injection
### SQL Injection
- [ ] Parameterized queries used everywhere
- [ ] ORM/query builder used correctly
- [ ] No string concatenation in queries
- [ ] Input validation on all user data
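With parameter binding, an injection payload arrives as data, never as SQL. A minimal `sqlite3` illustration (table and values are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users (email) VALUES (?)", ("alice@example.com",))

def find_user(email: str):
    # The ? placeholder keeps user input out of the SQL string entirely.
    return conn.execute(
        "SELECT id, email FROM users WHERE email = ?", (email,)
    ).fetchone()
```

A classic payload such as `' OR '1'='1` is matched as a literal email string and returns nothing.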
### NoSQL Injection
- [ ] MongoDB queries use proper operators
- [ ] No eval() on user input
- [ ] Input sanitized for NoSQL patterns
### Command Injection
- [ ] No shell commands with user input
- [ ] If needed, strict allowlist validation
- [ ] Escape special characters
### XSS Prevention
- [ ] Output encoding on all user data
- [ ] Content-Security-Policy header set
- [ ] Dangerous HTML stripped or sanitized
- [ ] Template engines auto-escape
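Output encoding with Python's stdlib, as one example of the auto-escaping a template engine should provide:

```python
import html

def render_comment(user_input: str) -> str:
    # Encode on output so markup in user data is displayed, not executed.
    return f"<p>{html.escape(user_input)}</p>"
```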
---
## A04:2021 - Insecure Design
### Threat Modeling
- [ ] Security requirements documented
- [ ] Threat model exists for critical flows
- [ ] Security user stories in backlog
### Business Logic
- [ ] Rate limiting on business operations
- [ ] Transaction limits enforced server-side
- [ ] Workflow state validated
### Error Handling
- [ ] Generic error messages to users
- [ ] Detailed errors only in logs
- [ ] No stack traces in production
---
## A05:2021 - Security Misconfiguration
### Server Configuration
- [ ] Unnecessary features disabled
- [ ] Default accounts removed/changed
- [ ] Directory browsing disabled
- [ ] Error pages customized
### Security Headers
- [ ] Content-Security-Policy
- [ ] X-Content-Type-Options: nosniff
- [ ] X-Frame-Options or CSP frame-ancestors
- [ ] Strict-Transport-Security
- [ ] X-XSS-Protection (legacy browsers)
- [ ] Referrer-Policy
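During review, a response can be diffed against the required header set. A sketch, with the required names mirroring this checklist (adjust to the project's policy):

```python
REQUIRED_HEADERS = {
    "content-security-policy",
    "x-content-type-options",
    "strict-transport-security",
    "referrer-policy",
}

def missing_security_headers(response_headers: dict[str, str]) -> set[str]:
    """Header names are case-insensitive, so compare lowercased."""
    present = {name.lower() for name in response_headers}
    return REQUIRED_HEADERS - present
```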
### Cloud/Container Security
- [ ] Least privilege IAM roles
- [ ] Security groups properly configured
- [ ] Container images scanned
- [ ] No root processes in containers
---
## A06:2021 - Vulnerable Components
### Dependency Management
- [ ] Dependencies up to date
- [ ] No known CVEs in dependencies
- [ ] Automated vulnerability scanning
- [ ] Lock files committed (package-lock, yarn.lock)
### Update Process
- [ ] Regular dependency updates scheduled
- [ ] Security updates prioritized
- [ ] Breaking changes tested before deploy
---
## A07:2021 - Authentication Failures
### Password Policies
- [ ] Minimum length >= 8 characters
- [ ] No common password check
- [ ] Breach database check (optional)
- [ ] Account lockout after failures
### Multi-Factor Authentication
- [ ] MFA available for sensitive accounts
- [ ] MFA recovery process secure
- [ ] TOTP/WebAuthn preferred over SMS
### Session Management
- [ ] Strong session IDs (>=128 bits)
- [ ] Session regeneration on privilege change
- [ ] Secure session storage
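The 128-bit entropy requirement is straightforward to meet with the stdlib CSPRNG; a minimal sketch:

```python
import secrets

def new_session_id() -> str:
    # 16 random bytes = 128 bits of entropy, hex-encoded to 32 characters.
    return secrets.token_hex(16)
```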
---
## A08:2021 - Software Integrity Failures
### CI/CD Security
- [ ] Build pipeline secured
- [ ] Dependency sources verified
- [ ] Signed commits (optional)
- [ ] Artifact integrity verified
### Deserialization
- [ ] No unsafe deserialization of user data
- [ ] Type checking before deserialization
- [ ] Integrity checks on serialized data
---
## A09:2021 - Logging & Monitoring Failures
### Logging
- [ ] Authentication events logged
- [ ] Access control failures logged
- [ ] Input validation failures logged
- [ ] Sensitive data NOT logged
### Monitoring
- [ ] Alerts for suspicious activity
- [ ] Log aggregation implemented
- [ ] Incident response plan exists
---
## A10:2021 - Server-Side Request Forgery
### URL Validation
- [ ] User-supplied URLs validated
- [ ] Allowlist of permitted domains
- [ ] No access to internal services
- [ ] DNS rebinding prevented
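Allowlist validation of user-supplied URLs might look like the sketch below (hosts are hypothetical; a full defense also resolves DNS and blocks private address ranges before fetching):

```python
from urllib.parse import urlsplit

ALLOWED_HOSTS = {"api.example.com", "cdn.example.com"}  # hypothetical allowlist

def is_safe_url(url: str) -> bool:
    """Accept only https URLs whose host is explicitly allowlisted."""
    parts = urlsplit(url)
    return parts.scheme == "https" and parts.hostname in ALLOWED_HOSTS
```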
### Network Segmentation
- [ ] Internal services not exposed
- [ ] Firewall rules block unnecessary traffic
---
## Severity Rating Guide
| Severity | CVSS Score | Examples |
|----------|------------|----------|
| Critical | 9.0-10.0 | RCE, Auth bypass, Data breach |
| High | 7.0-8.9 | SQL injection, Privilege escalation |
| Medium | 4.0-6.9 | XSS, CSRF, Info disclosure |
| Low | 0.1-3.9 | Minor info leak, Missing headers |
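The table above maps directly to a lookup like this (a 0.0 score is rated "None" in CVSS v3, which the table omits):

```python
def cvss_severity(score: float) -> str:
    """Bucket a CVSS v3.x base score into the report's severity levels."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score >= 9.0:
        return "Critical"
    if score >= 7.0:
        return "High"
    if score >= 4.0:
        return "Medium"
    if score > 0.0:
        return "Low"
    return "None"
```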
---
## References
- [OWASP Top 10](https://owasp.org/Top10/)
- [OWASP Testing Guide](https://owasp.org/www-project-web-security-testing-guide/)
- [CWE Top 25](https://cwe.mitre.org/top25/)
- [NIST Cybersecurity Framework](https://www.nist.gov/cyberframework)


@ -0,0 +1,194 @@
# Security Audit Report
**Project:** {{project_name}}
**Date:** {{date}}
**Scope:** {{audit_scope}}
**Auditor:** {{user_name}} + TEA Agent
---
## Executive Summary
{{executive_summary}}
---
## Risk Summary
| Severity | Count | Status |
|----------|-------|--------|
| Critical | {{critical_count}} | {{critical_status}} |
| High | {{high_count}} | {{high_status}} |
| Medium | {{medium_count}} | {{medium_status}} |
| Low | {{low_count}} | {{low_status}} |
**Overall Risk Level:** {{overall_risk}}
---
## Technology Stack
| Component | Technology | Version |
|-----------|------------|---------|
| Framework | {{framework}} | {{framework_version}} |
| Language | {{language}} | {{language_version}} |
| Database | {{database}} | {{database_version}} |
| Authentication | {{auth_method}} | - |
---
## Critical Findings
{{#each critical_findings}}
### {{this.id}}: {{this.title}}
**Severity:** CRITICAL
**Category:** {{this.category}}
**Location:** `{{this.location}}`
**Description:**
{{this.description}}
**Evidence:**
```
{{this.evidence}}
```
**Impact:**
{{this.impact}}
**Remediation:**
{{this.remediation}}
**References:**
- {{this.references}}
---
{{/each}}
## High Severity Findings
{{#each high_findings}}
### {{this.id}}: {{this.title}}
**Severity:** HIGH
**Category:** {{this.category}}
**Location:** `{{this.location}}`
**Description:**
{{this.description}}
**Remediation:**
{{this.remediation}}
---
{{/each}}
## Medium Severity Findings
{{#each medium_findings}}
### {{this.id}}: {{this.title}}
**Severity:** MEDIUM
**Category:** {{this.category}}
**Location:** `{{this.location}}`
**Description:**
{{this.description}}
**Remediation:**
{{this.remediation}}
---
{{/each}}
## Low Severity Findings
{{#each low_findings}}
### {{this.id}}: {{this.title}}
**Severity:** LOW
**Category:** {{this.category}}
**Description:**
{{this.description}}
**Remediation:**
{{this.remediation}}
---
{{/each}}
## Dependency Vulnerabilities
| Package | Version | CVE | Severity | Fix Version |
|---------|---------|-----|----------|-------------|
{{#each dependency_vulns}}
| {{this.package}} | {{this.version}} | {{this.cve}} | {{this.severity}} | {{this.fix_version}} |
{{/each}}
---
## Secret Detection Results
| Type | File | Line | Status |
|------|------|------|--------|
{{#each secrets_found}}
| {{this.type}} | {{this.file}} | {{this.line}} | {{this.status}} |
{{/each}}
---
## OWASP Coverage
| Category | Status | Findings |
|----------|--------|----------|
| A01 - Broken Access Control | {{a01_status}} | {{a01_count}} |
| A02 - Cryptographic Failures | {{a02_status}} | {{a02_count}} |
| A03 - Injection | {{a03_status}} | {{a03_count}} |
| A04 - Insecure Design | {{a04_status}} | {{a04_count}} |
| A05 - Security Misconfiguration | {{a05_status}} | {{a05_count}} |
| A06 - Vulnerable Components | {{a06_status}} | {{a06_count}} |
| A07 - Authentication Failures | {{a07_status}} | {{a07_count}} |
| A08 - Software Integrity Failures | {{a08_status}} | {{a08_count}} |
| A09 - Logging & Monitoring Failures | {{a09_status}} | {{a09_count}} |
| A10 - SSRF | {{a10_status}} | {{a10_count}} |
---
## Recommendations
### Immediate Actions (Critical/High)
1. {{immediate_action_1}}
2. {{immediate_action_2}}
3. {{immediate_action_3}}
### Short-term Actions (Medium)
1. {{short_term_action_1}}
2. {{short_term_action_2}}
### Long-term Improvements (Low/Hardening)
1. {{long_term_action_1}}
2. {{long_term_action_2}}
---
## Appendix A: Tools Used
- Dependency Scanner: {{dep_scanner}}
- Secret Scanner: {{secret_scanner}}
- Static Analysis: {{static_analysis}}
## Appendix B: Files Reviewed
{{#each files_reviewed}}
- `{{this}}`
{{/each}}
---
**Report Generated:** {{timestamp}}
**Next Audit Recommended:** {{next_audit_date}}


@ -0,0 +1,40 @@
# Security Audit Workflow
name: testarch-security-audit
description: "Comprehensive security audit covering OWASP Top 10, dependency vulnerabilities, secret detection, and authentication/authorization review"
author: "BMAD"
version: "1.0.0"
# Configuration sources
config_source: "{project-root}/_bmad/bmm/config.yaml"
user_name: "{config_source}:user_name"
communication_language: "{config_source}:communication_language"
user_skill_level: "{config_source}:user_skill_level"
document_output_language: "{config_source}:document_output_language"
planning_artifacts: "{config_source}:planning_artifacts"
implementation_artifacts: "{config_source}:implementation_artifacts"
output_folder: "{implementation_artifacts}"
date: system-generated
# Workflow components
installed_path: "{project-root}/_bmad/bmm/workflows/testarch/security-audit"
instructions: "{installed_path}/instructions.md"
checklist: "{installed_path}/owasp-checklist.md"
report_template: "{installed_path}/security-report.template.md"
# Input references
architecture_doc: "{planning_artifacts}/*architecture*.md"
project_context: "**/project-context.md"
# Output
output_file: "{output_folder}/security-audit-report-{date}.md"
# Audit scope options
audit_scopes:
- full # Complete security audit
- owasp # OWASP Top 10 focus
- deps # Dependency vulnerabilities only
- secrets # Secret detection only
- auth # Authentication/authorization only
- api # API security only
standalone: true


@ -25,11 +25,11 @@ The workflow auto-detects which mode to use based on project phase.
### Mode Detection
1. **Check for sprint-status.yaml**
-   - If `{output_folder}/bmm-sprint-status.yaml` exists → **Epic-Level Mode** (Phase 4)
+   - If `{implementation_artifacts}/sprint-status.yaml` exists → **Epic-Level Mode** (Phase 4)
   - If NOT exists → Check workflow status
2. **Check workflow-status.yaml**
-   - Read `{output_folder}/bmm-workflow-status.yaml`
+   - Read `{planning_artifacts}/bmm-workflow-status.yaml`
   - If `implementation-readiness: required` or `implementation-readiness: recommended` → **System-Level Mode** (Phase 3)
   - Otherwise → **Epic-Level Mode** (Phase 4 without sprint status yet)


@ -197,7 +197,7 @@ Your choice:</ask>
<!-- ============================================= -->
<step n="10" goal="Validate mode - Check if calling workflow should proceed">
-<action>Read {output_folder}/bmm-workflow-status.yaml if exists</action>
+<action>Read {planning_artifacts}/bmm-workflow-status.yaml if exists</action>
<check if="status file not found">
<template-output>status_exists = false</template-output>
@ -261,7 +261,7 @@ Your choice:</ask>
</step>
<step n="20" goal="Data mode - Extract specific information">
-<action>Read {output_folder}/bmm-workflow-status.yaml if exists</action>
+<action>Read {planning_artifacts}/bmm-workflow-status.yaml if exists</action>
<check if="status file not found">
<template-output>status_exists = false</template-output>
@ -309,7 +309,7 @@ Your choice:</ask>
</step>
<step n="30" goal="Init-check mode - Simple existence check">
-<action>Check if {output_folder}/bmm-workflow-status.yaml exists</action>
+<action>Check if {planning_artifacts}/bmm-workflow-status.yaml exists</action>
<check if="exists">
<template-output>status_exists = true</template-output>
@ -325,7 +325,7 @@ Your choice:</ask>
</step>
<step n="40" goal="Update mode - Centralized status file updates">
-<action>Read {output_folder}/bmm-workflow-status.yaml</action>
+<action>Read {planning_artifacts}/bmm-workflow-status.yaml</action>
<check if="status file not found">
<template-output>success = false</template-output>


@ -1,7 +1,7 @@
# Design Thinking Workflow Instructions
-<critical>The workflow execution engine is governed by: {project_root}/_bmad/core/tasks/workflow.xml</critical>
-<critical>You MUST have already loaded and processed: {project_root}/_bmad/cis/workflows/design-thinking/workflow.yaml</critical>
+<critical>The workflow execution engine is governed by: {project-root}/_bmad/core/tasks/workflow.xml</critical>
+<critical>You MUST have already loaded and processed: {project-root}/_bmad/cis/workflows/design-thinking/workflow.yaml</critical>
<critical>Load and understand design methods from: {design_methods}</critical>
<critical>⚠️ ABSOLUTELY NO TIME ESTIMATES - NEVER mention hours, days, weeks, months, or ANY time-based predictions. AI has fundamentally changed development speed - what once took teams weeks/months can now be done by one person in hours. DO NOT give ANY time estimates whatsoever.</critical>
<critical>⚠️ CHECKPOINT PROTOCOL: After EVERY <template-output> tag, you MUST follow workflow.xml substep 2c: SAVE content to file immediately → SHOW checkpoint separator (━━━━━━━━━━━━━━━━━━━━━━━) → DISPLAY generated content → PRESENT options [a]Advanced Elicitation/[c]Continue/[p]Party-Mode/[y]YOLO → WAIT for user response. Never batch saves or skip checkpoints.</critical>


@ -1,7 +1,7 @@
# Innovation Strategy Workflow Instructions
-<critical>The workflow execution engine is governed by: {project_root}/_bmad/core/tasks/workflow.xml</critical>
-<critical>You MUST have already loaded and processed: {project_root}/_bmad/cis/workflows/innovation-strategy/workflow.yaml</critical>
+<critical>The workflow execution engine is governed by: {project-root}/_bmad/core/tasks/workflow.xml</critical>
+<critical>You MUST have already loaded and processed: {project-root}/_bmad/cis/workflows/innovation-strategy/workflow.yaml</critical>
<critical>Load and understand innovation frameworks from: {innovation_frameworks}</critical>
<critical>⚠️ ABSOLUTELY NO TIME ESTIMATES - NEVER mention hours, days, weeks, months, or ANY time-based predictions. AI has fundamentally changed development speed - what once took teams weeks/months can now be done by one person in hours. DO NOT give ANY time estimates whatsoever.</critical>
<critical>⚠️ CHECKPOINT PROTOCOL: After EVERY <template-output> tag, you MUST follow workflow.xml substep 2c: SAVE content to file immediately → SHOW checkpoint separator (━━━━━━━━━━━━━━━━━━━━━━━) → DISPLAY generated content → PRESENT options [a]Advanced Elicitation/[c]Continue/[p]Party-Mode/[y]YOLO → WAIT for user response. Never batch saves or skip checkpoints.</critical>


@@ -1,7 +1,7 @@
 # Problem Solving Workflow Instructions
-<critical>The workflow execution engine is governed by: {project_root}/_bmad/core/tasks/workflow.xml</critical>
-<critical>You MUST have already loaded and processed: {project_root}/_bmad/cis/workflows/problem-solving/workflow.yaml</critical>
+<critical>The workflow execution engine is governed by: {project-root}/_bmad/core/tasks/workflow.xml</critical>
+<critical>You MUST have already loaded and processed: {project-root}/_bmad/cis/workflows/problem-solving/workflow.yaml</critical>
 <critical>Load and understand solving methods from: {solving_methods}</critical>
 <critical>⚠️ ABSOLUTELY NO TIME ESTIMATES - NEVER mention hours, days, weeks, months, or ANY time-based predictions. AI has fundamentally changed development speed - what once took teams weeks/months can now be done by one person in hours. DO NOT give ANY time estimates whatsoever.</critical>
 <critical>⚠️ CHECKPOINT PROTOCOL: After EVERY <template-output> tag, you MUST follow workflow.xml substep 2c: SAVE content to file immediately → SHOW checkpoint separator (━━━━━━━━━━━━━━━━━━━━━━━) → DISPLAY generated content → PRESENT options [a]Advanced Elicitation/[c]Continue/[p]Party-Mode/[y]YOLO → WAIT for user response. Never batch saves or skip checkpoints.</critical>


@@ -3,8 +3,8 @@
 ## Workflow
 <workflow>
-<critical>The workflow execution engine is governed by: {project_root}/_bmad/core/tasks/workflow.xml</critical>
-<critical>You MUST have already loaded and processed: {project_root}/_bmad/cis/workflows/storytelling/workflow.yaml</critical>
+<critical>The workflow execution engine is governed by: {project-root}/_bmad/core/tasks/workflow.xml</critical>
+<critical>You MUST have already loaded and processed: {project-root}/_bmad/cis/workflows/storytelling/workflow.yaml</critical>
 <critical>Communicate all responses in {communication_language}</critical>
 <critical>⚠️ ABSOLUTELY NO TIME ESTIMATES - NEVER mention hours, days, weeks, months, or ANY time-based predictions. AI has fundamentally changed development speed - what once took teams weeks/months can now be done by one person in hours. DO NOT give ANY time estimates whatsoever.</critical>
 <critical>⚠️ CHECKPOINT PROTOCOL: After EVERY <template-output> tag, you MUST follow workflow.xml substep 2c: SAVE content to file immediately → SHOW checkpoint separator (━━━━━━━━━━━━━━━━━━━━━━━) → DISPLAY generated content → PRESENT options [a]Advanced Elicitation/[c]Continue/[p]Party-Mode/[y]YOLO → WAIT for user response. Never batch saves or skip checkpoints.</critical>
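Every change in the hunks above follows one mechanical pattern: the placeholder `{project_root}` becomes `{project-root}`. A rename like this can be sketched as a small script; note this is an illustrative sketch, not tooling from the repository, and the assumption that all affected files match `*.md` is mine:

```python
from pathlib import Path

OLD, NEW = "{project_root}", "{project-root}"

def standardize(root: str) -> int:
    """Rewrite OLD -> NEW in every .md file under root; return count of files changed."""
    changed = 0
    for path in Path(root).rglob("*.md"):
        text = path.read_text(encoding="utf-8")
        if OLD in text:
            # Plain string replace is safe here: the placeholder is literal, not a regex.
            path.write_text(text.replace(OLD, NEW), encoding="utf-8")
            changed += 1
    return changed
```

Running it once over the module directories (e.g. `_bmad/cis/workflows`) and reviewing the resulting diff reproduces the kind of change shown in this commit.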