The workflow execution engine is governed by: {project-root}/{bmad_folder}/core/tasks/workflow.xml
You MUST have already loaded and processed: {installed_path}/workflow.yaml
Communicate all responses in {communication_language} and language MUST be tailored to {user_skill_level}
Generate all documents in {document_output_language}
🚨 FIRST ACTION REQUIRED: Kill all background shells 🚨
Before ANY other workflow step, you MUST terminate all background processes.
DO NOT use BashOutput to read from stale shells - this causes context pollution.
DO NOT try to "check what they're doing" - just kill them.
IMMEDIATELY use KillShell to terminate ALL background shells - no exceptions
Do NOT read output from these shells first - kill them blind
Do NOT rationalize keeping them alive - they contain stale data
If multiple shells exist, kill ALL of them one by one
Only proceed to Step 1 AFTER all background shells are terminated.
🚨 MANDATORY EXTERNAL AGENT RULE - NO EXCEPTIONS 🚨
If an external agent CLI (codex, gemini, or claude) is detected as available, you MUST delegate the code review to that agent.
You are FORBIDDEN from performing your own code review analysis if an external agent is available.
The external agent provides an independent, unbiased review. Your job is to INVOKE it, not replace it.
Only perform built-in review if ALL external agents fail detection OR the CLI invocation actually fails with a non-zero exit code.
DO NOT SKIP the invoke-bash commands for detection and invocation - they are MANDATORY.
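The detection step can be sketched as a PATH check; this is only an illustration of what "detected as available" might mean, and the workflow's actual invoke-bash detection commands take precedence:

```shell
# Sketch of external agent detection, assuming "detected" means the
# CLI binary is on PATH. Preference order below is an assumption; the
# workflow's real invoke-bash detection commands are authoritative.
detect_agent() {
  for a in codex gemini claude; do
    if command -v "$a" >/dev/null 2>&1; then
      echo "$a"
      return 0
    fi
  done
  return 1    # no external agent available; built-in review may proceed
}
external_agent="$(detect_agent || true)"   # empty string means none found
```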
🚫 PROHIBITED RATIONALIZATIONS - You may NOT skip the external agent for ANY of these reasons:
❌ "The prompt is too long" - Long prompts are expected and supported. Invoke anyway.
❌ "CLI is meant for simple operations" - FALSE. The CLI handles complex prompts. Invoke anyway.
❌ "This is a re-review" - Re-reviews MUST use external agent. No exception.
❌ "I can do this myself" - You are FORBIDDEN from self-review when external agent is available.
❌ "It would be faster/better if I do it" - Irrelevant. External agent is MANDATORY.
❌ "The context is too complex" - The external agent handles complexity. Invoke anyway.
If you find yourself rationalizing why to skip the external agent, STOP and invoke it anyway.
🔥 ADVERSARIAL CODE REVIEW REQUIREMENTS 🔥
Your purpose: Validate story file claims against actual implementation
Challenge everything: Are tasks marked [x] actually done? Are ACs really implemented?
Find 3-10 specific issues in every review minimum - no lazy "looks good" reviews
Read EVERY file in the File List - verify implementation against story requirements
Tasks marked complete but not done = CRITICAL finding
Acceptance Criteria not implemented = HIGH severity finding
Use provided {{story_path}} or ask user which story file to review
Read COMPLETE story file
Set {{story_key}} = key extracted from the filename (e.g., "1-2-user-authentication.md" → "1-2-user-authentication") or from story metadata
Parse sections: Story, Acceptance Criteria, Tasks/Subtasks, Dev Agent Record → File List, Change Log
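The key extraction from the filename can be sketched in one line; the path below is the example from the step above:

```shell
# Derive {{story_key}} from a story file path by stripping the
# directory and the .md extension. The path is the example key
# from the step above; the "stories/" directory is an assumption.
story_path="stories/1-2-user-authentication.md"
story_key="$(basename "$story_path" .md)"
echo "$story_key"
```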
Check whether a git repository exists in the current directory
Run `git status --porcelain` to find uncommitted changes
Run `git diff --name-only` to see modified files
Run `git diff --cached --name-only` to see staged files
Compile list of actually changed files from git output
Compare story's Dev Agent Record → File List with actual git changes
Note discrepancies:
- Files in git but not in story File List
- Files in story File List but no git changes
- Missing documentation of what was actually changed
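The audit above can be sketched with `comm` against sorted file lists. Everything here runs in a throwaway repo; the file names and `story_files.txt` (the story's File List written one path per line) are hypothetical:

```shell
# Sketch of the File List audit in a throwaway git repo. File names
# and story_files.txt contents are hypothetical examples.
demo=$(mktemp -d) && work=$(mktemp -d)
cd "$demo" && git init -q 2>/dev/null
echo 'code' > changed.txt                  # changed and documented
echo 'more' > undocumented.txt             # changed but not in the story
printf 'changed.txt\nclaimed-only.txt\n' > "$work/story_files.txt"
{ git diff --name-only                     # unstaged modifications
  git diff --cached --name-only            # staged files
  git status --porcelain | awk '{print $2}'   # includes untracked files
} | sort -u > "$work/git_files.txt"
sort -u "$work/story_files.txt" > "$work/claimed.txt"
in_git_not_story=$(comm -23 "$work/git_files.txt" "$work/claimed.txt")  # MEDIUM finding
in_story_not_git=$(comm -13 "$work/git_files.txt" "$work/claimed.txt")  # HIGH finding
```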
Load {project_context} for coding standards (if exists)
Extract ALL Acceptance Criteria from story
Extract ALL Tasks/Subtasks with completion status ([x] vs [ ])
From Dev Agent Record → File List, compile list of claimed changes
Create review plan:
1. **AC Validation**: Verify each AC is actually implemented
2. **Task Audit**: Verify each [x] task is really done
3. **Code Quality**: Security, performance, maintainability
4. **Test Quality**: Real assertions vs placeholder stubs
VALIDATE EVERY CLAIM - Check git reality vs story claims
Review git vs story File List discrepancies:
1. **Files changed but not in story File List** → MEDIUM finding (incomplete documentation)
2. **Story lists files but no git changes** → HIGH finding (false claims)
3. **Uncommitted changes not documented** → MEDIUM finding (transparency issue)
Create comprehensive review file list from story File List and git changes
Store git discrepancy findings in {{git_findings}}
If {{use_external_agent}} == true, you MUST invoke the external agent via CLI.
DO NOT perform your own code review - delegate to the external agent.
đ¨ USE EXACT COMMAND SYNTAX - DO NOT MODIFY OR SIMPLIFY đ¨
Copy the invoke-bash cmd attribute EXACTLY as written below.
DO NOT remove flags, reorder arguments, or "improve" the command.
Load {{external_prompt_file}} content into {{external_prompt}}
CODEX: Use codex exec with read-only sandbox and full-auto
GEMINI: Use gemini -p with prompt from file and --yolo
CLAUDE: Use claude -p with prompt from file
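For orientation only, the three invocation shapes might look like the dispatcher below. These command strings are assumptions reconstructed from the descriptions above; the workflow's literal invoke-bash cmd attribute is authoritative and must be copied verbatim:

```shell
# Illustrative command shapes only; flags are assumptions based on the
# agent descriptions above, and the workflow's exact invoke-bash cmd
# attribute always takes precedence. PROMPT_FILE is hypothetical.
build_cmd() {
  case "$1" in
    codex)  echo 'codex exec --sandbox read-only --full-auto "$(cat "$PROMPT_FILE")"' ;;
    gemini) echo 'gemini --yolo -p "$(cat "$PROMPT_FILE")"' ;;
    claude) echo 'claude -p "$(cat "$PROMPT_FILE")"' ;;
  esac
}
build_cmd codex
```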
Parse {{external_findings}} into structured HIGH/MEDIUM/LOW lists
Merge {{git_findings}} with {{external_findings}} into {{all_findings}}
This section should ONLY execute if ALL external agents failed detection or invocation.
If you are here but an external agent was available, you have violated the workflow rules.
For EACH Acceptance Criterion:
1. Read the AC requirement
2. Search implementation files for evidence
3. Determine: IMPLEMENTED, PARTIAL, or MISSING
4. If MISSING/PARTIAL → HIGH SEVERITY finding
For EACH task marked [x]:
1. Read the task description
2. Search files for evidence it was actually done
3. **CRITICAL**: If marked [x] but NOT DONE → CRITICAL finding
4. Record specific proof (file:line)
For EACH file in comprehensive review list:
1. **Security**: Look for injection risks, missing validation, auth issues
2. **Performance**: N+1 queries, inefficient loops, missing caching
3. **Error Handling**: Missing try/catch, poor error messages
4. **Code Quality**: Complex functions, magic numbers, poor naming
5. **Test Quality**: Are tests real assertions or placeholders?
Merge {{git_findings}} with built-in findings into {{all_findings}}
NOT LOOKING HARD ENOUGH - Find more problems!
Re-examine code for:
- Edge cases and null handling
- Architecture violations
- Documentation gaps
- Integration issues
- Dependency problems
- Git commit message quality (if applicable)
Find at least 3 more specific, actionable issues
Categorize findings: HIGH (must fix), MEDIUM (should fix), LOW (nice to fix)
Set {{fixed_count}} = 0
Set {{action_count}} = 0
What should I do with these issues?
1. **Fix them automatically** - I'll update the code and tests
2. **Create action items** - Add to story Tasks/Subtasks for later
3. **Show me details** - Deep dive into specific issues
Choose [1], [2], or [3] - for [3], specify which issue to examine:
Fix all HIGH and MEDIUM issues in the code
Add/update tests as needed
Update File List in story if files changed
Update story Dev Agent Record with fixes applied
Set {{fixed_count}} = number of HIGH and MEDIUM issues fixed
Set {{action_count}} = 0
Add "Review Follow-ups (AI)" subsection to Tasks/Subtasks
For each issue: `- [ ] [AI-Review][Severity] Description [file:line]`
Set {{action_count}} = number of action items created
Set {{fixed_count}} = 0
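A formatted action-item line from the template above can be produced like this; the severity, description, and file:line below are a hypothetical finding, not one from any real review:

```shell
# Build one action-item line in the documented format. The finding
# shown here is a made-up example for illustration.
severity="HIGH"
description="Missing input validation on login endpoint"
location="src/auth.ts:42"
line=$(printf -- '- [ ] [AI-Review][%s] %s [%s]' "$severity" "$description" "$location")
echo "$line"
```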
Show detailed explanation with code examples
Return to fix decision
Set {{new_status}} = "done"
Update story Status field to "done"
Set {{new_status}} = "in-progress"
Update story Status field to "in-progress"
Save story file
Set {{current_sprint_status}} = "enabled"
Set {{current_sprint_status}} = "no-sprint-tracking"
Load the FULL file: {sprint_status}
Find development_status key matching {{story_key}}
Update development_status[{{story_key}}] = "done"
Save file, preserving ALL comments and structure
Update development_status[{{story_key}}] = "in-progress"
Save file, preserving ALL comments and structure
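One comment-preserving way to flip a single development_status entry is a line-anchored sed substitution, sketched below on a hypothetical sprint-status.yaml; only the matching key's line is rewritten, so comments and other entries pass through untouched:

```shell
# Sketch of a comment-preserving status update. The file contents and
# story key are hypothetical; a YAML-aware round-trip tool would be
# safer for complex files.
cd "$(mktemp -d)"
story_key="1-2-user-authentication"
cat > sprint-status.yaml <<'EOF'
# Sprint 3 tracking (comments must survive the edit)
development_status:
  1-1-login-page: done
  1-2-user-authentication: in-progress
EOF
# Rewrite only the line whose key matches, keeping its indentation.
sed -i.bak "s/^\([[:space:]]*${story_key}:\).*/\1 done/" sprint-status.yaml
```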