The workflow execution engine is governed by: {project-root}/{bmad_folder}/core/tasks/workflow.xml
You MUST have already loaded and processed: {installed_path}/workflow.yaml
Communicate all responses in {communication_language}, tailored to {user_skill_level}. Generate all documents in {document_output_language}.

🚨 FIRST ACTION REQUIRED: Kill all background shells 🚨

Before ANY other workflow step, you MUST terminate all background processes.

- DO NOT use BashOutput to read from stale shells - this causes context pollution.
- DO NOT try to "check what they're doing" - just kill them.
- IMMEDIATELY use KillShell to terminate ALL background shells - no exceptions.
- Do NOT read output from these shells first - kill them blind.
- Do NOT rationalize keeping them alive - they contain stale data.
- If multiple shells exist, kill ALL of them one by one.

đŸ”Ē Killed all background shells to prevent stale data contamination

Only proceed to Step 1 AFTER all background shells are terminated.

🚨 MANDATORY EXTERNAL AGENT RULE - NO EXCEPTIONS 🚨

If an external agent CLI (codex, gemini, or claude) is detected as available, you MUST delegate the code review to that agent. You are FORBIDDEN from performing your own code review analysis when an external agent is available. The external agent provides an independent, unbiased review; your job is to INVOKE it, not replace it. Only perform the built-in review if ALL external agents fail detection OR the CLI invocation actually fails with a non-zero exit code. DO NOT SKIP the invoke-bash commands for detection and invocation - they are MANDATORY.

đŸšĢ PROHIBITED RATIONALIZATIONS - You may NOT skip the external agent for ANY of these reasons:

- ❌ "The prompt is too long" - Long prompts are expected and supported. Invoke anyway.
- ❌ "CLI is meant for simple operations" - FALSE. The CLI handles complex prompts. Invoke anyway.
- ❌ "This is a re-review" - Re-reviews MUST use the external agent. No exceptions.
- ❌ "I can do this myself" - You are FORBIDDEN from self-review when an external agent is available.
- ❌ "It would be faster/better if I do it" - Irrelevant. The external agent is MANDATORY.
- ❌ "The context is too complex" - The external agent handles complexity. Invoke anyway.

If you find yourself rationalizing why to skip the external agent, STOP and invoke it anyway.

đŸ”Ĩ ADVERSARIAL CODE REVIEW REQUIREMENTS đŸ”Ĩ

Your purpose: validate story file claims against the actual implementation.

- Challenge everything: Are tasks marked [x] actually done? Are ACs really implemented?
- Find at least 3-10 specific issues in every review - no lazy "looks good" reviews.
- Read EVERY file in the File List - verify the implementation against story requirements.
- Tasks marked complete but not done = CRITICAL finding.
- Acceptance Criteria not implemented = HIGH severity finding.

**Load the story:**

- Use the provided {{story_path}} or ask the user which story file to review.
- Read the COMPLETE story file.
- Set {{story_key}} = the key extracted from the filename (e.g., "1-2-user-authentication.md" → "1-2-user-authentication") or from story metadata.
- Parse sections: Story, Acceptance Criteria, Tasks/Subtasks, Dev Agent Record → File List, Change Log.

**Gather git state:**

- Check whether a git repository is detected in the current directory.
- Run `git status --porcelain` to find uncommitted changes.
- Run `git diff --name-only` to see modified files.
- Run `git diff --cached --name-only` to see staged files.
- Compile the list of actually changed files from the git output.
- Compare the story's Dev Agent Record → File List with the actual git changes, noting discrepancies:
  - Files in git but not in the story File List
  - Files in the story File List but with no git changes
  - Missing documentation of what was actually changed
- Load {project_context} for coding standards (if it exists).

**Detect external agent availability:**

- 📋 External agent disabled in config - will use built-in adversarial review
- 🔍 Detecting external agent availability...
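The detection step can be sketched as a shell probe. This is a minimal sketch, not the workflow's actual invoke-bash command: the `command -v` mechanism and the codex → gemini → claude fallback order are assumptions, and `preferred_agent` stands in for the {{preferred_agent}} config value.

```shell
# Minimal sketch of external-agent detection (assumed fallback order).
# preferred_agent stands in for {{preferred_agent}} from workflow config.
preferred_agent="codex"
external_agent_cmd=""
for cli in "$preferred_agent" codex gemini claude; do
  # command -v succeeds only when the CLI is actually on PATH
  if command -v "$cli" >/dev/null 2>&1; then
    external_agent_cmd="$cli"
    break
  fi
done
if [ -n "$external_agent_cmd" ]; then
  echo "🤖 External agent selected: $external_agent_cmd"
else
  echo "📋 No external agent available - will use built-in adversarial review"
fi
```

Probing the preferred agent first and falling back in a fixed order keeps the selection deterministic across runs.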
- ✓ Codex CLI detected
- ✓ Gemini CLI detected
- ✓ Claude CLI detected
- âš ī¸ Preferred agent ({{preferred_agent}}) not available, falling back to Codex
- âš ī¸ Preferred agent ({{preferred_agent}}) not available, falling back to Gemini
- âš ī¸ Preferred agent ({{preferred_agent}}) not available, falling back to Claude
- 🤖 External agent selected: {{external_agent_cmd}} - will delegate code review
- 📋 No external agent available - will use built-in adversarial review

**Build the review plan:**

- Extract ALL Acceptance Criteria from the story.
- Extract ALL Tasks/Subtasks with completion status ([x] vs [ ]).
- From Dev Agent Record → File List, compile the list of claimed changes.
- Create the review plan:
  1. **AC Validation**: Verify each AC is actually implemented
  2. **Task Audit**: Verify each [x] task is really done
  3. **Code Quality**: Security, performance, maintainability
  4. **Test Quality**: Real tests vs placeholder assertions

VALIDATE EVERY CLAIM - check git reality against story claims. Review git vs story File List discrepancies:

1. **Files changed but not in the story File List** → MEDIUM finding (incomplete documentation)
2. **Story lists files but no git changes** → HIGH finding (false claims)
3. **Uncommitted changes not documented** → MEDIUM finding (transparency issue)

Create a comprehensive review file list from the story File List plus the git changes, and store the git discrepancy findings in {{git_findings}}.

**Invoke the external agent:**

If {{use_external_agent}} == true, you MUST invoke the external agent via CLI. DO NOT perform your own code review - delegate to the external agent.

🔄 Invoking {{external_agent_cmd}} CLI for adversarial code review...

🚨 USE EXACT COMMAND SYNTAX - DO NOT MODIFY OR SIMPLIFY 🚨
Copy the invoke-bash cmd attribute EXACTLY as written below. DO NOT remove flags, reorder arguments, or "improve" the command.
- Load {{external_prompt_file}} content into {{external_prompt}}.
- CODEX: use `codex exec` with a read-only sandbox and full-auto.
- GEMINI: use `gemini -p` with the prompt from the file and `--yolo`.
- CLAUDE: use `claude -p` with the prompt from the file.

If the CLI fails:
âš ī¸ External agent CLI failed (exit code: {{bash_exit_code}}), falling back to built-in review
Error: {{bash_stderr}}

If the CLI succeeds:
- Parse {{external_findings}} into structured HIGH/MEDIUM/LOW lists.
- Merge {{git_findings}} with {{external_findings}} into {{all_findings}}.
- ✅ External review complete - {{external_agent_cmd}} CLI findings received

**Built-in adversarial review (fallback only):**

This section should ONLY execute if ALL external agents failed detection or invocation. If you are here but an external agent was available, you have violated the workflow rules.

âš ī¸ No external agent available - performing built-in adversarial review

For EACH Acceptance Criterion:
1. Read the AC requirement
2. Search implementation files for evidence
3. Determine: IMPLEMENTED, PARTIAL, or MISSING
4. If MISSING/PARTIAL → HIGH severity finding

For EACH task marked [x]:
1. Read the task description
2. Search files for evidence it was actually done
3. **CRITICAL**: if marked [x] but NOT DONE → CRITICAL finding
4. Record specific proof (file:line)

For EACH file in the comprehensive review list, check:
1. **Security**: injection risks, missing validation, auth issues
2. **Performance**: N+1 queries, inefficient loops, missing caching
3. **Error Handling**: missing try/catch, poor error messages
4. **Code Quality**: complex functions, magic numbers, poor naming
5. **Test Quality**: are the tests real assertions or placeholders?

Merge {{git_findings}} with the built-in findings into {{all_findings}}.

NOT LOOKING HARD ENOUGH - find more problems!
Re-examine the code for:
- Edge cases and null handling
- Architecture violations
- Documentation gaps
- Integration issues
- Dependency problems
- Git commit message quality (if applicable)

Find at least 3 more specific, actionable issues. Categorize findings: HIGH (must fix), MEDIUM (should fix), LOW (nice to fix). Set {{fixed_count}} = 0 and {{action_count}} = 0.

**Report findings:**

**đŸ”Ĩ CODE REVIEW FINDINGS, {user_name}!**

**Story:** {{story_file}}
**Review Method:** {{external_agent_cmd}} OR built-in
**Git vs Story Discrepancies:** {{git_discrepancy_count}} found
**Issues Found:** {{high_count}} High, {{medium_count}} Medium, {{low_count}} Low

## 🔴 CRITICAL ISSUES
- Tasks marked [x] but not actually implemented
- Acceptance Criteria not implemented
- Story claims files changed but no git evidence
- Security vulnerabilities

## 🟡 MEDIUM ISSUES
- Files changed but not documented in the story File List
- Uncommitted changes not tracked
- Performance problems
- Poor test coverage/quality
- Code maintainability issues

## đŸŸĸ LOW ISSUES
- Code style improvements
- Documentation gaps
- Git commit message quality

What should I do with these issues?
1. **Fix them automatically** - I'll update the code and tests
2. **Create action items** - Add to story Tasks/Subtasks for later
3. **Show me details** - Deep dive into specific issues

Choose [1], [2], or specify which issue to examine.

**If fixing automatically:**
- Fix all HIGH and MEDIUM issues in the code.
- Add/update tests as needed.
- Update the File List in the story if files changed.
- Update the story Dev Agent Record with the fixes applied.
- Set {{fixed_count}} = number of HIGH and MEDIUM issues fixed; set {{action_count}} = 0.

**If creating action items:**
- Add a "Review Follow-ups (AI)" subsection to Tasks/Subtasks.
- For each issue: `- [ ] [AI-Review][Severity] Description [file:line]`
- Set {{action_count}} = number of action items created; set {{fixed_count}} = 0.

**If showing details:**
- Show a detailed explanation with code examples, then return to the fix decision.

**Update story status:**
- If the findings are resolved: set {{new_status}} = "done" and update the story Status field to "done".
- Otherwise: set {{new_status}} = "in-progress" and update the story Status field to "in-progress".
- Save the story file.

**Sync sprint status:**
- If sprint tracking is configured, set {{current_sprint_status}} = "enabled"; otherwise set {{current_sprint_status}} = "no-sprint-tracking".
- Load the FULL file: {sprint_status}
- Find the development_status key matching {{story_key}}.
- If {{new_status}} == "done": update development_status[{{story_key}}] = "done", save the file preserving ALL comments and structure, and report:
  ✅ Sprint status synced: {{story_key}} → done
- If {{new_status}} == "in-progress": update development_status[{{story_key}}] = "in-progress", save the file preserving ALL comments and structure, and report:
  🔄 Sprint status synced: {{story_key}} → in-progress
- If the key is not found: âš ī¸ Story file updated, but sprint-status sync failed: {{story_key}} not found in sprint-status.yaml
- If no sprint tracking is configured: â„šī¸ Story status updated (no sprint tracking configured)

**Final report:**

**✅ Review Complete!**

**Story Status:** {{new_status}}
**Issues Fixed:** {{fixed_count}}
**Action Items Created:** {{action_count}}

{{#if new_status == "done"}}Story is ready for next work!{{else}}Address the action items and continue development.{{/if}}
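The comment-preserving sprint-status sync can be sketched as a targeted in-place edit. This is a hypothetical sketch: the flat `development_status:` mapping with two-space indentation is an assumed file shape, the sample path stands in for {sprint_status}, and GNU `sed -i` is assumed (BSD/macOS sed needs `-i ''`).

```shell
# Hypothetical sketch: update development_status[{{story_key}}] in place
# without disturbing comments or other keys. File shape is an assumed example.
story_key="1-2-user-authentication"
new_status="done"
status_file="/tmp/sprint-status.yaml"   # stands in for {sprint_status}
cat > "$status_file" <<'EOF'
# Sprint 3 tracking - do not reorder
development_status:
  1-1-login-form: done
  1-2-user-authentication: in-progress
EOF
if grep -q "^  ${story_key}:" "$status_file"; then
  # Replace only the value; the key, indentation, and comments survive
  sed -i "s/^\(  ${story_key}:\).*/\1 ${new_status}/" "$status_file"
  echo "✅ Sprint status synced: ${story_key} → ${new_status}"
else
  echo "âš ī¸ Story file updated, but sprint-status sync failed: ${story_key} not found"
fi
```

Editing a single line rather than re-serializing the YAML is what preserves comments and key ordering, which is the requirement the workflow states.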