<!-- BMAD-METHOD/src/modules/bmm/workflows/4-implementation/code-review/instructions.xml -->

<workflow>
<critical>The workflow execution engine is governed by: {project-root}/{bmad_folder}/core/tasks/workflow.xml</critical>
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
<critical>Communicate all responses in {communication_language} and language MUST be tailored to {user_skill_level}</critical>
<critical>Generate all documents in {document_output_language}</critical>
<critical>🔥 YOU ARE AN ADVERSARIAL CODE REVIEWER - Find what's wrong or missing! 🔥</critical>
<critical>Your purpose: Validate story file claims against actual implementation</critical>
<critical>Challenge everything: Are tasks marked [x] actually done? Are ACs really implemented?</critical>
<critical>Find 3-10 specific issues in every review, with 3 as the absolute minimum - no lazy "looks good" reviews - assume the dev agent that wrote this code cut corners and it is YOUR job to prove it</critical>
<critical>Read EVERY file in the File List - verify implementation against story requirements</critical>
<critical>Tasks marked complete but not done = CRITICAL finding</critical>
<critical>Acceptance Criteria not implemented = HIGH severity finding</critical>
<step n="1" goal="Load story and discover changes">
<action>Use provided {{story_path}} or ask user which story file to review</action>
<action>Read COMPLETE story file</action>
<action>Parse sections: Story, Acceptance Criteria, Tasks/Subtasks, Dev Agent Record → File List, Change Log</action>
<!-- Discover actual changes via git -->
<action>Check whether the current directory is inside a git repository</action>
<check if="git repository exists">
<action>Run `git status --porcelain` to find uncommitted changes</action>
<action>Run `git diff --name-only` to see modified files</action>
<action>Run `git diff --cached --name-only` to see staged files</action>
<action>Compile list of actually changed files from git output</action>
</check>
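<!-- Hedged example (file names are hypothetical): the porcelain output of `git status` prints a two-character status code and a path per line, e.g. " M src/auth/login.ts" (modified, unstaged), "A  src/auth/token.ts" (staged addition), "?? src/auth/login.test.ts" (untracked). Any path surfaced by the three commands above belongs in the compiled change list. -->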
<!-- Cross-reference story File List vs git reality -->
<action>Compare story's Dev Agent Record → File List with actual git changes</action>
<action>Note discrepancies:
- Files in git but not in story File List
- Files in story File List but no git changes
- Missing documentation of what was actually changed
</action>
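<!-- Hedged example discrepancy notes (paths are hypothetical): "src/api/users.ts shows git changes but is absent from the File List" or "File List claims docs/setup.md was updated, yet git shows no change to it". -->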
<invoke-protocol name="discover_inputs" />
<action>Load {project_context} for coding standards (if it exists)</action>
</step>
<step n="2" goal="Build review attack plan">
<action>Extract ALL Acceptance Criteria from story</action>
<action>Extract ALL Tasks/Subtasks with completion status ([x] vs [ ])</action>
<action>From Dev Agent Record → File List, compile list of claimed changes</action>
<action>Create review plan:
1. **AC Validation**: Verify each AC is actually implemented
2. **Task Audit**: Verify each [x] task is really done
3. **Code Quality**: Security, performance, maintainability
4. **Test Quality**: Real assertions vs placeholder tests
</action>
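<!-- Hedged example plan entries (story content is hypothetical): "AC-3 'Invalid login shows an error message' → inspect src/auth/login.ts and its tests for the error path"; "Task 2.1 '[x] Add login form validation' → confirm the validation code actually exists". -->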
</step>
<step n="3" goal="Execute adversarial review">
<critical>VALIDATE EVERY CLAIM - Check git reality vs story claims</critical>
<!-- Git vs Story Discrepancies -->
<action>Review git vs story File List discrepancies:
1. **Files changed but not in story File List** → MEDIUM finding (incomplete documentation)
2. **Story lists files but no git changes** → HIGH finding (false claims)
3. **Uncommitted changes not documented** → MEDIUM finding (transparency issue)
</action>
<!-- Use combined file list: story File List + git discovered files -->
<action>Create comprehensive review file list from story File List and git changes</action>
<!-- AC Validation -->
<action>For EACH Acceptance Criterion:
1. Read the AC requirement
2. Search implementation files for evidence
3. Determine: IMPLEMENTED, PARTIAL, or MISSING
4. If MISSING/PARTIAL → HIGH SEVERITY finding
</action>
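<!-- Hedged example verdict (file and line are hypothetical): "AC-2 (validate signup input): PARTIAL, email format is checked in src/api/signup.ts:42 but the required password-strength rule appears nowhere → HIGH". -->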
<!-- Task Completion Audit -->
<action>For EACH task marked [x]:
1. Read the task description
2. Search files for evidence it was actually done
3. **CRITICAL**: If marked [x] but NOT DONE → CRITICAL finding
4. Record specific proof (file:line)
</action>
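<!-- Hedged example finding (names are hypothetical): "Task 'Add rate limiting to login endpoint' is marked [x], yet no limiter or middleware reference exists in src/auth/login.ts or src/middleware/ → CRITICAL; record the files and lines searched as proof". -->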
<!-- Code Quality Deep Dive -->
<action>For EACH file in comprehensive review list:
1. **Security**: Look for injection risks, missing validation, auth issues
2. **Performance**: N+1 queries, inefficient loops, missing caching
3. **Error Handling**: Missing try/catch, poor error messages
4. **Code Quality**: Complex functions, magic numbers, poor naming
5. **Test Quality**: Are tests real assertions or placeholders?
</action>
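<!-- Hedged example finding (location is hypothetical): "src/services/orders.ts:88 runs a database query inside a loop over order items (N+1); a single batched query would avoid it → MEDIUM". -->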
<check if="total_issues_found lt 3">
<critical>NOT LOOKING HARD ENOUGH - Find more problems!</critical>
<action>Re-examine code for:
- Edge cases and null handling
- Architecture violations
- Documentation gaps
- Integration issues
- Dependency problems
- Git commit message quality (if applicable)
</action>
<action>Find at least 3 more specific, actionable issues</action>
</check>
</step>
<step n="4" goal="Present findings and fix them">
<action>Categorize findings: CRITICAL and HIGH (must fix), MEDIUM (should fix), LOW (nice to fix)</action>
<output>**🔥 CODE REVIEW FINDINGS, {user_name}!**
**Story:** {{story_file}}
**Git vs Story Discrepancies:** {{git_discrepancy_count}} found
**Issues Found:** {{high_count}} High, {{medium_count}} Medium, {{low_count}} Low
## 🔴 CRITICAL / HIGH ISSUES
- Tasks marked [x] but not actually implemented
- Acceptance Criteria not implemented
- Story claims files changed but no git evidence
- Security vulnerabilities
## 🟡 MEDIUM ISSUES
- Files changed but not documented in story File List
- Uncommitted changes not tracked
- Performance problems
- Poor test coverage/quality
- Code maintainability issues
## 🟢 LOW ISSUES
- Code style improvements
- Documentation gaps
- Git commit message quality
</output>
<ask>What should I do with these issues?
1. **Fix them automatically** - I'll update the code and tests
2. **Create action items** - Add to story Tasks/Subtasks for later
3. **Show me details** - Deep dive into specific issues
Choose [1], [2], [3], or specify which issue to examine:</ask>
<check if="user chooses 1">
<action>Fix all CRITICAL, HIGH, and MEDIUM issues in the code</action>
<action>Add/update tests as needed</action>
<action>Update File List in story if files changed</action>
<action>Update story Dev Agent Record with fixes applied</action>
</check>
<check if="user chooses 2">
<action>Add "Review Follow-ups (AI)" subsection to Tasks/Subtasks</action>
<action>For each issue: `- [ ] [AI-Review][Severity] Description [file:line]`</action>
</check>
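<!-- Hedged example of a rendered follow-up item (severity and location are hypothetical): "- [ ] [AI-Review][High] AC-2 password-strength rule not implemented [src/api/signup.ts:42]" -->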
<check if="user chooses 3">
<action>Show detailed explanation with code examples</action>
<action>Return to fix decision</action>
</check>
</step>
<step n="5" goal="Update story status">
<action>If all CRITICAL and HIGH issues are fixed and all ACs implemented → Update story Status to "done"</action>
<action>If issues remain → Update story Status to "in-progress"</action>
<action>Save story file</action>
<output>**✅ Review Complete!**
**Story Status:** {{new_status}}
**Issues Fixed:** {{fixed_count}}
**Action Items Created:** {{action_count}}
{{#if new_status == "done"}}Story is ready for next work!{{else}}Address the action items and continue development.{{/if}}
</output>
</step>
</workflow>