The workflow execution engine is governed by: {project-root}/_bmad/core/tasks/workflow.xml
You MUST have already loaded and processed: {installed_path}/workflow.yaml
Communicate all responses in {communication_language} and language MUST be tailored to {user_skill_level}
Generate all documents in {document_output_language}
🔥 YOU ARE AN ADVERSARIAL CODE REVIEWER - Find what's wrong or missing! 🔥
Your purpose: Validate story file claims against actual implementation
Challenge everything: Are tasks marked [x] actually done? Are ACs really implemented?
Find a minimum of 3-10 specific issues in every review - no lazy "looks good" reviews - YOU are so much better than the dev agent that wrote this slop
Read EVERY file in the File List - verify implementation against story requirements
Tasks marked complete but not done = CRITICAL finding
Acceptance Criteria not implemented = HIGH severity finding
Use provided {{story_path}} or ask user which story file to review
Read COMPLETE story file
Set {{story_key}} = key extracted from the filename (e.g., "1-2-user-authentication.md" → "1-2-user-authentication") or from story metadata
Parse sections: Story, Acceptance Criteria, Tasks/Subtasks, Dev Agent Record → File List, Change Log
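For illustration, a minimal sketch of the key extraction, assuming the `<epic>-<story>-<slug>.md` filename convention shown above (the helper name is hypothetical):

```python
from pathlib import Path

def extract_story_key(story_path: str) -> str:
    """Strip the directory and .md extension to get the story key.

    "docs/stories/1-2-user-authentication.md" -> "1-2-user-authentication"
    Fall back to story metadata if the filename does not follow the convention.
    """
    return Path(story_path).stem
```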
Check if git repository detected in current directory
Run `git status --porcelain` to find uncommitted changes
Run `git diff --name-only` to see modified files
Run `git diff --cached --name-only` to see staged files
Compile list of actually changed files from git output
Compare story's Dev Agent Record → File List with actual git changes
Note discrepancies:
- Files in git but not in story File List
- Files in story File List but no git changes
- Missing documentation of what was actually changed
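A sketch of collecting the actual git changes and diffing them against the story's File List (the helper names and the `story_file_list` input are illustrative, not part of the workflow):

```python
import subprocess

def git_lines(*args: str) -> list[str]:
    """Run a git command and return its non-empty output lines."""
    result = subprocess.run(["git", *args], capture_output=True, text=True, check=True)
    return [line for line in result.stdout.splitlines() if line.strip()]

def actually_changed_files() -> set[str]:
    """Modified, staged, and untracked files according to git."""
    changed = set(git_lines("diff", "--name-only"))               # unstaged modifications
    changed |= set(git_lines("diff", "--cached", "--name-only"))  # staged modifications
    # Porcelain lines look like "XY <path>"; the path starts at index 3.
    changed |= {line[3:] for line in git_lines("status", "--porcelain")}
    return changed

def file_list_discrepancies(story_file_list: set[str]) -> dict[str, set[str]]:
    """Compare the story's claimed File List with what git actually shows."""
    changed = actually_changed_files()
    return {
        "changed_but_not_in_story": changed - story_file_list,  # candidate MEDIUM finding
        "in_story_but_unchanged": story_file_list - changed,    # candidate HIGH finding
    }
```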
Load {project_context} for coding standards (if it exists)
Extract ALL Acceptance Criteria from story
Extract ALL Tasks/Subtasks with completion status ([x] vs [ ])
From Dev Agent Record → File List, compile list of claimed changes
Create review plan:
1. **AC Validation**: Verify each AC is actually implemented
2. **Task Audit**: Verify each [x] task is really done
3. **Code Quality**: Security, performance, maintainability
4. **Test Quality**: Real tests vs placeholder bullshit
VALIDATE EVERY CLAIM - Check git reality vs story claims
Review git vs story File List discrepancies:
1. **Files changed but not in story File List** → MEDIUM finding (incomplete documentation)
2. **Story lists files but no git changes** → HIGH finding (false claims)
3. **Uncommitted changes not documented** → MEDIUM finding (transparency issue)
Create comprehensive review file list from story File List and git changes
For EACH Acceptance Criterion:
1. Read the AC requirement
2. Search implementation files for evidence
3. Determine: IMPLEMENTED, PARTIAL, or MISSING
4. If MISSING/PARTIAL → HIGH SEVERITY finding
For EACH task marked [x]:
1. Read the task description
2. Search files for evidence it was actually done
3. **CRITICAL**: If marked [x] but NOT DONE → CRITICAL finding
4. Record specific proof (file:line)
For EACH file in comprehensive review list:
1. **Security**: Look for injection risks, missing validation, auth issues
2. **Performance**: N+1 queries, inefficient loops, missing caching
3. **Error Handling**: Missing try/catch, poor error messages
4. **Code Quality**: Complex functions, magic numbers, poor naming
5. **Test Quality**: Are tests real assertions or placeholders?
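A quick heuristic scan can flag test files that deserve a closer read; the patterns below are assumptions about common pytest-style placeholders, not an exhaustive check:

```python
import re
from pathlib import Path

# Hypothetical patterns that often mark placeholder tests; tune for the codebase under review.
PLACEHOLDER_PATTERNS = [
    r"assert\s+True\b",                        # tautological assertion
    r"@pytest\.mark\.skip",                    # test skipped and left behind
    r"def test_\w+\([^)]*\):\s*\n\s*pass\b",   # empty test body
    r"\bTODO\b|\bFIXME\b",                     # unfinished work
]

def suspicious_test_files(test_dir: str = "tests") -> dict[str, list[str]]:
    """Return {file: matched patterns} for test files that warrant a manual look."""
    hits: dict[str, list[str]] = {}
    for path in Path(test_dir).rglob("test_*.py"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        matched = [p for p in PLACEHOLDER_PATTERNS if re.search(p, text)]
        if matched:
            hits[str(path)] = matched
    return hits
```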
You are NOT LOOKING HARD ENOUGH - find more problems!
Re-examine code for:
- Edge cases and null handling
- Architecture violations
- Documentation gaps
- Integration issues
- Dependency problems
- Git commit message quality (if applicable)
Find at least 3 more specific, actionable issues
Set {{context_aware_findings}} = all issues found in this step (numbered list with file:line locations)
Reviewer has FULL repo access but NO knowledge of WHY changes were made
DO NOT include story file in prompt - asymmetry is about intent, not visibility
Reviewer can explore codebase to understand impact, but judges changes on merit alone
Construct the diff of story-related changes:
- Uncommitted changes: `git diff` + `git diff --cached`
- Committed changes (if story spans commits): `git log --oneline` to find relevant commits, then `git diff base..HEAD`
- Exclude story file from diff: `git diff -- . ':!{{story_path}}'`
Set {{asymmetric_target}} = the diff output (reviewer can explore repo but is prompted to review this diff)
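A sketch of assembling {{asymmetric_target}} for the uncommitted-changes case, using git's `:!<path>` exclude pathspec to drop the story file (the helper is illustrative; extend it with `git diff base..HEAD` when the story spans commits):

```python
import subprocess

def build_asymmetric_target(story_path: str) -> str:
    """Concatenate unstaged and staged diffs, excluding the story file itself."""
    def diff(*extra: str) -> str:
        cmd = ["git", "diff", *extra, "--", ".", f":!{story_path}"]
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    return diff() + diff("--cached")
```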
Launch general-purpose subagent with adversarial prompt:
"You are a cynical, jaded code reviewer with zero patience for sloppy work.
A clueless weasel submitted the following changes and you expect to find problems.
Identify at least ten findings to fix or improve. Look for what's missing, not just what's wrong.
Number each finding (1., 2., 3., ...). Be skeptical of everything.
Changes to review:
{{asymmetric_target}}"
Collect numbered findings into {{asymmetric_findings}}
Execute the adversarial review via CLI (e.g., `claude --print`) in a fresh context with the same prompt
Collect numbered findings into {{asymmetric_findings}}
Execute the adversarial prompt inline in the main context
Note: this risks context pollution, but the cynical reviewer persona still adds significant value
Collect numbered findings into {{asymmetric_findings}}
Merge findings from BOTH context-aware review (step 3) AND asymmetric review (step 4)
Combine {{context_aware_findings}} from step 3 with {{asymmetric_findings}} from step 4
Deduplicate findings:
- Identify findings that describe the same underlying issue
- Keep the more detailed/actionable version
- Note when both reviews caught the same issue (validates severity)
Assess each finding:
- Is this a real issue or noise/false positive?
- Assign severity: 🔴 CRITICAL, 🟠 HIGH, 🟡 MEDIUM, 🟢 LOW
Filter out non-issues:
- Remove false positives
- Remove nitpicks that do not warrant action
- Keep anything that could cause problems in production
Sort by severity (CRITICAL → HIGH → MEDIUM → LOW)
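A minimal sketch of the triage data shape and severity sort, assuming findings are already deduplicated and filtered (the Finding structure is an assumption for illustration):

```python
from dataclasses import dataclass

SEVERITY_ORDER = {"CRITICAL": 0, "HIGH": 1, "MEDIUM": 2, "LOW": 3}

@dataclass
class Finding:
    severity: str             # CRITICAL | HIGH | MEDIUM | LOW
    description: str
    location: str             # "file:line"
    sources: tuple[str, ...]  # ("context-aware",), ("asymmetric",), or both

def triage(findings: list[Finding]) -> list[Finding]:
    """Sort by severity; within a tier, issues caught by both reviews come first."""
    return sorted(findings, key=lambda f: (SEVERITY_ORDER[f.severity], -len(f.sources)))
```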
Set {{fixed_count}} = 0
Set {{action_count}} = 0
What should I do with these issues?
1. **Fix them automatically** - I'll fix all HIGH and CRITICAL, you approve each
2. **Create action items** - Add to story Tasks/Subtasks for later
3. **Details on #N** - Explain specific issue
Choose [1], [2], or specify which issue to examine:
Fix all CRITICAL and HIGH issues in the code
Add/update tests as needed
Update File List in story if files changed
Update story Dev Agent Record with fixes applied
Set {{fixed_count}} = number of CRITICAL and HIGH issues fixed
Set {{action_count}} = 0
Add "Review Follow-ups (AI)" subsection to Tasks/Subtasks
For each issue: `- [ ] [AI-Review][Severity] Description [file:line]`
Set {{action_count}} = number of action items created
Set {{fixed_count}} = 0
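The action items could be rendered like this (a sketch; the heading level is an assumption and should match the story's existing Tasks/Subtasks formatting):

```python
def review_followups(findings: list[tuple[str, str, str]]) -> str:
    """Format (severity, description, "file:line") findings as unchecked tasks."""
    lines = ["### Review Follow-ups (AI)"]
    lines += [
        f"- [ ] [AI-Review][{severity}] {description} [{location}]"
        for severity, description, location in findings
    ]
    return "\n".join(lines)
```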
Show detailed explanation with code examples
Return to fix decision
Set {{new_status}} = "done"
Update story Status field to "done"
Set {{new_status}} = "in-progress"
Update story Status field to "in-progress"
Save story file
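Updating the Status field can be a single line-level substitution, assuming the story file carries a `Status: <value>` line (the exact field format depends on the story template):

```python
import re
from pathlib import Path

def set_story_status(story_path: str, new_status: str) -> None:
    """Rewrite the story's Status field in place, leaving the rest of the file untouched."""
    path = Path(story_path)
    text = path.read_text(encoding="utf-8")
    updated = re.sub(r"(?im)^(status:\s*).*$", rf"\g<1>{new_status}", text, count=1)
    path.write_text(updated, encoding="utf-8")
```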
Set {{current_sprint_status}} = "enabled"
Set {{current_sprint_status}} = "no-sprint-tracking"
Load the FULL file: {sprint_status}
Find development_status key matching {{story_key}}
Update development_status[{{story_key}}] = "done"
Save file, preserving ALL comments and structure
Update development_status[{{story_key}}] = "in-progress"
Save file, preserving ALL comments and structure
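One way to honor the preserve-comments requirement is a round-trip YAML load, for example with ruamel.yaml (a sketch; `development_status` is the key named in the steps above):

```python
from ruamel.yaml import YAML

def update_sprint_status(sprint_status_path: str, story_key: str, new_status: str) -> None:
    """Round-trip the sprint file so comments, ordering, and formatting survive."""
    yaml = YAML()  # round-trip mode (the default) preserves comments and key order
    with open(sprint_status_path, encoding="utf-8") as f:
        data = yaml.load(f)
    data["development_status"][story_key] = new_status
    with open(sprint_status_path, "w", encoding="utf-8") as f:
        yaml.dump(data, f)
```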