This commit is contained in:
Alex Verkhovsky 2026-01-05 02:29:53 +00:00 committed by GitHub
commit f781ba6566
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
12 changed files with 1062 additions and 281 deletions

View File

@ -1,7 +1,7 @@
 <!-- if possible, run this in a separate subagent or process with read access to the project,
 but no context except the content to review -->
-<task id="_bmad/core/tasks/review-adversarial-general.xml" name="Adversarial Review (General)">
+<task id="_bmad/core/tasks/review-adversarial-general.xml" name="Adversarial Review">
 <objective>Cynically review content and produce findings</objective>
 <inputs>

View File

@ -1,227 +0,0 @@
<workflow>
<critical>The workflow execution engine is governed by: {project-root}/_bmad/core/tasks/workflow.xml</critical>
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
<critical>Communicate all responses in {communication_language} and language MUST be tailored to {user_skill_level}</critical>
<critical>Generate all documents in {document_output_language}</critical>
<critical>🔥 YOU ARE AN ADVERSARIAL CODE REVIEWER - Find what's wrong or missing! 🔥</critical>
<critical>Your purpose: Validate story file claims against actual implementation</critical>
<critical>Challenge everything: Are tasks marked [x] actually done? Are ACs really implemented?</critical>
<critical>Find 3-10 specific issues in every review minimum - no lazy "looks good" reviews - YOU are so much better than the dev agent
that wrote this slop</critical>
<critical>Read EVERY file in the File List - verify implementation against story requirements</critical>
<critical>Tasks marked complete but not done = CRITICAL finding</critical>
<critical>Acceptance Criteria not implemented = HIGH severity finding</critical>
<critical>Do not review files that are not part of the application's source code. Always exclude the _bmad/ and _bmad-output/ folders from the review. Always exclude IDE and CLI configuration folders like .cursor/ and .windsurf/ and .claude/</critical>
<step n="1" goal="Load story and discover changes">
<action>Use provided {{story_path}} or ask user which story file to review</action>
<action>Read COMPLETE story file</action>
<action>Set {{story_key}} = extracted key from filename (e.g., "1-2-user-authentication.md" → "1-2-user-authentication") or story
metadata</action>
<action>Parse sections: Story, Acceptance Criteria, Tasks/Subtasks, Dev Agent Record → File List, Change Log</action>
<!-- Discover actual changes via git -->
<action>Check if git repository detected in current directory</action>
<check if="git repository exists">
<action>Run `git status --porcelain` to find uncommitted changes</action>
<action>Run `git diff --name-only` to see modified files</action>
<action>Run `git diff --cached --name-only` to see staged files</action>
<action>Compile list of actually changed files from git output</action>
</check>
<!-- Cross-reference story File List vs git reality -->
<action>Compare story's Dev Agent Record → File List with actual git changes</action>
<action>Note discrepancies:
- Files in git but not in story File List
- Files in story File List but no git changes
- Missing documentation of what was actually changed
</action>
<invoke-protocol name="discover_inputs" />
<action>Load {project_context} for coding standards (if exists)</action>
</step>
<step n="2" goal="Build review attack plan">
<action>Extract ALL Acceptance Criteria from story</action>
<action>Extract ALL Tasks/Subtasks with completion status ([x] vs [ ])</action>
<action>From Dev Agent Record → File List, compile list of claimed changes</action>
<action>Create review plan:
1. **AC Validation**: Verify each AC is actually implemented
2. **Task Audit**: Verify each [x] task is really done
3. **Code Quality**: Security, performance, maintainability
4. **Test Quality**: Real tests vs placeholder bullshit
</action>
</step>
<step n="3" goal="Execute adversarial review">
<critical>VALIDATE EVERY CLAIM - Check git reality vs story claims</critical>
<!-- Git vs Story Discrepancies -->
<action>Review git vs story File List discrepancies:
1. **Files changed but not in story File List** → MEDIUM finding (incomplete documentation)
2. **Story lists files but no git changes** → HIGH finding (false claims)
3. **Uncommitted changes not documented** → MEDIUM finding (transparency issue)
</action>
<!-- Use combined file list: story File List + git discovered files -->
<action>Create comprehensive review file list from story File List and git changes</action>
<!-- AC Validation -->
<action>For EACH Acceptance Criterion:
1. Read the AC requirement
2. Search implementation files for evidence
3. Determine: IMPLEMENTED, PARTIAL, or MISSING
4. If MISSING/PARTIAL → HIGH SEVERITY finding
</action>
<!-- Task Completion Audit -->
<action>For EACH task marked [x]:
1. Read the task description
2. Search files for evidence it was actually done
3. **CRITICAL**: If marked [x] but NOT DONE → CRITICAL finding
4. Record specific proof (file:line)
</action>
<!-- Code Quality Deep Dive -->
<action>For EACH file in comprehensive review list:
1. **Security**: Look for injection risks, missing validation, auth issues
2. **Performance**: N+1 queries, inefficient loops, missing caching
3. **Error Handling**: Missing try/catch, poor error messages
4. **Code Quality**: Complex functions, magic numbers, poor naming
5. **Test Quality**: Are tests real assertions or placeholders?
</action>
<check if="total_issues_found lt 3">
<critical>NOT LOOKING HARD ENOUGH - Find more problems!</critical>
<action>Re-examine code for:
- Edge cases and null handling
- Architecture violations
- Documentation gaps
- Integration issues
- Dependency problems
- Git commit message quality (if applicable)
</action>
<action>Find at least 3 more specific, actionable issues</action>
</check>
</step>
<step n="4" goal="Present findings and fix them">
<action>Categorize findings: HIGH (must fix), MEDIUM (should fix), LOW (nice to fix)</action>
<action>Set {{fixed_count}} = 0</action>
<action>Set {{action_count}} = 0</action>
<output>**🔥 CODE REVIEW FINDINGS, {user_name}!**
**Story:** {{story_file}}
**Git vs Story Discrepancies:** {{git_discrepancy_count}} found
**Issues Found:** {{high_count}} High, {{medium_count}} Medium, {{low_count}} Low
## 🔴 CRITICAL ISSUES
- Tasks marked [x] but not actually implemented
- Acceptance Criteria not implemented
- Story claims files changed but no git evidence
- Security vulnerabilities
## 🟡 MEDIUM ISSUES
- Files changed but not documented in story File List
- Uncommitted changes not tracked
- Performance problems
- Poor test coverage/quality
- Code maintainability issues
## 🟢 LOW ISSUES
- Code style improvements
- Documentation gaps
- Git commit message quality
</output>
<ask>What should I do with these issues?
1. **Fix them automatically** - I'll update the code and tests
2. **Create action items** - Add to story Tasks/Subtasks for later
3. **Show me details** - Deep dive into specific issues
Choose [1], [2], or specify which issue to examine:</ask>
<check if="user chooses 1">
<action>Fix all HIGH and MEDIUM issues in the code</action>
<action>Add/update tests as needed</action>
<action>Update File List in story if files changed</action>
<action>Update story Dev Agent Record with fixes applied</action>
<action>Set {{fixed_count}} = number of HIGH and MEDIUM issues fixed</action>
<action>Set {{action_count}} = 0</action>
</check>
<check if="user chooses 2">
<action>Add "Review Follow-ups (AI)" subsection to Tasks/Subtasks</action>
<action>For each issue: `- [ ] [AI-Review][Severity] Description [file:line]`</action>
<action>Set {{action_count}} = number of action items created</action>
<action>Set {{fixed_count}} = 0</action>
</check>
<check if="user chooses 3">
<action>Show detailed explanation with code examples</action>
<action>Return to fix decision</action>
</check>
</step>
<step n="5" goal="Update story status and sync sprint tracking">
<!-- Determine new status based on review outcome -->
<check if="all HIGH and MEDIUM issues fixed AND all ACs implemented">
<action>Set {{new_status}} = "done"</action>
<action>Update story Status field to "done"</action>
</check>
<check if="HIGH or MEDIUM issues remain OR ACs not fully implemented">
<action>Set {{new_status}} = "in-progress"</action>
<action>Update story Status field to "in-progress"</action>
</check>
<action>Save story file</action>
<!-- Determine sprint tracking status -->
<check if="{sprint_status} file exists">
<action>Set {{current_sprint_status}} = "enabled"</action>
</check>
<check if="{sprint_status} file does NOT exist">
<action>Set {{current_sprint_status}} = "no-sprint-tracking"</action>
</check>
<!-- Sync sprint-status.yaml when story status changes (only if sprint tracking enabled) -->
<check if="{{current_sprint_status}} != 'no-sprint-tracking'">
<action>Load the FULL file: {sprint_status}</action>
<action>Find development_status key matching {{story_key}}</action>
<check if="{{new_status}} == 'done'">
<action>Update development_status[{{story_key}}] = "done"</action>
<action>Save file, preserving ALL comments and structure</action>
<output>✅ Sprint status synced: {{story_key}} → done</output>
</check>
<check if="{{new_status}} == 'in-progress'">
<action>Update development_status[{{story_key}}] = "in-progress"</action>
<action>Save file, preserving ALL comments and structure</action>
<output>🔄 Sprint status synced: {{story_key}} → in-progress</output>
</check>
<check if="story key not found in sprint status">
<output>⚠️ Story file updated, but sprint-status sync failed: {{story_key}} not found in sprint-status.yaml</output>
</check>
</check>
<check if="{{current_sprint_status}} == 'no-sprint-tracking'">
<output> Story status updated (no sprint tracking configured)</output>
</check>
<output>**✅ Review Complete!**
**Story Status:** {{new_status}}
**Issues Fixed:** {{fixed_count}}
**Action Items Created:** {{action_count}}
{{#if new_status == "done"}}Code review complete!{{else}}Address the action items and continue development.{{/if}}
</output>
</step>
</workflow>

View File

@ -0,0 +1,119 @@
---
name: 'step-01-load-story'
description: 'Load story file, discover git changes, establish review context'
workflow_path: '{project-root}/_bmad/bmm/workflows/4-implementation/code-review'
thisStepFile: '{workflow_path}/steps/step-01-load-story.md'
nextStepFile: '{workflow_path}/steps/step-02-build-attack-plan.md'
---
# Step 1: Load Story and Discover Changes
**Goal:** Load story file, extract all sections, discover actual git changes for comparison.
---
## STATE VARIABLES (capture now, persist throughout)
These variables MUST be set in this step and available to all subsequent steps:
- `{story_path}` - Path to the story file being reviewed
- `{story_key}` - Story identifier (e.g., "1-2-user-authentication")
- `{story_file_list}` - Files claimed in story's Dev Agent Record → File List
- `{git_changed_files}` - Files actually changed according to git
- `{git_discrepancies}` - Differences between story claims and git reality
---
## EXECUTION SEQUENCE
### 1. Identify Story
**If `{story_path}` provided by user:**
- Use the provided path directly
**If NOT provided:**
- Ask user which story file to review
- Wait for response before proceeding
### 2. Load Story File
- Read COMPLETE story file from `{story_path}`
- Extract `{story_key}` from filename (e.g., "1-2-user-authentication.md" → "1-2-user-authentication") or story metadata
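The key extraction above amounts to dropping the directory and extension; a minimal sketch (the helper name is illustrative):

```python
from pathlib import Path

def extract_story_key(story_path: str) -> str:
    # "stories/1-2-user-authentication.md" -> "1-2-user-authentication"
    return Path(story_path).stem
```

If the filename does not yield a usable key, fall back to the story metadata as described above.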
### 3. Parse Story Sections
Extract and store:
- **Story**: Title, description, status
- **Acceptance Criteria**: All ACs with their requirements
- **Tasks/Subtasks**: All tasks with completion status ([x] vs [ ])
- **Dev Agent Record → File List**: Claimed file changes
- **Change Log**: History of modifications
Set `{story_file_list}` = list of files from Dev Agent Record → File List
### 4. Discover Git Changes
Check if git repository exists:
**If Git repo detected:**
```bash
git status --porcelain
git diff --name-only
git diff --cached --name-only
```
Compile `{git_changed_files}` = union of modified, staged, and new files
**If NOT a Git repo:**
Set `{git_changed_files}` = "NO_GIT"
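One way to compile that union, sketched under the assumption of porcelain v1 output (function names are illustrative):

```python
import subprocess

def parse_porcelain(output: str) -> set[str]:
    """Extract paths from `git status --porcelain` v1: two status chars, a space, then the path."""
    files = set()
    for line in output.splitlines():
        path = line[3:]
        if " -> " in path:                      # renames report "old -> new"; keep the new path
            path = path.split(" -> ", 1)[1]
        if path:
            files.add(path)
    return files

def git_changed_files() -> set[str]:
    out = subprocess.run(["git", "status", "--porcelain"],
                         capture_output=True, text=True, check=True).stdout
    return parse_porcelain(out)
```

Note that porcelain output already covers staged, unstaged, and untracked entries, so the two `git diff` calls above act mainly as a cross-check.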
### 5. Cross-Reference Story vs Git
Compare `{story_file_list}` with `{git_changed_files}`:
Set `{git_discrepancies}` with categories:
- **files_in_git_not_story**: Files changed in git but not in story File List
- **files_in_story_not_git**: Files in story File List but no git changes
- **uncommitted_undocumented**: Uncommitted changes not tracked in story
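The first two categories are plain set differences; a minimal sketch (the third category additionally needs the uncommitted-file detail from `git status`, so it is omitted here):

```python
def cross_reference(story_files: set[str], git_files: set[str]) -> dict[str, list[str]]:
    return {
        "files_in_git_not_story": sorted(git_files - story_files),
        "files_in_story_not_git": sorted(story_files - git_files),
    }
```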
### 6. Load Context
- Load `{project_context}` if exists (`**/project-context.md`)
- Invoke `discover_inputs` protocol to load architecture, UX, epic docs as needed
---
## NEXT STEP DIRECTIVE
**CRITICAL:** When this step completes, explicitly state:
"**NEXT:** Loading `step-02-build-attack-plan.md`"
---
## SUCCESS METRICS
- `{story_path}` identified and loaded
- `{story_key}` extracted
- All story sections parsed
- `{story_file_list}` compiled from Dev Agent Record
- `{git_changed_files}` discovered via git commands
- `{git_discrepancies}` calculated
- Context documents loaded
- Explicit NEXT directive provided
## FAILURE MODES
- Proceeding without story file loaded
- Missing `{story_key}` extraction
- Not parsing all story sections
- Skipping git change discovery
- Not calculating discrepancies
- No explicit NEXT directive at step completion

View File

@ -0,0 +1,149 @@
---
name: 'step-02-build-attack-plan'
description: 'Extract ACs and tasks, create comprehensive review plan for both phases'
workflow_path: '{project-root}/_bmad/bmm/workflows/4-implementation/code-review'
thisStepFile: '{workflow_path}/steps/step-02-build-attack-plan.md'
nextStepFile: '{workflow_path}/steps/step-03-context-aware-review.md'
---
# Step 2: Build Review Attack Plan
**Goal:** Extract all reviewable items from story and create attack plan for both review phases.
---
## AVAILABLE STATE
From previous step:
- `{story_path}` - Path to the story file
- `{story_key}` - Story identifier
- `{story_file_list}` - Files claimed in story
- `{git_changed_files}` - Files actually changed (git)
- `{git_discrepancies}` - Differences between claims and reality
---
## STATE VARIABLES (capture now)
- `{acceptance_criteria}` - All ACs extracted from story
- `{tasks_with_status}` - All tasks with their [x] or [ ] status
- `{comprehensive_file_list}` - Union of story files + git files
- `{review_attack_plan}` - Structured plan for both phases
---
## EXECUTION SEQUENCE
### 1. Extract Acceptance Criteria
Parse all Acceptance Criteria from story:
```
{acceptance_criteria} = [
{ id: "AC1", requirement: "...", testable: true/false },
{ id: "AC2", requirement: "...", testable: true/false },
...
]
```
Note any ACs that are vague or untestable.
### 2. Extract Tasks with Status
Parse all Tasks/Subtasks with completion markers:
```
{tasks_with_status} = [
{ id: "T1", description: "...", status: "complete" ([x]) or "incomplete" ([ ]) },
{ id: "T1.1", description: "...", status: "complete" or "incomplete" },
...
]
```
Flag any tasks marked complete [x] for verification.
### 3. Build Comprehensive File List
Merge `{story_file_list}` and `{git_changed_files}`:
```
{comprehensive_file_list} = union of:
- Files in story Dev Agent Record
- Files changed according to git
- Deduped and sorted
```
Exclude from review:
- `_bmad/`, `_bmad-output/`
- `.cursor/`, `.windsurf/`, `.claude/`
- IDE/editor config files
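The merge-and-exclude rule can be sketched as follows (prefix matching is an assumption; adjust if exclusions should match anywhere in the path):

```python
EXCLUDED_PREFIXES = ("_bmad/", "_bmad-output/", ".cursor/", ".windsurf/", ".claude/")

def build_review_list(story_files: list[str], git_files: list[str]) -> list[str]:
    merged = set(story_files) | set(git_files)          # union of both sources, deduped
    return sorted(p for p in merged if not p.startswith(EXCLUDED_PREFIXES))
```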
### 4. Create Review Attack Plan
Structure the `{review_attack_plan}`:
```
PHASE 1: Context-Aware Review (Step 3)
├── Git vs Story Discrepancies
│ └── {git_discrepancies} items
├── AC Validation
│ └── {acceptance_criteria} items to verify
├── Task Completion Audit
│ └── {tasks_with_status} marked [x] to verify
└── Code Quality Review
└── {comprehensive_file_list} files to review
PHASE 2: Adversarial Asymmetric Review (Step 4)
└── Context-independent diff review
└── Diff constructed from git changes
└── Story file EXCLUDED from reviewer context
└── Pure code quality assessment
```
### 5. Preview Attack Plan
Present to user (brief summary):
```
**Review Attack Plan**
**Story:** {story_key}
**ACs to verify:** {count}
**Tasks marked complete:** {count}
**Files to review:** {count}
**Git discrepancies detected:** {count}
Proceeding with dual-phase review...
```
---
## NEXT STEP DIRECTIVE
**CRITICAL:** When this step completes, explicitly state:
"**NEXT:** Loading `step-03-context-aware-review.md`"
---
## SUCCESS METRICS
- All ACs extracted with testability assessment
- All tasks extracted with completion status
- Comprehensive file list built (story + git)
- Exclusions applied correctly
- Attack plan structured for both phases
- Summary presented to user
- Explicit NEXT directive provided
## FAILURE MODES
- Missing AC extraction
- Not capturing task completion status
- Forgetting to merge story + git files
- Not excluding IDE/config directories
- Skipping attack plan structure
- No explicit NEXT directive at step completion

View File

@ -0,0 +1,175 @@
---
name: 'step-03-context-aware-review'
description: 'Story-aware validation: verify ACs, audit task completion, check git discrepancies'
workflow_path: '{project-root}/_bmad/bmm/workflows/4-implementation/code-review'
thisStepFile: '{workflow_path}/steps/step-03-context-aware-review.md'
nextStepFile: '{workflow_path}/steps/step-04-adversarial-review.md'
---
# Step 3: Context-Aware Review
**Goal:** Perform story-aware validation - verify AC implementation, audit task completion, review code quality with full story context.
<critical>VALIDATE EVERY CLAIM - Check git reality vs story claims</critical>
<critical>You KNOW the story requirements - use that knowledge to find gaps</critical>
---
## AVAILABLE STATE
From previous steps:
- `{story_path}`, `{story_key}`
- `{story_file_list}`, `{git_changed_files}`, `{git_discrepancies}`
- `{acceptance_criteria}`, `{tasks_with_status}`
- `{comprehensive_file_list}`, `{review_attack_plan}`
---
## STATE VARIABLE (capture now)
- `{context_aware_findings}` - All findings from this phase
Initialize `{context_aware_findings}` as empty list.
---
## EXECUTION SEQUENCE
### 1. Git vs Story Discrepancies
Review `{git_discrepancies}` and create findings:
| Discrepancy Type | Severity |
| --- | --- |
| Files changed but not in story File List | Medium |
| Story lists files but no git changes | High |
| Uncommitted changes not documented | Medium |
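The table maps directly to a lookup; a sketch of building one finding record (field names follow the template in this step):

```python
DISCREPANCY_SEVERITY = {
    "files_in_git_not_story": "Medium",
    "files_in_story_not_git": "High",
    "uncommitted_undocumented": "Medium",
}

def discrepancy_finding(kind: str, description: str, evidence: str) -> dict:
    # Deliberately no "id" field: IDs are assigned after the step-5 merge.
    return {
        "source": "git-discrepancy",
        "severity": DISCREPANCY_SEVERITY[kind],
        "description": description,
        "evidence": evidence,
    }
```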
For each discrepancy, add to `{context_aware_findings}` (no IDs yet - assigned after merge):
```
{
source: "git-discrepancy",
severity: "...",
description: "...",
evidence: "file: X, git says: Y, story says: Z"
}
```
### 2. Acceptance Criteria Validation
For EACH AC in `{acceptance_criteria}`:
1. Read the AC requirement
2. Search implementation files in `{comprehensive_file_list}` for evidence
3. Determine status: IMPLEMENTED, PARTIAL, or MISSING
4. If PARTIAL or MISSING → add High severity finding
Add to `{context_aware_findings}`:
```
{
source: "ac-validation",
severity: "High",
description: "AC {id} not fully implemented: {details}",
evidence: "Expected: {ac}, Found: {what_was_found}"
}
```
### 3. Task Completion Audit
For EACH task marked [x] in `{tasks_with_status}`:
1. Read the task description
2. Search files for evidence it was actually done
3. **Critical**: If marked [x] but NOT DONE → Critical finding
4. Record specific proof (file:line) if done
Add to `{context_aware_findings}` if false:
```
{
source: "task-audit",
severity: "Critical",
description: "Task marked complete but not implemented: {task}",
evidence: "Searched: {files}, Found: no evidence of {expected}"
}
```
### 4. Code Quality Review (Context-Aware)
For EACH file in `{comprehensive_file_list}`:
Review with STORY CONTEXT (you know what was supposed to be built):
- **Security**: Missing validation for AC-specified inputs?
- **Performance**: Story mentioned scale requirements met?
- **Error Handling**: Edge cases from AC covered?
- **Test Quality**: Tests actually verify ACs or just placeholders?
- **Architecture Compliance**: Follows patterns in architecture doc?
Add findings to `{context_aware_findings}` with appropriate severity.
### 5. Minimum Finding Check
<critical>If total findings < 3, NOT LOOKING HARD ENOUGH</critical>
Re-examine for:
- Edge cases not covered by implementation
- Documentation gaps
- Integration issues with other components
- Dependency problems
- Comments missing for complex logic
---
## PHASE 1 SUMMARY
Present context-aware findings:
```
**Phase 1: Context-Aware Review Complete**
**Findings:** {count}
- Critical: {count}
- High: {count}
- Medium: {count}
- Low: {count}
Proceeding to Phase 2: Adversarial Review...
```
Store `{context_aware_findings}` for consolidation in step 5.
---
## NEXT STEP DIRECTIVE
**CRITICAL:** When this step completes, explicitly state:
"**NEXT:** Loading `step-04-adversarial-review.md`"
---
## SUCCESS METRICS
- All git discrepancies reviewed and findings created
- Every AC checked for implementation evidence
- Every [x] task verified with proof
- Code quality reviewed with story context
- Minimum 3 findings (push harder if not)
- `{context_aware_findings}` populated
- Phase summary presented
- Explicit NEXT directive provided
## FAILURE MODES
- Accepting "looks good" with < 3 findings
- Not verifying [x] tasks with actual evidence
- Missing AC validation
- Ignoring git discrepancies
- Not storing findings for consolidation
- No explicit NEXT directive at step completion

View File

@ -0,0 +1,159 @@
---
name: 'step-04-adversarial-review'
description: 'Context-independent adversarial diff review via subagent - no story knowledge'
workflow_path: '{project-root}/_bmad/bmm/workflows/4-implementation/code-review'
thisStepFile: '{workflow_path}/steps/step-04-adversarial-review.md'
nextStepFile: '{workflow_path}/steps/step-05-consolidate-findings.md'
---
# Step 4: Adversarial Review (Information Asymmetric)
**Goal:** Perform context-independent adversarial review of code changes. Reviewer sees ONLY the diff - no story, no ACs, no context about WHY changes were made.
<critical>Reviewer has FULL repo access but NO knowledge of WHY changes were made</critical>
<critical>DO NOT include story file in prompt - asymmetry is about intent, not visibility</critical>
<critical>This catches issues a fresh reviewer would find that story-biased review might miss</critical>
---
## AVAILABLE STATE
From previous steps:
- `{story_path}`, `{story_key}`
- `{file_list}` - Files listed in story's File List section
- `{context_aware_findings}` - Findings from Phase 1
---
## STATE VARIABLE (capture now)
- `{baseline_commit}` - From story file Dev Agent Record
- `{diff_output}` - Complete diff of changes
- `{asymmetric_findings}` - Findings from adversarial review
---
## EXECUTION SEQUENCE
### 1. Construct Diff
Build complete diff of all changes for this story.
**Step 1a: Read baseline from story file**
Extract `Baseline Commit` from the story file's Dev Agent Record section.
- If found and not "NO_GIT": use as `{baseline_commit}`
- If "NO_GIT" or missing: proceed to fallback
**Step 1b: Construct diff (with baseline)**
If `{baseline_commit}` is a valid commit hash:
```bash
git diff {baseline_commit} -- ':!{implementation_artifacts}'
```
This captures all changes (committed + uncommitted) since dev-story started.
**Step 1c: Fallback (no baseline)**
If no baseline available, review current state of files in `{file_list}`:
- Read each file listed in the story's File List section
- Review as full file content (not a diff)
**Include in `{diff_output}`:**
- All modified tracked files (except files in `{implementation_artifacts}` - asymmetry requires hiding intent)
- All new files created for this story
- Full content for new files
**Note:** Do NOT `git add` anything - this is read-only inspection.
### 2. Invoke Adversarial Review
With `{diff_output}` constructed, invoke the review task. If possible, use information asymmetry: run this step, and only it, in a separate subagent or process with read access to the project, but no context except the `{diff_output}`.
```xml
<invoke-task>Review {diff_output} using {project-root}/_bmad/core/tasks/review-adversarial-general.xml</invoke-task>
```
**Platform fallback:** If task invocation not available, load the task file and execute its instructions inline, passing `{diff_output}` as the content.
The task should: review `{diff_output}` and return a list of findings.
### 3. Process Adversarial Findings
Capture findings from adversarial review.
**If zero findings:** HALT - this is suspicious. Re-analyze or ask for guidance.
Evaluate severity (Critical, High, Medium, Low) and validity (Real, Noise, Undecided).
Add each finding to `{asymmetric_findings}` (no IDs yet - assigned after merge):
```
{
source: "adversarial",
severity: "...",
validity: "...",
description: "...",
location: "file:line (if applicable)"
}
```
### 4. Phase 2 Summary
Present adversarial findings:
```
**Phase 2: Adversarial Review Complete**
**Reviewer Context:** Pure diff review (no story knowledge)
**Findings:** {count}
- Critical: {count}
- High: {count}
- Medium: {count}
- Low: {count}
**Validity Assessment:**
- Real: {count}
- Noise: {count}
- Undecided: {count}
Proceeding to findings consolidation...
```
---
## NEXT STEP DIRECTIVE
**CRITICAL:** When this step completes, explicitly state:
"**NEXT:** Loading `step-05-consolidate-findings.md`"
---
## SUCCESS METRICS
- Diff constructed from correct source (uncommitted or commits)
- Story file excluded from diff
- Task invoked with diff as input
- Adversarial review executed
- Findings captured with severity and validity
- `{asymmetric_findings}` populated
- Phase summary presented
- Explicit NEXT directive provided
## FAILURE MODES
- Including story file in diff (breaks asymmetry)
- Skipping adversarial review entirely
- Accepting zero findings without halt
- Invoking task without providing diff input
- Missing severity/validity classification
- Not storing findings for consolidation
- No explicit NEXT directive at step completion

View File

@ -0,0 +1,162 @@
---
name: 'step-05-consolidate-findings'
description: 'Merge and deduplicate findings from both review phases'
workflow_path: '{project-root}/_bmad/bmm/workflows/4-implementation/code-review'
thisStepFile: '{workflow_path}/steps/step-05-consolidate-findings.md'
nextStepFile: '{workflow_path}/steps/step-06-resolve-and-update.md'
---
# Step 5: Consolidate Findings
**Goal:** Merge findings from context-aware review (Phase 1) and adversarial review (Phase 2), deduplicate, and present unified findings table.
---
## AVAILABLE STATE
From previous steps:
- `{story_path}`, `{story_key}`
- `{context_aware_findings}` - Findings from Phase 1 (step 3)
- `{asymmetric_findings}` - Findings from Phase 2 (step 4)
---
## STATE VARIABLE (capture now)
- `{consolidated_findings}` - Merged, deduplicated findings
---
## EXECUTION SEQUENCE
### 1. Merge All Findings
Combine both finding lists:
```
all_findings = {context_aware_findings} + {asymmetric_findings}
```
### 2. Deduplicate Findings
Identify duplicates (same underlying issue found by both phases):
**Duplicate Detection Criteria:**
- Same file + same line range
- Same issue type (e.g., both about error handling in same function)
- Overlapping descriptions
**Resolution Rule:**
Keep the MORE DETAILED version:
- If context-aware finding has AC reference → keep that
- If adversarial finding has better technical detail → keep that
- When in doubt, keep context-aware (has more context)
Note which findings were merged (for transparency in the summary).
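A minimal dedup sketch; real duplicate detection is a judgment call, so the key below (location plus a description prefix) is only a crude approximation:

```python
def dedupe(findings: list[dict]) -> list[dict]:
    kept: dict = {}
    for f in findings:
        key = (f.get("location"), f["description"].lower()[:40])
        prev = kept.get(key)
        if prev is None:
            kept[key] = f
        elif prev["source"] == "adversarial" and f["source"] != "adversarial":
            kept[key] = f    # resolution rule: prefer the context-aware duplicate
    return list(kept.values())
```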
### 3. Normalize Severity
Apply consistent severity scale (Critical, High, Medium, Low).
### 4. Filter Noise
Review adversarial findings marked as Noise:
- If clearly false positive (e.g., style preference, not actual issue) → exclude
- If questionable → keep with Undecided validity
- If context reveals it's actually valid → upgrade to Real
**Do NOT filter:**
- Any Critical or High severity
- Any context-aware findings (they have story context)
### 5. Sort and Number Findings
Sort by severity (Critical → High → Medium → Low), then assign IDs: F1, F2, F3, etc.
Build `{consolidated_findings}`:
```markdown
| ID | Severity | Source | Description | Location |
|----|----------|--------|-------------|----------|
| F1 | Critical | task-audit | Task 3 marked [x] but not implemented | src/auth.ts |
| F2 | High | ac-validation | AC2 partially implemented | src/api/*.ts |
| F3 | High | adversarial | Missing error handling in API calls | src/api/client.ts:45 |
| F4 | Medium | git-discrepancy | File changed but not in story | src/utils.ts |
| F5 | Low | adversarial | Magic number should be constant | src/config.ts:12 |
```
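The sort-then-number rule in code form (`sorted` is stable, so equal severities keep their merge order):

```python
SEVERITY_ORDER = {"Critical": 0, "High": 1, "Medium": 2, "Low": 3}

def number_findings(findings: list[dict]) -> list[dict]:
    ordered = sorted(findings, key=lambda f: SEVERITY_ORDER[f["severity"]])
    for i, f in enumerate(ordered, start=1):
        f["id"] = f"F{i}"                 # F1, F2, ... in severity order
    return ordered
```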
### 6. Present Consolidated Findings
```markdown
**Consolidated Code Review Findings**
**Story:** {story_key}
**Summary:**
- Total findings: {count}
- Critical: {count}
- High: {count}
- Medium: {count}
- Low: {count}
**Deduplication:** {merged_count} duplicate findings merged
---
## Findings by Severity
### Critical (Must Fix)
{list critical findings with full details}
### High (Should Fix)
{list high findings with full details}
### Medium (Consider Fixing)
{list medium findings}
### Low (Nice to Fix)
{list low findings}
---
**Phase Sources:**
- Context-Aware (Phase 1): {count} findings
- Adversarial (Phase 2): {count} findings
```
---
## NEXT STEP DIRECTIVE
**CRITICAL:** When this step completes, explicitly state:
"**NEXT:** Loading `step-06-resolve-and-update.md`"
---
## SUCCESS METRICS
- All findings merged from both phases
- Duplicates identified and resolved (kept more detailed)
- Severity normalized consistently
- Noise filtered appropriately (but not excessively)
- Consolidated table created
- `{consolidated_findings}` populated
- Summary presented to user
- Explicit NEXT directive provided
## FAILURE MODES
- Missing findings from either phase
- Not detecting duplicates (double-counting issues)
- Inconsistent severity assignment
- Filtering real issues as noise
- Not storing consolidated findings
- No explicit NEXT directive at step completion

View File

@ -0,0 +1,216 @@
---
name: 'step-06-resolve-and-update'
description: 'Present findings, fix or create action items, update story and sprint status'
workflow_path: '{project-root}/_bmad/bmm/workflows/4-implementation/code-review'
thisStepFile: '{workflow_path}/steps/step-06-resolve-and-update.md'
---
# Step 6: Resolve Findings and Update Status
**Goal:** Present findings to user, handle resolution (fix or action items), update story file and sprint status.
---
## AVAILABLE STATE
From previous steps:
- `{story_path}`, `{story_key}`
- `{consolidated_findings}` - Merged findings from step 5
- `sprint_status` = `{implementation_artifacts}/sprint-status.yaml`
---
## STATE VARIABLES (capture now)
- `{fixed_count}` - Number of issues fixed
- `{action_count}` - Number of action items created
- `{new_status}` - Final story status
---
## EXECUTION SEQUENCE
### 1. Present Resolution Options
```markdown
**Code Review Findings for {user_name}**
**Story:** {story_key}
**Total Issues:** {consolidated_findings.count}
{consolidated_findings_table}
---
**What should I do with these issues?**
**[1] Fix them automatically** - I'll update the code and tests
**[2] Create action items** - Add to story Tasks/Subtasks for later
**[3] Walk through** - Discuss each finding individually
**[4] Show details** - Deep dive into specific issues
Choose [1], [2], [3], [4], or specify which issue (e.g., "F3"):
```
### 2. Handle User Choice
**Option [1]: Fix Automatically**
1. For each CRITICAL and HIGH finding:
- Apply the fix in the code
- Add/update tests if needed
- Record what was fixed
2. Update story Dev Agent Record → File List if files changed
3. Add "Code Review Fixes Applied" entry to Change Log
4. Set `{fixed_count}` = number of issues fixed
5. Set `{action_count}` = 0 (if any LOW findings are deferred as action items instead of fixed, count them here)
**Option [2]: Create Action Items**
1. Add "Review Follow-ups (AI)" subsection to Tasks/Subtasks
2. For each finding:
```
- [ ] [AI-Review][{severity}] {description} [{location}]
```
3. Set `{action_count}` = number of action items created
4. Set `{fixed_count}` = 0
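The action-item format above can be sketched as a small helper. This is a minimal illustration, not part of the workflow spec: the `Finding` fields and the `to_action_items` name are assumptions chosen for the example.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    severity: str     # "CRITICAL" | "HIGH" | "MEDIUM" | "LOW"
    description: str
    location: str     # e.g. "src/auth.py:42"

def to_action_items(findings: list[Finding]) -> list[str]:
    """Render each finding as an unchecked Tasks/Subtasks entry."""
    return [
        f"- [ ] [AI-Review][{f.severity}] {f.description} [{f.location}]"
        for f in findings
    ]
```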
**Option [3]: Walk Through**
For each finding in order:
1. Present finding with full context and code snippet
2. Ask: **[f]ix now / [s]kip / [d]iscuss more**
3. If fix: Apply fix immediately, increment `{fixed_count}`
4. If skip: Note as acknowledged, optionally create action item
5. If discuss: Provide more detail, repeat choice
6. Continue to next finding
After all processed, summarize what was fixed/skipped.
**Option [4]: Show Details**
1. Present expanded details for specific finding(s)
2. Return to resolution choice
### 3. Determine Final Status
Evaluate completion:
**If ALL conditions met:**
- All CRITICAL issues fixed
- All HIGH issues fixed or have action items
- All ACs verified as implemented
Set `{new_status}` = "done"
**Otherwise:**
Set `{new_status}` = "in-progress"
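The status decision above can be expressed as a small predicate. A minimal sketch, assuming findings are plain dicts with `severity` and `resolution` keys — those field names are illustrative, not mandated by the workflow:

```python
def determine_status(findings: list[dict], all_acs_implemented: bool) -> str:
    """Story is "done" only when every CRITICAL issue is fixed, every HIGH
    issue is fixed or tracked as an action item, and all ACs are implemented."""
    for f in findings:
        if f["severity"] == "CRITICAL" and f["resolution"] != "fixed":
            return "in-progress"
        if f["severity"] == "HIGH" and f["resolution"] not in ("fixed", "action-item"):
            return "in-progress"
    return "done" if all_acs_implemented else "in-progress"
```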
### 4. Update Story File
1. Update story Status field to `{new_status}`
2. Add review notes to Dev Agent Record:
```markdown
## Senior Developer Review (AI)
**Date:** {date}
**Reviewer:** AI Code Review
**Findings Summary:**
- CRITICAL: {count} ({fixed} fixed, {action_items} deferred)
- HIGH: {count} ({fixed} fixed, {action_items} deferred)
- MEDIUM: {count}
- LOW: {count}
**Resolution:** {approach_taken}
**Files Modified:** {list if fixes applied}
```
3. Update Change Log:
```markdown
- [{date}] Code review completed - {outcome_summary}
```
4. Save story file
### 5. Sync Sprint Status
Check if `{sprint_status}` file exists:
**If exists:**
1. Load `{sprint_status}`
2. Find `{story_key}` in development_status
3. Update status to `{new_status}`
4. Save file, preserving ALL comments and structure
```
Sprint status synced: {story_key} → {new_status}
```
**If not exists or key not found:**
```
Sprint status sync skipped (no sprint tracking or key not found)
```
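"Preserving ALL comments and structure" rules out a naive parse-and-dump, which drops comments. One way to honor it is a line-level rewrite that touches only the matching status line — a sketch under the assumption that each story appears as a simple `key: value` mapping entry (the function name and regex are illustrative):

```python
import re

def sync_sprint_status(yaml_text: str, story_key: str, new_status: str) -> str:
    """Rewrite only the story's status value so comments and layout survive.
    Returns the text unchanged if the key is not found (sync is skipped)."""
    pattern = re.compile(
        rf"^(\s*{re.escape(story_key)}\s*:\s*)[^#\n]*?(\s*#.*)?$",
        re.MULTILINE,
    )
    new_text, n = pattern.subn(
        lambda m: f"{m.group(1)}{new_status}{m.group(2) or ''}", yaml_text
    )
    return new_text if n else yaml_text
```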
### 6. Completion Output
```markdown
**Code Review Complete!**
**Story:** {story_key}
**Final Status:** {new_status}
**Issues Fixed:** {fixed_count}
**Action Items Created:** {action_count}
{if new_status == "done"}
Code review passed! Story is ready for final verification.
{else}
Address the action items and run another review cycle.
{endif}
---
**Next Steps:**
- Commit changes (if fixes applied)
- Run tests to verify fixes
- Address remaining action items (if any)
- Mark story complete when all items resolved
```
---
## WORKFLOW COMPLETE
This is the final step. The Code Review workflow is now complete.
---
## SUCCESS METRICS
- Resolution options presented clearly
- User choice handled correctly
- Fixes applied cleanly (if chosen)
- Action items created correctly (if chosen)
- Story status determined correctly
- Story file updated with review notes
- Sprint status synced (if applicable)
- Completion summary provided
## FAILURE MODES
- Not presenting resolution options
- Fixing without user consent
- Not updating story file
- Wrong status determination (done when issues remain)
- Not syncing sprint status when it exists
- Missing completion summary


@ -0,0 +1,62 @@
---
name: code-review
description: 'Perform an ADVERSARIAL Senior Developer code review with dual-phase analysis: context-aware story validation plus context-independent adversarial diff review. Finds 3-10 specific problems in every story. NEVER accepts "looks good" - must find minimum issues and can auto-fix with user approval.'
---
# Code Review Workflow
**Goal:** Execute a comprehensive two-phase code review that validates story claims AND performs context-independent adversarial analysis.
**Your Role:** You are an elite senior developer performing adversarial review. Challenge everything. Verify claims against reality. Find what's wrong or missing.
---
## WORKFLOW ARCHITECTURE
This uses **step-file architecture** for focused execution:
- Each step loads fresh to combat "lost in the middle"
- State persists via variables: `{story_path}`, `{story_key}`, `{context_aware_findings}`, `{asymmetric_findings}`
- Dual-phase review: context-aware (step 3) + adversarial asymmetric (step 4)
---
## INITIALIZATION
### Configuration Loading
Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve:
- `user_name`, `communication_language`, `user_skill_level`, `document_output_language`
- `planning_artifacts`, `implementation_artifacts`
- `date` as system-generated current datetime
### Paths
- `installed_path` = `{project-root}/_bmad/bmm/workflows/4-implementation/code-review`
- `project_context` = `**/project-context.md` (load if exists)
- `sprint_status` = `{implementation_artifacts}/sprint-status.yaml`
- `validation` = `{installed_path}/checklist.md`
### Related Tasks
- `adversarial_review_task` = `{project-root}/_bmad/core/tasks/review-adversarial-general.xml`
---
## CRITICAL DIRECTIVES
<critical>YOU ARE AN ADVERSARIAL CODE REVIEWER - Find what's wrong or missing!</critical>
<critical>Your purpose: Validate story file claims against actual implementation</critical>
<critical>Challenge everything: Are tasks marked [x] actually done? Are ACs really implemented?</critical>
<critical>Find 3-10 specific issues in every review minimum - no lazy "looks good" reviews</critical>
<critical>Read EVERY file in the File List - verify implementation against story requirements</critical>
<critical>Tasks marked complete but not done = CRITICAL finding</critical>
<critical>Acceptance Criteria not implemented = HIGH severity finding</critical>
<critical>Exclude `_bmad/`, `_bmad-output/`, `.cursor/`, `.windsurf/`, `.claude/` from review</critical>
---
## EXECUTION
Load and execute `steps/step-01-load-story.md` to begin the workflow.


@ -1,51 +0,0 @@
# Review Story Workflow
name: code-review
description: "Perform an ADVERSARIAL Senior Developer code review that finds 3-10 specific problems in every story. Challenges everything: code quality, test coverage, architecture compliance, security, performance. NEVER accepts `looks good` - must find minimum issues and can auto-fix with user approval."
author: "BMad"
# Critical variables from config
config_source: "{project-root}/_bmad/bmm/config.yaml"
user_name: "{config_source}:user_name"
communication_language: "{config_source}:communication_language"
user_skill_level: "{config_source}:user_skill_level"
document_output_language: "{config_source}:document_output_language"
date: system-generated
planning_artifacts: "{config_source}:planning_artifacts"
implementation_artifacts: "{config_source}:implementation_artifacts"
output_folder: "{implementation_artifacts}"
sprint_status: "{implementation_artifacts}/sprint-status.yaml"
# Workflow components
installed_path: "{project-root}/_bmad/bmm/workflows/4-implementation/code-review"
instructions: "{installed_path}/instructions.xml"
validation: "{installed_path}/checklist.md"
template: false
variables:
# Project context
project_context: "**/project-context.md"
story_dir: "{implementation_artifacts}"
# Smart input file references - handles both whole docs and sharded docs
# Priority: Whole document first, then sharded version
# Strategy: SELECTIVE LOAD - only load the specific epic needed for this story review
input_file_patterns:
architecture:
description: "System architecture for review context"
whole: "{planning_artifacts}/*architecture*.md"
sharded: "{planning_artifacts}/*architecture*/*.md"
load_strategy: "FULL_LOAD"
ux_design:
description: "UX design specification (if UI review)"
whole: "{planning_artifacts}/*ux*.md"
sharded: "{planning_artifacts}/*ux*/*.md"
load_strategy: "FULL_LOAD"
epics:
description: "Epic containing story being reviewed"
whole: "{planning_artifacts}/*epic*.md"
sharded_index: "{planning_artifacts}/*epic*/index.md"
sharded_single: "{planning_artifacts}/*epic*/epic-{{epic_num}}.md"
load_strategy: "SELECTIVE_LOAD"
standalone: true
web_bundle: false


@ -219,6 +219,17 @@
<output> No sprint status file exists - story progress will be tracked in story file only</output>
<action>Set {{current_sprint_status}} = "no-sprint-tracking"</action>
</check>
<!-- Capture baseline commit for code review -->
<check if="git is available">
<action>Capture current HEAD commit: `git rev-parse HEAD`</action>
<action>Store as {{baseline_commit}}</action>
<action>Write to story file Dev Agent Record: "**Baseline Commit:** {{baseline_commit}}"</action>
</check>
<check if="git is NOT available">
<action>Set {{baseline_commit}} = "NO_GIT"</action>
<action>Write to story file Dev Agent Record: "**Baseline Commit:** NO_GIT"</action>
</check>
</step>
<step n="5" goal="Implement task following red-green-refactor cycle">


@ -51,7 +51,11 @@ Use best-effort diff construction:
### Capture as {diff_output}
Merge all changes into `{diff_output}`. **Include in `{diff_output}`:**
- All modified tracked files (except `{tech_spec_path}` if tech-spec mode - asymmetry requires hiding intent)
- All new files created during this workflow
- Full content for new files
**Note:** Do NOT `git add` anything - this is read-only inspection.
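Merging tracked changes with new files can be sketched as a pure function. This assumes the tracked-file diff and new-file contents were already collected read-only (e.g. via `git diff {baseline_commit}` and `git ls-files --others --exclude-standard`); the function name and added-file hunk rendering are illustrative:

```python
def build_diff_output(tracked_diff: str, new_files: dict[str, str],
                      exclude: tuple[str, ...] = ("_bmad/", "_bmad-output/")) -> str:
    """Merge the tracked-file diff with the full content of new files,
    rendering each new file as an added-file hunk. Nothing is staged."""
    parts = [tracked_diff.rstrip("\n")] if tracked_diff.strip() else []
    for path in sorted(new_files):
        if any(path.startswith(p) for p in exclude):
            continue  # excluded directories never enter the review diff
        lines = new_files[path].splitlines()
        hunk = ["--- /dev/null", f"+++ b/{path}", f"@@ -0,0 +1,{len(lines)} @@"]
        hunk += [f"+{line}" for line in lines]
        parts.append("\n".join(hunk))
    return "\n".join(parts) + ("\n" if parts else "")
```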
@ -75,7 +79,7 @@ The task should: review `{diff_output}` and return a list of findings.
Capture the findings from the task output.
**If zero findings:** HALT - this is suspicious. Re-analyze or request user guidance.
Evaluate severity (Critical, High, Medium, Low) and validity (Real, Noise, Undecided).
DO NOT exclude findings based on severity or validity unless explicitly asked to do so.
Order findings by severity.
Number the ordered findings (F1, F2, F3, etc.).
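The ordering and numbering steps above amount to a stable sort plus ID assignment. A minimal sketch, assuming findings are dicts with a `severity` key (the dict shape is an assumption for illustration):

```python
SEVERITY_RANK = {"Critical": 0, "High": 1, "Medium": 2, "Low": 3}

def order_and_number(findings: list[dict]) -> list[dict]:
    """Sort by severity (most severe first) and assign IDs F1, F2, ...
    No finding is dropped, regardless of severity or validity."""
    ordered = sorted(findings, key=lambda f: SEVERITY_RANK[f["severity"]])
    return [{**f, "id": f"F{i}"} for i, f in enumerate(ordered, start=1)]
```

Because `sorted` is stable, findings of equal severity keep their original relative order.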
@ -92,6 +96,7 @@ With findings in hand, load `step-06-resolve-findings.md` for user to choose res
## SUCCESS METRICS
- Diff constructed from baseline_commit
- Tech-spec excluded from diff when in tech-spec mode (information asymmetry)
- New files included in diff
- Task invoked with diff as input
- Findings received
@ -100,6 +105,7 @@ With findings in hand, load `step-06-resolve-findings.md` for user to choose res
## FAILURE MODES
- Missing baseline_commit (can't construct accurate diff)
- Including tech_spec_path in diff when in tech-spec mode (breaks asymmetry)
- Not including new untracked files in diff
- Invoking task without providing diff input
- Accepting zero findings without questioning