From 5de8a78c6a0215eee3091ddf84bd83b905348cea Mon Sep 17 00:00:00 2001 From: Alex Verkhovsky Date: Sun, 15 Mar 2026 09:23:42 -0600 Subject: [PATCH] feat(skills): rewrite code-review skill with sharded step-file architecture Replace monolithic XML-step workflow with 4 sharded step files: - step-01: gather context (diff source, spec, chunking) - step-02: parallel review (blind hunter, edge case, acceptance auditor) - step-03: triage (normalize, deduplicate, classify into 5 buckets) - step-04: present (categorized findings with recommendations) Key changes: - Three parallel review layers via subagent invocation - Structured triage with intent_gap/bad_spec/patch/defer/reject categories - Works with or without a spec file - Loopbacks are recommendations, not automated re-runs - Remove checklist.md (superseded by triage step) - Remove discover-inputs.md (no longer referenced) Co-Authored-By: Claude Opus 4.6 (1M context) --- .../bmad-code-review/checklist.md | 23 -- .../bmad-code-review/discover-inputs.md | 88 ------ .../steps/step-01-gather-context.md | 48 +++ .../bmad-code-review/steps/step-02-review.md | 40 +++ .../bmad-code-review/steps/step-03-triage.md | 49 +++ .../bmad-code-review/steps/step-04-present.md | 40 +++ .../bmad-code-review/workflow.md | 289 +++--------------- 7 files changed, 218 insertions(+), 359 deletions(-) delete mode 100644 src/bmm/workflows/4-implementation/bmad-code-review/checklist.md delete mode 100644 src/bmm/workflows/4-implementation/bmad-code-review/discover-inputs.md create mode 100644 src/bmm/workflows/4-implementation/bmad-code-review/steps/step-01-gather-context.md create mode 100644 src/bmm/workflows/4-implementation/bmad-code-review/steps/step-02-review.md create mode 100644 src/bmm/workflows/4-implementation/bmad-code-review/steps/step-03-triage.md create mode 100644 src/bmm/workflows/4-implementation/bmad-code-review/steps/step-04-present.md diff --git a/src/bmm/workflows/4-implementation/bmad-code-review/checklist.md 
b/src/bmm/workflows/4-implementation/bmad-code-review/checklist.md deleted file mode 100644 index f213a6b96..000000000 --- a/src/bmm/workflows/4-implementation/bmad-code-review/checklist.md +++ /dev/null @@ -1,23 +0,0 @@ -# Senior Developer Review - Validation Checklist - -- [ ] Story file loaded from `{{story_path}}` -- [ ] Story Status verified as reviewable (review) -- [ ] Epic and Story IDs resolved ({{epic_num}}.{{story_num}}) -- [ ] Story Context located or warning recorded -- [ ] Epic Tech Spec located or warning recorded -- [ ] Architecture/standards docs loaded (as available) -- [ ] Tech stack detected and documented -- [ ] MCP doc search performed (or web fallback) and references captured -- [ ] Acceptance Criteria cross-checked against implementation -- [ ] File List reviewed and validated for completeness -- [ ] Tests identified and mapped to ACs; gaps noted -- [ ] Code quality review performed on changed files -- [ ] Security review performed on changed files and dependencies -- [ ] Outcome decided (Approve/Changes Requested/Blocked) -- [ ] Review notes appended under "Senior Developer Review (AI)" -- [ ] Change Log updated with review entry -- [ ] Status updated according to settings (if enabled) -- [ ] Sprint status synced (if sprint tracking enabled) -- [ ] Story saved successfully - -_Reviewer: {{user_name}} on {{date}}_ diff --git a/src/bmm/workflows/4-implementation/bmad-code-review/discover-inputs.md b/src/bmm/workflows/4-implementation/bmad-code-review/discover-inputs.md deleted file mode 100644 index 2c313db3d..000000000 --- a/src/bmm/workflows/4-implementation/bmad-code-review/discover-inputs.md +++ /dev/null @@ -1,88 +0,0 @@ -# Discover Inputs Protocol - -**Objective:** Intelligently load project files (whole or sharded) based on the workflow's Input Files configuration. - -**Prerequisite:** Only execute this protocol if the workflow defines an Input Files section. If no input file patterns are configured, skip this entirely. 
- ---- - -## Step 1: Parse Input File Patterns - -- Read the Input Files table from the workflow configuration. -- For each input group (prd, architecture, epics, ux, etc.), note the **load strategy** if specified. - -## Step 2: Load Files Using Smart Strategies - -For each pattern in the Input Files table, work through the following substeps in order: - -### 2a: Try Sharded Documents First - -If a sharded pattern exists for this input, determine the load strategy (defaults to **FULL_LOAD** if not specified), then apply the matching strategy: - -#### FULL_LOAD Strategy - -Load ALL files in the sharded directory. Use this for PRD, Architecture, UX, brownfield docs, or whenever the full picture is needed. - -1. Use the glob pattern to find ALL `.md` files (e.g., `{planning_artifacts}/*architecture*/*.md`). -2. Load EVERY matching file completely. -3. Concatenate content in logical order: `index.md` first if it exists, then alphabetical. -4. Store the combined result in a variable named `{pattern_name_content}` (e.g., `{architecture_content}`). - -#### SELECTIVE_LOAD Strategy - -Load a specific shard using a template variable. Example: used for epics with `{{epic_num}}`. - -1. Check for template variables in the sharded pattern (e.g., `{{epic_num}}`). -2. If the variable is undefined, ask the user for the value OR infer it from context. -3. Resolve the template to a specific file path. -4. Load that specific file. -5. Store in variable: `{pattern_name_content}`. - -#### INDEX_GUIDED Strategy - -Load index.md, analyze the structure and description of each doc in the index, then intelligently load relevant docs. - -**DO NOT BE LAZY** -- use best judgment to load documents that might have relevant information, even if there is only a 5% chance of relevance. - -1. Load `index.md` from the sharded directory. -2. Parse the table of contents, links, and section headers. -3. Analyze the workflow's purpose and objective. -4. 
Identify which linked/referenced documents are likely relevant. - - *Example:* If the workflow is about authentication and the index shows "Auth Overview", "Payment Setup", "Deployment" -- load the auth docs, consider deployment docs, skip payment. -5. Load all identified relevant documents. -6. Store combined content in variable: `{pattern_name_content}`. - -**When in doubt, LOAD IT** -- context is valuable, and being thorough is better than missing critical info. - ---- - -After applying the matching strategy, mark the pattern as **RESOLVED** and move to the next pattern. - -### 2b: Try Whole Document if No Sharded Found - -If no sharded matches were found OR no sharded pattern exists for this input: - -1. Attempt a glob match on the "whole" pattern (e.g., `{planning_artifacts}/*prd*.md`). -2. If matches are found, load ALL matching files completely (no offset/limit). -3. Store content in variable: `{pattern_name_content}` (e.g., `{prd_content}`). -4. Mark pattern as **RESOLVED** and move to the next pattern. - -### 2c: Handle Not Found - -If no matches were found for either sharded or whole patterns: - -1. Set `{pattern_name_content}` to empty string. -2. Note in session: "No {pattern_name} files found" -- this is not an error, just unavailable. Offer the user a chance to provide the file. - -## Step 3: Report Discovery Results - -List all loaded content variables with file counts. Example: - -``` -OK Loaded {prd_content} from 5 sharded files: prd/index.md, prd/requirements.md, ... -OK Loaded {architecture_content} from 1 file: Architecture.md -OK Loaded {epics_content} from selective load: epics/epic-3.md --- No ux_design files found -``` - -This gives the workflow transparency into what context is available. 
diff --git a/src/bmm/workflows/4-implementation/bmad-code-review/steps/step-01-gather-context.md b/src/bmm/workflows/4-implementation/bmad-code-review/steps/step-01-gather-context.md new file mode 100644 index 000000000..fb2a897a0 --- /dev/null +++ b/src/bmm/workflows/4-implementation/bmad-code-review/steps/step-01-gather-context.md @@ -0,0 +1,48 @@ +--- +name: Gather Context +description: 'Determine what to review, construct the diff, and load any spec/context documents.' +diff_output: '' # set at runtime +spec_file: '' # set at runtime (path or empty) +review_mode: '' # set at runtime: "full" or "no-spec" +--- + +# Step 1: Gather Context + +## RULES + +- YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}` +- Do not modify any files. This step is read-only. + +## INSTRUCTIONS + +1. Ask the user: **What do you want to review?** Present these options: + - **Uncommitted changes** (staged + unstaged) + - **Staged changes only** + - **Branch diff** vs a base branch (ask which base branch) + - **Specific commit range** (ask for the range) + - **Provided diff or file list** (user pastes or provides a path) + +2. Construct `{diff_output}` from the chosen source. + - For **branch diff**: verify the base branch exists before running `git diff`. If it does not exist, HALT and ask the user for a valid branch. + - For **commit range**: verify the range resolves. If it does not, HALT and ask the user for a valid range. + - For **provided diff**: validate the content is non-empty and parseable. + - After constructing `{diff_output}`, verify it is non-empty regardless of source type. If empty, HALT and tell the user there is nothing to review. + +3. Ask the user: **Is there a spec or story file that provides context for these changes?** + - If yes: load it as `{spec_file}`, set `{review_mode}` = `"full"`. + - If no: set `{review_mode}` = `"no-spec"`. + +4. 
If `{review_mode}` = `"full"` and `{spec_file}` has a `context` field in its frontmatter listing additional docs, load each referenced document. Warn the user about any docs that cannot be found. + +5. Sanity check: if `{diff_output}` exceeds approximately 3000 lines, warn the user and offer to chunk the review by file group. + - If the user opts to chunk: agree on the first group, narrow `{diff_output}` accordingly, and list the remaining groups for the user to note for follow-up runs. + - If the user declines: proceed as-is with the full diff. + +### CHECKPOINT + +Present a summary before proceeding: diff stats (files changed, lines added/removed), `{review_mode}`, and loaded spec/context docs (if any). HALT and wait for user confirmation to proceed. + + +## NEXT + +Read fully and follow `./steps/step-02-review.md` diff --git a/src/bmm/workflows/4-implementation/bmad-code-review/steps/step-02-review.md b/src/bmm/workflows/4-implementation/bmad-code-review/steps/step-02-review.md new file mode 100644 index 000000000..ff81f8c6e --- /dev/null +++ b/src/bmm/workflows/4-implementation/bmad-code-review/steps/step-02-review.md @@ -0,0 +1,40 @@ +--- +name: Review +description: 'Launch parallel adversarial review layers and collect findings.' +--- + +# Step 2: Review + +## RULES + +- YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}` +- The Blind Hunter subagent receives NO project context — diff only. +- The Edge Case Hunter subagent receives diff and project read access. +- The Acceptance Auditor subagent receives diff, spec, and context docs. + +## INSTRUCTIONS + +1. Launch parallel subagents. Each subagent gets NO conversation history from this session: + + - **Blind Hunter** -- Invoke the `bmad-review-adversarial-general` skill in a subagent. Pass `content` = `{diff_output}` only. No spec, no project access. + + - **Edge Case Hunter** -- Invoke the `bmad-review-edge-case-hunter` skill in a subagent. 
Pass `content` = `{diff_output}`. This subagent has read access to the project. + + - **Acceptance Auditor** (only if `{review_mode}` = `"full"`) -- A subagent that receives `{diff_output}`, `{spec_file}` content, and any loaded context docs. Its prompt: + > You are an Acceptance Auditor. Review this diff against the spec and context docs. Check for: violations of acceptance criteria, deviations from spec intent, missing implementation of specified behavior, contradictions between spec constraints and actual code. Output findings as a markdown list. Each finding: one-line title, which AC/constraint it violates, and evidence from the diff. + +2. **Subagent failure handling**: If any subagent fails, times out, or returns empty results, note the failed layer and proceed with findings from the remaining layers. Report the failure to the user in the next step. + +3. **Fallback** (if subagents are not available): Generate prompt files in `{implementation_artifacts}` -- one per active reviewer: + - `review-blind-hunter.md` (always) + - `review-edge-case-hunter.md` (always) + - `review-acceptance-auditor.md` (only if `{review_mode}` = `"full"`) + + HALT. Tell the user to run each prompt in a separate session and paste back findings. When findings are pasted, resume from this point and proceed to step 3. + +4. Collect all findings from the completed layers. + + +## NEXT + +Read fully and follow `./steps/step-03-triage.md` diff --git a/src/bmm/workflows/4-implementation/bmad-code-review/steps/step-03-triage.md b/src/bmm/workflows/4-implementation/bmad-code-review/steps/step-03-triage.md new file mode 100644 index 000000000..dac4e9699 --- /dev/null +++ b/src/bmm/workflows/4-implementation/bmad-code-review/steps/step-03-triage.md @@ -0,0 +1,49 @@ +--- +name: Triage +description: 'Normalize, deduplicate, and classify all review findings into actionable categories.' 
+--- + +# Step 3: Triage + +## RULES + +- YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}` +- Be precise. When uncertain between categories, prefer the more conservative classification. + +## INSTRUCTIONS + +1. **Normalize** findings into a common format. Expected input formats: + - Adversarial (Blind Hunter): markdown list of descriptions + - Edge Case Hunter: JSON array with `location`, `trigger_condition`, `guard_snippet`, `potential_consequence` fields + - Acceptance Auditor: markdown list with title, AC/constraint reference, and evidence + + If a layer's output does not match its expected format, attempt best-effort parsing. Note any parsing issues for the user. + + Convert all to a unified list where each finding has: + - `id` -- sequential integer + - `source` -- `blind`, `edge`, `auditor`, or merged sources (e.g., `blind+edge`) + - `title` -- one-line summary + - `detail` -- full description + - `location` -- file and line reference (if available) + +2. **Deduplicate.** If two findings describe the same issue, keep the one with more specificity (prefer edge-case JSON with location over adversarial prose). Note merged sources on the surviving finding. + +3. **Classify** each finding into exactly one bucket: + - **intent_gap** -- The spec/intent is incomplete; cannot resolve from existing information. Only possible if `{review_mode}` = `"full"`. + - **bad_spec** -- The spec should have prevented this; spec is wrong or ambiguous. Only possible if `{review_mode}` = `"full"`. + - **patch** -- Code issue that is trivially fixable without human input. Just needs a code change. + - **defer** -- Pre-existing issue not caused by the current change. Real but not actionable now. + - **reject** -- Noise, false positive, or handled elsewhere. + + If `{review_mode}` = `"no-spec"` and a finding would otherwise be `intent_gap` or `bad_spec`, reclassify it as `patch` (if code-fixable) or `defer` (if not). + +4. 
**Drop** all `reject` findings. Record the reject count for the summary. + +5. If zero findings remain after dropping rejects, note clean review. + +6. If any review layer failed or returned empty (noted in step 2), report this to the user now. + + +## NEXT + +Read fully and follow `./steps/step-04-present.md` diff --git a/src/bmm/workflows/4-implementation/bmad-code-review/steps/step-04-present.md b/src/bmm/workflows/4-implementation/bmad-code-review/steps/step-04-present.md new file mode 100644 index 000000000..4721ec48c --- /dev/null +++ b/src/bmm/workflows/4-implementation/bmad-code-review/steps/step-04-present.md @@ -0,0 +1,40 @@ +--- +name: Present +description: 'Present triaged findings grouped by category with actionable recommendations.' +--- + +# Step 4: Present + +## RULES + +- YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}` +- Do NOT auto-fix anything. Present findings and let the user decide next steps. + +## INSTRUCTIONS + +1. Group remaining findings by category. + +2. Present to the user in this order (include a section only if findings exist in that category): + + - **Intent Gaps**: "These findings suggest the captured intent is incomplete. Consider clarifying intent before proceeding." + - List each with title + detail. + + - **Bad Spec**: "These findings suggest the spec should be amended. Consider regenerating or amending the spec with this context:" + - List each with title + detail + suggested spec amendment. + + - **Patch**: "These are fixable code issues:" + - List each with title + detail + location (if available). + + - **Defer**: "Pre-existing issues surfaced by this review (not caused by current changes):" + - List each with title + detail. + +3. Summary line: **X** intent_gap, **Y** bad_spec, **Z** patch, **W** defer findings. **R** findings rejected as noise. + +4. 
If clean review (zero findings across all layers after triage): state that N findings were raised but all were classified as noise, or that no findings were raised at all (as applicable). + +5. Offer the user next steps (recommendations, not automated actions): + - If `patch` findings exist: "You can ask me to apply these patches, or address them manually." + - If `intent_gap` or `bad_spec` findings exist: "Consider running the planning workflow to clarify intent or amend the spec before continuing." + - If only `defer` findings remain: "No action needed for this change. Deferred items are noted for future attention." + +Workflow complete. diff --git a/src/bmm/workflows/4-implementation/bmad-code-review/workflow.md b/src/bmm/workflows/4-implementation/bmad-code-review/workflow.md index a10e7a809..6653e3c8a 100644 --- a/src/bmm/workflows/4-implementation/bmad-code-review/workflow.md +++ b/src/bmm/workflows/4-implementation/bmad-code-review/workflow.md @@ -1,261 +1,54 @@ +--- +main_config: '{project-root}/_bmad/bmm/config.yaml' +--- + # Code Review Workflow -**Goal:** Perform adversarial code review finding specific issues. +**Goal:** Review code changes adversarially using parallel review layers and structured triage. -**Your Role:** Adversarial Code Reviewer. -- YOU ARE AN ADVERSARIAL CODE REVIEWER - Find what's wrong or missing! -- Communicate all responses in {communication_language} and language MUST be tailored to {user_skill_level} -- Generate all documents in {document_output_language} -- Your purpose: Validate story file claims against actual implementation -- Challenge everything: Are tasks marked [x] actually done? Are ACs really implemented? -- Be thorough and specific — find real issues, not manufactured ones. 
If the code is genuinely good after fixes, say so -- Read EVERY file in the File List - verify implementation against story requirements -- Tasks marked complete but not done = CRITICAL finding -- Acceptance Criteria not implemented = HIGH severity finding -- Do not review files that are not part of the application's source code. Always exclude the `_bmad/` and `_bmad-output/` folders from the review. Always exclude IDE and CLI configuration folders like `.cursor/` and `.windsurf/` and `.claude/` +**Your Role:** You are an elite code reviewer. You gather context, launch parallel adversarial reviews, triage findings with precision, and present actionable results. No noise, no filler. ---- -## INITIALIZATION +## WORKFLOW ARCHITECTURE -### Configuration Loading +This uses **step-file architecture** for disciplined execution: -Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve: +- **Micro-file Design**: Each step is self-contained and followed exactly +- **Just-In-Time Loading**: Only load the current step file +- **Sequential Enforcement**: Complete steps in order, no skipping +- **State Tracking**: Persist progress via in-memory variables +- **Append-Only Building**: Build artifacts incrementally -- `project_name`, `user_name` -- `communication_language`, `document_output_language` -- `user_skill_level` -- `planning_artifacts`, `implementation_artifacts` +### Step Processing Rules + +1. **READ COMPLETELY**: Read the entire step file before acting +2. **FOLLOW SEQUENCE**: Execute sections in order +3. **WAIT FOR INPUT**: Halt at checkpoints and wait for human +4. 
**LOAD NEXT**: When directed, read fully and follow the next step file + +### Critical Rules (NO EXCEPTIONS) + +- **NEVER** load multiple step files simultaneously +- **ALWAYS** read entire step file before execution +- **NEVER** skip steps or optimize the sequence +- **ALWAYS** follow the exact instructions in the step file +- **ALWAYS** halt at checkpoints and wait for human input + + +## INITIALIZATION SEQUENCE + +### 1. Configuration Loading + +Load and read full config from `{main_config}` and resolve: + +- `project_name`, `planning_artifacts`, `implementation_artifacts`, `user_name` +- `communication_language`, `document_output_language`, `user_skill_level` - `date` as system-generated current datetime - -### Paths - -- `sprint_status` = `{implementation_artifacts}/sprint-status.yaml` - -### Input Files - -| Input | Description | Path Pattern(s) | Load Strategy | -|-------|-------------|------------------|---------------| -| architecture | System architecture for review context | whole: `{planning_artifacts}/*architecture*.md`, sharded: `{planning_artifacts}/*architecture*/*.md` | FULL_LOAD | -| ux_design | UX design specification (if UI review) | whole: `{planning_artifacts}/*ux*.md`, sharded: `{planning_artifacts}/*ux*/*.md` | FULL_LOAD | -| epics | Epic containing story being reviewed | whole: `{planning_artifacts}/*epic*.md`, sharded_index: `{planning_artifacts}/*epic*/index.md`, sharded_single: `{planning_artifacts}/*epic*/epic-{{epic_num}}.md` | SELECTIVE_LOAD | - -### Context - - `project_context` = `**/project-context.md` (load if exists) +- CLAUDE.md / memory files (load if exist) ---- +YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`. -## EXECUTION +### 2. 
First Step Execution - - - - Use provided {{story_path}} or ask user which story file to review - Read COMPLETE story file - Set {{story_key}} = extracted key from filename (e.g., "1-2-user-authentication.md" → "1-2-user-authentication") or story - metadata - Parse sections: Story, Acceptance Criteria, Tasks/Subtasks, Dev Agent Record → File List, Change Log - - - Check if git repository detected in current directory - - Run `git status --porcelain` to find uncommitted changes - Run `git diff --name-only` to see modified files - Run `git diff --cached --name-only` to see staged files - Compile list of actually changed files from git output - - - - Compare story's Dev Agent Record → File List with actual git changes - Note discrepancies: - - Files in git but not in story File List - - Files in story File List but no git changes - - Missing documentation of what was actually changed - - - Read fully and follow `./discover-inputs.md` to load all input files - Load {project_context} for coding standards (if exists) - - - - Extract ALL Acceptance Criteria from story - Extract ALL Tasks/Subtasks with completion status ([x] vs [ ]) - From Dev Agent Record → File List, compile list of claimed changes - - Create review plan: - 1. **AC Validation**: Verify each AC is actually implemented - 2. **Task Audit**: Verify each [x] task is really done - 3. **Code Quality**: Security, performance, maintainability - 4. **Test Quality**: Real tests vs placeholder bullshit - - - - - VALIDATE EVERY CLAIM - Check git reality vs story claims - - - Review git vs story File List discrepancies: - 1. **Files changed but not in story File List** → MEDIUM finding (incomplete documentation) - 2. **Story lists files but no git changes** → HIGH finding (false claims) - 3. **Uncommitted changes not documented** → MEDIUM finding (transparency issue) - - - - Create comprehensive review file list from story File List and git changes - - - For EACH Acceptance Criterion: - 1. Read the AC requirement - 2. 
Search implementation files for evidence - 3. Determine: IMPLEMENTED, PARTIAL, or MISSING - 4. If MISSING/PARTIAL → HIGH SEVERITY finding - - - - For EACH task marked [x]: - 1. Read the task description - 2. Search files for evidence it was actually done - 3. **CRITICAL**: If marked [x] but NOT DONE → CRITICAL finding - 4. Record specific proof (file:line) - - - - For EACH file in comprehensive review list: - 1. **Security**: Look for injection risks, missing validation, auth issues - 2. **Performance**: N+1 queries, inefficient loops, missing caching - 3. **Error Handling**: Missing try/catch, poor error messages - 4. **Code Quality**: Complex functions, magic numbers, poor naming - 5. **Test Quality**: Are tests real assertions or placeholders? - - - - Double-check by re-examining code for: - - Edge cases and null handling - - Architecture violations - - Integration issues - - Dependency problems - - If still no issues found after thorough re-examination, that is a valid outcome — report a clean review - - - - - Categorize findings: HIGH (must fix), MEDIUM (should fix), LOW (nice to fix) - Set {{fixed_count}} = 0 - Set {{action_count}} = 0 - - **🔥 CODE REVIEW FINDINGS, {user_name}!** - - **Story:** {{story_file}} - **Git vs Story Discrepancies:** {{git_discrepancy_count}} found - **Issues Found:** {{high_count}} High, {{medium_count}} Medium, {{low_count}} Low - - ## 🔴 CRITICAL ISSUES - - Tasks marked [x] but not actually implemented - - Acceptance Criteria not implemented - - Story claims files changed but no git evidence - - Security vulnerabilities - - ## 🟡 MEDIUM ISSUES - - Files changed but not documented in story File List - - Uncommitted changes not tracked - - Performance problems - - Poor test coverage/quality - - Code maintainability issues - - ## 🟢 LOW ISSUES - - Code style improvements - - Documentation gaps - - Git commit message quality - - - What should I do with these issues? - - 1. **Fix them automatically** - I'll update the code and tests - 2. 
**Create action items** - Add to story Tasks/Subtasks for later - 3. **Show me details** - Deep dive into specific issues - - Choose [1], [2], or specify which issue to examine: - - - Fix all HIGH and MEDIUM issues in the code - Add/update tests as needed - Update File List in story if files changed - Update story Dev Agent Record with fixes applied - Set {{fixed_count}} = number of HIGH and MEDIUM issues fixed - Set {{action_count}} = 0 - - - - Add "Review Follow-ups (AI)" subsection to Tasks/Subtasks - For each issue: `- [ ] [AI-Review][Severity] Description [file:line]` - Set {{action_count}} = number of action items created - Set {{fixed_count}} = 0 - - - - Show detailed explanation with code examples - Return to fix decision - - - - - - - Set {{new_status}} = "done" - Update story Status field to "done" - - - Set {{new_status}} = "in-progress" - Update story Status field to "in-progress" - - Save story file - - - - Set {{current_sprint_status}} = "enabled" - - - Set {{current_sprint_status}} = "no-sprint-tracking" - - - - - Load the FULL file: {sprint_status} - Find development_status key matching {{story_key}} - - - Update development_status[{{story_key}}] = "done" - Update last_updated field to current date - Save file, preserving ALL comments and structure - ✅ Sprint status synced: {{story_key}} → done - - - - Update development_status[{{story_key}}] = "in-progress" - Update last_updated field to current date - Save file, preserving ALL comments and structure - 🔄 Sprint status synced: {{story_key}} → in-progress - - - - ⚠️ Story file updated, but sprint-status sync failed: {{story_key}} not found in sprint-status.yaml - - - - - ℹ️ Story status updated (no sprint tracking configured) - - - **✅ Review Complete!** - - **Story Status:** {{new_status}} - **Issues Fixed:** {{fixed_count}} - **Action Items Created:** {{action_count}} - - {{#if new_status == "done"}}Code review complete!{{else}}Address the action items and continue development.{{/if}} - - - - +Read 
fully and follow `./steps/step-01-gather-context.md` to begin the workflow.
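
---

The triage rules in step-03 (unified finding shape, dedupe-by-specificity, and the no-spec reclassification of `intent_gap`/`bad_spec`) can be sketched in code. This is a minimal illustrative sketch, not part of the workflow: the `Finding` class mirrors the fields step-03 names, but the dedupe key (matching on `title`) and all identifiers here are assumptions chosen for the example.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    # Unified finding shape from step-03.
    id: int
    source: str          # "blind", "edge", "auditor", or merged e.g. "blind+edge"
    title: str
    detail: str
    location: str = ""   # "file:line" when available

VALID_BUCKETS = {"intent_gap", "bad_spec", "patch", "defer", "reject"}

def classify(bucket: str, review_mode: str, code_fixable: bool) -> str:
    """Apply step-03's reclassification rule: spec-related buckets are
    impossible in a no-spec review, so they fall back to patch/defer."""
    assert bucket in VALID_BUCKETS
    if review_mode == "no-spec" and bucket in ("intent_gap", "bad_spec"):
        return "patch" if code_fixable else "defer"
    return bucket

def dedupe(findings: list[Finding]) -> list[Finding]:
    """Merge findings describing the same issue (matched by title here,
    as a simplification), keeping the more specific one -- a finding
    with a location wins -- and recording merged sources."""
    by_title: dict[str, Finding] = {}
    for f in findings:
        prev = by_title.get(f.title)
        if prev is None:
            by_title[f.title] = f
        else:
            keep, drop = (f, prev) if f.location and not prev.location else (prev, f)
            keep.source = f"{keep.source}+{drop.source}"
            by_title[f.title] = keep
    return list(by_title.values())
```

In practice the agent performs this triage in prose rather than code; the sketch only pins down the intended semantics, e.g. `classify("bad_spec", "no-spec", code_fixable=True)` yields `"patch"`, and a located edge-case finding survives a merge with an unlocated blind-hunter duplicate.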