fix: copy custom workflows to src/bmm for installer compatibility
The installer reads from src/bmm/, not src/modules/bmm/. Copied all custom workflows to the installer-expected location:

✅ batch-super-dev (v1.3.0 with execution modes)
✅ super-dev-pipeline (v1.5.0 complete a-k workflow)
✅ multi-agent-review (fresh context, smart agents)
✅ revalidate-story (RVS)
✅ revalidate-epic (RVE)
✅ detect-ghost-features (GFD)
✅ migrate-to-github (MIG)

Now these workflows will actually install when users run the installer!
This commit is contained in:
parent
82f16adf7a
commit
c5795510e0
@ -0,0 +1,237 @@
# Agent Limitations in Batch Mode
|
||||
|
||||
**CRITICAL:** Agents running in batch-super-dev have specific limitations. Understanding these prevents wasted time and sets correct expectations.
|
||||
|
||||
---
|
||||
|
||||
## Core Limitations
|
||||
|
||||
### ❌ Agents CANNOT Invoke Other Workflows
|
||||
|
||||
**What this means:**
|
||||
- Agents cannot run `/create-story-with-gap-analysis`
|
||||
- Agents cannot execute `/` slash commands (those are for user CLI)
|
||||
- Agents cannot call other BMAD workflows mid-execution
|
||||
|
||||
**Why:**
|
||||
- Slash commands require user terminal context
|
||||
- Workflow invocation requires special tool access
|
||||
- Batch agents are isolated execution contexts
|
||||
|
||||
**Implication:**
|
||||
- Story creation MUST happen before batch execution
|
||||
- If stories are incomplete, batch will skip them
|
||||
- No way to "fix" stories during batch
|
||||
|
||||
---
|
||||
|
||||
### ❌ Agents CANNOT Prompt User Interactively
|
||||
|
||||
**What this means:**
|
||||
- Batch runs autonomously, no user interaction
|
||||
- `<ask>` tags are auto-answered with defaults
|
||||
- No way to clarify ambiguous requirements mid-batch
|
||||
|
||||
**Why:**
|
||||
- Batch is designed for unattended execution
|
||||
- User may not be present during execution
|
||||
- Prompts would break parallel execution
|
||||
|
||||
**Implication:**
|
||||
- All requirements must be clear in story file
|
||||
- Optional steps are skipped
|
||||
- Ambiguous stories will halt or skip
|
||||
|
||||
---
|
||||
|
||||
### ❌ Agents CANNOT Generate Missing BMAD Sections
|
||||
|
||||
**What this means:**
|
||||
- If story has <12 sections, agent halts
|
||||
- If story has 0 tasks, agent halts
|
||||
- Agent will NOT try to "fix" the story format
|
||||
|
||||
**Why:**
|
||||
- Story format is structural, not implementation
|
||||
- Generating sections requires context agent doesn't have
|
||||
- Gap analysis requires codebase scanning beyond agent scope
|
||||
|
||||
**Implication:**
|
||||
- All stories must be properly formatted BEFORE batch
|
||||
- Run validation: `./scripts/validate-bmad-format.sh`
|
||||
- Regenerate incomplete stories manually
|
||||
|
||||
---
|
||||
|
||||
## What Agents CAN Do
|
||||
|
||||
### ✅ Execute Clear, Well-Defined Tasks
|
||||
|
||||
**Works well:**
|
||||
- Stories with 10-30 specific tasks
|
||||
- Clear acceptance criteria
|
||||
- Existing code to modify
|
||||
- Well-defined scope
|
||||
|
||||
### ✅ Make Implementation Decisions
|
||||
|
||||
**Works well:**
|
||||
- Choose between valid approaches
|
||||
- Apply patterns from codebase
|
||||
- Fix bugs based on error messages
|
||||
- Optimize existing code
|
||||
|
||||
### ✅ Run Tests and Verify
|
||||
|
||||
**Works well:**
|
||||
- Execute test suites
|
||||
- Measure coverage
|
||||
- Fix failing tests
|
||||
- Validate implementations
|
||||
|
||||
---
|
||||
|
||||
## Pre-Batch Validation Checklist
|
||||
|
||||
**Before running /batch-super-dev, verify ALL selected stories:**
|
||||
|
||||
```bash
# 1. Check story files exist
for story in $(grep "ready-for-dev" docs/sprint-artifacts/sprint-status.yaml | awk '{print $1}' | sed 's/://'); do
  [ -f "docs/sprint-artifacts/story-$story.md" ] || echo "❌ Missing: $story"
done

# 2. Check all have 12 BMAD sections
for file in docs/sprint-artifacts/story-*.md; do
  sections=$(grep -c "^## " "$file")
  if [ "$sections" -lt 12 ]; then
    echo "❌ Incomplete: $file ($sections/12 sections)"
  fi
done

# 3. Check all have tasks
for file in docs/sprint-artifacts/story-*.md; do
  tasks=$(grep -c "^- \[ \]" "$file")
  if [ "$tasks" -eq 0 ]; then
    echo "❌ No tasks: $file"
  fi
done
```
|
||||
|
||||
**If any checks fail:**
|
||||
1. Regenerate those stories: `/create-story-with-gap-analysis`
|
||||
2. Validate again
|
||||
3. THEN run batch-super-dev
|
||||
|
||||
---
|
||||
|
||||
## Error Messages Explained
|
||||
|
||||
### "EARLY BAILOUT: No Tasks Found"
|
||||
|
||||
**What it means:** Story file has 0 unchecked tasks
|
||||
**Is this a bug?** ❌ NO - This is correct validation
|
||||
**What to do:**
|
||||
- If story is skeleton: Regenerate with /create-story-with-gap-analysis
|
||||
- If story is complete: Mark as "done" in sprint-status.yaml
|
||||
- If story needs work: Add tasks to story file
|
||||
|
||||
### "EARLY BAILOUT: Invalid Story Format"
|
||||
|
||||
**What it means:** Story missing required sections (Tasks, AC, etc.)
|
||||
**Is this a bug?** ❌ NO - This is correct validation
|
||||
**What to do:**
|
||||
- Regenerate with /create-story-with-gap-analysis
|
||||
- Do NOT try to manually add sections (skip gap analysis)
|
||||
- Do NOT launch batch with incomplete stories
|
||||
|
||||
### "Story Creation Failed" or "Skipped"
|
||||
|
||||
**What it means:** Agent tried to create story but couldn't
|
||||
**Is this a bug?** ❌ NO - Agents can't create stories
|
||||
**What to do:**
|
||||
- Exit batch-super-dev
|
||||
- Manually run /create-story-with-gap-analysis
|
||||
- Re-run batch after story created
|
||||
|
||||
---
|
||||
|
||||
## Best Practices
|
||||
|
||||
### ✅ DO: Generate All Stories Before Batch
|
||||
|
||||
**Workflow:**
|
||||
```
|
||||
1. Plan epic → Identify stories → Create list
|
||||
2. Generate stories: /create-story-with-gap-analysis (1-2 days)
|
||||
3. Validate stories: ./scripts/validate-all-stories.sh
|
||||
4. Execute stories: /batch-super-dev (parallel, fast)
|
||||
```
|
||||
|
||||
### ✅ DO: Use Small Batches for Mixed Complexity
|
||||
|
||||
**Workflow:**
|
||||
```
|
||||
1. Group by complexity (micro, standard, complex)
|
||||
2. Batch micro stories (quick wins)
|
||||
3. Batch standard stories
|
||||
4. Execute complex stories individually
|
||||
```
|
||||
|
||||
### ❌ DON'T: Try to Batch Regenerate
|
||||
|
||||
**Why it fails:**
|
||||
```
|
||||
1. Create 20 skeleton files with just widget lists
|
||||
2. Run /batch-super-dev
|
||||
3. Expect agents to regenerate them
|
||||
→ FAILS: Agents can't invoke /create-story workflow
|
||||
```
|
||||
|
||||
### ❌ DON'T: Mix Skeletons with Proper Stories
|
||||
|
||||
**Why it fails:**
|
||||
```
|
||||
1. 10 proper BMAD stories + 10 skeletons
|
||||
2. Run /batch-super-dev
|
||||
3. Expect batch to handle both
|
||||
→ RESULT: 10 execute, 10 skipped (confusing)
|
||||
```
|
||||
|
||||
### ❌ DON'T: Assume Agents Will "Figure It Out"
|
||||
|
||||
**Why it fails:**
|
||||
```
|
||||
1. Launch batch with unclear stories
|
||||
2. Hope agents will regenerate/fix/create
|
||||
→ RESULT: Agents halt correctly, nothing happens
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Summary
|
||||
|
||||
**The Golden Rule:**
|
||||
> **Batch-super-dev is for EXECUTION, not CREATION.**
|
||||
>
|
||||
> Story creation is interactive and requires user input.
|
||||
> Always create/regenerate stories BEFORE batch execution.
|
||||
|
||||
**Remember:**
|
||||
- Agents have limitations (documented above)
|
||||
- These are features, not bugs
|
||||
- Workflows correctly validate and halt
|
||||
- User must prepare stories properly first
|
||||
|
||||
**Success Formula:**
|
||||
```
|
||||
Proper Story Generation (1-2 days manual work)
|
||||
↓
|
||||
Validation (5 minutes automated)
|
||||
↓
|
||||
Batch Execution (4-8 hours parallel autonomous)
|
||||
↓
|
||||
Review & Merge (1-2 hours)
|
||||
```
|
||||
|
||||
Don't skip the preparation steps!
@ -0,0 +1,742 @@
# Batch Super-Dev Workflow
|
||||
|
||||
**Version:** 1.3.1 (Agent Limitations Documentation)
|
||||
**Created:** 2026-01-06
|
||||
**Updated:** 2026-01-08
|
||||
**Author:** BMad
|
||||
|
||||
---
|
||||
|
||||
## Critical Prerequisites
|
||||
|
||||
> **⚠️ IMPORTANT: Read before running batch-super-dev!**
|
||||
|
||||
**BEFORE running batch-super-dev:**
|
||||
|
||||
### ✅ 1. All stories must be properly generated
|
||||
|
||||
- Run: `/create-story-with-gap-analysis` for each story
|
||||
- Do NOT create skeleton/template files manually
|
||||
- Validation: `./scripts/validate-all-stories.sh`
|
||||
|
||||
**Why:** Agents CANNOT invoke `/create-story-with-gap-analysis` workflow. Story generation requires user interaction and context-heavy codebase scanning.
|
||||
|
||||
### ✅ 2. All stories must have 12 BMAD sections
|
||||
|
||||
Required sections:
|
||||
1. Business Context
|
||||
2. Current State
|
||||
3. Acceptance Criteria
|
||||
4. Tasks/Subtasks
|
||||
5. Technical Requirements
|
||||
6. Architecture Compliance
|
||||
7. Testing Requirements
|
||||
8. Dev Agent Guardrails
|
||||
9. Definition of Done
|
||||
10. References
|
||||
11. Dev Agent Record
|
||||
12. Change Log
|
||||
|
||||
### ✅ 3. All stories must have tasks
|
||||
|
||||
- At least 3 unchecked tasks (minimum for valid story)
|
||||
- Zero-task stories will be skipped
|
||||
- Validation: `grep -c "^- \[ \]" story-file.md`
|
||||
|
||||
### Common Failure Mode: Batch Regeneration
|
||||
|
||||
**What you might try:**
|
||||
```
|
||||
1. Create 20 skeleton story files (just headers + widget lists)
|
||||
2. Run /batch-super-dev
|
||||
3. Expect agents to regenerate them
|
||||
```
|
||||
|
||||
**What happens:**
|
||||
- Agents identify stories are incomplete
|
||||
- Agents correctly halt per super-dev-pipeline validation
|
||||
- Stories get skipped (not regenerated)
|
||||
- You waste time
|
||||
|
||||
**Solution:**
|
||||
```bash
|
||||
# 1. Generate all stories (1-2 days, manual)
|
||||
/create-story-with-gap-analysis # For each story
|
||||
|
||||
# 2. Validate (30 seconds, automated)
|
||||
./scripts/validate-all-stories.sh
|
||||
|
||||
# 3. Execute (4-8 hours, parallel autonomous)
|
||||
/batch-super-dev
|
||||
```
|
||||
|
||||
See: `AGENT-LIMITATIONS.md` for full documentation on what agents can and cannot do.
|
||||
|
||||
---
|
||||
|
||||
## Overview
|
||||
|
||||
Interactive batch workflow for processing multiple `ready-for-dev` stories sequentially or in parallel using the super-dev-pipeline with full quality gates.
|
||||
|
||||
**New in v1.2.0:** Smart Story Validation & Auto-Creation - validates story files, creates missing stories, regenerates invalid ones automatically.
|
||||
**New in v1.1.0:** Smart Story Reconciliation - automatically verifies story accuracy after each implementation.
|
||||
|
||||
---
|
||||
|
||||
## Features
|
||||
|
||||
### Core Capabilities
|
||||
|
||||
1. **🆕 Smart Story Validation & Auto-Creation** (NEW v1.2.0)
|
||||
- Validates all selected stories before processing
|
||||
- Checks for 12 required BMAD sections
|
||||
- Validates content quality (Current State ≥100 words, gap analysis present)
|
||||
- **Auto-creates missing story files** with codebase gap analysis
|
||||
- **Auto-regenerates invalid stories** (incomplete or stub files)
|
||||
- Interactive prompts (or fully automated with settings)
|
||||
- Backs up existing files before regeneration
|
||||
|
||||
2. **Interactive Story Selection**
|
||||
- Lists all `ready-for-dev` stories from sprint-status.yaml
|
||||
- Shows story status icons (✅ file exists, ❌ missing, 🔄 needs status update)
|
||||
- Supports flexible selection syntax: single, ranges, comma-separated, "all"
|
||||
- Optional epic filtering (process only Epic 3 stories, etc.)
|
||||
|
||||
3. **Execution Modes**
|
||||
- **Sequential:** Process stories one-by-one in current session (easier monitoring)
|
||||
- **Parallel:** Spawn Task agents to process stories concurrently (faster, autonomous)
|
||||
- Configurable parallelism: 2, 4, or all stories at once
|
||||
|
||||
4. **Full Quality Gates** (from super-dev-pipeline)
|
||||
- Pre-gap analysis (validate story completeness)
|
||||
- Test-driven implementation
|
||||
- Post-validation (verify requirements met)
|
||||
- Multi-agent code review (4 specialized agents)
|
||||
- Targeted git commits
|
||||
- Definition of done verification
|
||||
|
||||
5. **Smart Story Reconciliation** (v1.1.0)
|
||||
- Automatically checks story accuracy after implementation
|
||||
- Verifies Acceptance Criteria checkboxes match Dev Agent Record
|
||||
- Verifies Tasks/Subtasks checkboxes match implementation
|
||||
- Verifies Definition of Done completion
|
||||
- Updates story status (done/review/in-progress) based on actual completion
|
||||
- Synchronizes sprint-status.yaml with story file status
|
||||
- **Prevents "done" stories with unchecked items** ✅
|
||||
|
||||
---
|
||||
|
||||
## Smart Story Validation & Auto-Creation (NEW v1.2.0)
|
||||
|
||||
### What It Does
|
||||
|
||||
Before processing any selected stories, the workflow automatically validates each story file:
|
||||
|
||||
1. **File Existence Check** - Verifies story file exists (tries multiple naming patterns)
|
||||
2. **Section Validation** - Ensures all 12 BMAD sections are present
|
||||
3. **Content Quality Check** - Validates sufficient content (not stubs):
|
||||
- Current State: ≥100 words
|
||||
- Gap analysis markers: ✅/❌ present
|
||||
- Acceptance Criteria: ≥3 items
|
||||
- Tasks: ≥5 items
|
||||
4. **Auto-Creation** - Creates missing stories with codebase gap analysis
|
||||
5. **Auto-Regeneration** - Regenerates invalid/incomplete story files
|
||||
|
||||
### Why This Matters
|
||||
|
||||
**Problem this solves:**
|
||||
|
||||
Before v1.2.0:
|
||||
```
|
||||
User: "Process stories 3.1, 3.2, 3.3, 3.4"
|
||||
Workflow: "Story 3.3 file missing - please create it first"
|
||||
User: Ctrl+C → /create-story → /batch-super-dev again
|
||||
```
|
||||
|
||||
After v1.2.0:
|
||||
```
|
||||
User: "Process stories 3.1, 3.2, 3.3, 3.4"
|
||||
Workflow: "Story 3.3 missing - create it? (yes)"
|
||||
User: "yes"
|
||||
Workflow: Creates story 3.3 with gap analysis → Processes all 4 stories
|
||||
```
|
||||
|
||||
**Prevents:**
|
||||
- Incomplete story files being processed
|
||||
- Missing gap analysis
|
||||
- Stub files (< 100 words)
|
||||
- Manual back-and-forth workflow interruptions
|
||||
|
||||
### Validation Process
|
||||
|
||||
```
|
||||
Load Sprint Status
|
||||
↓
|
||||
Display Available Stories
|
||||
↓
|
||||
🆕 VALIDATE EACH STORY ← NEW STEP 2.5
|
||||
↓
|
||||
For each story:
|
||||
┌─ File missing? → Prompt: "Create story with gap analysis?"
|
||||
│ └─ yes → /create-story-with-gap-analysis → ✅ Created
|
||||
│ └─ no → ⏭️ Skip story
|
||||
│
|
||||
┌─ File exists but invalid?
|
||||
│ (< 12 sections OR < 100 words OR no gap analysis)
|
||||
│ → Prompt: "Regenerate story with codebase scan?"
|
||||
│ └─ yes → Backup original → /create-story-with-gap-analysis → ✅ Regenerated
|
||||
│ └─ no → ⏭️ Skip story
|
||||
│
|
||||
└─ File valid? → ✅ Ready to process
|
||||
↓
|
||||
Remove skipped stories
|
||||
↓
|
||||
Display Validated Stories
|
||||
↓
|
||||
User Selection (only validated stories)
|
||||
↓
|
||||
Process Stories
|
||||
```
|
||||
|
||||
### Configuration Options
|
||||
|
||||
**In workflow.yaml:**
|
||||
|
||||
```yaml
# Story validation settings (NEW in v1.2.0)
validation:
  enabled: true                   # Enable/disable validation
  auto_create_missing: false      # Auto-create without prompting (use cautiously)
  auto_regenerate_invalid: false  # Auto-regenerate without prompting (use cautiously)
  min_sections: 12                # BMAD format requires all 12
  min_current_state_words: 100    # Minimum content length
  require_gap_analysis: true      # Must have ✅/❌ markers
  backup_before_regenerate: true  # Create .backup before regenerating
```
|
||||
|
||||
**Interactive Mode (default):**
|
||||
- Prompts before creating/regenerating each story
|
||||
- Safe, user retains control
|
||||
- Recommended for most workflows
|
||||
|
||||
**Fully Automated Mode:**
|
||||
```yaml
validation:
  auto_create_missing: true
  auto_regenerate_invalid: true
```
|
||||
- Creates/regenerates without prompting
|
||||
- Faster for large batches
|
||||
- Use with caution (may overwrite valid stories)
|
||||
|
||||
### Example Session (v1.2.0)
|
||||
|
||||
```
|
||||
🤖 /batch-super-dev
|
||||
|
||||
📊 Ready-for-Dev Stories (5)
|
||||
|
||||
1. **3-1-vehicle-card** ✅
|
||||
→ Story file exists
|
||||
2. **3-2-vehicle-search** ✅
|
||||
→ Story file exists
|
||||
3. **3-3-vehicle-compare** ❌
|
||||
→ Story file missing
|
||||
4. **3-4-vehicle-details** ⚠️
|
||||
→ File exists (7/12 sections, stub content)
|
||||
5. **3-5-vehicle-history** ✅
|
||||
→ Story file exists
|
||||
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
🔍 VALIDATING STORY FILES
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
|
||||
Story 3-1-vehicle-card: ✅ Valid (12/12 sections, gap analysis present)
|
||||
|
||||
Story 3-2-vehicle-search: ✅ Valid (12/12 sections, gap analysis present)
|
||||
|
||||
📝 Story 3-3-vehicle-compare: File missing
|
||||
|
||||
Create story file with gap analysis? (yes/no): yes
|
||||
|
||||
Creating story 3-3-vehicle-compare with codebase gap analysis...
|
||||
→ Scanning apps/frontend/web for existing components...
|
||||
→ Scanning packages/widgets for related widgets...
|
||||
→ Analyzing gap: 3 files exist, 5 need creation
|
||||
|
||||
✅ Story 3-3-vehicle-compare created successfully (12/12 sections)
|
||||
|
||||
⚠️ Story 3-4-vehicle-details: File incomplete or invalid
|
||||
- Sections: 7/12
|
||||
- Current State: stub (32 words, expected ≥100)
|
||||
- Gap analysis: missing
|
||||
|
||||
Regenerate story with codebase scan? (yes/no): yes
|
||||
|
||||
Regenerating story 3-4-vehicle-details with gap analysis...
|
||||
→ Backing up to docs/sprint-artifacts/3-4-vehicle-details.md.backup
|
||||
→ Scanning codebase for VehicleDetails implementation...
|
||||
→ Found: packages/widgets/vehicle-details-v2 (partial)
|
||||
→ Analyzing gap: 8 files exist, 3 need creation
|
||||
|
||||
✅ Story 3-4-vehicle-details regenerated successfully (12/12 sections)
|
||||
|
||||
Story 3-5-vehicle-history: ✅ Valid (12/12 sections, gap analysis present)
|
||||
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
✅ Story Validation Complete
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
**Validated:** 5 stories ready to process
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
|
||||
Select stories to process: all
|
||||
|
||||
[Proceeds to process all 5 validated stories...]
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Smart Story Reconciliation (v1.1.0)
|
||||
|
||||
### What It Does
|
||||
|
||||
After each story completes, the workflow automatically:
|
||||
|
||||
1. **Loads Dev Agent Record** - Reads implementation summary, file list, test results
|
||||
2. **Analyzes Acceptance Criteria** - Checks which ACs have evidence of completion
|
||||
3. **Analyzes Tasks** - Verifies which tasks have been implemented
|
||||
4. **Analyzes Definition of Done** - Confirms quality gates passed
|
||||
5. **Calculates Completion %** - AC%, Tasks%, DoD% percentages
|
||||
6. **Determines Correct Status:**
|
||||
- `done`: AC≥95% AND Tasks≥95% AND DoD≥95%
|
||||
- `review`: AC≥80% AND Tasks≥80% AND DoD≥80%
|
||||
- `in-progress`: Below 80% on any category
|
||||
7. **Updates Story File** - Checks/unchecks boxes to match reality
|
||||
8. **Updates sprint-status.yaml** - Synchronizes status entry
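
The status rule in item 6 reduces to a simple threshold check; a minimal sketch with a hypothetical helper, taking the three percentages as integers:

```bash
# Status rule: done if all three ≥95%, review if all three ≥80%, else in-progress.
determine_status() {
  local ac=$1 tasks=$2 dod=$3
  if [ "$ac" -ge 95 ] && [ "$tasks" -ge 95 ] && [ "$dod" -ge 95 ]; then
    echo "done"
  elif [ "$ac" -ge 80 ] && [ "$tasks" -ge 80 ] && [ "$dod" -ge 80 ]; then
    echo "review"
  else
    echo "in-progress"
  fi
}

determine_status 100 100 96   # -> done
determine_status 85 92 88     # -> review
```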
|
||||
|
||||
### Why This Matters
|
||||
|
||||
**Problem this solves:**
|
||||
|
||||
Story 20.8 (before reconciliation):
|
||||
- Dev Agent Record: "COMPLETE - 10 files created, 37 tests passing"
|
||||
- Acceptance Criteria: All unchecked ❌
|
||||
- Tasks: All unchecked ❌
|
||||
- Definition of Done: All unchecked ❌
|
||||
- sprint-status.yaml: `ready-for-dev` ❌
|
||||
- **Reality:** Story was 100% complete but looked 0% complete!
|
||||
|
||||
**After reconciliation:**
|
||||
- Acceptance Criteria: 17/18 checked ✅
|
||||
- Tasks: 24/24 checked ✅
|
||||
- Definition of Done: 24/25 checked ✅
|
||||
- sprint-status.yaml: `done` ✅
|
||||
- **Accurate representation of actual completion** ✅
|
||||
|
||||
### Reconciliation Process
|
||||
|
||||
```
|
||||
Implementation Complete
|
||||
↓
|
||||
Load Dev Agent Record
|
||||
↓
|
||||
Parse: Implementation Summary, File List, Test Results, Completion Notes
|
||||
↓
|
||||
For each checkbox in ACs/Tasks/DoD:
|
||||
- Search Dev Agent Record for evidence
|
||||
- Determine expected status (checked/unchecked/partial)
|
||||
- Compare actual vs expected
|
||||
- Record discrepancies
|
||||
↓
|
||||
Calculate completion percentages:
|
||||
- AC: X/Y checked (Z%)
|
||||
- Tasks: X/Y checked (Z%)
|
||||
- DoD: X/Y checked (Z%)
|
||||
↓
|
||||
Determine correct story status (done/review/in-progress)
|
||||
↓
|
||||
Apply changes (with user confirmation):
|
||||
- Update checkboxes in story file
|
||||
- Update story status header
|
||||
- Update sprint-status.yaml entry
|
||||
↓
|
||||
Report final completion statistics
|
||||
```
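
The evidence search in the middle of this flow can be approximated with plain text matching; a minimal sketch in which the story path and keyword are illustrative, not the exact matching rules used by the workflow:

```bash
# Illustrative evidence check: does the Dev Agent Record mention the artifact named in a checkbox?
story="docs/sprint-artifacts/story-20-8.md"   # assumed path
keyword="FlexibleGridSection"                 # taken from the checkbox text
record=$(sed -n '/^## Dev Agent Record/,$p' "$story")

if grep -q "$keyword" <<< "$record"; then
  echo "evidence found -> expected state: [x]"
else
  echo "no evidence -> expected state: [ ]"
fi
```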
|
||||
|
||||
### Reconciliation Output
|
||||
|
||||
```
|
||||
🔧 Story 20.8: Reconciling 42 issues
|
||||
|
||||
Changes to apply:
|
||||
1. AC1: FlexibleGridSection component - CHECK (File created: FlexibleGridSection.tsx)
|
||||
2. AC2: Screenshot automation - CHECK (File created: screenshot-pages.ts)
|
||||
3. Task 1.3: Create page corpus generator - CHECK (File created: generate-page-corpus.ts)
|
||||
... (39 more)
|
||||
|
||||
Apply these reconciliation changes? (yes/no): yes
|
||||
|
||||
✅ Story 20.8: Reconciliation complete (42 changes applied)
|
||||
|
||||
📊 Story 20.8 - Final Status
|
||||
|
||||
Acceptance Criteria: 17/18 (94%)
|
||||
Tasks/Subtasks: 24/24 (100%)
|
||||
Definition of Done: 24/25 (96%)
|
||||
|
||||
Story Status: done
|
||||
sprint-status.yaml: done
|
||||
|
||||
✅ Story is COMPLETE and accurately reflects implementation
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Usage
|
||||
|
||||
### Basic Usage
|
||||
|
||||
```bash
|
||||
# Process all ready-for-dev stories
|
||||
/batch-super-dev
|
||||
|
||||
# Follow prompts:
|
||||
# 1. See list of ready stories
|
||||
# 2. Select stories to process (1,3-5,8 or "all")
|
||||
# 3. Choose execution mode (sequential/parallel)
|
||||
# 4. Confirm execution plan
|
||||
# 5. Stories process automatically with reconciliation
|
||||
# 6. Review batch summary
|
||||
```
|
||||
|
||||
### Epic Filtering
|
||||
|
||||
```bash
|
||||
# Only process Epic 3 stories
|
||||
/batch-super-dev filter_by_epic=3
|
||||
```
|
||||
|
||||
### Selection Syntax
|
||||
|
||||
```
|
||||
Single: 1
|
||||
Multiple: 1,3,5
|
||||
Range: 1-5 (processes 1,2,3,4,5)
|
||||
Mixed: 1,3-5,8 (processes 1,3,4,5,8)
|
||||
All: all (processes all ready-for-dev stories)
|
||||
```
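
The selection string can be expanded mechanically; a sketch of the parsing rule, where `expand_selection` is an illustrative helper and "all" is handled separately by the workflow:

```bash
# Expand "1,3-5,8" into one story index per line: 1 3 4 5 8.
expand_selection() {
  local part
  IFS=',' read -ra parts <<< "$1"
  for part in "${parts[@]}"; do
    if [[ "$part" == *-* ]]; then
      seq "${part%-*}" "${part#*-}"
    else
      echo "$part"
    fi
  done
}

expand_selection "1,3-5,8"
```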
|
||||
|
||||
### Execution Modes
|
||||
|
||||
**Sequential (Recommended for ≤5 stories):**
|
||||
- Processes one story at a time in current session
|
||||
- Easier to monitor progress
|
||||
- Lower resource usage
|
||||
- Can pause/cancel between stories
|
||||
|
||||
**Parallel (Recommended for >5 stories):**
|
||||
- Spawns autonomous Task agents
|
||||
- Much faster (2-4x speedup)
|
||||
- Choose parallelism: 2 (conservative), 4 (moderate), all (aggressive)
|
||||
- Requires more system resources
|
||||
|
||||
---
|
||||
|
||||
## Workflow Configuration
|
||||
|
||||
**File:** `_bmad/bmm/workflows/4-implementation/batch-super-dev/workflow.yaml`
|
||||
|
||||
### Key Settings
|
||||
|
||||
```yaml
# Safety limits
max_stories: 20                # Won't process more than 20 in one batch

# Pacing
pause_between_stories: 5       # Seconds between stories (sequential mode)

# Error handling
continue_on_failure: true      # Keep processing if one story fails

# Reconciliation (NEW v1.1.0)
reconciliation:
  enabled: true                # Auto-reconcile after each story
  require_confirmation: true   # Ask before applying changes
  update_sprint_status: true   # Sync sprint-status.yaml
```
|
||||
|
||||
---
|
||||
|
||||
## Workflow Steps
|
||||
|
||||
### 1. Load Sprint Status
|
||||
- Parses sprint-status.yaml
|
||||
- Filters stories with status="ready-for-dev"
|
||||
- Excludes epics and retrospectives
|
||||
- Optionally filters by epic number
|
||||
|
||||
### 2. Display Available Stories
|
||||
- Shows all ready-for-dev stories
|
||||
- Verifies story files exist
|
||||
- Displays status icons and comments
|
||||
|
||||
### 2.5. 🆕 Validate and Create/Regenerate Stories (NEW v1.2.0)
|
||||
**For each story:**
|
||||
- Check file existence (multiple naming patterns)
|
||||
- Validate 12 BMAD sections present
|
||||
- Check content quality (Current State ≥100 words, gap analysis)
|
||||
- **If missing:** Prompt to create with gap analysis
|
||||
- **If invalid:** Prompt to regenerate with codebase scan
|
||||
- **If valid:** Mark ready to process
|
||||
- Remove skipped stories from selection
|
||||
|
||||
### 3. Get User Selection
|
||||
- Interactive story picker
|
||||
- Supports flexible selection syntax
|
||||
- Validates selection and confirms
|
||||
|
||||
### 3.5. Choose Execution Strategy
|
||||
- Sequential vs Parallel
|
||||
- If parallel: choose concurrency level
|
||||
- Confirm execution plan
|
||||
|
||||
### 4. Process Stories
|
||||
**Sequential Mode:**
|
||||
- For each selected story:
|
||||
- Invoke super-dev-pipeline
|
||||
- Execute reconciliation (Step 4.5)
|
||||
- Report results
|
||||
- Pause between stories
|
||||
|
||||
**Parallel Mode (Semaphore Pattern - NEW v1.3.0):**
|
||||
- Initialize worker pool with N slots (user-selected concurrency)
|
||||
- Fill initial N slots with first N stories
|
||||
- Poll workers continuously (non-blocking)
|
||||
- As soon as worker completes → immediately refill slot with next story
|
||||
- Maintain constant N concurrent agents until queue empty
|
||||
- Execute reconciliation after each story completes
|
||||
- **Commit Queue:** File-based locking prevents git lock conflicts
|
||||
- Workers acquire `.git/bmad-commit.lock` before committing
|
||||
- Automatic retry with exponential backoff (1s → 30s)
|
||||
- Stale lock cleanup (>5 min)
|
||||
- Serialized commits, parallel implementation
|
||||
- No idle time waiting for batch synchronization
|
||||
- **20-40% faster** than old batch-and-wait pattern
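
A minimal sketch of the commit queue described above: the lock path and timings follow the description, but the directory-based locking mechanism itself is an assumption, not the shipped implementation:

```bash
# Illustrative commit queue: serialize commits across parallel workers via a lock.
LOCK=".git/bmad-commit.lock"

acquire_commit_lock() {
  local wait=1
  until mkdir "$LOCK" 2>/dev/null; do
    # Stale-lock cleanup: remove locks older than 5 minutes.
    if find "$LOCK" -maxdepth 0 -mmin +5 2>/dev/null | grep -q .; then
      rm -rf "$LOCK"
      continue
    fi
    sleep "$wait"
    wait=$((wait * 2)); [ "$wait" -gt 30 ] && wait=30   # exponential backoff, capped at 30s
  done
}

acquire_commit_lock
git commit -m "feat(story): targeted commit from one worker"
rm -rf "$LOCK"   # release the lock for the next worker
```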
|
||||
|
||||
### 4.5. Smart Story Reconciliation (NEW)
|
||||
**Executed after each story completes:**
|
||||
- Load Dev Agent Record
|
||||
- Analyze ACs/Tasks/DoD vs implementation
|
||||
- Calculate completion percentages
|
||||
- Determine correct story status
|
||||
- Update checkboxes and status
|
||||
- Sync sprint-status.yaml
|
||||
|
||||
See: `step-4.5-reconcile-story-status.md` for detailed algorithm
|
||||
|
||||
### 5. Display Batch Summary
|
||||
- Shows completion statistics
|
||||
- Lists failed stories (if any)
|
||||
- Lists reconciliation warnings (if any)
|
||||
- Provides next steps
|
||||
- Saves batch log
|
||||
|
||||
---
|
||||
|
||||
## Output Files
|
||||
|
||||
### Batch Log
|
||||
|
||||
**Location:** `docs/sprint-artifacts/batch-super-dev-{date}.log`
|
||||
|
||||
**Contains:**
|
||||
- Start/end timestamps
|
||||
- Selected stories
|
||||
- Completed stories
|
||||
- Failed stories
|
||||
- Reconciliation warnings
|
||||
- Success rate
|
||||
- Total duration
|
||||
|
||||
### Reconciliation Results (per story)
|
||||
|
||||
**Embedded in Dev Agent Record:**
|
||||
- Reconciliation summary
|
||||
- Changes applied
|
||||
- Final completion percentages
|
||||
- Status determination reasoning
|
||||
|
||||
---
|
||||
|
||||
## Error Handling
|
||||
|
||||
### Story Implementation Fails
|
||||
- Increments failed counter
|
||||
- Adds to failed_stories list
|
||||
- If `continue_on_failure=true`, continues with remaining stories
|
||||
- If `continue_on_failure=false`, stops batch
|
||||
|
||||
### Reconciliation Fails
|
||||
- Story still marked as completed (implementation succeeded)
|
||||
- Adds to reconciliation_warnings list
|
||||
- User warned to manually verify story accuracy
|
||||
- Does NOT fail the batch
|
||||
|
||||
### Task Agent Fails (Parallel Mode)
|
||||
- Collects error from TaskOutput
|
||||
- Marks story as failed
|
||||
- Continues with remaining stories in batch
|
||||
|
||||
---
|
||||
|
||||
## Best Practices
|
||||
|
||||
### Story Selection
|
||||
- ✅ Start small: Process 2-3 stories first to verify workflow
|
||||
- ✅ Group by epic: Related stories often share context
|
||||
- ✅ Check file status: ✅ stories are ready, ❌ need creation first
|
||||
- ❌ Don't process 20 stories at once on first run
|
||||
|
||||
### Execution Mode
|
||||
- Sequential for ≤5 stories (easier monitoring)
|
||||
- Parallel for >5 stories (faster completion)
|
||||
- Use parallelism=2 first, then increase if stable
|
||||
|
||||
### During Execution
|
||||
- Monitor progress output
|
||||
- Check reconciliation reports
|
||||
- Verify changes look correct
|
||||
- Spot-check 1-2 completed stories
|
||||
|
||||
### After Completion
|
||||
1. Review batch summary
|
||||
2. Check reconciliation warnings
|
||||
3. Verify sprint-status.yaml updated
|
||||
4. Run tests: `pnpm test`
|
||||
5. Check coverage: `pnpm test --coverage`
|
||||
6. Review commits: `git log -<count>`
|
||||
7. Spot-check 2-3 stories for quality
|
||||
|
||||
---
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Reconciliation Reports Many Warnings
|
||||
|
||||
**Cause:** Dev Agent Record may be incomplete or stories weren't fully implemented
|
||||
|
||||
**Fix:**
|
||||
1. Review listed stories manually
|
||||
2. Check Dev Agent Record has all required sections
|
||||
3. Re-run super-dev-pipeline for problematic stories
|
||||
4. Manually reconcile checkboxes if needed
|
||||
|
||||
### Parallel Mode Hangs
|
||||
|
||||
**Cause:** Too many agents running concurrently, system resources exhausted
|
||||
|
||||
**Fix:**
|
||||
1. Kill hung agents: `/tasks` then `kill <task-id>`
|
||||
2. Reduce parallelism: Use 2 instead of 4
|
||||
3. Process remaining stories sequentially
|
||||
|
||||
### Story Marked "done" but has Unchecked Items
|
||||
|
||||
**Cause:** Reconciliation may have missed some checkboxes
|
||||
|
||||
**Fix:**
|
||||
1. Review Dev Agent Record
|
||||
2. Check which checkboxes should be checked
|
||||
3. Manually check them or re-run reconciliation:
|
||||
- Load story file
|
||||
- Compare ACs/Tasks/DoD to Dev Agent Record
|
||||
- Update checkboxes to match reality
|
||||
|
||||
---
|
||||
|
||||
## Version History
|
||||
|
||||
### v1.3.0 (2026-01-07)
|
||||
- **NEW:** Complexity-Based Routing (Step 2.6)
|
||||
- Automatic story complexity scoring (micro/standard/complex)
|
||||
- Risk keyword detection with configurable weights
|
||||
- Smart pipeline selection: micro → lightweight, complex → enhanced
|
||||
- 50-70% token savings for micro stories
|
||||
- Deterministic classification with mutually exclusive thresholds
|
||||
- **CRITICAL:** Rejects stories with <3 tasks as INVALID (prevents 0-task stories from being processed)
|
||||
- **NEW:** Semaphore Pattern for Parallel Execution
|
||||
- Worker pool maintains constant N concurrent agents
|
||||
- As soon as worker completes → immediately start next story
|
||||
- No idle time waiting for batch synchronization
|
||||
- 20-40% faster than old batch-and-wait pattern
|
||||
- Non-blocking task polling with live progress dashboard
|
||||
- **NEW:** Git Commit Queue (Parallel-Safe)
|
||||
- File-based locking prevents concurrent commit conflicts
|
||||
- Workers acquire `.git/bmad-commit.lock` before committing
|
||||
- Automatic retry with exponential backoff (1s → 30s max)
|
||||
- Stale lock cleanup (>5 min old locks auto-removed)
|
||||
- Eliminates "Another git process is running" errors
|
||||
- Serializes commits while keeping implementations parallel
|
||||
- **NEW:** Continuous Sprint-Status Tracking
|
||||
- sprint-status.yaml updated after EVERY task completion
|
||||
- Real-time progress: "# 7/10 tasks (70%)"
|
||||
- CRITICAL enforcement with HALT on update failure
|
||||
- Immediate visibility into story progress
|
||||
- **NEW:** Stricter Story Validation
|
||||
- Step 2.5 now rejects stories with <3 tasks
|
||||
- Step 2.6 marks stories with <3 tasks as INVALID
|
||||
- Prevents incomplete/stub stories from being processed
|
||||
- Requires /validate-create-story to fix before processing
|
||||
|
||||
### v1.2.0 (2026-01-06)
|
||||
- **NEW:** Smart Story Validation & Auto-Creation (Step 2.5)
|
||||
- Validates story files before processing
|
||||
- Auto-creates missing stories with gap analysis
|
||||
- Auto-regenerates invalid/incomplete stories
|
||||
- Checks 12 BMAD sections, content quality
|
||||
- Interactive or fully automated modes
|
||||
- Backups before regeneration
|
||||
- **Removes friction:** No more "story file missing" interruptions
|
||||
- **Ensures quality:** Only valid stories with gap analysis proceed
|
||||
- **Configuration:** New `validation` settings in workflow.yaml
|
||||
|
||||
### v1.1.0 (2026-01-06)
|
||||
- **NEW:** Smart Story Reconciliation (Step 4.5)
|
||||
- Auto-verifies story accuracy after implementation
|
||||
- Updates checkboxes based on Dev Agent Record
|
||||
- Synchronizes sprint-status.yaml
|
||||
- Prevents "done" stories with unchecked items
|
||||
- Added reconciliation warnings to batch summary
|
||||
- Added reconciliation statistics to output
|
||||
|
||||
### v1.0.0 (2026-01-05)
|
||||
- Initial release
|
||||
- Interactive story selector
|
||||
- Sequential and parallel execution modes
|
||||
- Integration with super-dev-pipeline
|
||||
- Batch summary and logging
|
||||
|
||||
---
|
||||
|
||||
## Related Workflows
|
||||
|
||||
- **super-dev-pipeline:** Individual story implementation (invoked by batch-super-dev)
|
||||
- **create-story-with-gap-analysis:** Create new stories with codebase scan
|
||||
- **sprint-status:** View/update sprint status
|
||||
- **multi-agent-review:** Standalone code review (part of super-dev-pipeline)
|
||||
|
||||
---
|
||||
|
||||
## Support
|
||||
|
||||
**Questions or Issues:**
|
||||
- Check workflow logs: `docs/sprint-artifacts/batch-super-dev-*.log`
|
||||
- Review reconciliation step: `step-4.5-reconcile-story-status.md`
|
||||
- Check story file format: Ensure 12-section BMAD format
|
||||
- Verify Dev Agent Record populated: Required for reconciliation
|
||||
|
||||
---
|
||||
|
||||
**Last Updated:** 2026-01-07
|
||||
**Status:** Active - Production-ready with semaphore pattern and continuous tracking
|
||||
**Maintained By:** BMad
@ -0,0 +1,347 @@
# Batch-Super-Dev Step 2.5 Patch
|
||||
|
||||
**Issue:** Step 2.5 tries to invoke `/create-story-with-gap-analysis` which agents cannot do
|
||||
**Impact:** Skeleton stories get skipped instead of regenerated
|
||||
**Fix:** Explicitly halt batch and tell user to regenerate manually
|
||||
|
||||
---
|
||||
|
||||
## Current Code (BROKEN)
|
||||
|
||||
**File:** `instructions.md` lines 82-99
|
||||
|
||||
```xml
|
||||
<ask>Create story file with gap analysis? (yes/no):</ask>
|
||||
|
||||
<check if="response == 'yes'">
|
||||
<output>Creating story {{story_key}} with codebase gap analysis...</output>
|
||||
<action>Invoke workflow: /bmad:bmm:workflows:create-story-with-gap-analysis</action>
|
||||
<action>Parameters: story_key={{story_key}}</action>
|
||||
|
||||
<check if="story creation succeeded">
|
||||
<output>✅ Story {{story_key}} created successfully (12/12 sections)</output>
|
||||
<action>Update file_status_icon to ✅</action>
|
||||
<action>Mark story as validated</action>
|
||||
</check>
|
||||
|
||||
<check if="story creation failed">
|
||||
<output>❌ Story creation failed: {{story_key}}</output>
|
||||
<action>Mark story for removal from selection</action>
|
||||
<action>Add to skipped_stories list with reason: "Creation failed"</action>
|
||||
</check>
|
||||
</check>
|
||||
```
|
||||
|
||||
**Problem:**
|
||||
- Line 86: the `Invoke workflow: /bmad:bmm:workflows:create-story-with-gap-analysis` action doesn't work for agents
|
||||
- Agents can't execute slash commands
|
||||
- This always fails in batch mode
|
||||
|
||||
---
|
||||
|
||||
## Recommended Fix (WORKING)
|
||||
|
||||
**Replace lines 82-99 with:**
|
||||
|
||||
```xml
|
||||
<ask>Create story file with gap analysis? (yes/no):</ask>
|
||||
|
||||
<check if="response == 'yes'">
|
||||
<output>
|
||||
⚠️ STORY CREATION REQUIRES MANUAL WORKFLOW EXECUTION
|
||||
|
||||
**Story:** {{story_key}}
|
||||
**Status:** File missing or incomplete
|
||||
|
||||
**Problem:**
|
||||
Agents cannot invoke /create-story-with-gap-analysis workflow autonomously.
|
||||
This workflow requires:
|
||||
- Interactive user prompts
|
||||
- Context-heavy codebase scanning
|
||||
- Gap analysis decision-making
|
||||
|
||||
**Required Action:**
|
||||
|
||||
1. **Exit this batch execution:**
|
||||
- Remaining stories will be skipped
|
||||
- Batch will continue with valid stories only
|
||||
|
||||
2. **Regenerate story manually:**
|
||||
```
|
||||
/create-story-with-gap-analysis
|
||||
```
|
||||
When prompted, provide:
|
||||
- Story key: {{story_key}}
|
||||
- Epic: {epic from parent story}
|
||||
- Scope: {widget list or feature description}
|
||||
|
||||
3. **Validate story format:**
|
||||
```
|
||||
./scripts/validate-bmad-format.sh docs/sprint-artifacts/story-{{story_key}}.md
|
||||
```
|
||||
Must show: "✅ All 12 sections present"
|
||||
|
||||
4. **Re-run batch-super-dev:**
|
||||
- Story will now be properly formatted
|
||||
- Can be executed in next batch run
|
||||
|
||||
**Skipping story {{story_key}} from current batch execution.**
|
||||
</output>
|
||||
|
||||
<action>Mark story for removal from selection</action>
|
||||
<action>Add to skipped_stories list with reason: "Story creation requires manual workflow (agents cannot invoke /create-story)"</action>
|
||||
<action>Add to manual_actions_required list: "Regenerate {{story_key}} with /create-story-with-gap-analysis"</action>
|
||||
</check>
|
||||
|
||||
<check if="response == 'no'">
|
||||
<output>⏭️ Skipping story {{story_key}} (file missing, user declined creation)</output>
|
||||
<action>Mark story for removal from selection</action>
|
||||
<action>Add to skipped_stories list with reason: "User declined story creation"</action>
|
||||
</check>
|
||||
```
|
||||
|
||||
**Why This Works:**
|
||||
- ✅ Explicitly states agents can't create stories
|
||||
- ✅ Provides clear step-by-step user actions
|
||||
- ✅ Skips gracefully instead of failing silently
|
||||
- ✅ Tracks manual actions needed
|
||||
- ✅ Sets correct expectations
|
||||
|
||||
---
|
||||
|
||||
## Additional Improvements
|
||||
|
||||
### Add Manual Actions Tracking
|
||||
|
||||
**At end of batch execution (Step 5), add:**
|
||||
|
||||
```xml
|
||||
<check if="manual_actions_required is not empty">
|
||||
<output>
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
⚠️ MANUAL ACTIONS REQUIRED
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
|
||||
**{{manual_actions_required.length}} stories require manual intervention:**
|
||||
|
||||
{{#each manual_actions_required}}
|
||||
{{@index}}. **{{story_key}}**
|
||||
Action: {{action_description}}
|
||||
Command: {{command_to_run}}
|
||||
{{/each}}
|
||||
|
||||
**After completing these actions:**
|
||||
1. Validate all stories: ./scripts/validate-all-stories.sh
|
||||
2. Re-run batch-super-dev for these stories
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
</output>
|
||||
</check>
|
||||
```
|
||||
|
||||
**Why This Helps:**
|
||||
- User gets clear todo list
|
||||
- Knows exactly what to do next
|
||||
- Can track progress on manual actions
|
||||
|
||||
---
|
||||
|
||||
## Validation Script Enhancement
|
||||
|
||||
**Create:** `scripts/validate-all-stories.sh`
|
||||
|
||||
```bash
#!/bin/bash
# Validate all ready-for-dev stories have proper BMAD format

set -e

# `|| true` keeps the script from aborting under `set -e` when no stories are ready
STORIES=$(grep "ready-for-dev" docs/sprint-artifacts/sprint-status.yaml | awk '{print $1}' | sed 's/://' || true)

echo "=========================================="
echo " BMAD Story Format Validation"
echo "=========================================="
echo ""

TOTAL=0
VALID=0
INVALID=0

for story in $STORIES; do
  STORY_FILE="docs/sprint-artifacts/story-$story.md"

  if [ ! -f "$STORY_FILE" ]; then
    echo "❌ $story - FILE MISSING"
    INVALID=$((INVALID + 1))
    TOTAL=$((TOTAL + 1))
    continue
  fi

  # Check BMAD format (run inside the if condition so `set -e` doesn't abort on failure)
  if ./scripts/validate-bmad-format.sh "$STORY_FILE" >/dev/null 2>&1; then
    echo "✅ $story - Valid BMAD format"
    VALID=$((VALID + 1))
  else
    echo "❌ $story - Invalid format (run validation for details)"
    INVALID=$((INVALID + 1))
  fi

  TOTAL=$((TOTAL + 1))
done

echo ""
echo "=========================================="
echo " Summary"
echo "=========================================="
echo "Total Stories: $TOTAL"
echo "Valid: $VALID"
echo "Invalid: $INVALID"
echo ""

if [ $INVALID -eq 0 ]; then
  echo "✅ All stories ready for batch execution!"
  exit 0
else
  echo "❌ $INVALID stories need regeneration"
  echo ""
  echo "Run: /create-story-with-gap-analysis for each invalid story"
  exit 1
fi
```
|
||||
|
||||
**Why This Helps:**
|
||||
- Quick validation before batch
|
||||
- Prevents wasted time on incomplete stories
|
||||
- Clear pass/fail criteria
|
||||
|
||||
---
|
||||
|
||||
## Documentation Update
|
||||
|
||||
**Add to:** `_bmad/bmm/workflows/4-implementation/batch-super-dev/README.md`
|
||||
|
||||
```markdown
|
||||
# Batch Super-Dev Workflow
|
||||
|
||||
## Critical Prerequisites
|
||||
|
||||
**BEFORE running batch-super-dev:**
|
||||
|
||||
1. ✅ **All stories must be properly generated**
|
||||
- Run: `/create-story-with-gap-analysis` for each story
|
||||
- Do NOT create skeleton/template files manually
|
||||
- Validation: `./scripts/validate-all-stories.sh`
|
||||
|
||||
2. ✅ **All stories must have 12 BMAD sections**
|
||||
- Business Context, Current State, Acceptance Criteria
|
||||
- Tasks/Subtasks, Technical Requirements, Architecture Compliance
|
||||
- Testing Requirements, Dev Agent Guardrails, Definition of Done
|
||||
- References, Dev Agent Record, Change Log
|
||||
|
||||
3. ✅ **All stories must have tasks**
|
||||
- At least 1 unchecked task (something to implement)
|
||||
- Zero-task stories will be skipped
|
||||
- Validation: `grep -c "^- \[ \]" story-file.md`
|
||||
|
||||
## Common Failure Modes
|
||||
|
||||
### ❌ Attempting Batch Regeneration
|
||||
|
||||
**What you might try:**
|
||||
```
|
||||
1. Create 20 skeleton story files (just headers + widget lists)
|
||||
2. Run /batch-super-dev
|
||||
3. Expect agents to regenerate them
|
||||
```
|
||||
|
||||
**What happens:**
|
||||
- Agents identify stories are incomplete
|
||||
- Agents correctly halt per super-dev-pipeline validation
|
||||
- Stories get skipped (not regenerated)
|
||||
- You waste time
|
||||
|
||||
**Why:**
|
||||
- Agents CANNOT execute /create-story-with-gap-analysis
|
||||
- Agents CANNOT invoke other BMAD workflows
|
||||
- Story generation requires user interaction
|
||||
|
||||
**Solution:**
|
||||
- Generate ALL stories manually FIRST: /create-story-with-gap-analysis
|
||||
- Validate: ./scripts/validate-all-stories.sh
|
||||
- THEN run batch: /batch-super-dev
|
||||
|
||||
### ❌ Mixed Story Quality
|
||||
|
||||
**What you might try:**
|
||||
- Mix 10 proper stories + 10 skeletons
|
||||
- Run batch hoping it "figures it out"
|
||||
|
||||
**What happens:**
|
||||
- 10 proper stories execute successfully
|
||||
- 10 skeletons get skipped
|
||||
- Confusing results
|
||||
|
||||
**Solution:**
|
||||
- Ensure ALL stories have same quality
|
||||
- Validate before batch
|
||||
- Don't mix skeletons with proper stories
|
||||
|
||||
## Success Pattern
|
||||
|
||||
```bash
|
||||
# 1. Generate all stories (1-2 days, manual)
|
||||
for story in story-20-13a-{1..5}; do
|
||||
/create-story-with-gap-analysis
|
||||
# Provide story details interactively
|
||||
done
|
||||
|
||||
# 2. Validate (30 seconds, automated)
|
||||
./scripts/validate-all-stories.sh
|
||||
|
||||
# 3. Execute (4-8 hours, parallel autonomous)
|
||||
/batch-super-dev
|
||||
# Select all 5 stories
|
||||
# Choose 2-4 agents parallel
|
||||
|
||||
# 4. Review (1-2 hours)
|
||||
# Review commits, merge to main
|
||||
```
|
||||
|
||||
**Total Time:**
|
||||
- Manual work: 1-2 days (story generation)
|
||||
- Autonomous work: 4-8 hours (batch execution)
|
||||
- Review: 1-2 hours
|
||||
|
||||
**Efficiency:**
|
||||
- Story generation: Cannot be batched (requires user input)
|
||||
- Story execution: Highly parallelizable (4x speedup with 4 agents)
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Implementation Checklist
|
||||
|
||||
**To apply these improvements:**
|
||||
|
||||
- [ ] Update `batch-super-dev/instructions.md` Step 2.5 (lines 82-99)
|
||||
- [ ] Add `batch-super-dev/AGENT-LIMITATIONS.md` (new file)
|
||||
- [ ] Add `batch-super-dev/BATCH-BEST-PRACTICES.md` (new file)
|
||||
- [ ] Update `batch-super-dev/README.md` with prerequisites
|
||||
- [ ] Create `scripts/validate-all-stories.sh` (new script)
|
||||
- [ ] Add manual actions tracking to Step 5 summary
|
||||
- [ ] Update super-dev-pipeline Step 1.4.5 with agent guidance
|
||||
|
||||
**Testing:**
|
||||
- Try batch with mixed story quality → Should skip skeletons gracefully
|
||||
- Verify error messages are clear
|
||||
- Confirm agents halt correctly (not crash)
|
||||
|
||||
---
|
||||
|
||||
**Expected Result:**
|
||||
- Users understand limitations upfront
|
||||
- Clear guidance when stories are incomplete
|
||||
- No false expectations about batch regeneration
|
||||
- Better error messages
File diff suppressed because it is too large
@ -0,0 +1,390 @@
# Step 4.5: Smart Story Reconciliation
|
||||
|
||||
<critical>Execute AFTER super-dev-pipeline completes but BEFORE marking story as "completed"</critical>
|
||||
<critical>This ensures story checkboxes and status accurately reflect actual implementation</critical>
|
||||
|
||||
## Goal
|
||||
|
||||
Verify story file accuracy by reconciling:
|
||||
1. **Acceptance Criteria checkboxes** vs Dev Agent Record
|
||||
2. **Tasks/Subtasks checkboxes** vs Dev Agent Record
|
||||
3. **Definition of Done checkboxes** vs Dev Agent Record
|
||||
4. **Story status** (should be "done" if implementation complete)
|
||||
5. **sprint-status.yaml entry** (should match story file status)
|
||||
|
||||
---
|
||||
|
||||
## Execution
|
||||
|
||||
### 1. Load Story File
|
||||
|
||||
<action>Read story file: {story_file_path}</action>
|
||||
<action>Extract sections:
|
||||
- Acceptance Criteria (## Acceptance Criteria)
|
||||
- Tasks / Subtasks (## Tasks / Subtasks)
|
||||
- Definition of Done (## Definition of Done)
|
||||
- Dev Agent Record (## Dev Agent Record)
|
||||
- Story status header (**Status:** ...)
|
||||
</action>
|
||||
|
||||
### 2. Analyze Dev Agent Record
|
||||
|
||||
<action>Read "Dev Agent Record" section</action>
|
||||
|
||||
<check if="Dev Agent Record is empty or says '(To be filled by dev agent)'">
|
||||
<output>⚠️ Story {{story_key}}: Dev Agent Record is empty - cannot reconcile</output>
|
||||
<output>This suggests super-dev-pipeline did not complete successfully.</output>
|
||||
<action>Mark story as FAILED reconciliation</action>
|
||||
<action>Return early (skip remaining checks)</action>
|
||||
</check>
|
||||
|
||||
<action>Parse Dev Agent Record fields:
|
||||
- **Agent Model Used** (should have model name, not empty)
|
||||
- **Implementation Summary** (should describe what was built)
|
||||
- **File List** (should list new/modified files)
|
||||
- **Test Results** (should show test counts)
|
||||
- **Completion Notes** (should document what works)
|
||||
</action>
|
||||
|
||||
<check if="Implementation Summary contains 'COMPLETE' or lists specific deliverables">
|
||||
<action>Set implementation_status = COMPLETE</action>
|
||||
</check>
|
||||
|
||||
<check if="Implementation Summary is vague or says 'pending'">
|
||||
<action>Set implementation_status = INCOMPLETE</action>
|
||||
<output>⚠️ Story {{story_key}}: Implementation appears incomplete based on Dev Agent Record</output>
|
||||
</check>
|
||||
|
||||
### 3. Reconcile Acceptance Criteria
|
||||
|
||||
<action>For each AC subsection (AC1, AC2, AC3, AC4, etc.):</action>
|
||||
|
||||
<iterate>For each checkbox in AC section:</iterate>
|
||||
|
||||
<substep n="3a" title="Identify expected status from Dev Agent Record">
|
||||
<action>Search Implementation Summary and File List for keywords from checkbox text</action>
|
||||
|
||||
<example>
|
||||
Checkbox: "[ ] FlexibleGridSection component (renders dynamic grid layouts)"
|
||||
Implementation Summary mentions: "FlexibleGridSection component created"
|
||||
File List includes: "FlexibleGridSection.tsx"
|
||||
→ Expected status: CHECKED
|
||||
</example>
|
||||
|
||||
<action>Determine expected_checkbox_status:
|
||||
- CHECKED if Implementation Summary confirms it OR File List shows created files OR Test Results mention it
|
||||
- UNCHECKED if no evidence in Dev Agent Record
|
||||
- PARTIAL if mentioned as "pending" or "infrastructure ready"
|
||||
</action>
|
||||
</substep>
|
||||
|
||||
<substep n="3b" title="Compare actual vs expected">
|
||||
<action>Read actual checkbox state from story file ([x] vs [ ] vs [~])</action>
|
||||
|
||||
<check if="actual != expected">
|
||||
<output>🔧 Reconciling AC: "{{checkbox_text}}"
|
||||
Actual: {{actual_status}}
|
||||
Expected: {{expected_status}}
|
||||
Reason: {{evidence_from_dev_record}}
|
||||
</output>
|
||||
<action>Add to reconciliation_changes list</action>
|
||||
</check>
|
||||
</substep>
|
||||
|
||||
<action>After checking all ACs:
|
||||
- Count total AC items
|
||||
- Count checked AC items (after reconciliation)
|
||||
- Calculate AC completion percentage
|
||||
</action>
|
||||
|
||||
### 4. Reconcile Tasks / Subtasks
|
||||
|
||||
<action>For each Task (Task 1, Task 2, etc.):</action>
|
||||
|
||||
<iterate>For each checkbox in Tasks section:</iterate>
|
||||
|
||||
<substep n="4a" title="Identify expected status from Dev Agent Record">
|
||||
<action>Search Implementation Summary and File List for task keywords</action>
|
||||
|
||||
<example>
|
||||
Task checkbox: "[ ] **2.2:** Create FlexibleGridSection component"
|
||||
File List includes: "apps/frontend/web/src/components/FlexibleGridSection.tsx"
|
||||
→ Expected status: CHECKED
|
||||
</example>
|
||||
|
||||
<action>Determine expected_checkbox_status using same logic as AC section</action>
|
||||
</substep>
|
||||
|
||||
<substep n="4b" title="Compare and reconcile">
|
||||
<action>Read actual checkbox state</action>
|
||||
|
||||
<check if="actual != expected">
|
||||
<output>🔧 Reconciling Task: "{{task_text}}"
|
||||
Actual: {{actual_status}}
|
||||
Expected: {{expected_status}}
|
||||
Reason: {{evidence_from_dev_record}}
|
||||
</output>
|
||||
<action>Add to reconciliation_changes list</action>
|
||||
</check>
|
||||
</substep>
|
||||
|
||||
<action>After checking all Tasks:
|
||||
- Count total task items
|
||||
- Count checked task items (after reconciliation)
|
||||
- Calculate task completion percentage
|
||||
</action>
|
||||
|
||||
### 5. Reconcile Definition of Done
|
||||
|
||||
<action>For each DoD category (Code Quality, Testing, Security, etc.):</action>
|
||||
|
||||
<iterate>For each checkbox in DoD section:</iterate>
|
||||
|
||||
<substep n="5a" title="Determine expected status">
|
||||
<action>Check Test Results, Completion Notes for evidence</action>
|
||||
|
||||
<example>
|
||||
DoD checkbox: "[ ] Type check passes: `pnpm type-check` (zero errors)"
|
||||
Completion Notes say: "Type check passes ✅"
|
||||
→ Expected status: CHECKED
|
||||
</example>
|
||||
|
||||
<example>
|
||||
DoD checkbox: "[ ] Unit tests: 90%+ coverage"
|
||||
Test Results say: "37 tests passing"
|
||||
Completion Notes say: "100% coverage on FlexibleGridSection"
|
||||
→ Expected status: CHECKED
|
||||
</example>
|
||||
|
||||
<action>Determine expected_checkbox_status</action>
|
||||
</substep>
|
||||
|
||||
<substep n="5b" title="Compare and reconcile">
|
||||
<action>Read actual checkbox state</action>
|
||||
|
||||
<check if="actual != expected">
|
||||
<output>🔧 Reconciling DoD: "{{dod_text}}"
|
||||
Actual: {{actual_status}}
|
||||
Expected: {{expected_status}}
|
||||
Reason: {{evidence_from_dev_record}}
|
||||
</output>
|
||||
<action>Add to reconciliation_changes list</action>
|
||||
</check>
|
||||
</substep>
|
||||
|
||||
<action>After checking all DoD items:
|
||||
- Count total DoD items
|
||||
- Count checked DoD items (after reconciliation)
|
||||
- Calculate DoD completion percentage
|
||||
</action>
|
||||
|
||||
### 6. Determine Correct Story Status
|
||||
|
||||
<action>Based on completion percentages, determine correct story status:</action>
|
||||
|
||||
<check if="AC >= 95% AND Tasks >= 95% AND DoD >= 95%">
|
||||
<action>Set correct_story_status = "done"</action>
|
||||
</check>
|
||||
|
||||
<check if="AC >= 80% AND Tasks >= 80% AND DoD >= 80%">
|
||||
<action>Set correct_story_status = "review"</action>
|
||||
</check>
|
||||
|
||||
<check if="AC < 80% OR Tasks < 80% OR DoD < 80%">
|
||||
<action>Set correct_story_status = "in-progress"</action>
|
||||
</check>
|
||||
|
||||
<check if="implementation_status == INCOMPLETE">
|
||||
<action>Override: Set correct_story_status = "in-progress"</action>
|
||||
<output>⚠️ Overriding status to "in-progress" due to incomplete implementation</output>
|
||||
</check>
|
||||
|
||||
<action>Read current story status from story file (**Status:** ...)</action>
|
||||
|
||||
<check if="current_story_status != correct_story_status">
|
||||
<output>🔧 Story status mismatch:
|
||||
Current: {{current_story_status}}
|
||||
Expected: {{correct_story_status}}
|
||||
Reason: AC={{ac_pct}}% Tasks={{tasks_pct}}% DoD={{dod_pct}}%
|
||||
</output>
|
||||
<action>Add to reconciliation_changes list</action>
|
||||
</check>
|
||||
|
||||
### 7. Verify sprint-status.yaml Entry
|
||||
|
||||
<action>Read {sprint_status} file</action>
|
||||
<action>Find entry for {{story_key}}</action>
|
||||
<action>Extract current status from sprint-status.yaml</action>
|
||||
|
||||
<check if="sprint_status_yaml_status != correct_story_status">
|
||||
<output>🔧 sprint-status.yaml mismatch:
|
||||
Current: {{sprint_status_yaml_status}}
|
||||
Expected: {{correct_story_status}}
|
||||
</output>
|
||||
<action>Add to reconciliation_changes list</action>
|
||||
</check>
|
||||
|
||||
### 8. Apply Reconciliation Changes
|
||||
|
||||
<check if="reconciliation_changes is empty">
|
||||
<output>✅ Story {{story_key}}: Already accurate (0 changes needed)</output>
|
||||
<action>Return SUCCESS (no updates needed)</action>
|
||||
</check>
|
||||
|
||||
<check if="reconciliation_changes is NOT empty">
|
||||
<output>
|
||||
🔧 Story {{story_key}}: Reconciling {{count}} issues
|
||||
|
||||
**Changes to apply:**
|
||||
{{#each reconciliation_changes}}
|
||||
{{@index}}. {{change_description}}
|
||||
{{/each}}
|
||||
</output>
|
||||
|
||||
<ask>Apply these reconciliation changes? (yes/no):</ask>
|
||||
|
||||
<check if="response != 'yes'">
|
||||
<output>⏭️ Skipping reconciliation for {{story_key}}</output>
|
||||
<action>Return SUCCESS (user declined changes)</action>
|
||||
</check>
|
||||
|
||||
<substep n="8a" title="Update Acceptance Criteria">
|
||||
<action>For each AC checkbox that needs updating:</action>
|
||||
<action>Use Edit tool to update checkbox from [ ] to [x] or [~]</action>
|
||||
<action>Add note explaining why: "- [x] Item - COMPLETE: {{evidence}}"</action>
|
||||
</substep>
|
||||
|
||||
<substep n="8b" title="Update Tasks / Subtasks">
|
||||
<action>For each Task checkbox that needs updating:</action>
|
||||
<action>Use Edit tool to update checkbox</action>
|
||||
<action>Update task header if all subtasks complete: "### Task 1: ... ✅ COMPLETE"</action>
|
||||
</substep>
|
||||
|
||||
<substep n="8c" title="Update Definition of Done">
|
||||
<action>For each DoD checkbox that needs updating:</action>
|
||||
<action>Use Edit tool to update checkbox</action>
|
||||
<action>Add verification note: "- [x] Item ✅ (verified in Dev Agent Record)"</action>
|
||||
</substep>
|
||||
|
||||
<substep n="8d" title="Update Story Status">
|
||||
<check if="story status needs updating">
|
||||
<action>Use Edit tool to update status line</action>
|
||||
<action>Change from: **Status:** {{old_status}}</action>
|
||||
<action>Change to: **Status:** {{correct_story_status}}</action>
|
||||
</check>
|
||||
</substep>
|
||||
|
||||
<substep n="8e" title="Update sprint-status.yaml with status and progress">
|
||||
<check if="sprint-status.yaml needs updating">
|
||||
<action>Use Edit tool to update status entry</action>
|
||||
|
||||
<action>Count tasks from story file:
|
||||
- total_tasks = all top-level tasks
|
||||
- checked_tasks = tasks marked [x]
|
||||
- progress_pct = (checked_tasks / total_tasks) × 100
|
||||
</action>
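A hedged sketch of this counting rule (assumes top-level tasks are unindented `- [ ]` / `- [x]` checkboxes and subtasks are indented; the helper name is illustrative):

```javascript
// Illustrative only — counts top-level checkboxes, ignoring indented subtasks.
function countTasks(storyMarkdown) {
  const topLevel = storyMarkdown
    .split('\n')
    .filter(line => /^- \[[ x~]\]/.test(line)); // no leading indent = top-level task
  const checked = topLevel.filter(line => /^- \[x\]/i.test(line)).length;
  const total = topLevel.length;
  const progressPct = total === 0 ? 0 : Math.round((checked / total) * 100);
  return { total, checked, progressPct };
}
```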
|
||||
|
||||
<action>Update comment with progress tracking (NEW v1.3.0):
|
||||
If status == "in-progress":
|
||||
Format: {{story_key}}: in-progress # {{checked_tasks}}/{{total_tasks}} tasks ({{progress_pct}}%)
|
||||
|
||||
If status == "review":
|
||||
Format: {{story_key}}: review # {{checked_tasks}}/{{total_tasks}} tasks ({{progress_pct}}%) - awaiting review
|
||||
|
||||
If status == "done":
|
||||
Format: {{story_key}}: done # ✅ COMPLETED: {{brief_summary}}
|
||||
</action>
|
||||
|
||||
<example>
|
||||
Before: 20-8-...: ready-for-dev # Story description
|
||||
During: 20-8-...: in-progress # 3/10 tasks (30%)
|
||||
During: 20-8-...: in-progress # 7/10 tasks (70%)
|
||||
Review: 20-8-...: review # 10/10 tasks (100%) - awaiting review
|
||||
After: 20-8-...: done # ✅ COMPLETED: Component + tests + docs
|
||||
</example>
|
||||
</check>
|
||||
</substep>
|
||||
|
||||
<output>✅ Story {{story_key}}: Reconciliation complete ({{count}} changes applied)</output>
|
||||
</check>
|
||||
|
||||
### 9. Final Verification
|
||||
|
||||
<action>Re-read story file to verify changes applied correctly</action>
|
||||
<action>Calculate final completion percentages</action>
|
||||
|
||||
<output>
|
||||
📊 Story {{story_key}} - Final Status
|
||||
|
||||
**Acceptance Criteria:** {{ac_checked}}/{{ac_total}} ({{ac_pct}}%)
|
||||
**Tasks/Subtasks:** {{tasks_checked}}/{{tasks_total}} ({{tasks_pct}}%)
|
||||
**Definition of Done:** {{dod_checked}}/{{dod_total}} ({{dod_pct}}%)
|
||||
|
||||
**Story Status:** {{correct_story_status}}
|
||||
**sprint-status.yaml:** {{correct_story_status}}
|
||||
|
||||
{{#if correct_story_status == "done"}}
|
||||
✅ Story is COMPLETE and accurately reflects implementation
|
||||
{{/if}}
|
||||
|
||||
{{#if correct_story_status == "review"}}
|
||||
⚠️ Story needs review (some items incomplete)
|
||||
{{/if}}
|
||||
|
||||
{{#if correct_story_status == "in-progress"}}
|
||||
⚠️ Story has significant gaps (implementation incomplete)
|
||||
{{/if}}
|
||||
</output>
|
||||
|
||||
<action>Return SUCCESS with reconciliation summary</action>
|
||||
|
||||
---
|
||||
|
||||
## Success Criteria
|
||||
|
||||
Story reconciliation succeeds when:
|
||||
1. ✅ All checkboxes match Dev Agent Record evidence
|
||||
2. ✅ Story status accurately reflects completion (done/review/in-progress)
|
||||
3. ✅ sprint-status.yaml entry matches story file status
|
||||
4. ✅ Completion percentages calculated and reported
|
||||
5. ✅ Changes documented in reconciliation summary
|
||||
|
||||
---
|
||||
|
||||
## Error Handling
|
||||
|
||||
<check if="story file not found">
|
||||
<output>❌ Story {{story_key}}: File not found at {{story_file_path}}</output>
|
||||
<action>Return FAILED reconciliation</action>
|
||||
</check>
|
||||
|
||||
<check if="Dev Agent Record missing or empty">
|
||||
<output>⚠️ Story {{story_key}}: Cannot reconcile - Dev Agent Record not populated</output>
|
||||
<action>Mark as INCOMPLETE (not implemented yet)</action>
|
||||
<action>Return WARNING reconciliation</action>
|
||||
</check>
|
||||
|
||||
<check if="Edit tool fails">
|
||||
<output>❌ Story {{story_key}}: Failed to apply changes (Edit tool error)</output>
|
||||
<action>Log error details</action>
|
||||
<action>Return FAILED reconciliation</action>
|
||||
</check>
|
||||
|
||||
---
|
||||
|
||||
## Integration with batch-super-dev
|
||||
|
||||
**Insert this step:**
|
||||
- **Sequential mode:** After Step 4s-a (Process individual story), before marking completed
|
||||
- **Parallel mode:** After Step 4p-a (Spawn Task agents), once each spawned agent completes but before the story is marked completed
|
||||
|
||||
**Flow:**
|
||||
```
|
||||
super-dev-pipeline completes → Step 4.5 (Reconcile) → Mark as completed/failed
|
||||
```
|
||||
|
||||
**Benefits:**
|
||||
- Ensures all batch-processed stories have accurate status
|
||||
- Catches mismatches automatically
|
||||
- Prevents "done" stories with unchecked items
|
||||
- Maintains sprint-status.yaml accuracy
|
||||
|
|
@ -0,0 +1,97 @@
|
|||
name: batch-super-dev
|
||||
description: "Interactive batch selector for super-dev-pipeline with complexity-based routing. Micro stories get lightweight path, standard stories get full pipeline, complex stories get enhanced validation."
|
||||
author: "BMad"
|
||||
version: "1.3.0"
|
||||
|
||||
# Critical variables from config
|
||||
config_source: "{project-root}/_bmad/bmm/config.yaml"
|
||||
output_folder: "{config_source}:output_folder"
|
||||
sprint_artifacts: "{config_source}:sprint_artifacts"
|
||||
communication_language: "{config_source}:communication_language"
|
||||
date: system-generated
|
||||
|
||||
# Workflow paths
|
||||
installed_path: "{project-root}/_bmad/bmm/workflows/4-implementation/batch-super-dev"
|
||||
instructions: "{installed_path}/instructions.md"
|
||||
|
||||
# State management
|
||||
sprint_status: "{sprint_artifacts}/sprint-status.yaml"
|
||||
batch_log: "{sprint_artifacts}/batch-super-dev-{date}.log"
|
||||
|
||||
# Variables
|
||||
filter_by_epic: "" # Optional: Filter stories by epic number (e.g., "3" for only Epic 3 stories)
|
||||
max_stories: 20 # Safety limit - won't process more than this in one batch
|
||||
pause_between_stories: 5 # Seconds to pause between stories (allows monitoring, prevents rate limits)
|
||||
|
||||
# Super-dev-pipeline invocation settings
|
||||
super_dev_settings:
|
||||
mode: "batch" # Always use batch mode for autonomous execution
|
||||
workflow_path: "{project-root}/_bmad/bmm/workflows/4-implementation/super-dev-pipeline"
|
||||
|
||||
# Story validation settings (NEW in v1.2.0)
|
||||
validation:
|
||||
enabled: true # Validate story files before processing
|
||||
auto_create_missing: false # If true, auto-create without prompting (use with caution)
|
||||
auto_regenerate_invalid: false # If true, auto-regenerate without prompting (use with caution)
|
||||
min_sections: 12 # BMAD format requires all 12 sections
|
||||
min_current_state_words: 100 # Current State must have substantial content
|
||||
require_gap_analysis: true # Current State must have ✅/❌ markers
|
||||
backup_before_regenerate: true # Create .backup file before regenerating
|
||||
|
||||
# Story complexity scoring (NEW in v1.3.0)
|
||||
# Routes stories to appropriate pipeline based on complexity
|
||||
complexity:
|
||||
enabled: true
|
||||
thresholds:
|
||||
micro: # Lightweight path: skip gap analysis + code review
|
||||
max_tasks: 3
|
||||
max_files: 5
|
||||
risk_keywords: [] # No high-risk keywords allowed
|
||||
standard: # Normal path: full pipeline
|
||||
max_tasks: 15
|
||||
max_files: 30
|
||||
risk_keywords: ["api", "service", "component", "feature"]
|
||||
complex: # Enhanced path: extra validation, consider splitting
|
||||
min_tasks: 16
|
||||
risk_keywords: ["auth", "security", "migration", "database", "payment", "encryption"]
|
||||
|
||||
# Risk keyword scoring (adds to complexity)
|
||||
risk_weights:
|
||||
high: ["auth", "security", "payment", "encryption", "migration", "database", "schema"]
|
||||
medium: ["api", "integration", "external", "third-party", "cache"]
|
||||
low: ["ui", "style", "config", "docs", "test"]
|
||||
|
||||
# Keyword matching configuration (defines how risk keywords are detected)
|
||||
keyword_matching:
|
||||
case_sensitive: false # "AUTH" matches "auth"
|
||||
require_word_boundaries: true # "auth" won't match "author"
|
||||
match_strategy: "exact" # exact word match required (no stemming)
|
||||
scan_locations:
|
||||
- story_title
|
||||
- task_descriptions
|
||||
- subtask_descriptions
|
||||
# Keyword variants (synonyms that map to canonical forms)
|
||||
variants:
|
||||
auth: ["authentication", "authorize", "authorization", "authz", "authn"]
|
||||
database: ["db", "databases", "datastore"]
|
||||
payment: ["payments", "pay", "billing", "checkout"]
|
||||
migration: ["migrations", "migrate"]
|
||||
security: ["secure", "security"]
|
||||
encryption: ["encrypt", "encrypted", "cipher"]
|
||||
|
||||
# Task counting rules
|
||||
task_counting:
|
||||
method: "top_level_only" # Only count [ ] at task level, not subtasks
|
||||
# Options: "top_level_only", "include_subtasks", "weighted"
|
||||
# Example:
|
||||
# - [ ] Parent task <- counts as 1
|
||||
# - [ ] Subtask 1 <- ignored
|
||||
# - [ ] Subtask 2 <- ignored
|
||||
|
||||
# Execution settings
|
||||
execution:
|
||||
continue_on_failure: true # Keep processing remaining stories if one fails
|
||||
display_progress: true # Show running summary after each story
|
||||
save_state: true # Save progress to resume if interrupted
|
||||
|
||||
standalone: true
|
||||
|
|
@ -0,0 +1,625 @@
|
|||
# Detect Ghost Features - Reverse Gap Analysis (Who You Gonna Call?)
|
||||
|
||||
<critical>The workflow execution engine is governed by: {project-root}/_bmad/core/tasks/workflow.xml</critical>
|
||||
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
|
||||
|
||||
<workflow>
|
||||
|
||||
<step n="1" goal="Load all stories in scope">
|
||||
<action>Determine scan scope based on parameters:</action>
|
||||
|
||||
<check if="scan_scope == 'epic' AND epic_number provided">
|
||||
<action>Read {sprint_status}</action>
|
||||
<action>Filter stories starting with "{{epic_number}}-"</action>
|
||||
<action>Store as: stories_in_scope</action>
|
||||
<output>🔍 Scanning Epic {{epic_number}} stories for documented features...</output>
|
||||
</check>
|
||||
|
||||
<check if="scan_scope == 'sprint'">
|
||||
<action>Read {sprint_status}</action>
|
||||
<action>Get ALL story keys (exclude epics and retrospectives)</action>
|
||||
<action>Store as: stories_in_scope</action>
|
||||
<output>🔍 Scanning entire sprint for documented features...</output>
|
||||
</check>
|
||||
|
||||
<check if="scan_scope == 'codebase'">
|
||||
<action>Set stories_in_scope = ALL stories found in {sprint_artifacts}</action>
|
||||
<output>🔍 Scanning entire codebase for documented features...</output>
|
||||
</check>
|
||||
|
||||
<action>For each story in stories_in_scope:</action>
|
||||
<action> Read story file</action>
|
||||
<action> Extract documented artifacts:</action>
|
||||
<action> - File List (all paths mentioned)</action>
|
||||
<action> - Tasks (all file/component/service names mentioned)</action>
|
||||
<action> - ACs (all features/functionality mentioned)</action>
|
||||
<action> Store in: documented_artifacts[story_key] = {files, components, services, apis, features}</action>
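A minimal sketch of this extraction (assumes stories follow the standard BMAD section layout; the regexes and helper name are illustrative, not part of the workflow):

```javascript
// Illustrative only — naive extraction of documented artifacts from a story file.
function extractDocumentedArtifacts(storyMarkdown) {
  // Paths that look like source files, wherever they appear (File List, Tasks, ...)
  const files = [...storyMarkdown.matchAll(/[\w./-]+\.(?:tsx?|jsx?|vue|prisma|ya?ml|py)/g)]
    .map(match => match[0]);
  // Checkbox lines from Tasks and Acceptance Criteria
  const checklistLines = storyMarkdown
    .split('\n')
    .map(line => line.trim())
    .filter(line => /^- \[[ x~]\]/.test(line));
  return { files: [...new Set(files)], checklistLines };
}
```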
|
||||
|
||||
<output>
|
||||
✅ Loaded {{stories_in_scope.length}} stories
|
||||
📋 Documented artifacts extracted from {{total_sections}} sections
|
||||
</output>
|
||||
</step>
|
||||
|
||||
<step n="2" goal="Scan codebase for actual implementations">
|
||||
<output>
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
👻 SCANNING FOR GHOST FEATURES
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
Looking for: Components, APIs, Services, DB Tables, Models
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
</output>
|
||||
|
||||
<substep n="2a" title="Scan for React/Vue/Angular components">
|
||||
<check if="scan_for.components == true">
|
||||
<action>Use Glob to find component files:</action>
|
||||
<action> - **/*.component.{tsx,jsx,ts,js,vue} (Angular/Vue pattern)</action>
|
||||
<action> - **/components/**/*.{tsx,jsx} (React pattern)</action>
|
||||
<action> - **/src/**/*{Component,View,Screen,Page}.{tsx,jsx} (Named pattern)</action>
|
||||
|
||||
<action>For each found component file:</action>
|
||||
<action> Extract component name from filename or export</action>
|
||||
<action> Check file size (ignore <50 lines as trivial)</action>
|
||||
<action> Read file to determine if it's a significant feature</action>
|
||||
|
||||
<action>Store as: codebase_components = [{name, path, size, purpose}]</action>
|
||||
|
||||
<output>📦 Found {{codebase_components.length}} components</output>
|
||||
</check>
|
||||
</substep>
|
||||
|
||||
<substep n="2b" title="Scan for API endpoints">
|
||||
<check if="scan_for.api_endpoints == true">
|
||||
<action>Use Glob to find API files:</action>
|
||||
<action> - **/api/**/*.{ts,js} (Next.js/Express pattern)</action>
|
||||
<action> - **/*.controller.{ts,js} (NestJS pattern)</action>
|
||||
<action> - **/routes/**/*.{ts,js} (Generic routes)</action>
|
||||
|
||||
<action>Use Grep to find endpoint definitions:</action>
|
||||
<action> - @Get|@Post|@Put|@Delete decorators (NestJS)</action>
|
||||
<action> - export async function GET|POST|PUT|DELETE (Next.js App Router)</action>
|
||||
<action> - router.get|post|put|delete (Express)</action>
|
||||
<action> - app.route (Flask/FastAPI if Python)</action>
|
||||
|
||||
<action>For each endpoint found:</action>
|
||||
<action> Extract: HTTP method, path, handler name</action>
|
||||
<action> Read file to understand functionality</action>
|
||||
|
||||
<action>Store as: codebase_apis = [{method, path, handler, file}]</action>
|
||||
|
||||
<output>🌐 Found {{codebase_apis.length}} API endpoints</output>
|
||||
</check>
|
||||
</substep>
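As a concrete illustration, the endpoint definitions listed above could be matched with patterns along these lines (the regexes are illustrative and will need tuning per framework):

```javascript
// Illustrative regexes for the endpoint patterns listed in substep 2b.
const endpointPatterns = [
  /@(Get|Post|Put|Delete|Patch)\(/,                  // NestJS decorators
  /export async function (GET|POST|PUT|DELETE)/,     // Next.js App Router handlers
  /router\.(get|post|put|delete|patch)\(/,           // Express routers
  /@app\.route\(/,                                   // Flask-style routes
];
```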
|
||||
|
||||
<substep n="2c" title="Scan for database tables">
|
||||
<check if="scan_for.database_tables == true">
|
||||
<action>Use Glob to find schema files:</action>
|
||||
<action> - **/prisma/schema.prisma (Prisma)</action>
|
||||
<action> - **/*.entity.{ts,js} (TypeORM)</action>
|
||||
<action> - **/models/**/*.{ts,js} (Mongoose/Sequelize)</action>
|
||||
<action> - **/*-table.ts (Custom)</action>
|
||||
|
||||
<action>Use Grep to find table definitions:</action>
|
||||
<action> - model (Prisma)</action>
|
||||
<action> - @Entity (TypeORM)</action>
|
||||
<action> - createTable (Migrations)</action>
|
||||
|
||||
<action>For each table found:</action>
|
||||
<action> Extract: table name, columns, relationships</action>
|
||||
|
||||
<action>Store as: codebase_tables = [{name, file, columns}]</action>
|
||||
|
||||
<output>🗄️ Found {{codebase_tables.length}} database tables</output>
|
||||
</check>
|
||||
</substep>
|
||||
|
||||
<substep n="2d" title="Scan for services/modules">
|
||||
<check if="scan_for.services == true">
|
||||
<action>Use Glob to find service files:</action>
|
||||
<action> - **/*.service.{ts,js}</action>
|
||||
<action> - **/services/**/*.{ts,js}</action>
|
||||
<action> - **/*Service.{ts,js}</action>
|
||||
|
||||
<action>For each service found:</action>
|
||||
<action> Extract: service name, key methods, dependencies</action>
|
||||
<action> Ignore trivial services (<100 lines)</action>
|
||||
|
||||
<action>Store as: codebase_services = [{name, file, methods}]</action>
|
||||
|
||||
<output>⚙️ Found {{codebase_services.length}} services</output>
|
||||
</check>
|
||||
</substep>
|
||||
</step>
|
||||
|
||||
<step n="3" goal="Cross-reference codebase artifacts with stories">
|
||||
<output>
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
🔍 CROSS-REFERENCING CODEBASE ↔ STORIES
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
</output>
|
||||
|
||||
<action>Initialize: orphaned_features = []</action>
|
||||
|
||||
<substep n="3a" title="Check components">
|
||||
<iterate>For each component in codebase_components:</iterate>
|
||||
|
||||
<action>Search all stories for mentions of:</action>
|
||||
<action> - Component name in File Lists</action>
|
||||
<action> - Component name in Task descriptions</action>
|
||||
<action> - Component file path in File Lists</action>
|
||||
<action> - Feature described by component in ACs</action>
|
||||
|
||||
<check if="NO stories mention this component">
|
||||
<action>Add to orphaned_features:</action>
|
||||
<action>
|
||||
type: "component"
|
||||
name: {{component.name}}
|
||||
path: {{component.path}}
|
||||
size: {{component.size}} lines
|
||||
purpose: {{inferred_purpose_from_code}}
|
||||
severity: "HIGH" # Significant orphan
|
||||
</action>
|
||||
<output> 👻 ORPHAN: {{component.name}} ({{component.path}})</output>
|
||||
</check>
|
||||
|
||||
<check if="stories mention this component">
|
||||
<output> ✅ Documented: {{component.name}} → {{story_keys}}</output>
|
||||
</check>
|
||||
</substep>
|
||||
|
||||
<substep n="3b" title="Check API endpoints">
|
||||
<iterate>For each API in codebase_apis:</iterate>
|
||||
|
||||
<action>Search all stories for mentions of:</action>
|
||||
<action> - Endpoint path (e.g., "/api/users")</action>
|
||||
<action> - HTTP method + resource (e.g., "POST users")</action>
|
||||
<action> - Handler file in File Lists</action>
|
||||
<action> - API functionality in ACs (e.g., "Users can create account")</action>
|
||||
|
||||
<check if="NO stories mention this API">
|
||||
<action>Add to orphaned_features:</action>
|
||||
<action>
|
||||
type: "api"
|
||||
method: {{api.method}}
|
||||
path: {{api.path}}
|
||||
handler: {{api.handler}}
|
||||
file: {{api.file}}
|
||||
severity: "CRITICAL" # APIs are critical functionality
|
||||
</action>
|
||||
<output> 👻 ORPHAN: {{api.method}} {{api.path}} ({{api.file}})</output>
|
||||
</check>
|
||||
</substep>
|
||||
|
||||
<substep n="3c" title="Check database tables">
|
||||
<iterate>For each table in codebase_tables:</iterate>
|
||||
|
||||
<action>Search all stories for mentions of:</action>
|
||||
<action> - Table name</action>
|
||||
<action> - Migration file in File Lists</action>
|
||||
<action> - Data model in Tasks</action>
|
||||
|
||||
<check if="NO stories mention this table">
|
||||
<action>Add to orphaned_features:</action>
|
||||
<action>
|
||||
type: "database"
|
||||
name: {{table.name}}
|
||||
file: {{table.file}}
|
||||
columns: {{table.columns.length}}
|
||||
severity: "HIGH" # Database changes are significant
|
||||
</action>
|
||||
<output> 👻 ORPHAN: Table {{table.name}} ({{table.file}})</output>
|
||||
</check>
|
||||
</substep>
|
||||
|
||||
<substep n="3d" title="Check services">
|
||||
<iterate>For each service in codebase_services:</iterate>
|
||||
|
||||
<action>Search all stories for mentions of:</action>
|
||||
<action> - Service name or class name</action>
|
||||
<action> - Service file in File Lists</action>
|
||||
<action> - Service functionality in Tasks/ACs</action>
|
||||
|
||||
<check if="NO stories mention this service">
|
||||
<action>Add to orphaned_features:</action>
|
||||
<action>
|
||||
type: "service"
|
||||
name: {{service.name}}
|
||||
file: {{service.file}}
|
||||
methods: {{service.methods.length}}
|
||||
severity: "MEDIUM" # Services are business logic
|
||||
</action>
|
||||
<output> 👻 ORPHAN: {{service.name}} ({{service.file}})</output>
|
||||
</check>
|
||||
</substep>
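All four substeps share the same cross-reference pattern; a minimal sketch of that shared check (assumes naive case-insensitive substring matching against the artifacts extracted in Step 1 — real matching may need to be smarter):

```javascript
// Illustrative only — shared orphan check behind substeps 3a–3d.
function findOrphans(codebaseArtifacts, documentedArtifacts) {
  const storyTexts = Object.entries(documentedArtifacts).map(([storyKey, doc]) => ({
    storyKey,
    text: JSON.stringify(doc).toLowerCase(),
  }));

  return codebaseArtifacts.filter(artifact => {
    const needles = [artifact.name, artifact.path, artifact.file]
      .filter(Boolean)
      .map(value => value.toLowerCase());
    const mentions = storyTexts.filter(story =>
      needles.some(needle => story.text.includes(needle))
    );
    return mentions.length === 0; // no story mentions it → ghost feature
  });
}
```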
|
||||
|
||||
<output>
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
Cross-Reference Complete
|
||||
👻 Orphaned Features: {{orphaned_features.length}}
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
</output>
|
||||
</step>
|
||||
|
||||
<step n="4" goal="Analyze and categorize orphans">
|
||||
<action>Group orphans by type and severity:</action>
|
||||
<action>
|
||||
- critical_orphans (APIs, auth, payment)
|
||||
- high_orphans (Components, DB tables, services)
|
||||
- medium_orphans (Utilities, helpers)
|
||||
- low_orphans (Config files, constants)
|
||||
</action>
|
||||
|
||||
<action>Estimate complexity for each orphan:</action>
|
||||
<action> Based on file size, dependencies, test coverage</action>
|
||||
|
||||
<action>Suggest epic assignment based on functionality:</action>
|
||||
<action> - Auth components → Epic focusing on authentication</action>
|
||||
<action> - UI components → Epic focusing on frontend</action>
|
||||
<action> - API endpoints → Epic for that resource type</action>
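The coverage figures reported below follow directly from the scan totals; a small sketch (variable names are illustrative and mirror the lists built in Steps 2–3):

```javascript
// Illustrative only — coverage metrics shown in the summary output.
const totalArtifactsScanned =
  codebase_components.length + codebase_apis.length +
  codebase_tables.length + codebase_services.length;
const orphanRate = totalArtifactsScanned === 0
  ? 0
  : Math.round((orphaned_features.length / totalArtifactsScanned) * 100);
const documentedPct = 100 - orphanRate;
```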
|
||||
|
||||
<output>
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
👻 GHOST FEATURES DETECTED
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
**Total Orphans:** {{orphaned_features.length}}
|
||||
|
||||
**By Severity:**
|
||||
- 🔴 CRITICAL: {{critical_orphans.length}} (APIs, security-critical)
|
||||
- 🟠 HIGH: {{high_orphans.length}} (Components, DB, services)
|
||||
- 🟡 MEDIUM: {{medium_orphans.length}} (Utilities, helpers)
|
||||
- 🟢 LOW: {{low_orphans.length}} (Config, constants)
|
||||
|
||||
**By Type:**
|
||||
- Components: {{component_orphans.length}}
|
||||
- API Endpoints: {{api_orphans.length}}
|
||||
- Database Tables: {{db_orphans.length}}
|
||||
- Services: {{service_orphans.length}}
|
||||
- Other: {{other_orphans.length}}
|
||||
|
||||
---
|
||||
|
||||
**CRITICAL Orphans (Immediate Action Required):**
|
||||
{{#each critical_orphans}}
|
||||
{{@index + 1}}. **{{type | uppercase}}**: {{name}}
|
||||
File: {{file}}
|
||||
Purpose: {{inferred_purpose}}
|
||||
Risk: {{why_critical}}
|
||||
Suggested Epic: {{suggested_epic}}
|
||||
{{/each}}
|
||||
|
||||
---
|
||||
|
||||
**HIGH Priority Orphans:**
|
||||
{{#each high_orphans}}
|
||||
{{@index + 1}}. **{{type | uppercase}}**: {{name}}
|
||||
File: {{file}}
|
||||
Size: {{size}} lines / {{complexity}} complexity
|
||||
Suggested Epic: {{suggested_epic}}
|
||||
{{/each}}
|
||||
|
||||
---
|
||||
|
||||
**Detection Confidence:**
|
||||
- Artifacts scanned: {{total_artifacts_scanned}}
|
||||
- Stories cross-referenced: {{stories_in_scope.length}}
|
||||
- Documentation coverage: {{documented_pct}}%
|
||||
- Orphan rate: {{orphan_rate}}%
|
||||
|
||||
{{#if orphan_rate > 20}}
|
||||
⚠️ **HIGH ORPHAN RATE** - Over 20% of codebase is undocumented!
|
||||
Recommend: Comprehensive backfill story creation session
|
||||
{{/if}}
|
||||
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
</output>
|
||||
</step>
|
||||
|
||||
<step n="5" goal="Propose backfill stories">
|
||||
<check if="create_backfill_stories == false">
|
||||
<output>
|
||||
Backfill story creation disabled. To create stories for orphans, run:
|
||||
/detect-ghost-features create_backfill_stories=true
|
||||
</output>
|
||||
<action>Jump to Step 7 (Generate Report)</action>
|
||||
</check>
|
||||
|
||||
<check if="orphaned_features.length == 0">
|
||||
<output>✅ No orphans found - all code is documented in stories!</output>
|
||||
<action>Jump to Step 7</action>
|
||||
</check>
|
||||
|
||||
<output>
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
📝 PROPOSING BACKFILL STORIES ({{orphaned_features.length}})
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
</output>
|
||||
|
||||
<iterate>For each orphaned feature (prioritized by severity):</iterate>
|
||||
|
||||
<substep n="5a" title="Generate backfill story draft">
|
||||
<action>Analyze orphan to understand functionality:</action>
|
||||
<action> - Read implementation code</action>
|
||||
<action> - Identify dependencies and related files</action>
|
||||
<action> - Determine what it does (infer from code)</action>
|
||||
<action> - Find tests (if any) to understand use cases</action>
|
||||
|
||||
<action>Generate story draft:</action>
|
||||
<action>
|
||||
Story Title: "Document existing {{name}} {{type}}"
|
||||
|
||||
Story Description:
|
||||
This is a BACKFILL STORY documenting existing functionality found in the codebase
|
||||
that was not tracked in any story (likely vibe-coded or manually added).
|
||||
|
||||
Business Context:
|
||||
{{inferred_business_purpose_from_code}}
|
||||
|
||||
Current State:
|
||||
✅ **Implementation EXISTS:** {{file}}
|
||||
- {{description_of_what_it_does}}
|
||||
- {{key_features_or_methods}}
|
||||
{{#if has_tests}}✅ Tests exist: {{test_files}}{{else}}❌ No tests found{{/if}}
|
||||
|
||||
Acceptance Criteria:
|
||||
{{#each inferred_acs_from_code}}
|
||||
- [ ] {{this}}
|
||||
{{/each}}
|
||||
|
||||
Tasks:
|
||||
- [x] {{name}} implementation (ALREADY EXISTS - {{file}})
|
||||
{{#if missing_tests}}- [ ] Add tests for {{name}}{{/if}}
|
||||
{{#if missing_docs}}- [ ] Add documentation for {{name}}{{/if}}
|
||||
- [ ] Verify functionality works as expected
|
||||
- [ ] Add to relevant epic or create new epic for backfills
|
||||
|
||||
Definition of Done:
|
||||
- [x] Implementation exists and works
|
||||
{{#if has_tests}}- [x] Tests exist{{else}}- [ ] Tests added{{/if}}
|
||||
- [ ] Documented in story (this story)
|
||||
- [ ] Assigned to appropriate epic
|
||||
|
||||
Story Type: BACKFILL (documenting existing code)
|
||||
</action>
|
||||
|
||||
<output>
|
||||
📄 Generated backfill story draft for: {{name}}
|
||||
|
||||
{{story_draft_preview}}
|
||||
|
||||
---
|
||||
</output>
|
||||
</substep>
|
||||
|
||||
<substep n="5b" title="Ask user if they want to create this backfill story">
|
||||
<check if="auto_create == true">
|
||||
<action>Create backfill story automatically</action>
|
||||
<output>✅ Auto-created: {{story_filename}}</output>
|
||||
</check>
|
||||
|
||||
<check if="auto_create == false">
|
||||
<ask>
|
||||
Create backfill story for {{name}}?
|
||||
|
||||
**Type:** {{type}}
|
||||
**File:** {{file}}
|
||||
**Suggested Epic:** {{suggested_epic}}
|
||||
**Complexity:** {{complexity_estimate}}
|
||||
|
||||
[Y] Yes - Create this backfill story
|
||||
[A] Auto - Create this and all remaining backfill stories
|
||||
[E] Edit - Let me adjust the story draft first
|
||||
[S] Skip - Don't create story for this orphan
|
||||
[H] Halt - Stop backfill story creation
|
||||
|
||||
Your choice:
|
||||
</ask>
|
||||
|
||||
<check if="choice == 'Y'">
|
||||
<action>Create backfill story file: {sprint_artifacts}/backfill-{{type}}-{{name}}.md</action>
|
||||
<action>Add to backfill_stories_created list</action>
|
||||
<output>✅ Created: {{story_filename}}</output>
|
||||
</check>
|
||||
|
||||
<check if="choice == 'A'">
|
||||
<action>Set auto_create = true</action>
|
||||
<action>Create this story and auto-create remaining</action>
|
||||
</check>
|
||||
|
||||
<check if="choice == 'E'">
|
||||
<ask>Provide your adjusted story content or instructions for modifications:</ask>
|
||||
<action>Apply user's edits to story draft</action>
|
||||
<action>Create modified backfill story</action>
|
||||
</check>
|
||||
|
||||
<check if="choice == 'S'">
|
||||
<action>Add to skipped_backfills list</action>
|
||||
<output>⏭️ Skipped</output>
|
||||
</check>
|
||||
|
||||
<check if="choice == 'H'">
|
||||
<action>Exit backfill story creation loop</action>
|
||||
<action>Jump to Step 6</action>
|
||||
</check>
|
||||
</check>
|
||||
</substep>
|
||||
|
||||
<check if="add_to_sprint_status AND backfill_stories_created.length > 0">
|
||||
<action>Load {sprint_status} file</action>
|
||||
|
||||
<iterate>For each created backfill story:</iterate>
|
||||
<action> Add entry: {{backfill_story_key}}: backlog # BACKFILL - documents existing {{name}}</action>
|
||||
|
||||
<action>Save sprint-status.yaml</action>
|
||||
|
||||
<output>✅ Added {{backfill_stories_created.length}} backfill stories to sprint-status.yaml</output>
|
||||
</check>
|
||||
</step>
|
||||
|
||||
<step n="6" goal="Suggest epic organization for orphans">
|
||||
<check if="backfill_stories_created.length > 0">
|
||||
<output>
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
📊 BACKFILL STORY ORGANIZATION
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
</output>
|
||||
|
||||
<action>Group backfill stories by suggested epic:</action>
|
||||
|
||||
<iterate>For each suggested_epic:</iterate>
|
||||
<output>
|
||||
**{{suggested_epic}}:**
|
||||
{{#each backfill_stories_for_epic}}
|
||||
- {{story_key}}: {{name}} ({{type}})
|
||||
{{/each}}
|
||||
</output>
|
||||
|
||||
<output>
|
||||
---
|
||||
|
||||
**Recommendations:**
|
||||
|
||||
1. **Option A: Create "Epic-Backfill" for all orphans**
|
||||
- Single epic containing all backfill stories
|
||||
- Easy to track undocumented code
|
||||
- Clear separation from feature work
|
||||
|
||||
2. **Option B: Distribute to existing epics**
|
||||
- Add each backfill story to its logical epic
|
||||
- Better thematic grouping
|
||||
- May inflate epic story counts
|
||||
|
||||
3. **Option C: Leave in backlog**
|
||||
- Don't assign to epics yet
|
||||
- Review and assign during next planning
|
||||
|
||||
**Your choice:**
|
||||
[A] Create Epic-Backfill (recommended)
|
||||
[B] Distribute to existing epics
|
||||
[C] Leave in backlog for manual assignment
|
||||
[S] Skip epic assignment
|
||||
</output>
|
||||
|
||||
<ask>How should backfill stories be organized?</ask>
|
||||
|
||||
<check if="choice == 'A'">
|
||||
<action>Create epic-backfill.md in epics directory</action>
|
||||
<action>Update sprint-status.yaml with epic-backfill entry</action>
|
||||
<action>Assign all backfill stories to epic-backfill</action>
|
||||
</check>
|
||||
|
||||
<check if="choice == 'B'">
|
||||
<iterate>For each backfill story:</iterate>
|
||||
<action> Assign to suggested_epic in sprint-status.yaml</action>
|
||||
<action> Update story_key to match epic (e.g., 2-11-backfill-userprofile)</action>
|
||||
</check>
|
||||
|
||||
<check if="choice == 'C' OR choice == 'S'">
|
||||
<action>Leave stories in backlog</action>
|
||||
</check>
|
||||
</check>
|
||||
</step>
|
||||
|
||||
<step n="7" goal="Generate comprehensive report">
|
||||
<check if="create_report == true">
|
||||
<action>Write report to: {sprint_artifacts}/ghost-features-report-{{timestamp}}.md</action>
|
||||
|
||||
<action>Report structure:</action>
|
||||
<action>
|
||||
# Ghost Features Report (Reverse Gap Analysis)
|
||||
|
||||
**Generated:** {{timestamp}}
|
||||
**Scope:** {{scan_scope}} {{#if epic_number}}(Epic {{epic_number}}){{/if}}
|
||||
|
||||
## Executive Summary
|
||||
|
||||
**Codebase Artifacts Scanned:** {{total_artifacts_scanned}}
|
||||
**Stories Cross-Referenced:** {{stories_in_scope.length}}
|
||||
**Orphaned Features Found:** {{orphaned_features.length}}
|
||||
**Documentation Coverage:** {{documented_pct}}%
|
||||
**Backfill Stories Created:** {{backfill_stories_created.length}}
|
||||
|
||||
## Orphaned Features Detail
|
||||
|
||||
### CRITICAL Orphans ({{critical_orphans.length}})
|
||||
[Full list with files, purposes, risks]
|
||||
|
||||
### HIGH Priority Orphans ({{high_orphans.length}})
|
||||
[Full list]
|
||||
|
||||
### MEDIUM Priority Orphans ({{medium_orphans.length}})
|
||||
[Full list]
|
||||
|
||||
## Backfill Stories Created
|
||||
|
||||
{{#each backfill_stories_created}}
|
||||
- {{story_key}}: {{story_file}}
|
||||
{{/each}}
|
||||
|
||||
## Recommendations
|
||||
|
||||
[Epic assignment suggestions, next steps]
|
||||
|
||||
## Appendix: Scan Methodology
|
||||
|
||||
[How detection worked, patterns used, confidence levels]
|
||||
</action>
|
||||
|
||||
<output>📄 Full report: {{report_path}}</output>
|
||||
</check>
|
||||
</step>
|
||||
|
||||
<step n="8" goal="Final summary and next steps">
|
||||
<output>
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
✅ GHOST FEATURE DETECTION COMPLETE
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
**Scan Scope:** {{scan_scope}} {{#if epic_number}}(Epic {{epic_number}}){{/if}}
|
||||
|
||||
**Results:**
|
||||
- 👻 Orphaned Features: {{orphaned_features.length}}
|
||||
- 📝 Backfill Stories Created: {{backfill_stories_created.length}}
|
||||
- ⏭️ Skipped: {{skipped_backfills.length}}
|
||||
- 📊 Documentation Coverage: {{documented_pct}}%
|
||||
|
||||
{{#if orphaned_features.length == 0}}
|
||||
✅ **EXCELLENT!** All code is documented in stories.
|
||||
Your codebase and story backlog are in perfect sync.
|
||||
{{/if}}
|
||||
|
||||
{{#if orphaned_features.length > 0 AND backfill_stories_created.length == 0}}
|
||||
**Action Required:**
|
||||
Run with create_backfill_stories=true to generate stories for orphans
|
||||
{{/if}}
|
||||
|
||||
{{#if backfill_stories_created.length > 0}}
|
||||
**Next Steps:**
|
||||
|
||||
1. **Review backfill stories** - Check generated stories for accuracy
|
||||
2. **Assign to epics** - Organize backfills (or create Epic-Backfill)
|
||||
3. **Update sprint-status.yaml** - Already updated with {{backfill_stories_created.length}} new entries
|
||||
4. **Prioritize** - Decide when to implement tests/docs for orphans
|
||||
5. **Run revalidation** - Verify orphans work as expected
|
||||
|
||||
**Quick Commands:**
|
||||
```bash
|
||||
# Revalidate a backfill story to verify functionality
|
||||
/revalidate-story story_file={{backfill_stories_created[0].file}}
|
||||
|
||||
# Process backfill stories (add tests/docs)
|
||||
/batch-super-dev filter_by_epic=backfill
|
||||
```
|
||||
{{/if}}
|
||||
|
||||
{{#if create_report}}
|
||||
**Detailed Report:** {{report_path}}
|
||||
{{/if}}
|
||||
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
|
||||
💡 **Pro Tip:** Run this periodically (e.g., end of each sprint) to catch
|
||||
vibe-coded features before they become maintenance nightmares.
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
</output>
|
||||
</step>
|
||||
|
||||
</workflow>
|
||||
|
|
@ -0,0 +1,56 @@
|
|||
name: detect-ghost-features
|
||||
description: "Reverse gap analysis: Find functionality in codebase that has no corresponding story (vibe-coded or undocumented features). Propose backfill stories."
|
||||
author: "BMad"
|
||||
version: "1.0.0" # Who you gonna call? GHOST-FEATURE-BUSTERS! 👻
|
||||
|
||||
# Critical variables from config
|
||||
config_source: "{project-root}/_bmad/bmm/config.yaml"
|
||||
output_folder: "{config_source}:output_folder"
|
||||
sprint_artifacts: "{output_folder}/sprint-artifacts"
|
||||
sprint_status: "{output_folder}/sprint-status.yaml"
|
||||
|
||||
# Input parameters
|
||||
epic_number: "{epic_number}" # Optional: Limit to specific epic (e.g., "2")
|
||||
scan_scope: "sprint" # "sprint" | "epic" | "codebase"
|
||||
create_backfill_stories: true # Propose backfill stories for orphans
|
||||
|
||||
# Detection settings
|
||||
detection:
|
||||
scan_for:
|
||||
components: true # React/Vue/Angular components
|
||||
api_endpoints: true # Routes, controllers, handlers
|
||||
database_tables: true # Migrations, schema
|
||||
services: true # Services, modules, utilities
|
||||
models: true # Data models, entities
|
||||
ui_pages: true # Pages, screens, views
|
||||
|
||||
ignore_patterns:
|
||||
- "**/node_modules/**"
|
||||
- "**/dist/**"
|
||||
- "**/build/**"
|
||||
- "**/*.test.*"
|
||||
- "**/*.spec.*"
|
||||
- "**/migrations/**" # Migrations are referenced collectively, not per-story
|
||||
|
||||
# What counts as "documented"?
|
||||
documented_if:
|
||||
mentioned_in_file_list: true # Story File List mentions it
|
||||
mentioned_in_tasks: true # Task description mentions it
|
||||
mentioned_in_acs: true # AC mentions the feature
|
||||
file_committed_in_story_commit: true # Git history shows it in story commit
|
||||
|
||||
# Backfill story settings
|
||||
backfill:
|
||||
auto_create: false # Require confirmation before creating each
|
||||
add_to_sprint_status: true # Add to sprint as "backlog"
|
||||
mark_as_backfill: true # Add note: "Backfill story documenting existing code"
|
||||
run_gap_analysis: false # Don't run gap (we know it exists)
|
||||
estimate_effort: true # Estimate how complex the feature is
|
||||
|
||||
# Output settings
|
||||
output:
|
||||
create_report: true # Generate orphaned-features-report.md
|
||||
group_by_category: true # Group by component/api/db/etc
|
||||
suggest_epic_assignment: true # Suggest which epic orphans belong to
|
||||
|
||||
standalone: true
|
||||
|
|
@ -0,0 +1,743 @@
|
|||
# Migration Reliability Guarantees
|
||||
|
||||
**Purpose:** Document how this migration tool ensures 100% reliability and data integrity.
|
||||
|
||||
---
|
||||
|
||||
## Core Guarantees
|
||||
|
||||
### 1. **Idempotent Operations** ✅
|
||||
|
||||
**Guarantee:** Running migration multiple times produces the same result as running once.
|
||||
|
||||
**How:**
|
||||
```javascript
|
||||
// Before creating issue, check if it exists
|
||||
const existing = await searchIssue(`label:story:${storyKey}`);
|
||||
|
||||
if (existing) {
|
||||
if (update_existing) {
|
||||
// Update existing issue (safe)
|
||||
await updateIssue(existing.number, data);
|
||||
} else {
|
||||
// Skip (already migrated)
|
||||
skip(storyKey);
|
||||
}
|
||||
} else {
|
||||
// Create new issue
|
||||
await createIssue(data);
|
||||
}
|
||||
```
|
||||
|
||||
**Test:**
|
||||
```bash
|
||||
# Run migration twice
|
||||
/migrate-to-github mode=execute
|
||||
/migrate-to-github mode=execute
|
||||
|
||||
# Result: Same issues, no duplicates
|
||||
# Second run: "47 stories already migrated, 0 created"
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### 2. **Atomic Per-Story Operations** ✅
|
||||
|
||||
**Guarantee:** Each story either fully migrates or fully rolls back. No partial states.
|
||||
|
||||
**How:**
|
||||
```javascript
|
||||
async function migrateStory(storyKey) {
|
||||
const transaction = {
|
||||
story_key: storyKey,
|
||||
operations: [],
|
||||
rollback_actions: []
|
||||
};
|
||||
|
||||
try {
|
||||
// Create issue
|
||||
const issue = await createIssue(data);
|
||||
transaction.operations.push({ type: 'create', issue_number: issue.number });
|
||||
transaction.rollback_actions.push(() => closeIssue(issue.number));
|
||||
|
||||
// Add labels
|
||||
await addLabels(issue.number, labels);
|
||||
transaction.operations.push({ type: 'labels' });
|
||||
|
||||
// Set milestone
|
||||
await setMilestone(issue.number, milestone);
|
||||
transaction.operations.push({ type: 'milestone' });
|
||||
|
||||
// Verify all operations succeeded
|
||||
await verifyIssue(issue.number);
|
||||
|
||||
// Success - commit transaction
|
||||
return { success: true, issue_number: issue.number };
|
||||
|
||||
} catch (error) {
|
||||
// Rollback all operations
|
||||
for (const rollback of transaction.rollback_actions.reverse()) {
|
||||
await rollback();
|
||||
}
|
||||
|
||||
return { success: false, error, rolled_back: true };
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### 3. **Comprehensive Verification** ✅
|
||||
|
||||
**Guarantee:** Every write is verified by reading back the data.
|
||||
|
||||
**How:**
|
||||
```javascript
|
||||
// Write-Verify pattern
|
||||
async function createIssueVerified(data) {
|
||||
// 1. Create
|
||||
const created = await mcp__github__issue_write({ ...data });
|
||||
const issue_number = created.number;
|
||||
|
||||
// 2. Wait for GitHub eventual consistency
|
||||
await sleep(1000);
|
||||
|
||||
// 3. Read back
|
||||
const verification = await mcp__github__issue_read({
|
||||
issue_number: issue_number
|
||||
});
|
||||
|
||||
// 4. Verify fields
|
||||
assert(verification.title === data.title, 'Title mismatch');
|
||||
assert(verification.labels.includes(data.labels[0]), 'Label missing');
|
||||
assert(verification.body.includes(data.body.substring(0, 50)), 'Body mismatch');
|
||||
|
||||
// 5. Return verified issue
|
||||
return { verified: true, issue_number };
|
||||
}
|
||||
```
|
||||
|
||||
**Detection time:**
|
||||
- Write succeeds but data wrong: **Detected immediately** (1s after write)
|
||||
- Write fails silently: **Detected immediately** (read-back fails)
|
||||
- Partial write: **Detected immediately** (field mismatch)
|
||||
|
||||
---
|
||||
|
||||
### 4. **Crash-Safe State Tracking** ✅
|
||||
|
||||
**Guarantee:** If migration crashes/halts, can resume from exactly where it stopped.
|
||||
|
||||
**How:**
|
||||
```yaml
|
||||
# migration-state.yaml (updated after EACH story)
|
||||
started_at: 2026-01-07T15:30:00Z
|
||||
mode: execute
|
||||
github_owner: jschulte
|
||||
github_repo: myproject
|
||||
total_stories: 47
|
||||
last_completed: "2-15-profile-edit" # Story that just finished
|
||||
stories_migrated:
|
||||
- story_key: "2-1-login"
|
||||
issue_number: 101
|
||||
timestamp: 2026-01-07T15:30:15Z
|
||||
- story_key: "2-2-signup"
|
||||
issue_number: 102
|
||||
timestamp: 2026-01-07T15:30:32Z
|
||||
# ... 13 more
|
||||
- story_key: "2-15-profile-edit"
|
||||
issue_number: 115
|
||||
timestamp: 2026-01-07T15:35:18Z
|
||||
# CRASH HAPPENS HERE
|
||||
```
|
||||
|
||||
**Resume:**
|
||||
```bash
|
||||
# After crash, re-run migration
|
||||
/migrate-to-github mode=execute
|
||||
|
||||
→ Detects state file
|
||||
→ "Previous migration detected - 15 stories already migrated"
|
||||
→ "Resume from story 2-16-password-reset? (yes)"
|
||||
→ Continues from story 16, skips 1-15
|
||||
```
|
||||
|
||||
**State file is atomic:**
|
||||
- Written after EACH story (not at end)
|
||||
- Uses atomic write (tmp file + rename)
|
||||
- Never corrupted even if process killed mid-write
|
||||
|
||||
---
|
||||
|
||||
### 5. **Exponential Backoff Retry** ✅
|
||||
|
||||
**Guarantee:** Transient failures (network blips, GitHub 503s) don't fail migration.
|
||||
|
||||
**How:**
|
||||
```javascript
|
||||
async function retryWithBackoff(operation, config) {
|
||||
const backoffs = config.retry_backoff_ms; // [1000, 3000, 9000]
|
||||
|
||||
for (let attempt = 0; attempt <= backoffs.length; attempt++) { // initial attempt + one retry per backoff
|
||||
try {
|
||||
return await operation();
|
||||
} catch (error) {
|
||||
if (attempt < backoffs.length) {
|
||||
console.warn(`Retry ${attempt + 1} after ${backoffs[attempt]}ms`);
|
||||
await sleep(backoffs[attempt]);
|
||||
} else {
|
||||
// All retries exhausted
|
||||
throw error;
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Example:**
|
||||
```
|
||||
Story 2-5 migration:
|
||||
Attempt 1: GitHub 503 Service Unavailable
|
||||
→ Wait 1s, retry
|
||||
Attempt 2: Network timeout
|
||||
→ Wait 3s, retry
|
||||
Attempt 3: Success ✅
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### 6. **Rollback Manifest** ✅
|
||||
|
||||
**Guarantee:** Can undo migration if something goes wrong.
|
||||
|
||||
**How:**
|
||||
```yaml
|
||||
# migration-rollback-2026-01-07T15-30-00.yaml
|
||||
created_at: 2026-01-07T15:30:00Z
|
||||
github_owner: jschulte
|
||||
github_repo: myproject
|
||||
migration_mode: execute
|
||||
|
||||
created_issues:
|
||||
- story_key: "2-1-login"
|
||||
issue_number: 101
|
||||
created_at: 2026-01-07T15:30:15Z
|
||||
title: "Story 2-1: User Login Flow"
|
||||
url: "https://github.com/jschulte/myproject/issues/101"
|
||||
|
||||
- story_key: "2-2-signup"
|
||||
issue_number: 102
|
||||
created_at: 2026-01-07T15:30:32Z
|
||||
title: "Story 2-2: User Registration"
|
||||
url: "https://github.com/jschulte/myproject/issues/102"
|
||||
|
||||
# ... all created issues tracked
|
||||
|
||||
rollback_command: |
|
||||
/migrate-to-github mode=rollback manifest=migration-rollback-2026-01-07T15-30-00.yaml
|
||||
```
|
||||
|
||||
**Rollback execution:**
|
||||
- Closes all created issues
|
||||
- Adds "migrated:rolled-back" label
|
||||
- Adds comment explaining why closed
|
||||
- Preserves issues (can reopen if needed)
|
||||
|
||||
---
|
||||
|
||||
### 7. **Dry-Run Mode** ✅
|
||||
|
||||
**Guarantee:** See exactly what will happen before it happens.
|
||||
|
||||
**How:**
|
||||
```javascript
|
||||
if (mode === 'dry-run') {
|
||||
// NO writes to GitHub - only reads
|
||||
for (const story of stories) {
|
||||
const existing = await searchIssue(`story:${story.key}`);
|
||||
|
||||
if (existing) {
|
||||
console.log(`Would UPDATE: Issue #${existing.number}`);
|
||||
} else {
|
||||
console.log(`Would CREATE: New issue for ${story.key}`);
|
||||
console.log(` Title: ${generateTitle(story)}`);
|
||||
console.log(` Labels: ${generateLabels(story)}`);
|
||||
}
|
||||
}
|
||||
|
||||
// Show summary
|
||||
console.log(`
|
||||
Total: ${stories.length}
|
||||
Would create: ${wouldCreate.length}
|
||||
Would update: ${wouldUpdate.length}
|
||||
Would skip: ${wouldSkip.length}
|
||||
`);
|
||||
|
||||
// Exit without doing anything
|
||||
process.exit(0);
|
||||
}
|
||||
```
|
||||
|
||||
**Usage:**
|
||||
```bash
|
||||
# Always run dry-run first
|
||||
/migrate-to-github mode=dry-run
|
||||
|
||||
# Review output, then execute
|
||||
/migrate-to-github mode=execute
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### 8. **Halt on Critical Error** ✅
|
||||
|
||||
**Guarantee:** Never continue with corrupted/incomplete state.
|
||||
|
||||
**How:**
|
||||
```javascript
|
||||
try {
|
||||
await createIssue(storyData);
|
||||
} catch (error) {
|
||||
if (isCriticalError(error)) {
|
||||
// Critical: GitHub API returned 401/403/5xx
|
||||
console.error('CRITICAL ERROR: Cannot continue safely');
|
||||
console.error(`Story ${storyKey} failed: ${error}`);
|
||||
|
||||
// Save current state
|
||||
await saveState(migrationState);
|
||||
|
||||
// Create recovery instructions
|
||||
console.log(`
|
||||
Recovery options:
|
||||
1. Fix error: ${error.message}
|
||||
2. Resume migration: /migrate-to-github mode=execute (will skip completed stories)
|
||||
3. Rollback: /migrate-to-github mode=rollback
|
||||
`);
|
||||
|
||||
// HALT - do not continue
|
||||
process.exit(1);
|
||||
} else {
|
||||
// Non-critical: Individual story failed but can continue
|
||||
console.warn(`Story ${storyKey} failed (non-critical): ${error}`);
|
||||
failedStories.push({ storyKey, error });
|
||||
// Continue with next story
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Testing Reliability
|
||||
|
||||
### Test Suite
|
||||
|
||||
```javascript
|
||||
describe('Migration Reliability', () => {
|
||||
|
||||
it('is idempotent - can run twice safely', async () => {
|
||||
await migrate({ mode: 'execute' });
|
||||
const firstRun = getCreatedIssues();
|
||||
|
||||
await migrate({ mode: 'execute' }); // Run again
|
||||
const secondRun = getCreatedIssues();
|
||||
|
||||
expect(secondRun).toEqual(firstRun); // Same issues, no duplicates
|
||||
});
|
||||
|
||||
it('is atomic - failed story does not create partial issue', async () => {
|
||||
mockGitHub.createIssue.resolves({ number: 101 }); // Create succeeds
|
||||
mockGitHub.addLabels.rejects(); // But adding labels fails
|
||||
|
||||
await migrate({ mode: 'execute' });
|
||||
|
||||
const issues = await searchAllIssues();
|
||||
const partialIssues = issues.filter(i => !i.labels.some(l => l.startsWith('story:')));
|
||||
|
||||
expect(partialIssues).toHaveLength(0); // No partial issues
|
||||
});
|
||||
|
||||
it('verifies all writes by reading back', async () => {
|
||||
mockGitHub.createIssue.resolves({ number: 101 });
|
||||
mockGitHub.readIssue.resolves({ title: 'WRONG TITLE' }); // Verification fails
|
||||
|
||||
await expect(migrate({ mode: 'execute' }))
|
||||
.rejects.toThrow('Write verification failed');
|
||||
});
|
||||
|
||||
it('can resume after crash', async () => {
|
||||
// Migrate 5 stories
|
||||
await migrate({ stories: stories.slice(0, 5) });
|
||||
|
||||
// Simulate crash (don't await)
|
||||
const promise = migrate({ stories: stories.slice(5, 10) });
|
||||
await sleep(2000);
|
||||
process.kill(process.pid); // Crash mid-migration
|
||||
|
||||
// Resume
|
||||
const resumed = await migrate({ mode: 'execute' });
|
||||
|
||||
expect(resumed.resumedFrom).toBe('2-5-story');
|
||||
expect(resumed.skipped).toBe(5); // Skipped already-migrated
|
||||
});
|
||||
|
||||
it('creates rollback manifest', async () => {
|
||||
await migrate({ mode: 'execute' });
|
||||
|
||||
const manifestPath = glob.sync('migration-rollback-*.yaml')[0];
const manifest = yaml.parse(fs.readFileSync(manifestPath, 'utf-8'));
|
||||
expect(manifest.created_issues).toHaveLength(47);
|
||||
expect(manifest.created_issues[0]).toHaveProperty('issue_number');
|
||||
});
|
||||
|
||||
it('can rollback migration', async () => {
|
||||
await migrate({ mode: 'execute' });
|
||||
const issuesBefore = await countIssues();
|
||||
|
||||
await migrate({ mode: 'rollback' });
|
||||
const issuesAfter = await countIssues({ state: 'open' });
|
||||
|
||||
expect(issuesAfter).toBeLessThan(issuesBefore);
|
||||
// Rolled-back issues are closed, not deleted
|
||||
});
|
||||
|
||||
it('handles rate limit gracefully', async () => {
|
||||
mockGitHub.createIssue.rejects({ status: 429, message: 'Rate limit exceeded' });
|
||||
|
||||
const result = await migrate({ mode: 'execute', halt_on_critical_error: false });
|
||||
|
||||
expect(result.rateLimitErrors).toBeGreaterThan(0);
|
||||
expect(result.savedState).toBeTruthy(); // State saved before halting
|
||||
});
|
||||
});
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Failure Recovery Procedures
|
||||
|
||||
### Scenario 1: Migration Fails Halfway
|
||||
|
||||
```bash
|
||||
# Migration was running, crashed/halted at story 15/47
|
||||
|
||||
# Check state file
|
||||
cat _bmad-output/migration-state.yaml
|
||||
# Shows: last_completed: "2-15-profile"
|
||||
|
||||
# Resume migration
|
||||
/migrate-to-github mode=execute
|
||||
|
||||
→ "Previous migration detected"
|
||||
→ "15 stories already migrated"
|
||||
→ "Resume from story 2-16? (yes)"
|
||||
→ Continues from story 16-47
|
||||
→ Creates 32 new issues
|
||||
→ Final: 47 total migrated ✅
|
||||
```
|
||||
|
||||
### Scenario 2: Created Issues but Verification Failed
|
||||
|
||||
```bash
|
||||
# Migration created issues but verification warnings
|
||||
|
||||
# Run verify mode
|
||||
/migrate-to-github mode=verify
|
||||
|
||||
→ Checks all 47 stories
|
||||
→ Reads each issue from GitHub
|
||||
→ Compares to local files
|
||||
→ Reports:
|
||||
"43 verified correct ✅"
|
||||
"4 have warnings ⚠️"
|
||||
- Story 2-5: Label missing "complexity:standard"
|
||||
- Story 2-10: Title doesn't match local file
|
||||
- Story 2-18: Milestone not set
|
||||
- Story 2-23: Acceptance Criteria count mismatch
|
||||
|
||||
# Fix issues
|
||||
/migrate-to-github mode=execute update_existing=true filter_by_status=warning
|
||||
|
||||
→ Re-migrates only the 4 with warnings
|
||||
→ Verification: "4/4 now verified correct ✅"
|
||||
```
|
||||
|
||||
### Scenario 3: Wrong Repository - Need to Rollback
|
||||
|
||||
```bash
|
||||
# Oops - migrated to wrong repo!
|
||||
|
||||
# Check what was created
|
||||
cat _bmad-output/migration-rollback-*.yaml
|
||||
# Shows: 47 issues created in wrong-repo
|
||||
|
||||
# Rollback
|
||||
/migrate-to-github mode=rollback
|
||||
|
||||
→ "Rollback manifest found: 47 issues"
|
||||
→ Type "DELETE ALL ISSUES" to confirm
|
||||
→ Closes all 47 issues
|
||||
→ Adds "migrated:rolled-back" label
|
||||
→ "Rollback complete ✅"
|
||||
|
||||
# Now migrate to correct repo
|
||||
/migrate-to-github mode=execute github_owner=jschulte github_repo=correct-repo
|
||||
```
|
||||
|
||||
### Scenario 4: Network Failure Mid-Migration
|
||||
|
||||
```bash
|
||||
# Migration running, network drops at story 23/47
|
||||
|
||||
# Automatic behavior:
|
||||
→ Story 23 fails to create (network timeout)
|
||||
→ Retry #1 after 1s: Still fails
|
||||
→ Retry #2 after 3s: Still fails
|
||||
→ Retry #3 after 9s: Still fails
|
||||
→ "CRITICAL: Cannot create issue for story 2-23 after 3 retries"
|
||||
→ Saves state (22 stories migrated)
|
||||
→ HALTS
|
||||
|
||||
# You see:
|
||||
"Migration halted at story 2-23 due to network error"
|
||||
"State saved: 22 stories successfully migrated"
|
||||
"Resume when network restored: /migrate-to-github mode=execute"
|
||||
|
||||
# After network restored:
|
||||
/migrate-to-github mode=execute
|
||||
|
||||
→ "Resuming from story 2-23"
|
||||
→ Continues 23-47
|
||||
→ "Migration complete: 47/47 migrated ✅"
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Data Integrity Safeguards
|
||||
|
||||
### Safeguard #1: GitHub is Append-Only
|
||||
|
||||
**Design:** Migration never deletes data, only creates/updates.
|
||||
|
||||
- Create: Safe (adds new issue)
|
||||
- Update: Safe (modifies existing)
|
||||
- Delete: Only in explicit rollback mode
|
||||
|
||||
**Result:** Cannot accidentally lose data during migration.
|
||||
|
||||
### Safeguard #2: Local Files Untouched
|
||||
|
||||
**Design:** Migration reads local files but NEVER modifies them.
|
||||
|
||||
**Guarantee:**
|
||||
```javascript
|
||||
// Migration code
|
||||
const story = fs.readFileSync(storyFile, 'utf-8'); // READ ONLY
|
||||
|
||||
// ❌ This never happens:
|
||||
// fs.writeFileSync(storyFile, modified); // FORBIDDEN
|
||||
```
|
||||
|
||||
**Result:** If migration fails, local files are unchanged. Can retry safely.
|
||||
|
||||
### Safeguard #3: Duplicate Detection
|
||||
|
||||
**Design:** Check for existing issues before creating.
|
||||
|
||||
```javascript
|
||||
// Before creating
|
||||
const existing = await searchIssues({
|
||||
query: `repo:${owner}/${repo} label:story:${storyKey}`
|
||||
});
|
||||
|
||||
if (existing.length > 1) {
|
||||
throw new Error(`
|
||||
DUPLICATE DETECTED: Found ${existing.length} issues for story:${storyKey}
|
||||
|
||||
This should never happen. Possible causes:
|
||||
- Previous migration created duplicates
|
||||
- Manual issue creation
|
||||
- Label typo
|
||||
|
||||
Issues found:
|
||||
${existing.map(i => ` - Issue #${i.number}: ${i.title}`).join('\n')}
|
||||
|
||||
HALTING - resolve duplicates manually before continuing
|
||||
`);
|
||||
}
|
||||
```
|
||||
|
||||
**Result:** Cannot create duplicates even if run multiple times.
|
||||
|
||||
### Safeguard #4: State File Atomic Writes
|
||||
|
||||
**Design:** State file uses atomic write pattern (tmp + rename).
|
||||
|
||||
```javascript
|
||||
async function saveStateSafely(state, statePath) {
|
||||
const tmpPath = `${statePath}.tmp`;
|
||||
|
||||
// 1. Write to temp file
|
||||
fs.writeFileSync(tmpPath, yaml.stringify(state));
|
||||
|
||||
// 2. Verify temp file written correctly
|
||||
const readBack = yaml.parse(fs.readFileSync(tmpPath));
|
||||
assert.deepEqual(readBack, state, 'State file corruption detected');
|
||||
|
||||
// 3. Atomic rename (POSIX guarantee)
|
||||
fs.renameSync(tmpPath, statePath);
|
||||
|
||||
// State is now safely written - crash after this point is safe
|
||||
}
|
||||
```
|
||||
|
||||
**Result:** State file is never corrupted, even if process crashes during write.
|
||||
|
||||
---
|
||||
|
||||
## Monitoring & Observability
|
||||
|
||||
### Real-Time Progress
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
⚡ MIGRATION PROGRESS (Live)
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
Migrated: 15/47 (32%)
|
||||
Created: 12 issues
|
||||
Updated: 3 issues
|
||||
Failed: 0
|
||||
|
||||
Current: Story 2-16 (creating...)
|
||||
Last success: Story 2-15 (2s ago)
|
||||
|
||||
Rate: 1.2 stories/min
|
||||
ETA: 26 minutes remaining
|
||||
|
||||
API calls used: 45/5000 (1%)
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
|
||||
### Detailed Logging
|
||||
|
||||
```yaml
|
||||
# migration-log-2026-01-07T15-30-00.log
|
||||
[15:30:00] Migration started (mode: execute)
|
||||
[15:30:05] Pre-flight checks passed
|
||||
[15:30:15] Story 2-1: Created Issue #101 (verified)
|
||||
[15:30:32] Story 2-2: Created Issue #102 (verified)
|
||||
[15:30:45] Story 2-3: Already exists Issue #103 (updated)
|
||||
[15:31:02] Story 2-4: CREATE FAILED (attempt 1/3) - Network timeout
|
||||
[15:31:03] Story 2-4: Retry 1 after 1000ms
|
||||
[15:31:05] Story 2-4: Created Issue #104 (verified) ✅
|
||||
[15:31:20] Story 2-5: Created Issue #105 (verified)
|
||||
# ... continues
|
||||
[15:55:43] Migration complete: 47/47 success (0 failures)
|
||||
[15:55:44] State saved: migration-state.yaml
|
||||
[15:55:45] Rollback manifest: migration-rollback-2026-01-07T15-30-00.yaml
|
||||
[15:55:46] Report generated: migration-report-2026-01-07T15-30-00.md
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Rate Limit Management
|
||||
|
||||
### GitHub API Rate Limits
|
||||
|
||||
**Authenticated:** 5000 requests/hour
|
||||
**Per migration:** ~3-4 API calls per story
|
||||
|
||||
**For 47 stories:**
|
||||
- Search existing: 47 calls
|
||||
- Create issues: ~35 calls
|
||||
- Verify: 35 calls
|
||||
- Labels/milestones: ~20 calls
|
||||
- **Total:** ~140 calls
|
||||
- **Remaining:** 4860/5000 (97% remaining)
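For planning, the per-story breakdown above can be turned into a rough budget; a hedged sketch (the helper and its constants are illustrative, not part of the tool):

```javascript
// Illustrative only — rough API-call budget, assuming one search per story,
// create + verify read-back per new issue, and occasional label/milestone calls.
function estimateApiCalls(totalStories, alreadyMigrated = 0) {
  const searches = totalStories;                        // existence check per story
  const writesAndVerifies = (totalStories - alreadyMigrated) * 2;
  const metadataCalls = Math.ceil(totalStories * 0.4);  // labels / milestones
  return searches + writesAndVerifies + metadataCalls;
}

// estimateApiCalls(47, 12) ≈ 136 calls — well under the 5000/hour limit.
```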
|
||||
|
||||
**Safe thresholds:**
|
||||
- <500 stories: Single migration run
|
||||
- 500-1000 stories: Split into 2 batches
|
||||
- >1000 stories: Use epic-based filtering
|
||||
|
||||
### Rate Limit Exhaustion Handling
|
||||
|
||||
```javascript
|
||||
// Promise-based sleep helper.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// promptUser, saveState and migrationState are injected here so the sketch is
// self-contained; in the workflow they come from the surrounding migration loop.
async function apiCallWithRateLimitCheck(operation, { promptUser, saveState, migrationState }) {
  try {
    return await operation();
  } catch (error) {
    // GitHub signals rate limiting with 429, or 403 plus x-ratelimit-remaining: 0
    const headers = error.response?.headers ?? {};
    const rateLimited =
      error.status === 429 ||
      (error.status === 403 && headers['x-ratelimit-remaining'] === '0');

    if (rateLimited) {
      const resetTime = Number(headers['x-ratelimit-reset'] ?? 0);
      const waitSeconds = Math.max(0, resetTime - Math.floor(Date.now() / 1000));

      console.warn(`
⚠️ Rate limit exceeded
Reset in: ${waitSeconds} seconds

Options:
[W] Wait (pause migration until rate limit resets)
[S] Stop (save state and resume later)
`);

      // promptUser is assumed to return 'W' or 'S' from an interactive prompt.
      const choice = await promptUser('Choice:');

      if (choice === 'W') {
        console.log(`Waiting ${waitSeconds}s for rate limit reset...`);
        await sleep(waitSeconds * 1000);
        return await operation(); // Retry after the window resets
      }

      // Save state and halt so the migration can resume later
      await saveState(migrationState);
      throw new Error('HALT: Rate limit exceeded, resume later');
    }

    throw error; // Other error, propagate
  }
}
|
||||
```
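Individual GitHub operations could then be routed through the wrapper; the `createIssue` helper and the free variables below are purely illustrative:

```javascript
// Inside an async migration loop, wrap each GitHub call so rate-limit
// handling is applied uniformly.
const issue = await apiCallWithRateLimitCheck(
  () => createIssue({ owner, repo, title, body, labels }),
  { promptUser, saveState, migrationState }
);
```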
|
||||
|
||||
---
|
||||
|
||||
## Guarantees Summary
|
||||
|
||||
| Guarantee | Mechanism | Failure Mode | Recovery |
|-----------|-----------|--------------|----------|
| Idempotent | Pre-check existing issues | Run twice → duplicates? | ❌ Prevented by duplicate detection |
| Atomic | Transaction per story | Create succeeds, labels fail? | ❌ Prevented by rollback on error |
| Verified | Read-back after write | Write succeeds but wrong data? | ❌ Detected immediately, retried |
| Resumable | State file after each story | Crash mid-migration? | ✅ Resume from last completed |
| Reversible | Rollback manifest | Wrong repo migrated? | ✅ Rollback closes all issues |
| Previewed | Dry-run mode | Unsure what will happen? | ✅ Preview before executing |
| Resilient | Exponential backoff | Network blip? | ✅ Auto-retry 3x before failing |
| Fail-safe | Halt on critical error | GitHub API down? | ✅ Saves state, can resume |
|
||||
|
||||
**Result:** Defense in depth — every known failure mode has a detection and recovery path.
|
||||
|
||||
---
|
||||
|
||||
## Migration Checklist
|
||||
|
||||
**Before running migration:**
|
||||
- [ ] Run `/migrate-to-github mode=dry-run` to preview
|
||||
- [ ] Verify repository name is correct
|
||||
- [ ] Back up sprint-status.yaml (just in case)
|
||||
- [ ] Verify GitHub token has write permissions
|
||||
- [ ] Check rate limit: <1000 stories OK for single run
|
||||
|
||||
**During migration:**
|
||||
- [ ] Monitor progress output
|
||||
- [ ] Watch for warnings or retries
|
||||
- [ ] Note any failed stories
|
||||
|
||||
**After migration:**
|
||||
- [ ] Run `/migrate-to-github mode=verify`
|
||||
- [ ] Review migration report
|
||||
- [ ] Spot-check 3-5 created issues in GitHub UI
|
||||
- [ ] Save rollback manifest (in case need to undo)
|
||||
- [ ] Update workflow configs: `github_sync_enabled: true`
|
||||
|
||||
---
|
||||
|
||||
**Reliability Score: 10/10** ✅
|
||||
|
||||
Every failure mode has a recovery path. Every write is verified. Every operation is resumable.
|
||||
|
|
@ -0,0 +1,957 @@
|
|||
# Migrate to GitHub - Production-Grade Story Migration
|
||||
|
||||
<critical>The workflow execution engine is governed by: {project-root}/_bmad/core/tasks/workflow.xml</critical>
|
||||
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
|
||||
<critical>RELIABILITY FIRST: This workflow prioritizes data integrity over speed</critical>
|
||||
|
||||
<workflow>
|
||||
|
||||
<step n="0" goal="Pre-Flight Safety Checks">
|
||||
<critical>MUST verify all prerequisites before ANY migration operations</critical>
|
||||
|
||||
<output>
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
🛡️ PRE-FLIGHT SAFETY CHECKS
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
</output>
|
||||
|
||||
<substep n="0a" title="Verify GitHub MCP access">
|
||||
<action>Test GitHub MCP connection:</action>
|
||||
<action>Call: mcp__github__get_me()</action>
|
||||
|
||||
<check if="API call fails">
|
||||
<output>
|
||||
❌ CRITICAL: GitHub MCP not accessible
|
||||
|
||||
Cannot proceed with migration without GitHub API access.
|
||||
|
||||
Possible causes:
|
||||
- GitHub MCP server not configured
|
||||
- Authentication token missing or invalid
|
||||
- Network connectivity issues
|
||||
|
||||
Fix:
|
||||
1. Ensure GitHub MCP is configured in Claude settings
|
||||
2. Verify token has required permissions:
|
||||
- repo (full control)
|
||||
- write:discussion (for comments)
|
||||
3. Test connection: Try any GitHub MCP command
|
||||
|
||||
HALTING - Cannot migrate without GitHub access.
|
||||
</output>
|
||||
<action>HALT</action>
|
||||
</check>
|
||||
|
||||
<action>Extract current user info:</action>
|
||||
<action> - username: {{user.login}}</action>
|
||||
<action> - user_id: {{user.id}}</action>
|
||||
|
||||
<output>✅ GitHub MCP connected (@{{username}})</output>
|
||||
</substep>
|
||||
|
||||
<substep n="0b" title="Verify repository access">
|
||||
<action>Verify github_owner and github_repo parameters provided</action>
|
||||
|
||||
<check if="parameters missing">
|
||||
<output>
|
||||
❌ ERROR: GitHub repository not specified
|
||||
|
||||
Required parameters:
|
||||
github_owner: GitHub username or organization
|
||||
github_repo: Repository name
|
||||
|
||||
Usage:
|
||||
/migrate-to-github github_owner=jschulte github_repo=myproject
|
||||
/migrate-to-github github_owner=jschulte github_repo=myproject mode=execute
|
||||
|
||||
HALTING
|
||||
</output>
|
||||
<action>HALT</action>
|
||||
</check>
|
||||
|
||||
<action>Test repository access:</action>
|
||||
<action>Call: mcp__github__list_issues({
|
||||
owner: {{github_owner}},
|
||||
repo: {{github_repo}},
|
||||
per_page: 1
|
||||
})</action>
|
||||
|
||||
<check if="repository not found or access denied">
|
||||
<output>
|
||||
❌ CRITICAL: Cannot access repository {{github_owner}}/{{github_repo}}
|
||||
|
||||
Possible causes:
|
||||
- Repository doesn't exist
|
||||
- Token lacks access to this repository
|
||||
- Repository is private and token doesn't have permission
|
||||
|
||||
Verify:
|
||||
1. Repository exists: <https://github.com/{{github_owner}}/{{github_repo}}>
|
||||
2. Token has write access to issues
|
||||
3. Repository name is spelled correctly
|
||||
|
||||
HALTING
|
||||
</output>
|
||||
<action>HALT</action>
|
||||
</check>
|
||||
|
||||
<output>✅ Repository accessible ({{github_owner}}/{{github_repo}})</output>
|
||||
</substep>
|
||||
|
||||
<substep n="0c" title="Verify local files exist">
|
||||
<action>Check sprint-status.yaml exists:</action>
|
||||
<action>test -f {{sprint_status}}</action>
|
||||
|
||||
<check if="file not found">
|
||||
<output>
|
||||
❌ ERROR: sprint-status.yaml not found at {{sprint_status}}
|
||||
|
||||
Cannot migrate without sprint status file.
|
||||
|
||||
Run /sprint-planning to generate it first.
|
||||
|
||||
HALTING
|
||||
</output>
|
||||
<action>HALT</action>
|
||||
</check>
|
||||
|
||||
<action>Read and parse sprint-status.yaml</action>
|
||||
<action>Count total stories to migrate</action>
|
||||
|
||||
<output>✅ Found {{total_stories}} stories in sprint-status.yaml</output>
|
||||
|
||||
<action>Verify story files exist:</action>
|
||||
<action>For each story, try multiple naming patterns to find file</action>
|
||||
|
||||
<action>Report:</action>
|
||||
<output>
|
||||
📊 Story File Status:
|
||||
- ✅ Files found: {{stories_with_files}}
|
||||
- ❌ Files missing: {{stories_without_files}}
|
||||
{{#if stories_without_files > 0}}
|
||||
Missing: {{missing_story_keys}}
|
||||
{{/if}}
|
||||
</output>
|
||||
|
||||
<check if="stories_without_files > 0">
|
||||
<ask>
|
||||
⚠️ {{stories_without_files}} stories have no files
|
||||
|
||||
Options:
|
||||
[C] Continue (only migrate stories with files)
|
||||
[S] Skip these stories (add to skip list)
|
||||
[H] Halt (fix missing files first)
|
||||
|
||||
Choice:
|
||||
</ask>
|
||||
|
||||
<check if="choice == 'H'">
|
||||
<action>HALT</action>
|
||||
</check>
|
||||
</check>
|
||||
</substep>
|
||||
|
||||
<substep n="0d" title="Check for existing migration">
|
||||
<action>Check if state file exists: {{state_file}}</action>
|
||||
|
||||
<check if="state file exists">
|
||||
<action>Read migration state</action>
|
||||
<action>Extract: stories_migrated, issues_created, last_completed, timestamp</action>
|
||||
|
||||
<output>
|
||||
⚠️ Previous migration detected
|
||||
|
||||
Last migration:
|
||||
- Date: {{migration_timestamp}}
|
||||
- Stories migrated: {{stories_migrated.length}}
|
||||
- Issues created: {{issues_created.length}}
|
||||
- Last completed: {{last_completed}}
|
||||
- Status: {{migration_status}}
|
||||
|
||||
Options:
|
||||
[R] Resume (continue from where it left off)
|
||||
[F] Fresh (start over, may create duplicates if not careful)
|
||||
[V] View (show what was migrated)
|
||||
[D] Delete state (clear and start fresh)
|
||||
|
||||
Choice:
|
||||
</output>
|
||||
|
||||
<ask>How to proceed?</ask>
|
||||
|
||||
<check if="choice == 'R'">
|
||||
<action>Set resume_mode = true</action>
|
||||
<action>Load list of already-migrated stories</action>
|
||||
<action>Filter them out from migration queue</action>
|
||||
<output>✅ Resuming from story: {{last_completed}}</output>
|
||||
</check>
|
||||
|
||||
<check if="choice == 'F'">
|
||||
<output>⚠️ WARNING: Fresh start may create duplicate issues if stories were already migrated.</output>
|
||||
<ask>Confirm fresh start (will check for duplicates)? (yes/no):</ask>
|
||||
<check if="not confirmed">
|
||||
<action>HALT</action>
|
||||
</check>
|
||||
</check>
|
||||
|
||||
<check if="choice == 'V'">
|
||||
<action>Display migration state details</action>
|
||||
<action>Then re-prompt for choice</action>
|
||||
</check>
|
||||
|
||||
<check if="choice == 'D'">
|
||||
<action>Delete state file</action>
|
||||
<action>Set resume_mode = false</action>
|
||||
<output>✅ State cleared</output>
|
||||
</check>
|
||||
</check>
|
||||
</substep>
|
||||
|
||||
<output>
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
✅ PRE-FLIGHT CHECKS PASSED
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
- GitHub MCP: Connected
|
||||
- Repository: Accessible
|
||||
- Sprint status: Loaded ({{total_stories}} stories)
|
||||
- Story files: {{stories_with_files}} found
|
||||
- Mode: {{mode}}
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
</output>
|
||||
</step>
|
||||
|
||||
<step n="1" goal="Dry-run mode - Preview migration plan">
|
||||
<check if="mode != 'dry-run'">
|
||||
<action>Skip to Step 2 (Execute mode)</action>
|
||||
</check>
|
||||
|
||||
<output>
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
🔍 DRY-RUN MODE (Preview Only - No Changes)
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
This will show what WOULD happen without actually creating issues.
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
</output>
|
||||
|
||||
<action>For each story in sprint-status.yaml:</action>
|
||||
|
||||
<iterate>For each story_key:</iterate>
|
||||
|
||||
<substep n="1a" title="Check if issue already exists">
|
||||
<action>Search GitHub: mcp__github__search_issues({
|
||||
query: "repo:{{github_owner}}/{{github_repo}} label:story:{{story_key}}"
|
||||
})</action>
|
||||
|
||||
<check if="issue found">
|
||||
<action>would_update = {{update_existing}}</action>
|
||||
<output>
|
||||
📝 Story {{story_key}}:
|
||||
GitHub: Issue #{{existing_issue.number}} EXISTS
|
||||
Action: {{#if would_update}}Would UPDATE{{else}}Would SKIP{{/if}}
|
||||
Current labels: {{existing_issue.labels}}
|
||||
Current assignee: {{existing_issue.assignee || "none"}}
|
||||
</output>
|
||||
</check>
|
||||
|
||||
<check if="issue not found">
|
||||
<action>would_create = true</action>
|
||||
<action>Read local story file</action>
|
||||
<action>Parse: title, ACs, tasks, epic, status</action>
|
||||
|
||||
<output>
|
||||
📝 Story {{story_key}}:
|
||||
GitHub: NOT FOUND
|
||||
Action: Would CREATE
|
||||
|
||||
Proposed Issue:
|
||||
- Title: "Story {{story_key}}: {{parsed_title}}"
|
||||
- Labels: type:story, story:{{story_key}}, status:{{status}}, epic:{{epic_number}}, complexity:{{complexity}}
|
||||
- Milestone: Epic {{epic_number}}
|
||||
- Acceptance Criteria: {{ac_count}} items
|
||||
- Tasks: {{task_count}} items
|
||||
- Assignee: {{#if status == 'in-progress'}}@{{infer_from_git_log}}{{else}}none{{/if}}
|
||||
</output>
|
||||
</check>
|
||||
</substep>
|
||||
|
||||
<action>Count actions:</action>
|
||||
<output>
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
📊 DRY-RUN SUMMARY
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
**Total Stories:** {{total_stories}}
|
||||
|
||||
**Actions:**
|
||||
- ✅ Would CREATE: {{would_create_count}} new issues
|
||||
- 🔄 Would UPDATE: {{would_update_count}} existing issues
|
||||
- ⏭️ Would SKIP: {{would_skip_count}} (existing, no update)
|
||||
|
||||
**Epics/Milestones:**
|
||||
- Would CREATE: {{epic_milestones_to_create.length}} milestones
|
||||
- Already exist: {{epic_milestones_existing.length}}
|
||||
|
||||
**Estimated API Calls:**
|
||||
- Issue searches: {{total_stories}} (check existing)
|
||||
- Issue creates: {{would_create_count}}
|
||||
- Issue updates: {{would_update_count}}
|
||||
- Milestone operations: {{milestone_operations}}
|
||||
- **Total:** ~{{total_api_calls}} API calls
|
||||
|
||||
**Rate Limit Impact:**
|
||||
- Authenticated limit: 5000/hour
|
||||
- This migration: ~{{total_api_calls}} calls
|
||||
- Remaining after: ~{{5000 - total_api_calls}}
|
||||
- Safe: {{#if total_api_calls < 1000}}YES{{else}}Borderline (consider smaller batches){{/if}}
|
||||
|
||||
**Estimated Duration:** {{estimated_minutes}} minutes
|
||||
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
|
||||
⚠️ This was a DRY-RUN. No issues were created.
|
||||
|
||||
To execute the migration:
|
||||
/migrate-to-github mode=execute github_owner={{github_owner}} github_repo={{github_repo}}
|
||||
|
||||
To migrate only Epic 2:
|
||||
/migrate-to-github mode=execute filter_by_epic=2 github_owner={{github_owner}} github_repo={{github_repo}}
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
</output>
|
||||
|
||||
<action>Exit workflow (dry-run complete)</action>
|
||||
</step>
|
||||
|
||||
<step n="2" goal="Execute mode - Perform migration with atomic operations">
|
||||
<check if="mode != 'execute'">
|
||||
<action>Skip to Step 3 (Verify mode)</action>
|
||||
</check>
|
||||
|
||||
<output>
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
⚡ EXECUTE MODE (Migrating Stories to GitHub)
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
**SAFETY GUARANTEES:**
|
||||
✅ Idempotent - Can re-run safely (checks for duplicates)
|
||||
✅ Atomic - Each story fully succeeds or rolls back
|
||||
✅ Verified - Reads back each created issue
|
||||
✅ Resumable - Saves state after each story
|
||||
✅ Reversible - Creates rollback manifest
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
</output>
|
||||
|
||||
<ask>
|
||||
⚠️ FINAL CONFIRMATION
|
||||
|
||||
You are about to create ~{{would_create_count}} GitHub Issues.
|
||||
|
||||
This operation:
|
||||
- WILL create issues in {{github_owner}}/{{github_repo}}
|
||||
- WILL modify your GitHub repository
|
||||
- CAN be rolled back (we'll create rollback manifest)
|
||||
- CANNOT be undone automatically after issues are created
|
||||
|
||||
Have you:
|
||||
- [ ] Run dry-run mode to preview?
|
||||
- [ ] Verified repository is correct?
|
||||
- [ ] Backed up sprint-status.yaml?
|
||||
- [ ] Confirmed you want to proceed?
|
||||
|
||||
Type "I understand and want to proceed" to continue:
|
||||
</ask>
|
||||
|
||||
<check if="confirmation != 'I understand and want to proceed'">
|
||||
<output>❌ Migration cancelled - confirmation not received</output>
|
||||
<action>HALT</action>
|
||||
</check>
|
||||
|
||||
<action>Initialize migration state:</action>
|
||||
<action>
|
||||
migration_state = {
|
||||
started_at: {{timestamp}},
|
||||
mode: "execute",
|
||||
github_owner: {{github_owner}},
|
||||
github_repo: {{github_repo}},
|
||||
total_stories: {{total_stories}},
|
||||
stories_migrated: [],
|
||||
issues_created: [],
|
||||
issues_updated: [],
|
||||
issues_failed: [],
|
||||
rollback_manifest: [],
|
||||
last_completed: null
|
||||
}
|
||||
</action>
|
||||
|
||||
<action>Save initial state to {{state_file}}</action>
|
||||
|
||||
<action>Initialize rollback manifest (for safety):</action>
|
||||
<action>rollback_manifest = {
|
||||
created_at: {{timestamp}},
|
||||
github_owner: {{github_owner}},
|
||||
github_repo: {{github_repo}},
|
||||
created_issues: [] # Will track issue numbers for rollback
|
||||
}</action>
|
||||
|
||||
<iterate>For each story in sprint-status.yaml:</iterate>
|
||||
|
||||
<substep n="2a" title="Migrate single story (ATOMIC)">
|
||||
<output>
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
📦 Migrating {{current_index}}/{{total_stories}}: {{story_key}}
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
</output>
|
||||
|
||||
<action>Read local story file</action>
|
||||
|
||||
<check if="file not found">
|
||||
<output> ⏭️ SKIP - No file found</output>
|
||||
<action>Add to migration_state.issues_failed with reason: "File not found"</action>
|
||||
<action>Continue to next story</action>
|
||||
</check>
|
||||
|
||||
<action>Parse story file:</action>
|
||||
<action> - Extract all 12 sections</action>
|
||||
<action> - Parse Acceptance Criteria (convert to checkboxes)</action>
|
||||
<action> - Parse Tasks (convert to checkboxes)</action>
|
||||
<action> - Extract metadata: epic_number, complexity</action>
|
||||
|
||||
<action>Check if issue already exists (idempotent check):</action>
|
||||
<action>Call: mcp__github__search_issues({
|
||||
query: "repo:{{github_owner}}/{{github_repo}} label:story:{{story_key}}"
|
||||
})</action>
|
||||
|
||||
<check if="issue exists AND update_existing == false">
|
||||
<output> ✅ EXISTS - Issue #{{existing_issue.number}} (skipping, update_existing=false)</output>
|
||||
<action>Add to migration_state.stories_migrated (already done)</action>
|
||||
<action>Continue to next story</action>
|
||||
</check>
|
||||
|
||||
<check if="issue exists AND update_existing == true">
|
||||
<output> 🔄 EXISTS - Issue #{{existing_issue.number}} (updating)</output>
|
||||
|
||||
<action>ATOMIC UPDATE with retry:</action>
|
||||
<action>
|
||||
attempt = 0
|
||||
max_attempts = {{max_retries}} + 1
|
||||
|
||||
WHILE attempt < max_attempts:
|
||||
TRY:
|
||||
# Update issue
|
||||
result = mcp__github__issue_write({
|
||||
method: "update",
|
||||
owner: {{github_owner}},
|
||||
repo: {{github_repo}},
|
||||
issue_number: {{existing_issue.number}},
|
||||
title: "Story {{story_key}}: {{parsed_title}}",
|
||||
body: {{convertStoryToIssueBody(parsed)}},
|
||||
labels: {{generateLabels(story_key, status, parsed)}}
|
||||
})
|
||||
|
||||
# Verify update succeeded (read back)
|
||||
sleep 1 second # GitHub eventual consistency
|
||||
|
||||
verification = mcp__github__issue_read({
|
||||
method: "get",
|
||||
owner: {{github_owner}},
|
||||
repo: {{github_repo}},
|
||||
issue_number: {{existing_issue.number}}
|
||||
})
|
||||
|
||||
# Check verification
|
||||
IF verification.title != expected_title:
|
||||
THROW "Write verification failed"
|
||||
|
||||
# Success!
|
||||
output: " ✅ UPDATED and VERIFIED - Issue #{{existing_issue.number}}"
|
||||
BREAK
|
||||
|
||||
CATCH error:
|
||||
attempt++
|
||||
IF attempt < max_attempts:
|
||||
sleep {{retry_backoff_ms[attempt]}}
|
||||
output: " ⚠️ Retry {{attempt}}/{{max_retries}} after error: {{error}}"
|
||||
ELSE:
|
||||
output: " ❌ FAILED after {{max_retries}} retries: {{error}}"
|
||||
add to migration_state.issues_failed
|
||||
|
||||
IF halt_on_critical_error:
|
||||
HALT
|
||||
ELSE:
|
||||
CONTINUE to next story
|
||||
</action>
|
||||
|
||||
<action>Add to migration_state.issues_updated</action>
|
||||
</check>
|
||||
|
||||
<check if="issue does NOT exist">
|
||||
<output> 🆕 CREATING new issue...</output>
|
||||
|
||||
<action>Generate issue body from story file:</action>
|
||||
<action>
|
||||
issue_body = """
|
||||
**Story File:** [{{story_key}}.md]({{file_path_in_repo}})
|
||||
**Epic:** {{epic_number}}
|
||||
**Complexity:** {{complexity}} ({{task_count}} tasks)
|
||||
|
||||
## Business Context
|
||||
{{parsed.businessContext}}
|
||||
|
||||
## Acceptance Criteria
|
||||
{{#each parsed.acceptanceCriteria}}
|
||||
- [ ] AC{{@index + 1}}: {{this}}
|
||||
{{/each}}
|
||||
|
||||
## Tasks
|
||||
{{#each parsed.tasks}}
|
||||
- [ ] {{this}}
|
||||
{{/each}}
|
||||
|
||||
## Technical Requirements
|
||||
{{parsed.technicalRequirements}}
|
||||
|
||||
## Definition of Done
|
||||
{{#each parsed.definitionOfDone}}
|
||||
- [ ] {{this}}
|
||||
{{/each}}
|
||||
|
||||
---
|
||||
_Migrated from BMAD local files_
|
||||
_Sync timestamp: {{timestamp}}_
|
||||
_Local file: `{{story_file_path}}`_
|
||||
"""
|
||||
</action>
|
||||
|
||||
<action>Generate labels:</action>
|
||||
<action>
|
||||
labels = [
|
||||
"type:story",
|
||||
"story:{{story_key}}",
|
||||
"status:{{current_status}}",
|
||||
"epic:{{epic_number}}",
|
||||
"complexity:{{complexity}}"
|
||||
]
|
||||
|
||||
{{#if has_high_risk_keywords}}
|
||||
labels.push("risk:high")
|
||||
{{/if}}
|
||||
</action>
|
||||
|
||||
<action>ATOMIC CREATE with retry and verification:</action>
|
||||
<action>
|
||||
attempt = 0
|
||||
|
||||
WHILE attempt < max_attempts:
|
||||
TRY:
|
||||
# Create issue
|
||||
created_issue = mcp__github__issue_write({
|
||||
method: "create",
|
||||
owner: {{github_owner}},
|
||||
repo: {{github_repo}},
|
||||
title: "Story {{story_key}}: {{parsed_title}}",
|
||||
body: {{issue_body}},
|
||||
labels: {{labels}}
|
||||
})
|
||||
|
||||
issue_number = created_issue.number
|
||||
|
||||
# CRITICAL: Verify creation succeeded (read back)
|
||||
sleep 2 seconds # GitHub eventual consistency
|
||||
|
||||
verification = mcp__github__issue_read({
|
||||
method: "get",
|
||||
owner: {{github_owner}},
|
||||
repo: {{github_repo}},
|
||||
issue_number: {{issue_number}}
|
||||
})
|
||||
|
||||
# Verify all fields
|
||||
IF verification.title != expected_title:
|
||||
THROW "Title mismatch after create"
|
||||
|
||||
IF NOT verification.labels.includes("story:{{story_key}}"):
|
||||
THROW "Story label missing after create"
|
||||
|
||||
# Success - record for rollback capability
|
||||
output: " ✅ CREATED and VERIFIED - Issue #{{issue_number}}"
|
||||
|
||||
rollback_manifest.created_issues.push({
|
||||
story_key: {{story_key}},
|
||||
issue_number: {{issue_number}},
|
||||
created_at: {{timestamp}}
|
||||
})
|
||||
|
||||
migration_state.issues_created.push({
|
||||
story_key: {{story_key}},
|
||||
issue_number: {{issue_number}}
|
||||
})
|
||||
|
||||
BREAK
|
||||
|
||||
CATCH error:
|
||||
attempt++
|
||||
|
||||
# Check if issue was created despite error (orphaned issue)
|
||||
check_result = mcp__github__search_issues({
|
||||
query: "repo:{{github_owner}}/{{github_repo}} label:story:{{story_key}}"
|
||||
})
|
||||
|
||||
IF check_result.length > 0:
|
||||
# Issue was created, verification failed - treat as success
|
||||
output: " ✅ CREATED (verification had transient error)"
|
||||
BREAK
|
||||
|
||||
IF attempt < max_attempts:
|
||||
sleep {{retry_backoff_ms[attempt]}}
|
||||
output: " ⚠️ Retry {{attempt}}/{{max_retries}}"
|
||||
ELSE:
|
||||
output: " ❌ FAILED after {{max_retries}} retries: {{error}}"
|
||||
|
||||
migration_state.issues_failed.push({
|
||||
story_key: {{story_key}},
|
||||
error: {{error}},
|
||||
attempts: {{attempt}}
|
||||
})
|
||||
|
||||
IF halt_on_critical_error:
|
||||
output: "HALTING - Critical error during migration"
|
||||
save migration_state
|
||||
HALT
|
||||
ELSE:
|
||||
output: "Continuing despite failure (continue_on_failure=true)"
|
||||
CONTINUE to next story
|
||||
</action>
|
||||
</check>
|
||||
|
||||
<action>Update migration state:</action>
|
||||
<action>migration_state.stories_migrated.push({{story_key}})</action>
|
||||
<action>migration_state.last_completed = {{story_key}}</action>
|
||||
|
||||
<check if="save_state_after_each == true">
|
||||
<action>Save migration state to {{state_file}}</action>
|
||||
<action>Save rollback manifest to {{output_folder}}/migration-rollback-{{timestamp}}.yaml</action>
|
||||
</check>
|
||||
|
||||
<check if="current_index % 10 == 0">
|
||||
<output>
|
||||
📊 Progress: {{current_index}}/{{total_stories}} migrated
|
||||
Created: {{issues_created.length}}
|
||||
Updated: {{issues_updated.length}}
|
||||
Failed: {{issues_failed.length}}
|
||||
</output>
|
||||
</check>
|
||||
</substep>
|
||||
|
||||
<output>
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
✅ MIGRATION COMPLETE
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
**Total:** {{total_stories}} stories processed
|
||||
**Created:** {{issues_created.length}} new issues
|
||||
**Updated:** {{issues_updated.length}} existing issues
|
||||
**Failed:** {{issues_failed.length}} errors
|
||||
**Duration:** {{actual_duration}}
|
||||
|
||||
{{#if issues_failed.length > 0}}
|
||||
**Failed Stories:**
|
||||
{{#each issues_failed}}
|
||||
- {{story_key}}: {{error}}
|
||||
{{/each}}
|
||||
|
||||
Recommendation: Fix errors and re-run migration (will skip already-migrated stories)
|
||||
{{/if}}
|
||||
|
||||
**Rollback Manifest:** {{rollback_manifest_path}}
|
||||
(Use this file to delete created issues if needed)
|
||||
|
||||
**State File:** {{state_file}}
|
||||
(Tracks migration progress for resume capability)
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
</output>
|
||||
|
||||
<action>Continue to Step 3 (Verify)</action>
|
||||
</step>
|
||||
|
||||
<step n="3" goal="Verify mode - Double-check migration accuracy">
|
||||
<check if="mode != 'verify' AND mode != 'execute'">
|
||||
<action>Skip to Step 4</action>
|
||||
</check>
|
||||
|
||||
<check if="mode == 'execute'">
|
||||
<ask>
|
||||
Migration complete. Run verification to double-check accuracy? (yes/no):
|
||||
</ask>
|
||||
|
||||
<check if="response != 'yes'">
|
||||
<action>Skip to Step 5 (Report)</action>
|
||||
</check>
|
||||
</check>
|
||||
|
||||
<output>
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
🔍 VERIFICATION MODE (Double-Checking Migration)
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
</output>
|
||||
|
||||
<action>Load migration state from {{state_file}}</action>
|
||||
|
||||
<iterate>For each migrated story in migration_state.stories_migrated:</iterate>
|
||||
|
||||
<action>Fetch issue from GitHub:</action>
|
||||
<action>Search: label:story:{{story_key}}</action>
|
||||
|
||||
<check if="issue not found">
|
||||
<output> ❌ VERIFICATION FAILED: {{story_key}} - Issue not found in GitHub</output>
|
||||
<action>Add to verification_failures</action>
|
||||
</check>
|
||||
|
||||
<check if="issue found">
|
||||
<action>Verify fields match expected:</action>
|
||||
<action> - Title contains story_key ✓</action>
|
||||
<action> - Label "story:{{story_key}}" exists ✓</action>
|
||||
<action> - Status label matches sprint-status.yaml ✓</action>
|
||||
<action> - AC count matches local file ✓</action>
|
||||
|
||||
<check if="all fields match">
|
||||
<output> ✅ VERIFIED: {{story_key}} → Issue #{{issue_number}}</output>
|
||||
</check>
|
||||
|
||||
<check if="fields mismatch">
|
||||
<output> ⚠️ MISMATCH: {{story_key}} → Issue #{{issue_number}}</output>
|
||||
<output> Expected: {{expected}}</output>
|
||||
<output> Actual: {{actual}}</output>
|
||||
<action>Add to verification_warnings</action>
|
||||
</check>
|
||||
</check>
|
||||
|
||||
<output>
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
📊 VERIFICATION RESULTS
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
**Stories Checked:** {{stories_migrated.length}}
|
||||
**✅ Verified Correct:** {{verified_count}}
|
||||
**⚠️ Warnings:** {{verification_warnings.length}}
|
||||
**❌ Failures:** {{verification_failures.length}}
|
||||
|
||||
{{#if verification_failures.length > 0}}
|
||||
**Verification Failures:**
|
||||
{{#each verification_failures}}
|
||||
- {{this}}
|
||||
{{/each}}
|
||||
|
||||
❌ Migration has errors - issues may be missing or incorrect
|
||||
{{else}}
|
||||
✅ All migrated stories verified in GitHub
|
||||
{{/if}}
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
</output>
|
||||
</step>
|
||||
|
||||
<step n="4" goal="Rollback mode - Delete created issues">
|
||||
<check if="mode != 'rollback'">
|
||||
<action>Skip to Step 5 (Report)</action>
|
||||
</check>
|
||||
|
||||
<output>
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
⚠️ ROLLBACK MODE (Delete Migrated Issues)
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
</output>
|
||||
|
||||
<action>Load rollback manifest from {{output_folder}}/migration-rollback-*.yaml</action>
|
||||
|
||||
<check if="manifest not found">
|
||||
<output>
|
||||
❌ ERROR: No rollback manifest found
|
||||
|
||||
Cannot rollback without manifest file.
|
||||
Rollback manifests are in: {{output_folder}}/migration-rollback-*.yaml
|
||||
|
||||
HALTING
|
||||
</output>
|
||||
<action>HALT</action>
|
||||
</check>
|
||||
|
||||
<output>
|
||||
**Rollback Manifest:**
|
||||
- Created: {{manifest.created_at}}
|
||||
- Repository: {{manifest.github_owner}}/{{manifest.github_repo}}
|
||||
- Issues to delete: {{manifest.created_issues.length}}
|
||||
|
||||
**WARNING:** This will PERMANENTLY DELETE these issues from GitHub:
|
||||
{{#each manifest.created_issues}}
|
||||
- Issue #{{issue_number}}: {{story_key}}
|
||||
{{/each}}
|
||||
|
||||
This operation CANNOT be undone!
|
||||
</output>
|
||||
|
||||
<ask>
|
||||
Type "DELETE ALL ISSUES" to proceed with rollback:
|
||||
</ask>
|
||||
|
||||
<check if="confirmation != 'DELETE ALL ISSUES'">
|
||||
<output>❌ Rollback cancelled</output>
|
||||
<action>HALT</action>
|
||||
</check>
|
||||
|
||||
<iterate>For each issue in manifest.created_issues:</iterate>
|
||||
|
||||
<action>Close each created issue (the GitHub REST API doesn't support deleting issues, so close + label instead):</action>
<action>
# GitHub doesn't allow issue deletion via the REST API
# Best we can do: close the issue and add label "migrated:rolled-back"

mcp__github__issue_write({
  method: "update",
  owner: {{github_owner}},
  repo: {{github_repo}},
  issue_number: {{issue_number}},
  state: "closed",
  labels: ["migrated:rolled-back", "do-not-use"],
  state_reason: "not_planned"
})

# Add comment explaining
mcp__github__add_issue_comment({
  owner: {{github_owner}},
  repo: {{github_repo}},
  issue_number: {{issue_number}},
  body: "Issue closed - migration was rolled back. Do not use."
})
|
||||
</action>
|
||||
|
||||
<output> ✅ Rolled back: Issue #{{issue_number}}</output>
|
||||
|
||||
<output>
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
✅ ROLLBACK COMPLETE
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
**Issues Rolled Back:** {{manifest.created_issues.length}}
|
||||
|
||||
Note: GitHub API doesn't support issue deletion.
|
||||
Issues were closed with label "migrated:rolled-back" instead.
|
||||
|
||||
To fully delete (manual):
1. Open each rolled-back issue in the GitHub UI
2. Use "Delete issue" in the issue sidebar (requires repository admin permissions)
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
</output>
|
||||
</step>
|
||||
|
||||
<step n="5" goal="Generate comprehensive migration report">
|
||||
<action>Calculate final statistics:</action>
|
||||
<action>
|
||||
final_stats = {
|
||||
total_stories: {{total_stories}},
|
||||
migrated_successfully: {{issues_created.length + issues_updated.length}},
|
||||
failed: {{issues_failed.length}},
|
||||
success_rate: ({{migrated_successfully}} / {{total_stories}}) * 100,
|
||||
duration: {{end_time - start_time}},
|
||||
avg_time_per_story: {{duration / total_stories}}
|
||||
}
|
||||
</action>
|
||||
|
||||
<check if="create_migration_report == true">
|
||||
<action>Write comprehensive report to {{report_path}}</action>
|
||||
|
||||
<action>Report structure:</action>
|
||||
<action>
|
||||
# GitHub Migration Report
|
||||
|
||||
**Date:** {{timestamp}}
|
||||
**Repository:** {{github_owner}}/{{github_repo}}
|
||||
**Mode:** {{mode}}
|
||||
|
||||
## Executive Summary
|
||||
|
||||
- **Total Stories:** {{total_stories}}
|
||||
- **✅ Migrated:** {{migrated_successfully}} ({{success_rate}}%)
|
||||
- **❌ Failed:** {{failed}}
|
||||
- **Duration:** {{duration}}
|
||||
- **Avg per story:** {{avg_time_per_story}}
|
||||
|
||||
## Created Issues
|
||||
|
||||
{{#each issues_created}}
|
||||
- Story {{story_key}} → Issue #{{issue_number}}
|
||||
URL: <https://github.com/{{github_owner}}/{{github_repo}}/issues/{{issue_number}}>
|
||||
{{/each}}
|
||||
|
||||
## Updated Issues
|
||||
|
||||
{{#each issues_updated}}
|
||||
- Story {{story_key}} → Issue #{{issue_number}} (updated)
|
||||
{{/each}}
|
||||
|
||||
## Failed Migrations
|
||||
|
||||
{{#if issues_failed.length > 0}}
|
||||
{{#each issues_failed}}
|
||||
- Story {{story_key}}: {{error}}
|
||||
Attempts: {{attempts}}
|
||||
{{/each}}
|
||||
|
||||
**Recovery Steps:**
|
||||
1. Fix underlying issues (check error messages)
|
||||
2. Re-run migration (will skip already-migrated stories)
|
||||
{{else}}
|
||||
None - all stories migrated successfully!
|
||||
{{/if}}
|
||||
|
||||
## Rollback Information
|
||||
|
||||
**Rollback Manifest:** {{rollback_manifest_path}}
|
||||
|
||||
To rollback this migration:
|
||||
```bash
|
||||
/migrate-to-github mode=rollback
|
||||
```
|
||||
|
||||
## Next Steps
|
||||
|
||||
1. **Verify migration:** /migrate-to-github mode=verify
|
||||
2. **Test story checkout:** /checkout-story story_key=2-5-auth
|
||||
3. **Enable GitHub sync:** Update workflow.yaml with github_sync_enabled=true
|
||||
4. **Product Owner setup:** Share GitHub Issues URL with PO team
|
||||
|
||||
## Migration Details
|
||||
|
||||
**API Calls Made:** ~{{total_api_calls}}
|
||||
**Rate Limit Used:** {{api_calls_used}}/5000
|
||||
**Errors Encountered:** {{error_count}}
|
||||
**Retries Performed:** {{retry_count}}
|
||||
|
||||
---
|
||||
_Generated by BMAD migrate-to-github workflow_
|
||||
</action>
|
||||
|
||||
<output>📄 Migration report: {{report_path}}</output>
|
||||
</check>
|
||||
|
||||
<output>
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
✅ MIGRATION WORKFLOW COMPLETE
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
**Mode:** {{mode}}
|
||||
**Success Rate:** {{success_rate}}%
|
||||
|
||||
{{#if mode == 'execute'}}
|
||||
**✅ {{migrated_successfully}} stories now in GitHub Issues**
|
||||
|
||||
View in GitHub:
|
||||
<https://github.com/{{github_owner}}/{{github_repo}}/issues?q=is:issue+label:type:story>
|
||||
|
||||
**Next Steps:**
|
||||
1. Verify migration: /migrate-to-github mode=verify
|
||||
2. Test workflows with GitHub sync enabled
|
||||
3. Share Issues URL with Product Owner team
|
||||
|
||||
{{#if issues_failed.length > 0}}
|
||||
⚠️ {{issues_failed.length}} stories failed - re-run to retry
|
||||
{{/if}}
|
||||
{{/if}}
|
||||
|
||||
{{#if mode == 'dry-run'}}
|
||||
**This was a preview. No issues were created.**
|
||||
|
||||
To execute: /migrate-to-github mode=execute
|
||||
{{/if}}
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
</output>
|
||||
</step>
|
||||
|
||||
</workflow>
|
||||
|
|
@ -0,0 +1,62 @@
|
|||
name: migrate-to-github
|
||||
description: "Production-grade migration of BMAD stories from local files to GitHub Issues with comprehensive reliability guarantees"
|
||||
author: "BMad"
|
||||
version: "1.0.0"
|
||||
|
||||
# Critical variables
|
||||
config_source: "{project-root}/_bmad/bmm/config.yaml"
|
||||
output_folder: "{config_source}:output_folder"
|
||||
sprint_artifacts: "{output_folder}/sprint-artifacts"
|
||||
sprint_status: "{output_folder}/sprint-status.yaml"
|
||||
|
||||
# GitHub configuration
|
||||
github:
|
||||
owner: "{github_owner}" # Required: GitHub username or org
|
||||
repo: "{github_repo}" # Required: Repository name
|
||||
# Token comes from MCP GitHub server config (already authenticated)
|
||||
|
||||
# Migration mode
|
||||
mode: "dry-run" # "dry-run" | "execute" | "verify" | "rollback"
|
||||
# SAFETY: Defaults to dry-run - must explicitly choose execute
|
||||
|
||||
# Migration scope
|
||||
scope:
|
||||
include_epics: true # Create milestone for each epic
|
||||
include_stories: true # Create issue for each story
|
||||
filter_by_epic: null # Optional: Only migrate Epic N (e.g., "2")
|
||||
filter_by_status: null # Optional: Only migrate stories with status (e.g., "backlog")
|
||||
|
||||
# Migration strategy
|
||||
strategy:
|
||||
check_existing: true # Search for existing issues before creating (prevents duplicates)
|
||||
update_existing: true # If issue exists, update it (false = skip)
|
||||
create_missing: true # Create issues for stories without issues
|
||||
|
||||
# Label strategy
|
||||
label_prefix: "story:" # Prefix for story labels (e.g., "story:2-5-auth")
|
||||
use_type_labels: true # Add "type:story", "type:epic"
|
||||
use_status_labels: true # Add "status:backlog", "status:in-progress", etc.
|
||||
use_complexity_labels: true # Add "complexity:micro", etc.
|
||||
use_epic_labels: true # Add "epic:2", "epic:3", etc.
|
||||
|
||||
# Reliability settings
|
||||
reliability:
|
||||
verify_after_create: true # Read back issue to verify creation succeeded
|
||||
retry_on_failure: true # Retry failed operations
|
||||
max_retries: 3
|
||||
retry_backoff_ms: [1000, 3000, 9000] # Exponential backoff
|
||||
halt_on_critical_error: true # Stop migration if critical error occurs
|
||||
save_state_after_each: true # Save progress after each story (crash-safe)
|
||||
create_rollback_manifest: true # Track created issues for rollback
|
||||
|
||||
# State tracking
|
||||
state_file: "{output_folder}/migration-state.yaml"
|
||||
# Tracks: stories_migrated, issues_created, last_story, can_resume
|
||||
|
||||
# Output
|
||||
output:
|
||||
create_migration_report: true
|
||||
report_path: "{output_folder}/migration-report-{timestamp}.md"
|
||||
log_level: "verbose" # "quiet" | "normal" | "verbose"
|
||||
|
||||
standalone: true
|
||||
|
|
@ -0,0 +1,188 @@
|
|||
# Multi-Agent Code Review
|
||||
|
||||
**Purpose:** Perform unbiased code review using multiple specialized AI agents in FRESH CONTEXT, with agent count based on story complexity.
|
||||
|
||||
## Overview
|
||||
|
||||
**Key Principle: FRESH CONTEXT**
|
||||
- Review happens in NEW session (not the agent that wrote the code)
|
||||
- Prevents bias from implementation decisions
|
||||
- Provides truly independent perspective
|
||||
|
||||
**Variable Agent Count by Complexity:**
|
||||
- **MICRO** (2 agents): Security + Code Quality - Quick sanity check
|
||||
- **STANDARD** (4 agents): + Architecture + Testing - Balanced review
|
||||
- **COMPLEX** (6 agents): + Performance + Domain Expert - Comprehensive analysis
|
||||
|
||||
**Available Specialized Agents:**
|
||||
- **Security Agent**: Identifies vulnerabilities and security risks
|
||||
- **Code Quality Agent**: Reviews style, maintainability, and best practices
|
||||
- **Architecture Agent**: Reviews system design, patterns, and structure
|
||||
- **Testing Agent**: Evaluates test coverage and quality
|
||||
- **Performance Agent**: Analyzes efficiency and optimization opportunities
|
||||
- **Domain Expert**: Validates business logic and domain constraints
|
||||
|
||||
## Workflow
|
||||
|
||||
### Step 1: Determine Agent Count
|
||||
|
||||
Based on {complexity_level}:
|
||||
|
||||
```
|
||||
If complexity_level == "micro":
|
||||
agent_count = 2
|
||||
agents = ["security", "code_quality"]
|
||||
Display: 🔍 MICRO Review (2 agents: Security + Code Quality)
|
||||
|
||||
Else if complexity_level == "standard":
|
||||
agent_count = 4
|
||||
agents = ["security", "code_quality", "architecture", "testing"]
|
||||
Display: 📋 STANDARD Review (4 agents: Multi-perspective)
|
||||
|
||||
Else if complexity_level == "complex":
|
||||
agent_count = 6
|
||||
agents = ["security", "code_quality", "architecture", "testing", "performance", "domain_expert"]
|
||||
Display: 🔬 COMPLEX Review (6 agents: Comprehensive analysis)
|
||||
```
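The same mapping as a compact sketch (illustrative only; the workflow text above is what actually drives agent selection):

```javascript
// Map story complexity to the review agents that will be spawned.
const REVIEW_TIERS = {
  micro:    ['security', 'code_quality'],
  standard: ['security', 'code_quality', 'architecture', 'testing'],
  complex:  ['security', 'code_quality', 'architecture', 'testing', 'performance', 'domain_expert'],
};

function selectReviewTier(complexityLevel) {
  const agents = REVIEW_TIERS[complexityLevel] ?? REVIEW_TIERS.standard; // fall back to standard
  return { agentCount: agents.length, agents };
}
```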
|
||||
|
||||
### Step 2: Load Story Context
|
||||
|
||||
```bash
|
||||
# Read story file
|
||||
story_file="{story_file}"
|
||||
test -f "$story_file" || { echo "❌ Story file not found: $story_file" >&2; exit 1; }
|
||||
```
|
||||
|
||||
Read the story file to understand:
|
||||
- What was supposed to be implemented
|
||||
- Acceptance criteria
|
||||
- Tasks and subtasks
|
||||
- File list
|
||||
|
||||
### Step 3: Invoke Multi-Agent Review Skill (Fresh Context + Smart Agent Selection)
|
||||
|
||||
**CRITICAL:** This review MUST happen in a FRESH CONTEXT (new session, different agent).
|
||||
|
||||
**Smart Agent Selection:**
|
||||
- Skill analyzes changed files and selects MOST RELEVANT agents
|
||||
- Touching payments code? → Add financial-security agent
|
||||
- Touching auth code? → Add auth-security agent
|
||||
- Touching file uploads? → Add file-security agent
|
||||
- Touching performance-critical code? → Add performance agent
|
||||
- Agent count determined by complexity, but agents chosen by code analysis (see the sketch below)
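A minimal sketch of what path-based selection could look like; the category keywords and specialist agent names here are hypothetical:

```javascript
// Pick the most relevant specialist agents by scanning changed file paths.
function selectSmartAgents(changedFiles, maxAgents) {
  const rules = [
    { pattern: /payment|billing|invoice/i, agent: 'financial-security' },
    { pattern: /auth|login|session|token/i, agent: 'auth-security' },
    { pattern: /upload|file|attachment/i,  agent: 'file-security' },
    { pattern: /query|cache|index|batch/i, agent: 'performance' },
  ];

  const selected = ['security', 'code_quality']; // always-on baseline
  for (const { pattern, agent } of rules) {
    if (selected.length >= maxAgents) break;
    if (changedFiles.some((f) => pattern.test(f)) && !selected.includes(agent)) {
      selected.push(agent);
    }
  }
  return selected.slice(0, maxAgents);
}
```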
|
||||
|
||||
```xml
|
||||
<invoke-skill skill="multi-agent-review">
|
||||
<parameter name="story_id">{story_id}</parameter>
|
||||
<parameter name="base_branch">{base_branch}</parameter>
|
||||
<parameter name="max_agents">{agent_count}</parameter>
|
||||
<parameter name="agent_selection">smart</parameter>
|
||||
<parameter name="fresh_context">true</parameter>
|
||||
</invoke-skill>
|
||||
```
|
||||
|
||||
The skill will:
|
||||
1. Create fresh context (unbiased review session)
|
||||
2. Analyze changed files in the story
|
||||
3. Detect code categories (auth, payments, file handling, etc.)
|
||||
4. Select {agent_count} MOST RELEVANT specialized agents
|
||||
5. Run parallel reviews from selected agents
|
||||
6. Each agent reviews from their expertise perspective
|
||||
7. Aggregate findings with severity ratings
|
||||
8. Return comprehensive review report
|
||||
|
||||
### Step 4: Save Review Report
|
||||
|
||||
```bash
|
||||
# The skill returns a review report
|
||||
# Save it to: {review_report}
|
||||
```
|
||||
|
||||
Display summary:
|
||||
```
|
||||
🤖 MULTI-AGENT CODE REVIEW COMPLETE
|
||||
|
||||
Agents Used: {agent_count}
{list each selected agent, one per line}
|
||||
|
||||
Findings:
|
||||
- 🔴 CRITICAL: {critical_count}
|
||||
- 🟠 HIGH: {high_count}
|
||||
- 🟡 MEDIUM: {medium_count}
|
||||
- 🔵 LOW: {low_count}
|
||||
- ℹ️ INFO: {info_count}
|
||||
|
||||
Report saved to: {review_report}
|
||||
```
|
||||
|
||||
### Step 5: Present Findings
|
||||
|
||||
For each finding, display:
|
||||
```
|
||||
[{severity}] {title}
|
||||
Agent: {agent_name}
|
||||
Location: {file}:{line}
|
||||
|
||||
{description}
|
||||
|
||||
Recommendation:
|
||||
{recommendation}
|
||||
|
||||
---
|
||||
```
|
||||
|
||||
### Step 6: Next Steps
|
||||
|
||||
Suggest actions based on findings:
|
||||
```
|
||||
📋 RECOMMENDED NEXT STEPS:
|
||||
|
||||
If CRITICAL findings exist:
|
||||
⚠️ MUST FIX before proceeding
|
||||
- Address all critical security/correctness issues
|
||||
- Re-run review after fixes
|
||||
|
||||
If only HIGH/MEDIUM findings:
|
||||
✅ Story may proceed
|
||||
- Consider addressing high-priority items
|
||||
- Create follow-up tasks for medium items
|
||||
- Document LOW items as tech debt
|
||||
|
||||
If only LOW/INFO findings:
|
||||
✅ Code quality looks good
|
||||
- Optional: Address style/optimization suggestions
|
||||
- Proceed to completion
|
||||
```
|
||||
|
||||
## Integration with Super-Dev-Pipeline
|
||||
|
||||
This workflow is designed to be called from super-dev-pipeline step 7 (code review) when the story complexity is COMPLEX or when user explicitly requests multi-agent review.
|
||||
|
||||
**When to Use:**
|
||||
- Complex stories (≥16 tasks or high-risk keywords)
|
||||
- Stories involving security-sensitive code
|
||||
- Stories with significant architectural changes
|
||||
- When single-agent review has been inconclusive
|
||||
- User explicitly requests comprehensive review
|
||||
|
||||
**When NOT to Use:**
|
||||
- Micro stories (≤3 tasks)
|
||||
- Standard stories with simple changes
|
||||
- Stories that passed adversarial review cleanly
|
||||
|
||||
## Output Files
|
||||
|
||||
- `{review_report}`: Full review findings in markdown
|
||||
- Integrated into story completion summary
|
||||
- Referenced in audit trail
|
||||
|
||||
## Error Handling
|
||||
|
||||
If multi-agent-review skill fails:
|
||||
- Fall back to adversarial code review
|
||||
- Log the failure reason
|
||||
- Continue pipeline with warning
|
||||
|
|
@ -0,0 +1,58 @@
|
|||
name: multi-agent-review
|
||||
description: "Smart multi-agent code review with dynamic agent selection based on changed code. Uses multiple specialized AI agents to review different aspects: architecture, security, performance, testing, and code quality."
|
||||
author: "BMad"
|
||||
version: "1.0.0"
|
||||
|
||||
# Critical variables from config
|
||||
config_source: "{project-root}/_bmad/bmm/config.yaml"
|
||||
output_folder: "{config_source}:output_folder"
|
||||
sprint_artifacts: "{config_source}:sprint_artifacts"
|
||||
communication_language: "{config_source}:communication_language"
|
||||
|
||||
# Workflow components
|
||||
installed_path: "{project-root}/_bmad/bmm/workflows/4-implementation/multi-agent-review"
|
||||
instructions: "{installed_path}/instructions.md"
|
||||
|
||||
# Input parameters
|
||||
story_id: "{story_id}" # Required
|
||||
story_file: "{sprint_artifacts}/story-{story_id}.md"
|
||||
base_branch: "main" # Optional: branch to compare against
|
||||
complexity_level: "standard" # micro | standard | complex (passed from super-dev-pipeline)
|
||||
|
||||
# Complexity-based agent selection (NEW v1.0.0)
|
||||
# Cost-effective review depth based on story RISK and technical complexity
|
||||
# Complexity determined by batch-super-dev based on: risk keywords, architectural impact, security concerns
|
||||
complexity_routing:
|
||||
micro:
|
||||
agent_count: 2
|
||||
agents: ["security", "code_quality"]
|
||||
description: "Quick sanity check for low-risk stories"
|
||||
examples: ["UI tweaks", "text changes", "simple CRUD", "documentation"]
|
||||
cost_multiplier: 1x
|
||||
|
||||
standard:
|
||||
agent_count: 4
|
||||
agents: ["security", "code_quality", "architecture", "testing"]
|
||||
description: "Balanced multi-perspective review for medium-risk changes"
|
||||
examples: ["API endpoints", "business logic", "data validation", "component refactors"]
|
||||
cost_multiplier: 2x
|
||||
|
||||
complex:
|
||||
agent_count: 6
|
||||
agents: ["security", "code_quality", "architecture", "testing", "performance", "domain_expert"]
|
||||
description: "Comprehensive review for high-risk/high-complexity changes"
|
||||
examples: ["auth/security", "payments", "data migration", "architecture changes", "performance-critical", "complex algorithms"]
|
||||
cost_multiplier: 3x
|
||||
|
||||
# Review settings
|
||||
review_settings:
|
||||
fresh_context_required: true # CRITICAL: Review in new session for unbiased perspective
|
||||
agents_to_use: "complexity_based" # complexity_based | all | custom
|
||||
generate_report: true
|
||||
auto_fix_suggested: false # Set to true to automatically apply suggested fixes
|
||||
|
||||
# Output
|
||||
review_report: "{sprint_artifacts}/review-{story_id}-multi-agent.md"
|
||||
|
||||
standalone: true
|
||||
web_bundle: false
|
||||
|
|
@ -0,0 +1,273 @@
|
|||
# Revalidate Epic - Batch Story Revalidation with Semaphore Pattern
|
||||
|
||||
<critical>The workflow execution engine is governed by: {project-root}/_bmad/core/tasks/workflow.xml</critical>
|
||||
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
|
||||
|
||||
<workflow>
|
||||
|
||||
<step n="1" goal="Load sprint status and find epic stories">
|
||||
<action>Verify epic_number parameter provided</action>
|
||||
|
||||
<check if="epic_number not provided">
|
||||
<output>❌ ERROR: epic_number parameter required
|
||||
|
||||
Usage:
|
||||
/revalidate-epic epic_number=2
|
||||
/revalidate-epic epic_number=2 fill_gaps=true
|
||||
/revalidate-epic epic_number=2 fill_gaps=true max_concurrent=5
|
||||
</output>
|
||||
<action>HALT</action>
|
||||
</check>
|
||||
|
||||
<action>Read {sprint_status} file</action>
|
||||
<action>Parse development_status map</action>
|
||||
|
||||
<action>Filter stories starting with "{{epic_number}}-" (e.g., "2-1-", "2-2-", etc.)</action>
|
||||
<action>Exclude epics (keys starting with "epic-") and retrospectives</action>
|
||||
|
||||
<action>Store as: epic_stories (list of story keys)</action>
|
||||
|
||||
<check if="epic_stories is empty">
|
||||
<output>❌ No stories found for Epic {{epic_number}}
|
||||
|
||||
Check sprint-status.yaml to verify epic number is correct.
|
||||
</output>
|
||||
<action>HALT</action>
|
||||
</check>
|
||||
|
||||
<output>
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
🔍 EPIC {{epic_number}} REVALIDATION
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
**Stories Found:** {{epic_stories.length}}
|
||||
**Mode:** {{#if fill_gaps}}Verify & Fill Gaps{{else}}Verify Only{{/if}}
|
||||
**Max Concurrent:** {{max_concurrent}} agents
|
||||
**Pattern:** Semaphore (continuous worker pool)
|
||||
|
||||
**Stories to Revalidate:**
|
||||
{{#each epic_stories}}
|
||||
{{@index + 1}}. {{this}}
|
||||
{{/each}}
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
</output>
|
||||
|
||||
<ask>Proceed with revalidation? (yes/no):</ask>
|
||||
|
||||
<check if="response != 'yes'">
|
||||
<output>❌ Revalidation cancelled</output>
|
||||
<action>Exit workflow</action>
|
||||
</check>
|
||||
</step>
|
||||
|
||||
<step n="2" goal="Initialize semaphore pattern for parallel revalidation">
|
||||
<output>
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
🚀 Starting Parallel Revalidation (Semaphore Pattern)
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
</output>
|
||||
|
||||
<action>Initialize worker pool state:</action>
|
||||
<action>
|
||||
- story_queue = epic_stories
|
||||
- active_workers = {}
|
||||
- completed_stories = []
|
||||
- failed_stories = []
|
||||
- verification_results = {}
|
||||
- next_story_index = 0
|
||||
- max_workers = {{max_concurrent}}
|
||||
</action>
|
||||
|
||||
<action>Fill initial worker slots:</action>
|
||||
|
||||
<iterate>While next_story_index < min(max_workers, story_queue.length):</iterate>
|
||||
|
||||
<action>
|
||||
story_key = story_queue[next_story_index]
|
||||
story_file = {sprint_artifacts}/{{story_key}}.md # Try multiple naming patterns if needed
|
||||
worker_id = next_story_index + 1
|
||||
|
||||
Spawn Task agent:
|
||||
- subagent_type: "general-purpose"
|
||||
- description: "Revalidate story {{story_key}}"
|
||||
- prompt: "Execute revalidate-story workflow for {{story_key}}.
|
||||
|
||||
CRITICAL INSTRUCTIONS:
|
||||
1. Load workflow: _bmad/bmm/workflows/4-implementation/revalidate-story/workflow.yaml
|
||||
2. Parameters: story_file={{story_file}}, fill_gaps={{fill_gaps}}
|
||||
3. Clear all checkboxes
|
||||
4. Verify each AC/Task/DoD against codebase
|
||||
5. Re-check verified items
|
||||
6. Report gaps
|
||||
{{#if fill_gaps}}7. Fill gaps and commit{{/if}}
|
||||
8. Return verification summary"
|
||||
- run_in_background: true
|
||||
|
||||
Store in active_workers[worker_id]:
|
||||
story_key: {{story_key}}
|
||||
task_id: {{returned_task_id}}
|
||||
started_at: {{timestamp}}
|
||||
</action>
|
||||
|
||||
<action>Increment next_story_index</action>
|
||||
|
||||
<output>
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
✅ {{active_workers.size}} workers active
|
||||
📋 {{story_queue.length - next_story_index}} stories queued
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
</output>
|
||||
</step>
|
||||
|
||||
<step n="3" goal="Maintain worker pool until all stories revalidated">
|
||||
<critical>SEMAPHORE PATTERN: Keep {{max_workers}} agents running continuously</critical>
|
||||
|
||||
<iterate>While active_workers.size > 0 OR next_story_index < story_queue.length:</iterate>
|
||||
|
||||
<action>Poll for completed workers (non-blocking):</action>
|
||||
|
||||
<iterate>For each worker_id in active_workers:</iterate>
|
||||
|
||||
<action>Check worker status using TaskOutput(task_id, block=false)</action>
|
||||
|
||||
<check if="worker completed successfully">
|
||||
<action>Get verification results from worker output</action>
|
||||
<action>Parse: verified_pct, gaps_found, gaps_filled</action>
|
||||
|
||||
<action>Store in verification_results[story_key]</action>
|
||||
<action>Add to completed_stories</action>
|
||||
<action>Remove from active_workers</action>
|
||||
|
||||
<output>✅ Worker {{worker_id}}: {{story_key}} → {{verified_pct}}% verified{{#if gaps_filled > 0}}, {{gaps_filled}} gaps filled{{/if}}</output>
|
||||
|
||||
<check if="next_story_index < story_queue.length">
|
||||
<action>Refill slot with next story (same pattern as batch-super-dev)</action>
|
||||
<output>🔄 Worker {{worker_id}} refilled: {{next_story_key}}</output>
|
||||
</check>
|
||||
</check>
|
||||
|
||||
<check if="worker failed">
|
||||
<action>Add to failed_stories with error</action>
|
||||
<action>Remove from active_workers</action>
|
||||
<output>❌ Worker {{worker_id}}: {{story_key}} failed</output>
|
||||
|
||||
<check if="continue_on_failure AND next_story_index < story_queue.length">
|
||||
<action>Refill slot despite failure</action>
|
||||
</check>
|
||||
</check>
|
||||
|
||||
<action>Display live progress every 30 seconds:</action>
|
||||
<output>
|
||||
📊 Live Progress: {{completed_stories.length}} completed, {{active_workers.size}} active, {{story_queue.length - next_story_index}} queued
|
||||
</output>
|
||||
|
||||
<action>Sleep 5 seconds before next poll</action>
|
||||
</step>
|
||||
|
||||
<step n="4" goal="Generate epic-level summary">
|
||||
<action>Aggregate verification results across all stories:</action>
|
||||
<action>
|
||||
epic_total_items = sum of all items across stories
|
||||
epic_verified = sum of verified items
|
||||
epic_partial = sum of partial items
|
||||
epic_missing = sum of missing items
|
||||
epic_gaps_filled = sum of gaps filled
|
||||
|
||||
epic_verified_pct = (epic_verified / epic_total_items) × 100
|
||||
</action>
|
||||
|
||||
<action>Group stories by verification percentage:</action>
|
||||
<action>
|
||||
- complete_stories (≥95% verified)
|
||||
- mostly_complete_stories (80-94% verified)
|
||||
- partial_stories (50-79% verified)
|
||||
- incomplete_stories (<50% verified)
|
||||
</action>
|
||||
|
||||
<output>
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
📊 EPIC {{epic_number}} REVALIDATION SUMMARY
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
**Total Stories:** {{epic_stories.length}}
|
||||
**Completed:** {{completed_stories.length}}
|
||||
**Failed:** {{failed_stories.length}}
|
||||
|
||||
**Epic-Wide Verification:**
|
||||
- ✅ Verified: {{epic_verified}}/{{epic_total_items}} ({{epic_verified_pct}}%)
|
||||
- 🔶 Partial: {{epic_partial}}/{{epic_total_items}}
|
||||
- ❌ Missing: {{epic_missing}}/{{epic_total_items}}
|
||||
{{#if fill_gaps}}- 🔧 Gaps Filled: {{epic_gaps_filled}}{{/if}}
|
||||
|
||||
**Story Health:**
|
||||
- ✅ Complete (≥95%): {{complete_stories.length}} stories
|
||||
- 🔶 Mostly Complete (80-94%): {{mostly_complete_stories.length}} stories
|
||||
- ⚠️ Partial (50-79%): {{partial_stories.length}} stories
|
||||
- ❌ Incomplete (<50%): {{incomplete_stories.length}} stories
|
||||
|
||||
---
|
||||
|
||||
**Complete Stories (≥95% verified):**
|
||||
{{#each complete_stories}}
|
||||
- {{story_key}}: {{verified_pct}}% verified
|
||||
{{/each}}
|
||||
|
||||
{{#if mostly_complete_stories.length > 0}}
|
||||
**Mostly Complete Stories (80-94%):**
|
||||
{{#each mostly_complete_stories}}
|
||||
- {{story_key}}: {{verified_pct}}% verified ({{gaps_count}} gaps{{#if gaps_filled > 0}}, {{gaps_filled}} filled{{/if}})
|
||||
{{/each}}
|
||||
{{/if}}
|
||||
|
||||
{{#if partial_stories.length > 0}}
|
||||
**⚠️ Partial Stories (50-79%):**
|
||||
{{#each partial_stories}}
|
||||
- {{story_key}}: {{verified_pct}}% verified ({{gaps_count}} gaps{{#if gaps_filled > 0}}, {{gaps_filled}} filled{{/if}})
|
||||
{{/each}}
|
||||
|
||||
Recommendation: Continue development on these stories
|
||||
{{/if}}
|
||||
|
||||
{{#if incomplete_stories.length > 0}}
|
||||
**❌ Incomplete Stories (<50%):**
|
||||
{{#each incomplete_stories}}
|
||||
- {{story_key}}: {{verified_pct}}% verified ({{gaps_count}} gaps{{#if gaps_filled > 0}}, {{gaps_filled}} filled{{/if}})
|
||||
{{/each}}
|
||||
|
||||
Recommendation: Re-implement these stories from scratch
|
||||
{{/if}}
|
||||
|
||||
{{#if failed_stories.length > 0}}
|
||||
**❌ Failed Revalidations:**
|
||||
{{#each failed_stories}}
|
||||
- {{story_key}}: {{error}}
|
||||
{{/each}}
|
||||
{{/if}}
|
||||
|
||||
---
|
||||
|
||||
**Epic Health Score:** {{epic_verified_pct}}/100
|
||||
|
||||
{{#if epic_verified_pct >= 95}}
|
||||
✅ Epic is COMPLETE and verified
|
||||
{{else if epic_verified_pct >= 80}}
|
||||
🔶 Epic is MOSTLY COMPLETE ({{epic_missing}} items need attention)
|
||||
{{else if epic_verified_pct >= 50}}
|
||||
⚠️ Epic is PARTIALLY COMPLETE (significant gaps remain)
|
||||
{{else}}
|
||||
❌ Epic is INCOMPLETE (major rework needed)
|
||||
{{/if}}
|
||||
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
</output>
|
||||
|
||||
<check if="create_epic_report == true">
|
||||
<action>Write epic summary to: {sprint_artifacts}/revalidation-epic-{{epic_number}}-{{timestamp}}.md</action>
|
||||
<output>📄 Epic report: {{report_path}}</output>
|
||||
</check>
|
||||
|
||||
<check if="update_sprint_status == true">
|
||||
<action>Update sprint-status.yaml with revalidation timestamp and results</action>
|
||||
<action>Add comment to epic entry: # Revalidated: {{epic_verified_pct}}% verified ({{timestamp}})</action>
|
||||
</check>
|
||||
</step>
|
||||
|
||||
</workflow>
|
||||
|
|
@@ -0,0 +1,44 @@
|
|||
name: revalidate-epic
|
||||
description: "Batch revalidation of all stories in an epic. Clears checkboxes and re-verifies against codebase with semaphore pattern."
|
||||
author: "BMad"
|
||||
version: "1.0.0"
|
||||
|
||||
# Critical variables from config
|
||||
config_source: "{project-root}/_bmad/bmm/config.yaml"
|
||||
output_folder: "{config_source}:output_folder"
|
||||
sprint_artifacts: "{output_folder}/sprint-artifacts"
|
||||
sprint_status: "{output_folder}/sprint-status.yaml"
|
||||
|
||||
# Input parameters
|
||||
epic_number: "{epic_number}" # Required: Epic number (e.g., "2" for Epic 2)
|
||||
fill_gaps: false # Optional: Fill missing items after verification
|
||||
max_concurrent: 3 # Optional: Max concurrent revalidation agents (default: 3)
|
||||
|
||||
# Verification settings (inherited by story revalidations)
|
||||
verification:
|
||||
verify_acceptance_criteria: true
|
||||
verify_tasks: true
|
||||
verify_definition_of_done: true
|
||||
check_for_stubs: true
|
||||
require_tests: true
|
||||
|
||||
# Gap filling settings
|
||||
gap_filling:
|
||||
max_gaps_per_story: 10 # Safety limit per story
|
||||
require_confirmation_first_story: true # Ask on first story, then auto for rest
|
||||
run_tests_after_each: true
|
||||
commit_strategy: "per_gap" # "per_gap" | "per_story" | "all_at_once"
|
||||
|
||||
# Execution settings
|
||||
execution:
|
||||
use_semaphore_pattern: true # Constant concurrency (not batch-and-wait)
|
||||
continue_on_failure: true # Keep processing if one story fails
|
||||
display_live_progress: true # Show progress updates every 30s
|
||||
|
||||
# Output settings
|
||||
output:
|
||||
create_epic_report: true # Generate epic-level summary
|
||||
create_story_reports: true # Generate per-story reports
|
||||
update_sprint_status: true # Update progress in sprint-status.yaml
|
||||
|
||||
standalone: true
|
||||
|
|
@@ -0,0 +1,510 @@
|
|||
# Revalidate Story - Verify Checkboxes Against Codebase Reality
|
||||
|
||||
<critical>The workflow execution engine is governed by: {project-root}/_bmad/core/tasks/workflow.xml</critical>
|
||||
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
|
||||
|
||||
<workflow>
|
||||
|
||||
<step n="1" goal="Load story and backup current state">
|
||||
<action>Verify story_file parameter provided</action>
|
||||
|
||||
<check if="story_file not provided">
|
||||
<output>❌ ERROR: story_file parameter required
|
||||
|
||||
Usage:
|
||||
/revalidate-story story_file=path/to/story.md
|
||||
/revalidate-story story_file=path/to/story.md fill_gaps=true
|
||||
</output>
|
||||
<action>HALT</action>
|
||||
</check>
|
||||
|
||||
<action>Read COMPLETE story file: {{story_file}}</action>
|
||||
<action>Parse sections: Acceptance Criteria, Tasks/Subtasks, Definition of Done, Dev Agent Record</action>
|
||||
|
||||
<action>Extract story_key from filename (e.g., "2-7-image-file-handling")</action>
|
||||
|
||||
<action>Create backup of current checkbox state:</action>
|
||||
<action>Count currently checked items:
|
||||
- ac_checked_before = count of [x] in Acceptance Criteria
|
||||
- tasks_checked_before = count of [x] in Tasks/Subtasks
|
||||
- dod_checked_before = count of [x] in Definition of Done
|
||||
- total_checked_before = sum of above
|
||||
</action>
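A sketch of that counting, assuming the story uses standard markdown checkboxes (`- [x]` / `- [~]` / `- [ ]`) under `##` section headings:

```typescript
// Count checked vs. total checkbox items inside one "## Section" of a story file.
function countCheckboxes(storyText: string, sectionHeading: string) {
  const lines = storyText.split("\n");
  const start = lines.findIndex((l) => l.trim().startsWith(`## ${sectionHeading}`));
  if (start === -1) return { checked: 0, total: 0 };
  const end = lines.findIndex((l, i) => i > start && /^##\s/.test(l));
  const section = lines.slice(start + 1, end === -1 ? undefined : end);
  const checked = section.filter((l) => /^\s*[-*]\s+\[[xX]\]/.test(l)).length;
  const total = section.filter((l) => /^\s*[-*]\s+\[[ xX~]\]/.test(l)).length;
  return { checked, total };
}
```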
|
||||
|
||||
<output>
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
🔍 STORY REVALIDATION STARTED
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
**Story:** {{story_key}}
|
||||
**File:** {{story_file}}
|
||||
**Mode:** {{#if fill_gaps}}Verify & Fill Gaps{{else}}Verify Only{{/if}}
|
||||
|
||||
**Current State:**
|
||||
- Acceptance Criteria: {{ac_checked_before}}/{{ac_total}} checked
|
||||
- Tasks: {{tasks_checked_before}}/{{tasks_total}} checked
|
||||
- Definition of Done: {{dod_checked_before}}/{{dod_total}} checked
|
||||
- **Total:** {{total_checked_before}}/{{total_items}} ({{pct_before}}%)
|
||||
|
||||
**Action:** Clearing all checkboxes and re-verifying against codebase...
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
</output>
|
||||
</step>
|
||||
|
||||
<step n="2" goal="Clear all checkboxes">
|
||||
<output>🧹 Clearing all checkboxes to start fresh verification...</output>
|
||||
|
||||
<action>Use Edit tool to replace all [x] with [ ] in Acceptance Criteria section</action>
|
||||
<action>Use Edit tool to replace all [x] with [ ] in Tasks/Subtasks section</action>
|
||||
<action>Use Edit tool to replace all [x] with [ ] in Definition of Done section</action>
|
||||
|
||||
<action>Save story file with all boxes unchecked</action>
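The clearing itself is a scoped substitution; a minimal sketch (section slicing handled as in the counting example in Step 1):

```typescript
// Reset every checked or partial checkbox back to unchecked within a block of text.
function clearCheckboxes(sectionText: string): string {
  return sectionText.replace(/^(\s*[-*]\s+)\[[xX~]\]/gm, "$1[ ]");
}
```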
|
||||
|
||||
<output>✅ All checkboxes cleared. Starting verification from clean slate...</output>
|
||||
</step>
|
||||
|
||||
<step n="3" goal="Verify Acceptance Criteria against codebase">
|
||||
<output>
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
📋 VERIFYING ACCEPTANCE CRITERIA ({{ac_total}} items)
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
</output>
|
||||
|
||||
<action>Extract all AC items from Acceptance Criteria section</action>
|
||||
|
||||
<iterate>For each AC item:</iterate>
|
||||
|
||||
<substep n="3a" title="Parse AC and determine what should exist">
|
||||
<action>Extract AC description and identify artifacts:
|
||||
- File mentions (e.g., "UserProfile component")
|
||||
- Function names (e.g., "updateUser function")
|
||||
- Features (e.g., "dark mode toggle")
|
||||
- Test requirements (e.g., "unit tests covering edge cases")
|
||||
</action>
|
||||
|
||||
<output>Verifying AC{{@index}}: {{ac_description}}</output>
|
||||
</substep>
|
||||
|
||||
<substep n="3b" title="Search codebase for evidence">
|
||||
<action>Use Glob to find relevant files:
|
||||
- If AC mentions specific file: glob for that file
|
||||
- If AC mentions component: glob for **/*ComponentName*
|
||||
- If AC mentions feature: glob for files in related directories
|
||||
</action>
|
||||
|
||||
<action>Use Grep to search for symbols/functions/features</action>
|
||||
|
||||
<action>Read found files to verify:</action>
|
||||
<action>- NOT a stub (check for "TODO", "Not implemented", "throw new Error")</action>
|
||||
<action>- Has actual implementation (not just empty function)</action>
|
||||
<action>- Tests exist (search for *.test.* or *.spec.* files)</action>
|
||||
<action>- Tests pass (if fill_gaps mode, run tests)</action>
|
||||
</substep>
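The stub check can stay a simple heuristic; a sketch (the marker list mirrors the bullets above and is not exhaustive):

```typescript
// Heuristic stub detection for a candidate implementation file.
const STUB_MARKERS = [/\bTODO\b/, /not implemented/i, /throw new Error\(["']not implemented/i];

function looksLikeStub(fileContents: string): boolean {
  const nonEmpty = fileContents.split("\n").filter((l) => l.trim().length > 0);
  if (nonEmpty.length < 3) return true; // effectively an empty file
  return STUB_MARKERS.some((re) => re.test(fileContents));
}
```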
|
||||
|
||||
<substep n="3c" title="Determine verification status">
|
||||
<check if="all evidence found AND no stubs AND tests exist">
|
||||
<action>verification_status = VERIFIED</action>
|
||||
<action>Check box [x] in story file for this AC</action>
|
||||
<action>Record evidence: "✅ VERIFIED: {{files_found}}, tests: {{test_files}}"</action>
|
||||
<output> ✅ AC{{@index}}: VERIFIED</output>
|
||||
</check>
|
||||
|
||||
<check if="partial evidence OR stubs found OR tests missing">
|
||||
<action>verification_status = PARTIAL</action>
|
||||
<action>Check box [~] in story file for this AC</action>
|
||||
<action>Record gap: "🔶 PARTIAL: {{what_exists}}, missing: {{what_is_missing}}"</action>
|
||||
<output> 🔶 AC{{@index}}: PARTIAL ({{what_is_missing}})</output>
|
||||
<action>Add to gaps_list with details</action>
|
||||
</check>
|
||||
|
||||
<check if="no evidence found">
|
||||
<action>verification_status = MISSING</action>
|
||||
<action>Leave box unchecked [ ] in story file</action>
|
||||
<action>Record gap: "❌ MISSING: No implementation found for {{ac_description}}"</action>
|
||||
<output> ❌ AC{{@index}}: MISSING</output>
|
||||
<action>Add to gaps_list with details</action>
|
||||
</check>
|
||||
</substep>
|
||||
|
||||
<action>Save story file after each AC verification</action>
|
||||
|
||||
<output>
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
Acceptance Criteria Verification Complete
|
||||
✅ Verified: {{ac_verified}}
|
||||
🔶 Partial: {{ac_partial}}
|
||||
❌ Missing: {{ac_missing}}
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
</output>
|
||||
</step>
|
||||
|
||||
<step n="4" goal="Verify Tasks/Subtasks against codebase">
|
||||
<output>
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
📋 VERIFYING TASKS ({{tasks_total}} items)
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
</output>
|
||||
|
||||
<action>Extract all Task items from Tasks/Subtasks section</action>
|
||||
|
||||
<iterate>For each Task item (same verification logic as ACs):</iterate>
|
||||
|
||||
<action>Parse task description for artifacts</action>
|
||||
<action>Search codebase with Glob/Grep</action>
|
||||
<action>Read and verify (check for stubs, tests)</action>
|
||||
<action>Determine status: VERIFIED | PARTIAL | MISSING</action>
|
||||
<action>Update checkbox: [x] | [~] | [ ]</action>
|
||||
<action>Record evidence or gap</action>
|
||||
<action>Save story file</action>
|
||||
|
||||
<output>
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
Tasks Verification Complete
|
||||
✅ Verified: {{tasks_verified}}
|
||||
🔶 Partial: {{tasks_partial}}
|
||||
❌ Missing: {{tasks_missing}}
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
</output>
|
||||
</step>
|
||||
|
||||
<step n="5" goal="Verify Definition of Done against codebase">
|
||||
<output>
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
📋 VERIFYING DEFINITION OF DONE ({{dod_total}} items)
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
</output>
|
||||
|
||||
<action>Extract all DoD items from Definition of Done section</action>
|
||||
|
||||
<iterate>For each DoD item:</iterate>
|
||||
|
||||
<action>Parse DoD requirement:
|
||||
- "Type check passes" → Run type checker
|
||||
- "Unit tests 90%+ coverage" → Run coverage report
|
||||
- "Linting clean" → Run linter
|
||||
- "Build succeeds" → Run build
|
||||
- "All tests pass" → Run test suite
|
||||
</action>
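A hedged mapping of common DoD phrasings to shell commands; the npm/tsc commands below are assumptions, since the real commands depend on the project's toolchain:

```typescript
// Map a DoD item's wording to a verification command (commands are project-specific assumptions).
const DOD_CHECKS: { pattern: RegExp; command: string }[] = [
  { pattern: /type check/i, command: "npx tsc --noEmit" },
  { pattern: /coverage/i, command: "npm test -- --coverage" },
  { pattern: /lint/i, command: "npm run lint" },
  { pattern: /build succeeds/i, command: "npm run build" },
  { pattern: /tests? pass/i, command: "npm test" },
];

function commandFor(dodItem: string): string | undefined {
  return DOD_CHECKS.find((c) => c.pattern.test(dodItem))?.command;
}
```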
|
||||
|
||||
<action>Execute verification for this DoD item</action>
|
||||
|
||||
<check if="verification passes">
|
||||
<action>Check box [x]</action>
|
||||
<action>Record: "✅ VERIFIED: {{verification_result}}"</action>
|
||||
</check>
|
||||
|
||||
<check if="verification fails or N/A">
|
||||
<action>Leave unchecked [ ] or partial [~]</action>
|
||||
<action>Record gap if applicable</action>
|
||||
</check>
|
||||
|
||||
<output>
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
Definition of Done Verification Complete
|
||||
✅ Verified: {{dod_verified}}
|
||||
🔶 Partial: {{dod_partial}}
|
||||
❌ Missing: {{dod_missing}}
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
</output>
|
||||
</step>
|
||||
|
||||
<step n="6" goal="Generate revalidation report">
|
||||
<action>Calculate overall completion:</action>
|
||||
<action>
|
||||
total_verified = ac_verified + tasks_verified + dod_verified
|
||||
total_partial = ac_partial + tasks_partial + dod_partial
|
||||
total_missing = ac_missing + tasks_missing + dod_missing
|
||||
total_items = ac_total + tasks_total + dod_total
|
||||
|
||||
verified_pct = (total_verified / total_items) × 100
|
||||
completion_pct = ((total_verified + total_partial) / total_items) × 100
|
||||
</action>
|
||||
|
||||
<output>
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
📊 REVALIDATION SUMMARY
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
**Story:** {{story_key}}
|
||||
**File:** {{story_file}}
|
||||
|
||||
**Verification Results:**
|
||||
- ✅ Verified Complete: {{total_verified}}/{{total_items}} ({{verified_pct}}%)
|
||||
- 🔶 Partially Complete: {{total_partial}}/{{total_items}}
|
||||
- ❌ Missing/Incomplete: {{total_missing}}/{{total_items}}
|
||||
|
||||
**Breakdown:**
|
||||
- Acceptance Criteria: {{ac_verified}}✅ {{ac_partial}}🔶 {{ac_missing}}❌ / {{ac_total}} total
|
||||
- Tasks: {{tasks_verified}}✅ {{tasks_partial}}🔶 {{tasks_missing}}❌ / {{tasks_total}} total
|
||||
- Definition of Done: {{dod_verified}}✅ {{dod_partial}}🔶 {{dod_missing}}❌ / {{dod_total}} total
|
||||
|
||||
**Status Assessment:**
|
||||
{{#if verified_pct >= 95}}
|
||||
✅ Story is COMPLETE ({{verified_pct}}% verified)
|
||||
{{else if verified_pct >= 80}}
|
||||
🔶 Story is MOSTLY COMPLETE ({{verified_pct}}% verified, {{total_missing}} gaps)
|
||||
{{else if verified_pct >= 50}}
|
||||
⚠️ Story is PARTIALLY COMPLETE ({{verified_pct}}% verified, {{total_missing}} gaps)
|
||||
{{else}}
|
||||
❌ Story is INCOMPLETE ({{verified_pct}}% verified, significant work missing)
|
||||
{{/if}}
|
||||
|
||||
**Before Revalidation:** {{total_checked_before}}/{{total_items}} checked ({{pct_before}}%)
|
||||
**After Revalidation:** {{total_verified}}/{{total_items}} verified ({{verified_pct}}%)
|
||||
**Accuracy:** {{#if pct_before == verified_pct}}Perfect match{{else if pct_before > verified_pct}}{{pct_before - verified_pct}}% over-reported{{else}}{{verified_pct - pct_before}}% under-reported{{/if}}
|
||||
|
||||
{{#if total_missing > 0}}
|
||||
---
|
||||
**Gaps Found ({{total_missing}}):**
|
||||
{{#each gaps_list}}
|
||||
{{@index + 1}}. {{item_type}} - {{item_description}}
|
||||
Status: {{status}}
|
||||
Missing: {{what_is_missing}}
|
||||
{{#if evidence}}Evidence checked: {{evidence}}{{/if}}
|
||||
{{/each}}
|
||||
---
|
||||
{{/if}}
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
</output>
|
||||
|
||||
<check if="create_report == true">
|
||||
<action>Write detailed report to: {sprint_artifacts}/revalidation-{{story_key}}-{{timestamp}}.md</action>
|
||||
<action>Include: verification results, gaps list, evidence for each item, recommendations</action>
|
||||
<output>📄 Detailed report: {{report_path}}</output>
|
||||
</check>
|
||||
</step>
|
||||
|
||||
<step n="7" goal="Decide on gap filling">
|
||||
<check if="fill_gaps == false">
|
||||
<output>
|
||||
✅ Verification complete (verify-only mode)
|
||||
|
||||
{{#if total_missing > 0}}
|
||||
**To fill the {{total_missing}} gaps, run:**
|
||||
/revalidate-story story_file={{story_file}} fill_gaps=true
|
||||
{{else}}
|
||||
No gaps found - story is complete!
|
||||
{{/if}}
|
||||
</output>
|
||||
<action>Exit workflow</action>
|
||||
</check>
|
||||
|
||||
<check if="fill_gaps == true AND total_missing == 0">
|
||||
<output>✅ No gaps to fill - story is already complete!</output>
|
||||
<action>Exit workflow</action>
|
||||
</check>
|
||||
|
||||
<check if="fill_gaps == true AND total_missing > 0">
|
||||
<check if="total_missing > max_gaps_to_fill">
|
||||
<output>
|
||||
⚠️ TOO MANY GAPS: {{total_missing}} gaps found (max: {{max_gaps_to_fill}})
|
||||
|
||||
This story has too many missing items for automatic gap filling.
|
||||
Consider:
|
||||
1. Re-implementing the story from scratch with /dev-story
|
||||
2. Manually implementing the gaps
|
||||
3. Increasing max_gaps_to_fill in workflow.yaml (use cautiously)
|
||||
|
||||
Gap filling HALTED for safety.
|
||||
</output>
|
||||
<action>HALT</action>
|
||||
</check>
|
||||
|
||||
<output>
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
🔧 GAP FILLING MODE ({{total_missing}} gaps to fill)
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
</output>
|
||||
|
||||
<action>Continue to Step 8</action>
|
||||
</check>
|
||||
</step>
|
||||
|
||||
<step n="8" goal="Fill gaps (implement missing items)">
|
||||
<iterate>For each gap in gaps_list:</iterate>
|
||||
|
||||
<substep n="8a" title="Confirm gap filling">
|
||||
<check if="require_confirmation == true">
|
||||
<ask>
|
||||
Fill this gap?
|
||||
|
||||
**Item:** {{item_description}}
|
||||
**Type:** {{item_type}} ({{section}})
|
||||
**Missing:** {{what_is_missing}}
|
||||
|
||||
[Y] Yes - Implement this item
|
||||
[A] Auto-fill - Implement this and all remaining gaps without asking
|
||||
[S] Skip - Leave this gap unfilled
|
||||
[H] Halt - Stop gap filling
|
||||
|
||||
Your choice:
|
||||
</ask>
|
||||
|
||||
<check if="choice == 'A'">
|
||||
<action>Set require_confirmation = false (auto-fill remaining)</action>
|
||||
</check>
|
||||
|
||||
<check if="choice == 'S'">
|
||||
<action>Continue to next gap</action>
|
||||
</check>
|
||||
|
||||
<check if="choice == 'H'">
|
||||
<action>Exit gap filling loop</action>
|
||||
<action>Jump to Step 9 (Re-verify and finalize)</action>
|
||||
</check>
|
||||
</check>
|
||||
</substep>
|
||||
|
||||
<substep n="8b" title="Implement missing item">
|
||||
<output>🔧 Implementing: {{item_description}}</output>
|
||||
|
||||
<action>Load story context (Technical Requirements, Architecture Compliance, Dev Notes)</action>
|
||||
<action>Implement missing item following story specifications</action>
|
||||
<action>Write tests if required</action>
|
||||
<action>Run tests to verify implementation</action>
|
||||
<action>Verify linting/type checking passes</action>
|
||||
|
||||
<check if="implementation succeeds AND tests pass">
|
||||
<action>Check box [x] for this item in story file</action>
|
||||
<action>Update File List with new/modified files</action>
|
||||
<action>Add to Dev Agent Record: "Gap filled: {{item_description}}"</action>
|
||||
<output> ✅ Implemented and verified</output>
|
||||
|
||||
<check if="commit_strategy == 'per_gap'">
|
||||
<action>Stage files for this gap</action>
|
||||
<action>Commit: "fix({{story_key}}): fill gap - {{item_description}}"</action>
|
||||
<output> ✅ Committed</output>
|
||||
</check>
|
||||
</check>
|
||||
|
||||
<check if="implementation fails">
|
||||
<output> ❌ Failed to implement: {{error_message}}</output>
|
||||
<action>Leave box unchecked</action>
|
||||
<action>Record failure in gaps_list</action>
|
||||
<action>Add to failed_gaps</action>
|
||||
</check>
|
||||
</substep>
|
||||
|
||||
<action>After all gaps processed:</action>
|
||||
<output>
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
Gap Filling Complete
|
||||
✅ Filled: {{gaps_filled}}
|
||||
❌ Failed: {{gaps_failed}}
|
||||
⏭️ Skipped: {{gaps_skipped}}
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
</output>
|
||||
</step>
|
||||
|
||||
<step n="9" goal="Re-verify filled gaps and finalize">
|
||||
<check if="gaps_filled > 0">
|
||||
<output>🔍 Re-verifying filled gaps...</output>
|
||||
|
||||
<iterate>For each filled gap:</iterate>
|
||||
<action>Re-run verification for that item</action>
|
||||
<action>Ensure still VERIFIED after all changes</action>
|
||||
|
||||
<output>✅ All filled gaps re-verified</output>
|
||||
</check>
|
||||
|
||||
<action>Calculate final completion:</action>
|
||||
<action>
|
||||
final_verified = count of [x] across all sections
|
||||
final_partial = count of [~] across all sections
|
||||
final_missing = count of [ ] across all sections
|
||||
final_pct = (final_verified / total_items) × 100
|
||||
</action>
|
||||
|
||||
<check if="commit_strategy == 'all_at_once' AND gaps_filled > 0">
|
||||
<action>Stage all changed files</action>
|
||||
<action>Commit: "fix({{story_key}}): fill {{gaps_filled}} gaps from revalidation"</action>
|
||||
<output>✅ All gaps committed</output>
|
||||
</check>
|
||||
|
||||
<check if="update_sprint_status == true">
|
||||
<action>Load {sprint_status} file</action>
|
||||
<action>Update entry with current progress:</action>
|
||||
<action>Format: {{story_key}}: {{current_status}} # Revalidated: {{final_verified}}/{{total_items}} ({{final_pct}}%) verified</action>
|
||||
<action>Save sprint-status.yaml</action>
|
||||
<output>✅ Sprint status updated with revalidation results</output>
|
||||
</check>
|
||||
|
||||
<check if="update_dev_agent_record == true">
|
||||
<action>Add to Dev Agent Record in story file:</action>
|
||||
<action>
|
||||
## Revalidation Record ({{timestamp}})
|
||||
|
||||
**Revalidation Mode:** {{#if fill_gaps}}Verify & Fill{{else}}Verify Only{{/if}}
|
||||
|
||||
**Results:**
|
||||
- Verified: {{final_verified}}/{{total_items}} ({{final_pct}}%)
|
||||
- Gaps Found: {{total_missing}}
|
||||
- Gaps Filled: {{gaps_filled}}
|
||||
|
||||
**Evidence:**
|
||||
{{#each verification_evidence}}
|
||||
- {{item}}: {{evidence}}
|
||||
{{/each}}
|
||||
|
||||
{{#if gaps_filled > 0}}
|
||||
**Gaps Filled:**
|
||||
{{#each filled_gaps}}
|
||||
- {{item}}: {{what_was_implemented}}
|
||||
{{/each}}
|
||||
{{/if}}
|
||||
|
||||
{{#if failed_gaps.length > 0}}
|
||||
**Failed to Fill:**
|
||||
{{#each failed_gaps}}
|
||||
- {{item}}: {{error}}
|
||||
{{/each}}
|
||||
{{/if}}
|
||||
</action>
|
||||
<action>Save story file</action>
|
||||
</check>
|
||||
</step>
|
||||
|
||||
<step n="10" goal="Final summary and recommendations">
|
||||
<output>
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
✅ REVALIDATION COMPLETE
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
**Story:** {{story_key}}
|
||||
|
||||
**Final Status:**
|
||||
- ✅ Verified Complete: {{final_verified}}/{{total_items}} ({{final_pct}}%)
|
||||
- 🔶 Partially Complete: {{final_partial}}/{{total_items}}
|
||||
- ❌ Missing/Incomplete: {{final_missing}}/{{total_items}}
|
||||
|
||||
{{#if fill_gaps}}
|
||||
**Gap Filling Results:**
|
||||
- Filled: {{gaps_filled}}
|
||||
- Failed: {{gaps_failed}}
|
||||
- Skipped: {{gaps_skipped}}
|
||||
{{/if}}
|
||||
|
||||
**Accuracy Check:**
|
||||
- Before revalidation: {{pct_before}}% checked
|
||||
- After revalidation: {{final_pct}}% verified
|
||||
- Checkbox accuracy: {{#if pct_before == final_pct}}✅ Perfect (0% discrepancy){{else if pct_before > final_pct}}⚠️ {{pct_before - final_pct}}% over-reported (checkboxes were optimistic){{else}}🔶 {{final_pct - pct_before}}% under-reported (work done but not checked){{/if}}
|
||||
|
||||
{{#if final_pct >= 95}}
|
||||
**Recommendation:** Story is COMPLETE - mark as "done" or "review"
|
||||
{{else if final_pct >= 80}}
|
||||
**Recommendation:** Story is mostly complete - finish remaining {{final_missing}} items then mark "review"
|
||||
{{else if final_pct >= 50}}
|
||||
**Recommendation:** Story has significant gaps - continue development with /dev-story
|
||||
{{else}}
|
||||
**Recommendation:** Story is mostly incomplete - consider re-implementing with /dev-story or /super-dev-pipeline
|
||||
{{/if}}
|
||||
|
||||
{{#if failed_gaps.length > 0}}
|
||||
**⚠️ Manual attention needed for {{failed_gaps.length}} items that failed to fill automatically**
|
||||
{{/if}}
|
||||
|
||||
{{#if create_report}}
|
||||
**Detailed Report:** {sprint_artifacts}/revalidation-{{story_key}}-{{timestamp}}.md
|
||||
{{/if}}
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
</output>
|
||||
</step>
|
||||
|
||||
</workflow>
|
||||
|
|
@@ -0,0 +1,37 @@
|
|||
name: revalidate-story
|
||||
description: "Clear checkboxes and re-verify story against actual codebase implementation. Identifies gaps and optionally fills them."
|
||||
author: "BMad"
|
||||
version: "1.0.0"
|
||||
|
||||
# Critical variables from config
|
||||
config_source: "{project-root}/_bmad/bmm/config.yaml"
|
||||
output_folder: "{config_source}:output_folder"
|
||||
sprint_artifacts: "{output_folder}/sprint-artifacts"
|
||||
|
||||
# Input parameters
|
||||
story_file: "{story_file}" # Required: Full path to story file
|
||||
fill_gaps: false # Optional: Fill missing items after verification (default: verify-only)
|
||||
auto_commit: false # Optional: Auto-commit filled gaps (default: prompt)
|
||||
|
||||
# Verification settings
|
||||
verification:
|
||||
verify_acceptance_criteria: true
|
||||
verify_tasks: true
|
||||
verify_definition_of_done: true
|
||||
check_for_stubs: true # Reject stub implementations (TODO, Not implemented, etc.)
|
||||
require_tests: true # Require tests for code items
|
||||
|
||||
# Gap filling settings (only used if fill_gaps=true)
|
||||
gap_filling:
|
||||
max_gaps_to_fill: 10 # Safety limit - HALT if more gaps than this
|
||||
require_confirmation: true # Ask before filling each gap (false = auto-fill all)
|
||||
run_tests_after_each: true # Verify each filled gap works
|
||||
commit_strategy: "per_gap" # "per_gap" | "all_at_once" | "none"
|
||||
|
||||
# Output settings
|
||||
output:
|
||||
create_report: true # Generate revalidation-report.md
|
||||
update_dev_agent_record: true # Add revalidation notes to story
|
||||
update_sprint_status: true # Update progress in sprint-status.yaml
|
||||
|
||||
standalone: false
|
||||
|
|
@@ -0,0 +1,391 @@
|
|||
# Super-Dev-Pipeline v2.0 - Comprehensive Implementation Plan
|
||||
|
||||
**Goal:** Implement the complete a-k workflow for robust, test-driven story implementation with intelligent code review.
|
||||
|
||||
## Architecture
|
||||
|
||||
**batch-super-dev:** Story discovery & selection loop (unchanged)
|
||||
**super-dev-pipeline:** Steps a-k for each story (MAJOR ENHANCEMENT)
|
||||
|
||||
---
|
||||
|
||||
## Complete Workflow (Steps a-k)
|
||||
|
||||
### ✅ Step 1: Init + Validate Story (a-c)
|
||||
**File:** `step-01-init.md` (COMPLETED)
|
||||
- [x] a. Validate story file exists and is robust
|
||||
- [x] b. If no story file, run /create-story-with-gap-analysis (auto-invoke)
|
||||
- [x] c. Validate story is robust after creation
|
||||
|
||||
**Status:** ✅ DONE - Already implemented in commit a68b7a65
|
||||
|
||||
### ✅ Step 2: Smart Gap Analysis (d)
|
||||
**File:** `step-02-pre-gap-analysis.md` (NEEDS ENHANCEMENT)
|
||||
- [ ] d. Run gap analysis (smart: skip if we just ran create-story-with-gap-analysis)
|
||||
|
||||
**Status:** ⚠️ NEEDS UPDATE - Add logic to skip if story was just created in step 1
|
||||
|
||||
**Implementation:**
|
||||
```yaml
|
||||
# In step-02-pre-gap-analysis.md
|
||||
Check state from step 1:
|
||||
If story_just_created == true:
|
||||
Skip gap analysis (already done in create-story-with-gap-analysis)
|
||||
Display: ✅ Gap analysis skipped (already performed during story creation)
|
||||
Else:
|
||||
Run gap analysis as normal
|
||||
```
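A minimal sketch of the check, assuming the step-1 flags are persisted in the pipeline state file (field names follow the flags set in step-01-init.md):

```typescript
// Skip gap analysis when step 1 already ran create-story-with-gap-analysis.
interface PipelineFlags { story_just_created?: boolean; gap_analysis_completed?: boolean }

function shouldSkipGapAnalysis(state: PipelineFlags): boolean {
  return state.story_just_created === true && state.gap_analysis_completed === true;
}
```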
|
||||
|
||||
### ✅ Step 3: Write Tests (e) - NEW
|
||||
**File:** `step-03-write-tests.md` (COMPLETED)
|
||||
- [x] e. Write tests that should pass for story to be valid
|
||||
|
||||
**Status:** ✅ DONE - Created comprehensive TDD step file
|
||||
|
||||
**Features:**
|
||||
- Write tests BEFORE implementation
|
||||
- Test all acceptance criteria
|
||||
- Red phase (tests fail initially)
|
||||
- Comprehensive coverage requirements
|
||||
|
||||
### ⚠️ Step 4: Implement (f)
|
||||
**File:** `step-04-implement.md` (NEEDS RENAME)
|
||||
- [ ] f. Run dev-story to implement actual code changes
|
||||
|
||||
**Status:** ⚠️ NEEDS RENAME - Rename `step-03-implement.md` → `step-04-implement.md`
|
||||
|
||||
**Implementation:**
|
||||
```bash
|
||||
# Rename file
|
||||
mv step-03-implement.md step-04-implement.md
|
||||
|
||||
# Update references
|
||||
# Update workflow.yaml step 4 definition
|
||||
# Update next step references in step-03-write-tests.md
|
||||
```
|
||||
|
||||
### ⚠️ Step 5: Post-Validation (g)
|
||||
**File:** `step-05-post-validation.md` (NEEDS RENAME)
|
||||
- [ ] g. Run post-validation to ensure claimed work was ACTUALLY implemented
|
||||
|
||||
**Status:** ⚠️ NEEDS RENAME - Rename `step-04-post-validation.md` → `step-05-post-validation.md`
|
||||
|
||||
### ✅ Step 6: Run Quality Checks (h) - NEW
|
||||
**File:** `step-06-run-quality-checks.md` (COMPLETED)
|
||||
- [x] h. Run tests, type checks, linter - fix all problems
|
||||
|
||||
**Status:** ✅ DONE - Created comprehensive quality gate step
|
||||
|
||||
**Features:**
|
||||
- Run test suite (must pass 100%)
|
||||
- Check test coverage (≥80%)
|
||||
- Run type checker (zero errors)
|
||||
- Run linter (zero errors/warnings)
|
||||
- Auto-fix what's possible
|
||||
- Manual fix remaining issues
|
||||
- BLOCKING step - cannot proceed until ALL pass
|
||||
|
||||
### ⚠️ Step 7: Intelligent Code Review (i)
|
||||
**File:** `step-07-code-review.md` (NEEDS RENAME + ENHANCEMENT)
|
||||
- [ ] i. Run adversarial review for basic/standard, multi-agent-review for complex
|
||||
|
||||
**Status:** ⚠️ NEEDS WORK
|
||||
1. Rename `step-05-code-review.md` → `step-07-code-review.md`
|
||||
2. Enhance to actually invoke multi-agent-review workflow
|
||||
3. Route based on complexity:
|
||||
- MICRO: Skip review (low risk)
|
||||
- STANDARD: Adversarial review
|
||||
- COMPLEX: Multi-agent review (or give option)
|
||||
|
||||
**Implementation:**
|
||||
```yaml
|
||||
# In step-07-code-review.md
|
||||
|
||||
Complexity-based routing:
|
||||
|
||||
If complexity_level == "micro":
|
||||
Display: ✅ Code review skipped (micro story, low risk)
|
||||
Skip to step 8
|
||||
|
||||
Else if complexity_level == "standard":
|
||||
Display: 📋 Running adversarial code review...
|
||||
Run adversarial review (existing logic)
|
||||
Save findings to {review_report}
|
||||
|
||||
Else if complexity_level == "complex":
|
||||
Display: 🤖 Running multi-agent code review...
|
||||
<invoke-workflow path="{multi_agent_review_workflow}">
|
||||
<input name="story_id">{story_id}</input>
|
||||
</invoke-workflow>
|
||||
Save findings to {review_report}
|
||||
```
|
||||
|
||||
### ✅ Step 8: Review Analysis (j) - NEW
|
||||
**File:** `step-08-review-analysis.md` (COMPLETED)
|
||||
- [x] j. Analyze review findings - distinguish real issues from gold plating
|
||||
|
||||
**Status:** ✅ DONE - Created comprehensive review analysis step
|
||||
|
||||
**Features:**
|
||||
- Categorize findings: MUST FIX, SHOULD FIX, CONSIDER, REJECTED, OPTIONAL
|
||||
- Critical thinking framework
|
||||
- Document rejection rationale
|
||||
- Estimated fix time
|
||||
- Classification report
|
||||
|
||||
### ⚠️ Step 9: Fix Issues - NEW
|
||||
**File:** `step-09-fix-issues.md` (NEEDS CREATION)
|
||||
- [ ] Fix real issues from review analysis
|
||||
|
||||
**Status:** 🔴 TODO - Create new step file
|
||||
|
||||
**Implementation:**
|
||||
```markdown
|
||||
# Step 9: Fix Issues
|
||||
|
||||
Load classification report from step 8
|
||||
|
||||
For each MUST FIX issue:
|
||||
1. Read file at location
|
||||
2. Understand the issue
|
||||
3. Implement fix
|
||||
4. Verify fix works (run tests)
|
||||
5. Commit fix
|
||||
|
||||
For each SHOULD FIX issue:
|
||||
1. Read file at location
|
||||
2. Understand the issue
|
||||
3. Implement fix
|
||||
4. Verify fix works (run tests)
|
||||
5. Commit fix
|
||||
|
||||
For CONSIDER items:
|
||||
- If time permits and in scope, fix
|
||||
- Otherwise, document as tech debt
|
||||
|
||||
For REJECTED items:
|
||||
- Skip (already documented why in step 8)
|
||||
|
||||
For OPTIONAL items:
|
||||
- Create tech debt tickets
|
||||
- Skip implementation
|
||||
|
||||
After all fixes:
|
||||
- Re-run quality checks (step 6)
|
||||
- Ensure all tests still pass
|
||||
```
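A sketch of that loop, assuming the step-8 classification report can be parsed into a list of findings (the shape below is illustrative):

```typescript
// Assumed finding shape parsed from the step-8 classification report.
interface Finding { id: string; severity: "MUST_FIX" | "SHOULD_FIX" | "CONSIDER" | "REJECTED" | "OPTIONAL"; file: string; summary: string }

async function processFindings(findings: Finding[], fix: (f: Finding) => Promise<boolean>) {
  const techDebt: Finding[] = [];
  for (const f of findings) {
    if (f.severity === "MUST_FIX" || f.severity === "SHOULD_FIX") {
      const fixed = await fix(f); // implement, run tests, commit
      if (!fixed) techDebt.push(f); // surface anything that could not be fixed automatically
    } else if (f.severity === "CONSIDER" || f.severity === "OPTIONAL") {
      techDebt.push(f); // document as tech debt instead of implementing now
    }
    // REJECTED items are skipped; the rationale is already recorded in step 8.
  }
  return techDebt;
}
```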
|
||||
|
||||
### ⚠️ Step 10: Complete + Update Status (k)
|
||||
**File:** `step-10-complete.md` (NEEDS RENAME + ENHANCEMENT)
|
||||
- [ ] k. Update story to "done", update sprint-status.yaml (MANDATORY)
|
||||
|
||||
**Status:** ⚠️ NEEDS WORK
|
||||
1. Rename `step-06-complete.md` → `step-10-complete.md`
|
||||
2. Add MANDATORY sprint-status.yaml update
|
||||
3. Update story status to "done"
|
||||
4. Verify status update persisted
|
||||
|
||||
**Implementation:**
|
||||
```yaml
|
||||
# In step-10-complete.md
|
||||
|
||||
CRITICAL ENFORCEMENT:
|
||||
|
||||
1. Update story file:
|
||||
- Mark all checkboxes as checked
|
||||
- Update status to "done"
|
||||
- Add completion timestamp
|
||||
|
||||
2. Update sprint-status.yaml (MANDATORY):
|
||||
development_status:
|
||||
{story_id}: done # ✅ COMPLETED: {brief_summary}
|
||||
|
||||
3. Verify update persisted:
|
||||
- Re-read sprint-status.yaml
|
||||
- Confirm status == "done"
|
||||
- HALT if verification fails
|
||||
|
||||
NO EXCEPTIONS - Story MUST be marked done in both files
|
||||
```
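A sketch of the update-and-verify requirement, assuming sprint-status.yaml is plain YAML and the `js-yaml` package is available (both assumptions):

```typescript
import { readFileSync, writeFileSync } from "node:fs";
import yaml from "js-yaml";

// Mark a story done in sprint-status.yaml, then re-read the file to confirm the change persisted.
function markStoryDone(sprintStatusPath: string, storyId: string): void {
  const doc = (yaml.load(readFileSync(sprintStatusPath, "utf8")) as any) ?? {};
  doc.development_status = doc.development_status ?? {};
  doc.development_status[storyId] = "done";
  writeFileSync(sprintStatusPath, yaml.dump(doc));

  const check = yaml.load(readFileSync(sprintStatusPath, "utf8")) as any;
  if (check?.development_status?.[storyId] !== "done") {
    throw new Error(`sprint-status update for ${storyId} did not persist - HALT`);
  }
}
```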
|
||||
|
||||
### ⚠️ Step 11: Summary
|
||||
**File:** `step-11-summary.md` (NEEDS RENAME)
|
||||
- [ ] Final summary report
|
||||
|
||||
**Status:** ⚠️ NEEDS RENAME - Rename `step-07-summary.md` → `step-11-summary.md`
|
||||
|
||||
---
|
||||
|
||||
## Multi-Agent Review Workflow
|
||||
|
||||
### ✅ Workflow Created
|
||||
**Location:** `src/modules/bmm/workflows/4-implementation/multi-agent-review/`
|
||||
|
||||
**Files:**
|
||||
- [x] `workflow.yaml` (COMPLETED)
|
||||
- [x] `instructions.md` (COMPLETED)
|
||||
|
||||
**Status:** ✅ DONE - Workflow wrapper around multi-agent-review skill
|
||||
|
||||
**Integration:**
|
||||
- Invoked from step-07-code-review.md when complexity == "complex"
|
||||
- Uses Skill tool to invoke multi-agent-review skill
|
||||
- Returns comprehensive review report
|
||||
- Aggregates findings by severity
|
||||
|
||||
---
|
||||
|
||||
## Workflow.yaml Updates Needed
|
||||
|
||||
**File:** `src/modules/bmm/workflows/4-implementation/super-dev-pipeline/workflow.yaml`
|
||||
|
||||
**Changes Required:**
|
||||
1. Update version to `1.5.0`
|
||||
2. Update description to mention test-first approach
|
||||
3. Redefine steps array (11 steps instead of 7)
|
||||
4. Add multi-agent-review workflow path
|
||||
5. Update complexity routing for new steps
|
||||
6. Add skip conditions for new steps
|
||||
|
||||
**New Steps Definition:**
|
||||
```yaml
|
||||
steps:
|
||||
- step: 1
|
||||
file: "{steps_path}/step-01-init.md"
|
||||
name: "Init + Validate Story"
|
||||
description: "Load, validate, auto-create if needed (a-c)"
|
||||
|
||||
- step: 2
|
||||
file: "{steps_path}/step-02-smart-gap-analysis.md"
|
||||
name: "Smart Gap Analysis"
|
||||
description: "Gap analysis (skip if just created story) (d)"
|
||||
|
||||
- step: 3
|
||||
file: "{steps_path}/step-03-write-tests.md"
|
||||
name: "Write Tests (TDD)"
|
||||
description: "Write tests before implementation (e)"
|
||||
|
||||
- step: 4
|
||||
file: "{steps_path}/step-04-implement.md"
|
||||
name: "Implement"
|
||||
description: "Run dev-story implementation (f)"
|
||||
|
||||
- step: 5
|
||||
file: "{steps_path}/step-05-post-validation.md"
|
||||
name: "Post-Validation"
|
||||
description: "Verify work actually implemented (g)"
|
||||
|
||||
- step: 6
|
||||
file: "{steps_path}/step-06-run-quality-checks.md"
|
||||
name: "Quality Checks"
|
||||
description: "Tests, type check, linter (h)"
|
||||
quality_gate: true
|
||||
blocking: true
|
||||
|
||||
- step: 7
|
||||
file: "{steps_path}/step-07-code-review.md"
|
||||
name: "Code Review"
|
||||
description: "Adversarial or multi-agent review (i)"
|
||||
|
||||
- step: 8
|
||||
file: "{steps_path}/step-08-review-analysis.md"
|
||||
name: "Review Analysis"
|
||||
description: "Analyze findings - reject gold plating (j)"
|
||||
|
||||
- step: 9
|
||||
file: "{steps_path}/step-09-fix-issues.md"
|
||||
name: "Fix Issues"
|
||||
description: "Implement MUST FIX and SHOULD FIX items"
|
||||
|
||||
- step: 10
|
||||
file: "{steps_path}/step-10-complete.md"
|
||||
name: "Complete + Update Status"
|
||||
description: "Mark done, update sprint-status.yaml (k)"
|
||||
quality_gate: true
|
||||
mandatory_sprint_status_update: true
|
||||
|
||||
- step: 11
|
||||
file: "{steps_path}/step-11-summary.md"
|
||||
name: "Summary"
|
||||
description: "Final report"
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## File Rename Operations
|
||||
|
||||
Execute these renames:
|
||||
```bash
|
||||
cd src/modules/bmm/workflows/4-implementation/super-dev-pipeline/steps/
|
||||
|
||||
# Rename existing files to new step numbers
|
||||
mv step-03-implement.md step-04-implement.md
|
||||
mv step-04-post-validation.md step-05-post-validation.md
|
||||
mv step-05-code-review.md step-07-code-review.md
|
||||
mv step-06-complete.md step-10-complete.md
|
||||
mv step-06a-queue-commit.md step-10a-queue-commit.md
|
||||
mv step-07-summary.md step-11-summary.md
|
||||
|
||||
# Update step-02 content (and filename suffix) to step-02-smart-gap-analysis.md - add the "smart" skip logic
# Step number stays 02, so no renumbering is needed, just update the content
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Implementation Checklist
|
||||
|
||||
### Phase 1: File Structure ✅ (Partially Done)
|
||||
- [x] Create multi-agent-review workflow
|
||||
- [x] Create step-03-write-tests.md
|
||||
- [x] Create step-06-run-quality-checks.md
|
||||
- [x] Create step-08-review-analysis.md
|
||||
- [ ] Create step-09-fix-issues.md
|
||||
- [ ] Rename existing step files
|
||||
- [ ] Update workflow.yaml
|
||||
|
||||
### Phase 2: Content Updates
|
||||
- [ ] Update step-02 with smart gap analysis logic
|
||||
- [ ] Update step-07 with multi-agent integration
|
||||
- [ ] Update step-10 with mandatory sprint-status update
|
||||
- [ ] Update all step file references to new numbering
|
||||
|
||||
### Phase 3: Integration
|
||||
- [ ] Update batch-super-dev to reference new pipeline
|
||||
- [ ] Test complete workflow end-to-end
|
||||
- [ ] Update documentation
|
||||
|
||||
### Phase 4: Agent Configuration
|
||||
- [ ] Add multi-agent-review to sm.agent.yaml
|
||||
- [ ] Add multi-agent-review to dev.agent.yaml (optional)
|
||||
- [ ] Update agent menu descriptions
|
||||
|
||||
---
|
||||
|
||||
## Testing Plan
|
||||
|
||||
1. **Test micro story:** Should skip steps 3, 7, 8, 9 (write tests, code review, analysis, fix)
|
||||
2. **Test standard story:** Should run all steps with adversarial review
|
||||
3. **Test complex story:** Should run all steps with multi-agent review
|
||||
4. **Test story creation:** Verify auto-create in step 1 works
|
||||
5. **Test smart gap analysis:** Verify step 2 skips if story just created
|
||||
6. **Test quality gate:** Verify step 6 blocks on failing tests
|
||||
7. **Test review analysis:** Verify step 8 correctly categorizes findings
|
||||
8. **Test sprint-status update:** Verify step 10 updates sprint-status.yaml
|
||||
|
||||
---
|
||||
|
||||
## Version History
|
||||
|
||||
**v1.4.0** (Current - Committed): Auto-create story via /create-story-with-gap-analysis
|
||||
**v1.5.0** (In Progress): Complete a-k workflow with TDD, quality gates, intelligent review
|
||||
|
||||
---
|
||||
|
||||
## Next Steps
|
||||
|
||||
1. Create `step-09-fix-issues.md`
|
||||
2. Perform all file renames
|
||||
3. Update `workflow.yaml` with new 11-step structure
|
||||
4. Test each step individually
|
||||
5. Test complete workflow end-to-end
|
||||
6. Commit and document
|
||||
|
|
@@ -0,0 +1,169 @@
|
|||
# super-dev-pipeline
|
||||
|
||||
**Token-efficient step-file workflow that prevents vibe coding and works for both greenfield AND brownfield development.**
|
||||
|
||||
## 🎯 Purpose
|
||||
|
||||
Combines the best of both worlds:
|
||||
- **super-dev-story's flexibility** - works for greenfield and brownfield
|
||||
- **story-pipeline's discipline** - step-file architecture prevents vibe coding
|
||||
|
||||
## 🔑 Key Features
|
||||
|
||||
### 1. **Smart Batching** ⚡ NEW!
|
||||
- **Pattern detection**: Automatically identifies similar tasks
|
||||
- **Intelligent grouping**: Batches low-risk, repetitive tasks
|
||||
- **50-70% faster** for stories with repetitive work (e.g., package migrations)
|
||||
- **Safety preserved**: Validation gates still enforced, fallback on failure
|
||||
- **NOT vibe coding**: Systematic detection + batch validation
|
||||
|
||||
### 2. **Adaptive Implementation**
|
||||
- Greenfield tasks: TDD approach (test-first)
|
||||
- Brownfield tasks: Refactor approach (understand-first)
|
||||
- Hybrid stories: Mix both as appropriate
|
||||
|
||||
### 3. **Anti-Vibe-Coding Architecture**
|
||||
- **Step-file design**: One step at a time, no looking ahead
|
||||
- **Mandatory sequences**: Can't skip or optimize steps
|
||||
- **Quality gates**: Must pass before proceeding
|
||||
- **State tracking**: Progress recorded and verified
|
||||
|
||||
### 4. **Brownfield Support**
|
||||
- Pre-gap analysis scans existing code
|
||||
- Validates tasks against current implementation
|
||||
- Refines vague tasks to specific actions
|
||||
- Detects already-completed work
|
||||
|
||||
### 5. **Complete Quality Gates**
|
||||
- ✅ Pre-gap analysis (validates + detects batchable patterns)
|
||||
- ✅ Smart batching (groups similar tasks, validates batches)
|
||||
- ✅ Adaptive implementation (TDD or refactor)
|
||||
- ✅ Post-validation (catches false positives)
|
||||
- ✅ Code review (finds 3-10 issues)
|
||||
- ✅ Commit + push (targeted files only)
|
||||
|
||||
## 📁 Workflow Steps
|
||||
|
||||
| Step | File | Purpose |
|
||||
|------|------|---------|
|
||||
| 1 | step-01-init.md | Load story, detect greenfield vs brownfield |
|
||||
| 2 | step-02-pre-gap-analysis.md | Validate tasks against codebase |
|
||||
| 3 | step-03-implement.md | Adaptive implementation (no vibe coding!) |
|
||||
| 4 | step-04-post-validation.md | Verify completion vs reality |
|
||||
| 5 | step-05-code-review.md | Adversarial review (3-10 issues) |
|
||||
| 6 | step-06-complete.md | Commit and push changes |
|
||||
| 7 | step-07-summary.md | Audit trail generation |
|
||||
|
||||
## 🚀 Usage
|
||||
|
||||
### Standalone
|
||||
```bash
|
||||
bmad super-dev-pipeline
|
||||
```
|
||||
|
||||
### From batch-super-dev
|
||||
```bash
|
||||
bmad batch-super-dev
|
||||
# Automatically uses super-dev-pipeline for each story
|
||||
```
|
||||
|
||||
## 📊 Efficiency Metrics
|
||||
|
||||
| Metric | super-dev-story | super-dev-pipeline | super-dev-pipeline + batching |
|
||||
|--------|----------------|-------------------|-------------------------------|
|
||||
| Tokens/story | 100-150K | 40-60K | 40-60K (same) |
|
||||
| Time/100 tasks | 200 min | 200 min | **100 min** (50% faster!) |
|
||||
| Architecture | Orchestration | Step-files | Step-files + batching |
|
||||
| Vibe coding | Possible | Prevented | Prevented |
|
||||
| Repetitive work | Slow | Slow | **Fast** |
|
||||
|
||||
## 🛡️ Why This Prevents Vibe Coding
|
||||
|
||||
**The Problem:**
|
||||
When token counts get high (>100K), Claude tends to:
|
||||
- Skip verification steps
|
||||
- Batch multiple tasks
|
||||
- "Trust me, I got this" syndrome
|
||||
- Deviate from intended workflow
|
||||
|
||||
**The Solution:**
|
||||
Step-file architecture enforces:
|
||||
- ✅ ONE step loaded at a time
|
||||
- ✅ MUST read entire step file first
|
||||
- ✅ MUST follow numbered sequence
|
||||
- ✅ MUST complete quality gate
|
||||
- ✅ MUST update state before proceeding
|
||||
|
||||
**Result:** Disciplined execution even at 200K+ tokens!
|
||||
|
||||
## 🔄 Comparison with Other Workflows
|
||||
|
||||
### vs super-dev-story (Original)
|
||||
- ✅ Same quality gates
|
||||
- ✅ Same brownfield support
|
||||
- ✅ 50% more token efficient
|
||||
- ✅ **Prevents vibe coding** (new!)
|
||||
|
||||
### vs story-pipeline
|
||||
- ✅ Same step-file discipline
|
||||
- ✅ **Works for brownfield** (story-pipeline doesn't!)
|
||||
- ✅ No mandatory ATDD (more flexible)
|
||||
- ✅ **Smart batching** (50-70% faster for repetitive work!)
|
||||
- ❌ Slightly less token efficient (40-60K vs 25-30K)
|
||||
|
||||
## 🎓 When to Use
|
||||
|
||||
**Use super-dev-pipeline when:**
|
||||
- Working with existing codebase (brownfield)
|
||||
- Need vibe-coding prevention
|
||||
- Running batch-super-dev
|
||||
- Token counts will be high
|
||||
- Want disciplined execution
|
||||
|
||||
**Use story-pipeline when:**
|
||||
- Creating entirely new features (pure greenfield)
|
||||
- Story doesn't exist yet (needs creation)
|
||||
- Maximum token efficiency needed
|
||||
- TDD/ATDD is appropriate
|
||||
|
||||
**Use super-dev-story when:**
|
||||
- Need quick one-off development
|
||||
- Interactive development preferred
|
||||
- Traditional orchestration is fine
|
||||
|
||||
## 📝 Requirements
|
||||
|
||||
- Story file must exist (does NOT create stories)
|
||||
- Project context must exist
|
||||
- Works with both `_bmad` and `.bmad` conventions
|
||||
|
||||
## 🏗️ Architecture Notes
|
||||
|
||||
### Development Mode Detection
|
||||
|
||||
Auto-detects based on File List:
|
||||
- **Greenfield**: All files are new
|
||||
- **Brownfield**: All files exist
|
||||
- **Hybrid**: Mix of new and existing
|
||||
|
||||
### Adaptive Implementation
|
||||
|
||||
Step 3 adapts methodology:
|
||||
- New files → TDD approach
|
||||
- Existing files → Refactor approach
|
||||
- Tests → Add/update as needed
|
||||
- Migrations → Apply and verify
|
||||
|
||||
### State Management
|
||||
|
||||
Uses `super-dev-state-{story_id}.yaml` for:
|
||||
- Progress tracking
|
||||
- Quality gate results
|
||||
- File lists
|
||||
- Metrics collection
|
||||
|
||||
Cleaned up after completion (audit trail is permanent record).
|
||||
|
||||
---
|
||||
|
||||
**super-dev-pipeline: Disciplined development for the real world!** 🚀
|
||||
|
|
@@ -0,0 +1,406 @@
|
|||
---
|
||||
name: 'step-01-init'
|
||||
description: 'Initialize pipeline, load story (auto-create if needed), detect development mode'
|
||||
|
||||
# Path Definitions
|
||||
workflow_path: '{project-root}/_bmad/bmm/workflows/4-implementation/super-dev-pipeline'
|
||||
create_story_workflow: '{project-root}/_bmad/bmm/workflows/4-implementation/create-story-with-gap-analysis'
|
||||
|
||||
# File References
|
||||
thisStepFile: '{workflow_path}/steps/step-01-init.md'
|
||||
nextStepFile: '{workflow_path}/steps/step-02-pre-gap-analysis.md'
|
||||
|
||||
# Role
|
||||
role: null # No agent role yet
|
||||
---
|
||||
|
||||
# Step 1: Initialize Pipeline
|
||||
|
||||
## STEP GOAL
|
||||
|
||||
Initialize the super-dev-pipeline:
|
||||
1. Load story file (must exist!)
|
||||
2. Cache project context
|
||||
3. Detect development mode (greenfield vs brownfield)
|
||||
4. Initialize state tracking
|
||||
5. Display execution plan
|
||||
|
||||
## MANDATORY EXECUTION RULES
|
||||
|
||||
### Initialization Principles
|
||||
|
||||
- **AUTO-CREATE IF NEEDED** - If story is missing or incomplete, auto-invoke /create-story-with-gap-analysis (NEW v1.4.0)
|
||||
- **READ COMPLETELY** - Load all context before proceeding
|
||||
- **DETECT MODE** - Determine if greenfield or brownfield
|
||||
- **NO ASSUMPTIONS** - Verify all files and paths
|
||||
|
||||
## EXECUTION SEQUENCE
|
||||
|
||||
### 1. Detect Execution Mode
|
||||
|
||||
Check if running in batch or interactive mode:
|
||||
- Batch mode: Invoked from batch-super-dev
|
||||
- Interactive mode: User-initiated
|
||||
|
||||
Set `{mode}` variable.
|
||||
|
||||
### 2. Resolve Story File Path
|
||||
|
||||
**From input parameters:**
|
||||
- `story_id`: e.g., "1-4"
|
||||
- `story_file`: Full path to story file
|
||||
|
||||
**If story_file not provided:**
|
||||
```
|
||||
story_file = {sprint_artifacts}/story-{story_id}.md
|
||||
```
|
||||
|
||||
### 3. Verify Story Exists (Auto-Create if Missing - NEW v1.4.0)
|
||||
|
||||
```bash
|
||||
# Check if story file exists
|
||||
test -f "{story_file}"
|
||||
```
|
||||
|
||||
**If story does NOT exist:**
|
||||
```
|
||||
⚠️ Story file not found at {story_file}
|
||||
|
||||
🔄 AUTO-CREATING: Invoking /create-story-with-gap-analysis...
|
||||
```
|
||||
|
||||
<invoke-workflow path="{create_story_workflow}/workflow.yaml">
|
||||
<input name="story_id">{story_id}</input>
|
||||
<input name="epic_num">{epic_num}</input>
|
||||
<input name="story_num">{story_num}</input>
|
||||
</invoke-workflow>
|
||||
|
||||
After workflow completes, verify story was created:
|
||||
```bash
|
||||
test -f "{story_file}" && echo "✅ Story created successfully" || echo "❌ Story creation failed - HALT"
|
||||
```
|
||||
|
||||
**If story was created, set flag for smart gap analysis:**
|
||||
```yaml
|
||||
# Set state flag to skip redundant gap analysis in step 2
|
||||
story_just_created: true
|
||||
gap_analysis_completed: true # Already done in create-story-with-gap-analysis
|
||||
```
|
||||
|
||||
**If story exists:**
|
||||
```
|
||||
✅ Story file found: {story_file}
|
||||
```
|
||||
|
||||
### 4. Load Story File
|
||||
|
||||
Read story file and extract:
|
||||
- Story title
|
||||
- Epic number
|
||||
- Story number
|
||||
- Acceptance criteria
|
||||
- Current tasks (checked and unchecked)
|
||||
- File List section (if exists)
|
||||
|
||||
Count:
|
||||
- Total tasks: `{total_task_count}`
|
||||
- Unchecked tasks: `{unchecked_task_count}`
|
||||
- Checked tasks: `{checked_task_count}`
|
||||
|
||||
### 4.5 Pre-Flight Check & Auto-Regenerate (UPDATED v1.4.0)
|
||||
|
||||
**Check story quality and auto-regenerate if insufficient:**
|
||||
|
||||
```
|
||||
If total_task_count == 0:
|
||||
Display:
|
||||
⚠️ Story has no tasks - needs gap analysis
|
||||
|
||||
🔄 AUTO-REGENERATING: Invoking /create-story-with-gap-analysis...
|
||||
```
|
||||
<invoke-workflow path="{create_story_workflow}/workflow.yaml">
|
||||
<input name="story_id">{story_id}</input>
|
||||
<input name="story_file">{story_file}</input>
|
||||
<input name="regenerate">true</input>
|
||||
</invoke-workflow>
|
||||
|
||||
# Set flag for smart gap analysis (v1.5.0)
|
||||
story_just_created: true
|
||||
gap_analysis_completed: true
|
||||
|
||||
Then re-load story and continue.
|
||||
|
||||
```
|
||||
If unchecked_task_count == 0:
|
||||
Display:
|
||||
✅ EARLY BAILOUT: Story Already Complete
|
||||
|
||||
All {checked_task_count} tasks are already marked complete.
|
||||
- No implementation work required
|
||||
- Story may need status update to "review" or "done"
|
||||
|
||||
{if batch mode: Continue to next story}
|
||||
{if interactive mode: HALT - Story complete}
|
||||
|
||||
If story file missing required sections (Tasks, Acceptance Criteria):
|
||||
Display:
|
||||
⚠️ Story missing required sections: {missing_sections}
|
||||
|
||||
🔄 AUTO-REGENERATING: Invoking /create-story-with-gap-analysis...
|
||||
```
|
||||
<invoke-workflow path="{create_story_workflow}/workflow.yaml">
|
||||
<input name="story_id">{story_id}</input>
|
||||
<input name="story_file">{story_file}</input>
|
||||
<input name="regenerate">true</input>
|
||||
</invoke-workflow>
|
||||
|
||||
# Set flag for smart gap analysis (v1.5.0)
|
||||
story_just_created: true
|
||||
gap_analysis_completed: true
|
||||
|
||||
Then re-load story and continue.
|
||||
|
||||
**If all checks pass:**
|
||||
```
|
||||
✅ Pre-flight checks passed
|
||||
- Story valid: {total_task_count} tasks
|
||||
- Work remaining: {unchecked_task_count} unchecked
|
||||
- Ready for implementation
|
||||
```
|
||||
|
||||
### 5. Load Project Context
|
||||
|
||||
Read `**/project-context.md`:
|
||||
- Tech stack
|
||||
- Coding patterns
|
||||
- Database conventions
|
||||
- Testing requirements
|
||||
|
||||
Cache in memory for use across steps.
|
||||
|
||||
### 6. Apply Complexity Routing (NEW v1.2.0)
|
||||
|
||||
**Check complexity_level parameter:**
|
||||
- `micro`: Lightweight path - skip pre-gap analysis (step 2) and code review (step 5)
|
||||
- `standard`: Full pipeline - all steps
|
||||
- `complex`: Full pipeline with warnings
|
||||
|
||||
**Determine skip_steps based on complexity:**
|
||||
```
|
||||
If complexity_level == "micro":
|
||||
skip_steps = [2, 5]
|
||||
pipeline_mode = "lightweight"
|
||||
|
||||
Display:
|
||||
🚀 MICRO COMPLEXITY DETECTED
|
||||
|
||||
Lightweight path enabled:
|
||||
- ⏭️ Skipping Pre-Gap Analysis (low risk)
|
||||
- ⏭️ Skipping Code Review (simple changes)
|
||||
- Estimated token savings: 50-70%
|
||||
|
||||
If complexity_level == "complex":
|
||||
skip_steps = []
|
||||
pipeline_mode = "enhanced"
|
||||
|
||||
Display:
|
||||
🔒 COMPLEX STORY DETECTED
|
||||
|
||||
Enhanced validation enabled:
|
||||
- Full pipeline with all quality gates
|
||||
- Consider splitting if story fails
|
||||
|
||||
⚠️ Warning: This story has high-risk elements.
|
||||
Proceeding with extra attention.
|
||||
|
||||
If complexity_level == "standard":
|
||||
skip_steps = []
|
||||
pipeline_mode = "standard"
|
||||
```
|
||||
|
||||
Store `skip_steps` and `pipeline_mode` in state file.
|
||||
|
||||
### 7. Detect Development Mode
|
||||
|
||||
**Check File List section in story:**
|
||||
|
||||
```typescript
|
||||
interface DetectionResult {
|
||||
mode: "greenfield" | "brownfield" | "hybrid";
|
||||
reasoning: string;
|
||||
existing_files: string[];
|
||||
new_files: string[];
|
||||
}
|
||||
```
|
||||
|
||||
**Detection logic:**
|
||||
|
||||
```bash
# Extract files from the File List section into an array
files_in_story=()
existing_files=()
new_files=()

existing_count=0
new_count=0

# For each listed file, check whether it already exists on disk
for file in "${files_in_story[@]}"; do
  if test -f "$file"; then
    existing_count=$((existing_count + 1))
    existing_files+=("$file")
  else
    new_count=$((new_count + 1))
    new_files+=("$file")
  fi
done
```
|
||||
|
||||
**Mode determination:**
|
||||
- `existing_count == 0` → **greenfield** (all new files)
|
||||
- `new_count == 0` → **brownfield** (all existing files)
|
||||
- Both > 0 → **hybrid** (mix of new and existing)
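The same rule as a tiny sketch (illustrative only):

```typescript
// Decide development mode from existing vs. new file counts, matching the rules above.
function detectMode(existingCount: number, newCount: number): "greenfield" | "brownfield" | "hybrid" {
  if (existingCount === 0) return "greenfield";
  if (newCount === 0) return "brownfield";
  return "hybrid";
}
```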
|
||||
|
||||
### 8. Display Initialization Summary
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
🚀 SUPER-DEV PIPELINE - Disciplined Execution
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
|
||||
Story: {story_title}
|
||||
File: {story_file}
|
||||
Mode: {mode} (interactive|batch)
|
||||
Complexity: {complexity_level} → {pipeline_mode} path
|
||||
|
||||
Development Type: {greenfield|brownfield|hybrid}
|
||||
- Existing files: {existing_count}
|
||||
- New files: {new_count}
|
||||
|
||||
Tasks:
|
||||
- Total: {total_task_count}
|
||||
- Completed: {checked_task_count} ✅
|
||||
- Remaining: {unchecked_task_count} ⏳
|
||||
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
|
||||
Pipeline Steps:
|
||||
1. ✅ Initialize (current)
|
||||
2. {⏭️ SKIP|⏳} Pre-Gap Analysis - Validate tasks {if micro: "(skipped - low risk)"}
|
||||
3. ⏳ Implement - {TDD|Refactor|Hybrid}
|
||||
4. ⏳ Post-Validation - Verify completion
|
||||
5. {⏭️ SKIP|⏳} Code Review - Find issues {if micro: "(skipped - simple changes)"}
|
||||
6. ⏳ Complete - Commit + push
|
||||
7. ⏳ Summary - Audit trail
|
||||
|
||||
{if pipeline_mode == "lightweight":
|
||||
🚀 LIGHTWEIGHT PATH: Steps 2 and 5 will be skipped (50-70% token savings)
|
||||
}
|
||||
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
|
||||
⚠️ ANTI-VIBE-CODING ENFORCEMENT ACTIVE
|
||||
|
||||
This workflow uses step-file architecture to ensure:
|
||||
- ✅ No skipping steps (except complexity-based routing)
|
||||
- ✅ No optimizing sequences
|
||||
- ✅ No looking ahead
|
||||
- ✅ No vibe coding even at 200K tokens
|
||||
|
||||
You will follow each step file PRECISELY.
|
||||
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
|
||||
### 9. Initialize State File
|
||||
|
||||
Create state file at `{sprint_artifacts}/super-dev-state-{story_id}.yaml`:
|
||||
|
||||
```yaml
|
||||
---
|
||||
story_id: "{story_id}"
|
||||
story_file: "{story_file}"
|
||||
mode: "{mode}"
|
||||
development_type: "{greenfield|brownfield|hybrid}"
|
||||
|
||||
# Complexity routing (NEW v1.2.0)
|
||||
complexity:
|
||||
level: "{complexity_level}" # micro | standard | complex
|
||||
pipeline_mode: "{pipeline_mode}" # lightweight | standard | enhanced
|
||||
skip_steps: {skip_steps} # e.g., [2, 5] for micro
|
||||
|
||||
stepsCompleted: [1]
|
||||
lastStep: 1
|
||||
currentStep: 2 # Or 3 if step 2 is skipped
|
||||
status: "in_progress"
|
||||
|
||||
started_at: "{timestamp}"
|
||||
updated_at: "{timestamp}"
|
||||
|
||||
cached_context:
|
||||
story_loaded: true
|
||||
project_context_loaded: true
|
||||
|
||||
development_analysis:
|
||||
existing_files: {existing_count}
|
||||
new_files: {new_count}
|
||||
total_tasks: {total_task_count}
|
||||
unchecked_tasks: {unchecked_task_count}
|
||||
|
||||
steps:
|
||||
step-01-init:
|
||||
status: completed
|
||||
completed_at: "{timestamp}"
|
||||
step-02-pre-gap-analysis:
|
||||
status: {pending|skipped} # skipped if complexity == micro
|
||||
step-03-implement:
|
||||
status: pending
|
||||
step-04-post-validation:
|
||||
status: pending
|
||||
step-05-code-review:
|
||||
status: {pending|skipped} # skipped if complexity == micro
|
||||
step-06-complete:
|
||||
status: pending
|
||||
step-07-summary:
|
||||
status: pending
|
||||
```
|
||||
|
||||
### 10. Display Menu (Interactive) or Proceed (Batch)
|
||||
|
||||
**Interactive Mode Menu:**
|
||||
```
|
||||
[C] Continue to {next step name}
|
||||
[H] Halt pipeline
|
||||
```
|
||||
|
||||
**Batch Mode:** Auto-continue to next step
|
||||
|
||||
## CRITICAL STEP COMPLETION
|
||||
|
||||
**Determine next step based on complexity routing:**
|
||||
|
||||
```
|
||||
If 2 in skip_steps (micro complexity):
|
||||
nextStepFile = '{workflow_path}/steps/step-03-implement.md'
|
||||
Display: "⏭️ Skipping Pre-Gap Analysis (micro complexity) → Proceeding to Implementation"
|
||||
Else:
|
||||
nextStepFile = '{workflow_path}/steps/step-02-pre-gap-analysis.md'
|
||||
```
|
||||
|
||||
**ONLY WHEN** initialization is complete,
|
||||
load and execute `{nextStepFile}`.
|
||||
|
||||
---
|
||||
|
||||
## SUCCESS/FAILURE METRICS
|
||||
|
||||
### ✅ SUCCESS
|
||||
- Story file loaded successfully
|
||||
- Development mode detected accurately
|
||||
- State file initialized
|
||||
- Context cached in memory
|
||||
- Ready for pre-gap analysis
|
||||
|
||||
### ❌ FAILURE
|
||||
- Story file not found
|
||||
- Invalid story file format
|
||||
- Missing project context
|
||||
- State file creation failed
|
||||
|
|
@ -0,0 +1,653 @@
|
|||
---
|
||||
name: 'step-02-smart-gap-analysis'
|
||||
description: 'Smart gap analysis - skip if story just created with gap analysis in step 1'
|
||||
|
||||
# Path Definitions
|
||||
workflow_path: '{project-root}/_bmad/bmm/workflows/4-implementation/super-dev-pipeline'
|
||||
|
||||
# File References
|
||||
thisStepFile: '{workflow_path}/steps/step-02-smart-gap-analysis.md'
|
||||
stateFile: '{state_file}'
|
||||
nextStepFile: '{workflow_path}/steps/step-03-write-tests.md'
|
||||
|
||||
# Role Switch
|
||||
role: dev
|
||||
agentFile: '{project-root}/_bmad/bmm/agents/dev.md'
|
||||
---
|
||||
|
||||
# Step 2: Smart Gap Analysis
|
||||
|
||||
## ROLE SWITCH
|
||||
|
||||
**Switching to DEV (Developer) perspective.**
|
||||
|
||||
You are now analyzing the story tasks against codebase reality.
|
||||
|
||||
## STEP GOAL
|
||||
|
||||
Validate all story tasks against the actual codebase:
|
||||
1. Scan codebase for existing implementations
|
||||
2. Identify which tasks are truly needed vs already done
|
||||
3. Refine vague tasks to be specific and actionable
|
||||
4. Add missing tasks that were overlooked
|
||||
5. Uncheck any tasks that claim completion incorrectly
|
||||
6. Ensure tasks align with existing code patterns
|
||||
|
||||
## MANDATORY EXECUTION RULES
|
||||
|
||||
### Gap Analysis Principles
|
||||
|
||||
- **TRUST NOTHING** - Verify every task against codebase
|
||||
- **SCAN THOROUGHLY** - Use Glob, Grep, Read to understand existing code
|
||||
- **BE SPECIFIC** - Vague tasks like "Add feature X" need breakdown
|
||||
- **ADD MISSING** - If something is needed but not tasked, add it
|
||||
- **BROWNFIELD AWARE** - Check for existing implementations
|
||||
|
||||
## EXECUTION SEQUENCE
|
||||
|
||||
### 0. Smart Gap Analysis Check (NEW v1.5.0)
|
||||
|
||||
**Check if gap analysis already performed in step 1:**
|
||||
|
||||
```yaml
|
||||
# Read state from step 1
|
||||
Read {stateFile}
|
||||
|
||||
If story_just_created == true:
|
||||
Display:
|
||||
✅ GAP ANALYSIS SKIPPED
|
||||
|
||||
Story was just created via /create-story-with-gap-analysis in step 1.
|
||||
Gap analysis already performed as part of story creation.
|
||||
|
||||
Skipping redundant gap analysis.
|
||||
Proceeding directly to test writing (step 3).
|
||||
|
||||
Exit step 2
|
||||
```
|
||||
|
||||
**If story was NOT just created, proceed with gap analysis below.**
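
A minimal check, assuming step 1 wrote a `story_just_created` flag into the state file (the variable name is illustrative):

```bash
# Exit status 0 only if step 1 recorded story_just_created: true
if grep -qE '^story_just_created: *true' "$state_file"; then
  echo "Gap analysis already performed during story creation - skipping step 2"
fi
```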
|
||||
|
||||
### 1. Load Story Tasks
|
||||
|
||||
Read story file and extract all tasks (checked and unchecked):
|
||||
|
||||
```regex
|
||||
- \[ \] (.+) # Unchecked
|
||||
- \[x\] (.+) # Checked
|
||||
```
|
||||
|
||||
Build list of all tasks to analyze.
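
One way to pull both lists out of the story file (a sketch; matches top-level tasks only):

```bash
# Unchecked tasks
grep -E '^- \[ \] ' "$story_file" | sed 's/^- \[ \] //'

# Checked tasks
grep -E '^- \[x\] ' "$story_file" | sed 's/^- \[x\] //'
```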
|
||||
|
||||
### 2. Scan Existing Codebase
|
||||
|
||||
**For development_type = "brownfield" or "hybrid":**
|
||||
|
||||
Scan all files mentioned in File List:
|
||||
|
||||
```bash
|
||||
# For each file in File List
|
||||
for file in {file_list}; do
|
||||
if test -f "$file"; then
|
||||
# Read file to understand current implementation
|
||||
read "$file"
|
||||
|
||||
# Check what's already implemented
|
||||
grep -E "function|class|interface|export" "$file"
|
||||
fi
|
||||
done
|
||||
```
|
||||
|
||||
Document existing implementations.
|
||||
|
||||
### 3. Analyze Each Task
|
||||
|
||||
For EACH task in story:
|
||||
|
||||
**A. Determine Task Type:**
|
||||
- Component creation
|
||||
- Function/method addition
|
||||
- Database migration
|
||||
- API endpoint
|
||||
- UI element
|
||||
- Test creation
|
||||
- Refactoring
|
||||
- Bug fix
|
||||
|
||||
**B. Check Against Codebase:**
|
||||
|
||||
```typescript
|
||||
interface TaskAnalysis {
|
||||
task: string;
|
||||
type: string;
|
||||
status: "needed" | "partially_done" | "already_done" | "unclear";
|
||||
reasoning: string;
|
||||
existing_code?: string;
|
||||
refinement?: string;
|
||||
}
|
||||
```
|
||||
|
||||
**For each task, ask:**
|
||||
1. Does related code already exist?
|
||||
2. If yes, what needs to change?
|
||||
3. If no, what needs to be created?
|
||||
4. Is the task specific enough to implement?
|
||||
|
||||
**C. Categorize Task:**
|
||||
|
||||
**NEEDED** - Task is clear and required:
|
||||
```yaml
|
||||
- task: "Add deleteUser server action"
|
||||
status: needed
|
||||
reasoning: "No deleteUser function found in codebase"
|
||||
action: "Implement as specified"
|
||||
```
|
||||
|
||||
**PARTIALLY_DONE** - Some work exists, needs completion:
|
||||
```yaml
|
||||
- task: "Add error handling to createUser"
|
||||
status: partially_done
|
||||
reasoning: "createUser exists but only handles success case"
|
||||
existing_code: "src/actions/createUser.ts"
|
||||
action: "Add error handling for DB failures, validation errors"
|
||||
```
|
||||
|
||||
**ALREADY_DONE** - Task is complete:
|
||||
```yaml
|
||||
- task: "Create users table"
|
||||
status: already_done
|
||||
reasoning: "users table exists with correct schema"
|
||||
existing_code: "migrations/20250101_create_users.sql"
|
||||
action: "Check this task, no work needed"
|
||||
```
|
||||
|
||||
**UNCLEAR** - Task is too vague:
|
||||
```yaml
|
||||
- task: "Improve user flow"
|
||||
status: unclear
|
||||
reasoning: "Ambiguous - what specifically needs improvement?"
|
||||
action: "Refine to specific sub-tasks"
|
||||
refinement:
|
||||
- "Add loading states to user forms"
|
||||
- "Add error toast on user creation failure"
|
||||
- "Add success confirmation modal"
|
||||
```
|
||||
|
||||
### 4. Generate Gap Analysis Report
|
||||
|
||||
Create report showing findings:
|
||||
|
||||
```markdown
|
||||
## Pre-Gap Analysis Results
|
||||
|
||||
**Development Mode:** {greenfield|brownfield|hybrid}
|
||||
|
||||
**Task Analysis:**
|
||||
|
||||
### ✅ Tasks Ready for Implementation ({needed_count})
|
||||
1. {task_1} - {reasoning}
|
||||
2. {task_2} - {reasoning}
|
||||
|
||||
### ⚠️ Tasks Partially Implemented ({partial_count})
|
||||
1. {task_1}
|
||||
- Current: {existing_implementation}
|
||||
- Needed: {what_to_add}
|
||||
- File: {file_path}
|
||||
|
||||
### ✓ Tasks Already Complete ({done_count})
|
||||
1. {task_1}
|
||||
- Evidence: {existing_code_location}
|
||||
- Action: Will check this task
|
||||
|
||||
### 🔍 Tasks Need Refinement ({unclear_count})
|
||||
1. {original_vague_task}
|
||||
- Issue: {why_unclear}
|
||||
- Refined to:
|
||||
- [ ] {specific_sub_task_1}
|
||||
- [ ] {specific_sub_task_2}
|
||||
|
||||
### ➕ Missing Tasks Discovered ({missing_count})
|
||||
1. {missing_task_1} - {why_needed}
|
||||
2. {missing_task_2} - {why_needed}
|
||||
|
||||
**Summary:**
|
||||
- Ready to implement: {needed_count}
|
||||
- Need completion: {partial_count}
|
||||
- Already done: {done_count}
|
||||
- Need refinement: {unclear_count}
|
||||
- Missing tasks: {missing_count}
|
||||
|
||||
**Total work remaining:** {work_count} tasks
|
||||
```
|
||||
|
||||
### 5. Update Story File
|
||||
|
||||
**A. Check already-done tasks:**
|
||||
```markdown
|
||||
- [x] Create users table (verified in gap analysis)
|
||||
```
|
||||
|
||||
**B. Refine unclear tasks:**
|
||||
```markdown
|
||||
~~- [ ] Improve user flow~~ (too vague)
|
||||
|
||||
Refined to:
|
||||
- [ ] Add loading states to user forms
|
||||
- [ ] Add error toast on user creation failure
|
||||
- [ ] Add success confirmation modal
|
||||
```
|
||||
|
||||
**C. Add missing tasks:**
|
||||
```markdown
|
||||
## Tasks (Updated after Pre-Gap Analysis)
|
||||
|
||||
{existing_tasks}
|
||||
|
||||
### Added from Gap Analysis
|
||||
- [ ] {missing_task_1}
|
||||
- [ ] {missing_task_2}
|
||||
```
|
||||
|
||||
**D. Add Gap Analysis section:**
|
||||
```markdown
|
||||
## Gap Analysis
|
||||
|
||||
### Pre-Development Analysis
|
||||
- **Date:** {timestamp}
|
||||
- **Development Type:** {greenfield|brownfield|hybrid}
|
||||
- **Existing Files:** {count}
|
||||
- **New Files:** {count}
|
||||
|
||||
**Findings:**
|
||||
- Tasks ready: {needed_count}
|
||||
- Tasks partially done: {partial_count}
|
||||
- Tasks already complete: {done_count}
|
||||
- Tasks refined: {unclear_count}
|
||||
- Tasks added: {missing_count}
|
||||
|
||||
**Codebase Scan:**
|
||||
{list existing implementations found}
|
||||
|
||||
**Status:** Ready for implementation
|
||||
```
|
||||
|
||||
### 6. Pattern Detection for Smart Batching (NEW!)
|
||||
|
||||
After validating tasks, detect repeating patterns that can be batched:
|
||||
|
||||
```typescript
|
||||
interface TaskPattern {
|
||||
pattern_name: string;
|
||||
pattern_type: "package_install" | "module_registration" | "code_deletion" | "import_update" | "custom";
|
||||
tasks: Task[];
|
||||
batchable: boolean;
|
||||
risk_level: "low" | "medium" | "high";
|
||||
validation_strategy: string;
|
||||
estimated_time_individual: number; // minutes if done one-by-one
|
||||
estimated_time_batched: number; // minutes if batched
|
||||
}
|
||||
```
|
||||
|
||||
**Common Batchable Patterns:**
|
||||
|
||||
**Pattern: Package Installation**
|
||||
```
|
||||
Tasks like:
|
||||
- [ ] Add @company/shared-utils to package.json
|
||||
- [ ] Add @company/validation to package.json
|
||||
- [ ] Add @company/http-client to package.json
|
||||
|
||||
Batchable: YES
|
||||
Risk: LOW
|
||||
Validation: npm install && npm run build
|
||||
Time: 5 min batch vs 15 min individual (3x faster!)
|
||||
```
|
||||
|
||||
**Pattern: Module Registration**
|
||||
```
|
||||
Tasks like:
|
||||
- [ ] Import SharedUtilsModule in app.module.ts
|
||||
- [ ] Import ValidationModule in app.module.ts
|
||||
- [ ] Import HttpClientModule in app.module.ts
|
||||
|
||||
Batchable: YES
|
||||
Risk: LOW
|
||||
Validation: TypeScript compile
|
||||
Time: 10 min batch vs 20 min individual (2x faster!)
|
||||
```
|
||||
|
||||
**Pattern: Code Deletion**
|
||||
```
|
||||
Tasks like:
|
||||
- [ ] Delete src/old-audit.service.ts
|
||||
- [ ] Remove OldAuditModule from imports
|
||||
- [ ] Delete src/old-cache.service.ts
|
||||
|
||||
Batchable: YES
|
||||
Risk: LOW (tests will catch issues)
|
||||
Validation: Build + test suite
|
||||
Time: 15 min batch vs 30 min individual (2x faster!)
|
||||
```
|
||||
|
||||
**Pattern: Business Logic (NOT batchable)**
|
||||
```
|
||||
Tasks like:
|
||||
- [ ] Add circuit breaker fallback for WIS API
|
||||
- [ ] Implement 3-tier caching for user data
|
||||
- [ ] Add audit logging for theme updates
|
||||
|
||||
Batchable: NO
|
||||
Risk: MEDIUM-HIGH (logic varies per case)
|
||||
Validation: Per-task testing
|
||||
Time: Execute individually with full rigor
|
||||
```
|
||||
|
||||
**Detection Algorithm:**
|
||||
|
||||
```bash
|
||||
# For each task, check if it matches a known pattern
|
||||
for task in "${tasks[@]}"; do
|
||||
case "$task" in
|
||||
*"Add @"*"to package.json"*)
|
||||
pattern="package_install"
|
||||
batchable=true
|
||||
;;
|
||||
*"Import"*"Module in app.module"*)
|
||||
pattern="module_registration"
|
||||
batchable=true
|
||||
;;
|
||||
*"Delete"*|*"Remove"*)
|
||||
pattern="code_deletion"
|
||||
batchable=true
|
||||
;;
|
||||
*"circuit breaker"*|*"fallback"*|*"caching for"*)
|
||||
pattern="business_logic"
|
||||
batchable=false
|
||||
;;
|
||||
*)
|
||||
pattern="custom"
|
||||
batchable=false # Default to safe
|
||||
;;
|
||||
esac
|
||||
done
|
||||
```
|
||||
|
||||
**Generate Batching Plan:**
|
||||
|
||||
```markdown
|
||||
## Smart Batching Analysis
|
||||
|
||||
**Detected Patterns:**
|
||||
|
||||
### ✅ Batchable Patterns (Execute Together)
|
||||
1. **Package Installation** (5 tasks)
|
||||
- Add @dealer/audit-logging
|
||||
- Add @dealer/http-client
|
||||
- Add @dealer/caching
|
||||
- Add @dealer/circuit-breaker
|
||||
- Run pnpm install
|
||||
|
||||
Validation: Build succeeds
|
||||
Time: 5 min (vs 10 min individual)
|
||||
Risk: LOW
|
||||
|
||||
2. **Module Registration** (5 tasks)
|
||||
- Import 5 modules
|
||||
- Register in app.module
|
||||
- Configure each
|
||||
|
||||
Validation: TypeScript compile
|
||||
Time: 10 min (vs 20 min individual)
|
||||
Risk: LOW
|
||||
|
||||
### ⚠️ Individual Execution Required
|
||||
3. **Circuit Breaker Logic** (3 tasks)
|
||||
- WIS API fallback strategy
|
||||
- i18n client fallback
|
||||
- Cache fallback
|
||||
|
||||
Reason: Fallback logic varies per API
|
||||
Time: 60 min (cannot batch)
|
||||
Risk: MEDIUM
|
||||
|
||||
**Total Estimated Time:**
|
||||
- With smart batching: ~2.5 hours
|
||||
- Without batching: ~5.5 hours
|
||||
- Savings: 3 hours (54% faster!)
|
||||
|
||||
**Safety:**
|
||||
- Batchable tasks: Validated as a group
|
||||
- Individual tasks: Full rigor maintained
|
||||
- No vibe coding: All validation gates enforced
|
||||
```
|
||||
|
||||
### 7. Handle Approval (Interactive Mode Only)
|
||||
|
||||
**Interactive Mode:**
|
||||
|
||||
Display gap analysis report with conditional batching menu.
|
||||
|
||||
**CRITICAL DECISION LOGIC:**
|
||||
- If `batchable_count > 0 AND time_saved > 0`: Show batching options
|
||||
- If `batchable_count = 0 OR time_saved = 0`: Skip batching options (no benefit)
|
||||
|
||||
**When Batching Has Benefit (time_saved > 0):**
|
||||
|
||||
```
|
||||
Gap Analysis Complete + Smart Batching Plan
|
||||
|
||||
Task Analysis:
|
||||
- {done_count} tasks already complete (will check)
|
||||
- {unclear_count} tasks refined to {refined_count} specific tasks
|
||||
- {missing_count} new tasks added
|
||||
- {needed_count} tasks ready for implementation
|
||||
|
||||
Smart Batching Detected:
|
||||
- {batchable_count} tasks can be batched into {batch_count} pattern groups
|
||||
- {individual_count} tasks require individual execution
|
||||
- Estimated time savings: {time_saved} hours
|
||||
|
||||
Total work: {work_count} tasks
|
||||
Estimated time: {estimated_hours} hours (with batching)
|
||||
|
||||
[A] Accept changes and batching plan
|
||||
[B] Accept but disable batching (slower, safer)
|
||||
[E] Edit tasks manually
|
||||
[H] Halt pipeline
|
||||
```
|
||||
|
||||
**When Batching Has NO Benefit (time_saved = 0):**
|
||||
|
||||
```
|
||||
Gap Analysis Complete
|
||||
|
||||
Task Analysis:
|
||||
- {done_count} tasks already complete (will check)
|
||||
- {unclear_count} tasks refined to {refined_count} specific tasks
|
||||
- {missing_count} new tasks added
|
||||
- {needed_count} tasks ready for implementation
|
||||
|
||||
Smart Batching Analysis:
|
||||
- Batchable patterns detected: 0
|
||||
- Tasks requiring individual execution: {work_count}
|
||||
- Estimated time savings: none (tasks require individual attention)
|
||||
|
||||
Total work: {work_count} tasks
|
||||
Estimated time: {estimated_hours} hours
|
||||
|
||||
[A] Accept changes
|
||||
[E] Edit tasks manually
|
||||
[H] Halt pipeline
|
||||
```
|
||||
|
||||
**Why Skip Batching Option When Benefit = 0:**
|
||||
- Reduces decision fatigue
|
||||
- Prevents pointless "batch vs no-batch" choice when outcome is identical
|
||||
- Cleaner UX when batching isn't applicable
|
||||
|
||||
**Batch Mode:** Auto-accept changes (batching plan applied only if benefit > 0)
|
||||
|
||||
### 8. Update Story File with Batching Plan (Conditional)
|
||||
|
||||
**ONLY add batching plan if `time_saved > 0`.**
|
||||
|
||||
If batching has benefit (time_saved > 0), add batching plan to story file:
|
||||
|
||||
```markdown
|
||||
## Smart Batching Plan
|
||||
|
||||
**Pattern Groups Detected:**
|
||||
|
||||
### Batch 1: Package Installation (5 tasks, 5 min)
|
||||
- [ ] Add @company/shared-utils to package.json
|
||||
- [ ] Add @company/validation to package.json
|
||||
- [ ] Add @company/http-client to package.json
|
||||
- [ ] Add @company/database-client to package.json
|
||||
- [ ] Run npm install
|
||||
|
||||
**Validation:** Build succeeds
|
||||
|
||||
### Batch 2: Module Registration (5 tasks, 10 min)
|
||||
{list tasks}
|
||||
|
||||
### Individual Tasks: Business Logic (15 tasks, 90 min)
|
||||
{list tasks that can't be batched}
|
||||
|
||||
**Time Estimate:**
|
||||
- With batching: {batched_time} hours
|
||||
- Without batching: {individual_time} hours
|
||||
- Savings: {savings} hours
|
||||
```
|
||||
|
||||
If batching has NO benefit (time_saved = 0), **skip this section entirely** and just add gap analysis results.
|
||||
|
||||
### 9. Update Pipeline State
|
||||
|
||||
Update state file:
|
||||
- Add `2` to `stepsCompleted`
|
||||
- Set `lastStep: 2`
|
||||
- Set `steps.step-02-pre-gap-analysis.status: completed`
|
||||
- Record gap analysis results:
|
||||
```yaml
|
||||
gap_analysis:
|
||||
development_type: "{mode}"
|
||||
tasks_ready: {count}
|
||||
tasks_partial: {count}
|
||||
tasks_done: {count}
|
||||
tasks_refined: {count}
|
||||
tasks_added: {count}
|
||||
|
||||
smart_batching:
|
||||
enabled: {true if time_saved > 0, false otherwise}
|
||||
patterns_detected: {count}
|
||||
batchable_tasks: {count}
|
||||
individual_tasks: {count}
|
||||
estimated_time_with_batching: {hours}
|
||||
estimated_time_without_batching: {hours}
|
||||
estimated_savings: {hours}
|
||||
```
|
||||
|
||||
**Note:** `smart_batching.enabled` is set to `false` when batching has no benefit, preventing unnecessary batching plan generation.
|
||||
|
||||
### 10. Present Summary (Conditional Format)
|
||||
|
||||
**When Batching Has Benefit (time_saved > 0):**
|
||||
|
||||
```
|
||||
Pre-Gap Analysis Complete + Smart Batching Plan
|
||||
|
||||
Development Type: {greenfield|brownfield|hybrid}
|
||||
Work Remaining: {work_count} tasks
|
||||
|
||||
Codebase Status:
|
||||
- Existing implementations reviewed: {existing_count}
|
||||
- New implementations needed: {new_count}
|
||||
|
||||
Smart Batching Analysis:
|
||||
- Batchable patterns detected: {batch_count}
|
||||
- Tasks that can be batched: {batchable_count} ({percent}%)
|
||||
- Tasks requiring individual execution: {individual_count}
|
||||
|
||||
Time Estimate:
|
||||
- With smart batching: {batched_time} hours ⚡
|
||||
- Without batching: {individual_time} hours
|
||||
- Time savings: {savings} hours ({savings_percent}% faster!)
|
||||
|
||||
Ready for Implementation
|
||||
```
|
||||
|
||||
**When Batching Has NO Benefit (time_saved = 0):**
|
||||
|
||||
```
|
||||
Pre-Gap Analysis Complete
|
||||
|
||||
Development Type: {greenfield|brownfield|hybrid}
|
||||
Work Remaining: {work_count} tasks
|
||||
|
||||
Codebase Status:
|
||||
- Existing implementations reviewed: {existing_count}
|
||||
- New implementations needed: {new_count}
|
||||
|
||||
Smart Batching Analysis:
|
||||
- Batchable patterns detected: 0
|
||||
- Tasks requiring individual execution: {work_count}
|
||||
- Estimated time: {estimated_hours} hours
|
||||
|
||||
Ready for Implementation
|
||||
```
|
||||
|
||||
**Interactive Mode Menu:**
|
||||
```
|
||||
[C] Continue to Test Writing (step 3)
|
||||
[R] Re-run gap analysis
|
||||
[H] Halt pipeline
|
||||
```
|
||||
|
||||
**Batch Mode:** Auto-continue
|
||||
|
||||
## QUALITY GATE
|
||||
|
||||
Before proceeding:
|
||||
- [ ] All tasks analyzed against codebase
|
||||
- [ ] Vague tasks refined to specific actions
|
||||
- [ ] Already-done tasks checked
|
||||
- [ ] Missing tasks added
|
||||
- [ ] Gap analysis section added to story
|
||||
- [ ] Story file updated with refinements
|
||||
|
||||
## CRITICAL STEP COMPLETION
|
||||
|
||||
**ONLY WHEN** [all tasks analyzed AND story file updated],
|
||||
load and execute `{nextStepFile}` to proceed to test writing (step 3).
|
||||
|
||||
---
|
||||
|
||||
## SUCCESS/FAILURE METRICS
|
||||
|
||||
### ✅ SUCCESS
|
||||
- Every task analyzed against codebase
|
||||
- Vague tasks made specific
|
||||
- Missing work identified and added
|
||||
- Already-done work verified
|
||||
- Gap analysis documented
|
||||
|
||||
### ❌ FAILURE
|
||||
- Skipping codebase scan
|
||||
- Accepting vague tasks ("Add feature X")
|
||||
- Not checking for existing implementations
|
||||
- Missing obvious gaps
|
||||
- No refinement of unclear tasks
|
||||
|
||||
## WHY THIS STEP PREVENTS VIBE CODING
|
||||
|
||||
Pre-gap analysis forces Claude to:
|
||||
1. **Understand existing code** before implementing
|
||||
2. **Be specific** about what to build
|
||||
3. **Verify assumptions** against reality
|
||||
4. **Plan work properly** instead of guessing
|
||||
|
||||
This is especially critical for **brownfield** where vibe coding causes:
|
||||
- Breaking existing functionality
|
||||
- Duplicating existing code
|
||||
- Missing integration points
|
||||
- Ignoring established patterns
|
||||
|
|
@ -0,0 +1,248 @@
|
|||
---
|
||||
name: 'step-03-write-tests'
|
||||
description: 'Write comprehensive tests BEFORE implementation (TDD approach)'
|
||||
|
||||
# Path Definitions
|
||||
workflow_path: '{project-root}/_bmad/bmm/workflows/4-implementation/super-dev-pipeline'
|
||||
|
||||
# File References
|
||||
thisStepFile: '{workflow_path}/steps/step-03-write-tests.md'
|
||||
stateFile: '{state_file}'
|
||||
storyFile: '{story_file}'
|
||||
|
||||
# Next step
|
||||
nextStep: '{workflow_path}/steps/step-04-implement.md'
|
||||
---
|
||||
|
||||
# Step 3: Write Tests (TDD Approach)
|
||||
|
||||
**Goal:** Write comprehensive tests that validate story acceptance criteria BEFORE writing implementation code.
|
||||
|
||||
## Why Test-First?
|
||||
|
||||
1. **Clear requirements**: Writing tests forces clarity about what "done" means
|
||||
2. **Better design**: TDD leads to more testable, modular code
|
||||
3. **Confidence**: Know immediately when implementation is complete
|
||||
4. **Regression safety**: Tests catch future breakage
|
||||
|
||||
## Principles
|
||||
|
||||
- **Test acceptance criteria**: Each AC should have corresponding tests
|
||||
- **Test behavior, not implementation**: Focus on what, not how
|
||||
- **Red-Green-Refactor**: Tests should fail initially (red), then pass when implemented (green)
|
||||
- **Comprehensive coverage**: Unit tests, integration tests, and E2E tests as needed
|
||||
|
||||
---
|
||||
|
||||
## Process
|
||||
|
||||
### 1. Analyze Story Requirements
|
||||
|
||||
```
|
||||
Read {storyFile} completely.
|
||||
|
||||
Extract:
|
||||
- All Acceptance Criteria
|
||||
- All Tasks and Subtasks
|
||||
- All Files in File List
|
||||
- Definition of Done requirements
|
||||
```
|
||||
|
||||
### 2. Determine Test Strategy
|
||||
|
||||
For each acceptance criterion, determine:
|
||||
```
|
||||
Testing Level:
|
||||
- Unit tests: For individual functions/components
|
||||
- Integration tests: For component interactions
|
||||
- E2E tests: For full user workflows
|
||||
|
||||
Test Framework:
|
||||
- Vitest or Jest (JavaScript/TypeScript)
|
||||
- PyTest (Python)
|
||||
- xUnit (C#/.NET)
|
||||
- JUnit (Java)
|
||||
- Etc. based on project stack
|
||||
```
|
||||
|
||||
### 3. Write Test Stubs
|
||||
|
||||
Create test files FIRST (before implementation):
|
||||
|
||||
```bash
|
||||
Example for React component:
|
||||
__tests__/components/UserDashboard.test.tsx
|
||||
|
||||
Example for API endpoint:
|
||||
__tests__/api/users.test.ts
|
||||
|
||||
Example for service:
|
||||
__tests__/services/auth.test.ts
|
||||
```
|
||||
|
||||
### 4. Write Test Cases
|
||||
|
||||
For each acceptance criterion:
|
||||
|
||||
```typescript
|
||||
// Example: React component test (Vitest + React Testing Library)
import { describe, it, expect, vi } from 'vitest';
import { render, screen, fireEvent, waitFor } from '@testing-library/react';
import { UserDashboard } from '@/components/UserDashboard'; // illustrative import path
// mockUser is assumed to come from a shared test fixture

describe('UserDashboard', () => {
|
||||
describe('AC1: Display user profile information', () => {
|
||||
it('should render user name', () => {
|
||||
render(<UserDashboard user={mockUser} />);
|
||||
expect(screen.getByText('John Doe')).toBeInTheDocument();
|
||||
});
|
||||
|
||||
it('should render user email', () => {
|
||||
render(<UserDashboard user={mockUser} />);
|
||||
expect(screen.getByText('john@example.com')).toBeInTheDocument();
|
||||
});
|
||||
|
||||
it('should render user avatar', () => {
|
||||
render(<UserDashboard user={mockUser} />);
|
||||
expect(screen.getByAltText('User avatar')).toBeInTheDocument();
|
||||
});
|
||||
});
|
||||
|
||||
describe('AC2: Allow user to edit profile', () => {
|
||||
it('should show edit button when not in edit mode', () => {
|
||||
render(<UserDashboard user={mockUser} />);
|
||||
expect(screen.getByRole('button', { name: /edit/i })).toBeInTheDocument();
|
||||
});
|
||||
|
||||
it('should enable edit mode when edit button clicked', () => {
|
||||
render(<UserDashboard user={mockUser} />);
|
||||
fireEvent.click(screen.getByRole('button', { name: /edit/i }));
|
||||
expect(screen.getByRole('textbox', { name: /name/i })).toBeInTheDocument();
|
||||
});
|
||||
|
||||
it('should save changes when save button clicked', async () => {
|
||||
const onSave = vi.fn();
|
||||
render(<UserDashboard user={mockUser} onSave={onSave} />);
|
||||
|
||||
fireEvent.click(screen.getByRole('button', { name: /edit/i }));
|
||||
fireEvent.change(screen.getByRole('textbox', { name: /name/i }), {
|
||||
target: { value: 'Jane Doe' }
|
||||
});
|
||||
fireEvent.click(screen.getByRole('button', { name: /save/i }));
|
||||
|
||||
await waitFor(() => {
|
||||
expect(onSave).toHaveBeenCalledWith({ ...mockUser, name: 'Jane Doe' });
|
||||
});
|
||||
});
|
||||
});
|
||||
});
|
||||
```
|
||||
|
||||
### 5. Verify Tests Fail (Red Phase)
|
||||
|
||||
```bash
|
||||
# Run tests - they SHOULD fail because implementation doesn't exist yet
|
||||
npm test
|
||||
|
||||
# Expected output:
|
||||
# ❌ FAIL __tests__/components/UserDashboard.test.tsx
|
||||
# UserDashboard
|
||||
# AC1: Display user profile information
|
||||
# ✕ should render user name (5ms)
|
||||
# ✕ should render user email (3ms)
|
||||
# ✕ should render user avatar (2ms)
|
||||
#
|
||||
# This is GOOD! Tests failing = requirements are clear
|
||||
```
|
||||
|
||||
**If tests pass unexpectedly:**
|
||||
```
|
||||
⚠️ WARNING: Some tests are passing before implementation!
|
||||
|
||||
This means either:
|
||||
1. Functionality already exists (brownfield - verify and document)
|
||||
2. Tests are not actually testing the new requirements
|
||||
3. Tests have mocking issues (testing mocks instead of real code)
|
||||
|
||||
Review and fix before proceeding.
|
||||
```
|
||||
|
||||
### 6. Document Test Coverage
|
||||
|
||||
Create test coverage report:
|
||||
```yaml
|
||||
Test Coverage Summary:
|
||||
Acceptance Criteria: {total_ac_count}
|
||||
Acceptance Criteria with Tests: {tested_ac_count}
|
||||
Coverage: {coverage_percentage}%
|
||||
|
||||
Tasks: {total_task_count}
|
||||
Tasks with Tests: {tested_task_count}
|
||||
Coverage: {task_coverage_percentage}%
|
||||
|
||||
Test Files Created:
|
||||
- {test_file_1}
|
||||
- {test_file_2}
|
||||
- {test_file_3}
|
||||
|
||||
Total Test Cases: {test_case_count}
|
||||
```
|
||||
|
||||
### 7. Commit Tests
|
||||
|
||||
```bash
|
||||
git add {test_files}
|
||||
git commit -m "test(story-{story_id}): add tests for {story_title}
|
||||
|
||||
Write comprehensive tests for all acceptance criteria:
|
||||
{list_of_acs}
|
||||
|
||||
Test coverage:
|
||||
- {tested_ac_count}/{total_ac_count} ACs covered
|
||||
- {test_case_count} test cases
|
||||
- Unit tests: {unit_test_count}
|
||||
- Integration tests: {integration_test_count}
|
||||
- E2E tests: {e2e_test_count}
|
||||
|
||||
Tests currently failing (red phase) - expected behavior.
|
||||
Will implement functionality in next step."
|
||||
```
|
||||
|
||||
### 8. Update State
|
||||
|
||||
```yaml
|
||||
# Update {stateFile}
|
||||
current_step: 3
|
||||
tests_written: true
|
||||
test_files: [{test_file_list}]
|
||||
test_coverage: {coverage_percentage}%
|
||||
tests_status: "failing (red phase - expected)"
|
||||
ready_for_implementation: true
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Quality Checks
|
||||
|
||||
Before proceeding to implementation:
|
||||
|
||||
✅ **All acceptance criteria have corresponding tests**
|
||||
✅ **Tests are comprehensive (happy path + edge cases + error cases)**
|
||||
✅ **Tests follow project testing conventions**
|
||||
✅ **Tests are isolated and don't depend on each other**
|
||||
✅ **Tests have clear, descriptive names**
|
||||
✅ **Mock data is realistic and well-organized**
|
||||
✅ **Tests are failing for the right reasons (not implemented yet)**
|
||||
|
||||
---
|
||||
|
||||
## Skip Conditions
|
||||
|
||||
This step can be skipped if:
|
||||
- Complexity level = "micro" AND tasks ≤ 2
|
||||
- Story is documentation-only (no code changes)
|
||||
- Story is pure refactoring with existing comprehensive tests
|
||||
|
||||
---
|
||||
|
||||
## Next Step
|
||||
|
||||
Proceed to **Step 4: Implement** ({nextStep})
|
||||
|
||||
Now that tests are written and failing (red phase), implement the functionality to make them pass (green phase).
|
||||
|
|
@ -0,0 +1,515 @@
|
|||
---
|
||||
name: 'step-04-implement'
|
||||
description: 'HOSPITAL-GRADE implementation - safety-critical code with comprehensive testing'
|
||||
|
||||
# Path Definitions
|
||||
workflow_path: '{project-root}/_bmad/bmm/workflows/4-implementation/super-dev-pipeline'
|
||||
|
||||
# File References
|
||||
thisStepFile: '{workflow_path}/steps/step-04-implement.md'
|
||||
nextStepFile: '{workflow_path}/steps/step-05-post-validation.md'
|
||||
|
||||
# Role Continue
|
||||
role: dev
|
||||
---
|
||||
|
||||
# Step 4: Implement Story (Hospital-Grade Quality)
|
||||
|
||||
## ROLE CONTINUATION
|
||||
|
||||
**Continuing as DEV (Developer) perspective.**
|
||||
|
||||
You are now implementing the story tasks with adaptive methodology based on development type.
|
||||
|
||||
## STEP GOAL
|
||||
|
||||
Implement all unchecked tasks using appropriate methodology:
|
||||
1. **Greenfield**: TDD approach (write tests first, then implement)
|
||||
2. **Brownfield**: Refactor approach (understand existing, modify carefully)
|
||||
3. **Hybrid**: Mix both approaches as appropriate per task
|
||||
|
||||
## ⚕️ HOSPITAL-GRADE CODE STANDARDS ⚕️
|
||||
|
||||
**CRITICAL: Lives May Depend on This Code**
|
||||
|
||||
This code may be used in healthcare/safety-critical environments.
|
||||
Every line must meet hospital-grade reliability standards.
|
||||
|
||||
### Safety-Critical Quality Requirements:
|
||||
|
||||
✅ **CORRECTNESS OVER SPEED**
|
||||
- Take 5 hours to do it right, not 1 hour to do it poorly
|
||||
- Double-check ALL logic, especially edge cases
|
||||
- ZERO tolerance for shortcuts or "good enough"
|
||||
|
||||
✅ **DEFENSIVE PROGRAMMING**
|
||||
- Validate ALL inputs (never trust external data)
|
||||
- Handle ALL error cases explicitly
|
||||
- Fail safely (graceful degradation, never silent failures)
|
||||
|
||||
✅ **COMPREHENSIVE TESTING**
|
||||
- Test happy path AND all edge cases
|
||||
- Test error handling (what happens when things fail?)
|
||||
- Test boundary conditions (min/max values, empty/null)
|
||||
|
||||
✅ **CODE CLARITY**
|
||||
- Prefer readability over cleverness
|
||||
- Comment WHY, not what (code shows what, comments explain why)
|
||||
- No magic numbers (use named constants)
|
||||
|
||||
✅ **ROBUST ERROR HANDLING**
|
||||
- Never swallow errors silently
|
||||
- Log errors with context (what, when, why)
|
||||
- Provide actionable error messages
|
||||
|
||||
⚠️ **WHEN IN DOUBT: ASK, DON'T GUESS**
|
||||
If you're uncertain about a requirement, HALT and ask for clarification.
|
||||
Guessing in safety-critical code is UNACCEPTABLE.
|
||||
|
||||
---
|
||||
|
||||
## MANDATORY EXECUTION RULES
|
||||
|
||||
### Implementation Principles
|
||||
|
||||
- **DEFAULT: ONE TASK AT A TIME** - Execute tasks individually unless smart batching applies
|
||||
- **SMART BATCHING EXCEPTION** - Low-risk patterns (package installs, imports) may batch
|
||||
- **RUN TESTS FREQUENTLY** - After each task or batch completion
|
||||
- **FOLLOW PROJECT PATTERNS** - Never invent new patterns
|
||||
- **NO VIBE CODING** - Follow the sequence exactly
|
||||
- **VERIFY BEFORE PROCEEDING** - Confirm success before next task/batch
|
||||
|
||||
### Adaptive Methodology
|
||||
|
||||
**For Greenfield tasks (new files):**
|
||||
1. Write test first (if applicable)
|
||||
2. Implement minimal code to pass
|
||||
3. Verify test passes
|
||||
4. Move to next task
|
||||
|
||||
**For Brownfield tasks (existing files):**
|
||||
1. Read and understand existing code
|
||||
2. Write test for new behavior (if applicable)
|
||||
3. Modify existing code carefully
|
||||
4. Verify all tests pass (old and new)
|
||||
5. Move to next task
|
||||
|
||||
## EXECUTION SEQUENCE
|
||||
|
||||
### 1. Review Refined Tasks
|
||||
|
||||
Load story file and get all unchecked tasks (from pre-gap analysis).
|
||||
|
||||
Display:
|
||||
```
|
||||
Implementation Plan
|
||||
|
||||
Total tasks: {unchecked_count}
|
||||
|
||||
Development breakdown:
|
||||
- Greenfield tasks: {new_file_tasks}
|
||||
- Brownfield tasks: {existing_file_tasks}
|
||||
- Test tasks: {test_tasks}
|
||||
- Database tasks: {db_tasks}
|
||||
|
||||
Starting implementation loop...
|
||||
```
|
||||
|
||||
### 2. Load Smart Batching Plan
|
||||
|
||||
Load batching plan from story file (created in Step 2):
|
||||
|
||||
Extract (a retrieval sketch follows this list):
|
||||
- Pattern batches (groups of similar tasks)
|
||||
- Individual tasks (require one-by-one execution)
|
||||
- Validation strategy per batch
|
||||
- Time estimates
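
A retrieval sketch, assuming the `## Smart Batching Plan` heading that Step 2 adds to the story file:

```bash
# Print the Smart Batching Plan section (everything up to the next level-2 heading)
awk '/^## Smart Batching Plan/{flag=1; next} /^## /{flag=0} flag' "$story_file"
```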
|
||||
|
||||
### 3. Implementation Strategy Selection
|
||||
|
||||
**If smart batching plan exists:**
|
||||
```
|
||||
Smart Batching Enabled
|
||||
|
||||
Execution Plan:
|
||||
- {batch_count} pattern batches (execute together)
|
||||
- {individual_count} individual tasks (execute separately)
|
||||
|
||||
Proceeding with pattern-based execution...
|
||||
```
|
||||
|
||||
**If no batching plan:**
|
||||
```
|
||||
Standard Execution (One-at-a-Time)
|
||||
|
||||
All tasks will be executed individually with full rigor.
|
||||
```
|
||||
|
||||
### 4. Pattern Batch Execution (NEW!)
|
||||
|
||||
**For EACH pattern batch (if batching enabled):**
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
Batch {n}/{total_batches}: {pattern_name}
|
||||
Tasks in batch: {task_count}
|
||||
Type: {pattern_type}
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
|
||||
**A. Display Batch Tasks:**
|
||||
```
|
||||
Executing together:
|
||||
1. {task_1}
|
||||
2. {task_2}
|
||||
3. {task_3}
|
||||
...
|
||||
|
||||
Validation strategy: {validation_strategy}
|
||||
Estimated time: {estimated_minutes} minutes
|
||||
```
|
||||
|
||||
**B. Execute All Tasks in Batch:**
|
||||
|
||||
**Example: Package Installation Batch**
|
||||
```bash
|
||||
# Execute all package installations together
|
||||
npm pkg set dependencies.@company/shared-utils="^1.0.0"
|
||||
npm pkg set dependencies.@company/validation="^2.0.0"
|
||||
npm pkg set dependencies.@company/http-client="^1.5.0"
|
||||
npm pkg set dependencies.@company/database-client="^3.0.0"
|
||||
|
||||
# Single install command
|
||||
npm install
|
||||
```
|
||||
|
||||
**Example: Module Registration Batch**
|
||||
```typescript
|
||||
// Add all imports at once
import { Module } from '@nestjs/common'; // assumes a NestJS app module
import { SharedUtilsModule } from '@company/shared-utils';
|
||||
import { ValidationModule } from '@company/validation';
|
||||
import { HttpClientModule } from '@company/http-client';
|
||||
import { DatabaseModule } from '@company/database-client';
|
||||
|
||||
// Register all modules together
|
||||
@Module({
|
||||
imports: [
|
||||
SharedUtilsModule.forRoot(),
|
||||
ValidationModule.forRoot(validationConfig),
|
||||
HttpClientModule.forRoot(httpConfig),
|
||||
DatabaseModule.forRoot(dbConfig),
|
||||
// ... existing imports
|
||||
]
|
||||
})
// validationConfig, httpConfig, dbConfig are assumed to be defined elsewhere in the app
export class AppModule {}
```
|
||||
|
||||
**C. Validate Entire Batch:**
|
||||
|
||||
Run validation strategy for this pattern:
|
||||
```bash
|
||||
# For package installs
|
||||
npm run build
|
||||
|
||||
# For module registrations
|
||||
tsc --noEmit
|
||||
|
||||
# For code deletions
|
||||
npm test -- --run && npm run lint
|
||||
```
|
||||
|
||||
**D. If Validation Succeeds:**
|
||||
```
|
||||
✅ Batch Complete
|
||||
|
||||
All {task_count} tasks in batch executed successfully!
|
||||
|
||||
Marking all tasks complete:
|
||||
- [x] {task_1}
|
||||
- [x] {task_2}
|
||||
- [x] {task_3}
|
||||
...
|
||||
|
||||
Time: {actual_time} minutes
|
||||
```
|
||||
|
||||
**E. If Validation Fails:**
|
||||
```
|
||||
❌ Batch Validation Failed
|
||||
|
||||
Error: {error_message}
|
||||
|
||||
Falling back to one-at-a-time execution for this batch...
|
||||
```
|
||||
|
||||
**Fallback to individual execution:**
|
||||
- Execute each task in the failed batch one-by-one
|
||||
- Identify which task caused the failure
|
||||
- Fix and continue
|
||||
|
||||
### 5. Individual Task Execution
|
||||
|
||||
**For EACH individual task (non-batchable):**
|
||||
|
||||
```
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
Task {n}/{total}: {task_description}
|
||||
Type: {greenfield|brownfield}
|
||||
Reason: {why_not_batchable}
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
```
|
||||
|
||||
**A. Identify File(s) Affected:**
|
||||
- New file to create?
|
||||
- Existing file to modify?
|
||||
- Test file to add/update?
|
||||
- Migration file to create?
|
||||
|
||||
**B. For NEW FILES (Greenfield):**
|
||||
|
||||
```
|
||||
1. Determine file path and structure
|
||||
2. Identify dependencies needed
|
||||
3. Write test first (if applicable):
|
||||
- Create test file
|
||||
- Write failing test
|
||||
- Run test, confirm RED
|
||||
|
||||
4. Implement code:
|
||||
- Create file
|
||||
- Add minimal implementation
|
||||
- Follow project patterns from project-context.md
|
||||
|
||||
5. Run test:
|
||||
npm test -- --run
|
||||
Confirm GREEN
|
||||
|
||||
6. Verify:
|
||||
- File created
|
||||
- Exports correct
|
||||
- Test passes
|
||||
```
|
||||
|
||||
**C. For EXISTING FILES (Brownfield):**
|
||||
|
||||
```
|
||||
1. Read existing file completely
|
||||
2. Understand current implementation
|
||||
3. Identify where to make changes
|
||||
4. Check if tests exist for this file
|
||||
|
||||
5. Add test for new behavior (if applicable):
|
||||
- Find or create test file
|
||||
- Add test for new/changed behavior
|
||||
- Run test, may fail or pass depending on change
|
||||
|
||||
6. Modify existing code:
|
||||
- Make minimal changes
|
||||
- Preserve existing functionality
|
||||
- Follow established patterns in the file
|
||||
- Don't refactor unrelated code
|
||||
|
||||
7. Run ALL tests (not just new ones):
|
||||
npm test -- --run
|
||||
Confirm all tests pass
|
||||
|
||||
8. Verify:
|
||||
- Changes made as planned
|
||||
- No regressions (all old tests pass)
|
||||
- New behavior works (new tests pass)
|
||||
```
|
||||
|
||||
**D. For DATABASE TASKS:**
|
||||
|
||||
```
|
||||
1. Create migration file:
|
||||
npx supabase migration new {description}
|
||||
|
||||
2. Write migration SQL:
|
||||
- Create/alter tables
|
||||
- Add RLS policies
|
||||
- Add indexes
|
||||
|
||||
3. Apply migration:
|
||||
npx supabase db push
|
||||
|
||||
4. Verify schema:
|
||||
mcp__supabase__list_tables
|
||||
Confirm changes applied
|
||||
|
||||
5. Generate types:
|
||||
npx supabase gen types typescript --local
|
||||
```
|
||||
|
||||
**E. For TEST TASKS:**
|
||||
|
||||
```
|
||||
1. Identify what to test
|
||||
2. Find or create test file
|
||||
3. Write test with clear assertions
|
||||
4. Run test:
|
||||
npm test -- --run --grep "{test_name}"
|
||||
|
||||
5. Verify test is meaningful (not placeholder)
|
||||
```
|
||||
|
||||
**F. Check Task Complete:**
|
||||
|
||||
After implementing task, verify:
|
||||
- [ ] Code exists where expected
|
||||
- [ ] Tests pass
|
||||
- [ ] No TypeScript errors
|
||||
- [ ] Follows project patterns
|
||||
|
||||
**Mark task complete in story file:**
|
||||
```markdown
|
||||
- [x] {task_description}
|
||||
```
|
||||
|
||||
**Update state file with progress.**
|
||||
|
||||
### 6. Handle Errors Gracefully
|
||||
|
||||
**If implementation fails:**
|
||||
|
||||
```
|
||||
⚠️ Task failed: {task_description}
|
||||
|
||||
Error: {error_message}
|
||||
|
||||
Options:
|
||||
1. Debug and retry
|
||||
2. Skip and document blocker
|
||||
3. Simplify approach
|
||||
|
||||
DO NOT vibe code or guess!
|
||||
Follow error systematically.
|
||||
```
|
||||
|
||||
### 7. Run Full Test Suite
|
||||
|
||||
After ALL tasks completed:
|
||||
|
||||
```bash
|
||||
npm test -- --run
|
||||
npm run lint
|
||||
npm run build
|
||||
```
|
||||
|
||||
**All must pass before proceeding.**
|
||||
|
||||
### 8. Verify Task Completion
|
||||
|
||||
Re-read story file and count:
|
||||
- Tasks completed this session: {count}
|
||||
- Tasks remaining: {should be 0}
|
||||
- All checked: {should be true}
|
||||
|
||||
### 9. Update Pipeline State
|
||||
|
||||
Update state file:
|
||||
- Add `3` to `stepsCompleted`
|
||||
- Set `lastStep: 3`
|
||||
- Set `steps.step-03-implement.status: completed`
|
||||
- Record:
|
||||
```yaml
|
||||
implementation:
|
||||
files_created: {count}
|
||||
files_modified: {count}
|
||||
migrations_applied: {count}
|
||||
tests_added: {count}
|
||||
tasks_completed: {count}
|
||||
```
|
||||
|
||||
### 10. Display Summary
|
||||
|
||||
```
|
||||
Implementation Complete
|
||||
|
||||
Tasks Completed: {completed_count}
|
||||
|
||||
Files:
|
||||
- Created: {created_files}
|
||||
- Modified: {modified_files}
|
||||
|
||||
Migrations:
|
||||
- {migration_1}
|
||||
- {migration_2}
|
||||
|
||||
Tests:
|
||||
- All passing: {pass_count}/{total_count}
|
||||
- New tests added: {new_test_count}
|
||||
|
||||
Build Status:
|
||||
- Lint: ✓ Clean
|
||||
- TypeScript: ✓ No errors
|
||||
- Build: ✓ Success
|
||||
|
||||
Ready for Post-Validation
|
||||
```
|
||||
|
||||
**Interactive Mode Menu:**
|
||||
```
|
||||
[C] Continue to Post-Validation
|
||||
[T] Run tests again
|
||||
[B] Run build again
|
||||
[H] Halt pipeline
|
||||
```
|
||||
|
||||
**Batch Mode:** Auto-continue
|
||||
|
||||
## QUALITY GATE
|
||||
|
||||
Before proceeding:
|
||||
- [ ] All unchecked tasks completed
|
||||
- [ ] All tests pass
|
||||
- [ ] Lint clean
|
||||
- [ ] Build succeeds
|
||||
- [ ] No TypeScript errors
|
||||
- [ ] Followed project patterns
|
||||
- [ ] **No vibe coding occurred**
|
||||
|
||||
## CRITICAL STEP COMPLETION
|
||||
|
||||
**ONLY WHEN** [all tasks complete AND all tests pass AND lint clean AND build succeeds],
|
||||
load and execute `{nextStepFile}` for post-validation.
|
||||
|
||||
---
|
||||
|
||||
## SUCCESS/FAILURE METRICS
|
||||
|
||||
### ✅ SUCCESS
|
||||
- All tasks implemented one at a time
|
||||
- Tests pass for each task
|
||||
- Brownfield code modified carefully
|
||||
- No regressions introduced
|
||||
- Project patterns followed
|
||||
- Build and lint clean
|
||||
- **Disciplined execution maintained**
|
||||
|
||||
### ❌ FAILURE
|
||||
- Vibe coding (guessing implementation)
|
||||
- Batching multiple tasks
|
||||
- Not running tests per task
|
||||
- Breaking existing functionality
|
||||
- Inventing new patterns
|
||||
- Skipping verification
|
||||
- **Deviating from step sequence**
|
||||
|
||||
## ANTI-VIBE-CODING ENFORCEMENT
|
||||
|
||||
This step enforces discipline by:
|
||||
|
||||
1. **One task at a time** - Can't batch or optimize
|
||||
2. **Test after each task** - Immediate verification
|
||||
3. **Follow existing patterns** - No invention
|
||||
4. **Brownfield awareness** - Read existing code first
|
||||
5. **Frequent verification** - Run tests, lint, build
|
||||
|
||||
**Even at 200K tokens, you MUST:**
|
||||
- ✅ Implement ONE task
|
||||
- ✅ Run tests
|
||||
- ✅ Verify it works
|
||||
- ✅ Mark task complete
|
||||
- ✅ Move to next task
|
||||
|
||||
**NO shortcuts. NO optimization. NO vibe coding.**
|
||||
|
|
@ -0,0 +1,450 @@
|
|||
---
|
||||
name: 'step-04-post-validation'
|
||||
description: 'Verify completed tasks against codebase reality (catch false positives)'
|
||||
|
||||
# Path Definitions
|
||||
workflow_path: '{project-root}/_bmad/bmm/workflows/4-implementation/super-dev-pipeline'
|
||||
|
||||
# File References
|
||||
thisStepFile: '{workflow_path}/steps/step-04-post-validation.md'
|
||||
nextStepFile: '{workflow_path}/steps/step-05-code-review.md'
|
||||
prevStepFile: '{workflow_path}/steps/step-03-implement.md'
|
||||
|
||||
# Role Switch
|
||||
role: dev
|
||||
requires_fresh_context: false # Continue from implementation context
|
||||
---
|
||||
|
||||
# Step 4: Post-Implementation Validation
|
||||
|
||||
## ROLE CONTINUATION - VERIFICATION MODE
|
||||
|
||||
**Continuing as DEV but switching to VERIFICATION mindset.**
|
||||
|
||||
You are now verifying that completed work actually exists in the codebase.
|
||||
This catches the common problem of tasks being marked [x] while the implementation is actually incomplete.
|
||||
|
||||
## STEP GOAL
|
||||
|
||||
Verify all completed tasks against codebase reality:
|
||||
1. Re-read story file and extract completed tasks
|
||||
2. For each completed task, identify what should exist
|
||||
3. Use codebase search tools to verify existence
|
||||
4. Run tests to verify they actually pass
|
||||
5. Identify false positives (marked done but not actually done)
|
||||
6. If gaps found, uncheck tasks and add missing work
|
||||
7. Re-run implementation if needed
|
||||
|
||||
## MANDATORY EXECUTION RULES
|
||||
|
||||
### Verification Principles
|
||||
|
||||
- **TRUST NOTHING** - Verify every completed task
|
||||
- **CHECK EXISTENCE** - Files, functions, components must exist
|
||||
- **CHECK COMPLETENESS** - Not just existence, but full implementation
|
||||
- **TEST VERIFICATION** - Claimed test coverage must be real
|
||||
- **NO ASSUMPTIONS** - Re-scan the codebase with fresh eyes
|
||||
|
||||
### What to Verify
|
||||
|
||||
For each task marked [x]:
|
||||
- Files mentioned exist at correct paths
|
||||
- Functions/components declared and exported
|
||||
- Tests exist and actually pass
|
||||
- Database migrations applied
|
||||
- API endpoints respond correctly
|
||||
|
||||
## EXECUTION SEQUENCE
|
||||
|
||||
### 1. Load Story and Extract Completed Tasks
|
||||
|
||||
Load story file: `{story_file}`
|
||||
|
||||
Extract all tasks from story that are marked [x]:
|
||||
```regex
|
||||
- \[x\] (.+)
|
||||
```
|
||||
|
||||
Build list of `completed_tasks` to verify.
|
||||
|
||||
### 2. Categorize Tasks by Type
|
||||
|
||||
For each completed task, determine what needs verification:
|
||||
|
||||
**File Creation Tasks:**
|
||||
- Pattern: "Create {file_path}"
|
||||
- Verify: File exists at path
|
||||
|
||||
**Component/Function Tasks:**
|
||||
- Pattern: "Add {name} function/component"
|
||||
- Verify: Symbol exists and is exported
|
||||
|
||||
**Test Tasks:**
|
||||
- Pattern: "Add test for {feature}"
|
||||
- Verify: Test file exists and test passes
|
||||
|
||||
**Database Tasks:**
|
||||
- Pattern: "Add {table} table", "Create migration"
|
||||
- Verify: Migration file exists, schema matches
|
||||
|
||||
**API Tasks:**
|
||||
- Pattern: "Create {endpoint} endpoint"
|
||||
- Verify: Route file exists, handler implemented
|
||||
|
||||
**UI Tasks:**
|
||||
- Pattern: "Add {element} to UI"
|
||||
- Verify: Component has data-testid attribute
|
||||
|
||||
### 3. Verify File Existence
|
||||
|
||||
For all file-related tasks:
|
||||
|
||||
```bash
|
||||
# Use Glob to find files
|
||||
glob: "**/{mentioned_filename}"
|
||||
```
|
||||
|
||||
**Check:**
|
||||
- [ ] File exists
|
||||
- [ ] File is not empty
|
||||
- [ ] File has expected exports
|
||||
|
||||
**False Positive Indicators:**
|
||||
- File doesn't exist
|
||||
- File exists but is empty
|
||||
- File exists but missing expected symbols
|
||||
|
||||
### 4. Verify Function/Component Implementation
|
||||
|
||||
For code implementation tasks:
|
||||
|
||||
```bash
|
||||
# Use Grep to find symbols
|
||||
grep: "{function_name|component_name}"
|
||||
glob: "**/*.{ts,tsx}"
|
||||
output_mode: "content"
|
||||
```
|
||||
|
||||
**Check:**
|
||||
- [ ] Symbol is declared
|
||||
- [ ] Symbol is exported
|
||||
- [ ] Implementation is not a stub/placeholder
|
||||
- [ ] Required logic is present
|
||||
|
||||
**False Positive Indicators:**
|
||||
- Symbol not found
|
||||
- Symbol exists but marked TODO
|
||||
- Symbol exists but throws "Not implemented"
|
||||
- Symbol exists but returns empty/null
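
A quick stub check along these lines (sketch; the symbol name is hypothetical):

```bash
# Locate the symbol and confirm it is declared/exported
grep -rn "deleteUser" --include='*.ts' --include='*.tsx' src/

# Flag likely stubs near the symbol
grep -rn -E "TODO|Not implemented" --include='*.ts' --include='*.tsx' src/
```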
|
||||
|
||||
### 5. Verify Test Coverage
|
||||
|
||||
For all test-related tasks:
|
||||
|
||||
```bash
|
||||
# Find test files
|
||||
glob: "**/*.test.{ts,tsx}"
|
||||
glob: "**/*.spec.{ts,tsx}"
|
||||
|
||||
# Run specific tests
|
||||
npm test -- --run --grep "{feature_name}"
|
||||
```
|
||||
|
||||
**Check:**
|
||||
- [ ] Test file exists
|
||||
- [ ] Test describes the feature
|
||||
- [ ] Test actually runs (not skipped)
|
||||
- [ ] Test passes (GREEN)
|
||||
|
||||
**False Positive Indicators:**
|
||||
- No test file found
|
||||
- Test exists but skipped (it.skip)
|
||||
- Test exists but fails
|
||||
- Test exists but doesn't test the feature (placeholder)
|
||||
|
||||
### 6. Verify Database Changes
|
||||
|
||||
For database migration tasks:
|
||||
|
||||
```bash
|
||||
# Find migration files
|
||||
glob: "**/migrations/*.sql"
|
||||
|
||||
# Check Supabase schema
|
||||
mcp__supabase__list_tables
|
||||
```
|
||||
|
||||
**Check:**
|
||||
- [ ] Migration file exists
|
||||
- [ ] Migration has been applied
|
||||
- [ ] Table/column exists in schema
|
||||
- [ ] RLS policies are present
|
||||
|
||||
**False Positive Indicators:**
|
||||
- Migration file missing
|
||||
- Migration not applied to database
|
||||
- Table/column doesn't exist
|
||||
- RLS policies missing
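
One possible cross-check, assuming this project uses the Supabase CLI (verify the exact commands against the installed version):

```bash
# Migration files present locally
ls supabase/migrations/*.sql

# Compare local migrations against what has actually been applied
npx supabase migration list
```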
|
||||
|
||||
### 7. Verify API Endpoints
|
||||
|
||||
For API endpoint tasks:
|
||||
|
||||
```bash
|
||||
# Find route files
|
||||
glob: "**/app/api/**/{endpoint}/route.ts"
|
||||
grep: "export async function {METHOD}"
|
||||
```
|
||||
|
||||
**Check:**
|
||||
- [ ] Route file exists
|
||||
- [ ] Handler function implemented
|
||||
- [ ] Returns proper Response type
|
||||
- [ ] Error handling present
|
||||
|
||||
**False Positive Indicators:**
|
||||
- Route file doesn't exist
|
||||
- Handler throws "Not implemented"
|
||||
- Handler returns stub response
|
||||
|
||||
### 8. Run Full Verification
|
||||
|
||||
Execute verification for ALL completed tasks:
|
||||
|
||||
```typescript
|
||||
interface VerificationResult {
|
||||
task: string;
|
||||
status: "verified" | "false_positive";
|
||||
evidence: string;
|
||||
missing?: string;
|
||||
}
|
||||
|
||||
const results: VerificationResult[] = [];
|
||||
|
||||
for (const task of completed_tasks) {
|
||||
const result = await verifyTask(task);
|
||||
results.push(result);
|
||||
}
|
||||
```
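
The loop above calls a `verifyTask` helper that is not defined in this workflow. A minimal sketch of what it might do (file-existence and stub checks only; assumes Node.js and the `VerificationResult` interface above):

```typescript
import { existsSync, readFileSync } from 'node:fs';

// Sketch only: verify a "Create <path>" task by checking the file exists and is not a stub
async function verifyTask(task: string): Promise<VerificationResult> {
  const match = task.match(/Create (\S+)/);
  if (!match) {
    return { task, status: "verified", evidence: "No file path detected; verify manually" };
  }
  const path = match[1];
  if (!existsSync(path)) {
    return { task, status: "false_positive", evidence: `No file found at ${path}`, missing: path };
  }
  const content = readFileSync(path, 'utf8');
  if (content.trim().length === 0 || content.includes('TODO')) {
    return { task, status: "false_positive", evidence: `${path} exists but looks like a stub`, missing: 'real implementation' };
  }
  return { task, status: "verified", evidence: `${path} exists (${content.length} bytes)` };
}
```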
|
||||
|
||||
### 9. Analyze Verification Results
|
||||
|
||||
Count results:
|
||||
```
|
||||
Total Verified: {verified_count}
|
||||
False Positives: {false_positive_count}
|
||||
```
|
||||
|
||||
### 10. Handle False Positives
|
||||
|
||||
**IF false positives found (count > 0):**
|
||||
|
||||
Display:
|
||||
```
|
||||
⚠️ POST-IMPLEMENTATION GAPS DETECTED
|
||||
|
||||
Tasks marked complete but implementation incomplete:
|
||||
|
||||
{for each false_positive}
|
||||
- [ ] {task_description}
|
||||
Missing: {what_is_missing}
|
||||
Evidence: {grep/glob results}
|
||||
|
||||
{add new tasks for missing work}
|
||||
- [ ] Actually implement {missing_part}
|
||||
```
|
||||
|
||||
**Actions:**
|
||||
1. Uncheck false positive tasks in story file
|
||||
2. Add new tasks for the missing work
|
||||
3. Update "Gap Analysis" section in story
|
||||
4. Set state to re-run implementation
|
||||
|
||||
**Re-run implementation:**
|
||||
```
|
||||
Detected {false_positive_count} incomplete tasks.
|
||||
Re-running Step 3: Implementation to complete missing work...
|
||||
|
||||
{load and execute step-03-implement.md}
|
||||
```
|
||||
|
||||
After re-implementation, **RE-RUN THIS STEP** (step-04-post-validation.md)
|
||||
|
||||
### 11. Handle Verified Success
|
||||
|
||||
**IF no false positives (all verified):**
|
||||
|
||||
Display:
|
||||
```
|
||||
✅ POST-IMPLEMENTATION VALIDATION PASSED
|
||||
|
||||
All {verified_count} completed tasks verified against codebase:
|
||||
- Files exist and are complete
|
||||
- Functions/components implemented
|
||||
- Tests exist and pass
|
||||
- Database changes applied
|
||||
- API endpoints functional
|
||||
|
||||
Ready for Code Review
|
||||
```
|
||||
|
||||
Update story file "Gap Analysis" section:
|
||||
```markdown
|
||||
## Gap Analysis
|
||||
|
||||
### Post-Implementation Validation
|
||||
- **Date:** {timestamp}
|
||||
- **Tasks Verified:** {verified_count}
|
||||
- **False Positives:** 0
|
||||
- **Status:** ✅ All work verified complete
|
||||
|
||||
**Verification Evidence:**
|
||||
{for each verified task}
|
||||
- ✅ {task}: {evidence}
|
||||
```
|
||||
|
||||
### 12. Update Pipeline State
|
||||
|
||||
Update state file:
|
||||
- Add `4` to `stepsCompleted`
|
||||
- Set `lastStep: 4`
|
||||
- Set `steps.step-04-post-validation.status: completed`
|
||||
- Record verification results:
|
||||
```yaml
|
||||
verification:
|
||||
tasks_verified: {count}
|
||||
false_positives: {count}
|
||||
re_implementation_required: {true|false}
|
||||
```
|
||||
|
||||
### 13. Present Summary and Menu
|
||||
|
||||
Display:
|
||||
```
|
||||
Post-Implementation Validation Complete
|
||||
|
||||
Verification Summary:
|
||||
- Tasks Checked: {total_count}
|
||||
- Verified Complete: {verified_count}
|
||||
- False Positives: {false_positive_count}
|
||||
- Re-implementations: {retry_count}
|
||||
|
||||
{if false_positives}
|
||||
Re-running implementation to complete missing work...
|
||||
{else}
|
||||
All work verified. Proceeding to Code Review...
|
||||
{endif}
|
||||
```
|
||||
|
||||
**Interactive Mode Menu (only if no false positives):**
|
||||
```
|
||||
[C] Continue to {next step based on complexity: Code Review | Complete}
|
||||
[V] Run verification again
|
||||
[T] Run tests again
|
||||
[H] Halt pipeline
|
||||
```
|
||||
|
||||
{if micro complexity: "⏭️ Code Review will be skipped (lightweight path)"}
|
||||
|
||||
**Batch Mode:**
|
||||
- Auto re-run implementation if false positives
|
||||
- Auto-continue if all verified
|
||||
|
||||
## QUALITY GATE
|
||||
|
||||
Before proceeding to code review:
|
||||
- [ ] All completed tasks verified against codebase
|
||||
- [ ] Zero false positives remaining
|
||||
- [ ] All tests still passing
|
||||
- [ ] Build still succeeds
|
||||
- [ ] Gap analysis updated with verification results
|
||||
|
||||
## VERIFICATION TOOLS
|
||||
|
||||
Use these tools for verification:
|
||||
|
||||
```typescript
|
||||
// File existence
|
||||
glob("{pattern}")
|
||||
|
||||
// Symbol search
|
||||
grep("{symbol_name}", { glob: "**/*.{ts,tsx}", output_mode: "content" })
|
||||
|
||||
// Test execution
|
||||
bash("npm test -- --run --grep '{test_name}'")
|
||||
|
||||
// Database check
|
||||
mcp__supabase__list_tables()
|
||||
|
||||
// Read file contents
|
||||
read("{file_path}")
|
||||
```
|
||||
|
||||
## CRITICAL STEP COMPLETION
|
||||
|
||||
**IF** [false positives detected],
|
||||
load and execute `{prevStepFile}` to complete missing work,
|
||||
then RE-RUN this step.
|
||||
|
||||
**ONLY WHEN** [all tasks verified AND zero false positives]:
|
||||
|
||||
**Determine next step based on complexity routing:**
|
||||
|
||||
```
|
||||
If 5 in skip_steps (micro complexity):
|
||||
nextStepFile = '{workflow_path}/steps/step-06-complete.md'
|
||||
Display: "⏭️ Skipping Code Review (micro complexity) → Proceeding to Complete"
|
||||
Else:
|
||||
nextStepFile = '{workflow_path}/steps/step-05-code-review.md'
|
||||
```
|
||||
|
||||
Load and execute `{nextStepFile}`.
|
||||
|
||||
---
|
||||
|
||||
## SUCCESS/FAILURE METRICS
|
||||
|
||||
### ✅ SUCCESS
|
||||
- All completed tasks verified against codebase
|
||||
- No false positives (or all re-implemented)
|
||||
- Tests still passing
|
||||
- Evidence documented for each task
|
||||
- Gap analysis updated
|
||||
|
||||
### ❌ FAILURE
|
||||
- Skipping verification ("trust the marks")
|
||||
- Not checking actual code existence
|
||||
- Not running tests to verify claims
|
||||
- Allowing false positives to proceed
|
||||
- Not documenting verification evidence
|
||||
|
||||
## COMMON FALSE POSITIVE PATTERNS
|
||||
|
||||
Watch for these common issues:
|
||||
|
||||
1. **Stub Implementations**
|
||||
- Function exists but returns `null`
|
||||
- Function throws "Not implemented"
|
||||
- Component returns empty div
|
||||
|
||||
2. **Placeholder Tests**
|
||||
- Test exists but skipped (it.skip)
|
||||
- Test doesn't actually test the feature
|
||||
- Test always passes (no assertions)
|
||||
|
||||
3. **Incomplete Files**
|
||||
- File created but empty
|
||||
- Missing required exports
|
||||
- TODO comments everywhere
|
||||
|
||||
4. **Database Drift**
|
||||
- Migration file exists but not applied
|
||||
- Schema doesn't match migration
|
||||
- RLS policies missing
|
||||
|
||||
5. **API Stubs**
|
||||
- Route exists but returns 501
|
||||
- Handler not implemented
|
||||
- No error handling
|
||||
|
||||
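A quick grep sweep can surface several of these patterns before reviewing files by hand. The patterns, globs, and `src/` layout below are illustrative assumptions, not project rules:

```bash
# Stub markers and unfinished work (adjust globs to the project's layout)
grep -rn --include='*.ts' --include='*.tsx' -E 'Not implemented|TODO|FIXME' src/

# Skipped or disabled tests
grep -rn -E '\b(it|describe|test)\.skip\(' src/ __tests__/ 2>/dev/null

# Files that were created but left empty
find src -type f -size 0 -print
```

Anything the sweep flags still needs a manual read; an empty result does not prove the work is complete.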
This step is the **safety net** that catches incomplete work before code review.
|
||||
|
|
@ -0,0 +1,368 @@
|
|||
---
|
||||
name: 'step-06-run-quality-checks'
|
||||
description: 'Run tests, type checks, and linter - fix all problems before code review'
|
||||
|
||||
# Path Definitions
|
||||
workflow_path: '{project-root}/_bmad/bmm/workflows/4-implementation/super-dev-pipeline'
|
||||
|
||||
# File References
|
||||
thisStepFile: '{workflow_path}/steps/step-06-run-quality-checks.md'
|
||||
stateFile: '{state_file}'
|
||||
storyFile: '{story_file}'
|
||||
|
||||
# Next step
|
||||
nextStep: '{workflow_path}/steps/step-07-code-review.md'
|
||||
---
|
||||
|
||||
# Step 6: Run Quality Checks
|
||||
|
||||
**Goal:** Verify implementation quality through automated checks: tests, type checking, and linting. Fix ALL problems before proceeding to human/AI code review.
|
||||
|
||||
## Why Automate First?
|
||||
|
||||
1. **Fast feedback**: Automated checks run in seconds
|
||||
2. **Catch obvious issues**: Type errors, lint violations, failing tests
|
||||
3. **Save review time**: Don't waste code review time on mechanical issues
|
||||
4. **Enforce standards**: Consistent code style and quality
|
||||
|
||||
## Principles
|
||||
|
||||
- **Zero tolerance**: ALL checks must pass
|
||||
- **Fix, don't skip**: If a check fails, fix it - don't disable the check
|
||||
- **Iterate quickly**: Run-fix-run loop until all green
|
||||
- **Document workarounds**: If you must suppress a check, document why
|
||||
|
||||
---
|
||||
|
||||
## Process
|
||||
|
||||
### 1. Run Test Suite
|
||||
|
||||
```bash
|
||||
echo "📋 Running test suite..."
|
||||
|
||||
# Run all tests
|
||||
npm test
|
||||
|
||||
# Or for other stacks:
|
||||
# pytest
|
||||
# dotnet test
|
||||
# mvn test
|
||||
# cargo test
|
||||
```
|
||||
|
||||
**Expected output:**
|
||||
```
|
||||
✅ PASS __tests__/components/UserDashboard.test.tsx
|
||||
UserDashboard
|
||||
AC1: Display user profile information
|
||||
✓ should render user name (12ms)
|
||||
✓ should render user email (8ms)
|
||||
✓ should render user avatar (6ms)
|
||||
AC2: Allow user to edit profile
|
||||
✓ should show edit button when not in edit mode (10ms)
|
||||
✓ should enable edit mode when edit button clicked (15ms)
|
||||
✓ should save changes when save button clicked (22ms)
|
||||
|
||||
Test Suites: 1 passed, 1 total
|
||||
Tests: 6 passed, 6 total
|
||||
Time: 2.134s
|
||||
```
|
||||
|
||||
**If tests fail:**
|
||||
```
|
||||
❌ Test failures detected!
|
||||
|
||||
Failed tests:
|
||||
- UserDashboard › AC2 › should save changes when save button clicked
|
||||
Expected: { name: 'Jane Doe', email: 'john@example.com' }
|
||||
Received: undefined
|
||||
|
||||
Action required:
|
||||
1. Analyze the failure
|
||||
2. Fix the implementation
|
||||
3. Re-run tests
|
||||
4. Repeat until all tests pass
|
||||
|
||||
DO NOT PROCEED until all tests pass.
|
||||
```
|
||||
|
||||
### 2. Check Test Coverage
|
||||
|
||||
```bash
|
||||
echo "📊 Checking test coverage..."
|
||||
|
||||
# Generate coverage report
|
||||
npm run test:coverage
|
||||
|
||||
# Or for other stacks:
|
||||
# pytest --cov
|
||||
# dotnet test /p:CollectCoverage=true
|
||||
# cargo tarpaulin
|
||||
```
|
||||
|
||||
**Minimum coverage thresholds:**
|
||||
```yaml
|
||||
Line Coverage: ≥80%
|
||||
Branch Coverage: ≥75%
|
||||
Function Coverage: ≥80%
|
||||
Statement Coverage: ≥80%
|
||||
```
|
||||
|
||||
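Where the runner emits an Istanbul-style `coverage/coverage-summary.json` (the Jest/Vitest `json-summary` reporter), a small script can enforce the thresholds above in a pipeline — a sketch assuming `jq` is available:

```bash
summary="coverage/coverage-summary.json"
lines=$(jq -r '.total.lines.pct' "$summary")
branches=$(jq -r '.total.branches.pct' "$summary")

# Shell arithmetic is integer-only, so compare the percentages in awk
awk -v l="$lines" -v b="$branches" 'BEGIN {
  if (l < 80 || b < 75) { print "⚠️ Coverage below threshold"; exit 1 }
  print "✅ Coverage thresholds met"
}'
```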
**If coverage is low:**
|
||||
```
|
||||
⚠️ Test coverage below threshold!
|
||||
|
||||
Current coverage:
|
||||
Lines: 72% (threshold: 80%)
|
||||
Branches: 68% (threshold: 75%)
|
||||
Functions: 85% (threshold: 80%)
|
||||
|
||||
Uncovered areas:
|
||||
- src/components/UserDashboard.tsx: lines 45-52 (error handling)
|
||||
- src/services/userService.ts: lines 23-28 (edge case)
|
||||
|
||||
Action required:
|
||||
1. Add tests for uncovered code paths
|
||||
2. Re-run coverage check
|
||||
3. Achieve ≥80% coverage before proceeding
|
||||
```
|
||||
|
||||
### 3. Run Type Checker
|
||||
|
||||
```bash
|
||||
echo "🔍 Running type checker..."
|
||||
|
||||
# For TypeScript
|
||||
npx tsc --noEmit
|
||||
|
||||
# For Python
|
||||
# mypy src/
|
||||
|
||||
# For C#
|
||||
# dotnet build
|
||||
|
||||
# For Java
|
||||
# mvn compile
|
||||
```
|
||||
|
||||
**Expected output:**
|
||||
```
|
||||
✅ No type errors found
|
||||
```
|
||||
|
||||
**If type errors found:**
|
||||
```
|
||||
❌ Type errors detected!
|
||||
|
||||
src/components/UserDashboard.tsx:45:12 - error TS2345: Argument of type 'string | undefined' is not assignable to parameter of type 'string'.
|
||||
|
||||
45 onSave(user.name);
|
||||
~~~~~~~~~
|
||||
|
||||
src/services/userService.ts:23:18 - error TS2339: Property 'id' does not exist on type 'User'.
|
||||
|
||||
23 return user.id;
|
||||
~~
|
||||
|
||||
Found 2 errors in 2 files.
|
||||
|
||||
Action required:
|
||||
1. Fix type errors
|
||||
2. Re-run type checker
|
||||
3. Repeat until zero errors
|
||||
|
||||
DO NOT PROCEED with type errors.
|
||||
```
|
||||
|
||||
### 4. Run Linter
|
||||
|
||||
```bash
|
||||
echo "✨ Running linter..."
|
||||
|
||||
# For JavaScript/TypeScript
|
||||
npm run lint
|
||||
|
||||
# For Python
|
||||
# pylint src/
|
||||
|
||||
# For C#
|
||||
# dotnet format --verify-no-changes
|
||||
|
||||
# For Java
|
||||
# mvn checkstyle:check
|
||||
```
|
||||
|
||||
**Expected output:**
|
||||
```
|
||||
✅ No linting errors found
|
||||
```
|
||||
|
||||
**If lint errors found:**
|
||||
```
|
||||
❌ Lint errors detected!
|
||||
|
||||
src/components/UserDashboard.tsx
|
||||
45:1 error 'useState' is not defined no-undef
|
||||
52:12 error Unexpected console statement no-console
|
||||
67:5 warning Unexpected var, use let or const instead no-var
|
||||
|
||||
src/services/userService.ts
|
||||
23:1 error Missing return type on function @typescript-eslint/explicit-function-return-type
|
||||
|
||||
✖ 4 problems (3 errors, 1 warning)
|
||||
|
||||
Action required:
|
||||
1. Run auto-fix if available: npm run lint:fix
|
||||
2. Manually fix remaining errors
|
||||
3. Re-run linter
|
||||
4. Repeat until zero errors and zero warnings
|
||||
|
||||
DO NOT PROCEED with lint errors.
|
||||
```
|
||||
|
||||
### 5. Auto-Fix What's Possible
|
||||
|
||||
```bash
|
||||
echo "🔧 Attempting auto-fixes..."
|
||||
|
||||
# Run formatters and auto-fixable linters
|
||||
npm run lint:fix
|
||||
npm run format
|
||||
|
||||
# Stage the auto-fixes
|
||||
git add .
|
||||
```
|
||||
|
||||
### 6. Manual Fixes
|
||||
|
||||
For issues that can't be auto-fixed:
|
||||
|
||||
```typescript
|
||||
// Example: Fix type error
|
||||
// Before:
|
||||
const userName = user.name; // Type error if name is optional
|
||||
onSave(userName);
|
||||
|
||||
// After:
|
||||
const userName = user.name ?? ''; // Handle undefined case
|
||||
onSave(userName);
|
||||
```
|
||||
|
||||
```typescript
|
||||
// Example: Fix lint error
|
||||
// Before:
|
||||
var count = 0; // ESLint: no-var
|
||||
|
||||
// After:
|
||||
let count = 0; // Use let instead of var
|
||||
```
|
||||
|
||||
### 7. Verify All Checks Pass
|
||||
|
||||
Run everything again to confirm:
|
||||
|
||||
```bash
|
||||
echo "✅ Final verification..."
|
||||
|
||||
# Run all checks
|
||||
npm test && \
|
||||
npx tsc --noEmit && \
|
||||
npm run lint
|
||||
|
||||
echo "✅ ALL QUALITY CHECKS PASSED!"
|
||||
```
|
||||
|
||||
### 8. Commit Quality Fixes
|
||||
|
||||
```bash
|
||||
# Only if fixes were needed
|
||||
if git diff --cached --quiet; then
|
||||
echo "No fixes needed - all checks passed first time!"
|
||||
else
|
||||
git commit -m "fix(story-{story_id}): address quality check issues
|
||||
|
||||
- Fix type errors
|
||||
- Resolve lint violations
|
||||
- Improve test coverage to {coverage}%
|
||||
|
||||
All automated checks now passing:
|
||||
✅ Tests: {test_count} passed
|
||||
✅ Type check: No errors
|
||||
✅ Linter: No violations
|
||||
✅ Coverage: {coverage}%"
|
||||
fi
|
||||
```
|
||||
|
||||
### 9. Update State
|
||||
|
||||
```yaml
|
||||
# Update {stateFile}
|
||||
current_step: 6
|
||||
quality_checks:
|
||||
tests_passed: true
|
||||
test_count: {test_count}
|
||||
coverage: {coverage}%
|
||||
type_check_passed: true
|
||||
lint_passed: true
|
||||
all_checks_passed: true
|
||||
ready_for_code_review: true
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Quality Gate
|
||||
|
||||
**CRITICAL:** This is a **BLOCKING STEP**. You **MUST NOT** proceed to code review until ALL of the following pass:
|
||||
|
||||
✅ **All tests passing** (0 failures)
|
||||
✅ **Test coverage ≥80%** (or project threshold)
|
||||
✅ **Zero type errors**
|
||||
✅ **Zero lint errors**
|
||||
✅ **Zero lint warnings** (or all warnings justified and documented)
|
||||
|
||||
If ANY check fails:
|
||||
1. Fix the issue
|
||||
2. Re-run all checks
|
||||
3. Repeat until ALL PASS
|
||||
4. THEN proceed to next step
|
||||
|
||||
---
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
**Tests fail sporadically:**
|
||||
- Check for test interdependencies
|
||||
- Look for timing issues (use `waitFor` in async tests)
|
||||
- Check for environment-specific issues
|
||||
|
||||
**Type errors in third-party libraries:**
|
||||
- Install `@types` packages
|
||||
- Use type assertions carefully (document why)
|
||||
- Consider updating library versions
|
||||
|
||||
**Lint rules conflict with team standards:**
|
||||
- Discuss with team before changing config
|
||||
- Document exceptions in comments
|
||||
- Update lint config if truly inappropriate
|
||||
|
||||
**Coverage can't reach 80%:**
|
||||
- Focus on critical paths first
|
||||
- Test error cases and edge cases
|
||||
- Consider if untested code is actually needed
|
||||
|
||||
---
|
||||
|
||||
## Skip Conditions
|
||||
|
||||
This step CANNOT be skipped. All stories must pass quality checks.
|
||||
|
||||
The only exception: Documentation-only stories with zero code changes.
|
||||
|
||||
---
|
||||
|
||||
## Next Step
|
||||
|
||||
Proceed to **Step 7: Code Review** ({nextStep})
|
||||
|
||||
Now that all automated checks pass, the code is ready for human/AI review.
|
||||
|
|
@ -0,0 +1,337 @@
|
|||
---
|
||||
name: 'step-07-code-review'
|
||||
description: 'Multi-agent code review with fresh context and variable agent count'
|
||||
|
||||
# Path Definitions
|
||||
workflow_path: '{project-root}/_bmad/bmm/workflows/4-implementation/super-dev-pipeline'
|
||||
multi_agent_review_workflow: '{project-root}/_bmad/bmm/workflows/4-implementation/multi-agent-review'
|
||||
|
||||
# File References
|
||||
thisStepFile: '{workflow_path}/steps/step-07-code-review.md'
|
||||
nextStepFile: '{workflow_path}/steps/step-08-review-analysis.md'
|
||||
stateFile: '{state_file}'
|
||||
reviewReport: '{sprint_artifacts}/review-{story_id}.md'
|
||||
|
||||
# Role (continue as dev, but reviewer mindset)
|
||||
role: dev
|
||||
requires_fresh_context: true # CRITICAL: Review MUST happen in fresh context
|
||||
---
|
||||
|
||||
# Step 7: Code Review (Multi-Agent with Fresh Context)
|
||||
|
||||
## ROLE CONTINUATION - ADVERSARIAL MODE
|
||||
|
||||
**Continuing as DEV but switching to ADVERSARIAL REVIEWER mindset.**
|
||||
|
||||
You are now a critical code reviewer. Your job is to FIND PROBLEMS.
|
||||
- **NEVER** say "looks good" - that's a failure
|
||||
- **MUST** find 3-10 specific issues
|
||||
- **FIX** every issue you find
|
||||
|
||||
## STEP GOAL
|
||||
|
||||
Perform adversarial code review:
|
||||
1. Query Supabase advisors for security/performance issues
|
||||
2. Identify all files changed for this story
|
||||
3. Review each file against checklist
|
||||
4. Find and document 3-10 issues (MANDATORY)
|
||||
5. Fix all issues
|
||||
6. Verify tests still pass
|
||||
|
||||
### Multi-Agent Review with Fresh Context (NEW v1.5.0)
|
||||
|
||||
**All reviews now use multi-agent approach with variable agent counts based on risk.**
|
||||
|
||||
**CRITICAL: Review in FRESH CONTEXT (unbiased perspective)**
|
||||
|
||||
```
|
||||
⚠️ CHECKPOINT: Starting fresh review session
|
||||
|
||||
Multi-agent review will run in NEW context to avoid bias from implementation.
|
||||
|
||||
Agent count based on complexity level:
|
||||
- MICRO: 2 agents (Security + Code Quality)
|
||||
- STANDARD: 4 agents (+ Architecture + Testing)
|
||||
- COMPLEX: 6 agents (+ Performance + Domain Expert)
|
||||
|
||||
Smart agent selection analyzes changed files to select most relevant reviewers.
|
||||
```
|
||||
|
||||
**Invoke multi-agent-review workflow:**
|
||||
|
||||
```xml
|
||||
<invoke-workflow path="{project-root}/_bmad/bmm/workflows/4-implementation/multi-agent-review/workflow.yaml">
|
||||
<input name="story_id">{story_id}</input>
|
||||
<input name="complexity_level">{complexity_level}</input>
|
||||
<input name="fresh_context">true</input>
|
||||
</invoke-workflow>
|
||||
```
|
||||
|
||||
**The multi-agent-review workflow will:**
|
||||
1. Create fresh context (new session, unbiased)
|
||||
2. Analyze changed files
|
||||
3. Select appropriate agents based on code changes
|
||||
4. Run parallel reviews from multiple perspectives
|
||||
5. Aggregate findings with severity ratings
|
||||
6. Return comprehensive review report
|
||||
|
||||
**After review completes:**
|
||||
- Review report saved to: `{sprint_artifacts}/review-{story_id}.md`
|
||||
- Proceed to step 8 (Review Analysis) to categorize findings
|
||||
|
||||
## MANDATORY EXECUTION RULES
|
||||
|
||||
### Adversarial Requirements
|
||||
|
||||
- **MINIMUM 3 ISSUES** - If you found fewer, look harder
|
||||
- **MAXIMUM 10 ISSUES** - Prioritize if more found
|
||||
- **NO "LOOKS GOOD"** - This is FORBIDDEN
|
||||
- **FIX EVERYTHING** - Don't just report, fix
|
||||
|
||||
### Review Categories (find issues in EACH)
|
||||
|
||||
1. Security
|
||||
2. Performance
|
||||
3. Error Handling
|
||||
4. Test Coverage
|
||||
5. Code Quality
|
||||
6. Architecture
|
||||
|
||||
## EXECUTION SEQUENCE
|
||||
|
||||
### 1. Query Supabase Advisors
|
||||
|
||||
Use MCP tools:
|
||||
|
||||
```
|
||||
mcp__supabase__get_advisors:
|
||||
type: "security"
|
||||
|
||||
mcp__supabase__get_advisors:
|
||||
type: "performance"
|
||||
```
|
||||
|
||||
Document any issues found.
|
||||
|
||||
### 2. Identify Changed Files
|
||||
|
||||
```bash
|
||||
git status
|
||||
git diff --name-only HEAD~1
|
||||
```
|
||||
|
||||
List all files changed for story {story_id}.
|
||||
|
||||
### 3. Review Each Category
|
||||
|
||||
#### SECURITY REVIEW
|
||||
|
||||
For each file, check:
|
||||
- [ ] No SQL injection vulnerabilities
|
||||
- [ ] No XSS vulnerabilities
|
||||
- [ ] Auth checks on all protected routes
|
||||
- [ ] RLS policies exist and are correct
|
||||
- [ ] No credential exposure (API keys, secrets)
|
||||
- [ ] Input validation present
|
||||
- [ ] Rate limiting considered
|
||||
|
||||
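Before the file-by-file pass, a hedged grep sweep can flag the most obvious security smells. The patterns below are assumptions to adapt per project and will produce false positives:

```bash
# Possible hard-coded credentials
grep -rniE '(api[_-]?key|secret|password|token)[[:space:]]*[:=]' src/ --include='*.ts' --include='*.tsx'

# SQL assembled with template literals or string concatenation (injection risk)
grep -rniE '(select|insert|update|delete).*(\$\{|" *\+)' src/ --include='*.ts'
```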
#### PERFORMANCE REVIEW
|
||||
|
||||
- [ ] No N+1 query patterns
|
||||
- [ ] Indexes exist for query patterns
|
||||
- [ ] No unnecessary re-renders
|
||||
- [ ] Proper caching strategy
|
||||
- [ ] Efficient data fetching
|
||||
- [ ] Bundle size impact considered
|
||||
|
||||
#### ERROR HANDLING REVIEW
|
||||
|
||||
- [ ] Result type used consistently
|
||||
- [ ] Error messages are user-friendly
|
||||
- [ ] Edge cases handled
|
||||
- [ ] Null/undefined checked
|
||||
- [ ] Network errors handled gracefully
|
||||
|
||||
#### TEST COVERAGE REVIEW
|
||||
|
||||
- [ ] All AC have tests
|
||||
- [ ] Edge cases tested
|
||||
- [ ] Error paths tested
|
||||
- [ ] Mocking is appropriate (not excessive)
|
||||
- [ ] Tests are deterministic
|
||||
|
||||
#### CODE QUALITY REVIEW
|
||||
|
||||
- [ ] DRY - no duplicate code
|
||||
- [ ] SOLID principles followed
|
||||
- [ ] TypeScript strict mode compliant
|
||||
- [ ] No any types
|
||||
- [ ] Functions are focused (single responsibility)
|
||||
- [ ] Naming is clear and consistent
|
||||
|
||||
#### ARCHITECTURE REVIEW
|
||||
|
||||
- [ ] Module boundaries respected
|
||||
- [ ] Imports from index.ts only
|
||||
- [ ] Server/client separation correct
|
||||
- [ ] Data flow is clear
|
||||
- [ ] No circular dependencies
|
||||
|
||||
### 4. Document All Issues
|
||||
|
||||
For each issue found:
|
||||
|
||||
```yaml
|
||||
issue_{n}:
|
||||
severity: critical|high|medium|low
|
||||
category: security|performance|error-handling|testing|quality|architecture
|
||||
file: "{file_path}"
|
||||
line: {line_number}
|
||||
problem: |
|
||||
{Clear description of the issue}
|
||||
risk: |
|
||||
{What could go wrong if not fixed}
|
||||
fix: |
|
||||
{How to fix it}
|
||||
```
|
||||
|
||||
### 5. Fix All Issues
|
||||
|
||||
For EACH issue documented:
|
||||
|
||||
1. Edit the file to fix the issue
|
||||
2. Add test if issue wasn't covered
|
||||
3. Verify the fix is correct
|
||||
4. Mark as fixed
|
||||
|
||||
### 6. Run Verification
|
||||
|
||||
After all fixes:
|
||||
|
||||
```bash
|
||||
npm run lint
|
||||
npm run build
|
||||
npm test -- --run
|
||||
```
|
||||
|
||||
All must pass.
|
||||
|
||||
### 7. Create Review Report
|
||||
|
||||
Append to story file or create `{sprint_artifacts}/review-{story_id}.md`:
|
||||
|
||||
```markdown
|
||||
# Code Review Report - Story {story_id}
|
||||
|
||||
## Summary
|
||||
- Issues Found: {count}
|
||||
- Issues Fixed: {count}
|
||||
- Categories Reviewed: {list}
|
||||
|
||||
## Issues Detail
|
||||
|
||||
### Issue 1: {title}
|
||||
- **Severity:** {severity}
|
||||
- **Category:** {category}
|
||||
- **File:** {file}:{line}
|
||||
- **Problem:** {description}
|
||||
- **Fix Applied:** {fix_description}
|
||||
|
||||
### Issue 2: {title}
|
||||
...
|
||||
|
||||
## Security Checklist
|
||||
- [x] RLS policies verified
|
||||
- [x] No credential exposure
|
||||
- [x] Input validation present
|
||||
|
||||
## Performance Checklist
|
||||
- [x] No N+1 queries
|
||||
- [x] Indexes verified
|
||||
|
||||
## Final Status
|
||||
All issues resolved. Tests passing.
|
||||
|
||||
Reviewed by: DEV (adversarial)
|
||||
Reviewed at: {timestamp}
|
||||
```
|
||||
|
||||
### 8. Update Pipeline State
|
||||
|
||||
Update state file:
|
||||
- Add `7` to `stepsCompleted`
- Set `lastStep: 7`
- Set `steps.step-07-code-review.status: completed`
|
||||
- Record `issues_found` and `issues_fixed`
|
||||
|
||||
### 9. Present Summary and Menu
|
||||
|
||||
Display:
|
||||
```
|
||||
Code Review Complete
|
||||
|
||||
Issues Found: {count} (minimum 3 required)
|
||||
Issues Fixed: {count}
|
||||
|
||||
By Category:
|
||||
- Security: {count}
|
||||
- Performance: {count}
|
||||
- Error Handling: {count}
|
||||
- Test Coverage: {count}
|
||||
- Code Quality: {count}
|
||||
- Architecture: {count}
|
||||
|
||||
All Tests: PASSING
|
||||
Lint: CLEAN
|
||||
Build: SUCCESS
|
||||
|
||||
Review Report: {report_path}
|
||||
```
|
||||
|
||||
**Interactive Mode Menu:**
|
||||
```
|
||||
[C] Continue to Review Analysis
|
||||
[R] Run another review pass
|
||||
[T] Run tests again
|
||||
[H] Halt pipeline
|
||||
```
|
||||
|
||||
**Batch Mode:** Auto-continue if minimum issues found and fixed
|
||||
|
||||
## QUALITY GATE
|
||||
|
||||
Before proceeding:
|
||||
- [ ] Minimum 3 issues found and fixed
|
||||
- [ ] All categories reviewed
|
||||
- [ ] All tests still passing
|
||||
- [ ] Lint clean
|
||||
- [ ] Build succeeds
|
||||
- [ ] Review report created
|
||||
|
||||
## MCP TOOLS AVAILABLE
|
||||
|
||||
- `mcp__supabase__get_advisors` - Security/performance checks
|
||||
- `mcp__supabase__execute_sql` - Query verification
|
||||
|
||||
## CRITICAL STEP COMPLETION
|
||||
|
||||
**ONLY WHEN** [minimum 3 issues found AND all fixed AND tests pass],
|
||||
load and execute `{nextStepFile}` to proceed to review analysis (step 8).
|
||||
|
||||
---
|
||||
|
||||
## SUCCESS/FAILURE METRICS
|
||||
|
||||
### ✅ SUCCESS
|
||||
- Found and fixed 3-10 issues
|
||||
- All categories reviewed
|
||||
- Tests still passing after fixes
|
||||
- Review report complete
|
||||
- No "looks good" shortcuts
|
||||
|
||||
### ❌ FAILURE
|
||||
- Saying "looks good" or "no issues found"
|
||||
- Finding fewer than 3 issues
|
||||
- Not fixing issues found
|
||||
- Tests failing after fixes
|
||||
- Skipping review categories
|
||||
|
|
@ -0,0 +1,327 @@
|
|||
---
|
||||
name: 'step-08-review-analysis'
|
||||
description: 'Intelligently analyze code review findings - distinguish real issues from gold plating'
|
||||
|
||||
# Path Definitions
|
||||
workflow_path: '{project-root}/_bmad/bmm/workflows/4-implementation/super-dev-pipeline'
|
||||
|
||||
# File References
|
||||
thisStepFile: '{workflow_path}/steps/step-08-review-analysis.md'
|
||||
stateFile: '{state_file}'
|
||||
storyFile: '{story_file}'
|
||||
reviewReport: '{sprint_artifacts}/review-{story_id}.md'
|
||||
|
||||
# Next step
|
||||
nextStep: '{workflow_path}/steps/step-09-fix-issues.md'
|
||||
---
|
||||
|
||||
# Step 8: Review Analysis
|
||||
|
||||
**Goal:** Critically analyze code review findings to distinguish **real problems** from **gold plating**, **false positives**, and **overzealous suggestions**.
|
||||
|
||||
## The Problem
|
||||
|
||||
AI code reviewers (and human reviewers) sometimes:
|
||||
- 🎨 **Gold plate**: Suggest unnecessary perfectionism
|
||||
- 🔍 **Overreact**: Flag non-issues to appear thorough
|
||||
- 📚 **Over-engineer**: Suggest abstractions for simple cases
|
||||
- ⚖️ **Misjudge context**: Apply rules without understanding tradeoffs
|
||||
|
||||
## The Solution
|
||||
|
||||
**Critical thinking filter**: Evaluate each finding objectively.
|
||||
|
||||
---
|
||||
|
||||
## Process
|
||||
|
||||
### 1. Load Review Report
|
||||
|
||||
```bash
|
||||
# Read the code review report
|
||||
review_report="{reviewReport}"
|
||||
test -f "$review_report" || (echo "⚠️ No review report found" && exit 0)
|
||||
```
|
||||
|
||||
Parse findings by severity:
|
||||
- 🔴 CRITICAL
|
||||
- 🟠 HIGH
|
||||
- 🟡 MEDIUM
|
||||
- 🔵 LOW
|
||||
- ℹ️ INFO
|
||||
|
||||
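One way to tally findings by these markers, assuming the review report uses the emoji labels shown above:

```bash
for marker in "🔴 CRITICAL" "🟠 HIGH" "🟡 MEDIUM" "🔵 LOW" "ℹ️ INFO"; do
  count=$(grep -c "$marker" "$review_report" || true)  # grep -c still prints 0 when nothing matches
  echo "$marker findings: $count"
done
```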
### 2. Categorize Each Finding
|
||||
|
||||
For EACH finding, ask these questions:
|
||||
|
||||
#### Question 1: Is this a REAL problem?
|
||||
|
||||
```
|
||||
Real Problem Indicators:
|
||||
✅ Would cause bugs or incorrect behavior
|
||||
✅ Would cause security vulnerabilities
|
||||
✅ Would cause performance issues in production
|
||||
✅ Would make future maintenance significantly harder
|
||||
✅ Violates team/project standards documented in codebase
|
||||
|
||||
NOT Real Problems:
|
||||
❌ "Could be more elegant" (subjective style preference)
|
||||
❌ "Consider adding abstraction" (YAGNI - you aren't gonna need it)
|
||||
❌ "This pattern is not ideal" (works fine, alternative is marginal)
|
||||
❌ "Add comprehensive error handling" (for impossible error cases)
|
||||
❌ "Add logging everywhere" (log signal, not noise)
|
||||
```
|
||||
|
||||
#### Question 2: Does this finding understand CONTEXT?
|
||||
|
||||
```
|
||||
Context Considerations:
|
||||
📋 Story scope: Does fixing this exceed story requirements?
|
||||
🎯 Project maturity: Is this MVP, beta, or production-hardened?
|
||||
⚡ Performance criticality: Is this a hot path or cold path?
|
||||
👥 Team standards: Does team actually follow this pattern?
|
||||
📊 Data scale: Does this handle actual expected volume?
|
||||
|
||||
Example of MISSING context:
|
||||
Finding: "Add database indexing for better performance"
|
||||
Reality: Table has 100 rows total, query runs once per day
|
||||
Verdict: ❌ REJECT - Premature optimization
|
||||
```
|
||||
|
||||
#### Question 3: Is this ACTIONABLE?
|
||||
|
||||
```
|
||||
Actionable Findings:
|
||||
✅ Specific file, line number, exact issue
|
||||
✅ Clear explanation of problem
|
||||
✅ Concrete recommendation for fix
|
||||
✅ Can be fixed in reasonable time
|
||||
|
||||
NOT Actionable:
|
||||
❌ Vague: "Code quality could be improved"
|
||||
❌ No location: "Some error handling is missing"
|
||||
❌ No recommendation: "This might cause issues"
|
||||
❌ Massive scope: "Refactor entire architecture"
|
||||
```
|
||||
|
||||
### 3. Classification Decision Tree
|
||||
|
||||
For each finding, classify as:
|
||||
|
||||
```
|
||||
┌─────────────────────────────────────────┐
|
||||
│ Finding Classification Decision Tree │
|
||||
└─────────────────────────────────────────┘
|
||||
|
||||
Is it a CRITICAL security/correctness issue?
|
||||
├─ YES → 🔴 MUST FIX
|
||||
└─ NO ↓
|
||||
|
||||
Does it violate documented project standards?
|
||||
├─ YES → 🟠 SHOULD FIX
|
||||
└─ NO ↓
|
||||
|
||||
Would it prevent future maintenance?
|
||||
├─ YES → 🟡 CONSIDER FIX (if in scope)
|
||||
└─ NO ↓
|
||||
|
||||
Is it gold plating / over-engineering?
|
||||
├─ YES → ⚪ REJECT (document why)
|
||||
└─ NO ↓
|
||||
|
||||
Is it a style/opinion without real impact?
|
||||
├─ YES → ⚪ REJECT (document why)
|
||||
└─ NO → 🔵 OPTIONAL (tech debt backlog)
|
||||
```
|
||||
|
||||
### 4. Create Classification Report
|
||||
|
||||
```markdown
|
||||
# Code Review Analysis: Story {story_id}
|
||||
|
||||
## Review Metadata
|
||||
- Reviewer: {reviewer_type} (Adversarial / Multi-Agent)
|
||||
- Total Findings: {total_findings}
|
||||
- Review Date: {date}
|
||||
|
||||
## Classification Results
|
||||
|
||||
### 🔴 MUST FIX (Critical - Blocking)
|
||||
Total: {must_fix_count}
|
||||
|
||||
1. **[SECURITY] Unvalidated user input in API endpoint**
|
||||
- File: `src/api/users.ts:45`
|
||||
- Issue: POST /api/users accepts unvalidated input, SQL injection risk
|
||||
- Why this is real: Security vulnerability, could lead to data breach
|
||||
- Action: Add input validation with Zod schema
|
||||
- Estimated effort: 30 min
|
||||
|
||||
2. **[CORRECTNESS] Race condition in state update**
|
||||
- File: `src/components/UserForm.tsx:67`
|
||||
- Issue: Multiple async setState calls without proper sequencing
|
||||
- Why this is real: Causes intermittent bugs in production
|
||||
- Action: Use functional setState or useReducer
|
||||
- Estimated effort: 20 min
|
||||
|
||||
### 🟠 SHOULD FIX (High Priority)
|
||||
Total: {should_fix_count}
|
||||
|
||||
3. **[STANDARDS] Missing error handling per team convention**
|
||||
- File: `src/services/userService.ts:34`
|
||||
- Issue: API calls lack try-catch per documented standards
|
||||
- Why this matters: Team standard in CONTRIBUTING.md section 3.2
|
||||
- Action: Wrap in try-catch, log errors
|
||||
- Estimated effort: 15 min
|
||||
|
||||
### 🟡 CONSIDER FIX (Medium - If in scope)
|
||||
Total: {consider_count}
|
||||
|
||||
4. **[MAINTAINABILITY] Complex nested conditional**
|
||||
- File: `src/utils/validation.ts:23`
|
||||
- Issue: 4-level nested if-else hard to read
|
||||
- Why this matters: Could confuse future maintainers
|
||||
- Action: Extract to guard clauses or lookup table
|
||||
- Estimated effort: 45 min
|
||||
- **Scope consideration**: Nice to have, but not blocking
|
||||
|
||||
### ⚪ REJECTED (Gold Plating / False Positives)
|
||||
Total: {rejected_count}
|
||||
|
||||
5. **[REJECTED] "Add comprehensive logging to all functions"**
|
||||
- Reason: Gold plating - logging should be signal, not noise
|
||||
- Context: These are simple utility functions, no debugging issues
|
||||
- Verdict: REJECT - Would create log spam
|
||||
|
||||
6. **[REJECTED] "Extract component for reusability"**
|
||||
- Reason: YAGNI - component used only once, no reuse planned
|
||||
- Context: Story scope is single-use dashboard widget
|
||||
- Verdict: REJECT - Premature abstraction
|
||||
|
||||
7. **[REJECTED] "Add database connection pooling"**
|
||||
- Reason: Premature optimization - current load is minimal
|
||||
- Context: App has 10 concurrent users max, no performance issues
|
||||
- Verdict: REJECT - Optimize when needed, not speculatively
|
||||
|
||||
8. **[REJECTED] "Consider microservices architecture"**
|
||||
- Reason: Out of scope - architectural decision beyond story
|
||||
- Context: Story is adding a single API endpoint
|
||||
- Verdict: REJECT - Massive overreach
|
||||
|
||||
### 🔵 OPTIONAL (Tech Debt Backlog)
|
||||
Total: {optional_count}
|
||||
|
||||
9. **[STYLE] Inconsistent naming convention**
|
||||
- File: `src/utils/helpers.ts:12`
|
||||
- Issue: camelCase vs snake_case mixing
|
||||
- Why low priority: Works fine, linter doesn't flag it
|
||||
- Action: Standardize to camelCase when touching this file later
|
||||
- Create tech debt ticket: TD-{number}
|
||||
|
||||
## Summary
|
||||
|
||||
**Action Plan:**
|
||||
- 🔴 MUST FIX: {must_fix_count} issues (blocking)
|
||||
- 🟠 SHOULD FIX: {should_fix_count} issues (high priority)
|
||||
- 🟡 CONSIDER: {consider_count} issues (if time permits)
|
||||
- ⚪ REJECTED: {rejected_count} findings (documented why)
|
||||
- 🔵 OPTIONAL: {optional_count} items (tech debt backlog)
|
||||
|
||||
**Estimated fix time:** {total_fix_time_hours} hours
|
||||
|
||||
**Proceed to:** Step 9 - Fix Issues (implement MUST FIX + SHOULD FIX items)
|
||||
```
|
||||
|
||||
### 5. Document Rejections
|
||||
|
||||
**CRITICAL:** When rejecting findings, ALWAYS document WHY:
|
||||
|
||||
```markdown
|
||||
## Rejected Findings - Rationale
|
||||
|
||||
### Finding: "Add caching layer for all API calls"
|
||||
**Rejected because:**
|
||||
- ⚡ Premature optimization - no performance issues detected
|
||||
- 📊 Traffic analysis shows <100 requests/day
|
||||
- 🎯 Story scope is feature addition, not optimization
|
||||
- 💰 Cost: 2 days implementation, 0 proven benefit
|
||||
- 📝 Decision: Monitor first, optimize if needed
|
||||
|
||||
### Finding: "Refactor to use dependency injection"
|
||||
**Rejected because:**
|
||||
- 🏗️ Over-engineering - current approach works fine
|
||||
- 📏 Codebase size doesn't justify DI complexity
|
||||
- 👥 Team unfamiliar with DI patterns
|
||||
- 🎯 Story scope: simple feature, not architecture overhaul
|
||||
- 📝 Decision: Keep it simple, revisit if codebase grows
|
||||
|
||||
### Finding: "Add comprehensive JSDoc to all functions"
|
||||
**Rejected because:**
|
||||
- 📚 Gold plating - TypeScript types provide documentation
|
||||
- ⏱️ Time sink - 4+ hours for marginal benefit
|
||||
- 🎯 Team standard: JSDoc only for public APIs
|
||||
- 📝 Decision: Follow team convention, not reviewer preference
|
||||
```
|
||||
|
||||
### 6. Update State
|
||||
|
||||
```yaml
|
||||
# Update {stateFile}
|
||||
current_step: 8
|
||||
review_analysis:
|
||||
must_fix: {must_fix_count}
|
||||
should_fix: {should_fix_count}
|
||||
consider: {consider_count}
|
||||
rejected: {rejected_count}
|
||||
optional: {optional_count}
|
||||
estimated_fix_time: "{total_fix_time_hours}h"
|
||||
rejections_documented: true
|
||||
analysis_complete: true
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Critical Thinking Framework
|
||||
|
||||
Use this framework to evaluate EVERY finding:
|
||||
|
||||
### The "So What?" Test
|
||||
- **Ask:** "So what if we don't fix this?"
|
||||
- **If answer is:** "Nothing bad happens" → REJECT
|
||||
- **If answer is:** "Production breaks" → MUST FIX
|
||||
|
||||
### The "YAGNI" Test (You Aren't Gonna Need It)
|
||||
- **Ask:** "Do we need this NOW for current requirements?"
|
||||
- **If answer is:** "Maybe someday" → REJECT
|
||||
- **If answer is:** "Yes, breaks without it" → FIX
|
||||
|
||||
### The "Scope" Test
|
||||
- **Ask:** "Is this within the story's scope?"
|
||||
- **If answer is:** "No, requires new story" → REJECT (or create new story)
|
||||
- **If answer is:** "Yes, part of ACs" → FIX
|
||||
|
||||
### The "Team Standard" Test
|
||||
- **Ask:** "Does our team actually do this?"
|
||||
- **If answer is:** "No, reviewer's opinion" → REJECT
|
||||
- **If answer is:** "Yes, in CONTRIBUTING.md" → FIX
|
||||
|
||||
---
|
||||
|
||||
## Common Rejection Patterns
|
||||
|
||||
Learn to recognize these patterns:
|
||||
|
||||
1. **"Consider adding..."** - Usually gold plating unless critical
|
||||
2. **"It would be better if..."** - Subjective opinion, often rejectable
|
||||
3. **"For maximum performance..."** - Premature optimization
|
||||
4. **"To follow best practices..."** - Check if team actually follows it
|
||||
5. **"This could be refactored..."** - Does it need refactoring NOW?
|
||||
6. **"Add comprehensive..."** - Comprehensive = overkill most of the time
|
||||
7. **"Future-proof by..."** - Can't predict future, solve current problems
|
||||
|
||||
---
|
||||
|
||||
## Next Step
|
||||
|
||||
Proceed to **Step 9: Fix Issues** ({nextStep})
|
||||
|
||||
Implement MUST FIX and SHOULD FIX items. Skip rejected items (already documented why).
|
||||
|
|
@ -0,0 +1,371 @@
|
|||
---
|
||||
name: 'step-09-fix-issues'
|
||||
description: 'Fix MUST FIX and SHOULD FIX issues from review analysis'
|
||||
|
||||
# Path Definitions
|
||||
workflow_path: '{project-root}/_bmad/bmm/workflows/4-implementation/super-dev-pipeline'
|
||||
|
||||
# File References
|
||||
thisStepFile: '{workflow_path}/steps/step-09-fix-issues.md'
|
||||
stateFile: '{state_file}'
|
||||
storyFile: '{story_file}'
|
||||
reviewAnalysis: '{sprint_artifacts}/review-analysis-{story_id}.md'
|
||||
|
||||
# Next step
|
||||
nextStep: '{workflow_path}/steps/step-10-complete.md'
|
||||
---
|
||||
|
||||
# Step 9: Fix Issues
|
||||
|
||||
**Goal:** Implement fixes for MUST FIX and SHOULD FIX items identified in review analysis. Skip rejected items (gold plating already documented).
|
||||
|
||||
## Principles
|
||||
|
||||
- **Fix real problems only**: MUST FIX and SHOULD FIX categories
|
||||
- **Skip rejected items**: Already documented why in step 8
|
||||
- **Verify each fix**: Run tests after each fix
|
||||
- **Commit incrementally**: One fix per commit for traceability
|
||||
|
||||
---
|
||||
|
||||
## Process
|
||||
|
||||
### 1. Load Review Analysis
|
||||
|
||||
```bash
|
||||
# Read review analysis from step 8
|
||||
review_analysis="{reviewAnalysis}"
|
||||
test -f "$review_analysis" || (echo "⚠️ No review analysis found - skipping fix step" && exit 0)
|
||||
```
|
||||
|
||||
Parse the analysis report to extract:
|
||||
- MUST FIX items (count: {must_fix_count})
|
||||
- SHOULD FIX items (count: {should_fix_count})
|
||||
- Rejected items (for reference - DO NOT fix these)
|
||||
|
||||
### 2. Fix MUST FIX Items (Critical - Blocking)
|
||||
|
||||
**These are MANDATORY fixes - cannot proceed without fixing.**
|
||||
|
||||
For each MUST FIX issue:
|
||||
|
||||
```
|
||||
🔴 Issue #{number}: {title}
|
||||
File: {file}:{line}
|
||||
Severity: CRITICAL
|
||||
Category: {category} (SECURITY | CORRECTNESS | etc.)
|
||||
|
||||
Problem:
|
||||
{description}
|
||||
|
||||
Fix Required:
|
||||
{recommendation}
|
||||
|
||||
Estimated Time: {estimate}
|
||||
```
|
||||
|
||||
**Fix Process:**
|
||||
1. Read the file at the specified location
|
||||
2. Understand the issue context
|
||||
3. Implement the recommended fix
|
||||
4. Add test if issue was caught by testing gap
|
||||
5. Run tests to verify fix works
|
||||
6. Commit the fix
|
||||
|
||||
```bash
|
||||
# Example fix commit
|
||||
git add {file}
|
||||
git commit -m "fix(story-{story_id}): {issue_title}
|
||||
|
||||
{category}: {brief_description}
|
||||
|
||||
- Issue: {problem_summary}
|
||||
- Fix: {fix_summary}
|
||||
- Testing: {test_verification}
|
||||
|
||||
Addresses review finding #{number} (MUST FIX)
|
||||
Related to story {story_id}"
|
||||
```
|
||||
|
||||
**Quality Check After Each Fix:**
|
||||
```bash
|
||||
# Verify fix doesn't break anything
|
||||
npm test
|
||||
|
||||
# If tests fail:
|
||||
# 1. Fix the test or the code
|
||||
# 2. Re-run tests
|
||||
# 3. Only commit when tests pass
|
||||
```
|
||||
|
||||
### 3. Fix SHOULD FIX Items (High Priority)
|
||||
|
||||
**These are important for code quality and team standards.**
|
||||
|
||||
For each SHOULD FIX issue:
|
||||
|
||||
```
|
||||
🟠 Issue #{number}: {title}
|
||||
File: {file}:{line}
|
||||
Severity: HIGH
|
||||
Category: {category} (STANDARDS | MAINTAINABILITY | etc.)
|
||||
|
||||
Problem:
|
||||
{description}
|
||||
|
||||
Fix Required:
|
||||
{recommendation}
|
||||
|
||||
Estimated Time: {estimate}
|
||||
```
|
||||
|
||||
Same fix process as MUST FIX items, but with SHOULD FIX label in commit.
|
||||
|
||||
### 4. Consider CONSIDER Items (If Time/Scope Permits)
|
||||
|
||||
For CONSIDER items, evaluate:
|
||||
|
||||
```
|
||||
🟡 Issue #{number}: {title}
|
||||
File: {file}:{line}
|
||||
Severity: MEDIUM
|
||||
|
||||
Scope Check:
|
||||
- Is this within story scope? {yes/no}
|
||||
- Time remaining in story? {estimate}
|
||||
- Would this improve maintainability? {yes/no}
|
||||
|
||||
Decision:
|
||||
[ ] FIX NOW - In scope and quick
|
||||
[ ] CREATE TECH DEBT TICKET - Out of scope
|
||||
[ ] SKIP - Not worth the effort
|
||||
```
|
||||
|
||||
If fixing:
|
||||
- Same process as SHOULD FIX
|
||||
- Label as "refactor" or "improve" instead of "fix"
|
||||
|
||||
If creating tech debt ticket:
|
||||
```markdown
|
||||
# Tech Debt: {title}
|
||||
|
||||
**Source:** Code review finding from story {story_id}
|
||||
**Priority:** Medium
|
||||
**Estimated Effort:** {estimate}
|
||||
|
||||
**Description:**
|
||||
{issue_description}
|
||||
|
||||
**Recommendation:**
|
||||
{recommendation}
|
||||
|
||||
**Why Deferred:**
|
||||
{reason} (e.g., out of scope, time constraints, etc.)
|
||||
```
|
||||
|
||||
### 5. Skip REJECTED Items
|
||||
|
||||
**DO NOT fix rejected items.**
|
||||
|
||||
Display confirmation:
|
||||
```
|
||||
⚪ REJECTED ITEMS (Skipped):
|
||||
Total: {rejected_count}
|
||||
|
||||
These findings were analyzed and rejected in step 8:
|
||||
- #{number}: {title} - {rejection_reason}
|
||||
- #{number}: {title} - {rejection_reason}
|
||||
|
||||
✅ Correctly skipped (documented as gold plating/false positives)
|
||||
```
|
||||
|
||||
### 6. Skip OPTIONAL Items (Tech Debt Backlog)
|
||||
|
||||
For OPTIONAL items:
|
||||
- Create tech debt tickets (if not already created)
|
||||
- Do NOT implement now
|
||||
- Add to project backlog
|
||||
|
||||
### 7. Verify All Fixes Work Together
|
||||
|
||||
After all fixes applied, run complete quality check:
|
||||
|
||||
```bash
|
||||
echo "🔍 Verifying all fixes together..."
|
||||
|
||||
# Run full test suite
|
||||
npm test
|
||||
|
||||
# Run type checker
|
||||
npx tsc --noEmit
|
||||
|
||||
# Run linter
|
||||
npm run lint
|
||||
|
||||
# Check test coverage
|
||||
npm run test:coverage
|
||||
```
|
||||
|
||||
**If any check fails:**
|
||||
```
|
||||
❌ Quality checks failed after fixes!
|
||||
|
||||
This means fixes introduced new issues.
|
||||
|
||||
Action required:
|
||||
1. Identify which fix broke which test
|
||||
2. Fix the issue
|
||||
3. Re-run quality checks
|
||||
4. Repeat until all checks pass
|
||||
|
||||
DO NOT PROCEED until all quality checks pass.
|
||||
```
|
||||
|
||||
### 8. Summary Report
|
||||
|
||||
```markdown
|
||||
# Fix Summary: Story {story_id}
|
||||
|
||||
## Issues Addressed
|
||||
|
||||
### 🔴 MUST FIX: {must_fix_count} issues
|
||||
- [x] Issue #1: {title} - FIXED ✅
|
||||
- [x] Issue #2: {title} - FIXED ✅
|
||||
|
||||
### 🟠 SHOULD FIX: {should_fix_count} issues
|
||||
- [x] Issue #3: {title} - FIXED ✅
|
||||
- [x] Issue #4: {title} - FIXED ✅
|
||||
|
||||
### 🟡 CONSIDER: {consider_fixed_count}/{consider_count} issues
|
||||
- [x] Issue #5: {title} - FIXED ✅
|
||||
- [ ] Issue #6: {title} - Tech debt ticket created
|
||||
|
||||
### ⚪ REJECTED: {rejected_count} items
|
||||
- Correctly skipped (documented in review analysis)
|
||||
|
||||
### 🔵 OPTIONAL: {optional_count} items
|
||||
- Tech debt tickets created
|
||||
- Added to backlog
|
||||
|
||||
## Commits Made
|
||||
|
||||
Total commits: {commit_count}
|
||||
- MUST FIX commits: {must_fix_commits}
|
||||
- SHOULD FIX commits: {should_fix_commits}
|
||||
- Other commits: {other_commits}
|
||||
|
||||
## Final Quality Check
|
||||
|
||||
✅ All tests passing: {test_count} tests
|
||||
✅ Type check: No errors
|
||||
✅ Linter: No violations
|
||||
✅ Coverage: {coverage}%
|
||||
|
||||
## Time Spent
|
||||
|
||||
Estimated: {estimated_time}
|
||||
Actual: {actual_time}
|
||||
Efficiency: {efficiency_percentage}%
|
||||
```
|
||||
|
||||
### 9. Update State
|
||||
|
||||
```yaml
|
||||
# Update {stateFile}
|
||||
current_step: 9
|
||||
issues_fixed:
|
||||
must_fix: {must_fix_count}
|
||||
should_fix: {should_fix_count}
|
||||
consider: {consider_fixed_count}
|
||||
rejected: {rejected_count} (skipped - documented)
|
||||
optional: {optional_count} (tech debt created)
|
||||
fixes_verified: true
|
||||
all_quality_checks_passed: true
|
||||
ready_for_completion: true
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Quality Gates
|
||||
|
||||
**BLOCKING:** Cannot proceed to step 10 until:
|
||||
|
||||
✅ **All MUST FIX issues resolved**
|
||||
✅ **All SHOULD FIX issues resolved**
|
||||
✅ **All tests passing**
|
||||
✅ **Type check passing**
|
||||
✅ **Linter passing**
|
||||
✅ **Coverage maintained or improved**
|
||||
|
||||
If any gate fails:
|
||||
1. Fix the issue
|
||||
2. Re-run quality checks
|
||||
3. Repeat until ALL PASS
|
||||
4. THEN proceed to next step
|
||||
|
||||
---
|
||||
|
||||
## Skip Conditions
|
||||
|
||||
This step can be skipped only if:
|
||||
- Review analysis (step 8) found zero issues requiring fixes
|
||||
- All findings were REJECTED or OPTIONAL
|
||||
|
||||
Display when skipping:
|
||||
```
|
||||
✅ No fixes required!
|
||||
|
||||
Review analysis found no critical or high-priority issues.
|
||||
All findings were either rejected as gold plating or marked as optional tech debt.
|
||||
|
||||
Proceeding to completion...
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Error Handling
|
||||
|
||||
**If a fix causes test failures:**
|
||||
```
|
||||
⚠️ Fix introduced regression!
|
||||
|
||||
Test failures after applying fix for: {issue_title}
|
||||
|
||||
Failed tests:
|
||||
- {test_name_1}
|
||||
- {test_name_2}
|
||||
|
||||
Action:
|
||||
1. Review the fix - did it break existing functionality?
|
||||
2. Either fix the implementation or update the tests
|
||||
3. Re-run tests
|
||||
4. Only proceed when tests pass
|
||||
```
|
||||
|
||||
**If stuck on a fix:**
|
||||
```
|
||||
⚠️ Fix is more complex than estimated
|
||||
|
||||
Issue: {issue_title}
|
||||
Estimated: {estimate}
|
||||
Actual time spent: {actual} (exceeded estimate)
|
||||
|
||||
Options:
|
||||
[C] Continue - Keep working on this fix
|
||||
[D] Defer - Create tech debt ticket and continue
|
||||
[H] Help - Request human intervention
|
||||
|
||||
If deferring:
|
||||
- Document current progress
|
||||
- Create detailed tech debt ticket
|
||||
- Note blocking issues
|
||||
- Continue with other fixes
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Next Step
|
||||
|
||||
Proceed to **Step 10: Complete + Update Status** ({nextStep})
|
||||
|
||||
All issues fixed, all quality checks passed. Ready to mark story as done!
|
||||
|
|
@ -0,0 +1,332 @@
|
|||
---
|
||||
name: 'step-10-complete'
|
||||
description: 'Complete story with MANDATORY sprint-status.yaml update and verification'
|
||||
|
||||
# Path Definitions
|
||||
workflow_path: '{project-root}/_bmad/bmm/workflows/4-implementation/super-dev-pipeline'
|
||||
|
||||
# File References
|
||||
thisStepFile: '{workflow_path}/steps/step-10-complete.md'
|
||||
nextStepFile: '{workflow_path}/steps/step-11-summary.md'
|
||||
stateFile: '{state_file}'
|
||||
sprint_status: '{sprint_artifacts}/sprint-status.yaml'
|
||||
|
||||
# Role Switch
|
||||
role: sm
|
||||
---
|
||||
|
||||
# Step 10: Complete Story (v1.5.0: Mandatory Status Update)
|
||||
|
||||
## ROLE SWITCH
|
||||
|
||||
**Switching to SM (Scrum Master) perspective.**
|
||||
|
||||
You are now completing the story and preparing changes for git commit.
|
||||
|
||||
## STEP GOAL
|
||||
|
||||
Complete the story with safety checks and MANDATORY status updates:
|
||||
1. Extract file list from story
|
||||
2. Stage only story-related files
|
||||
3. Generate commit message
|
||||
4. Create commit
|
||||
5. Push to remote (if configured)
|
||||
6. Update story file status to "done"
|
||||
7. **UPDATE sprint-status.yaml (MANDATORY - NO EXCEPTIONS)**
|
||||
8. **VERIFY sprint-status.yaml update persisted (CRITICAL)**
|
||||
|
||||
## MANDATORY EXECUTION RULES
|
||||
|
||||
### Completion Principles
|
||||
|
||||
- **TARGETED COMMIT** - Only files from this story's File List
|
||||
- **SAFETY CHECKS** - Verify no secrets, proper commit message
|
||||
- **STATUS UPDATE** - Mark story as "done" (v1.5.0: changed from the earlier "review" handoff)
|
||||
- **NO FORCE PUSH** - Normal push only
|
||||
|
||||
## EXECUTION SEQUENCE
|
||||
|
||||
### 1. Extract File List from Story
|
||||
|
||||
Read story file and find "File List" section:
|
||||
|
||||
```markdown
|
||||
## File List
|
||||
- src/components/UserProfile.tsx
|
||||
- src/actions/updateUser.ts
|
||||
- tests/user.test.ts
|
||||
```
|
||||
|
||||
Extract all file paths.
|
||||
Add story file itself to the list.
|
||||
|
||||
Store as `{story_files}` (space-separated list).
|
||||
|
||||
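A minimal extraction sketch, assuming the section header is exactly `## File List` and each entry is a `- path` bullet as in the example above:

```bash
# Collect bullet entries from the "## File List" section of the story file
story_files=$(awk '/^## File List/{grab=1; next} /^## /{grab=0} grab && /^- /{print $2}' "{story_file}" | tr '\n' ' ')

# Include the story file itself in the commit set
story_files="$story_files {story_file}"
echo "Files to stage: $story_files"
```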
### 2. Verify Files Exist
|
||||
|
||||
For each file in list:
|
||||
```bash
|
||||
test -f "{file}" && echo "✓ {file}" || echo "⚠️ {file} not found"
|
||||
```
|
||||
|
||||
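Looped over `{story_files}` from step 1, the same check becomes:

```bash
missing=0
for f in {story_files}; do
  test -f "$f" && echo "✓ $f" || { echo "⚠️ $f not found"; missing=$((missing + 1)); }
done
[ "$missing" -eq 0 ] || echo "⚠️ $missing file(s) from the File List are missing"
```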
### 3. Check Git Status
|
||||
|
||||
```bash
|
||||
git status --short
|
||||
```
|
||||
|
||||
Display files changed.
|
||||
|
||||
### 4. Stage Story Files Only
|
||||
|
||||
```bash
|
||||
git add {story_files}
|
||||
```
|
||||
|
||||
**This ensures parallel-safe commits** (other agents won't conflict).
|
||||
|
||||
### 5. Generate Commit Message
|
||||
|
||||
Based on story title and changes:
|
||||
|
||||
```
|
||||
feat(story-{story_id}): {story_title}
|
||||
|
||||
Implemented:
|
||||
{list acceptance criteria or key changes}
|
||||
|
||||
Files changed:
|
||||
- {file_1}
|
||||
- {file_2}
|
||||
|
||||
Story: {story_file}
|
||||
```
|
||||
|
||||
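One way to assemble that message into a shell variable for the commit step below — a sketch that assumes the story title lives in a `title:` frontmatter field (hypothetical; adjust to the actual story format):

```bash
# Hypothetical frontmatter field; adapt to how the story file stores its title
story_title=$(grep -m1 '^title:' "{story_file}" | sed 's/^title:[[:space:]]*//')

commit_message=$(printf 'feat(story-%s): %s\n\nFiles changed:\n%s\n\nStory: %s\n' \
  "{story_id}" "$story_title" "$(printf -- '- %s\n' {story_files})" "{story_file}")
echo "$commit_message"
```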
### 6. Create Commit (With Queue for Parallel Mode)
|
||||
|
||||
**Check execution mode:**
|
||||
```
|
||||
If mode == "batch" AND parallel execution:
|
||||
use_commit_queue = true
|
||||
Else:
|
||||
use_commit_queue = false
|
||||
```
|
||||
|
||||
**If use_commit_queue == true:**
|
||||
|
||||
```bash
|
||||
# Commit queue with file-based locking
|
||||
lock_file=".git/bmad-commit.lock"
|
||||
max_wait=300 # 5 minutes
|
||||
wait_time=0
|
||||
retry_delay=1
|
||||
|
||||
while [ $wait_time -lt $max_wait ]; do
|
||||
if [ ! -f "$lock_file" ]; then
|
||||
# Acquire lock
|
||||
echo "locked_by: {{story_key}}
|
||||
locked_at: $(date -u +%Y-%m-%dT%H:%M:%SZ)
|
||||
worker_id: {{worker_id}}
|
||||
pid: $$" > "$lock_file"
|
||||
|
||||
echo "🔒 Commit lock acquired for {{story_key}}"
|
||||
|
||||
# Execute commit
|
||||
git commit -m "$(cat <<'EOF'
|
||||
{commit_message}
|
||||
EOF
|
||||
)"
|
||||
|
||||
commit_result=$?
|
||||
|
||||
# Release lock
|
||||
rm -f "$lock_file"
|
||||
echo "🔓 Lock released"
|
||||
|
||||
if [ $commit_result -eq 0 ]; then
|
||||
git log -1 --oneline
|
||||
break
|
||||
else
|
||||
echo "❌ Commit failed"
|
||||
exit $commit_result
|
||||
fi
|
||||
else
|
||||
# Lock exists, check if stale
|
||||
lock_age=$(( $(date +%s) - $(date -r "$lock_file" +%s) ))
|
||||
if [ $lock_age -gt 300 ]; then
|
||||
echo "⚠️ Stale lock detected (${lock_age}s old) - removing"
|
||||
rm -f "$lock_file"
|
||||
continue
|
||||
fi
|
||||
|
||||
locked_by=$(grep "locked_by:" "$lock_file" | cut -d' ' -f2-)
|
||||
echo "⏳ Waiting for commit lock... (held by $locked_by, ${wait_time}s elapsed)"
|
||||
sleep $retry_delay
|
||||
wait_time=$(( wait_time + retry_delay ))
|
||||
retry_delay=$(( retry_delay < 30 ? retry_delay * 3 / 2 : 30 )) # Exponential backoff, max 30s
|
||||
fi
|
||||
done
|
||||
|
||||
if [ $wait_time -ge $max_wait ]; then
|
||||
echo "❌ TIMEOUT: Could not acquire commit lock after 5 minutes"
|
||||
echo "Lock holder: $(cat $lock_file)"
|
||||
exit 1
|
||||
fi
|
||||
```
|
||||
|
||||
**If use_commit_queue == false (sequential mode):**
|
||||
|
||||
```bash
|
||||
# Direct commit (no queue needed)
|
||||
git commit -m "$(cat <<'EOF'
|
||||
{commit_message}
|
||||
EOF
|
||||
)"
|
||||
|
||||
git log -1 --oneline
|
||||
```
|
||||
|
||||
### 7. Push to Remote (Optional)
|
||||
|
||||
**If configured to push:**
|
||||
```bash
|
||||
git push
|
||||
```
|
||||
|
||||
**If push succeeds:**
|
||||
```
|
||||
✅ Changes pushed to remote
|
||||
```
|
||||
|
||||
**If push fails (e.g., need to pull first):**
|
||||
```
|
||||
⚠️ Push failed - changes committed locally
|
||||
You can push manually when ready
|
||||
```
|
||||
|
||||
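A compact way to express the two outcomes above in a single guarded command:

```bash
if git push; then
  echo "✅ Changes pushed to remote"
else
  echo "⚠️ Push failed - changes committed locally"
  echo "You can push manually when ready"
fi
```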
### 8. Update Story Status (File + Sprint-Status)
|
||||
|
||||
**CRITICAL: Two-location update with verification**
|
||||
|
||||
#### 8.1: Update Story File
|
||||
|
||||
Update story file frontmatter:
|
||||
```yaml
|
||||
status: done # Story completed (v1.5.0: changed from "review" to "done")
|
||||
completed_date: {date}
|
||||
```
|
||||
|
||||
#### 8.2: Update sprint-status.yaml (MANDATORY - NO EXCEPTIONS)
|
||||
|
||||
**This is CRITICAL and CANNOT be skipped.**
|
||||
|
||||
```bash
|
||||
# Read current sprint-status.yaml
|
||||
sprint_status_file="{sprint_artifacts}/sprint-status.yaml"
|
||||
story_key="{story_id}"
|
||||
|
||||
# Update development_status section
|
||||
# Change status from whatever it was to "done"
|
||||
|
||||
development_status:
|
||||
{story_id}: done # ✅ COMPLETED: {story_title}
|
||||
```
|
||||
|
||||
**Implementation:**
|
||||
```bash
|
||||
# Read current status
|
||||
current_status=$(grep "^\s*{story_id}:" "$sprint_status_file" | awk '{print $2}')
|
||||
|
||||
# Update to done
|
||||
sed -i'' "s/^\s*{story_id}:.*/ {story_id}: done # ✅ COMPLETED: {story_title}/" "$sprint_status_file"
|
||||
|
||||
echo "✅ Updated sprint-status.yaml: {story_id} → done"
|
||||
```
|
||||
|
||||
#### 8.3: Verify Update Persisted (CRITICAL)
|
||||
|
||||
```bash
|
||||
# Re-read sprint-status.yaml to verify change
|
||||
verification=$(grep "^\s*{story_id}:" "$sprint_status_file" | awk '{print $2}')
|
||||
|
||||
if [ "$verification" != "done" ]; then
|
||||
echo "❌ CRITICAL: sprint-status.yaml update FAILED!"
|
||||
echo "Expected: done"
|
||||
echo "Got: $verification"
|
||||
echo ""
|
||||
echo "HALTING pipeline - status update is MANDATORY"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo "✅ Verified: sprint-status.yaml correctly updated"
|
||||
```
|
||||
|
||||
**NO EXCEPTIONS:** If verification fails, pipeline MUST HALT.
|
||||
|
||||
### 9. Update Pipeline State
|
||||
|
||||
Update state file:
|
||||
- Add `10` to `stepsCompleted`
- Set `lastStep: 10`
- Set `steps.step-10-complete.status: completed`
|
||||
- Record commit hash
|
||||
|
||||
### 10. Display Summary
|
||||
|
||||
```
|
||||
Story Completion
|
||||
|
||||
✅ Files staged: {file_count}
|
||||
✅ Commit created: {commit_hash}
|
||||
✅ Status updated: done
|
||||
{if pushed}✅ Pushed to remote{endif}
|
||||
|
||||
Commit: {commit_hash_short}
|
||||
Message: {commit_title}
|
||||
|
||||
Ready for Summary Generation
|
||||
```
|
||||
|
||||
**Interactive Mode Menu:**
|
||||
```
|
||||
[C] Continue to Summary
|
||||
[P] Push to remote (if not done)
|
||||
[H] Halt pipeline
|
||||
```
|
||||
|
||||
**Batch Mode:** Auto-continue
|
||||
|
||||
## QUALITY GATE
|
||||
|
||||
Before proceeding (BLOCKING - ALL must pass):
|
||||
- [ ] Targeted files staged (from File List)
|
||||
- [ ] Commit message generated
|
||||
- [ ] Commit created successfully
|
||||
- [ ] Story file status updated to "done"
|
||||
- [ ] **sprint-status.yaml updated to "done" (MANDATORY)**
|
||||
- [ ] **sprint-status.yaml update VERIFIED (CRITICAL)**
|
||||
|
||||
**If ANY check fails, pipeline MUST HALT.**
|
||||
|
||||
## CRITICAL STEP COMPLETION
|
||||
|
||||
**ONLY WHEN** [commit created],
|
||||
load and execute `{nextStepFile}` for summary generation.
|
||||
|
||||
---
|
||||
|
||||
## SUCCESS/FAILURE METRICS
|
||||
|
||||
### ✅ SUCCESS
|
||||
- Only story files committed
|
||||
- Commit message is clear
|
||||
- Status updated properly
|
||||
- No secrets committed
|
||||
- Push succeeded or skipped safely
|
||||
|
||||
### ❌ FAILURE
|
||||
- Committing unrelated files
|
||||
- Generic commit message
|
||||
- Not updating story status
|
||||
- Pushing secrets
|
||||
- Force pushing
|
||||
|
|
@ -0,0 +1,279 @@
|
|||
---
|
||||
name: 'step-06a-queue-commit'
|
||||
description: 'Queued git commit with file-based locking for parallel safety'
|
||||
|
||||
# Path Definitions
|
||||
workflow_path: '{project-root}/_bmad/bmm/workflows/4-implementation/super-dev-pipeline'
|
||||
|
||||
# File References
|
||||
thisStepFile: '{workflow_path}/steps/step-06a-queue-commit.md'
|
||||
nextStepFile: '{workflow_path}/steps/step-07-summary.md'
|
||||
|
||||
# Role
|
||||
role: dev
|
||||
requires_fresh_context: false
|
||||
---
|
||||
|
||||
# Step 6a: Queued Git Commit (Parallel-Safe)
|
||||
|
||||
## STEP GOAL
|
||||
|
||||
Execute git commit with file-based locking to prevent concurrent commit conflicts in parallel batch mode.
|
||||
|
||||
**Problem Solved:**
|
||||
- Multiple parallel agents trying to commit simultaneously
|
||||
- Git lock file conflicts (.git/index.lock)
|
||||
- "Another git process seems to be running" errors
|
||||
- Commit failures requiring manual intervention
|
||||
|
||||
**Solution:**
|
||||
- File-based commit queue using .git/bmad-commit.lock
|
||||
- Automatic retry with exponential backoff
|
||||
- Lock cleanup on success or failure
|
||||
- Maximum wait time enforcement
|
||||
|
||||
## EXECUTION SEQUENCE
|
||||
|
||||
### 1. Check if Commit Queue is Needed
|
||||
|
||||
```
|
||||
If mode == "batch" AND execution_mode == "parallel":
|
||||
use_commit_queue = true
|
||||
Display: "🔒 Using commit queue (parallel mode)"
|
||||
Else:
|
||||
use_commit_queue = false
|
||||
Display: "Committing directly (sequential mode)"
|
||||
goto Step 3 (Direct Commit)
|
||||
```
|
||||
|
||||
### 2. Acquire Commit Lock (Parallel Mode Only)
|
||||
|
||||
**Lock file:** `.git/bmad-commit.lock`
|
||||
|
||||
**Acquisition algorithm:**
|
||||
```
|
||||
max_wait_time = 300 seconds (5 minutes)
|
||||
retry_delay = 1 second (exponential backoff)
|
||||
start_time = now()
|
||||
|
||||
WHILE elapsed_time < max_wait_time:
|
||||
|
||||
IF lock file does NOT exist:
|
||||
Create lock file with content:
|
||||
locked_by: {{story_key}}
|
||||
locked_at: {{timestamp}}
|
||||
worker_id: {{worker_id}}
|
||||
pid: {{process_id}}
|
||||
|
||||
Display: "🔓 Lock acquired for {{story_key}}"
|
||||
BREAK (proceed to commit)
|
||||
|
||||
ELSE:
|
||||
Read lock file to check who has it
|
||||
Display: "⏳ Waiting for commit lock... (held by {{locked_by}}, {{wait_duration}}s elapsed)"
|
||||
|
||||
Sleep retry_delay seconds
|
||||
retry_delay = min(retry_delay * 1.5, 30) # Exponential backoff, max 30s
|
||||
|
||||
Check if lock is stale (>5 minutes old):
|
||||
IF lock_age > 300 seconds:
|
||||
Display: "⚠️ Stale lock detected ({{lock_age}}s old) - removing"
|
||||
Delete lock file
|
||||
Continue (try again)
|
||||
```
|
||||
|
||||
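The check-then-create sequence above leaves a small race window between the existence check and the write. If fully atomic acquisition is ever needed, the shell `noclobber` trick is one option — a sketch, not part of the original design:

```bash
lock_file=".git/bmad-commit.lock"

# The redirect fails if the file already exists, so creation and acquisition happen in one atomic step
if ( set -o noclobber; echo "locked_by: {{story_key}}" > "$lock_file" ) 2>/dev/null; then
  echo "🔒 Commit lock acquired for {{story_key}}"
else
  echo "⏳ Lock held by: $(grep 'locked_by:' "$lock_file" 2>/dev/null | cut -d' ' -f2-)"
fi
```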
**Timeout handling:**
|
||||
```
|
||||
IF elapsed_time >= max_wait_time:
|
||||
Display:
|
||||
❌ TIMEOUT: Could not acquire commit lock after 5 minutes
|
||||
|
||||
Lock held by: {{locked_by}}
|
||||
Lock age: {{lock_age}} seconds
|
||||
|
||||
Possible causes:
|
||||
- Another agent crashed while holding lock
|
||||
- Commit taking abnormally long
|
||||
- Lock file not cleaned up
|
||||
|
||||
HALT - Manual intervention required:
|
||||
- Check if lock holder is still running
|
||||
- Delete .git/bmad-commit.lock if safe
|
||||
- Retry this story
|
||||
```
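
The acquisition and staleness rules above can be expressed as a small shell loop. This is a minimal sketch, not the pipeline's actual implementation; it assumes GNU coreutils (`stat -c %Y`, `date`), and `STORY_KEY` / `WORKER_ID` are illustrative variables supplied by the orchestrator:

```bash
#!/usr/bin/env bash
# Sketch of the lock-acquisition loop (assumptions: GNU coreutils, bash, lock path as above)
LOCK=".git/bmad-commit.lock"
MAX_WAIT=300   # give up after 5 minutes
DELAY=1        # initial retry delay, grows toward 30s
START=$(date +%s)

while true; do
  # noclobber makes the redirect fail if the file already exists, so creation is atomic
  if ( set -o noclobber; printf 'locked_by: %s\nlocked_at: %s\nworker_id: %s\npid: %s\n' \
        "$STORY_KEY" "$(date -u +%FT%TZ)" "$WORKER_ID" "$$" > "$LOCK" ) 2>/dev/null; then
    echo "🔓 Lock acquired for $STORY_KEY"
    break
  fi

  # Stale lock: older than 5 minutes -> remove it and retry immediately (GNU stat shown)
  if [ -f "$LOCK" ] && [ $(( $(date +%s) - $(stat -c %Y "$LOCK") )) -gt 300 ]; then
    echo "⚠️ Stale lock detected - removing"
    rm -f "$LOCK"
    continue
  fi

  if [ $(( $(date +%s) - START )) -ge "$MAX_WAIT" ]; then
    echo "❌ TIMEOUT: could not acquire commit lock after 5 minutes" >&2
    exit 1
  fi

  sleep "$DELAY"
  DELAY=$(awk -v d="$DELAY" 'BEGIN { d *= 1.5; if (d > 30) d = 30; print d }')
done
```

The `noclobber` redirect is what makes the check-and-create step atomic; testing for the file and then writing it as two separate commands would leave a race window between parallel workers.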

### 3. Execute Git Commit

**Stage changes:**
```bash
git add {files_changed_for_this_story}
```

**Generate commit message:**
```
feat: implement story {{story_key}}

{{implementation_summary_from_dev_agent_record}}

Files changed:
{{#each files_changed}}
- {{this}}
{{/each}}

Tasks completed: {{checked_tasks}}/{{total_tasks}}
Story status: {{story_status}}
```

**Commit:**
```bash
git commit -m "$(cat <<'EOF'
{commit_message}
EOF
)"
```

**Verification:**
```bash
git log -1 --oneline
```

Confirm that a commit SHA is returned.
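
A small sketch of capturing the values that step 5 later records in the state file (assuming the commit above just succeeded):

```bash
# Sketch: capture the commit SHA and timestamp for the state-file update in step 5
commit_sha=$(git rev-parse HEAD)
committed_at=$(date -u +%FT%TZ)
echo "Committed ${commit_sha} at ${committed_at}"
```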

### 4. Release Commit Lock (Parallel Mode Only)

```
IF use_commit_queue == true:
    Delete lock file: .git/bmad-commit.lock

    Verify lock removed:
        IF lock file still exists:
            Display: "⚠️ WARNING: Could not remove lock file"
            Try force delete
        ELSE:
            Display: "🔓 Lock released for {{story_key}}"
```

**Error handling:**
```
IF commit failed:
    Release lock (if held)
    Display:
        ❌ COMMIT FAILED: {{error_message}}

        Story implemented but not committed.
        Changes are staged but not in git history.

    HALT - Fix commit issue before continuing
```
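
One way to guarantee the release on both paths is a shell `trap`. A minimal sketch, assuming the `$LOCK`, `$STORY_KEY`, and `$FILES_CHANGED` variables from the acquisition sketch above:

```bash
# Sketch: release the lock whether the commit succeeds or fails
cleanup() { rm -f "$LOCK"; echo "🔓 Lock released for $STORY_KEY"; }
trap cleanup EXIT

git add -- $FILES_CHANGED   # files touched by this story (intentionally word-split)
if ! git commit -m "feat: implement story $STORY_KEY"; then
  echo "❌ COMMIT FAILED - story implemented but not committed" >&2
  exit 1   # the EXIT trap still removes the lock
fi
git log -1 --oneline
```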

### 5. Update State

Update state file:
- Add `6a` to `stepsCompleted`
- Set `lastStep: 6a`
- Record `commit_sha`
- Record `committed_at` timestamp

### 6. Present Summary

Display:
```
✅ Story {{story_key}} Committed

Commit: {{commit_sha}}
Files: {{files_count}} changed
{{#if use_commit_queue}}Lock wait: {{lock_wait_duration}}s{{/if}}
```

**Interactive Mode Menu:**
```
[C] Continue to Summary
[P] Push to remote
[H] Halt pipeline
```

**Batch Mode:** Auto-continue to step-07-summary.md

## CRITICAL STEP COMPLETION

Load and execute `{nextStepFile}` for summary.

---

## SUCCESS/FAILURE METRICS

### ✅ SUCCESS
- Changes committed to git
- Commit SHA recorded
- Lock acquired and released cleanly (parallel mode)
- No lock file remaining
- State updated

### ❌ FAILURE
- Commit timed out
- Lock acquisition timed out (>5 min)
- Lock not released (leaked lock)
- Commit command failed
- Stale lock not cleaned up

---

## LOCK FILE FORMAT

`.git/bmad-commit.lock` contains:
```yaml
locked_by: "2-7-image-file-handling"
locked_at: "2026-01-07T18:45:32Z"
worker_id: 3
pid: 12345
story_file: "docs/sprint-artifacts/2-7-image-file-handling.md"
```

This allows debugging if the lock gets stuck.

---

## QUEUE BENEFITS

**Before (No Queue):**
```
Worker 1: git commit → acquires .git/index.lock
Worker 2: git commit → ERROR: index.lock exists
Worker 3: git commit → ERROR: index.lock exists
Worker 2: retries → ERROR: index.lock exists
Worker 3: retries → ERROR: index.lock exists
Workers 2 & 3: HALT - manual intervention needed
```

**After (With Queue):**
```
Worker 1: acquires bmad-commit.lock → git commit → releases lock
Worker 2: waits for lock → acquires → git commit → releases
Worker 3: waits for lock → acquires → git commit → releases
All workers: SUCCESS ✅
```

**Throughput Impact:**
- Implementation: fully parallel (no blocking)
- Commits: serialized (necessary to prevent conflicts)
- Overall: still much faster than sequential mode (implementation is ~90% of the time)

---

## STALE LOCK RECOVERY

**Automatic cleanup:**
- Locks older than 5 minutes are considered stale
- Automatically removed before retrying
- Prevents permanent deadlock from crashed agents

**Manual recovery:**
```bash
# If the workflow is stuck on lock acquisition:
rm .git/bmad-commit.lock

# Check whether any git process is actually running:
ps aux | grep git

# If no git process is running, it is safe to remove the lock
```

@ -0,0 +1,219 @@
---
name: 'step-11-summary'
description: 'Generate comprehensive audit trail and pipeline summary'

# Path Definitions
workflow_path: '{project-root}/_bmad/bmm/workflows/4-implementation/super-dev-pipeline'

# File References
thisStepFile: '{workflow_path}/steps/step-11-summary.md'
stateFile: '{state_file}'
storyFile: '{story_file}'
auditTrail: '{audit_trail}'

# Role
role: null
---

# Step 11: Pipeline Summary

## STEP GOAL

Generate comprehensive audit trail and summary:
1. Calculate total duration
2. Summarize work completed
3. Generate audit trail file
4. Display final summary
5. Clean up state file

## EXECUTION SEQUENCE

### 1. Calculate Metrics

From the state file, calculate:
- Total duration: `{completed_at} - {started_at}`
- Step durations
- Files modified count
- Issues found and fixed
- Tasks completed
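
A small sketch of the duration calculation, assuming ISO-8601 timestamps in the state file and GNU `date` (the timestamps shown are placeholders for the values read from the state file):

```bash
# Sketch: derive total duration from the state file's started_at/completed_at
started_at="2026-01-07T18:02:11Z"
completed_at="2026-01-07T19:14:45Z"
secs=$(( $(date -d "$completed_at" +%s) - $(date -d "$started_at" +%s) ))
printf 'Total duration: %dh %dm %ds\n' $((secs/3600)) $((secs%3600/60)) $((secs%60))
```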
### 2. Generate Audit Trail

Create file: `{sprint_artifacts}/audit-super-dev-{story_id}-{date}.yaml`

```yaml
---
audit_version: "1.0"
workflow: "super-dev-pipeline"
workflow_version: "1.0.0"

# Story identification
story_id: "{story_id}"
story_file: "{story_file}"
story_title: "{story_title}"

# Execution summary
execution:
  started_at: "{started_at}"
  completed_at: "{completed_at}"
  total_duration: "{duration}"
  mode: "{mode}"
  status: "completed"

# Development analysis
development:
  type: "{greenfield|brownfield|hybrid}"
  existing_files_modified: {count}
  new_files_created: {count}
  migrations_applied: {count}

# Step results
steps:
  step-01-init:
    duration: "{duration}"
    status: "completed"

  step-02-pre-gap-analysis:
    duration: "{duration}"
    tasks_analyzed: {count}
    tasks_refined: {count}
    tasks_added: {count}
    status: "completed"

  step-03-implement:
    duration: "{duration}"
    tasks_completed: {count}
    files_created: {list}
    files_modified: {list}
    migrations: {list}
    tests_added: {count}
    status: "completed"

  step-04-post-validation:
    duration: "{duration}"
    tasks_verified: {count}
    false_positives: {count}
    re_implementations: {count}
    status: "completed"

  step-05-code-review:
    duration: "{duration}"
    issues_found: {count}
    issues_fixed: {count}
    categories: {list}
    status: "completed"

  step-06-complete:
    duration: "{duration}"
    commit_hash: "{hash}"
    files_committed: {count}
    pushed: {true|false}
    status: "completed"

# Quality metrics
quality:
  all_tests_passing: true
  lint_clean: true
  build_success: true
  no_vibe_coding: true
  followed_step_sequence: true

# Files affected
files:
  created: {list}
  modified: {list}
  deleted: {list}

# Commit information
commit:
  hash: "{hash}"
  message: "{message}"
  files_committed: {count}
  pushed_to_remote: {true|false}
```

### 3. Display Final Summary

```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🎉 SUPER-DEV PIPELINE COMPLETE!
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Story: {story_title}
Duration: {total_duration}

Development Type: {greenfield|brownfield|hybrid}

Results:
✅ Tasks Completed: {completed_count}
✅ Files Created: {created_count}
✅ Files Modified: {modified_count}
✅ Tests Added: {test_count}
✅ Issues Found & Fixed: {issue_count}

Quality Gates Passed:
✅ Pre-Gap Analysis
✅ Implementation
✅ Post-Validation (no false positives)
✅ Code Review (3-10 issues)
✅ All tests passing
✅ Lint clean
✅ Build success

Git:
✅ Commit: {commit_hash}
{if pushed}✅ Pushed to remote{endif}

Story Status: review (ready for human review)

Audit Trail: {audit_file}

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

✨ No vibe coding occurred! Disciplined execution maintained.

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```

### 4. Clean Up State File

```bash
rm {sprint_artifacts}/super-dev-state-{story_id}.yaml
```

The state file is no longer needed - the audit trail is the permanent record.

### 5. Final Message

```
Super-Dev Pipeline Complete!

This story was developed with disciplined step-file execution.
All quality gates passed. Ready for human review.

Next Steps:
1. Review the commit: git show {commit_hash}
2. Test manually if needed
3. Merge when approved
```

## PIPELINE COMPLETE

Pipeline execution is finished. No further steps.

---

## SUCCESS/FAILURE METRICS

### ✅ SUCCESS
- Audit trail generated
- Summary accurate
- State file cleaned up
- Story marked "review"
- All metrics captured

### ❌ FAILURE
- Missing audit trail
- Incomplete summary
- State file not cleaned
- Metrics inaccurate

@ -0,0 +1,292 @@
---
name: super-dev-pipeline
description: Step-file architecture for super-dev workflow - disciplined execution for both greenfield and brownfield development
web_bundle: true
---

# Super-Dev Pipeline Workflow

**Goal:** Execute story development with disciplined step-file architecture that prevents "vibe coding" and works for both new features and existing-codebase modifications.

**Your Role:** You are the **BMAD Pipeline Orchestrator**. You will follow each step file precisely, without deviation, optimization, or skipping ahead.

**Key Principle:** This workflow uses **step-file architecture** for disciplined execution that prevents Claude from veering off-course when token usage is high.

---

## WORKFLOW ARCHITECTURE

This workflow uses **step-file architecture** borrowed from story-pipeline:

### Core Principles

- **Micro-file Design**: Each step is a self-contained instruction file (~150-300 lines)
- **Just-In-Time Loading**: Only the current step file is in memory
- **Mandatory Sequences**: Execute all numbered sections in order, never deviate
- **State Tracking**: Pipeline state lives in `{sprint_artifacts}/super-dev-state-{story_id}.yaml`
- **No Vibe Coding**: Explicit instructions prevent optimization/deviation

### Step Processing Rules

1. **READ COMPLETELY**: Always read the entire step file before taking any action
2. **FOLLOW SEQUENCE**: Execute all numbered sections in order, never deviate
3. **QUALITY GATES**: Complete gate criteria before proceeding to the next step
4. **WAIT FOR INPUT**: In interactive mode, halt at menus and wait for user selection
5. **SAVE STATE**: Update the pipeline state file after each step completion
6. **LOAD NEXT**: When directed, load the next step file, read it in full, then execute it

### Critical Rules (NO EXCEPTIONS)

- **NEVER** load multiple step files simultaneously
- **ALWAYS** read the entire step file before execution
- **NEVER** skip steps or optimize the sequence
- **ALWAYS** update pipeline state after completing each step
- **ALWAYS** follow the exact instructions in the step file
- **NEVER** create mental todo lists from future steps
- **NEVER** look ahead to future step files
- **NEVER** vibe code when token usage is high - follow the steps exactly!

---

## STEP FILE MAP

| Step | File | Agent | Purpose |
|------|------|-------|---------|
| 1 | step-01-init.md | - | Load story, detect greenfield vs brownfield |
| 2 | step-02-pre-gap-analysis.md | DEV | Validate tasks + **detect batchable patterns** |
| 3 | step-03-implement.md | DEV | **Smart batching** + adaptive implementation |
| 4 | step-04-post-validation.md | DEV | Verify completed tasks vs reality |
| 5 | step-05-code-review.md | DEV | Find 3-10 specific issues |
| 6 | step-06-complete.md | SM | Commit and push changes |
| 7 | step-07-summary.md | - | Audit trail generation |

---

## KEY DIFFERENCES FROM story-pipeline

### What's REMOVED:
- ❌ Step 2 (create-story) - assumes the story already exists
- ❌ Step 4 (ATDD) - not mandatory for brownfield

### What's ENHANCED:
- ✅ Pre-gap analysis is MORE thorough (validates against existing code)
- ✅ **Smart Batching** - detects and groups similar tasks automatically
- ✅ Implementation is ADAPTIVE (TDD for new code, refactoring for existing code)
- ✅ Works for both greenfield and brownfield

### What's NEW:
- ⚡ **Pattern Detection** - automatically identifies batchable tasks
- ⚡ **Intelligent Grouping** - groups similar tasks for batch execution
- ⚡ **Time Optimization** - 50-70% faster for repetitive work
- ⚡ **Safety Preserved** - validation gates enforce discipline

---

## SMART BATCHING FEATURE

### What is Smart Batching?

**Smart batching** is an intelligent optimization that groups similar, low-risk tasks for batch execution while maintaining full validation discipline.

**NOT Vibe Coding:**
- ✅ Pattern detection is systematic (not guesswork)
- ✅ Batches are validated as a group (not skipped)
- ✅ Failure triggers fallback to one-at-a-time execution
- ✅ High-risk tasks are always executed individually

**When It Helps:**
- Large stories with repetitive tasks (100+ tasks)
- Package migration work (installing multiple packages)
- Module refactoring (same pattern across files)
- Code cleanup (deleting old implementations)

**Time Savings:**
```
Example: 100-task story
- Without batching: 100 tasks × 2 min = 200 minutes (3.3 hours)
- With batching: 6 batches × 10 min + 20 individual × 2 min = 100 minutes (1.7 hours)
- Savings: 100 minutes (50% faster!)
```

### Batchable Pattern Types

| Pattern | Example Tasks | Risk | Validation |
|---------|--------------|------|------------|
| **Package Install** | Add dependencies | LOW | Build succeeds |
| **Module Registration** | Import modules | LOW | TypeScript compiles |
| **Code Deletion** | Remove old code | LOW | Tests pass |
| **Import Updates** | Update import paths | LOW | Build succeeds |
| **Config Changes** | Update settings | LOW | App starts |

### NON-Batchable (Individual Execution)

| Pattern | Example Tasks | Risk | Why Individual |
|---------|--------------|------|----------------|
| **Business Logic** | Circuit breaker fallbacks | MEDIUM-HIGH | Logic varies per case |
| **Security Code** | Auth/authorization | HIGH | Mistakes are critical |
| **Data Migrations** | Schema changes | HIGH | Irreversible |
| **API Integration** | External service calls | MEDIUM | Error handling varies |
| **Novel Patterns** | First-time implementation | MEDIUM | Unproven approach |

### How It Works

**Step 2 (Pre-Gap Analysis):**
1. Analyzes all tasks
2. Detects repeating patterns
3. Categorizes tasks as batchable or individual
4. Generates a batching plan with time estimates
5. Adds the plan to the story file

**Step 3 (Implementation):**
1. Loads the batching plan
2. Executes pattern batches first
3. Validates each batch
4. Falls back to individual execution if a batch fails
5. Executes individual tasks with full rigor

**Safety Mechanisms:**
- Pattern detection uses conservative rules (defaults to individual execution)
- Each batch has an explicit validation strategy
- A failed batch triggers automatic fallback
- High-risk tasks are never batched
- All validation gates are still enforced
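
To make the keyword matching concrete, here is a minimal grep-based sketch of pattern detection. It uses the `batchable_patterns` keywords and `min_batch_size: 3` from workflow.yaml; the story path is hypothetical, and step 2 applies far more context than a keyword count:

```bash
# Illustrative only: count package-installation keywords in a story's task list
STORY="docs/sprint-artifacts/story-1-4.md"   # hypothetical path
count=$(grep -c -iE 'npm install|package\.json|dependency' "$STORY" || true)
[ "$count" -ge 3 ] && echo "→ candidate batch: package_installation ($count tasks)"
```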

---

## EXECUTION MODES

### Interactive Mode (Default)
```bash
bmad super-dev-pipeline
```

Features:
- Menu navigation between steps
- User approval at quality gates
- Can pause and resume

### Batch Mode (For batch-super-dev)
```bash
bmad super-dev-pipeline --batch
```

Features:
- Auto-proceed through all steps
- Fail-fast on errors
- No vibe coding even at high token counts

---

## INITIALIZATION SEQUENCE

### 1. Configuration Loading

Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve:
- `output_folder`, `sprint_artifacts`, `communication_language`
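
A minimal sketch of that resolution from the shell, assuming `yq` v4 is available (the orchestrator normally resolves these values itself; `PROJECT_ROOT` is an illustrative variable):

```bash
# Sketch: resolve the config keys the pipeline needs (assumes yq v4)
CONFIG="$PROJECT_ROOT/_bmad/bmm/config.yaml"
output_folder=$(yq '.output_folder' "$CONFIG")
sprint_artifacts=$(yq '.sprint_artifacts' "$CONFIG")
communication_language=$(yq '.communication_language' "$CONFIG")
```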

### 2. Pipeline Parameters

Resolve from the invocation:
- `story_id`: Story identifier (e.g., "1-4")
- `story_file`: Path to story file (must exist!)
- `mode`: "interactive" or "batch"

### 3. Document Pre-loading

Load and cache these documents (read once, use across steps):
- Story file: Required, must exist
- Project context: `**/project-context.md`
- Epic file: Optional, for context

### 4. First Step Execution

Load, read the full file, and then execute:
`{project-root}/_bmad/bmm/workflows/4-implementation/super-dev-pipeline/steps/step-01-init.md`

---

## QUALITY GATES

Each gate must pass before proceeding:

### Pre-Gap Analysis Gate (Step 2)
- [ ] All tasks validated against codebase
- [ ] Existing code analyzed
- [ ] Tasks refined if needed
- [ ] No missing context

### Implementation Gate (Step 3)
- [ ] All tasks completed
- [ ] Tests pass
- [ ] Code follows project patterns
- [ ] No TypeScript errors

### Post-Validation Gate (Step 4)
- [ ] All completed tasks verified against codebase
- [ ] Zero false positives (or re-implementation complete)
- [ ] Files/functions/tests actually exist
- [ ] Tests actually pass (not just claimed)

### Code Review Gate (Step 5)
- [ ] 3-10 specific issues identified (not "looks good")
- [ ] All issues resolved or documented
- [ ] Security review complete

---

## ANTI-VIBE-CODING ENFORCEMENT

This workflow **prevents vibe coding** through:

1. **Mandatory Sequence**: Can't skip ahead or optimize
2. **Micro-file Loading**: Only the current step is in memory
3. **Quality Gates**: Must pass criteria to proceed
4. **State Tracking**: Progress is recorded and verified
5. **Explicit Instructions**: No interpretation required

**Even at 200K tokens**, Claude must:
- ✅ Read the entire step file
- ✅ Follow the numbered sequence
- ✅ Complete the quality gate
- ✅ Update state
- ✅ Load the next step

**No shortcuts. No optimizations. No vibe coding.**

---

## SUCCESS METRICS

### ✅ SUCCESS
- Pipeline completes all 7 steps
- All quality gates passed
- Story status updated
- Git commit created
- Audit trail generated
- **No vibe coding occurred**

### ❌ FAILURE
- Step file instructions skipped or optimized
- Quality gate bypassed without approval
- State file not updated
- Tests not verified
- Code review accepts "looks good"
- **Vibe coding detected**

---

## COMPARISON WITH OTHER WORKFLOWS

| Feature | super-dev-story | story-pipeline | super-dev-pipeline |
|---------|----------------|----------------|-------------------|
| Architecture | Orchestration | Step-files | Step-files |
| Story creation | Separate workflow | Included | ❌ Not included |
| ATDD mandatory | No | Yes | No (adaptive) |
| Greenfield | ✅ | ✅ | ✅ |
| Brownfield | ✅ | ❌ Limited | ✅ |
| Token efficiency | ~100-150K | ~25-30K | ~40-60K |
| Vibe-proof | ❌ | ✅ | ✅ |

---

**super-dev-pipeline is the best of both worlds for batch-super-dev!**

@ -0,0 +1,269 @@
name: super-dev-pipeline
description: "Complete a-k workflow: test-first development, smart gap analysis, quality gates, intelligent multi-agent review, and mandatory status updates. Risk-based complexity routing with variable agent counts."
author: "BMad"
version: "1.5.0" # Complete a-k workflow with TDD, quality gates, multi-agent review, and mandatory sprint-status updates

# Critical variables from config
config_source: "{project-root}/_bmad/bmm/config.yaml"
output_folder: "{config_source}:output_folder"
sprint_artifacts: "{config_source}:sprint_artifacts"
communication_language: "{config_source}:communication_language"
date: system-generated

# Workflow paths
installed_path: "{project-root}/_bmad/bmm/workflows/4-implementation/super-dev-pipeline"
steps_path: "{installed_path}/steps"
templates_path: "{installed_path}/templates"
checklists_path: "{installed_path}/checklists"

# State management
state_file: "{sprint_artifacts}/super-dev-state-{{story_id}}.yaml"
audit_trail: "{sprint_artifacts}/audit-super-dev-{{story_id}}-{{date}}.yaml"

# Auto-create story settings (NEW v1.4.0)
# When a story is missing or lacks proper context, auto-invoke /create-story-with-gap-analysis
auto_create_story:
  enabled: true # Set to false to revert to old HALT behavior
  create_story_workflow: "{project-root}/_bmad/bmm/workflows/4-implementation/create-story-with-gap-analysis"
  triggers:
    - story_not_found # Story file doesn't exist
    - no_tasks # Story exists but has no tasks
    - missing_sections # Story missing required sections (Tasks, Acceptance Criteria)

# Complexity level (passed from batch-super-dev or set manually)
# Controls which pipeline steps to execute
complexity_level: "standard" # micro | standard | complex

# Risk-based complexity routing (UPDATED v1.5.0)
# Complexity determined by RISK level, not task count
# Risk keywords: auth, security, payment, file handling, architecture changes
complexity_routing:
  micro:
    skip_steps: [3, 7, 8, 9] # Skip write-tests, code-review, review-analysis, fix-issues
    description: "Lightweight path for low-risk stories (UI tweaks, text, simple CRUD)"
    multi_agent_count: 2
    examples: ["UI tweaks", "text changes", "simple CRUD", "documentation"]
  standard:
    skip_steps: [] # Full pipeline
    description: "Balanced path for medium-risk stories (APIs, business logic)"
    multi_agent_count: 4
    examples: ["API endpoints", "business logic", "data validation"]
  complex:
    skip_steps: [] # Full pipeline + comprehensive review
    description: "Comprehensive path for high-risk stories (auth, payments, security)"
    multi_agent_count: 6
    examples: ["auth/security", "payments", "file handling", "architecture changes"]
    warn_before_start: true
    suggest_split: true

# Workflow modes
modes:
  interactive:
    description: "Human-in-the-loop with menu navigation between steps"
    checkpoint_on_failure: true
    requires_approval: true
    smart_batching: true # User can approve batching plan
  batch:
    description: "Unattended execution for batch-super-dev"
    checkpoint_on_failure: true
    requires_approval: false
    fail_fast: true
    smart_batching: true # Auto-enabled for efficiency

# Smart batching configuration
smart_batching:
  enabled: true
  detect_patterns: true
  default_to_safe: true # When uncertain, execute individually
  min_batch_size: 3 # Minimum tasks to form a batch
  fallback_on_failure: true # Revert to individual if batch fails

# Batchable pattern definitions
batchable_patterns:
  - pattern: "package_installation"
    keywords: ["Add", "package.json", "npm install", "dependency"]
    risk_level: "low"
    validation: "npm install && npm run build"

  - pattern: "module_registration"
    keywords: ["Import", "Module", "app.module", "register"]
    risk_level: "low"
    validation: "tsc --noEmit"

  - pattern: "code_deletion"
    keywords: ["Delete", "Remove", "rm ", "unlink"]
    risk_level: "low"
    validation: "npm test && npm run build"

  - pattern: "import_update"
    keywords: ["Update import", "Change import", "import from"]
    risk_level: "low"
    validation: "npm run build"

# Non-batchable pattern definitions (always execute individually)
individual_patterns:
  - pattern: "business_logic"
    keywords: ["circuit breaker", "fallback", "caching for", "strategy"]
    risk_level: "medium"

  - pattern: "security"
    keywords: ["auth", "permission", "security", "encrypt"]
    risk_level: "high"

  - pattern: "data_migration"
    keywords: ["migration", "schema", "ALTER TABLE", "database"]
    risk_level: "high"

# Agent role definitions (loaded once, switched as needed)
agents:
  dev:
    name: "Developer"
    persona: "{project-root}/_bmad/bmm/agents/dev.md"
    description: "Gap analysis, write tests, implementation, validation, review, fixes"
    used_in_steps: [2, 3, 4, 5, 6, 7, 8, 9]
  sm:
    name: "Scrum Master"
    persona: "{project-root}/_bmad/bmm/agents/sm.md"
    description: "Story completion, status updates, sprint-status.yaml management"
    used_in_steps: [10]

# Step file definitions (NEW v1.5.0: 11-step a-k workflow)
steps:
  - step: 1
    file: "{steps_path}/step-01-init.md"
    name: "Init + Validate Story"
    description: "Load, validate, auto-create if needed (a-c)"
    agent: null
    quality_gate: false
    auto_create_story: true

  - step: 2
    file: "{steps_path}/step-02-smart-gap-analysis.md"
    name: "Smart Gap Analysis"
    description: "Gap analysis (skip if just created story) (d)"
    agent: dev
    quality_gate: true
    skip_if_story_just_created: true

  - step: 3
    file: "{steps_path}/step-03-write-tests.md"
    name: "Write Tests (TDD)"
    description: "Write tests before implementation (e)"
    agent: dev
    quality_gate: false
    test_driven: true

  - step: 4
    file: "{steps_path}/step-04-implement.md"
    name: "Implement"
    description: "Run dev-story implementation (f)"
    agent: dev
    quality_gate: true

  - step: 5
    file: "{steps_path}/step-05-post-validation.md"
    name: "Post-Validation"
    description: "Verify work actually implemented (g)"
    agent: dev
    quality_gate: true
    iterative: true

  - step: 6
    file: "{steps_path}/step-06-run-quality-checks.md"
    name: "Quality Checks"
    description: "Tests, type check, linter - fix all (h)"
    agent: dev
    quality_gate: true
    blocking: true
    required_checks:
      - tests_passing
      - type_check_passing
      - lint_passing
      - coverage_threshold

  - step: 7
    file: "{steps_path}/step-07-code-review.md"
    name: "Code Review"
    description: "Multi-agent review with fresh context (i)"
    agent: dev
    quality_gate: true
    requires_fresh_context: true
    multi_agent_review: true
    variable_agent_count: true

  - step: 8
    file: "{steps_path}/step-08-review-analysis.md"
    name: "Review Analysis"
    description: "Analyze findings - reject gold plating (j)"
    agent: dev
    quality_gate: false
    critical_thinking: true

  - step: 9
    file: "{steps_path}/step-09-fix-issues.md"
    name: "Fix Issues"
    description: "Implement MUST FIX and SHOULD FIX items"
    agent: dev
    quality_gate: true

  - step: 10
    file: "{steps_path}/step-10-complete.md"
    name: "Complete + Update Status"
    description: "Mark done, update sprint-status.yaml (k)"
    agent: sm
    quality_gate: true
    mandatory_sprint_status_update: true
    verify_status_update: true

  - step: 11
    file: "{steps_path}/step-11-summary.md"
    name: "Summary"
    description: "Generate comprehensive audit trail"
    agent: null
    quality_gate: false

# Quality gates
quality_gates:
  pre_gap_analysis:
    step: 2
    criteria:
      - "All tasks validated or refined"
      - "No missing context"
      - "Implementation path clear"

  implementation:
    step: 3
    criteria:
      - "All tasks completed"
      - "Tests pass"
      - "Code follows project patterns"

  post_validation:
    step: 4
    criteria:
      - "All completed tasks verified against codebase"
      - "Zero false positives remaining"
      - "Files/functions/tests actually exist"

  code_review:
    step: 5
    criteria:
      - "3-10 specific issues identified"
      - "All issues resolved or documented"
      - "Security review complete"

# Document loading strategies
input_file_patterns:
  story:
    description: "Story file being developed"
    pattern: "{sprint_artifacts}/story-*.md"
    load_strategy: "FULL_LOAD"
    cache: true

  project_context:
    description: "Critical rules and patterns"
    pattern: "**/project-context.md"
    load_strategy: "FULL_LOAD"
    cache: true

standalone: true

@ -0,0 +1,218 @@
name: super-dev-pipeline
description: "Step-file architecture with complexity-based routing, smart batching, and auto-story-creation. Micro stories get lightweight path, standard/complex get full quality gates."
author: "BMad"
version: "1.4.0" # Added auto-create story via /create-story-with-gap-analysis when story missing or incomplete

# Critical variables from config
config_source: "{project-root}/_bmad/bmm/config.yaml"
output_folder: "{config_source}:output_folder"
sprint_artifacts: "{config_source}:sprint_artifacts"
communication_language: "{config_source}:communication_language"
date: system-generated

# Workflow paths
installed_path: "{project-root}/_bmad/bmm/workflows/4-implementation/super-dev-pipeline"
steps_path: "{installed_path}/steps"
templates_path: "{installed_path}/templates"
checklists_path: "{installed_path}/checklists"

# State management
state_file: "{sprint_artifacts}/super-dev-state-{{story_id}}.yaml"
audit_trail: "{sprint_artifacts}/audit-super-dev-{{story_id}}-{{date}}.yaml"

# Auto-create story settings (NEW v1.4.0)
# When a story is missing or lacks proper context, auto-invoke /create-story-with-gap-analysis
auto_create_story:
  enabled: true # Set to false to revert to old HALT behavior
  create_story_workflow: "{project-root}/_bmad/bmm/workflows/4-implementation/create-story-with-gap-analysis"
  triggers:
    - story_not_found # Story file doesn't exist
    - no_tasks # Story exists but has no tasks
    - missing_sections # Story missing required sections (Tasks, Acceptance Criteria)

# Complexity level (passed from batch-super-dev or set manually)
# Controls which pipeline steps to execute
complexity_level: "standard" # micro | standard | complex

# Complexity-based step skipping (NEW v1.2.0)
complexity_routing:
  micro:
    skip_steps: [2, 5] # Skip pre-gap analysis and code review
    description: "Lightweight path for simple stories (≤3 tasks, low risk)"
  standard:
    skip_steps: [] # Full pipeline
    description: "Normal path with all quality gates"
  complex:
    skip_steps: [] # Full pipeline + warnings
    description: "Enhanced path for high-risk stories"
    warn_before_start: true
    suggest_split: true

# Workflow modes
modes:
  interactive:
    description: "Human-in-the-loop with menu navigation between steps"
    checkpoint_on_failure: true
    requires_approval: true
    smart_batching: true # User can approve batching plan
  batch:
    description: "Unattended execution for batch-super-dev"
    checkpoint_on_failure: true
    requires_approval: false
    fail_fast: true
    smart_batching: true # Auto-enabled for efficiency

# Smart batching configuration
smart_batching:
  enabled: true
  detect_patterns: true
  default_to_safe: true # When uncertain, execute individually
  min_batch_size: 3 # Minimum tasks to form a batch
  fallback_on_failure: true # Revert to individual if batch fails

# Batchable pattern definitions
batchable_patterns:
  - pattern: "package_installation"
    keywords: ["Add", "package.json", "npm install", "dependency"]
    risk_level: "low"
    validation: "npm install && npm run build"

  - pattern: "module_registration"
    keywords: ["Import", "Module", "app.module", "register"]
    risk_level: "low"
    validation: "tsc --noEmit"

  - pattern: "code_deletion"
    keywords: ["Delete", "Remove", "rm ", "unlink"]
    risk_level: "low"
    validation: "npm test && npm run build"

  - pattern: "import_update"
    keywords: ["Update import", "Change import", "import from"]
    risk_level: "low"
    validation: "npm run build"

# Non-batchable pattern definitions (always execute individually)
individual_patterns:
  - pattern: "business_logic"
    keywords: ["circuit breaker", "fallback", "caching for", "strategy"]
    risk_level: "medium"

  - pattern: "security"
    keywords: ["auth", "permission", "security", "encrypt"]
    risk_level: "high"

  - pattern: "data_migration"
    keywords: ["migration", "schema", "ALTER TABLE", "database"]
    risk_level: "high"

# Agent role definitions (loaded once, switched as needed)
agents:
  dev:
    name: "Developer"
    persona: "{project-root}/_bmad/bmm/agents/dev.md"
    description: "Pre-gap, implementation, post-validation, code review"
    used_in_steps: [2, 3, 4, 5]
  sm:
    name: "Scrum Master"
    persona: "{project-root}/_bmad/bmm/agents/sm.md"
    description: "Story completion and status"
    used_in_steps: [6]

# Step file definitions
steps:
  - step: 1
    file: "{steps_path}/step-01-init.md"
    name: "Initialize"
    description: "Load story context and detect development mode"
    agent: null
    quality_gate: false

  - step: 2
    file: "{steps_path}/step-02-pre-gap-analysis.md"
    name: "Pre-Gap Analysis"
    description: "Validate tasks against codebase (critical for brownfield)"
    agent: dev
    quality_gate: true

  - step: 3
    file: "{steps_path}/step-03-implement.md"
    name: "Implement"
    description: "Adaptive implementation (TDD for new, refactor for existing)"
    agent: dev
    quality_gate: true

  - step: 4
    file: "{steps_path}/step-04-post-validation.md"
    name: "Post-Validation"
    description: "Verify completed tasks against codebase reality"
    agent: dev
    quality_gate: true
    iterative: true # May re-invoke step 3 if gaps found

  - step: 5
    file: "{steps_path}/step-05-code-review.md"
    name: "Code Review"
    description: "Adversarial code review finding 3-10 issues"
    agent: dev
    quality_gate: true

  - step: 6
    file: "{steps_path}/step-06-complete.md"
    name: "Complete"
    description: "Commit and push changes"
    agent: sm
    quality_gate: false

  - step: 7
    file: "{steps_path}/step-07-summary.md"
    name: "Summary"
    description: "Generate audit trail"
    agent: null
    quality_gate: false

# Quality gates
quality_gates:
  pre_gap_analysis:
    step: 2
    criteria:
      - "All tasks validated or refined"
      - "No missing context"
      - "Implementation path clear"

  implementation:
    step: 3
    criteria:
      - "All tasks completed"
      - "Tests pass"
      - "Code follows project patterns"

  post_validation:
    step: 4
    criteria:
      - "All completed tasks verified against codebase"
      - "Zero false positives remaining"
      - "Files/functions/tests actually exist"

  code_review:
    step: 5
    criteria:
      - "3-10 specific issues identified"
      - "All issues resolved or documented"
      - "Security review complete"

# Document loading strategies
input_file_patterns:
  story:
    description: "Story file being developed"
    pattern: "{sprint_artifacts}/story-*.md"
    load_strategy: "FULL_LOAD"
    cache: true

  project_context:
    description: "Critical rules and patterns"
    pattern: "**/project-context.md"
    load_strategy: "FULL_LOAD"
    cache: true

standalone: true