refactor: convert remaining workflows to unified GSD-style format

Converted 4 workflows to unified workflow.md format:
- gap-analysis: verify story tasks against codebase
- push-all: safe git staging/commit/push with secret detection
- super-dev-story: dev pipeline with validation and review gates
- create-story-with-gap-analysis: regenerate story with verified codebase scan

Also cleaned up orphaned instructions.md files from earlier conversions:
- batch-super-dev
- detect-ghost-features
- migrate-to-github
- multi-agent-review
- recover-sprint-status
- revalidate-epic
- revalidate-story

Net reduction: 10,444 lines (12,872 deleted, 2,428 added)
Jonah Schulte 2026-01-27 00:46:33 -05:00
parent 6e02497dcb
commit ffdf152f43
48 changed files with 2428 additions and 12872 deletions

@@ -1,170 +0,0 @@
# Create Story With Gap Analysis
**Custom Workflow by Jonah Schulte**
**Created:** December 24, 2025
**Purpose:** Generate stories with SYSTEMATIC codebase gap analysis (not inference-based)
---
## Problem This Solves
**Standard `/create-story` workflow:**
- ❌ Reads previous stories and git commits (passive)
- ❌ Infers what probably exists (guessing)
- ❌ Gap analysis quality varies by agent thoroughness
- ❌ Checkboxes may not reflect reality
**This custom workflow:**
- ✅ Actively scans codebase with Glob/Read tools
- ✅ Verifies file existence (not inference)
- ✅ Reads key files to check implementation depth (mocked vs real)
- ✅ Generates TRUTHFUL gap analysis
- ✅ Checkboxes are FACTS verified by file system
---
## Usage
```bash
/create-story-with-gap-analysis
# Or via Skill tool:
Skill: "create-story-with-gap-analysis"
Args: "1.9" (epic.story number)
```
**Workflow will:**
1. Load existing story + epic context
2. **SCAN codebase systematically** (Glob for files, Read to verify implementation)
3. Generate gap analysis with verified ✅/❌/⚠️ status
4. Update story file with truthful checkboxes
5. Save to _bmad-output/implementation-artifacts/
---
## What It Scans
**For each story, the workflow:**
1. **Identifies target directories** (from story title/requirements)
- Example: "admin-user-service" → apps/backend/admin-user-service/
2. **Globs for all files**
- `{target}/src/**/*.ts` - Find all TypeScript files
- `{target}/src/**/*.spec.ts` - Find all tests
3. **Checks specific required files**
- Based on ACs, check if files exist
- Example: `src/auth/controllers/bridgeid-auth.controller.ts` → ❌ MISSING
4. **Reads key files to verify depth**
- Check if mocked: Search for "MOCK" string
- Check if incomplete: Search for "TODO"
- Verify real implementation exists
5. **Checks package.json**
- Verify required dependencies are installed
- Identify missing packages
6. **Counts tests**
- How many test files exist
- Coverage for each component
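The scan steps above reduce to ordinary shell tools (`find` standing in for Glob, `grep` for reading file contents). A minimal sketch; the directory layout and file names below are hypothetical fixtures built only so the sketch is self-contained:

```shell
# Build a tiny throwaway tree to demonstrate against (hypothetical layout)
target=$(mktemp -d)/admin-user-service
mkdir -p "$target/src/bridgeid/services" "$target/src/auth/controllers"
printf '// MOCK user data\n' > "$target/src/bridgeid/services/bridgeid-client.service.ts"
printf 'describe("sync", () => {});\n' > "$target/src/bridgeid/services/bridgeid-sync.service.spec.ts"

# Steps 1-2: glob all TypeScript sources in the target directory
find "$target/src" -name '*.ts' | sort

# Step 3: check a specific file required by an AC
required="$target/src/auth/controllers/bridgeid-auth.controller.ts"
if [ -f "$required" ]; then status="EXISTS"; else status="MISSING"; fi
echo "bridgeid-auth.controller.ts: $status"

# Step 4: flag files that look mocked or incomplete
mocked=$(grep -rlE 'MOCK|TODO' "$target/src" | wc -l)
echo "mocked/incomplete files: $mocked"

# Step 6: count test files
tests=$(find "$target/src" -name '*.spec.ts' | wc -l)
echo "test files: $tests"
```

The fixture makes the verdicts reproducible: the controller is MISSING, one file carries a MOCK marker, and one spec file exists.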
---
## Output Format
**Generates story with:**
1. ✅ Standard BMAD 5 sections (Story, AC, Tasks, Dev Notes, Dev Agent Record)
2. ✅ Enhanced Dev Notes with verified gap analysis subsections:
- Gap Analysis: Current State vs Requirements
- Library/Framework Requirements (from package.json)
- File Structure Requirements (from Glob results)
- Testing Requirements (from test file count)
- Architecture Compliance
- Previous Story Intelligence
3. ✅ Truthful checkboxes based on verified file existence
---
## Difference from Standard /create-story
| Feature | /create-story | /create-story-with-gap-analysis |
|---------|---------------|--------------------------------|
| Reads previous story | ✅ | ✅ |
| Reads git commits | ✅ | ✅ |
| Loads epic context | ✅ | ✅ |
| **Scans codebase with Glob** | ❌ | ✅ SYSTEMATIC |
| **Verifies files exist** | ❌ | ✅ VERIFIED |
| **Reads files to check depth** | ❌ | ✅ MOCKED vs REAL |
| **Checks package.json** | ❌ | ✅ DEPENDENCIES |
| **Counts test coverage** | ❌ | ✅ COVERAGE |
| Gap analysis quality | Variable (agent-dependent) | Systematic (tool-verified) |
| Checkbox accuracy | Inference-based | File-existence-based |
---
## When to Use
**This workflow (planning-time gap analysis):**
- Use when regenerating/auditing stories
- Use when you want verified checkboxes upfront
- Best for stories that will be implemented immediately
- Manual verification at planning time
**Standard /create-story + /dev-story (dev-time gap analysis):**
- Recommended for most workflows
- Stories start as DRAFT, validated when dev begins
- Prevents staleness in batch planning
- Automatic verification at development time
**Use standard /create-story when:**
- Greenfield project (nothing exists yet)
- Backlog stories (won't be implemented for months)
- Epic planning phase (just sketching ideas)
**Tip:** The two approaches are complementary: use this workflow to regenerate stories, then run `/dev-story`, which re-validates at dev time.
---
## Examples
**Regenerating Story 1.9:**
```bash
/create-story-with-gap-analysis
Choice: 1.9
# Workflow will:
# 1. Load existing 1-9-admin-user-service-bridgeid-rbac.md
# 2. Identify target: apps/backend/admin-user-service/
# 3. Glob: apps/backend/admin-user-service/src/**/*.ts (finds 47 files)
# 4. Check: src/auth/controllers/bridgeid-auth.controller.ts → ❌ MISSING
# 5. Read: src/bridgeid/services/bridgeid-client.service.ts → ⚠️ MOCKED
# 6. Read: package.json → axios ❌ NOT INSTALLED
# 7. Generate gap analysis with verified status
# 8. Write story with truthful checkboxes
```
**Result:** Story with verified gap analysis showing:
- ✅ 7 components IMPLEMENTED (verified file existence)
- ❌ 6 components MISSING (verified file not found)
- ⚠️ 1 component PARTIAL (file exists but contains "MOCK")
---
## Installation
This workflow is auto-discovered when BMAD is installed.
**To use:**
```bash
/bmad:bmm:workflows:create-story-with-gap-analysis
```
---
**Last Updated:** December 27, 2025
**Status:** Integrated into BMAD-METHOD

@@ -1,83 +0,0 @@
# Step 1: Initialize and Extract Story Requirements
## Goal
Load epic context and identify what needs to be scanned in the codebase.
## Execution
### 1. Determine Story to Create
**Ask user:**
```
Which story should I regenerate with gap analysis?
Options:
1. Provide story number (e.g., "1.9" or "1-9")
2. Provide story filename (e.g., "story-1.9.md" or legacy "1-9-admin-user-service-bridgeid-rbac.md")
Your choice:
```
**Parse input:**
- Extract epic_num (e.g., "1")
- Extract story_num (e.g., "9")
- Locate story file: `{story_dir}/story-{epic_num}.{story_num}.md` (fallback: `{story_dir}/{epic_num}-{story_num}-*.md`)
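That parsing can be sketched with POSIX parameter expansion; `parse_story` is a hypothetical helper name:

```shell
# Print "epic story" from "1.9" or "1-9" style input
parse_story() {
  echo "${1%%[.-]*} ${1##*[.-]}"
}

parse_story "1.9"   # 1 9
parse_story "1-9"   # 1 9
```

`%%[.-]*` strips everything from the first `.` or `-` onward, and `##*[.-]` strips everything up to the last one, so both separators work unchanged.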
### 2. Load Existing Story Content
```bash
Read: {story_dir}/story-{epic_num}.{story_num}.md
# If not found, fallback:
Read: {story_dir}/{epic_num}-{story_num}-*.md
```
**Extract from existing story:**
- Story title
- User story text (As a... I want... So that...)
- Acceptance criteria (the requirements, not checkboxes)
- Any existing Dev Notes or technical context
**Store for later use.**
### 3. Load Epic Context
```bash
Read: {planning_artifacts}/epics.md
```
**Extract from epic:**
- Epic business objectives
- This story's original requirements
- Technical constraints
- Dependencies on other stories
### 4. Determine Target Directories
**From story title and requirements, identify:**
- Which service/app this story targets
- Which directories to scan
**Examples:**
- "admin-user-service" → `apps/backend/admin-user-service/`
- "Widget Batch 1" → `packages/widgets/`
- "POE Integration" → `apps/frontend/web/`
**Store target directories for Step 2 codebase scan.**
### 5. Ready for Codebase Scan
**Output:**
```
✅ Story Context Loaded
Story: {epic_num}.{story_num} - {title}
Target directories identified:
- {directory_1}
- {directory_2}
Ready to scan codebase for gap analysis.
[C] Continue to Codebase Scan
```
**WAIT for user to select Continue.**

@@ -1,184 +0,0 @@
# Step 2: Systematic Codebase Gap Analysis
## Goal
VERIFY what code actually exists vs what's missing using Glob and Read tools.
## CRITICAL
This step uses ACTUAL file system tools to generate TRUTHFUL gap analysis.
No guessing. No inference. VERIFY with tools.
## Execution
### 1. Scan Target Directories
**For each target directory identified in Step 1:**
```bash
# List all TypeScript files
Glob: {target_dir}/src/**/*.ts
Glob: {target_dir}/src/**/*.tsx
# Store file list
```
**Output:**
```
📁 Codebase Scan Results for {target_dir}
Found {count} TypeScript files:
- {file1}
- {file2}
...
```
### 2. Check for Specific Required Components
**Based on story Acceptance Criteria, check if required files exist:**
**Example for Auth Story:**
```bash
# Check for OAuth endpoints
Glob: {target_dir}/src/auth/controllers/*bridgeid*.ts
Result: ❌ MISSING (0 files found)
# Check for BridgeID client
Glob: {target_dir}/src/bridgeid/**/*.ts
Result: ✅ EXISTS (found: bridgeid-client.service.ts, bridgeid-sync.service.ts)
# Check for permission guards
Glob: {target_dir}/src/auth/guards/permissions*.ts
Result: ❌ MISSING (0 files found)
# Check for decorators
Glob: {target_dir}/src/auth/decorators/*permission*.ts
Result: ❌ MISSING (0 files found)
```
### 3. Verify Implementation Depth
**For files that exist, read them to check if MOCKED or REAL:**
```bash
# Read key implementation file
Read: {target_dir}/src/bridgeid/services/bridgeid-client.service.ts
# Search for indicators:
- Contains "MOCK" or "mock" → ⚠️ MOCKED (needs real implementation)
- Contains "TODO" → ⚠️ INCOMPLETE
- Contains real HTTP client (axios) → ✅ IMPLEMENTED
```
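A hedged sketch of that depth check as a shell function. The marker strings are heuristics, not a definitive test, and the fixture files are hypothetical:

```shell
# Classify an existing file: MOCKED beats INCOMPLETE beats IMPLEMENTED
classify() {
  if grep -qi 'mock' "$1"; then echo "MOCKED"
  elif grep -q 'TODO' "$1"; then echo "INCOMPLETE"
  else echo "IMPLEMENTED"
  fi
}

real=$(mktemp); printf 'import axios from "axios";\n' > "$real"
stub=$(mktemp); printf 'return MOCK_USERS; // stub\n' > "$stub"
classify "$real"   # IMPLEMENTED
classify "$stub"   # MOCKED
```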
### 4. Check Dependencies
```bash
# Read package.json
Read: {target_dir}/package.json
# Verify required dependencies exist:
Required: axios
Found in package.json? → ❌ NO (needs to be added)
Required: @aws-sdk/client-secrets-manager
Found in package.json? → ❌ NO (needs to be added)
```
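The same dependency check as a grep sketch. A JSON-aware tool would be more robust than grep here, and the package.json fixture is hypothetical:

```shell
pkg=$(mktemp)
cat > "$pkg" <<'EOF'
{ "dependencies": { "@nestjs/common": "^10.0.0" } }
EOF

# Print INSTALLED / NOT INSTALLED for a dependency name
has_dep() {
  if grep -q "\"$1\"" "$2"; then echo "INSTALLED"; else echo "NOT INSTALLED"; fi
}

has_dep axios "$pkg"             # NOT INSTALLED
has_dep @nestjs/common "$pkg"    # INSTALLED
```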
### 5. Check Test Coverage
```bash
# Find test files
Glob: {target_dir}/src/**/*.spec.ts
Glob: {target_dir}/test/**/*.test.ts
# Count tests
Found {test_count} test files
# Check for specific test coverage
Glob: {target_dir}/src/**/*bridgeid*.spec.ts
Result: ✅ EXISTS (found 3 test files)
```
### 6. Generate Truthful Gap Analysis
**Create structured gap analysis:**
```markdown
## Gap Analysis: Current State vs Requirements
**✅ IMPLEMENTED (Verified by Codebase Scan):**
1. **User Synchronization Service**
- File: src/bridgeid/services/bridgeid-sync.service.ts ✅ EXISTS
- Implementation: Bulk sync BridgeID → admin_users
- Status: ✅ COMPLETE
- Tests: 6 tests passing ✅
2. **Role Mapping Logic**
- File: src/bridgeid/constants/role-mapping.constants.ts ✅ EXISTS
- Implementation: 7-tier role mapping with priority selection
- Status: ✅ COMPLETE
- Tests: 10 tests passing ✅
**❌ MISSING (Required for AC Completion):**
1. **BridgeID OAuth Endpoints**
- File: src/auth/controllers/bridgeid-auth.controller.ts ❌ NOT FOUND
- Need: POST /api/auth/bridgeid/login endpoint
- Need: GET /api/auth/bridgeid/callback endpoint
- Status: ❌ NOT IMPLEMENTED
2. **Permission Guards**
- File: src/auth/guards/permissions.guard.ts ❌ NOT FOUND
- File: src/auth/decorators/require-permissions.decorator.ts ❌ NOT FOUND
- Status: ❌ NOT IMPLEMENTED
3. **Real OAuth HTTP Client**
- Package: axios ❌ NOT in package.json
- Package: @aws-sdk/client-secrets-manager ❌ NOT in package.json
- Status: ❌ DEPENDENCIES NOT ADDED
**⚠️ PARTIAL (Needs Enhancement):**
1. **BridgeID Client Infrastructure**
- File: src/bridgeid/services/bridgeid-client.service.ts ✅ EXISTS
- Implementation: Mock user data with circuit breaker
- Status: ⚠️ PARTIAL - ready for a real HTTP client
- Tests: 15 tests passing ✅
```
### 7. Update Acceptance Criteria Checkboxes
**Based on verified gap analysis, mark checkboxes:**
```markdown
### AC1: BridgeID OAuth Integration
- [ ] OAuth login endpoint (VERIFIED MISSING - file not found)
- [ ] OAuth callback endpoint (VERIFIED MISSING - file not found)
- [ ] Client configuration (VERIFIED PARTIAL - exists but mocked)
### AC3: RBAC Permission System
- [x] Role mapping defined (VERIFIED COMPLETE - file exists, tests pass)
- [ ] Permission guard (VERIFIED MISSING - file not found)
- [ ] Permission decorator (VERIFIED MISSING - file not found)
```
**Checkboxes are now FACTS, not guesses.**
### 8. Present Gap Analysis
**Output:**
```
✅ Codebase Scan Complete
Scanned: apps/backend/admin-user-service/
Files found: 47 TypeScript files
Tests found: 31 test files
Gap Analysis Generated:
✅ 7 components IMPLEMENTED (verified)
❌ 6 components MISSING (verified)
⚠️ 1 component PARTIAL (needs completion)
Story checkboxes updated based on verified file existence.
[C] Continue to Story Generation
```
**WAIT for user to continue.**

@@ -1,181 +0,0 @@
# Step 3: Generate Story with Verified Gap Analysis
## Goal
Generate complete 7-section story file using verified gap analysis from Step 2.
## Execution
### 1. Load Template
```bash
Read: {installed_path}/template.md
```
### 2. Fill Template Variables
**Basic Story Info:**
- `{{epic_num}}` - from Step 1
- `{{story_num}}` - from Step 1
- `{{story_title}}` - from existing story or epic
- `{{priority}}` - from epic (P0, P1, P2)
- `{{effort}}` - from epic or estimate
**Story Section:**
- `{{role}}` - from existing story
- `{{action}}` - from existing story
- `{{benefit}}` - from existing story
**Business Context:**
- `{{business_value}}` - from epic context
- `{{scale_requirements}}` - from epic/architecture
- `{{compliance_requirements}}` - from epic/architecture
- `{{urgency}}` - from epic priority
**Acceptance Criteria:**
- `{{acceptance_criteria}}` - from epic + existing story
- Update checkboxes based on Step 2 gap analysis:
- [x] = Component verified EXISTS
- [ ] = Component verified MISSING
- [~] = Component verified PARTIAL (optional notation)
**Tasks / Subtasks:**
- `{{tasks_subtasks}}` - from epic + existing story
- Add "✅ DONE", "⚠️ PARTIAL", "❌ TODO" markers based on gap analysis
**Gap Analysis Section:**
- `{{implemented_components}}` - from Step 2 codebase scan (verified ✅)
- `{{missing_components}}` - from Step 2 codebase scan (verified ❌)
- `{{partial_components}}` - from Step 2 codebase scan (verified ⚠️)
**Architecture Compliance:**
- `{{architecture_patterns}}` - from architecture doc + playbooks
- Multi-tenant isolation requirements
- Caching strategies
- Error handling patterns
- Performance requirements
**Library/Framework Requirements:**
- `{{current_dependencies}}` - from Step 2 package.json scan
- `{{required_dependencies}}` - missing deps identified in Step 2
**File Structure:**
- `{{existing_files}}` - from Step 2 Glob results (verified ✅)
- `{{required_files}}` - from gap analysis (verified ❌)
**Testing Requirements:**
- `{{test_count}}` - from Step 2 test file count
- `{{required_tests}}` - based on missing components
- `{{coverage_target}}` - from architecture or default 90%
**Dev Agent Guardrails:**
- `{{guardrails}}` - from playbooks + previous story lessons
- What NOT to do
- Common mistakes to avoid
**Previous Story Intelligence:**
- `{{previous_story_learnings}}` - from Step 1 previous story Dev Agent Record
**Project Structure Notes:**
- `{{structure_alignment}}` - from architecture compliance
**References:**
- `{{references}}` - Links to epic, architecture, playbooks, related stories
**Definition of Done:**
- Standard DoD checklist with story-specific coverage target
### 3. Generate Complete Story
**Write filled template:**
```bash
Write: {story_dir}/story-{{epic_num}}.{{story_num}}.md
[Complete 7-section story with verified gap analysis]
```
### 4. Validate Generated Story
```bash
# Check section count
grep "^## " {story_dir}/story-{{epic_num}}.{{story_num}}.md | wc -l
# Should output: 7
# Check for gap analysis
grep -q "Gap Analysis.*Current State" {story_dir}/story-{{epic_num}}.{{story_num}}.md
# Should find it
# Run custom validation
./scripts/validate-bmad-format.sh {story_dir}/story-{{epic_num}}.{{story_num}}.md
# Update script to expect 7 sections + gap analysis subsection
```
### 5. Update Sprint Status
```bash
Read: {sprint_status}
# Find story entry
# Update status to "ready-for-dev" if was "backlog"
# Preserve all comments and structure
Write: {sprint_status}
```
### 6. Report Completion
**Output:**
```
✅ Story {{epic_num}}.{{story_num}} Regenerated with Gap Analysis
File: {story_dir}/story-{{epic_num}}.{{story_num}}.md
Sections: 7/7 ✅
Gap Analysis: VERIFIED with codebase scan
Summary:
✅ {{implemented_count}} components IMPLEMENTED (verified by file scan)
❌ {{missing_count}} components MISSING (verified file not found)
⚠️ {{partial_count}} components PARTIAL (file exists but mocked/incomplete)
Checkboxes in ACs and Tasks reflect VERIFIED status (not guesses).
Next Steps:
1. Review story file for accuracy
2. Use /dev-story to implement missing components
3. Story provides complete context for flawless implementation
Story is ready for development. 🚀
```
### 7. Cleanup
**Ask user:**
```
Story regeneration complete!
Would you like to:
[N] Regenerate next story ({{next_story_num}})
[Q] Quit workflow
[R] Review generated story first
Your choice:
```
**If N selected:** Loop back to Step 1 with next story number
**If Q selected:** End workflow
**If R selected:** Display story file, then show menu again
---
## Success Criteria
**Story generation succeeds when:**
1. ✅ 7 top-level ## sections present
2. ✅ Gap Analysis subsection exists with ✅/❌/⚠️ verified status
3. ✅ Checkboxes match codebase reality (spot-checked)
4. ✅ Dev Notes has all mandatory subsections
5. ✅ Definition of Done checklist included
6. ✅ File saved to correct location
7. ✅ Sprint status updated
---
**WORKFLOW COMPLETE - Ready to execute.**

@@ -1,179 +0,0 @@
# Story {{epic_num}}.{{story_num}}: {{story_title}}
**Status:** ready-for-dev
**Epic:** {{epic_num}}
**Priority:** {{priority}}
**Estimated Effort:** {{effort}}
---
## Story
As a **{{role}}**,
I want to **{{action}}**,
So that **{{benefit}}**.
---
## Business Context
### Why This Matters
{{business_value}}
### Production Reality
{{scale_requirements}}
{{compliance_requirements}}
{{urgency}}
---
## Acceptance Criteria
{{acceptance_criteria}}
---
## Tasks / Subtasks
{{tasks_subtasks}}
---
## Dev Notes
### Gap Analysis: Current State vs Requirements
**✅ IMPLEMENTED (Verified by Codebase Scan):**
{{implemented_components}}
**❌ MISSING (Required for AC Completion):**
{{missing_components}}
**⚠️ PARTIAL (Needs Enhancement):**
{{partial_components}}
### Architecture Compliance
{{architecture_patterns}}
### Library/Framework Requirements
**Current Dependencies:**
```json
{{current_dependencies}}
```
**Required Additions:**
```json
{{required_dependencies}}
```
### File Structure Requirements
**Completed Files:**
```
{{existing_files}}
```
**Required New Files:**
```
{{required_files}}
```
### Testing Requirements
**Current Test Coverage:** {{test_count}} tests passing
**Required Additional Tests:**
{{required_tests}}
**Target:** {{coverage_target}}
### Dev Agent Guardrails
{{guardrails}}
### Previous Story Intelligence
{{previous_story_learnings}}
### Project Structure Notes
{{structure_alignment}}
### References
{{references}}
---
## Definition of Done
### Code Quality (BLOCKING)
- [ ] Type check passes: `pnpm type-check` (zero errors)
- [ ] Zero `any` types in new code
- [ ] Lint passes: `pnpm lint` (zero errors in new code)
- [ ] Build succeeds: `pnpm build`
### Testing (BLOCKING)
- [ ] Unit tests: {{coverage_target}} coverage
- [ ] Integration tests: Key workflows validated
- [ ] All tests pass: New + existing (zero regressions)
### Security (BLOCKING)
- [ ] Dependency scan: `pnpm audit` (zero high/critical)
- [ ] No hardcoded secrets
- [ ] Input validation on all endpoints
- [ ] Auth checks on protected endpoints
- [ ] Audit logging on mutations
### Architecture Compliance (BLOCKING)
- [ ] Multi-tenant isolation: dealerId in all queries
- [ ] Cache namespacing: Cache keys include siteId
- [ ] Performance: External APIs cached, no N+1 queries
- [ ] Error handling: No silent failures
- [ ] Follows patterns from playbooks
### Deployment Validation (BLOCKING)
- [ ] Service starts: `pnpm dev` runs successfully
- [ ] Health check: `/health` returns 200
- [ ] Smoke test: Primary functionality verified
### Documentation (BLOCKING)
- [ ] API docs: Swagger decorators on endpoints
- [ ] Inline comments: Complex logic explained
- [ ] Story file: Dev Agent Record complete
---
## Dev Agent Record
### Agent Model Used
(To be filled by dev agent)
### Implementation Summary
(To be filled by dev agent)
### File List
(To be filled by dev agent)
### Test Results
(To be filled by dev agent)
### Completion Notes
(To be filled by dev agent)
---
**Generated by:** /create-story-with-gap-analysis
**Date:** {{date}}

@@ -0,0 +1,286 @@
# Create Story with Gap Analysis v3.0 - Verified Story Generation
<purpose>
Regenerate story with VERIFIED codebase gap analysis.
Uses Glob/Read tools to determine what actually exists vs what's missing.
Checkboxes reflect reality, not guesses.
</purpose>
<philosophy>
**Truth from Codebase, Not Assumptions**
1. Scan codebase for actual implementations
2. Verify files exist, check for stubs/TODOs
3. Check test coverage
4. Generate story with checkboxes matching reality
5. No guessing—every checkbox has evidence
</philosophy>
<config>
name: create-story-with-gap-analysis
version: 3.0.0
verification_status:
verified: "[x]" # File exists, real implementation, tests exist
partial: "[~]" # File exists but stub/TODO or no tests
missing: "[ ]" # File does not exist
defaults:
update_sprint_status: true
create_report: false
</config>
<execution_context>
@patterns/verification.md
@patterns/hospital-grade.md
</execution_context>
<process>
<step name="initialize" priority="first">
**Identify story and load context**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔍 STORY REGENERATION WITH GAP ANALYSIS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
**Ask user for story:**
```
Which story should I regenerate with gap analysis?
Provide:
- Story number (e.g., "1.9" or "1-9")
- OR story filename
Your choice:
```
**Parse input:**
- Extract epic_num, story_num
- Locate story file
**Load existing story:**
```bash
Read: {{story_dir}}/story-{{epic_num}}.{{story_num}}.md
```
Extract:
- Story title
- User story (As a... I want... So that...)
- Acceptance criteria
- Tasks
- Dev Notes
**Load epic context:**
```bash
Read: {{planning_artifacts}}/epics.md
```
Extract:
- Epic business objectives
- Technical constraints
- Dependencies
**Determine target directories:**
From story title/requirements, identify which directories to scan.
```
✅ Story Context Loaded
Story: {{epic_num}}.{{story_num}} - {{title}}
Target directories:
{{#each directories}}
- {{this}}
{{/each}}
[C] Continue to Codebase Scan
```
</step>
<step name="codebase_scan">
**VERIFY what code actually exists**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔍 CODEBASE SCAN
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
**For each target directory:**
1. **List all source files:**
```bash
Glob: {{target_dir}}/src/**/*.ts
Glob: {{target_dir}}/src/**/*.tsx
```
2. **Check for specific required components:**
Based on story ACs, check if required files exist:
```bash
Glob: {{target_dir}}/src/auth/controllers/*oauth*.ts
# Result: ✅ EXISTS or ❌ MISSING
```
3. **Verify implementation depth:**
For files that exist, check quality:
```bash
Read: {{file}}
# Check for stubs
Grep: "MOCK|TODO|FIXME|Not implemented" {{file}}
# If found: ⚠️ STUB
```
4. **Check dependencies:**
```bash
Read: {{target_dir}}/package.json
# Required: axios - Found? ✅/❌
# Required: @aws-sdk/client-secrets-manager - Found? ✅/❌
```
5. **Check test coverage:**
```bash
Glob: {{target_dir}}/src/**/*.spec.ts
Glob: {{target_dir}}/test/**/*.test.ts
```
</step>
<step name="generate_gap_analysis">
**Create verified gap analysis**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📊 GAP ANALYSIS RESULTS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ IMPLEMENTED (Verified):
{{#each implemented}}
{{@index}}. **{{name}}**
- File: {{file}} ✅ EXISTS
- Status: {{status}}
- Tests: {{test_count}} tests
{{/each}}
❌ MISSING (Verified):
{{#each missing}}
{{@index}}. **{{name}}**
- Expected: {{expected_file}} ❌ NOT FOUND
- Needed for: {{requirement}}
{{/each}}
⚠️ PARTIAL (Stub/Incomplete):
{{#each partial}}
{{@index}}. **{{name}}**
- File: {{file}} ✅ EXISTS
- Issue: {{issue}}
{{/each}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
</step>
<step name="generate_story">
**Generate story with verified checkboxes**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📝 GENERATING STORY
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
Use story template with:
- `[x]` for VERIFIED items (evidence: file exists, not stub, has tests)
- `[~]` for PARTIAL items (evidence: file exists but stub/no tests)
- `[ ]` for MISSING items (evidence: file not found)
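That mapping can be sketched as a single function. The stub markers are heuristic assumptions, and the fixture files are hypothetical:

```shell
# Map a scanned file to its checkbox marker:
#   missing -> "[ ]", stub/incomplete -> "[~]", otherwise verified -> "[x]"
checkbox() {
  if [ ! -f "$1" ]; then echo "[ ]"
  elif grep -qE 'MOCK|TODO|FIXME' "$1"; then echo "[~]"
  else echo "[x]"
  fi
}

done_file=$(mktemp); printf 'export const ROLE_MAP = {};\n' > "$done_file"
stub_file=$(mktemp); printf '// TODO: real HTTP call\n' > "$stub_file"
checkbox "$done_file"        # [x]
checkbox "$stub_file"        # [~]
checkbox /no/such/file.ts    # [ ]
```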
**Write story file:**
```bash
Write: {{story_dir}}/story-{{epic_num}}.{{story_num}}.md
```
**Validate generated story:**
```bash
# Check 7 sections exist
grep "^## " {{story_file}} | wc -l
# Should be 7
# Check gap analysis section exists
grep "Gap Analysis" {{story_file}}
```
</step>
<step name="update_sprint_status" if="update_sprint_status">
**Update sprint-status.yaml**
```bash
Read: {{sprint_status}}
# Update story status to "ready-for-dev" if was "backlog"
# Preserve comments and structure
Write: {{sprint_status}}
```
</step>
<step name="final_summary">
**Report completion**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ STORY REGENERATED WITH GAP ANALYSIS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Story: {{epic_num}}.{{story_num}} - {{title}}
File: {{story_file}}
Sections: 7/7 ✅
Gap Analysis Summary:
- ✅ {{implemented_count}} components VERIFIED complete
- ❌ {{missing_count}} components VERIFIED missing
- ⚠️ {{partial_count}} components PARTIAL (stub/no tests)
Checkboxes reflect VERIFIED codebase state.
Next Steps:
1. Review story for accuracy
2. Use /dev-story to implement missing components
3. Story provides complete context for implementation
[N] Regenerate next story
[Q] Quit
[R] Review generated story
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
**If [N]:** Loop back to initialize with next story.
**If [R]:** Display story content, then show menu.
</step>
</process>
<examples>
```bash
# Regenerate specific story
/create-story-with-gap-analysis
> Which story? 1.9
# With explicit story file
/create-story-with-gap-analysis story_file=docs/sprint-artifacts/story-1.9.md
```
</examples>
<failure_handling>
**Story not found:** HALT with clear error.
**Target directory not found:** Warn, scan available directories.
**Glob/Read fails:** Log warning, count as MISSING.
**Write fails:** Report error, display generated content.
</failure_handling>
<success_criteria>
- [ ] Codebase scanned for all story requirements
- [ ] Gap analysis generated with evidence
- [ ] Story written with verified checkboxes
- [ ] 7 sections present
- [ ] Sprint status updated (if enabled)
</success_criteria>

@@ -14,10 +14,9 @@ story_dir: "{implementation_artifacts}"
# Workflow components
installed_path: "{project-root}/_bmad/bmm/workflows/4-implementation/create-story-with-gap-analysis"
template: "{installed_path}/template.md"
-instructions: "{installed_path}/step-01-initialize.md"
+instructions: "{installed_path}/workflow.md"
-# Variables and inputs
+# Variables
variables:
sprint_status: "{implementation_artifacts}/sprint-status.yaml"
epics_file: "{planning_artifacts}/epics.md"
@@ -28,12 +27,6 @@ project_context: "**/project-context.md"
default_output_file: "{story_dir}/{{story_key}}.md"
-# Workflow steps (processed in order)
-steps:
-  - step-01-initialize.md
-  - step-02-codebase-scan.md
-  - step-03-generate-story.md
standalone: true
web_bundle: false

@@ -1,625 +0,0 @@
# Detect Ghost Features - Reverse Gap Analysis (Who You Gonna Call?)
<critical>The workflow execution engine is governed by: {project-root}/_bmad/core/tasks/workflow.xml</critical>
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
<workflow>
<step n="1" goal="Load all stories in scope">
<action>Determine scan scope based on parameters:</action>
<check if="scan_scope == 'epic' AND epic_number provided">
<action>Read {sprint_status}</action>
<action>Filter stories starting with "{{epic_number}}-"</action>
<action>Store as: stories_in_scope</action>
<output>🔍 Scanning Epic {{epic_number}} stories for documented features...</output>
</check>
<check if="scan_scope == 'sprint'">
<action>Read {sprint_status}</action>
<action>Get ALL story keys (exclude epics and retrospectives)</action>
<action>Store as: stories_in_scope</action>
<output>🔍 Scanning entire sprint for documented features...</output>
</check>
<check if="scan_scope == 'codebase'">
<action>Set stories_in_scope = ALL stories found in {sprint_artifacts}</action>
<output>🔍 Scanning entire codebase for documented features...</output>
</check>
<action>For each story in stories_in_scope:</action>
<action> Read story file</action>
<action> Extract documented artifacts:</action>
<action> - File List (all paths mentioned)</action>
<action> - Tasks (all file/component/service names mentioned)</action>
<action> - ACs (all features/functionality mentioned)</action>
<action> Store in: documented_artifacts[story_key] = {files, components, services, apis, features}</action>
<output>
✅ Loaded {{stories_in_scope.length}} stories
📋 Documented artifacts extracted from {{total_sections}} sections
</output>
</step>
<step n="2" goal="Scan codebase for actual implementations">
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
👻 SCANNING FOR GHOST FEATURES
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Looking for: Components, APIs, Services, DB Tables, Models
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
<substep n="2a" title="Scan for React/Vue/Angular components">
<check if="scan_for.components == true">
<action>Use Glob to find component files:</action>
<action> - **/*.component.{tsx,jsx,ts,js,vue} (Angular/Vue pattern)</action>
<action> - **/components/**/*.{tsx,jsx} (React pattern)</action>
<action> - **/src/**/*{Component,View,Screen,Page}.{tsx,jsx} (Named pattern)</action>
<action>For each found component file:</action>
<action> Extract component name from filename or export</action>
<action> Check file size (ignore <50 lines as trivial)</action>
<action> Read file to determine if it's a significant feature</action>
<action>Store as: codebase_components = [{name, path, size, purpose}]</action>
<output>📦 Found {{codebase_components.length}} components</output>
</check>
</substep>
<substep n="2b" title="Scan for API endpoints">
<check if="scan_for.api_endpoints == true">
<action>Use Glob to find API files:</action>
<action> - **/api/**/*.{ts,js} (Next.js/Express pattern)</action>
<action> - **/*.controller.{ts,js} (NestJS pattern)</action>
<action> - **/routes/**/*.{ts,js} (Generic routes)</action>
<action>Use Grep to find endpoint definitions:</action>
<action> - @Get|@Post|@Put|@Delete decorators (NestJS)</action>
<action> - export async function GET|POST|PUT|DELETE (Next.js App Router)</action>
<action> - router.get|post|put|delete (Express)</action>
<action> - app.route (Flask/FastAPI if Python)</action>
<action>For each endpoint found:</action>
<action> Extract: HTTP method, path, handler name</action>
<action> Read file to understand functionality</action>
<action>Store as: codebase_apis = [{method, path, handler, file}]</action>
<output>🌐 Found {{codebase_apis.length}} API endpoints</output>
</check>
</substep>
<substep n="2c" title="Scan for database tables">
<check if="scan_for.database_tables == true">
<action>Use Glob to find schema files:</action>
<action> - **/prisma/schema.prisma (Prisma)</action>
<action> - **/*.entity.{ts,js} (TypeORM)</action>
<action> - **/models/**/*.{ts,js} (Mongoose/Sequelize)</action>
<action> - **/*-table.ts (Custom)</action>
<action>Use Grep to find table definitions:</action>
<action> - model (Prisma)</action>
<action> - @Entity (TypeORM)</action>
<action> - createTable (Migrations)</action>
<action>For each table found:</action>
<action> Extract: table name, columns, relationships</action>
<action>Store as: codebase_tables = [{name, file, columns}]</action>
<output>🗄️ Found {{codebase_tables.length}} database tables</output>
</check>
</substep>
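For substep 2c, a Prisma-style scan reduces to pulling names out of `model` blocks. A minimal sketch (the sample schema is invented for illustration):

```shell
# Minimal fake Prisma schema (illustrative only).
cat > /tmp/schema-demo.prisma <<'EOF'
model User {
  id    Int    @id
  email String
}
model Invoice {
  id Int @id
}
EOF

# Grep equivalent: extract table names from `model` definitions.
codebase_tables=$(grep -oE '^model [A-Za-z]+' /tmp/schema-demo.prisma | awk '{print $2}')
echo "$codebase_tables"
```

TypeORM entities and migration files need their own patterns (`@Entity`, `createTable`), but the extraction step is the same shape.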
<substep n="2d" title="Scan for services/modules">
<check if="scan_for.services == true">
<action>Use Glob to find service files:</action>
<action> - **/*.service.{ts,js}</action>
<action> - **/services/**/*.{ts,js}</action>
<action> - **/*Service.{ts,js}</action>
<action>For each service found:</action>
<action> Extract: service name, key methods, dependencies</action>
<action> Ignore trivial services (<100 lines)</action>
<action>Store as: codebase_services = [{name, file, methods}]</action>
<output>⚙️ Found {{codebase_services.length}} services</output>
</check>
</substep>
</step>
<step n="3" goal="Cross-reference codebase artifacts with stories">
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔍 CROSS-REFERENCING CODEBASE ↔ STORIES
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
<action>Initialize: orphaned_features = []</action>
<substep n="3a" title="Check components">
<iterate>For each component in codebase_components:</iterate>
<action>Search all stories for mentions of:</action>
<action> - Component name in File Lists</action>
<action> - Component name in Task descriptions</action>
<action> - Component file path in File Lists</action>
<action> - Feature described by component in ACs</action>
<check if="NO stories mention this component">
<action>Add to orphaned_features:</action>
<action>
type: "component"
name: {{component.name}}
path: {{component.path}}
size: {{component.size}} lines
purpose: {{inferred_purpose_from_code}}
severity: "HIGH" # Significant orphan
</action>
<output> 👻 ORPHAN: {{component.name}} ({{component.path}})</output>
</check>
<check if="stories mention this component">
<output> ✅ Documented: {{component.name}} → {{story_keys}}</output>
</check>
</substep>
<substep n="3b" title="Check API endpoints">
<iterate>For each API in codebase_apis:</iterate>
<action>Search all stories for mentions of:</action>
<action> - Endpoint path (e.g., "/api/users")</action>
<action> - HTTP method + resource (e.g., "POST users")</action>
<action> - Handler file in File Lists</action>
<action> - API functionality in ACs (e.g., "Users can create account")</action>
<check if="NO stories mention this API">
<action>Add to orphaned_features:</action>
<action>
type: "api"
method: {{api.method}}
path: {{api.path}}
handler: {{api.handler}}
file: {{api.file}}
severity: "CRITICAL" # APIs are critical functionality
</action>
<output> 👻 ORPHAN: {{api.method}} {{api.path}} ({{api.file}})</output>
</check>
</substep>
<substep n="3c" title="Check database tables">
<iterate>For each table in codebase_tables:</iterate>
<action>Search all stories for mentions of:</action>
<action> - Table name</action>
<action> - Migration file in File Lists</action>
<action> - Data model in Tasks</action>
<check if="NO stories mention this table">
<action>Add to orphaned_features:</action>
<action>
type: "database"
name: {{table.name}}
file: {{table.file}}
columns: {{table.columns.length}}
severity: "HIGH" # Database changes are significant
</action>
<output> 👻 ORPHAN: Table {{table.name}} ({{table.file}})</output>
</check>
</substep>
<substep n="3d" title="Check services">
<iterate>For each service in codebase_services:</iterate>
<action>Search all stories for mentions of:</action>
<action> - Service name or class name</action>
<action> - Service file in File Lists</action>
<action> - Service functionality in Tasks/ACs</action>
<check if="NO stories mention this service">
<action>Add to orphaned_features:</action>
<action>
type: "service"
name: {{service.name}}
file: {{service.file}}
methods: {{service.methods.length}}
severity: "MEDIUM" # Services are business logic
</action>
<output> 👻 ORPHAN: {{service.name}} ({{service.file}})</output>
</check>
</substep>
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Cross-Reference Complete
👻 Orphaned Features: {{orphaned_features.length}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
</step>
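The documented-vs-orphan decision in the step above boils down to a recursive search per artifact: if no story file mentions the name, it is an orphan. Sketched in shell (the story content and component names are hypothetical):

```shell
mkdir -p /tmp/xref-demo/stories
echo "Tasks: build the UserCard component" > /tmp/xref-demo/stories/1-1-ui.md

orphaned_features=""
for name in UserCard PaymentWidget; do
  if grep -rq "$name" /tmp/xref-demo/stories; then
    echo "✅ Documented: $name"
  else
    echo "👻 ORPHAN: $name"
    orphaned_features="$orphaned_features $name"
  fi
done
echo "Orphans:${orphaned_features}"
```

The workflow additionally searches for file paths, HTTP paths, and AC descriptions, so a name-only match like this sketch would under-count documentation.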
<step n="4" goal="Analyze and categorize orphans">
<action>Group orphans by type and severity:</action>
<action>
- critical_orphans (APIs, auth, payment)
- high_orphans (Components, DB tables, services)
- medium_orphans (Utilities, helpers)
- low_orphans (Config files, constants)
</action>
<action>Estimate complexity for each orphan:</action>
<action> Based on file size, dependencies, test coverage</action>
<action>Suggest epic assignment based on functionality:</action>
<action> - Auth components → Epic focusing on authentication</action>
<action> - UI components → Epic focusing on frontend</action>
<action> - API endpoints → Epic for that resource type</action>
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
👻 GHOST FEATURES DETECTED
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
**Total Orphans:** {{orphaned_features.length}}
**By Severity:**
- 🔴 CRITICAL: {{critical_orphans.length}} (APIs, security-critical)
- 🟠 HIGH: {{high_orphans.length}} (Components, DB, services)
- 🟡 MEDIUM: {{medium_orphans.length}} (Utilities, helpers)
- 🟢 LOW: {{low_orphans.length}} (Config, constants)
**By Type:**
- Components: {{component_orphans.length}}
- API Endpoints: {{api_orphans.length}}
- Database Tables: {{db_orphans.length}}
- Services: {{service_orphans.length}}
- Other: {{other_orphans.length}}
---
**CRITICAL Orphans (Immediate Action Required):**
{{#each critical_orphans}}
{{@index + 1}}. **{{type | uppercase}}**: {{name}}
File: {{file}}
Purpose: {{inferred_purpose}}
Risk: {{why_critical}}
Suggested Epic: {{suggested_epic}}
{{/each}}
---
**HIGH Priority Orphans:**
{{#each high_orphans}}
{{@index + 1}}. **{{type | uppercase}}**: {{name}}
File: {{file}}
Size: {{size}} lines / {{complexity}} complexity
Suggested Epic: {{suggested_epic}}
{{/each}}
---
**Detection Confidence:**
- Artifacts scanned: {{total_artifacts_scanned}}
- Stories cross-referenced: {{stories_in_scope.length}}
- Documentation coverage: {{documented_pct}}%
- Orphan rate: {{orphan_rate}}%
{{#if orphan_rate > 20}}
⚠️ **HIGH ORPHAN RATE** - Over 20% of codebase is undocumented!
Recommend: Comprehensive backfill story creation session
{{/if}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
</step>
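The coverage numbers in the summary follow from simple counts over the scan results; for example, with invented figures:

```shell
total_artifacts_scanned=40
orphan_count=9
documented=$((total_artifacts_scanned - orphan_count))
documented_pct=$((100 * documented / total_artifacts_scanned))
orphan_rate=$((100 * orphan_count / total_artifacts_scanned))
echo "coverage=${documented_pct}% orphan_rate=${orphan_rate}%"
[ "$orphan_rate" -gt 20 ] && echo "⚠️ HIGH ORPHAN RATE"
```

With 9 orphans out of 40 artifacts the orphan rate crosses the 20% threshold, which is what triggers the backfill recommendation above.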
<step n="5" goal="Propose backfill stories">
<check if="create_backfill_stories == false">
<output>
Backfill story creation disabled. To create stories for orphans, run:
/detect-ghost-features create_backfill_stories=true
</output>
<action>Jump to Step 7 (Generate Report)</action>
</check>
<check if="orphaned_features.length == 0">
<output>✅ No orphans found - all code is documented in stories!</output>
<action>Jump to Step 7</action>
</check>
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📝 PROPOSING BACKFILL STORIES ({{orphaned_features.length}})
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
<iterate>For each orphaned feature (prioritized by severity):</iterate>
<substep n="5a" title="Generate backfill story draft">
<action>Analyze orphan to understand functionality:</action>
<action> - Read implementation code</action>
<action> - Identify dependencies and related files</action>
<action> - Determine what it does (infer from code)</action>
<action> - Find tests (if any) to understand use cases</action>
<action>Generate story draft:</action>
<action>
Story Title: "Document existing {{name}} {{type}}"
Story Description:
This is a BACKFILL STORY documenting existing functionality found in the codebase
that was not tracked in any story (likely vibe-coded or manually added).
Business Context:
{{inferred_business_purpose_from_code}}
Current State:
**Implementation EXISTS:** {{file}}
- {{description_of_what_it_does}}
- {{key_features_or_methods}}
{{#if has_tests}}✅ Tests exist: {{test_files}}{{else}}❌ No tests found{{/if}}
Acceptance Criteria:
{{#each inferred_acs_from_code}}
- [ ] {{this}}
{{/each}}
Tasks:
- [x] {{name}} implementation (ALREADY EXISTS - {{file}})
{{#if missing_tests}}- [ ] Add tests for {{name}}{{/if}}
{{#if missing_docs}}- [ ] Add documentation for {{name}}{{/if}}
- [ ] Verify functionality works as expected
- [ ] Add to relevant epic or create new epic for backfills
Definition of Done:
- [x] Implementation exists and works
{{#if has_tests}}- [x] Tests exist{{else}}- [ ] Tests added{{/if}}
- [ ] Documented in story (this story)
- [ ] Assigned to appropriate epic
Story Type: BACKFILL (documenting existing code)
</action>
<output>
📄 Generated backfill story draft for: {{name}}
{{story_draft_preview}}
---
</output>
</substep>
<substep n="5b" title="Ask user if they want to create this backfill story">
<check if="auto_create == true">
<action>Create backfill story automatically</action>
<output>✅ Auto-created: {{story_filename}}</output>
</check>
<check if="auto_create == false">
<ask>
Create backfill story for {{name}}?
**Type:** {{type}}
**File:** {{file}}
**Suggested Epic:** {{suggested_epic}}
**Complexity:** {{complexity_estimate}}
[Y] Yes - Create this backfill story
[A] Auto - Create this and all remaining backfill stories
[E] Edit - Let me adjust the story draft first
[S] Skip - Don't create story for this orphan
[H] Halt - Stop backfill story creation
Your choice:
</ask>
<check if="choice == 'Y'">
<action>Create backfill story file: {sprint_artifacts}/backfill-{{type}}-{{name}}.md</action>
<action>Add to backfill_stories_created list</action>
<output>✅ Created: {{story_filename}}</output>
</check>
<check if="choice == 'A'">
<action>Set auto_create = true</action>
<action>Create this story and auto-create remaining</action>
</check>
<check if="choice == 'E'">
<ask>Provide your adjusted story content or instructions for modifications:</ask>
<action>Apply user's edits to story draft</action>
<action>Create modified backfill story</action>
</check>
<check if="choice == 'S'">
<action>Add to skipped_backfills list</action>
<output>⏭️ Skipped</output>
</check>
<check if="choice == 'H'">
<action>Exit backfill story creation loop</action>
<action>Jump to Step 6</action>
</check>
</check>
</substep>
<check if="add_to_sprint_status AND backfill_stories_created.length > 0">
<action>Load {sprint_status} file</action>
<iterate>For each created backfill story:</iterate>
<action> Add entry: {{backfill_story_key}}: backlog # BACKFILL - documents existing {{name}}</action>
<action>Save sprint-status.yaml</action>
<output>✅ Added {{backfill_stories_created.length}} backfill stories to sprint-status.yaml</output>
</check>
</step>
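The sprint-status update at the end of this step is a plain YAML append; sketched below (the file layout and story key are hypothetical):

```shell
cat > /tmp/sprint-status-demo.yaml <<'EOF'
development_status:
  1-1-ui: done
EOF

story_key="backfill-component-usercard"
orphan_name="UserCard"
printf '  %s: backlog  # BACKFILL - documents existing %s\n' \
  "$story_key" "$orphan_name" >> /tmp/sprint-status-demo.yaml
cat /tmp/sprint-status-demo.yaml
```

The inline comment keeps backfill entries distinguishable from regular feature stories when scanning the status file later.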
<step n="6" goal="Suggest epic organization for orphans">
<check if="backfill_stories_created.length > 0">
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📊 BACKFILL STORY ORGANIZATION
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
<action>Group backfill stories by suggested epic:</action>
<iterate>For each suggested_epic:</iterate>
<output>
**{{suggested_epic}}:**
{{#each backfill_stories_for_epic}}
- {{story_key}}: {{name}} ({{type}})
{{/each}}
</output>
<output>
---
**Recommendations:**
1. **Option A: Create "Epic-Backfill" for all orphans**
- Single epic containing all backfill stories
- Easy to track undocumented code
- Clear separation from feature work
2. **Option B: Distribute to existing epics**
- Add each backfill story to its logical epic
- Better thematic grouping
- May inflate epic story counts
3. **Option C: Leave in backlog**
- Don't assign to epics yet
- Review and assign during next planning
**Your choice:**
[A] Create Epic-Backfill (recommended)
[B] Distribute to existing epics
[C] Leave in backlog for manual assignment
[S] Skip epic assignment
</output>
<ask>How should backfill stories be organized?</ask>
<check if="choice == 'A'">
<action>Create epic-backfill.md in epics directory</action>
<action>Update sprint-status.yaml with epic-backfill entry</action>
<action>Assign all backfill stories to epic-backfill</action>
</check>
<check if="choice == 'B'">
<iterate>For each backfill story:</iterate>
<action> Assign to suggested_epic in sprint-status.yaml</action>
<action> Update story_key to match epic (e.g., 2-11-backfill-userprofile)</action>
</check>
<check if="choice == 'C' OR choice == 'S'">
<action>Leave stories in backlog</action>
</check>
</check>
</step>
<step n="7" goal="Generate comprehensive report">
<check if="create_report == true">
<action>Write report to: {sprint_artifacts}/ghost-features-report-{{timestamp}}.md</action>
<action>Report structure:</action>
<action>
# Ghost Features Report (Reverse Gap Analysis)
**Generated:** {{timestamp}}
**Scope:** {{scan_scope}} {{#if epic_number}}(Epic {{epic_number}}){{/if}}
## Executive Summary
**Codebase Artifacts Scanned:** {{total_artifacts_scanned}}
**Stories Cross-Referenced:** {{stories_in_scope.length}}
**Orphaned Features Found:** {{orphaned_features.length}}
**Documentation Coverage:** {{documented_pct}}%
**Backfill Stories Created:** {{backfill_stories_created.length}}
## Orphaned Features Detail
### CRITICAL Orphans ({{critical_orphans.length}})
[Full list with files, purposes, risks]
### HIGH Priority Orphans ({{high_orphans.length}})
[Full list]
### MEDIUM Priority Orphans ({{medium_orphans.length}})
[Full list]
## Backfill Stories Created
{{#each backfill_stories_created}}
- {{story_key}}: {{story_file}}
{{/each}}
## Recommendations
[Epic assignment suggestions, next steps]
## Appendix: Scan Methodology
[How detection worked, patterns used, confidence levels]
</action>
<output>📄 Full report: {{report_path}}</output>
</check>
</step>
<step n="8" goal="Final summary and next steps">
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ GHOST FEATURE DETECTION COMPLETE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
**Scan Scope:** {{scan_scope}} {{#if epic_number}}(Epic {{epic_number}}){{/if}}
**Results:**
- 👻 Orphaned Features: {{orphaned_features.length}}
- 📝 Backfill Stories Created: {{backfill_stories_created.length}}
- ⏭️ Skipped: {{skipped_backfills.length}}
- 📊 Documentation Coverage: {{documented_pct}}%
{{#if orphaned_features.length == 0}}
**EXCELLENT!** All code is documented in stories.
Your codebase and story backlog are in perfect sync.
{{/if}}
{{#if orphaned_features.length > 0 AND backfill_stories_created.length == 0}}
**Action Required:**
Run with create_backfill_stories=true to generate stories for orphans
{{/if}}
{{#if backfill_stories_created.length > 0}}
**Next Steps:**
1. **Review backfill stories** - Check generated stories for accuracy
2. **Assign to epics** - Organize backfills (or create Epic-Backfill)
3. **Update sprint-status.yaml** - Already updated with {{backfill_stories_created.length}} new entries
4. **Prioritize** - Decide when to implement tests/docs for orphans
5. **Run revalidation** - Verify orphans work as expected
**Quick Commands:**
```bash
# Revalidate a backfill story to verify functionality
/revalidate-story story_file={{backfill_stories_created[0].file}}
# Process backfill stories (add tests/docs)
/batch-super-dev filter_by_epic=backfill
```
{{/if}}
{{#if create_report}}
**Detailed Report:** {{report_path}}
{{/if}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
💡 **Pro Tip:** Run this periodically (e.g., end of each sprint) to catch
vibe-coded features before they become maintenance nightmares.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
</step>
</workflow>


@ -1,367 +0,0 @@
<workflow>
<critical>The workflow execution engine is governed by: {project-root}/_bmad/core/tasks/workflow.xml</critical>
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
<critical>Communicate all responses in {communication_language}</critical>
<step n="1" goal="Find and load story file">
<check if="{{story_file}} is provided by user">
<action>Use {{story_file}} directly</action>
<action>Read COMPLETE story file</action>
<action>Extract story_key from filename or metadata</action>
<goto anchor="gap_analysis" />
</check>
<!-- Ask user for story to validate -->
<output>🔍 **Gap Analysis - Story Task Validation**
This workflow validates story tasks against your actual codebase.
**Use Cases:**
- Audit "done" stories to verify they match reality
- Validate story tasks before starting development
- Check if completed work was actually implemented
**Provide story to validate:**
</output>
<ask>Enter story file path, story key (e.g., "1-2-auth"), or status to scan (e.g., "done", "review", "in-progress"):</ask>
<check if="user provides file path">
<action>Use provided file path as {{story_file}}</action>
<action>Read COMPLETE story file</action>
<action>Extract story_key from filename</action>
<goto anchor="gap_analysis" />
</check>
<check if="user provides story key (e.g., 1-2-auth)">
<action>Search {story_dir} for file matching pattern {{story_key}}.md</action>
<action>Set {{story_file}} to found file path</action>
<action>Read COMPLETE story file</action>
<goto anchor="gap_analysis" />
</check>
<check if="user provides status (e.g., done, review, in-progress)">
<output>🔎 Scanning sprint-status.yaml for stories with status: {{user_input}}...</output>
<check if="{{sprint_status}} file exists">
<action>Load the FULL file: {{sprint_status}}</action>
<action>Parse development_status section</action>
<action>Find all stories where status equals {{user_input}}</action>
<check if="no stories found with that status">
<output>📋 No stories found with status: {{user_input}}
Available statuses: backlog, ready-for-dev, in-progress, review, done
</output>
<action>HALT</action>
</check>
<check if="multiple stories found">
<output>Found {{count}} stories with status {{user_input}}:
{{list_of_stories}}
</output>
<ask>Which story would you like to validate? [Enter story key or 'all']:</ask>
<check if="user says 'all'">
<action>Set {{batch_mode}} = true</action>
<action>Store list of all story keys to validate</action>
<action>Set {{story_file}} to first story in list</action>
<action>Read COMPLETE story file</action>
<goto anchor="gap_analysis" />
</check>
<check if="user provides specific story key">
<action>Set {{story_file}} to selected story path</action>
<action>Read COMPLETE story file</action>
<goto anchor="gap_analysis" />
</check>
</check>
<check if="single story found">
<action>Set {{story_file}} to found story path</action>
<action>Read COMPLETE story file</action>
<goto anchor="gap_analysis" />
</check>
</check>
<check if="{{sprint_status}} file does NOT exist">
<output>⚠️ No sprint-status.yaml found. Please provide direct story file path.</output>
<action>HALT</action>
</check>
</check>
<anchor id="gap_analysis" />
</step>
<step n="2" goal="Perform gap analysis">
<critical>🔍 CODEBASE REALITY CHECK - Validate tasks against actual code!</critical>
<output>📊 **Analyzing Story: {{story_key}}**
Scanning codebase to validate tasks...
</output>
<!-- Extract story context -->
<action>Parse story sections: Story, Acceptance Criteria, Tasks/Subtasks, Dev Notes, Status</action>
<action>Extract all tasks and subtasks from story file</action>
<action>Identify technical areas mentioned in tasks (files, classes, functions, services, components)</action>
<!-- SCAN PHASE: Analyze actual codebase -->
<action>Determine scan targets from task descriptions:</action>
<action>- For "Create X" tasks: Check if X already exists</action>
<action>- For "Implement Y" tasks: Search for Y functionality</action>
<action>- For "Add Z" tasks: Verify Z is missing</action>
<action>- For test tasks: Check for existing test files</action>
<action>Use Glob to find relevant files matching patterns from tasks (e.g., **/*.ts, **/*.tsx, **/*.test.ts)</action>
<action>Use Grep to search for specific classes, functions, or components mentioned in tasks</action>
<action>Use Read to verify implementation details and functionality in key discovered files</action>
<!-- ANALYSIS PHASE: Compare tasks to reality -->
<action>Document scan results:</action>
**CODEBASE REALITY:**
<action>✅ What Exists:
- List verified files, classes, functions, services found
- Note implementation completeness (partial vs full)
- Identify code that tasks claim to create but already exists
</action>
<action>❌ What's Missing:
- List features mentioned in tasks but NOT found in codebase
- Identify claimed implementations that don't exist
- Note tasks marked complete but code missing
</action>
<!-- TASK VALIDATION PHASE -->
<action>For each task in the story, determine:</action>
<action>- ACCURATE: Task matches reality (code exists if task is checked, missing if unchecked)</action>
<action>- FALSE POSITIVE: Task checked [x] but code doesn't exist (BS detection!)</action>
<action>- FALSE NEGATIVE: Task unchecked [ ] but code already exists</action>
<action>- NEEDS UPDATE: Task description doesn't match current implementation</action>
<action>Generate validation report with:</action>
<action>- Tasks that are accurate</action>
<action>- Tasks that are false positives (marked done but not implemented) ⚠️</action>
<action>- Tasks that are false negatives (not marked but already exist)</action>
<action>- Recommended task updates</action>
</step>
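The first three verdicts reduce to a two-input truth table over (checkbox state, code exists); NEEDS UPDATE is the extra case that requires actually reading the implementation. A minimal sketch of the table:

```shell
classify() {  # $1 = checked? (yes/no), $2 = code exists? (yes/no)
  case "$1-$2" in
    yes-yes) echo "ACCURATE (done and implemented)" ;;
    no-no)   echo "ACCURATE (not done, not implemented)" ;;
    yes-no)  echo "FALSE_POSITIVE (claimed but missing)" ;;
    no-yes)  echo "FALSE_NEGATIVE (exists but unchecked)" ;;
  esac
}
classify yes no   # task marked [x] but no code found
classify no yes   # code found but task still [ ]
```

Only the yes-no row is a correctness problem; the no-yes row just means free progress that was never recorded.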
<step n="3" goal="Present findings and recommendations">
<critical>📋 SHOW TRUTH - Compare story claims vs codebase reality</critical>
<output>
📊 **Gap Analysis Results: {{story_key}}**
**Story Status:** {{story_status}}
---
**Codebase Scan Results:**
✅ **What Actually Exists:**
{{list_of_existing_files_features_with_details}}
❌ **What's Actually Missing:**
{{list_of_missing_elements_despite_claims}}
---
**Task Validation:**
{{if_any_accurate_tasks}}
✅ **Accurate Tasks** ({{count}}):
{{list_tasks_that_match_reality}}
{{endif}}
{{if_any_false_positives}}
⚠️ **FALSE POSITIVES** ({{count}}) - Marked done but NOT implemented:
{{list_tasks_marked_complete_but_code_missing}}
**WARNING:** These tasks claim completion but code doesn't exist!
{{endif}}
{{if_any_false_negatives}}
**FALSE NEGATIVES** ({{count}}) - Not marked but ALREADY exist:
{{list_tasks_unchecked_but_code_exists}}
{{endif}}
{{if_any_needs_update}}
🔄 **NEEDS UPDATE** ({{count}}) - Task description doesn't match implementation:
{{list_tasks_needing_description_updates}}
{{endif}}
---
📝 **Proposed Story Updates:**
{{if_false_positives_found}}
**CRITICAL - Uncheck false positives:**
{{list_tasks_to_uncheck_with_reasoning}}
{{endif}}
{{if_false_negatives_found}}
**Check completed work:**
{{list_tasks_to_check_with_verification}}
{{endif}}
{{if_task_updates_needed}}
**Update task descriptions:**
{{list_task_description_updates}}
{{endif}}
{{if_gap_analysis_section_missing}}
**Add Gap Analysis section** documenting findings
{{endif}}
---
**Story Accuracy Score:** {{percentage_of_accurate_tasks}}% ({{accurate_count}}/{{total_count}})
</output>
<check if="story status is 'done' or 'review'">
<check if="false positives found">
<output>🚨 **WARNING:** This story is marked {{story_status}} but has FALSE POSITIVES!
{{count}} task(s) claim completion but code doesn't exist.
This story may have been prematurely marked complete.
**Recommendation:** Update story status to 'in-progress' and complete missing work.
</output>
</check>
</check>
</step>
<step n="4" goal="Get user decision">
<ask>**What would you like to do?**
Options:
[U] Update - Apply proposed changes to story file
[A] Audit Report - Save findings to report file without updating story
[N] No Changes - Just show me the findings
[R] Review Details - Show me more details about specific findings
[C] Continue to Next - Move to next story (batch mode only)
[Q] Quit - Exit gap analysis
</ask>
<!-- UPDATE OPTION -->
<check if="user chooses Update (U)">
<action>Update story file with proposed changes:</action>
<action>- Uncheck false positive tasks</action>
<action>- Check false negative tasks</action>
<action>- Update task descriptions as needed</action>
<action>- Add or update "Gap Analysis" section with findings</action>
<action>- Add Change Log entry: "Gap analysis performed - tasks validated against codebase ({{date}})"</action>
<check if="false positives found AND story status is done or review">
<ask>Story has false positives. Update status to 'in-progress'? [Y/n]:</ask>
<check if="user approves">
<action>Update story Status to 'in-progress'</action>
<check if="sprint_status file exists">
<action>Update sprint-status.yaml status for this story to 'in-progress'</action>
</check>
</check>
</check>
<output>✅ Story file updated with gap analysis findings.
- {{changes_count}} task(s) updated
- Gap Analysis section added/updated
- Accuracy score: {{accuracy_percentage}}%
**File:** {{story_file}}
</output>
<check if="batch_mode is true">
<ask>Continue to next story? [Y/n]:</ask>
<check if="user approves">
<action>Load next story from batch list</action>
<goto step="2">Analyze next story</goto>
</check>
</check>
<action>HALT - Gap analysis complete</action>
</check>
<!-- AUDIT REPORT OPTION -->
<check if="user chooses Audit Report (A)">
<action>Generate audit report file: {{story_dir}}/gap-analysis-report-{{story_key}}-{{date}}.md</action>
<action>Include full findings, accuracy scores, recommendations</action>
<output>📄 Audit report saved: {{report_file}}
This report can be shared with team for review.
Story file was NOT modified.
</output>
<check if="batch_mode is true">
<ask>Continue to next story? [Y/n]:</ask>
<check if="user approves">
<action>Load next story from batch list</action>
<goto step="2">Analyze next story</goto>
</check>
</check>
<action>HALT - Gap analysis complete</action>
</check>
<!-- NO CHANGES OPTION -->
<check if="user chooses No Changes (N)">
<output>Findings displayed only. No files modified.</output>
<action>HALT - Gap analysis complete</action>
</check>
<!-- REVIEW DETAILS OPTION -->
<check if="user chooses Review Details (R)">
<ask>Which findings would you like more details about? (specify task numbers, file names, or areas):</ask>
<action>Provide detailed analysis of requested areas using Read tool for deeper code inspection</action>
<action>After review, re-present the decision options</action>
<action>Continue based on user's subsequent choice</action>
</check>
<!-- CONTINUE TO NEXT (batch mode) -->
<check if="user chooses Continue (C) AND batch_mode is true">
<action>Load next story from batch list</action>
<goto step="2">Analyze next story</goto>
</check>
<check if="user chooses Continue (C) AND batch_mode is NOT true">
<output>⚠️ Not in batch mode. Only one story to validate.</output>
<action>HALT</action>
</check>
<!-- QUIT OPTION -->
<check if="user chooses Quit (Q)">
<output>👋 Gap analysis session ended.
{{if batch_mode}}Processed {{processed_count}}/{{total_count}} stories.{{endif}}
</output>
<action>HALT</action>
</check>
</step>
<step n="5" goal="Completion summary">
<output>✅ **Gap Analysis Complete, {user_name}!**
{{if_single_story}}
**Story Analyzed:** {{story_key}}
**Accuracy Score:** {{accuracy_percentage}}%
**Actions Taken:** {{actions_summary}}
{{endif}}
{{if_batch_mode}}
**Batch Analysis Summary:**
- Stories analyzed: {{processed_count}}
- Average accuracy: {{avg_accuracy}}%
- False positives found: {{total_false_positives}}
- Stories updated: {{updated_count}}
{{endif}}
**Next Steps:**
- Review updated stories
- Address any false positives found
- Run dev-story for stories needing work
</output>
</step>
</workflow>


@ -0,0 +1,246 @@
# Gap Analysis v3.0 - Verify Story Tasks Against Codebase
<purpose>
Validate story checkbox claims against actual codebase reality.
Find false positives (checked but not done) and false negatives (done but unchecked).
Interactive workflow with options to update, audit, or review.
</purpose>
<philosophy>
**Evidence-Based Verification**
Checkboxes lie. Code doesn't.
- Search codebase for implementation evidence
- Check for stubs, TODOs, empty functions
- Verify tests exist for claimed features
- Report accuracy of story completion claims
</philosophy>
<config>
name: gap-analysis
version: 3.0.0
defaults:
auto_update: false
create_audit_report: true
strict_mode: false # If true, stubs count as incomplete
output:
update_story: "Modify checkbox state to match reality"
audit_report: "Generate detailed gap analysis document"
no_changes: "Display results only"
</config>
<execution_context>
@patterns/verification.md
@patterns/hospital-grade.md
</execution_context>
<process>
<step name="load_story" priority="first">
**Load and parse story file**
```bash
STORY_FILE="{{story_file}}"
[ -f "$STORY_FILE" ] || { echo "❌ Story file not found: $STORY_FILE"; exit 1; }
```
Use Read tool on story file. Extract:
- All `- [ ]` and `- [x]` items
- File references from Dev Agent Record
- Task descriptions with expected artifacts
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔍 GAP ANALYSIS: {{story_key}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Tasks: {{total_tasks}}
Currently checked: {{checked_count}}
```
</step>
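The checkbox extraction above can be approximated with grep; a sketch using an invented demo story:

```shell
cat > /tmp/story-demo.md <<'EOF'
- [x] Create login endpoint
- [ ] Add tests for login
- [x] Wire session middleware
EOF

total_tasks=$(grep -cE '^- \[[ x]\] ' /tmp/story-demo.md)
checked_count=$(grep -cE '^- \[x\] ' /tmp/story-demo.md)
echo "Tasks: $total_tasks  Currently checked: $checked_count"
```

The real workflow uses the Read tool rather than grep so it can also capture file references from the Dev Agent Record, not just the checkbox lines.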
<step name="verify_each_task">
**Verify each task against codebase**
For each task item:
1. **Extract artifacts** - File names, component names, function names
2. **Search codebase:**
```bash
# Check file exists
Glob: {{expected_file}}
# Check function/component exists
Grep: "{{function_or_component_name}}"
```
3. **If file exists, check quality:**
```bash
# Check for stubs
Grep: "TODO|FIXME|Not implemented|throw new Error" {{file}}
# Check for tests
Glob: {{file_base}}.test.* OR {{file_base}}.spec.*
```
4. **Determine status:**
- **VERIFIED:** File exists, not a stub, tests exist
- **PARTIAL:** File exists but stub/TODO or no tests
- **MISSING:** File does not exist
</step>
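Steps 3-4 combined, as a runnable sketch; the file content below is a made-up stub to show the PARTIAL path:

```shell
cat > /tmp/stub-demo.ts <<'EOF'
export function login() {
  throw new Error("Not implemented");
}
EOF

file=/tmp/stub-demo.ts
if [ ! -f "$file" ]; then
  status=MISSING
elif grep -qE 'TODO|FIXME|Not implemented|throw new Error' "$file"; then
  status=PARTIAL
else
  status=VERIFIED
fi
echo "$file -> $status"
```

Note the stub pattern is heuristic: legitimate error handling also uses `throw new Error`, which is why `strict_mode` controls whether PARTIAL counts against completion.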
<step name="calculate_accuracy">
**Compare claimed vs actual**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📊 GAP ANALYSIS RESULTS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Tasks analyzed: {{total}}
By Status:
- ✅ Verified Complete: {{verified}} ({{verified_pct}}%)
- ⚠️ Partial: {{partial}} ({{partial_pct}}%)
- ❌ Missing: {{missing}} ({{missing_pct}}%)
Accuracy Analysis:
- Checked & Verified: {{correct_checked}}
- Checked but MISSING: {{false_positives}} ← FALSE POSITIVES
- Unchecked but DONE: {{false_negatives}} ← FALSE NEGATIVES
Checkbox Accuracy: {{accuracy}}%
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
**If false positives found:**
```
⚠️ FALSE POSITIVES DETECTED
The following tasks are marked done but code is missing:
{{#each false_positives}}
- [ ] {{task}} — Expected: {{expected_file}} — ❌ NOT FOUND
{{/each}}
```
</step>
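Checkbox accuracy is just the share of checkboxes whose state matches the evidence; with invented counts:

```shell
total=20
false_positives=3   # checked but code missing
false_negatives=2   # unchecked but code verified
correct=$((total - false_positives - false_negatives))
accuracy=$((100 * correct / total))
echo "Checkbox Accuracy: ${accuracy}%"
```

Integer division truncates, so a reported 75% may hide a fraction; that is acceptable here since the metric is directional rather than audited.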
<step name="present_options">
**Ask user how to proceed**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📋 OPTIONS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
[U] Update - Fix checkboxes to match reality
[A] Audit Report - Generate detailed report file
[N] No Changes - Display only (already done)
[R] Review Details - Show full evidence for each task
Your choice:
```
</step>
<step name="option_update" if="choice == U">
**Update story file checkboxes**
For false positives:
- Change `[x]` to `[ ]` for tasks with missing code
For false negatives:
- Change `[ ]` to `[x]` for tasks with verified code
Use Edit tool to make changes.
```
✅ Story checkboxes updated
- {{fp_count}} false positives unchecked
- {{fn_count}} false negatives checked
- New completion: {{new_pct}}%
```
</step>
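Outside the Edit tool, the same checkbox flip could be done with sed targeting the false-positive line. The demo content is invented, and since `sed -i` syntax differs between GNU and BSD, the sketch writes to a new file instead:

```shell
cat > /tmp/fix-demo.md <<'EOF'
- [x] Create login endpoint
- [x] Add payment service
EOF

# Suppose line 2 was flagged as a false positive: uncheck it.
sed '2s/^- \[x\]/- [ ]/' /tmp/fix-demo.md > /tmp/fix-demo.out
new_checked=$(grep -c '^- \[x\]' /tmp/fix-demo.out)
echo "Checked after fix: $new_checked"
```

Line-addressed edits avoid accidentally unchecking other tasks whose text happens to match a pattern.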
<step name="option_audit" if="choice == A">
**Generate audit report**
Write to: `{{story_dir}}/gap-analysis-{{story_key}}-{{timestamp}}.md`
Include:
- Executive summary
- Detailed task-by-task evidence
- False positive/negative lists
- Recommendations
```
✅ Audit report generated: {{report_path}}
```
</step>
<step name="option_review" if="choice == R">
**Show detailed evidence**
For each task:
```
Task: {{task_text}}
Checkbox: {{checked_state}}
Evidence:
- File: {{file}} - {{exists ? "✅ EXISTS" : "❌ MISSING"}}
{{#if exists}}
- Stub check: {{is_stub ? "⚠️ STUB DETECTED" : "✅ Real implementation"}}
- Tests: {{has_tests ? "✅ Tests exist" : "❌ No tests"}}
{{/if}}
Verdict: {{status}}
```
After review, return to options menu.
</step>
<step name="final_summary">
**Display completion**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ GAP ANALYSIS COMPLETE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Story: {{story_key}}
Verified Completion: {{verified_pct}}%
Checkbox Accuracy: {{accuracy}}%
{{#if updated}}
✅ Checkboxes updated to match reality
{{/if}}
{{#if report_generated}}
📄 Report: {{report_path}}
{{/if}}
{{#if false_positives > 0}}
⚠️ {{false_positives}} tasks need implementation work
{{/if}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
</step>
</process>
<examples>
```bash
# Quick gap analysis of single story
/gap-analysis story_file=docs/sprint-artifacts/2-5-auth.md
# With auto-update enabled
/gap-analysis story_file=docs/sprint-artifacts/2-5-auth.md auto_update=true
```
</examples>
<failure_handling>
**Story file not found:** HALT with clear error.
**Search fails:** Log warning, count as MISSING.
**Edit fails:** Report error, suggest manual update.
</failure_handling>
<success_criteria>
- [ ] All tasks verified against codebase
- [ ] False positives/negatives identified
- [ ] Accuracy metrics calculated
- [ ] User choice executed (update/audit/review)
</success_criteria>

View File

@@ -11,7 +11,7 @@ story_dir: "{implementation_artifacts}"
# Workflow components
installed_path: "{project-root}/_bmad/bmm/workflows/4-implementation/gap-analysis"
-instructions: "{installed_path}/instructions.xml"
+instructions: "{installed_path}/workflow.md"
# Variables
story_file: "" # User provides story file path or auto-discover

View File

@@ -1,957 +0,0 @@
# Migrate to GitHub - Production-Grade Story Migration
<critical>The workflow execution engine is governed by: {project-root}/_bmad/core/tasks/workflow.xml</critical>
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
<critical>RELIABILITY FIRST: This workflow prioritizes data integrity over speed</critical>
<workflow>
<step n="0" goal="Pre-Flight Safety Checks">
<critical>MUST verify all prerequisites before ANY migration operations</critical>
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🛡️ PRE-FLIGHT SAFETY CHECKS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
<substep n="0a" title="Verify GitHub MCP access">
<action>Test GitHub MCP connection:</action>
<action>Call: mcp__github__get_me()</action>
<check if="API call fails">
<output>
❌ CRITICAL: GitHub MCP not accessible
Cannot proceed with migration without GitHub API access.
Possible causes:
- GitHub MCP server not configured
- Authentication token missing or invalid
- Network connectivity issues
Fix:
1. Ensure GitHub MCP is configured in Claude settings
2. Verify token has required permissions:
- repo (full control)
- write:discussion (for comments)
3. Test connection: Try any GitHub MCP command
HALTING - Cannot migrate without GitHub access.
</output>
<action>HALT</action>
</check>
<action>Extract current user info:</action>
<action> - username: {{user.login}}</action>
<action> - user_id: {{user.id}}</action>
<output>✅ GitHub MCP connected (@{{username}})</output>
</substep>
<substep n="0b" title="Verify repository access">
<action>Verify github_owner and github_repo parameters provided</action>
<check if="parameters missing">
<output>
❌ ERROR: GitHub repository not specified
Required parameters:
github_owner: GitHub username or organization
github_repo: Repository name
Usage:
/migrate-to-github github_owner=jschulte github_repo=myproject
/migrate-to-github github_owner=jschulte github_repo=myproject mode=execute
HALTING
</output>
<action>HALT</action>
</check>
<action>Test repository access:</action>
<action>Call: mcp__github__list_issues({
owner: {{github_owner}},
repo: {{github_repo}},
per_page: 1
})</action>
<check if="repository not found or access denied">
<output>
❌ CRITICAL: Cannot access repository {{github_owner}}/{{github_repo}}
Possible causes:
- Repository doesn't exist
- Token lacks access to this repository
- Repository is private and token doesn't have permission
Verify:
1. Repository exists: <https://github.com/{{github_owner}}/{{github_repo}}>
2. Token has write access to issues
3. Repository name is spelled correctly
HALTING
</output>
<action>HALT</action>
</check>
<output>✅ Repository accessible ({{github_owner}}/{{github_repo}})</output>
</substep>
<substep n="0c" title="Verify local files exist">
<action>Check sprint-status.yaml exists:</action>
<action>test -f {{sprint_status}}</action>
<check if="file not found">
<output>
❌ ERROR: sprint-status.yaml not found at {{sprint_status}}
Cannot migrate without sprint status file.
Run /sprint-planning to generate it first.
HALTING
</output>
<action>HALT</action>
</check>
<action>Read and parse sprint-status.yaml</action>
<action>Count total stories to migrate</action>
<output>✅ Found {{total_stories}} stories in sprint-status.yaml</output>
<action>Verify story files exist:</action>
<action>For each story, try multiple naming patterns to find file</action>
<action>Report:</action>
<output>
📊 Story File Status:
- ✅ Files found: {{stories_with_files}}
- ❌ Files missing: {{stories_without_files}}
{{#if stories_without_files > 0}}
Missing: {{missing_story_keys}}
{{/if}}
</output>
<check if="stories_without_files > 0">
<ask>
⚠️ {{stories_without_files}} stories have no files
Options:
[C] Continue (only migrate stories with files)
[S] Skip these stories (add to skip list)
[H] Halt (fix missing files first)
Choice:
</ask>
<check if="choice == 'H'">
<action>HALT</action>
</check>
</check>
</substep>
<substep n="0d" title="Check for existing migration">
<action>Check if state file exists: {{state_file}}</action>
<check if="state file exists">
<action>Read migration state</action>
<action>Extract: stories_migrated, issues_created, last_completed, timestamp</action>
<output>
⚠️ Previous migration detected
Last migration:
- Date: {{migration_timestamp}}
- Stories migrated: {{stories_migrated.length}}
- Issues created: {{issues_created.length}}
- Last completed: {{last_completed}}
- Status: {{migration_status}}
Options:
[R] Resume (continue from where it left off)
[F] Fresh (start over, may create duplicates if not careful)
[V] View (show what was migrated)
[D] Delete state (clear and start fresh)
Choice:
</output>
<ask>How to proceed?</ask>
<check if="choice == 'R'">
<action>Set resume_mode = true</action>
<action>Load list of already-migrated stories</action>
<action>Filter them out from migration queue</action>
<output>✅ Resuming from story: {{last_completed}}</output>
</check>
<check if="choice == 'F'">
<output>⚠️ WARNING: Fresh start may create duplicate issues if stories were already migrated.</output>
<ask>Confirm fresh start (will check for duplicates)? (yes/no):</ask>
<check if="not confirmed">
<action>HALT</action>
</check>
</check>
<check if="choice == 'V'">
<action>Display migration state details</action>
<action>Then re-prompt for choice</action>
</check>
<check if="choice == 'D'">
<action>Delete state file</action>
<action>Set resume_mode = false</action>
<output>✅ State cleared</output>
</check>
</check>
</substep>
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ PRE-FLIGHT CHECKS PASSED
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
- GitHub MCP: Connected
- Repository: Accessible
- Sprint status: Loaded ({{total_stories}} stories)
- Story files: {{stories_with_files}} found
- Mode: {{mode}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
</step>
<step n="1" goal="Dry-run mode - Preview migration plan">
<check if="mode != 'dry-run'">
<action>Skip to Step 2 (Execute mode)</action>
</check>
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔍 DRY-RUN MODE (Preview Only - No Changes)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
This will show what WOULD happen without actually creating issues.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
<action>For each story in sprint-status.yaml:</action>
<iterate>For each story_key:</iterate>
<substep n="1a" title="Check if issue already exists">
<action>Search GitHub: mcp__github__search_issues({
query: "repo:{{github_owner}}/{{github_repo}} label:story:{{story_key}}"
})</action>
<check if="issue found">
<action>would_update = {{update_existing}}</action>
<output>
📝 Story {{story_key}}:
GitHub: Issue #{{existing_issue.number}} EXISTS
Action: {{#if would_update}}Would UPDATE{{else}}Would SKIP{{/if}}
Current labels: {{existing_issue.labels}}
Current assignee: {{existing_issue.assignee || "none"}}
</output>
</check>
<check if="issue not found">
<action>would_create = true</action>
<action>Read local story file</action>
<action>Parse: title, ACs, tasks, epic, status</action>
<output>
📝 Story {{story_key}}:
GitHub: NOT FOUND
Action: Would CREATE
Proposed Issue:
- Title: "Story {{story_key}}: {{parsed_title}}"
- Labels: type:story, story:{{story_key}}, status:{{status}}, epic:{{epic_number}}, complexity:{{complexity}}
- Milestone: Epic {{epic_number}}
- Acceptance Criteria: {{ac_count}} items
- Tasks: {{task_count}} items
- Assignee: {{#if status == 'in-progress'}}@{{infer_from_git_log}}{{else}}none{{/if}}
</output>
</check>
</substep>
<action>Count actions:</action>
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📊 DRY-RUN SUMMARY
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
**Total Stories:** {{total_stories}}
**Actions:**
- ✅ Would CREATE: {{would_create_count}} new issues
- 🔄 Would UPDATE: {{would_update_count}} existing issues
- ⏭️ Would SKIP: {{would_skip_count}} (existing, no update)
**Epics/Milestones:**
- Would CREATE: {{epic_milestones_to_create.length}} milestones
- Already exist: {{epic_milestones_existing.length}}
**Estimated API Calls:**
- Issue searches: {{total_stories}} (check existing)
- Issue creates: {{would_create_count}}
- Issue updates: {{would_update_count}}
- Milestone operations: {{milestone_operations}}
- **Total:** ~{{total_api_calls}} API calls
**Rate Limit Impact:**
- Authenticated limit: 5000/hour
- This migration: ~{{total_api_calls}} calls
- Remaining after: ~{{5000 - total_api_calls}}
- Safe: {{#if total_api_calls < 1000}}YES{{else}}Borderline (consider smaller batches){{/if}}
**Estimated Duration:** {{estimated_minutes}} minutes
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
⚠️ This was a DRY-RUN. No issues were created.
To execute the migration:
/migrate-to-github mode=execute github_owner={{github_owner}} github_repo={{github_repo}}
To migrate only Epic 2:
/migrate-to-github mode=execute filter_by_epic=2 github_owner={{github_owner}} github_repo={{github_repo}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
<action>Exit workflow (dry-run complete)</action>
</step>
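The API-call and rate-limit estimates in the dry-run summary reduce to simple arithmetic. A minimal sketch — the 5000/hour limit matches GitHub's authenticated REST quota, while the 1000-call safety threshold is this workflow's own heuristic:

```python
def estimate_api_calls(total, creates, updates, milestone_ops):
    """One existence search per story, plus one write per create/update."""
    searches = total
    return searches + creates + updates + milestone_ops

def rate_limit_verdict(calls, limit=5000, safe_threshold=1000):
    """Report remaining quota and whether the batch is comfortably small."""
    return {"calls": calls,
            "remaining": limit - calls,
            "safe": calls < safe_threshold}
```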
<step n="2" goal="Execute mode - Perform migration with atomic operations">
<check if="mode != 'execute'">
<action>Skip to Step 3 (Verify mode)</action>
</check>
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
⚡ EXECUTE MODE (Migrating Stories to GitHub)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
**SAFETY GUARANTEES:**
✅ Idempotent - Can re-run safely (checks for duplicates)
✅ Atomic - Each story fully succeeds or rolls back
✅ Verified - Reads back each created issue
✅ Resumable - Saves state after each story
✅ Reversible - Creates rollback manifest
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
<ask>
⚠️ FINAL CONFIRMATION
You are about to create ~{{would_create_count}} GitHub Issues.
This operation:
- WILL create issues in {{github_owner}}/{{github_repo}}
- WILL modify your GitHub repository
- CAN be rolled back (we'll create rollback manifest)
- CANNOT be undone automatically after issues are created
Have you:
- [ ] Run dry-run mode to preview?
- [ ] Verified repository is correct?
- [ ] Backed up sprint-status.yaml?
- [ ] Confirmed you want to proceed?
Type "I understand and want to proceed" to continue:
</ask>
<check if="confirmation != 'I understand and want to proceed'">
<output>❌ Migration cancelled - confirmation not received</output>
<action>HALT</action>
</check>
<action>Initialize migration state:</action>
<action>
migration_state = {
started_at: {{timestamp}},
mode: "execute",
github_owner: {{github_owner}},
github_repo: {{github_repo}},
total_stories: {{total_stories}},
stories_migrated: [],
issues_created: [],
issues_updated: [],
issues_failed: [],
rollback_manifest: [],
last_completed: null
}
</action>
<action>Save initial state to {{state_file}}</action>
<action>Initialize rollback manifest (for safety):</action>
<action>rollback_manifest = {
created_at: {{timestamp}},
github_owner: {{github_owner}},
github_repo: {{github_repo}},
created_issues: [] # Will track issue numbers for rollback
}</action>
<iterate>For each story in sprint-status.yaml:</iterate>
<substep n="2a" title="Migrate single story (ATOMIC)">
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📦 Migrating {{current_index}}/{{total_stories}}: {{story_key}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
<action>Read local story file</action>
<check if="file not found">
<output> ⏭️ SKIP - No file found</output>
<action>Add to migration_state.issues_failed with reason: "File not found"</action>
<action>Continue to next story</action>
</check>
<action>Parse story file:</action>
<action> - Extract all 12 sections</action>
<action> - Parse Acceptance Criteria (convert to checkboxes)</action>
<action> - Parse Tasks (convert to checkboxes)</action>
<action> - Extract metadata: epic_number, complexity</action>
<action>Check if issue already exists (idempotent check):</action>
<action>Call: mcp__github__search_issues({
query: "repo:{{github_owner}}/{{github_repo}} label:story:{{story_key}}"
})</action>
<check if="issue exists AND update_existing == false">
<output> ✅ EXISTS - Issue #{{existing_issue.number}} (skipping, update_existing=false)</output>
<action>Add to migration_state.stories_migrated (already done)</action>
<action>Continue to next story</action>
</check>
<check if="issue exists AND update_existing == true">
<output> 🔄 EXISTS - Issue #{{existing_issue.number}} (updating)</output>
<action>ATOMIC UPDATE with retry:</action>
<action>
attempt = 0
max_attempts = {{max_retries}} + 1
WHILE attempt < max_attempts:
TRY:
# Update issue
result = mcp__github__issue_write({
method: "update",
owner: {{github_owner}},
repo: {{github_repo}},
issue_number: {{existing_issue.number}},
title: "Story {{story_key}}: {{parsed_title}}",
body: {{convertStoryToIssueBody(parsed)}},
labels: {{generateLabels(story_key, status, parsed)}}
})
# Verify update succeeded (read back)
sleep 1 second # GitHub eventual consistency
verification = mcp__github__issue_read({
method: "get",
owner: {{github_owner}},
repo: {{github_repo}},
issue_number: {{existing_issue.number}}
})
# Check verification
IF verification.title != expected_title:
THROW "Write verification failed"
# Success!
output: " ✅ UPDATED and VERIFIED - Issue #{{existing_issue.number}}"
BREAK
CATCH error:
attempt++
IF attempt < max_attempts:
sleep {{retry_backoff_ms[attempt]}}
output: " ⚠️ Retry {{attempt}}/{{max_retries}} after error: {{error}}"
ELSE:
output: " ❌ FAILED after {{max_retries}} retries: {{error}}"
add to migration_state.issues_failed
IF halt_on_critical_error:
HALT
ELSE:
CONTINUE to next story
</action>
<action>Add to migration_state.issues_updated</action>
</check>
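The retry-with-verification loop above (used for both updates and creates) can be sketched in Python. The GitHub calls are injected as plain callables here, since the real workflow goes through MCP tools rather than a Python client:

```python
import time

def create_with_verify(create, read_back, expected_title,
                       max_retries=3, backoff_s=(1, 2, 4), sleep=time.sleep):
    """Create an issue, read it back to verify, and retry with backoff.

    `create` and `read_back` stand in for the GitHub MCP calls;
    injecting them keeps this sketch testable offline.
    """
    last_error = None
    for attempt in range(max_retries + 1):
        try:
            issue = create()
            found = read_back(issue["number"])
            if found["title"] != expected_title:
                raise RuntimeError("write verification failed")
            return issue
        except Exception as err:
            last_error = err
            if attempt < max_retries:
                # Exponential-ish backoff before the next attempt
                sleep(backoff_s[min(attempt, len(backoff_s) - 1)])
    raise RuntimeError(f"failed after {max_retries} retries: {last_error}")
```

The read-back step matters because GitHub writes are eventually consistent: a create can succeed while an immediate read still misses it, which is why the workflow also re-searches before treating an error as a true failure.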
<check if="issue does NOT exist">
<output> 🆕 CREATING new issue...</output>
<action>Generate issue body from story file:</action>
<action>
issue_body = """
**Story File:** [{{story_key}}.md]({{file_path_in_repo}})
**Epic:** {{epic_number}}
**Complexity:** {{complexity}} ({{task_count}} tasks)
## Business Context
{{parsed.businessContext}}
## Acceptance Criteria
{{#each parsed.acceptanceCriteria}}
- [ ] AC{{@index + 1}}: {{this}}
{{/each}}
## Tasks
{{#each parsed.tasks}}
- [ ] {{this}}
{{/each}}
## Technical Requirements
{{parsed.technicalRequirements}}
## Definition of Done
{{#each parsed.definitionOfDone}}
- [ ] {{this}}
{{/each}}
---
_Migrated from BMAD local files_
_Sync timestamp: {{timestamp}}_
_Local file: `{{story_file_path}}`_
"""
</action>
<action>Generate labels:</action>
<action>
labels = [
"type:story",
"story:{{story_key}}",
"status:{{current_status}}",
"epic:{{epic_number}}",
"complexity:{{complexity}}"
]
{{#if has_high_risk_keywords}}
labels.push("risk:high")
{{/if}}
</action>
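The label-generation action can be sketched as a pure function. The high-risk keyword list is an illustrative assumption — the workflow leaves the exact keywords unspecified:

```python
HIGH_RISK_KEYWORDS = ("auth", "payment", "security", "migration")

def generate_labels(story_key, status, epic, complexity, title=""):
    """Build the label set for a migrated story issue."""
    labels = [
        "type:story",
        f"story:{story_key}",
        f"status:{status}",
        f"epic:{epic}",
        f"complexity:{complexity}",
    ]
    if any(kw in title.lower() for kw in HIGH_RISK_KEYWORDS):
        labels.append("risk:high")
    return labels
```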
<action>ATOMIC CREATE with retry and verification:</action>
<action>
attempt = 0
max_attempts = {{max_retries}} + 1
WHILE attempt < max_attempts:
TRY:
# Create issue
created_issue = mcp__github__issue_write({
method: "create",
owner: {{github_owner}},
repo: {{github_repo}},
title: "Story {{story_key}}: {{parsed_title}}",
body: {{issue_body}},
labels: {{labels}}
})
issue_number = created_issue.number
# CRITICAL: Verify creation succeeded (read back)
sleep 2 seconds # GitHub eventual consistency
verification = mcp__github__issue_read({
method: "get",
owner: {{github_owner}},
repo: {{github_repo}},
issue_number: {{issue_number}}
})
# Verify all fields
IF verification.title != expected_title:
THROW "Title mismatch after create"
IF NOT verification.labels.includes("story:{{story_key}}"):
THROW "Story label missing after create"
# Success - record for rollback capability
output: " ✅ CREATED and VERIFIED - Issue #{{issue_number}}"
rollback_manifest.created_issues.push({
story_key: {{story_key}},
issue_number: {{issue_number}},
created_at: {{timestamp}}
})
migration_state.issues_created.push({
story_key: {{story_key}},
issue_number: {{issue_number}}
})
BREAK
CATCH error:
attempt++
# Check if issue was created despite error (orphaned issue)
check_result = mcp__github__search_issues({
query: "repo:{{github_owner}}/{{github_repo}} label:story:{{story_key}}"
})
IF check_result.length > 0:
# Issue was created, verification failed - treat as success
output: " ✅ CREATED (verification had transient error)"
BREAK
IF attempt < max_attempts:
sleep {{retry_backoff_ms[attempt]}}
output: " ⚠️ Retry {{attempt}}/{{max_retries}}"
ELSE:
output: " ❌ FAILED after {{max_retries}} retries: {{error}}"
migration_state.issues_failed.push({
story_key: {{story_key}},
error: {{error}},
attempts: {{attempt}}
})
IF halt_on_critical_error:
output: "HALTING - Critical error during migration"
save migration_state
HALT
ELSE:
output: "Continuing despite failure (continue_on_failure=true)"
CONTINUE to next story
</action>
</check>
<action>Update migration state:</action>
<action>migration_state.stories_migrated.push({{story_key}})</action>
<action>migration_state.last_completed = {{story_key}}</action>
<check if="save_state_after_each == true">
<action>Save migration state to {{state_file}}</action>
<action>Save rollback manifest to {{output_folder}}/migration-rollback-{{timestamp}}.yaml</action>
</check>
<check if="current_index % 10 == 0">
<output>
📊 Progress: {{current_index}}/{{total_stories}} migrated
Created: {{issues_created.length}}
Updated: {{issues_updated.length}}
Failed: {{issues_failed.length}}
</output>
</check>
</substep>
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ MIGRATION COMPLETE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
**Total:** {{total_stories}} stories processed
**Created:** {{issues_created.length}} new issues
**Updated:** {{issues_updated.length}} existing issues
**Failed:** {{issues_failed.length}} errors
**Duration:** {{actual_duration}}
{{#if issues_failed.length > 0}}
**Failed Stories:**
{{#each issues_failed}}
- {{story_key}}: {{error}}
{{/each}}
Recommendation: Fix errors and re-run migration (will skip already-migrated stories)
{{/if}}
**Rollback Manifest:** {{rollback_manifest_path}}
(Use this file to delete created issues if needed)
**State File:** {{state_file}}
(Tracks migration progress for resume capability)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
<action>Continue to Step 3 (Verify)</action>
</step>
<step n="3" goal="Verify mode - Double-check migration accuracy">
<check if="mode != 'verify' AND mode != 'execute'">
<action>Skip to Step 4</action>
</check>
<check if="mode == 'execute'">
<ask>
Migration complete. Run verification to double-check accuracy? (yes/no):
</ask>
<check if="response != 'yes'">
<action>Skip to Step 5 (Report)</action>
</check>
</check>
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔍 VERIFICATION MODE (Double-Checking Migration)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
<action>Load migration state from {{state_file}}</action>
<iterate>For each migrated story in migration_state.stories_migrated:</iterate>
<action>Fetch issue from GitHub:</action>
<action>Search: label:story:{{story_key}}</action>
<check if="issue not found">
<output> ❌ VERIFICATION FAILED: {{story_key}} - Issue not found in GitHub</output>
<action>Add to verification_failures</action>
</check>
<check if="issue found">
<action>Verify fields match expected:</action>
<action> - Title contains story_key ✓</action>
<action> - Label "story:{{story_key}}" exists ✓</action>
<action> - Status label matches sprint-status.yaml ✓</action>
<action> - AC count matches local file ✓</action>
<check if="all fields match">
<output> ✅ VERIFIED: {{story_key}} → Issue #{{issue_number}}</output>
</check>
<check if="fields mismatch">
<output> ⚠️ MISMATCH: {{story_key}} → Issue #{{issue_number}}</output>
<output> Expected: {{expected}}</output>
<output> Actual: {{actual}}</output>
<action>Add to verification_warnings</action>
</check>
</check>
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📊 VERIFICATION RESULTS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
**Stories Checked:** {{stories_migrated.length}}
**✅ Verified Correct:** {{verified_count}}
**⚠️ Warnings:** {{verification_warnings.length}}
**❌ Failures:** {{verification_failures.length}}
{{#if verification_failures.length > 0}}
**Verification Failures:**
{{#each verification_failures}}
- {{this}}
{{/each}}
❌ Migration has errors - issues may be missing or incorrect
{{else}}
✅ All migrated stories verified in GitHub
{{/if}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
</step>
<step n="4" goal="Rollback mode - Delete created issues">
<check if="mode != 'rollback'">
<action>Skip to Step 5 (Report)</action>
</check>
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
⚠️ ROLLBACK MODE (Delete Migrated Issues)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
<action>Load rollback manifest from {{output_folder}}/migration-rollback-*.yaml</action>
<check if="manifest not found">
<output>
❌ ERROR: No rollback manifest found
Cannot rollback without manifest file.
Rollback manifests are in: {{output_folder}}/migration-rollback-*.yaml
HALTING
</output>
<action>HALT</action>
</check>
<output>
**Rollback Manifest:**
- Created: {{manifest.created_at}}
- Repository: {{manifest.github_owner}}/{{manifest.github_repo}}
- Issues to delete: {{manifest.created_issues.length}}
**WARNING:** This will PERMANENTLY DELETE these issues from GitHub:
{{#each manifest.created_issues}}
- Issue #{{issue_number}}: {{story_key}}
{{/each}}
This operation CANNOT be undone!
</output>
<ask>
Type "DELETE ALL ISSUES" to proceed with rollback:
</ask>
<check if="confirmation != 'DELETE ALL ISSUES'">
<output>❌ Rollback cancelled</output>
<action>HALT</action>
</check>
<iterate>For each issue in manifest.created_issues:</iterate>
<action>Delete issue (GitHub API doesn't support delete, so close and label):</action>
<action>
# GitHub doesn't allow issue deletion via API
# Best we can do: close the issue, add labels "migrated:rolled-back" and "do-not-use", and leave an explanatory comment
mcp__github__issue_write({
method: "update",
issue_number: {{issue_number}},
state: "closed",
labels: ["migrated:rolled-back", "do-not-use"],
state_reason: "not_planned"
})
# Add comment explaining
mcp__github__add_issue_comment({
issue_number: {{issue_number}},
body: "Issue closed - migration was rolled back. Do not use."
})
</action>
<output> ✅ Rolled back: Issue #{{issue_number}}</output>
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ ROLLBACK COMPLETE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
**Issues Rolled Back:** {{manifest.created_issues.length}}
Note: GitHub API doesn't support issue deletion.
Issues were closed with label "migrated:rolled-back" instead.
To fully delete (manual):
1. Go to repository settings
2. Issues → Delete closed issues
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
</step>
<step n="5" goal="Generate comprehensive migration report">
<action>Calculate final statistics:</action>
<action>
final_stats = {
total_stories: {{total_stories}},
migrated_successfully: {{issues_created.length + issues_updated.length}},
failed: {{issues_failed.length}},
success_rate: ({{migrated_successfully}} / {{total_stories}}) * 100,
duration: {{end_time - start_time}},
avg_time_per_story: {{duration / total_stories}}
}
</action>
<check if="create_migration_report == true">
<action>Write comprehensive report to {{report_path}}</action>
<action>Report structure:</action>
<action>
# GitHub Migration Report
**Date:** {{timestamp}}
**Repository:** {{github_owner}}/{{github_repo}}
**Mode:** {{mode}}
## Executive Summary
- **Total Stories:** {{total_stories}}
- **✅ Migrated:** {{migrated_successfully}} ({{success_rate}}%)
- **❌ Failed:** {{failed}}
- **Duration:** {{duration}}
- **Avg per story:** {{avg_time_per_story}}
## Created Issues
{{#each issues_created}}
- Story {{story_key}} → Issue #{{issue_number}}
URL: <https://github.com/{{github_owner}}/{{github_repo}}/issues/{{issue_number}}>
{{/each}}
## Updated Issues
{{#each issues_updated}}
- Story {{story_key}} → Issue #{{issue_number}} (updated)
{{/each}}
## Failed Migrations
{{#if issues_failed.length > 0}}
{{#each issues_failed}}
- Story {{story_key}}: {{error}}
Attempts: {{attempts}}
{{/each}}
**Recovery Steps:**
1. Fix underlying issues (check error messages)
2. Re-run migration (will skip already-migrated stories)
{{else}}
None - all stories migrated successfully!
{{/if}}
## Rollback Information
**Rollback Manifest:** {{rollback_manifest_path}}
To rollback this migration:
```bash
/migrate-to-github mode=rollback
```
## Next Steps
1. **Verify migration:** /migrate-to-github mode=verify
2. **Test story checkout:** /checkout-story story_key=2-5-auth
3. **Enable GitHub sync:** Update workflow.yaml with github_sync_enabled=true
4. **Product Owner setup:** Share GitHub Issues URL with PO team
## Migration Details
**API Calls Made:** ~{{total_api_calls}}
**Rate Limit Used:** {{api_calls_used}}/5000
**Errors Encountered:** {{error_count}}
**Retries Performed:** {{retry_count}}
---
_Generated by BMAD migrate-to-github workflow_
</action>
<output>📄 Migration report: {{report_path}}</output>
</check>
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ MIGRATION WORKFLOW COMPLETE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
**Mode:** {{mode}}
**Success Rate:** {{success_rate}}%
{{#if mode == 'execute'}}
**✅ {{migrated_successfully}} stories now in GitHub Issues**
View in GitHub:
<https://github.com/{{github_owner}}/{{github_repo}}/issues?q=is:issue+label:type:story>
**Next Steps:**
1. Verify migration: /migrate-to-github mode=verify
2. Test workflows with GitHub sync enabled
3. Share Issues URL with Product Owner team
{{#if issues_failed.length > 0}}
⚠️ {{issues_failed.length}} stories failed - re-run to retry
{{/if}}
{{/if}}
{{#if mode == 'dry-run'}}
**This was a preview. No issues were created.**
To execute: /migrate-to-github mode=execute
{{/if}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
</step>
</workflow>

View File

@@ -1,188 +0,0 @@
# Multi-Agent Code Review
**Purpose:** Perform unbiased code review using multiple specialized AI agents in FRESH CONTEXT, with agent count based on story complexity.
## Overview
**Key Principle: FRESH CONTEXT**
- Review happens in NEW session (not the agent that wrote the code)
- Prevents bias from implementation decisions
- Provides truly independent perspective
**Variable Agent Count by Complexity:**
- **MICRO** (2 agents): Security + Code Quality - Quick sanity check
- **STANDARD** (4 agents): + Architecture + Testing - Balanced review
- **COMPLEX** (6 agents): + Performance + Domain Expert - Comprehensive analysis
**Available Specialized Agents:**
- **Security Agent**: Identifies vulnerabilities and security risks
- **Code Quality Agent**: Reviews style, maintainability, and best practices
- **Architecture Agent**: Reviews system design, patterns, and structure
- **Testing Agent**: Evaluates test coverage and quality
- **Performance Agent**: Analyzes efficiency and optimization opportunities
- **Domain Expert**: Validates business logic and domain constraints
## Workflow
### Step 1: Determine Agent Count
Based on {complexity_level}:
```
If complexity_level == "micro":
agent_count = 2
agents = ["security", "code_quality"]
Display: 🔍 MICRO Review (2 agents: Security + Code Quality)
Else if complexity_level == "standard":
agent_count = 4
agents = ["security", "code_quality", "architecture", "testing"]
Display: 📋 STANDARD Review (4 agents: Multi-perspective)
Else if complexity_level == "complex":
agent_count = 6
agents = ["security", "code_quality", "architecture", "testing", "performance", "domain_expert"]
Display: 🔬 COMPLEX Review (6 agents: Comprehensive analysis)
```
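The tier selection above is a straightforward lookup. A minimal sketch:

```python
AGENT_TIERS = {
    "micro": ["security", "code_quality"],
    "standard": ["security", "code_quality", "architecture", "testing"],
    "complex": ["security", "code_quality", "architecture", "testing",
                "performance", "domain_expert"],
}

def select_agents(complexity_level: str) -> list:
    """Map a story complexity level to its review-agent roster."""
    try:
        return AGENT_TIERS[complexity_level]
    except KeyError:
        raise ValueError(f"unknown complexity level: {complexity_level}")
```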
### Step 2: Load Story Context
```bash
# Read story file
story_file="{story_file}"
test -f "$story_file" || { echo "❌ Story file not found: $story_file"; exit 1; }
```
Read the story file to understand:
- What was supposed to be implemented
- Acceptance criteria
- Tasks and subtasks
- File list
### Step 3: Invoke Multi-Agent Review Skill (Fresh Context + Smart Agent Selection)
**CRITICAL:** This review MUST happen in a FRESH CONTEXT (new session, different agent).
**Smart Agent Selection:**
- Skill analyzes changed files and selects MOST RELEVANT agents
- Touching payments code? → Add financial-security agent
- Touching auth code? → Add auth-security agent
- Touching file uploads? → Add file-security agent
- Touching performance-critical code? → Add performance agent
- Agent count determined by complexity, but agents chosen by code analysis
```xml
<invoke-skill skill="multi-agent-review">
<parameter name="story_id">{story_id}</parameter>
<parameter name="base_branch">{base_branch}</parameter>
<parameter name="max_agents">{agent_count}</parameter>
<parameter name="agent_selection">smart</parameter>
<parameter name="fresh_context">true</parameter>
</invoke-skill>
```
The skill will:
1. Create fresh context (unbiased review session)
2. Analyze changed files in the story
3. Detect code categories (auth, payments, file handling, etc.)
4. Select {agent_count} MOST RELEVANT specialized agents
5. Run parallel reviews from selected agents
6. Each agent reviews from their expertise perspective
7. Aggregate findings with severity ratings
8. Return comprehensive review report
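Steps 2-4 of the skill (analyze changed files, detect categories, pick the most relevant agents) can be sketched as follows. The keyword-to-agent map is an illustrative assumption based on the examples above:

```python
CATEGORY_AGENTS = {
    "auth": "auth-security",
    "payment": "financial-security",
    "upload": "file-security",
}

def smart_select(changed_files, base_agents, max_agents):
    """Pick specialists from changed file paths, then fill with base agents."""
    selected = []
    for path in changed_files:
        lower = path.lower()
        for keyword, agent in CATEGORY_AGENTS.items():
            if keyword in lower and agent not in selected:
                selected.append(agent)
    for agent in base_agents:
        if len(selected) >= max_agents:
            break
        if agent not in selected:
            selected.append(agent)
    return selected[:max_agents]
```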
### Step 4: Save Review Report
```bash
# The skill returns a review report
# Save it to: {review_report}
```
Display summary:
```
🤖 MULTI-AGENT CODE REVIEW COMPLETE
Agents Used: {agent_count}
{selected_agent_list}
Findings:
- 🔴 CRITICAL: {critical_count}
- 🟠 HIGH: {high_count}
- 🟡 MEDIUM: {medium_count}
- 🔵 LOW: {low_count}
- INFO: {info_count}
Report saved to: {review_report}
```
### Step 5: Present Findings
For each finding, display:
```
[{severity}] {title}
Agent: {agent_name}
Location: {file}:{line}
{description}
Recommendation:
{recommendation}
---
```
### Step 6: Next Steps
Suggest actions based on findings:
```
📋 RECOMMENDED NEXT STEPS:
If CRITICAL findings exist:
⚠️ MUST FIX before proceeding
- Address all critical security/correctness issues
- Re-run review after fixes
If only HIGH/MEDIUM findings:
✅ Story may proceed
- Consider addressing high-priority items
- Create follow-up tasks for medium items
- Document LOW items as tech debt
If only LOW/INFO findings:
✅ Code quality looks good
- Optional: Address style/optimization suggestions
- Proceed to completion
```
## Integration with Super-Dev-Pipeline
This workflow is designed to be called from super-dev-pipeline step 7 (code review) when the story complexity is COMPLEX or when user explicitly requests multi-agent review.
**When to Use:**
- Complex stories (≥16 tasks or high-risk keywords)
- Stories involving security-sensitive code
- Stories with significant architectural changes
- When single-agent review has been inconclusive
- User explicitly requests comprehensive review
**When NOT to Use:**
- Micro stories (≤3 tasks)
- Standard stories with simple changes
- Stories that passed adversarial review cleanly
## Output Files
- `{review_report}`: Full review findings in markdown
- Integrated into story completion summary
- Referenced in audit trail
## Error Handling
If multi-agent-review skill fails:
- Fall back to adversarial code review
- Log the failure reason
- Continue pipeline with warning


@@ -1,549 +0,0 @@
<workflow>
<critical>The workflow execution engine is governed by: {project-root}/_bmad/core/tasks/workflow.xml</critical>
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
<critical>Communicate all responses in {communication_language}</critical>
<critical>📝 PUSH-ALL - Stage, commit, and push changes with comprehensive safety validation</critical>
<!-- TARGETED vs ALL FILES MODE -->
<critical>⚡ PARALLEL AGENT MODE: When {{target_files}} is provided:
- ONLY stage and commit the specified files
- Do NOT use `git add .` or `git add -A`
- Use `git add [specific files]` instead
- This prevents committing work from other parallel agents
</critical>
<critical>📋 ALL FILES MODE: When {{target_files}} is empty:
- Stage ALL changes with `git add .`
- Original behavior for single-agent execution
</critical>
<step n="1" goal="Analyze repository changes">
<output>🔄 **Analyzing Repository Changes**
Scanning for changes to commit and push...
</output>
<!-- ANALYZE CHANGES PHASE -->
<action>Run git commands in parallel:</action>
<action>- git status - Show modified/added/deleted/untracked files</action>
<action>- git diff --stat - Show change statistics</action>
<action>- git log -1 --oneline - Show recent commit for message style</action>
<action>- git branch --show-current - Confirm current branch</action>
<action>Parse git status output to identify:
- Modified files
- Added files
- Deleted files
- Untracked files
- Total insertion/deletion counts
</action>
<check if="no changes detected">
<output>✅ **No Changes to Commit**
Working directory is clean.
Nothing to push.
</output>
<action>HALT - No work to do</action>
</check>
</step>
<step n="2" goal="Safety validation">
<critical>🔒 SAFETY CHECKS - Validate changes before committing</critical>
<action>Scan all changed files for dangerous patterns:</action>
**Secret Detection:**
<action>Check for files matching secret patterns:
- .env*, *.key, *.pem, credentials.json, secrets.yaml
- id_rsa, *.p12, *.pfx, *.cer
- Any file containing: _API_KEY=, _SECRET=, _TOKEN= with real values (not placeholders)
</action>
<action>Validate API keys are placeholders only:</action>
<action>✅ Acceptable placeholders:
- API_KEY=your-api-key-here
- SECRET=placeholder
- TOKEN=xxx
- API_KEY=${{YOUR_KEY}}
- SECRET_KEY=&lt;your-key&gt;
</action>
<action>❌ BLOCK real keys:
- OPENAI_API_KEY=sk-proj-xxxxx (real OpenAI key)
- AWS_SECRET_KEY=AKIA... (real AWS key)
- STRIPE_API_KEY=sk_live_... (real Stripe key)
- Any key with recognizable provider prefix + actual value
</action>
**File Size Check:**
<action>Check for files >10MB without Git LFS configuration</action>
**Build Artifacts:**
<action>Check for unwanted directories/files that should be gitignored:
- node_modules/, dist/, build/, .next/, __pycache__/, *.pyc, .venv/
- .DS_Store, Thumbs.db, *.swp, *.tmp, *.log (in root)
- *.class, target/, bin/ (Java)
- vendor/ (unless dependency managed)
</action>
**Git State:**
<action>Verify:
- .gitignore exists and properly configured
- No unresolved merge conflicts
- Git repository initialized
</action>
<!-- SAFETY DECISION -->
<check if="secrets detected OR real API keys found">
<output>🚨 **DANGER: Secrets Detected!**
The following sensitive data was found:
{{list_detected_secrets_with_files}}
❌ **BLOCKED:** Cannot commit secrets to version control.
**Actions Required:**
1. Move secrets to .env file (add to .gitignore)
2. Use environment variables: process.env.API_KEY
3. Remove secrets from tracked files: git rm --cached [file]
4. Update code to load from environment
**Example:**
```
// Before (UNSAFE):
const apiKey = 'sk-proj-xxxxx';
// After (SAFE):
const apiKey = process.env.OPENAI_API_KEY;
```
Halting workflow for safety.
</output>
<action>HALT - Cannot proceed with secrets</action>
</check>
<check if="large files detected without Git LFS">
<output>⚠️ **Warning: Large Files Detected**
Files >10MB found:
{{list_large_files_with_sizes}}
**Recommendation:** Set up Git LFS
```
git lfs install
git lfs track "*.{file_extension}"
git add .gitattributes
```
</output>
<ask>Proceed with large files anyway? [y/n]:</ask>
<check if="user says n">
<output>Halting. Please configure Git LFS first.</output>
<action>HALT</action>
</check>
</check>
<check if="build artifacts detected">
<output>⚠️ **Warning: Build Artifacts Detected**
These files should be in .gitignore:
{{list_build_artifacts}}
**Update .gitignore:**
```
node_modules/
dist/
build/
.DS_Store
```
</output>
<ask>Commit build artifacts anyway? [y/n]:</ask>
<check if="user says n">
<output>Halting. Update .gitignore and git rm --cached [files]</output>
<action>HALT</action>
</check>
</check>
<check if="current branch is main or master">
<output>⚠️ **Warning: Pushing to {{branch_name}}**
You're committing directly to {{branch_name}}.
**Recommendation:** Use feature branch workflow:
1. git checkout -b feature/my-changes
2. Make and commit changes
3. git push -u origin feature/my-changes
4. Create PR for review
</output>
<ask>Push directly to {{branch_name}}? [y/n]:</ask>
<check if="user says n">
<output>Halting. Create a feature branch instead.</output>
<action>HALT</action>
</check>
</check>
<output>✅ **Safety Checks Passed**
All validations completed successfully.
</output>
</step>
<step n="3" goal="Present summary and get confirmation">
<output>
📊 **Changes Summary**
**Files:**
- Modified: {{modified_count}}
- Added: {{added_count}}
- Deleted: {{deleted_count}}
- Untracked: {{untracked_count}}
**Total:** {{total_file_count}} files
**Changes:**
- Insertions: +{{insertion_count}} lines
- Deletions: -{{deletion_count}} lines
**Safety:**
{{if_all_safe}}
✅ No secrets detected
✅ No large files (or approved)
✅ No build artifacts (or approved)
✅ .gitignore configured
{{endif}}
{{if_warnings_approved}}
⚠️ Warnings acknowledged and approved
{{endif}}
**Git:**
- Branch: {{current_branch}}
- Remote: origin/{{current_branch}}
- Last commit: {{last_commit_message}}
---
**I will execute:**
1. `git add .` - Stage all changes
2. `git commit -m "[generated message]"` - Create commit
3. `git push` - Push to remote
</output>
<ask>**Proceed with commit and push?**
Options:
[yes] - Proceed with commit and push
[no] - Cancel (leave changes unstaged)
[review] - Show detailed diff first
</ask>
<check if="user says review">
<action>Execute: git diff --stat</action>
<action>Execute: git diff | head -100 (show first 100 lines of changes)</action>
<output>
{{diff_output}}
(Use 'git diff' to see full changes)
</output>
<ask>After reviewing, proceed with commit and push? [yes/no]:</ask>
</check>
<check if="user says no">
<output>❌ **Push-All Cancelled**
Changes remain unstaged. No git operations performed.
You can:
- Review changes: git status, git diff
- Commit manually: git add [files] && git commit
- Discard changes: git checkout -- [files]
</output>
<action>HALT - User cancelled</action>
</check>
</step>
<step n="4" goal="Stage changes">
<!-- TARGETED MODE: Only stage specified files -->
<check if="{{target_files}} is provided and not empty">
<output>📎 **Targeted Commit Mode** (parallel agent safe)
Staging only files from this story/task:
{{target_files}}
</output>
<action>Execute: git add {{target_files}}</action>
<action>Execute: git status</action>
<output>✅ **Targeted Files Staged**
Ready for commit ({{target_file_count}} files):
{{list_staged_files}}
Note: Other uncommitted changes in repo are NOT included.
</output>
</check>
<!-- ALL FILES MODE: Original behavior -->
<check if="{{target_files}} is empty or not provided">
<action>Execute: git add .</action>
<action>Execute: git status</action>
<output>✅ **All Changes Staged**
Ready for commit:
{{list_staged_files}}
</output>
</check>
</step>
<step n="5" goal="Generate commit message">
<critical>📝 COMMIT MESSAGE - Generate conventional commit format</critical>
<action>Analyze changes to determine commit type:</action>
<action>- feat: New features (new files with functionality)</action>
<action>- fix: Bug fixes (fixing broken functionality)</action>
<action>- docs: Documentation only (*.md, comments)</action>
<action>- style: Formatting, missing semicolons (no code change)</action>
<action>- refactor: Code restructuring (no feature/fix)</action>
<action>- test: Adding/updating tests</action>
<action>- chore: Tooling, configs, dependencies</action>
<action>- perf: Performance improvements</action>
<action>Determine scope (optional):
- Component/feature name if changes focused on one area
- Omit if changes span multiple areas
</action>
<action>Generate message summary (max 72 chars):
- Use imperative mood: "add feature" not "added feature"
- Lowercase except proper nouns
- No period at end
</action>
<action>Generate message body (if changes >5 files):
- List key changes as bullet points
- Max 3-5 bullets
- Keep concise
</action>
<action>Reference recent commits for style consistency</action>
<output>📝 **Generated Commit Message:**
```
{{generated_commit_message}}
```
Based on:
- {{commit_type}} commit type
- {{file_count}} files changed
- {{change_summary}}
</output>
<ask>**Use this commit message?**
Options:
[yes] - Use generated message
[edit] - Let me write custom message
[cancel] - Cancel push-all (leave staged)
</ask>
<check if="user says edit">
<ask>Enter your commit message (use conventional commit format if possible):</ask>
<action>Store user input as {{commit_message}}</action>
<output>✅ Using custom commit message</output>
</check>
<check if="user says cancel">
<output>❌ Push-all cancelled
Changes remain staged.
Run: git reset to unstage
</output>
<action>HALT</action>
</check>
<check if="user says yes">
<action>Use {{generated_commit_message}} as {{commit_message}}</action>
</check>
</step>
<step n="6" goal="Commit changes">
<action>Execute git commit with heredoc for multi-line message safety:
git commit -m "$(cat &lt;&lt;'EOF'
{{commit_message}}
EOF
)"
</action>
<check if="commit fails">
<output>❌ **Commit Failed**
Error: {{commit_error}}
**Common Causes:**
- Pre-commit hooks failing (linting, tests)
- Missing git config (user.name, user.email)
- Locked files or permissions
- Empty commit (no actual changes)
**Fix and try again:**
- Check pre-commit output
- Set git config: git config user.name "Your Name"
- Verify file permissions
</output>
<action>HALT - Fix errors before proceeding</action>
</check>
<action>Parse commit output for hash</action>
<output>✅ **Commit Created**
Commit: {{commit_hash}}
Message: {{commit_subject}}
</output>
</step>
<step n="7" goal="Push to remote">
<output>🚀 **Pushing to Remote**
Pushing {{current_branch}} to origin...
</output>
<action>Execute: git push</action>
<!-- HANDLE COMMON PUSH FAILURES -->
<check if="push fails with rejected (non-fast-forward)">
<output>⚠️ **Push Rejected - Remote Has New Commits**
Remote branch has commits you don't have locally.
Attempting to rebase and retry...
</output>
<action>Execute: git pull --rebase</action>
<check if="rebase has conflicts">
<output>❌ **Merge Conflicts During Rebase**
Conflicts found:
{{list_conflicted_files}}
**Manual resolution required:**
1. Resolve conflicts in listed files
2. git add [resolved files]
3. git rebase --continue
4. git push
Halting for manual conflict resolution.
</output>
<action>HALT - Resolve conflicts manually</action>
</check>
<action>Execute: git push</action>
</check>
<check if="push fails with no upstream branch">
<output>ℹ️ **No Upstream Branch Set**
First push to origin for this branch.
Setting upstream...
</output>
<action>Execute: git push -u origin {{current_branch}}</action>
</check>
<check if="push fails with protected branch">
<output>❌ **Push to Protected Branch Blocked**
Branch {{current_branch}} is protected on remote.
**Use PR workflow instead:**
1. Ensure you're on a feature branch
2. Push feature branch: git push -u origin feature-branch
3. Create PR for review
Changes are committed locally but not pushed.
</output>
<action>HALT - Use PR workflow for protected branches</action>
</check>
<check if="push fails with authentication">
<output>❌ **Authentication Failed**
Git push requires authentication.
**Fix authentication:**
- GitHub: Set up SSH key or Personal Access Token
- Check: git remote -v (verify remote URL)
- Docs: https://docs.github.com/authentication
Changes are committed locally but not pushed.
</output>
<action>HALT - Fix authentication</action>
</check>
<check if="push fails with other error">
<output>❌ **Push Failed**
Error: {{push_error}}
Your changes are committed locally but not pushed to remote.
**Troubleshoot:**
- Check network connection
- Verify remote exists: git remote -v
- Check permissions on remote repository
- Try manual push: git push
Halting for manual resolution.
</output>
<action>HALT - Manual push required</action>
</check>
<!-- SUCCESS -->
<check if="push succeeds">
<output>✅ **Successfully Pushed to Remote!**
**Commit:** {{commit_hash}} - {{commit_subject}}
**Branch:** {{current_branch}} → origin/{{current_branch}}
**Files changed:** {{file_count}} (+{{insertions}}, -{{deletions}})
---
Your changes are now on the remote repository.
</output>
<action>Execute: git log -1 --oneline --decorate</action>
<output>
**Latest commit:** {{git_log_output}}
</output>
</check>
</step>
<step n="8" goal="Completion summary">
<output>🎉 **Push-All Complete, {user_name}!**
**Summary:**
- ✅ {{file_count}} files committed
- ✅ Pushed to origin/{{current_branch}}
- ✅ All safety checks passed
**Commit Details:**
- Hash: {{commit_hash}}
- Message: {{commit_subject}}
- Changes: +{{insertions}}, -{{deletions}}
**Next Steps:**
- Verify on remote (GitHub/GitLab/etc)
- Create PR if working on feature branch
- Notify team if appropriate
**Git State:**
- Working directory: clean
- Branch: {{current_branch}}
- In sync with remote
</output>
</step>
</workflow>


@@ -0,0 +1,366 @@
# Push All v3.0 - Safe Git Staging, Commit, and Push
<purpose>
Safely stage, commit, and push changes with comprehensive validation.
Detects secrets, large files, build artifacts. Handles push failures gracefully.
Supports targeted mode for specific files (parallel agent coordination).
</purpose>
<philosophy>
**Safe by Default, No Surprises**
- Validate BEFORE committing (secrets, size, artifacts)
- Show exactly what will be committed
- Handle push failures with recovery options
- Never force push without explicit confirmation
</philosophy>
<config>
name: push-all
version: 3.0.0
modes:
full: "Stage all changes (default)"
targeted: "Only stage specified files"
defaults:
max_file_size_kb: 500
check_secrets: true
check_build_artifacts: true
auto_push: false
allow_force_push: false
secret_patterns:
- "AKIA[0-9A-Z]{16}" # AWS Access Key
- "sk-[a-zA-Z0-9]{48}" # OpenAI Key
- "ghp_[a-zA-Z0-9]{36}" # GitHub Personal Token
- "xox[baprs]-[a-zA-Z0-9-]+" # Slack Token
- "-----BEGIN.*PRIVATE KEY" # Private Keys
- "password\\s*=\\s*['\"][^'\"]{8,}" # Hardcoded passwords
build_artifacts:
- "node_modules/"
- "dist/"
- "build/"
- ".next/"
- "*.min.js"
- "*.bundle.js"
</config>
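
As a rough illustration of how the `secret_patterns` above behave, the sketch below applies a subset of them with `grep -E` (the helper and file names are hypothetical; the workflow itself drives the scan through its own Grep step):

```shell
# Hypothetical helper applying a subset of the configured secret patterns.
scan_for_secrets() {
  grep -nE 'AKIA[0-9A-Z]{16}|sk-[a-zA-Z0-9]{48}|ghp_[a-zA-Z0-9]{36}|-----BEGIN.*PRIVATE KEY' "$1"
}

# A fake GitHub token (ghp_ + 36 alphanumerics) trips the scan:
echo 'token = "ghp_abcdefghijklmnopqrstuvwxyz0123456789"' > /tmp/leaky_example.txt
if scan_for_secrets /tmp/leaky_example.txt >/dev/null; then
  echo "BLOCK: potential secret detected"
fi
```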
<execution_context>
@patterns/hospital-grade.md
</execution_context>
<process>
<step name="check_git_state" priority="first">
**Verify git repository state**
```bash
# Check we're in a git repo
git rev-parse --is-inside-work-tree || { echo "❌ Not a git repository"; exit 1; }
# Get current branch
git branch --show-current
# Check for uncommitted changes
git status --porcelain
```
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📦 PUSH-ALL: {{mode}} mode
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Branch: {{branch}}
Mode: {{mode}}
{{#if targeted}}Files: {{file_list}}{{/if}}
```
**If no changes:**
```
✅ Working directory clean - nothing to commit
```
Exit successfully.
</step>
<step name="scan_changes">
**Identify files to be staged**
**Full mode:**
```bash
git status --porcelain | cut -c4-
```
**Targeted mode:**
Only include files specified in `target_files` parameter.
**Categorize changes:**
- New files (A)
- Modified files (M)
- Deleted files (D)
- Renamed files (R)
</step>
<step name="secret_scan" if="check_secrets">
**Scan for secrets in staged content**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔍 SECRET SCAN
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
For each file to be staged:
```bash
# Check for secret patterns
Grep: "{{pattern}}" {{file}}
```
**If secrets found:**
```
❌ POTENTIAL SECRETS DETECTED
{{#each secrets}}
File: {{file}}
Line {{line}}: {{preview}} (pattern: {{pattern_name}})
{{/each}}
⚠️ BLOCKING COMMIT
Remove secrets before proceeding.
Options:
[I] Ignore (I know what I'm doing)
[E] Exclude these files
[H] Halt
```
**If [I] selected:** Require explicit confirmation text.
</step>
<step name="size_scan">
**Check for oversized files**
```bash
# Find files larger than max_file_size_kb
find . -type f -size +{{max_file_size_kb}}k -not -path "./.git/*"
```
**If large files found:**
```
⚠️ LARGE FILES DETECTED
{{#each large_files}}
- {{file}} ({{size_kb}}KB)
{{/each}}
Options:
[I] Include anyway
[E] Exclude large files
[H] Halt
```
</step>
<step name="artifact_scan" if="check_build_artifacts">
**Check for build artifacts**
```bash
# Check if any staged files match artifact patterns
git status --porcelain | grep -E "{{artifact_pattern}}"
```
**If artifacts found:**
```
⚠️ BUILD ARTIFACTS DETECTED
{{#each artifacts}}
- {{file}}
{{/each}}
These should typically be in .gitignore.
Options:
[E] Exclude artifacts (recommended)
[I] Include anyway
[H] Halt
```
</step>
<step name="preview_commit">
**Show what will be committed**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📋 COMMIT PREVIEW
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Files to commit: {{count}}
Added ({{added_count}}):
{{#each added}}
+ {{file}}
{{/each}}
Modified ({{modified_count}}):
{{#each modified}}
M {{file}}
{{/each}}
Deleted ({{deleted_count}}):
{{#each deleted}}
- {{file}}
{{/each}}
{{#if excluded}}
Excluded: {{excluded_count}} files
{{/if}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
</step>
<step name="get_commit_message">
**Generate or request commit message**
**If commit_message provided:** Use it.
**Otherwise, generate from changes:**
```
Analyzing changes to generate commit message...
Changes detected:
- {{summary_of_changes}}
Suggested message:
"{{generated_message}}"
[Y] Use this message
[E] Edit message
[C] Custom message
```
If user selects [C] or [E], prompt for message.
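
One way a suggestion could be derived is a simple heuristic over the changed file names (purely illustrative; the actual generator analyzes diff content, not just names):

```shell
# Hypothetical heuristic: guess a conventional-commit type from a file name.
suggest_type() {
  case "$1" in
    *test*|*spec*) echo "test" ;;
    *.md)          echo "docs" ;;
    *)             echo "feat" ;;
  esac
}

suggest_type "src/auth.test.ts"   # test
suggest_type "README.md"          # docs
suggest_type "src/app.ts"         # feat
```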
</step>
<step name="execute_commit">
**Stage and commit changes**
```bash
# Stage files (targeted or full)
{{#if targeted}}
git add {{#each target_files}}{{this}} {{/each}}
{{else}}
git add -A
{{/if}}
# Commit with message
git commit -m "{{commit_message}}"
```
**Verify commit:**
```bash
# Check commit was created
git log -1 --oneline
```
```
✅ Commit created: {{commit_hash}}
```
</step>
<step name="push_to_remote" if="auto_push OR user_confirms_push">
**Push to remote with error handling**
```bash
git push origin {{branch}}
```
**If push fails:**
**Case: Behind remote**
```
⚠️ Push rejected - branch is behind remote
Options:
[P] Pull and retry (git pull --rebase)
[F] Force push (DESTRUCTIVE - overwrites remote)
[H] Halt (commit preserved locally)
```
**Case: No upstream**
```
⚠️ No upstream branch
Setting upstream and pushing:
git push -u origin {{branch}}
```
**Case: Auth failure**
```
❌ Authentication failed
Check:
1. SSH key configured?
2. Token valid?
3. Repository access?
```
**Case: Protected branch**
```
❌ Cannot push to protected branch
Use pull request workflow instead:
gh pr create --title "{{commit_message}}"
```
</step>
<step name="final_summary">
**Display completion status**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ PUSH-ALL COMPLETE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Branch: {{branch}}
Commit: {{commit_hash}}
Files: {{file_count}}
{{#if pushed}}
Remote: ✅ Pushed to origin/{{branch}}
{{else}}
Remote: ⏸️ Not pushed (commit preserved locally)
{{/if}}
{{#if excluded_count > 0}}
Excluded: {{excluded_count}} files (secrets/artifacts/size)
{{/if}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
</step>
</process>
<examples>
```bash
# Stage all, commit, and push
/push-all commit_message="feat: add user authentication" auto_push=true
# Targeted mode - only specific files
/push-all mode=targeted target_files="src/auth.ts,src/auth.test.ts" commit_message="fix: auth bug"
# Dry run - see what would be committed
/push-all auto_push=false
```
</examples>
<failure_handling>
**Secrets detected:** BLOCK commit, require explicit override.
**Large files:** Warn, allow exclude or include.
**Build artifacts:** Warn, recommend exclude.
**Push rejected:** Offer pull/rebase, force push (with confirmation), or halt.
**Auth failure:** Report, suggest troubleshooting.
</failure_handling>
<success_criteria>
- [ ] Changes validated (secrets, size, artifacts)
- [ ] Files staged correctly
- [ ] Commit created with message
- [ ] Push successful (if requested)
- [ ] No unintended files included
</success_criteria>


@@ -9,7 +9,7 @@ communication_language: "{config_source}:communication_language"
# Workflow components
installed_path: "{project-root}/_bmad/bmm/workflows/4-implementation/push-all"
instructions: "{installed_path}/instructions.xml"
instructions: "{installed_path}/workflow.md"
# Target files to commit (for parallel agent execution)
# When empty/not provided: commits ALL changes (original behavior)


@@ -1,306 +0,0 @@
# Sprint Status Recovery - Instructions
**Workflow:** recover-sprint-status
**Purpose:** Fix sprint-status.yaml when tracking has drifted for days/weeks
---
## What This Workflow Does
Analyzes multiple sources to rebuild accurate sprint-status.yaml:
1. **Story File Quality** - Validates size (>=10KB), task lists, checkboxes
2. **Explicit Status: Fields** - Reads story Status: when present
3. **Git Commits** - Searches last 30 days for story references
4. **Autonomous Reports** - Checks .epic-*-completion-report.md files
5. **Task Completion Rate** - Analyzes checkbox completion in story files
**Infers Status Based On:**
- Explicit Status: field (highest priority)
- Git commits referencing story (strong signal)
- Autonomous completion reports (very high confidence)
- Task checkbox completion rate (90%+ = done)
- File quality (poor quality prevents "done" marking)
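
The priority order above can be sketched as a small decision function (a hypothetical stand-in for the recovery script's inference logic, with made-up parameter names):

```shell
# Evidence priority: explicit Status: > completion report > commits + checkboxes.
infer_status() {
  explicit="$1"; in_report="$2"; commits="$3"; task_pct="$4"
  if [ -n "$explicit" ]; then
    echo "$explicit"
  elif [ "$in_report" = "yes" ]; then
    echo "done"
  elif [ "$commits" -ge 3 ] && [ "$task_pct" -ge 90 ]; then
    echo "done"
  elif [ "$commits" -ge 1 ]; then
    echo "in-progress"
  else
    echo "backlog"
  fi
}

infer_status "" yes 0 0    # done (autonomous report)
infer_status "" no 4 95    # done (3+ commits, 90%+ tasks)
infer_status "" no 0 0     # backlog (no evidence)
```

The real script additionally applies its quality gates (file size, task count) before trusting an upgrade.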
---
## Step 1: Run Recovery Analysis
```bash
Execute: {recovery_script} --dry-run
```
**This will:**
- Analyze all story files (quality, tasks, status)
- Search git commits for completion evidence
- Check autonomous completion reports
- Infer status from all evidence
- Report recommendations with confidence levels
**No changes** made in dry-run mode - just analysis.
---
## Step 2: Review Recommendations
**Check the output for:**
### High Confidence Updates (Safe)
- Stories with explicit Status: fields
- Stories in autonomous completion reports
- Stories with 3+ git commits + 90%+ tasks complete
### Medium Confidence Updates (Verify)
- Stories with 1-2 git commits
- Stories with 50-90% tasks complete
- Stories with file size >=10KB
### Low Confidence Updates (Question)
- Stories with no Status: field, no commits
- Stories with file size <10KB
- Stories with <5 tasks total
---
## Step 3: Choose Recovery Mode
### Conservative Mode (Safest)
```bash
Execute: {recovery_script} --conservative
```
**Only updates:**
- High/very high confidence stories
- Explicit Status: fields honored
- Git commits with 3+ references
- Won't infer or guess
**Best for:** Quick fixes, first-time recovery, risk-averse
---
### Aggressive Mode (Thorough)
```bash
Execute: {recovery_script} --aggressive --dry-run # Preview first!
Execute: {recovery_script} --aggressive # Then apply
```
**Updates:**
- Medium+ confidence stories
- Infers from git commits (even 1 commit)
- Uses task completion rate
- Pre-fills brownfield checkboxes
**Best for:** Major drift (30+ days), comprehensive recovery
---
### Interactive Mode (Recommended)
```bash
Execute: {recovery_script}
```
**Process:**
1. Shows all recommendations
2. Groups by confidence level
3. Asks for confirmation before each batch
4. Allows selective application
**Best for:** First-time use, learning the tool
---
## Step 4: Validate Results
```bash
Execute: ./scripts/sync-sprint-status.sh --validate
```
**Should show:**
- "✓ sprint-status.yaml is up to date!" (success)
- OR discrepancy count (if issues remain)
---
## Step 5: Commit Changes
```bash
git add docs/sprint-artifacts/sprint-status.yaml
git add .sprint-status-backups/ # Include backup for audit trail
git commit -m "fix(tracking): Recover sprint-status.yaml - {MODE} recovery"
```
---
## Recovery Scenarios
### Scenario 1: Autonomous Epic Completed, Tracking Not Updated
**Symptoms:**
- Autonomous completion report exists
- Git commits show work done
- sprint-status.yaml shows "in-progress" or "backlog"
**Solution:**
```bash
{recovery_script} --aggressive
# Will find completion report, mark all stories done
```
---
### Scenario 2: Manual Work Over Past Week Not Tracked
**Symptoms:**
- Story Status: fields updated to "done"
- sprint-status.yaml not synced
- Git commits exist
**Solution:**
```bash
./scripts/sync-sprint-status.sh
# Standard sync (reads Status: fields)
```
---
### Scenario 3: Story Files Missing Status: Fields
**Symptoms:**
- 100+ stories with no Status: field
- Some completed, some not
- No autonomous reports
**Solution:**
```bash
{recovery_script} --aggressive --dry-run # Preview inference
# Review recommendations carefully
{recovery_script} --aggressive # Apply if satisfied
```
---
### Scenario 4: Complete Chaos (Mix of All Above)
**Symptoms:**
- Some stories have Status:, some don't
- Autonomous reports for some epics
- Manual work on others
- sprint-status.yaml very outdated
**Solution:**
```bash
# Step 1: Run recovery in dry-run
{recovery_script} --aggressive --dry-run
# Step 2: Review /tmp/recovery_results.json
# Step 3: Apply in conservative mode first (safest updates)
{recovery_script} --conservative
# Step 4: Manually review remaining stories
# Update Status: fields for known completed work
# Step 5: Run sync to catch manual updates
./scripts/sync-sprint-status.sh
# Step 6: Final validation
./scripts/sync-sprint-status.sh --validate
```
---
## Quality Gates
**Recovery script will DOWNGRADE status if:**
- Story file < 10KB (not properly detailed)
- Story file has < 5 tasks (incomplete story)
- No git commits found (no evidence of work)
- Explicit Status: contradicts other evidence
**Recovery script will UPGRADE status if:**
- Autonomous completion report lists story as done
- 3+ git commits + 90%+ tasks checked
- Explicit Status: field says "done"
---
## Post-Recovery Checklist
After running recovery:
- [ ] Run validation: `./scripts/sync-sprint-status.sh --validate`
- [ ] Review backup: Check `.sprint-status-backups/` for before state
- [ ] Check epic statuses: Verify epic-level status matches story completion
- [ ] Spot-check 5-10 stories: Confirm inferred status is accurate
- [ ] Commit changes: Add recovery to version control
- [ ] Document issues: Note why drift occurred, prevent recurrence
---
## Preventing Future Drift
**After recovery:**
1. **Use workflows properly**
- `/create-story` - Adds to sprint-status.yaml automatically
- `/dev-story` - Updates both Status: and sprint-status.yaml
- Autonomous workflows - Now update tracking
2. **Run sync regularly**
- Weekly: `pnpm sync:sprint-status:dry-run` (check health)
- After manual Status: updates: `pnpm sync:sprint-status`
3. **CI/CD validation** (coming soon)
- Blocks PRs with out-of-sync tracking
- Forces sync before merge
---
## Troubleshooting
### "Recovery script shows 0 updates"
**Possible causes:**
- sprint-status.yaml already accurate
- Story files all have proper Status: fields
- No git commits found (check date range)
**Action:** Run `--dry-run` to see analysis, check `/tmp/recovery_results.json`
---
### "Low confidence on stories I know are done"
**Possible causes:**
- Story file < 10KB (not properly detailed)
- No git commits (work done outside git)
- No explicit Status: field
**Action:** Manually add Status: field to story, then run standard sync
---
### "Recovery marks incomplete stories as done"
**Possible causes:**
- Git commits exist but work abandoned
- Autonomous report lists story but implementation failed
- Tasks pre-checked incorrectly (brownfield error)
**Action:** Use conservative mode, manually verify, fix story files
---
## Output Files
**Created during recovery:**
- `.sprint-status-backups/sprint-status-recovery-{timestamp}.yaml` - Backup
- `/tmp/recovery_results.json` - Detailed analysis
- Updated `sprint-status.yaml` - Recovered status
---
**Last Updated:** 2026-01-02
**Status:** Production Ready
**Works On:** ANY BMAD project with sprint-status.yaml tracking


@@ -1,273 +0,0 @@
# Revalidate Epic - Batch Story Revalidation with Semaphore Pattern
<critical>The workflow execution engine is governed by: {project-root}/_bmad/core/tasks/workflow.xml</critical>
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
<workflow>
<step n="1" goal="Load sprint status and find epic stories">
<action>Verify epic_number parameter provided</action>
<check if="epic_number not provided">
<output>❌ ERROR: epic_number parameter required
Usage:
/revalidate-epic epic_number=2
/revalidate-epic epic_number=2 fill_gaps=true
/revalidate-epic epic_number=2 fill_gaps=true max_concurrent=5
</output>
<action>HALT</action>
</check>
<action>Read {sprint_status} file</action>
<action>Parse development_status map</action>
<action>Filter stories starting with "{{epic_number}}-" (e.g., "2-1-", "2-2-", etc.)</action>
<action>Exclude epics (keys starting with "epic-") and retrospectives</action>
<action>Store as: epic_stories (list of story keys)</action>
<check if="epic_stories is empty">
<output>❌ No stories found for Epic {{epic_number}}
Check sprint-status.yaml to verify epic number is correct.
</output>
<action>HALT</action>
</check>
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔍 EPIC {{epic_number}} REVALIDATION
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
**Stories Found:** {{epic_stories.length}}
**Mode:** {{#if fill_gaps}}Verify & Fill Gaps{{else}}Verify Only{{/if}}
**Max Concurrent:** {{max_concurrent}} agents
**Pattern:** Semaphore (continuous worker pool)
**Stories to Revalidate:**
{{#each epic_stories}}
{{@index + 1}}. {{this}}
{{/each}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
<ask>Proceed with revalidation? (yes/no):</ask>
<check if="response != 'yes'">
<output>❌ Revalidation cancelled</output>
<action>Exit workflow</action>
</check>
</step>
<step n="2" goal="Initialize semaphore pattern for parallel revalidation">
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🚀 Starting Parallel Revalidation (Semaphore Pattern)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
<action>Initialize worker pool state:</action>
<action>
- story_queue = epic_stories
- active_workers = {}
- completed_stories = []
- failed_stories = []
- verification_results = {}
- next_story_index = 0
- max_workers = {{max_concurrent}}
</action>
<action>Fill initial worker slots:</action>
<iterate>While next_story_index < min(max_workers, story_queue.length):</iterate>
<action>
story_key = story_queue[next_story_index]
story_file = {sprint_artifacts}/{{story_key}}.md # Try multiple naming patterns if needed
worker_id = next_story_index + 1
Spawn Task agent:
- subagent_type: "general-purpose"
- description: "Revalidate story {{story_key}}"
- prompt: "Execute revalidate-story workflow for {{story_key}}.
CRITICAL INSTRUCTIONS:
1. Load workflow: _bmad/bmm/workflows/4-implementation/revalidate-story/workflow.yaml
2. Parameters: story_file={{story_file}}, fill_gaps={{fill_gaps}}
3. Clear all checkboxes
4. Verify each AC/Task/DoD against codebase
5. Re-check verified items
6. Report gaps
{{#if fill_gaps}}7. Fill gaps and commit{{/if}}
8. Return verification summary"
- run_in_background: true
Store in active_workers[worker_id]:
story_key: {{story_key}}
task_id: {{returned_task_id}}
started_at: {{timestamp}}
</action>
<action>Increment next_story_index</action>
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ {{active_workers.size}} workers active
📋 {{story_queue.length - next_story_index}} stories queued
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
</step>
<step n="3" goal="Maintain worker pool until all stories revalidated">
<critical>SEMAPHORE PATTERN: Keep {{max_workers}} agents running continuously</critical>
<iterate>While active_workers.size > 0 OR next_story_index < story_queue.length:</iterate>
<action>Poll for completed workers (non-blocking):</action>
<iterate>For each worker_id in active_workers:</iterate>
<action>Check worker status using TaskOutput(task_id, block=false)</action>
<check if="worker completed successfully">
<action>Get verification results from worker output</action>
<action>Parse: verified_pct, gaps_found, gaps_filled</action>
<action>Store in verification_results[story_key]</action>
<action>Add to completed_stories</action>
<action>Remove from active_workers</action>
<output>✅ Worker {{worker_id}}: {{story_key}} → {{verified_pct}}% verified{{#if gaps_filled > 0}}, {{gaps_filled}} gaps filled{{/if}}</output>
<check if="next_story_index < story_queue.length">
<action>Refill slot with next story (same pattern as batch-super-dev)</action>
<output>🔄 Worker {{worker_id}} refilled: {{next_story_key}}</output>
</check>
</check>
<check if="worker failed">
<action>Add to failed_stories with error</action>
<action>Remove from active_workers</action>
<output>❌ Worker {{worker_id}}: {{story_key}} failed</output>
<check if="continue_on_failure AND next_story_index < story_queue.length">
<action>Refill slot despite failure</action>
</check>
</check>
<action>Display live progress every 30 seconds:</action>
<output>
📊 Live Progress: {{completed_stories.length}} completed, {{active_workers.size}} active, {{story_queue.length - next_story_index}} queued
</output>
<action>Sleep 5 seconds before next poll</action>
</step>
<step n="4" goal="Generate epic-level summary">
<action>Aggregate verification results across all stories:</action>
<action>
epic_total_items = sum of all items across stories
epic_verified = sum of verified items
epic_partial = sum of partial items
epic_missing = sum of missing items
epic_gaps_filled = sum of gaps filled
epic_verified_pct = (epic_verified / epic_total_items) × 100
</action>
<action>Group stories by verification percentage:</action>
<action>
- complete_stories (≥95% verified)
- mostly_complete_stories (80-94% verified)
- partial_stories (50-79% verified)
- incomplete_stories (<50% verified)
</action>
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📊 EPIC {{epic_number}} REVALIDATION SUMMARY
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
**Total Stories:** {{epic_stories.length}}
**Completed:** {{completed_stories.length}}
**Failed:** {{failed_stories.length}}
**Epic-Wide Verification:**
- ✅ Verified: {{epic_verified}}/{{epic_total_items}} ({{epic_verified_pct}}%)
- 🔶 Partial: {{epic_partial}}/{{epic_total_items}}
- ❌ Missing: {{epic_missing}}/{{epic_total_items}}
{{#if fill_gaps}}- 🔧 Gaps Filled: {{epic_gaps_filled}}{{/if}}
**Story Health:**
- ✅ Complete (≥95%): {{complete_stories.length}} stories
- 🔶 Mostly Complete (80-94%): {{mostly_complete_stories.length}} stories
- ⚠️ Partial (50-79%): {{partial_stories.length}} stories
- ❌ Incomplete (<50%): {{incomplete_stories.length}} stories
---
**Complete Stories (≥95% verified):**
{{#each complete_stories}}
- {{story_key}}: {{verified_pct}}% verified
{{/each}}
{{#if mostly_complete_stories.length > 0}}
**Mostly Complete Stories (80-94%):**
{{#each mostly_complete_stories}}
- {{story_key}}: {{verified_pct}}% verified ({{gaps_count}} gaps{{#if gaps_filled > 0}}, {{gaps_filled}} filled{{/if}})
{{/each}}
{{/if}}
{{#if partial_stories.length > 0}}
**⚠️ Partial Stories (50-79%):**
{{#each partial_stories}}
- {{story_key}}: {{verified_pct}}% verified ({{gaps_count}} gaps{{#if gaps_filled > 0}}, {{gaps_filled}} filled{{/if}})
{{/each}}
Recommendation: Continue development on these stories
{{/if}}
{{#if incomplete_stories.length > 0}}
**❌ Incomplete Stories (<50%):**
{{#each incomplete_stories}}
- {{story_key}}: {{verified_pct}}% verified ({{gaps_count}} gaps{{#if gaps_filled > 0}}, {{gaps_filled}} filled{{/if}})
{{/each}}
Recommendation: Re-implement these stories from scratch
{{/if}}
{{#if failed_stories.length > 0}}
**❌ Failed Revalidations:**
{{#each failed_stories}}
- {{story_key}}: {{error}}
{{/each}}
{{/if}}
---
**Epic Health Score:** {{epic_verified_pct}}/100
{{#if epic_verified_pct >= 95}}
✅ Epic is COMPLETE and verified
{{else if epic_verified_pct >= 80}}
🔶 Epic is MOSTLY COMPLETE ({{epic_missing}} items need attention)
{{else if epic_verified_pct >= 50}}
⚠️ Epic is PARTIALLY COMPLETE (significant gaps remain)
{{else}}
❌ Epic is INCOMPLETE (major rework needed)
{{/if}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
<check if="create_epic_report == true">
<action>Write epic summary to: {sprint_artifacts}/revalidation-epic-{{epic_number}}-{{timestamp}}.md</action>
<output>📄 Epic report: {{report_path}}</output>
</check>
<check if="update_sprint_status == true">
<action>Update sprint-status.yaml with revalidation timestamp and results</action>
<action>Add comment to epic entry: # Revalidated: {{epic_verified_pct}}% verified ({{timestamp}})</action>
</check>
</step>
</workflow>
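The semaphore pattern above (fill slots, poll, refill until the queue drains) can be modeled with a standard thread pool. A minimal sketch, assuming `run_story` stands in for spawning and awaiting one revalidation agent (an illustrative callable, not the real Task API):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def revalidate_epic(stories, run_story, max_concurrent=3):
    """Keep up to max_concurrent workers busy until every story is done.

    ThreadPoolExecutor gives the semaphore semantics for free: a slot is
    refilled with the next queued story as soon as a worker finishes.
    """
    results, failed = {}, {}
    with ThreadPoolExecutor(max_workers=max_concurrent) as pool:
        futures = {pool.submit(run_story, s): s for s in stories}
        for future in as_completed(futures):
            story = futures[future]
            try:
                results[story] = future.result()
            except Exception as exc:  # continue_on_failure behavior
                failed[story] = str(exc)
    return results, failed
```

Unlike the workflow, this sketch submits everything up front and lets the pool cap concurrency; the explicit polling loop in Step 3 achieves the same effect while reporting live progress.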


@@ -1,510 +0,0 @@
# Revalidate Story - Verify Checkboxes Against Codebase Reality
<critical>The workflow execution engine is governed by: {project-root}/_bmad/core/tasks/workflow.xml</critical>
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
<workflow>
<step n="1" goal="Load story and backup current state">
<action>Verify story_file parameter provided</action>
<check if="story_file not provided">
<output>❌ ERROR: story_file parameter required
Usage:
/revalidate-story story_file=path/to/story.md
/revalidate-story story_file=path/to/story.md fill_gaps=true
</output>
<action>HALT</action>
</check>
<action>Read COMPLETE story file: {{story_file}}</action>
<action>Parse sections: Acceptance Criteria, Tasks/Subtasks, Definition of Done, Dev Agent Record</action>
<action>Extract story_key from filename (e.g., "2-7-image-file-handling")</action>
<action>Create backup of current checkbox state:</action>
<action>Count currently checked items:
- ac_checked_before = count of [x] in Acceptance Criteria
- tasks_checked_before = count of [x] in Tasks/Subtasks
- dod_checked_before = count of [x] in Definition of Done
- total_checked_before = sum of above
</action>
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔍 STORY REVALIDATION STARTED
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
**Story:** {{story_key}}
**File:** {{story_file}}
**Mode:** {{#if fill_gaps}}Verify & Fill Gaps{{else}}Verify Only{{/if}}
**Current State:**
- Acceptance Criteria: {{ac_checked_before}}/{{ac_total}} checked
- Tasks: {{tasks_checked_before}}/{{tasks_total}} checked
- Definition of Done: {{dod_checked_before}}/{{dod_total}} checked
- **Total:** {{total_checked_before}}/{{total_items}} ({{pct_before}}%)
**Action:** Clearing all checkboxes and re-verifying against codebase...
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
</step>
<step n="2" goal="Clear all checkboxes">
<output>🧹 Clearing all checkboxes to start fresh verification...</output>
<action>Use Edit tool to replace all [x] with [ ] in Acceptance Criteria section</action>
<action>Use Edit tool to replace all [x] with [ ] in Tasks/Subtasks section</action>
<action>Use Edit tool to replace all [x] with [ ] in Definition of Done section</action>
<action>Save story file with all boxes unchecked</action>
<output>✅ All checkboxes cleared. Starting verification from clean slate...</output>
</step>
<step n="3" goal="Verify Acceptance Criteria against codebase">
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📋 VERIFYING ACCEPTANCE CRITERIA ({{ac_total}} items)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
<action>Extract all AC items from Acceptance Criteria section</action>
<iterate>For each AC item:</iterate>
<substep n="3a" title="Parse AC and determine what should exist">
<action>Extract AC description and identify artifacts:
- File mentions (e.g., "UserProfile component")
- Function names (e.g., "updateUser function")
- Features (e.g., "dark mode toggle")
- Test requirements (e.g., "unit tests covering edge cases")
</action>
<output>Verifying AC{{@index}}: {{ac_description}}</output>
</substep>
<substep n="3b" title="Search codebase for evidence">
<action>Use Glob to find relevant files:
- If AC mentions specific file: glob for that file
- If AC mentions component: glob for **/*ComponentName*
- If AC mentions feature: glob for files in related directories
</action>
<action>Use Grep to search for symbols/functions/features</action>
<action>Read found files to verify:</action>
<action>- NOT a stub (check for "TODO", "Not implemented", "throw new Error")</action>
<action>- Has actual implementation (not just empty function)</action>
<action>- Tests exist (search for *.test.* or *.spec.* files)</action>
<action>- Tests pass (if fill_gaps=true mode, run tests)</action>
</substep>
<substep n="3c" title="Determine verification status">
<check if="all evidence found AND no stubs AND tests exist">
<action>verification_status = VERIFIED</action>
<action>Check box [x] in story file for this AC</action>
<action>Record evidence: "✅ VERIFIED: {{files_found}}, tests: {{test_files}}"</action>
<output> ✅ AC{{@index}}: VERIFIED</output>
</check>
<check if="partial evidence OR stubs found OR tests missing">
<action>verification_status = PARTIAL</action>
<action>Check box [~] in story file for this AC</action>
<action>Record gap: "🔶 PARTIAL: {{what_exists}}, missing: {{what_is_missing}}"</action>
<output> 🔶 AC{{@index}}: PARTIAL ({{what_is_missing}})</output>
<action>Add to gaps_list with details</action>
</check>
<check if="no evidence found">
<action>verification_status = MISSING</action>
<action>Leave box unchecked [ ] in story file</action>
<action>Record gap: "❌ MISSING: No implementation found for {{ac_description}}"</action>
<output> ❌ AC{{@index}}: MISSING</output>
<action>Add to gaps_list with details</action>
</check>
</substep>
<action>Save story file after each AC verification</action>
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Acceptance Criteria Verification Complete
✅ Verified: {{ac_verified}}
🔶 Partial: {{ac_partial}}
❌ Missing: {{ac_missing}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
</step>
<step n="4" goal="Verify Tasks/Subtasks against codebase">
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📋 VERIFYING TASKS ({{tasks_total}} items)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
<action>Extract all Task items from Tasks/Subtasks section</action>
<iterate>For each Task item (same verification logic as ACs):</iterate>
<action>Parse task description for artifacts</action>
<action>Search codebase with Glob/Grep</action>
<action>Read and verify (check for stubs, tests)</action>
<action>Determine status: VERIFIED | PARTIAL | MISSING</action>
<action>Update checkbox: [x] | [~] | [ ]</action>
<action>Record evidence or gap</action>
<action>Save story file</action>
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Tasks Verification Complete
✅ Verified: {{tasks_verified}}
🔶 Partial: {{tasks_partial}}
❌ Missing: {{tasks_missing}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
</step>
<step n="5" goal="Verify Definition of Done against codebase">
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📋 VERIFYING DEFINITION OF DONE ({{dod_total}} items)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
<action>Extract all DoD items from Definition of Done section</action>
<iterate>For each DoD item:</iterate>
<action>Parse DoD requirement:
- "Type check passes" → Run type checker
- "Unit tests 90%+ coverage" → Run coverage report
- "Linting clean" → Run linter
- "Build succeeds" → Run build
- "All tests pass" → Run test suite
</action>
<action>Execute verification for this DoD item</action>
<check if="verification passes">
<action>Check box [x]</action>
<action>Record: "✅ VERIFIED: {{verification_result}}"</action>
</check>
<check if="verification fails or N/A">
<action>Leave unchecked [ ] or partial [~]</action>
<action>Record gap if applicable</action>
</check>
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Definition of Done Verification Complete
✅ Verified: {{dod_verified}}
🔶 Partial: {{dod_partial}}
❌ Missing: {{dod_missing}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
</step>
<step n="6" goal="Generate revalidation report">
<action>Calculate overall completion:</action>
<action>
total_verified = ac_verified + tasks_verified + dod_verified
total_partial = ac_partial + tasks_partial + dod_partial
total_missing = ac_missing + tasks_missing + dod_missing
total_items = ac_total + tasks_total + dod_total
verified_pct = (total_verified / total_items) × 100
completion_pct = ((total_verified + total_partial) / total_items) × 100
</action>
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📊 REVALIDATION SUMMARY
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
**Story:** {{story_key}}
**File:** {{story_file}}
**Verification Results:**
- ✅ Verified Complete: {{total_verified}}/{{total_items}} ({{verified_pct}}%)
- 🔶 Partially Complete: {{total_partial}}/{{total_items}}
- ❌ Missing/Incomplete: {{total_missing}}/{{total_items}}
**Breakdown:**
- Acceptance Criteria: {{ac_verified}}✅ {{ac_partial}}🔶 {{ac_missing}}❌ / {{ac_total}} total
- Tasks: {{tasks_verified}}✅ {{tasks_partial}}🔶 {{tasks_missing}}❌ / {{tasks_total}} total
- Definition of Done: {{dod_verified}}✅ {{dod_partial}}🔶 {{dod_missing}}❌ / {{dod_total}} total
**Status Assessment:**
{{#if verified_pct >= 95}}
✅ Story is COMPLETE ({{verified_pct}}% verified)
{{else if verified_pct >= 80}}
🔶 Story is MOSTLY COMPLETE ({{verified_pct}}% verified, {{total_missing}} gaps)
{{else if verified_pct >= 50}}
⚠️ Story is PARTIALLY COMPLETE ({{verified_pct}}% verified, {{total_missing}} gaps)
{{else}}
❌ Story is INCOMPLETE ({{verified_pct}}% verified, significant work missing)
{{/if}}
**Before Revalidation:** {{total_checked_before}}/{{total_items}} checked ({{pct_before}}%)
**After Revalidation:** {{total_verified}}/{{total_items}} verified ({{verified_pct}}%)
**Accuracy:** {{#if pct_before == verified_pct}}Perfect match{{else if pct_before > verified_pct}}{{pct_before - verified_pct}}% over-reported{{else}}{{verified_pct - pct_before}}% under-reported{{/if}}
{{#if total_missing > 0}}
---
**Gaps Found ({{total_missing}}):**
{{#each gaps_list}}
{{@index + 1}}. {{item_type}} - {{item_description}}
Status: {{status}}
Missing: {{what_is_missing}}
{{#if evidence}}Evidence checked: {{evidence}}{{/if}}
{{/each}}
---
{{/if}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
<check if="create_report == true">
<action>Write detailed report to: {sprint_artifacts}/revalidation-{{story_key}}-{{timestamp}}.md</action>
<action>Include: verification results, gaps list, evidence for each item, recommendations</action>
<output>📄 Detailed report: {{report_path}}</output>
</check>
</step>
<step n="7" goal="Decide on gap filling">
<check if="fill_gaps == false">
<output>
✅ Verification complete (verify-only mode)
{{#if total_missing > 0}}
**To fill the {{total_missing}} gaps, run:**
/revalidate-story story_file={{story_file}} fill_gaps=true
{{else}}
No gaps found - story is complete!
{{/if}}
</output>
<action>Exit workflow</action>
</check>
<check if="fill_gaps == true AND total_missing == 0">
<output>✅ No gaps to fill - story is already complete!</output>
<action>Exit workflow</action>
</check>
<check if="fill_gaps == true AND total_missing > 0">
<check if="total_missing > max_gaps_to_fill">
<output>
⚠️ TOO MANY GAPS: {{total_missing}} gaps found (max: {{max_gaps_to_fill}})
This story has too many missing items for automatic gap filling.
Consider:
1. Re-implementing the story from scratch with /dev-story
2. Manually implementing the gaps
3. Increasing max_gaps_to_fill in workflow.yaml (use cautiously)
Gap filling HALTED for safety.
</output>
<action>HALT</action>
</check>
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔧 GAP FILLING MODE ({{total_missing}} gaps to fill)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
<action>Continue to Step 8</action>
</check>
</step>
<step n="8" goal="Fill gaps (implement missing items)">
<iterate>For each gap in gaps_list:</iterate>
<substep n="8a" title="Confirm gap filling">
<check if="require_confirmation == true">
<ask>
Fill this gap?
**Item:** {{item_description}}
**Type:** {{item_type}} ({{section}})
**Missing:** {{what_is_missing}}
[Y] Yes - Implement this item
[A] Auto-fill - Implement this and all remaining gaps without asking
[S] Skip - Leave this gap unfilled
[H] Halt - Stop gap filling
Your choice:
</ask>
<check if="choice == 'A'">
<action>Set require_confirmation = false (auto-fill remaining)</action>
</check>
<check if="choice == 'S'">
<action>Continue to next gap</action>
</check>
<check if="choice == 'H'">
<action>Exit gap filling loop</action>
<action>Jump to Step 9 (Summary)</action>
</check>
</check>
</substep>
<substep n="8b" title="Implement missing item">
<output>🔧 Implementing: {{item_description}}</output>
<action>Load story context (Technical Requirements, Architecture Compliance, Dev Notes)</action>
<action>Implement missing item following story specifications</action>
<action>Write tests if required</action>
<action>Run tests to verify implementation</action>
<action>Verify linting/type checking passes</action>
<check if="implementation succeeds AND tests pass">
<action>Check box [x] for this item in story file</action>
<action>Update File List with new/modified files</action>
<action>Add to Dev Agent Record: "Gap filled: {{item_description}}"</action>
<output> ✅ Implemented and verified</output>
<check if="commit_strategy == 'per_gap'">
<action>Stage files for this gap</action>
<action>Commit: "fix({{story_key}}): fill gap - {{item_description}}"</action>
<output> ✅ Committed</output>
</check>
</check>
<check if="implementation fails">
<output> ❌ Failed to implement: {{error_message}}</output>
<action>Leave box unchecked</action>
<action>Record failure in gaps_list</action>
<action>Add to failed_gaps</action>
</check>
</substep>
<action>After all gaps processed:</action>
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Gap Filling Complete
✅ Filled: {{gaps_filled}}
❌ Failed: {{gaps_failed}}
⏭️ Skipped: {{gaps_skipped}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
</step>
<step n="9" goal="Re-verify filled gaps and finalize">
<check if="gaps_filled > 0">
<output>🔍 Re-verifying filled gaps...</output>
<iterate>For each filled gap:</iterate>
<action>Re-run verification for that item</action>
<action>Ensure still VERIFIED after all changes</action>
<output>✅ All filled gaps re-verified</output>
</check>
<action>Calculate final completion:</action>
<action>
final_verified = count of [x] across all sections
final_partial = count of [~] across all sections
final_missing = count of [ ] across all sections
final_pct = (final_verified / total_items) × 100
</action>
<check if="commit_strategy == 'all_at_once' AND gaps_filled > 0">
<action>Stage all changed files</action>
<action>Commit: "fix({{story_key}}): fill {{gaps_filled}} gaps from revalidation"</action>
<output>✅ All gaps committed</output>
</check>
<check if="update_sprint_status == true">
<action>Load {sprint_status} file</action>
<action>Update entry with current progress:</action>
<action>Format: {{story_key}}: {{current_status}} # Revalidated: {{final_verified}}/{{total_items}} ({{final_pct}}%) verified</action>
<action>Save sprint-status.yaml</action>
<output>✅ Sprint status updated with revalidation results</output>
</check>
<check if="update_dev_agent_record == true">
<action>Add to Dev Agent Record in story file:</action>
<action>
## Revalidation Record ({{timestamp}})
**Revalidation Mode:** {{#if fill_gaps}}Verify & Fill{{else}}Verify Only{{/if}}
**Results:**
- Verified: {{final_verified}}/{{total_items}} ({{final_pct}}%)
- Gaps Found: {{total_missing}}
- Gaps Filled: {{gaps_filled}}
**Evidence:**
{{#each verification_evidence}}
- {{item}}: {{evidence}}
{{/each}}
{{#if gaps_filled > 0}}
**Gaps Filled:**
{{#each filled_gaps}}
- {{item}}: {{what_was_implemented}}
{{/each}}
{{/if}}
{{#if failed_gaps.length > 0}}
**Failed to Fill:**
{{#each failed_gaps}}
- {{item}}: {{error}}
{{/each}}
{{/if}}
</action>
<action>Save story file</action>
</check>
</step>
<step n="10" goal="Final summary and recommendations">
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ REVALIDATION COMPLETE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
**Story:** {{story_key}}
**Final Status:**
- ✅ Verified Complete: {{final_verified}}/{{total_items}} ({{final_pct}}%)
- 🔶 Partially Complete: {{final_partial}}/{{total_items}}
- ❌ Missing/Incomplete: {{final_missing}}/{{total_items}}
{{#if fill_gaps}}
**Gap Filling Results:**
- Filled: {{gaps_filled}}
- Failed: {{gaps_failed}}
- Skipped: {{gaps_skipped}}
{{/if}}
**Accuracy Check:**
- Before revalidation: {{pct_before}}% checked
- After revalidation: {{final_pct}}% verified
- Checkbox accuracy: {{#if pct_before == final_pct}}✅ Perfect (0% discrepancy){{else if pct_before > final_pct}}⚠️ {{pct_before - final_pct}}% over-reported (checkboxes were optimistic){{else}}🔶 {{final_pct - pct_before}}% under-reported (work done but not checked){{/if}}
{{#if final_pct >= 95}}
**Recommendation:** Story is COMPLETE - mark as "done" or "review"
{{else if final_pct >= 80}}
**Recommendation:** Story is mostly complete - finish remaining {{final_missing}} items then mark "review"
{{else if final_pct >= 50}}
**Recommendation:** Story has significant gaps - continue development with /dev-story
{{else}}
**Recommendation:** Story is mostly incomplete - consider re-implementing with /dev-story or /super-dev-pipeline
{{/if}}
{{#if failed_gaps.length > 0}}
**⚠️ Manual attention needed for {{failed_gaps.length}} items that failed to fill automatically**
{{/if}}
{{#if create_report}}
**Detailed Report:** {sprint_artifacts}/revalidation-{{story_key}}-{{timestamp}}.md
{{/if}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
</step>
</workflow>
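The checkbox bookkeeping that runs through Steps 1, 6, and 9 (counting `[x]`, `[~]`, and `[ ]` items, then deriving `verified_pct`) reduces to a small counting routine. A sketch, assuming the story file uses standard markdown task-list syntax:

```python
import re

# Matches "- [x]", "* [~]", "- [ ]" at the start of a line.
CHECKBOX = re.compile(r"^\s*[-*]\s*\[( |x|~)\]", re.IGNORECASE | re.MULTILINE)

def checkbox_stats(section_text):
    """Count verified [x], partial [~], and missing [ ] boxes in a section."""
    marks = [m.group(1).lower() for m in CHECKBOX.finditer(section_text)]
    verified = marks.count("x")
    partial = marks.count("~")
    missing = marks.count(" ")
    total = len(marks)
    pct = round(100 * verified / total) if total else 0
    return {"verified": verified, "partial": partial,
            "missing": missing, "total": total, "verified_pct": pct}
```

Run once per section (Acceptance Criteria, Tasks, Definition of Done) and sum the counts to get the story-level totals the summary step reports.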


@@ -1,283 +0,0 @@
# Super-Dev-Story Workflow
**Enhanced story development with comprehensive quality validation**
## What It Does
Super-dev-story is `/dev-story` on steroids - it includes ALL standard development steps PLUS additional quality gates:
```
Standard dev-story:
1-8. Development cycle → Mark "review"
Super-dev-story:
1-8. Development cycle
9.5. Post-dev gap analysis (verify work complete)
9.6. Automated code review (catch issues)
→ Fix issues if found (loop back to step 5)
9. Mark "review" (only after all validation passes)
```
## When to Use
### Use `/super-dev-story` for:
- ✅ Security-critical features (auth, payments, PII handling)
- ✅ Complex business logic with many edge cases
- ✅ Stories you want bulletproof before human review
- ✅ High-stakes features (production releases, customer-facing)
- ✅ When you want to minimize review cycles
### Use standard `/dev-story` for:
- Documentation updates
- Simple UI tweaks
- Configuration changes
- Low-risk experimental features
- When speed matters more than extra validation
## Cost vs Benefit
| Aspect | dev-story | super-dev-story |
|--------|-----------|-----------------|
| **Tokens** | 50K-100K | 80K-150K (+30-50%) |
| **Time** | Normal | +20-30% |
| **Quality** | Good | Excellent |
| **Review cycles** | 1-3 iterations | 0-1 iterations |
| **False completions** | Possible | Prevented |
**ROI:** An extra ~30K tokens (~$0.09) prevents hours of rework and multiple review cycles
## What Gets Validated
### Step 9.5: Post-Dev Gap Analysis
**Checks:**
- Tasks marked [x] → Code actually exists and works?
- Required files → Actually created?
- Claimed tests → Actually exist and pass?
- Partial implementations → Marked complete prematurely?
**Catches:**
- ❌ "Created auth service" → File doesn't exist
- ❌ "Added tests with 90% coverage" → Only 60% actual
- ❌ "Implemented login" → Function exists but incomplete
**Actions if issues found:**
- Unchecks false positive tasks
- Adds tasks for missing work
- Loops back to implementation
### Step 9.6: Automated Code Review
**Reviews:**
- ✅ Correctness (logic errors, edge cases)
- ✅ Security (vulnerabilities, input validation)
- ✅ Architecture (pattern compliance, SOLID principles)
- ✅ Performance (inefficiencies, optimization opportunities)
- ✅ Testing (coverage gaps, test quality)
- ✅ Code Quality (readability, maintainability)
**Actions if issues found:**
- Adds review findings as tasks
- Loops back to implementation
- Continues until issues resolved
## Usage
### Basic Usage
```bash
# Load any BMAD agent
/super-dev-story
# Follows same flow as dev-story, with extra validation
```
### Specify Story
```bash
/super-dev-story _bmad-output/implementation-artifacts/story-1.2.md
```
### Expected Flow
```
1. Pre-dev gap analysis
├─ "Approve task updates? [Y/A/n/e/s/r]"
└─ Select option
2. Development (standard TDD cycle)
└─ Implements all tasks
3. Post-dev gap analysis
├─ Scans codebase
├─ If gaps: adds tasks, loops back
└─ If clean: proceeds
4. Code review
├─ Analyzes all changes
├─ If issues: adds tasks, loops back
└─ If clean: proceeds
5. Story marked "review"
└─ Truly complete!
```
## Fix Iteration Safety
Super-dev has a **max iteration limit** (default: 3) to prevent infinite loops:
```yaml
# workflow.yaml
super_dev_settings:
max_fix_iterations: 3 # Stop after 3 fix cycles
fail_on_critical_issues: true # HALT if critical security issues
```
If exceeded:
```
🛑 Maximum Fix Iterations Reached
Attempted 3 fix cycles.
Manual intervention required.
Issues remaining:
- [List of unresolved issues]
```
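In pseudocode, the safety cap is a bounded validate/fix loop. A hedged sketch where `validate` and `fix` stand in for post-dev gap analysis plus code review and the fix cycle (illustrative names, not BMAD APIs):

```python
def run_with_fix_limit(validate, fix, max_fix_iterations=3):
    """Loop fixes until validation is clean or the iteration cap is hit."""
    for iteration in range(1, max_fix_iterations + 1):
        issues = validate()
        if not issues:
            # Clean on this pass; earlier passes were the fix cycles.
            return {"status": "complete", "fix_cycles": iteration - 1}
        fix(issues)
    # Cap reached with issues still outstanding: manual intervention.
    return {"status": "halted", "remaining": validate()}
```

The cap trades completeness for predictability: a story that cannot converge in three cycles is surfaced to a human rather than burning tokens indefinitely.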
## Examples
### Example 1: Perfect First Try
```
/super-dev-story
Pre-gap: ✅ Tasks accurate
Development: ✅ 8 tasks completed
Post-gap: ✅ All work verified
Code review: ✅ No issues
→ Story complete! (45 minutes, 85K tokens)
```
### Example 2: Post-Dev Catches Incomplete Work
```
/super-dev-story
Pre-gap: ✅ Tasks accurate
Development: ✅ 8 tasks completed
Post-gap: ⚠️ Tests claim 90% coverage, actual 65%
→ Adds task: "Increase test coverage to 90%"
→ Implements missing tests
→ Post-gap: ✅ Now 92% coverage
→ Code review: ✅ No issues
→ Story complete! (52 minutes, 95K tokens)
```
### Example 3: Code Review Finds Security Issue
```
/super-dev-story
Pre-gap: ✅ Tasks accurate
Development: ✅ 10 tasks completed
Post-gap: ✅ All work verified
Code review: 🚨 CRITICAL - SQL injection vulnerability
→ Adds task: "Fix SQL injection in user search"
→ Implements parameterized queries
→ Post-gap: ✅ Verified
→ Code review: ✅ Security issue resolved
→ Story complete! (58 minutes, 110K tokens)
```
## Comparison to Standard Workflow
### Standard Flow (dev-story)
```
Day 1: Develop story (30 min)
Day 2: Human review finds 3 issues
Day 3: Fix issues (20 min)
Day 4: Human review again
Day 5: Approved
Total: 5 days, 2 review cycles
```
### Super-Dev Flow
```
Day 1: Super-dev-story
- Development (30 min)
- Post-gap finds 1 issue (auto-fix 5 min)
- Code review finds 2 issues (auto-fix 15 min)
- Complete (50 min total)
Day 2: Human review
Day 3: Approved (minimal/no changes needed)
Total: 3 days, 1 review cycle
```
**Savings:** 2 days, 1 fewer review cycle, higher initial quality
## Troubleshooting
### "Super-dev keeps looping forever"
**Cause:** Each validation finds new issues
**Solution:** This indicates quality problems. Review the max_fix_iterations setting or intervene manually.
### "Post-dev gap analysis keeps failing"
**Cause:** Dev agent marking tasks complete prematurely
**Solution:** This is expected! Super-dev catches this. The loop ensures actual completion.
### "Code review too strict"
**Cause:** Reviewing for issues standard dev-story would miss
**Solution:** This is intentional. For less strict review, use standard dev-story.
### "Too many tokens/too slow"
**Cause:** Multi-stage validation adds overhead
**Solution:** Use standard dev-story for non-critical stories. Reserve super-dev for important work.
## Best Practices
1. **Reserve for important stories** - Don't use for trivial changes
2. **Trust the process** - Fix iterations mean it's working correctly
3. **Review limits** - Adjust max_fix_iterations if stories are complex
4. **Monitor costs** - Track token usage vs review cycle savings
5. **Learn patterns** - Code review findings inform future architecture
## Configuration Reference
```yaml
# _bmad/bmm/config.yaml or _bmad/bmgd/config.yaml
# Per-project settings
super_dev_settings:
post_dev_gap_analysis: true # Enable post-dev validation
auto_code_review: true # Enable automatic code review
fail_on_critical_issues: true # HALT on security vulnerabilities
max_fix_iterations: 3 # Maximum fix cycles before manual intervention
auto_fix_minor_issues: false # Auto-fix LOW severity without asking
```
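A settings loader would typically merge user config over these defaults and reject unrecognized keys. A minimal sketch, assuming the YAML has already been parsed into a dict (the strict-key check is an illustrative choice, not documented BMAD behavior):

```python
DEFAULTS = {
    "post_dev_gap_analysis": True,
    "auto_code_review": True,
    "fail_on_critical_issues": True,
    "max_fix_iterations": 3,
    "auto_fix_minor_issues": False,
}

def load_super_dev_settings(config: dict) -> dict:
    """Merge super_dev_settings from parsed config over built-in defaults."""
    user = config.get("super_dev_settings") or {}
    unknown = set(user) - set(DEFAULTS)
    if unknown:
        # Fail loudly on typos rather than silently ignoring a setting.
        raise ValueError(f"Unknown super_dev_settings keys: {sorted(unknown)}")
    return {**DEFAULTS, **user}
```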
## See Also
- [dev-story workflow](../dev-story/) - Standard development workflow
- [gap-analysis workflow](../gap-analysis/) - Standalone audit tool
- [Gap Analysis Guide](../../../../docs/gap-analysis.md) - Complete documentation
- [Super-Dev Mode Concept](../../../../docs/super-dev-mode.md) - Vision and roadmap
---
**Super-Dev-Story: Because "done" should mean DONE** ✅


@@ -1,299 +0,0 @@
<workflow>
<critical>The workflow execution engine is governed by: {project-root}/_bmad/core/tasks/workflow.xml</critical>
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
<critical>Communicate all responses in {communication_language} and language MUST be tailored to {user_skill_level}</critical>
<critical>Generate all documents in {document_output_language}</critical>
<critical>🚀 SUPER-DEV MODE: Enhanced quality workflow with post-implementation validation and automated code review</critical>
<critical>This workflow orchestrates existing workflows with additional validation steps</critical>
<!-- ═══════════════════════════════════════════════════════════════ -->
<!-- STEP 1: INVOKE STANDARD DEV-STORY WORKFLOW -->
<!-- ═══════════════════════════════════════════════════════════════ -->
<step n="1" goal="Execute standard dev-story workflow">
<critical>🎯 RUN DEV-STORY - Complete all standard development steps</critical>
<note>This includes: story loading, pre-dev gap analysis, development, testing, and task completion</note>
<output>🚀 **Super-Dev-Story: Enhanced Quality Workflow**
Running standard dev-story workflow (Steps 1-8)...
This includes:
✅ Story loading and validation
✅ Pre-dev gap analysis
✅ TDD implementation cycle
✅ Comprehensive testing
✅ Task completion validation
After dev-story completes, super-dev will add:
✅ Post-dev gap analysis
✅ Automated code review
✅ Auto push-all
</output>
<invoke-workflow path="{project-root}/_bmad/bmm/workflows/4-implementation/dev-story/workflow.yaml">
<input name="story_file" value="{{story_file}}" />
<input name="auto_accept_gap_analysis" value="{{auto_accept_gap_analysis}}" />
<note>Pass through any user-provided story file path and auto-accept setting</note>
</invoke-workflow>
<check if="dev-story completed successfully">
<output>✅ Dev-story complete - all tasks implemented and tested
Proceeding to super-dev enhancements...
</output>
</check>
<check if="dev-story failed or halted">
<output>❌ Dev-story did not complete successfully
Cannot proceed with super-dev enhancements.
Fix issues and retry.
</output>
<action>HALT - dev-story must complete first</action>
</check>
</step>
<!-- ═══════════════════════════════════════════════════════════════ -->
<!-- STEP 2: POST-DEV GAP ANALYSIS (Super-Dev Enhancement) -->
<!-- ═══════════════════════════════════════════════════════════════ -->
<step n="2" goal="Post-development gap analysis">
<critical>🔍 POST-DEV VALIDATION - Verify all work actually completed!</critical>
<note>This catches incomplete implementations that were prematurely marked done</note>
<output>
🔎 **Post-Development Gap Analysis**
All tasks marked complete. Verifying against codebase reality...
</output>
<!-- Re-scan codebase with fresh eyes -->
<action>Re-read story file to get requirements and tasks</action>
<action>Extract all tasks marked [x] complete</action>
<action>For each completed task, identify what should exist in codebase</action>
<!-- SCAN PHASE -->
<action>Use Glob to find files that should have been created</action>
<action>Use Grep to search for functions/classes that should exist</action>
<action>Use Read to verify implementation completeness (not just existence)</action>
<action>Run tests to verify claimed test coverage actually exists and passes</action>
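A minimal shell sketch of this verification pass, run against throwaway demo files (the file names, marker strings, and classification labels are illustrative assumptions, not a real story's File List):

```bash
# Demo setup: one real file, one mocked file, one claimed-but-missing file
tmp=$(mktemp -d)
cd "$tmp"
printf 'export const real = 1;\n' > done.ts
printf '// TODO: replace MOCK data\n' > partial.ts

# Classify each claimed file: missing, partial (mock/TODO markers), or verified
verify_report=$(for f in done.ts partial.ts missing.ts; do
  if [ ! -f "$f" ]; then
    echo "FALSE POSITIVE: $f (marked done, file missing)"
  elif grep -qE 'MOCK|TODO' "$f"; then
    echo "PARTIAL: $f (mock/TODO markers present)"
  else
    echo "VERIFIED: $f"
  fi
done)
printf '%s\n' "$verify_report"
```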
<!-- ANALYSIS PHASE -->
<action>Compare claimed work vs actual implementation:</action>
**POST-DEV VERIFICATION:**
<action>✅ Verified Complete:
- List tasks where code fully exists and works
- Confirm tests exist and pass
- Verify implementation matches requirements
</action>
<action>❌ False Positives Detected:
- List tasks marked [x] but code missing or incomplete
- Identify claimed tests that don't exist or fail
- Note partial implementations marked as complete
</action>
<!-- DECISION PHASE -->
<check if="false positives found">
<output>
⚠️ **Post-Dev Gaps Detected!**
**Tasks marked complete but implementation incomplete:**
{{list_false_positives_with_details}}
These issues must be addressed before story can be marked complete.
</output>
<action>Uncheck false positive tasks in story file</action>
<action>Add new tasks for missing work</action>
<action>Update Gap Analysis section with post-dev findings</action>
<output>🔄 Re-invoking dev-story to complete missing work...</output>
<invoke-workflow path="{project-root}/_bmad/bmm/workflows/4-implementation/dev-story/workflow.yaml">
<input name="story_file" value="{{story_file}}" />
<input name="auto_accept_gap_analysis" value="{{auto_accept_gap_analysis}}" />
<note>Resume with added tasks for missing work</note>
</invoke-workflow>
<output>✅ Missing work completed. Proceeding to code review...</output>
</check>
<check if="no gaps found">
<output>✅ **Post-Dev Validation Passed**
All tasks verified complete against codebase.
Proceeding to code review...
</output>
<action>Update Gap Analysis section with post-dev verification results</action>
</check>
</step>
<!-- ═══════════════════════════════════════════════════════════════ -->
<!-- STEP 3: AUTOMATED CODE REVIEW (Super-Dev Enhancement) -->
<!-- ═══════════════════════════════════════════════════════════════ -->
<step n="3" goal="Automated code review">
<critical>👀 AUTO CODE REVIEW - Independent quality validation</critical>
<output>
🔍 **Running Automated Code Review**
Analyzing implementation for issues...
</output>
<invoke-workflow path="{project-root}/_bmad/bmm/workflows/4-implementation/code-review/workflow.yaml">
<input name="story_file" value="{{story_file}}" />
<note>Run code review on completed story</note>
</invoke-workflow>
<action>Parse code review results from story file "Code Review" section</action>
<action>Extract issues by severity (Critical, High, Medium, Low)</action>
<action>Count total issues found</action>
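One way to sketch the severity tally in shell, assuming review findings appear as bracket-tagged bullet lines (the section format below is an assumption about the story file, not a confirmed schema):

```bash
# Demo stand-in for the story's "Code Review" section
rf=$(mktemp)
cat > "$rf" <<'EOF'
## Code Review
- [High] Missing input validation on login endpoint
- [Low] Inconsistent log message casing
- [High] Token stored in localStorage
EOF

# Count findings per severity tag
tally=$(grep -oE '\[(Critical|High|Medium|Low)\]' "$rf" | sort | uniq -c)
printf '%s\n' "$tally"
```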
<check if="critical or high severity issues found">
<output>🚨 **Code Review Found Issues Requiring Fixes**
Issues found: {{total_issue_count}}
- Critical: {{critical_count}}
- High: {{high_count}}
- Medium: {{medium_count}}
- Low: {{low_count}}
Adding review findings to story tasks and re-running dev-story...
</output>
<action>Add code review findings as tasks in story file</action>
<invoke-workflow path="{project-root}/_bmad/bmm/workflows/4-implementation/dev-story/workflow.yaml">
<input name="story_file" value="{{story_file}}" />
<input name="auto_accept_gap_analysis" value="{{auto_accept_gap_analysis}}" />
<note>Fix code review issues</note>
</invoke-workflow>
<output>✅ Code review issues resolved. Proceeding to push...</output>
</check>
<check if="only medium or low issues found">
<output>ℹ️ **Code Review Found Minor Issues**
- Medium: {{medium_count}}
- Low: {{low_count}}
</output>
<ask>Auto-fix these minor issues? [Y/n/skip]:</ask>
<check if="user approves Y">
<action>Add review findings as tasks</action>
<invoke-workflow path="{project-root}/_bmad/bmm/workflows/4-implementation/dev-story/workflow.yaml">
<input name="story_file" value="{{story_file}}" />
<input name="auto_accept_gap_analysis" value="{{auto_accept_gap_analysis}}" />
</invoke-workflow>
</check>
<check if="user says skip">
<action>Document issues in story file</action>
<output>ℹ️ Minor issues documented. Proceeding to push...</output>
</check>
</check>
<check if="no issues found">
<output>✅ **Code Review Passed**
No issues found. Implementation meets quality standards.
Proceeding to push...
</output>
</check>
</step>
<!-- ═══════════════════════════════════════════════════════════════ -->
<!-- STEP 4: PUSH ALL CHANGES (Super-Dev Enhancement) -->
<!-- ═══════════════════════════════════════════════════════════════ -->
<step n="4" goal="Commit and push story changes">
<critical>📝 PUSH-ALL - Stage, commit, and push with safety validation</critical>
<critical>⚡ TARGETED COMMIT: Only commit files from THIS story's File List (safe for parallel agents)</critical>
<!-- Extract File List from story file -->
<action>Read story file and extract the "File List" section</action>
<action>Parse all file paths listed (relative to repo root)</action>
<action>Also include the story file itself in the list</action>
<action>Store as {{story_files}} - space-separated list of all files</action>
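A hedged sketch of that extraction, assuming File List entries are markdown bullets under a `## File List` heading (the sample story below is demo data, not a real artifact):

```bash
# Demo story file with a File List section
sf=$(mktemp)
cat > "$sf" <<'EOF'
## File List
- src/auth/guards/permissions.guard.ts
- src/auth/decorators/require-permissions.decorator.ts

## Change Log
EOF

# Pull bullet paths between "## File List" and the next heading,
# then flatten to a space-separated list
story_files=$(awk '/^## File List/{f=1;next} /^## /{f=0} f' "$sf" \
  | sed -n 's/^- //p' | tr '\n' ' ')
echo "$story_files"
```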
<output>📝 **Committing Story Changes**
Files from this story:
{{story_files}}
Running push-all with targeted file list (parallel-agent safe)...
</output>
<invoke-workflow path="{project-root}/_bmad/bmm/workflows/4-implementation/push-all/workflow.yaml">
<input name="target_files" value="{{story_files}}" />
<input name="story_key" value="{{story_key}}" />
<note>Only commit files changed by this story</note>
</invoke-workflow>
<check if="push-all succeeded">
<output>✅ Changes pushed to remote successfully</output>
</check>
<check if="push-all failed">
<output>⚠️ Push failed but story is complete locally
You can push manually when ready.
</output>
</check>
</step>
<!-- ═══════════════════════════════════════════════════════════════ -->
<!-- STEP 5: COMPLETION -->
<!-- ═══════════════════════════════════════════════════════════════ -->
<step n="5" goal="Super-dev completion summary">
<output>🎉 **SUPER-DEV STORY COMPLETE, {user_name}!**
**Quality Gates Passed:**
✅ Pre-dev gap analysis - Tasks validated before work
✅ Development - All tasks completed with TDD
✅ Post-dev gap analysis - Implementation verified
✅ Code review - Quality and security validated
✅ Pushed to remote - Changes backed up
**Story File:** {{story_file}}
**Status:** review (ready for human review)
---
**What Super-Dev Validated:**
1. 🔍 Tasks matched codebase reality before starting
2. 💻 Implementation completed per requirements
3. ✅ No false positive completions (all work verified)
4. 👀 Code quality and security validated
5. 📝 Changes committed and pushed to remote
**Next Steps:**
- Review the completed story
- Verify business requirements met
- Merge when approved
**Note:** This story went through enhanced quality validation.
It should require minimal human review.
</output>
<action>Based on {user_skill_level}, ask if user needs explanations about implementation, decisions, or findings</action>
<check if="user asks for explanations">
<action>Provide clear, contextual explanations</action>
</check>
<output>💡 **Tip:** This story was developed with super-dev-story for enhanced quality.
For faster development, use standard `dev-story` workflow.
For maximum quality, continue using `super-dev-story`.
</output>
</step>
</workflow>


@@ -0,0 +1,311 @@
# Super Dev Story v3.0 - Development with Quality Gates
<purpose>
Complete story development pipeline: dev-story → validation → code review → push.
Automatically re-invokes dev-story if gaps or review issues found.
Ensures production-ready code before pushing.
</purpose>
<philosophy>
**Quality Over Speed**
Don't just implement—verify, review, fix.
- Run dev-story for implementation
- Validate with gap analysis
- Code review for quality
- Fix issues before pushing
- Only push when truly ready
</philosophy>
<config>
name: super-dev-story
version: 3.0.0
stages:
- dev-story: "Implement the story"
- validate: "Run gap analysis"
- review: "Code review"
- push: "Safe commit and push"
defaults:
max_rework_loops: 3
auto_push: false
review_depth: "standard" # quick | standard | deep
validation_depth: "quick"
quality_gates:
validation_threshold: 90 # % of tasks that must be verified
review_threshold: "pass" # pass | pass_with_warnings
</config>
<execution_context>
@patterns/verification.md
@patterns/hospital-grade.md
</execution_context>
<process>
<step name="initialize" priority="first">
**Load story and prepare pipeline**
```bash
STORY_FILE="{{story_file}}"
[ -f "$STORY_FILE" ] || { echo "❌ story_file required"; exit 1; }
```
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🚀 SUPER DEV STORY PIPELINE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Story: {{story_key}}
Stages: dev-story → validate → review → push
Quality Gates:
- Validation: ≥{{validation_threshold}}% verified
- Review: {{review_threshold}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
Initialize:
- rework_count = 0
- stage = "dev-story"
</step>
<step name="stage_dev_story">
**Stage 1: Implement the story**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📝 STAGE 1: DEV-STORY
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
Invoke dev-story workflow:
```
/dev-story story_file={{story_file}}
```
Wait for completion. Capture:
- files_created
- files_modified
- tasks_completed
```
✅ Dev-story complete
Files: {{file_count}} created/modified
Tasks: {{tasks_completed}}/{{total_tasks}}
```
</step>
<step name="stage_validate">
**Stage 2: Validate implementation**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔍 STAGE 2: VALIDATION
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
Invoke validation:
```
/validate scope=story target={{story_file}} depth={{validation_depth}}
```
Capture results:
- verified_pct
- false_positives
- category
**Check quality gate:**
```
if verified_pct < validation_threshold:
REWORK_NEEDED = true
reason = "Validation below {{validation_threshold}}%"
if false_positives > 0:
REWORK_NEEDED = true
reason = "{{false_positives}} tasks marked done but missing"
```
```
{{#if REWORK_NEEDED}}
⚠️ Validation failed: {{reason}}
{{else}}
✅ Validation passed: {{verified_pct}}% verified
{{/if}}
```
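The gate logic above can be exercised as a plain shell check (all numbers are sample placeholders, chosen so the gate trips):

```bash
# Sample results from the validation stage
verified_pct=85
false_positives=2
validation_threshold=90

# Apply both gate conditions
REWORK_NEEDED=false
reason=""
if [ "$verified_pct" -lt "$validation_threshold" ]; then
  REWORK_NEEDED=true; reason="Validation below ${validation_threshold}%"
fi
if [ "$false_positives" -gt 0 ]; then
  REWORK_NEEDED=true; reason="${false_positives} tasks marked done but missing"
fi
echo "REWORK_NEEDED=$REWORK_NEEDED (${reason})"
```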
</step>
<step name="stage_review">
**Stage 3: Code review**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📋 STAGE 3: CODE REVIEW
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
Invoke code review:
```
/multi-agent-review files={{files_modified}} depth={{review_depth}}
```
Capture results:
- verdict (PASS, PASS_WITH_WARNINGS, NEEDS_REWORK)
- issues
**Check quality gate:**
```
if verdict == "NEEDS_REWORK":
REWORK_NEEDED = true
reason = "Code review found blocking issues"
if review_threshold == "pass" AND verdict == "PASS_WITH_WARNINGS":
REWORK_NEEDED = true
reason = "Warnings not allowed in strict mode"
```
```
{{#if REWORK_NEEDED}}
⚠️ Review failed: {{reason}}
Issues: {{issues}}
{{else}}
✅ Review passed: {{verdict}}
{{/if}}
```
</step>
<step name="handle_rework" if="REWORK_NEEDED">
**Handle rework loop**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔄 REWORK REQUIRED (Loop {{rework_count + 1}}/{{max_rework_loops}})
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Reason: {{reason}}
{{#if validation_issues}}
Validation Issues:
{{#each validation_issues}}
- {{this}}
{{/each}}
{{/if}}
{{#if review_issues}}
Review Issues:
{{#each review_issues}}
- {{this}}
{{/each}}
{{/if}}
```
**Check loop limit:**
```
rework_count++
if rework_count > max_rework_loops:
echo "❌ Max rework loops exceeded"
echo "Manual intervention required"
HALT
```
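The same guard as a runnable shell fragment (counter values are sample data):

```bash
# Sample state: two loops already run, limit of three
rework_count=2
max_rework_loops=3

# Increment and check the limit
rework_count=$((rework_count + 1))
if [ "$rework_count" -gt "$max_rework_loops" ]; then
  echo "❌ Max rework loops exceeded"
else
  echo "🔄 Rework loop $rework_count of $max_rework_loops"
fi
```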
**Re-invoke dev-story with issues:**
```
/dev-story story_file={{story_file}} fix_issues={{issues}}
```
After dev-story completes, return to validation stage.
</step>
<step name="stage_push" if="NOT REWORK_NEEDED">
**Stage 4: Push changes**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📦 STAGE 4: PUSH
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
**Generate commit message from story:**
```
feat({{epic}}): {{story_title}}
- Implemented {{task_count}} tasks
- Verified: {{verified_pct}}%
- Review: {{verdict}}
Story: {{story_key}}
```
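A sketch of assembling that message in shell (every value here is a placeholder, not real pipeline output):

```bash
# Placeholder values standing in for pipeline results
epic=2
story_title="JWT auth for admin API"
task_count=8
verified_pct=95
verdict=PASS
story_key="2-5-auth"

# Build the multi-line commit message
msg="feat(${epic}): ${story_title}

- Implemented ${task_count} tasks
- Verified: ${verified_pct}%
- Review: ${verdict}

Story: ${story_key}"
printf '%s\n' "$msg"
```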
**If auto_push:**
```
/push-all commit_message="{{message}}" auto_push=true
```
**Otherwise, ask:**
```
Ready to push?
[Y] Yes, push now
[N] No, keep local (can push later)
[R] Review changes first
```
</step>
<step name="final_summary">
**Display pipeline results**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ SUPER DEV STORY COMPLETE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Story: {{story_key}}
Pipeline Results:
- Dev-Story: ✅ Complete
- Validation: ✅ {{verified_pct}}% verified
- Review: ✅ {{verdict}}
- Push: {{pushed ? "✅ Pushed" : "⏸️ Local only"}}
Rework Loops: {{rework_count}}
Files Changed: {{file_count}}
Commit: {{commit_hash}}
{{#if pushed}}
Branch: {{branch}}
Ready for PR: gh pr create
{{/if}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
</step>
</process>
<examples>
```bash
# Standard pipeline
/super-dev-story story_file=docs/sprint-artifacts/2-5-auth.md
# With auto-push
/super-dev-story story_file=docs/sprint-artifacts/2-5-auth.md auto_push=true
# Strict review mode
/super-dev-story story_file=docs/sprint-artifacts/2-5-auth.md review_threshold=pass
```
</examples>
<failure_handling>
**Dev-story fails:** Report error, halt pipeline.
**Validation below threshold:** Enter rework loop.
**Review finds blocking issues:** Enter rework loop.
**Max rework loops exceeded:** Halt, require manual intervention.
**Push fails:** Report error, commit preserved locally.
</failure_handling>
<success_criteria>
- [ ] Dev-story completed
- [ ] Validation ≥ threshold
- [ ] Review passed
- [ ] Changes committed
- [ ] Pushed (if requested)
- [ ] Story status updated
</success_criteria>


@@ -14,7 +14,7 @@ date: system-generated
# Workflow components
installed_path: "{project-root}/_bmad/bmm/workflows/4-implementation/super-dev-story"
instructions: "{installed_path}/instructions.xml"
instructions: "{installed_path}/workflow.md"
validation: "{installed_path}/checklist.md"
story_file: "" # Explicit story path; auto-discovered if empty


@@ -1,170 +0,0 @@
# Create Story With Gap Analysis
**Custom Workflow by Jonah Schulte**
**Created:** December 24, 2025
**Purpose:** Generate stories with SYSTEMATIC codebase gap analysis (not inference-based)
---
## Problem This Solves
**Standard `/create-story` workflow:**
- ❌ Reads previous stories and git commits (passive)
- ❌ Infers what probably exists (guessing)
- ❌ Gap analysis quality varies by agent thoroughness
- ❌ Checkboxes may not reflect reality
**This custom workflow:**
- ✅ Actively scans codebase with Glob/Read tools
- ✅ Verifies file existence (not inference)
- ✅ Reads key files to check implementation depth (mocked vs real)
- ✅ Generates TRUTHFUL gap analysis
- ✅ Checkboxes are FACTS verified by file system
---
## Usage
```bash
/create-story-with-gap-analysis
# Or via Skill tool:
Skill: "create-story-with-gap-analysis"
Args: "1.9" (epic.story number)
```
**Workflow will:**
1. Load existing story + epic context
2. **SCAN codebase systematically** (Glob for files, Read to verify implementation)
3. Generate gap analysis with verified ✅/❌/⚠️ status
4. Update story file with truthful checkboxes
5. Save to _bmad-output/implementation-artifacts/
---
## What It Scans
**For each story, the workflow:**
1. **Identifies target directories** (from story title/requirements)
- Example: "admin-user-service" → apps/backend/admin-user-service/
2. **Globs for all files**
- `{target}/src/**/*.ts` - Find all TypeScript files
- `{target}/src/**/*.spec.ts` - Find all tests
3. **Checks specific required files**
- Based on ACs, check if files exist
- Example: `src/auth/controllers/bridgeid-auth.controller.ts` → ❌ MISSING
4. **Reads key files to verify depth**
- Check if mocked: Search for "MOCK" string
- Check if incomplete: Search for "TODO"
- Verify real implementation exists
5. **Checks package.json**
- Verify required dependencies are installed
- Identify missing packages
6. **Counts tests**
- How many test files exist
- Coverage for each component
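Taken together, the scan steps above can be sketched in shell against a throwaway demo tree (the paths mirror the examples in this doc and are not real project files):

```bash
# Build a tiny demo tree: one mocked service file, no tests, no axios
root=$(mktemp -d)
mkdir -p "$root/src/bridgeid/services"
printf 'const MOCK_USERS = [];\n' \
  > "$root/src/bridgeid/services/bridgeid-client.service.ts"
printf '{ "dependencies": { "nestjs": "^10" } }\n' > "$root/package.json"

# 1-2: enumerate implementation and test files
find "$root/src" -name '*.ts' | wc -l
find "$root/src" -name '*.spec.ts' | wc -l
# 4: depth check - which files still contain mock markers
grep -rl 'MOCK' "$root/src" | sed "s|$root/||"
# 5: dependency check
grep -q '"axios"' "$root/package.json" \
  && echo "axios: installed" || echo "axios: missing"
```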
---
## Output Format
**Generates story with:**
1. ✅ Standard BMAD 5 sections (Story, AC, Tasks, Dev Notes, Dev Agent Record)
2. ✅ Enhanced Dev Notes with verified gap analysis subsections:
- Gap Analysis: Current State vs Requirements
- Library/Framework Requirements (from package.json)
- File Structure Requirements (from Glob results)
- Testing Requirements (from test file count)
- Architecture Compliance
- Previous Story Intelligence
3. ✅ Truthful checkboxes based on verified file existence
---
## Difference from Standard /create-story
| Feature | /create-story | /create-story-with-gap-analysis |
|---------|---------------|--------------------------------|
| Reads previous story | ✅ | ✅ |
| Reads git commits | ✅ | ✅ |
| Loads epic context | ✅ | ✅ |
| **Scans codebase with Glob** | ❌ | ✅ SYSTEMATIC |
| **Verifies files exist** | ❌ | ✅ VERIFIED |
| **Reads files to check depth** | ❌ | ✅ MOCKED vs REAL |
| **Checks package.json** | ❌ | ✅ DEPENDENCIES |
| **Counts test coverage** | ❌ | ✅ COVERAGE |
| Gap analysis quality | Variable (agent-dependent) | Systematic (tool-verified) |
| Checkbox accuracy | Inference-based | File-existence-based |
---
## When to Use
**This workflow (planning-time gap analysis):**
- Use when regenerating/auditing stories
- Use when you want verified checkboxes upfront
- Best for stories that will be implemented immediately
- Manual verification at planning time
**Standard /create-story + /dev-story (dev-time gap analysis):**
- Recommended for most workflows
- Stories start as DRAFT, validated when dev begins
- Prevents staleness in batch planning
- Automatic verification at development time
**Use standard /create-story when:**
- Greenfield project (nothing exists yet)
- Backlog stories (won't be implemented for months)
- Epic planning phase (just sketching ideas)
**Tip:** Both approaches are complementary. You can use this workflow to regenerate stories, then use `/dev-story` which will re-validate at dev-time.
---
## Examples
**Regenerating Story 1.9:**
```bash
/create-story-with-gap-analysis
Choice: 1.9
# Workflow will:
# 1. Load existing 1-9-admin-user-service-bridgeid-rbac.md
# 2. Identify target: apps/backend/admin-user-service/
# 3. Glob: apps/backend/admin-user-service/src/**/*.ts (finds 47 files)
# 4. Check: src/auth/controllers/bridgeid-auth.controller.ts → ❌ MISSING
# 5. Read: src/bridgeid/services/bridgeid-client.service.ts → ⚠️ MOCKED
# 6. Read: package.json → axios ❌ NOT INSTALLED
# 7. Generate gap analysis with verified status
# 8. Write story with truthful checkboxes
```
**Result:** Story with verified gap analysis showing:
- ✅ 7 components IMPLEMENTED (verified file existence)
- ❌ 6 components MISSING (verified file not found)
- ⚠️ 1 component PARTIAL (file exists but contains "MOCK")
---
## Installation
This workflow is auto-discovered when BMAD is installed.
**To use:**
```bash
/bmad:bmm:workflows:create-story-with-gap-analysis
```
---
**Last Updated:** December 27, 2025
**Status:** Integrated into BMAD-METHOD


@@ -1,83 +0,0 @@
# Step 1: Initialize and Extract Story Requirements
## Goal
Load epic context and identify what needs to be scanned in the codebase.
## Execution
### 1. Determine Story to Create
**Ask user:**
```
Which story should I regenerate with gap analysis?
Options:
1. Provide story number (e.g., "1.9" or "1-9")
2. Provide story filename (e.g., "story-1.9.md" or legacy "1-9-admin-user-service-bridgeid-rbac.md")
Your choice:
```
**Parse input:**
- Extract epic_num (e.g., "1")
- Extract story_num (e.g., "9")
- Locate story file: `{story_dir}/story-{epic_num}.{story_num}.md` (fallback: `{story_dir}/{epic_num}-{story_num}-*.md`)
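The parsing can be sketched with shell parameter expansion (the input value is a sample answer to the prompt above):

```bash
# Sample user input; both "1.9" and "1-9" forms work with this pattern
input="1.9"
epic_num=${input%%[.-]*}    # text before the first "." or "-"
story_num=${input##*[.-]}   # text after the last "." or "-"
echo "Looking for: story-${epic_num}.${story_num}.md"
```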
### 2. Load Existing Story Content
```bash
Read: {story_dir}/story-{epic_num}.{story_num}.md
# If not found, fallback:
Read: {story_dir}/{epic_num}-{story_num}-*.md
```
**Extract from existing story:**
- Story title
- User story text (As a... I want... So that...)
- Acceptance criteria (the requirements, not checkboxes)
- Any existing Dev Notes or technical context
**Store for later use.**
### 3. Load Epic Context
```bash
Read: {planning_artifacts}/epics.md
```
**Extract from epic:**
- Epic business objectives
- This story's original requirements
- Technical constraints
- Dependencies on other stories
### 4. Determine Target Directories
**From story title and requirements, identify:**
- Which service/app this story targets
- Which directories to scan
**Examples:**
- "admin-user-service" → `apps/backend/admin-user-service/`
- "Widget Batch 1" → `packages/widgets/`
- "POE Integration" → `apps/frontend/web/`
**Store target directories for Step 2 codebase scan.**
### 5. Ready for Codebase Scan
**Output:**
```
✅ Story Context Loaded
Story: {epic_num}.{story_num} - {title}
Target directories identified:
- {directory_1}
- {directory_2}
Ready to scan codebase for gap analysis.
[C] Continue to Codebase Scan
```
**WAIT for user to select Continue.**


@@ -1,184 +0,0 @@
# Step 2: Systematic Codebase Gap Analysis
## Goal
VERIFY what code actually exists vs what's missing using Glob and Read tools.
## CRITICAL
This step uses ACTUAL file system tools to generate TRUTHFUL gap analysis.
No guessing. No inference. VERIFY with tools.
## Execution
### 1. Scan Target Directories
**For each target directory identified in Step 1:**
```bash
# List all TypeScript files
Glob: {target_dir}/src/**/*.ts
Glob: {target_dir}/src/**/*.tsx
# Store file list
```
**Output:**
```
📁 Codebase Scan Results for {target_dir}
Found {count} TypeScript files:
- {file1}
- {file2}
...
```
### 2. Check for Specific Required Components
**Based on story Acceptance Criteria, check if required files exist:**
**Example for Auth Story:**
```bash
# Check for OAuth endpoints
Glob: {target_dir}/src/auth/controllers/*bridgeid*.ts
Result: ❌ MISSING (0 files found)
# Check for BridgeID client
Glob: {target_dir}/src/bridgeid/**/*.ts
Result: ✅ EXISTS (found: bridgeid-client.service.ts, bridgeid-sync.service.ts)
# Check for permission guards
Glob: {target_dir}/src/auth/guards/permissions*.ts
Result: ❌ MISSING (0 files found)
# Check for decorators
Glob: {target_dir}/src/auth/decorators/*permission*.ts
Result: ❌ MISSING (0 files found)
```
### 3. Verify Implementation Depth
**For files that exist, read them to check if MOCKED or REAL:**
```bash
# Read key implementation file
Read: {target_dir}/src/bridgeid/services/bridgeid-client.service.ts
# Search for indicators:
- Contains "MOCK" or "mock" → ⚠️ MOCKED (needs real implementation)
- Contains "TODO" → ⚠️ INCOMPLETE
- Contains real HTTP client (axios) → ✅ IMPLEMENTED
```
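A runnable version of the indicator search, using a throwaway file as demo data (the marker strings and status labels match the conventions above but are assumptions, not an enforced format):

```bash
# Demo file containing both a mock reference and a TODO
f=$(mktemp)
printf 'return MOCK_USER; // TODO: call real API\n' > "$f"

# Classify by first matching indicator (mock outranks TODO)
if grep -qi 'mock' "$f"; then status="⚠️ MOCKED"
elif grep -q 'TODO' "$f"; then status="⚠️ INCOMPLETE"
elif grep -q 'axios' "$f"; then status="✅ IMPLEMENTED"
else status="❓ UNKNOWN"
fi
echo "$status"
```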
### 4. Check Dependencies
```bash
# Read package.json
Read: {target_dir}/package.json
# Verify required dependencies exist:
Required: axios
Found in package.json? → ❌ NO (needs to be added)
Required: @aws-sdk/client-secrets-manager
Found in package.json? → ❌ NO (needs to be added)
```
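The dependency check as a runnable sketch against a demo package.json (the required dependency names come from the example above; the installed list is made up):

```bash
# Demo package.json with neither required dependency installed
pkg=$(mktemp)
printf '{ "dependencies": { "@nestjs/core": "^10.0.0" } }\n' > "$pkg"

# Report each required dependency as installed or missing
dep_report=$(for dep in axios @aws-sdk/client-secrets-manager; do
  grep -q "\"$dep\"" "$pkg" \
    && echo "✅ $dep" || echo "❌ $dep NOT installed"
done)
printf '%s\n' "$dep_report"
```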
### 5. Check Test Coverage
```bash
# Find test files
Glob: {target_dir}/src/**/*.spec.ts
Glob: {target_dir}/test/**/*.test.ts
# Count tests
Found {test_count} test files
# Check for specific test coverage
Glob: {target_dir}/src/**/*bridgeid*.spec.ts
Result: ✅ EXISTS (found 3 test files)
```
### 6. Generate Truthful Gap Analysis
**Create structured gap analysis:**
```markdown
## Gap Analysis: Current State vs Requirements
**✅ IMPLEMENTED (Verified by Codebase Scan):**
1. **BridgeID Client Infrastructure** - MOCKED (needs real HTTP)
- File: src/bridgeid/services/bridgeid-client.service.ts ✅ EXISTS
- Implementation: Mock user data with circuit breaker
- Status: ⚠️ PARTIAL - Ready for real HTTP client
- Tests: 15 tests passing ✅
2. **User Synchronization Service**
- File: src/bridgeid/services/bridgeid-sync.service.ts ✅ EXISTS
- Implementation: Bulk sync BridgeID → admin_users
- Status: ✅ COMPLETE
- Tests: 6 tests passing ✅
3. **Role Mapping Logic**
- File: src/bridgeid/constants/role-mapping.constants.ts ✅ EXISTS
- Implementation: 7-tier role mapping with priority selection
- Status: ✅ COMPLETE
- Tests: 10 tests passing ✅
**❌ MISSING (Required for AC Completion):**
1. **BridgeID OAuth Endpoints**
- File: src/auth/controllers/bridgeid-auth.controller.ts ❌ NOT FOUND
- Need: POST /api/auth/bridgeid/login endpoint
- Need: GET /api/auth/bridgeid/callback endpoint
- Status: ❌ NOT IMPLEMENTED
2. **Permission Guards**
- File: src/auth/guards/permissions.guard.ts ❌ NOT FOUND
- File: src/auth/decorators/require-permissions.decorator.ts ❌ NOT FOUND
- Status: ❌ NOT IMPLEMENTED
3. **Real OAuth HTTP Client**
- Package: axios ❌ NOT in package.json
- Package: @aws-sdk/client-secrets-manager ❌ NOT in package.json
- Status: ❌ DEPENDENCIES NOT ADDED
```
### 7. Update Acceptance Criteria Checkboxes
**Based on verified gap analysis, mark checkboxes:**
```markdown
### AC1: BridgeID OAuth Integration
- [ ] OAuth login endpoint (VERIFIED MISSING - file not found)
- [ ] OAuth callback endpoint (VERIFIED MISSING - file not found)
- [ ] Client configuration (VERIFIED PARTIAL - exists but mocked)
### AC3: RBAC Permission System
- [x] Role mapping defined (VERIFIED COMPLETE - file exists, tests pass)
- [ ] Permission guard (VERIFIED MISSING - file not found)
- [ ] Permission decorator (VERIFIED MISSING - file not found)
```
**Checkboxes are now FACTS, not guesses.**
### 8. Present Gap Analysis
**Output:**
```
✅ Codebase Scan Complete
Scanned: apps/backend/admin-user-service/
Files found: 47 TypeScript files
Tests found: 31 test files
Gap Analysis Generated:
✅ 7 components IMPLEMENTED (verified)
❌ 6 components MISSING (verified)
⚠️ 1 component PARTIAL (needs completion)
Story checkboxes updated based on verified file existence.
[C] Continue to Story Generation
```
**WAIT for user to continue.**


@@ -1,181 +0,0 @@
# Step 3: Generate Story with Verified Gap Analysis
## Goal
Generate complete 7-section story file using verified gap analysis from Step 2.
## Execution
### 1. Load Template
```bash
Read: {installed_path}/template.md
```
### 2. Fill Template Variables
**Basic Story Info:**
- `{{epic_num}}` - from Step 1
- `{{story_num}}` - from Step 1
- `{{story_title}}` - from existing story or epic
- `{{priority}}` - from epic (P0, P1, P2)
- `{{effort}}` - from epic or estimate
**Story Section:**
- `{{role}}` - from existing story
- `{{action}}` - from existing story
- `{{benefit}}` - from existing story
**Business Context:**
- `{{business_value}}` - from epic context
- `{{scale_requirements}}` - from epic/architecture
- `{{compliance_requirements}}` - from epic/architecture
- `{{urgency}}` - from epic priority
**Acceptance Criteria:**
- `{{acceptance_criteria}}` - from epic + existing story
- Update checkboxes based on Step 2 gap analysis:
- [x] = Component verified EXISTS
- [ ] = Component verified MISSING
- [~] = Component verified PARTIAL (optional notation)
**Tasks / Subtasks:**
- `{{tasks_subtasks}}` - from epic + existing story
- Add "✅ DONE", "⚠️ PARTIAL", "❌ TODO" markers based on gap analysis
**Gap Analysis Section:**
- `{{implemented_components}}` - from Step 2 codebase scan (verified ✅)
- `{{missing_components}}` - from Step 2 codebase scan (verified ❌)
- `{{partial_components}}` - from Step 2 codebase scan (verified ⚠️)
**Architecture Compliance:**
- `{{architecture_patterns}}` - from architecture doc + playbooks
- Multi-tenant isolation requirements
- Caching strategies
- Error handling patterns
- Performance requirements
**Library/Framework Requirements:**
- `{{current_dependencies}}` - from Step 2 package.json scan
- `{{required_dependencies}}` - missing deps identified in Step 2
**File Structure:**
- `{{existing_files}}` - from Step 2 Glob results (verified ✅)
- `{{required_files}}` - from gap analysis (verified ❌)
**Testing Requirements:**
- `{{test_count}}` - from Step 2 test file count
- `{{required_tests}}` - based on missing components
- `{{coverage_target}}` - from architecture or default 90%
**Dev Agent Guardrails:**
- `{{guardrails}}` - from playbooks + previous story lessons
- What NOT to do
- Common mistakes to avoid
**Previous Story Intelligence:**
- `{{previous_story_learnings}}` - from Step 1 previous story Dev Agent Record
**Project Structure Notes:**
- `{{structure_alignment}}` - from architecture compliance
**References:**
- `{{references}}` - Links to epic, architecture, playbooks, related stories
**Definition of Done:**
- Standard DoD checklist with story-specific coverage target
### 3. Generate Complete Story
**Write filled template:**
```bash
Write: {story_dir}/story-{{epic_num}}.{{story_num}}.md
[Complete 7-section story with verified gap analysis]
```
### 4. Validate Generated Story
```bash
# Check section count
grep "^## " {story_dir}/story-{{epic_num}}.{{story_num}}.md | wc -l
# Should output: 7
# Check for gap analysis
grep -q "Gap Analysis.*Current State" {story_dir}/story-{{epic_num}}.{{story_num}}.md
# Should find it
# Run custom validation
./scripts/validate-bmad-format.sh {story_dir}/story-{{epic_num}}.{{story_num}}.md
# Update script to expect 7 sections + gap analysis subsection
```
### 5. Update Sprint Status
```bash
Read: {sprint_status}
# Find story entry
# Update status to "ready-for-dev" if was "backlog"
# Preserve all comments and structure
Write: {sprint_status}
```
### 6. Report Completion
**Output:**
```
✅ Story {{epic_num}}.{{story_num}} Regenerated with Gap Analysis
File: {story_dir}/story-{{epic_num}}.{{story_num}}.md
Sections: 7/7 ✅
Gap Analysis: VERIFIED with codebase scan
Summary:
✅ {{implemented_count}} components IMPLEMENTED (verified by file scan)
❌ {{missing_count}} components MISSING (verified file not found)
⚠️ {{partial_count}} components PARTIAL (file exists but mocked/incomplete)
Checkboxes in ACs and Tasks reflect VERIFIED status (not guesses).
Next Steps:
1. Review story file for accuracy
2. Use /dev-story to implement missing components
3. Story provides complete context for flawless implementation
Story is ready for development. 🚀
```
### 7. Cleanup
**Ask user:**
```
Story regeneration complete!
Would you like to:
[N] Regenerate next story ({{next_story_num}})
[Q] Quit workflow
[R] Review generated story first
Your choice:
```
**If N selected:** Loop back to Step 1 with next story number
**If Q selected:** End workflow
**If R selected:** Display story file, then show menu again
---
## Success Criteria
**Story generation succeeds when:**
1. ✅ 7 top-level ## sections present
2. ✅ Gap Analysis subsection exists with ✅/❌/⚠️ verified status
3. ✅ Checkboxes match codebase reality (spot-checked)
4. ✅ Dev Notes has all mandatory subsections
5. ✅ Definition of Done checklist included
6. ✅ File saved to correct location
7. ✅ Sprint status updated
---
**WORKFLOW COMPLETE - Ready to execute.**


@@ -1,179 +0,0 @@
# Story {{epic_num}}.{{story_num}}: {{story_title}}
**Status:** ready-for-dev
**Epic:** {{epic_num}}
**Priority:** {{priority}}
**Estimated Effort:** {{effort}}
---
## Story
As a **{{role}}**,
I want to **{{action}}**,
So that **{{benefit}}**.
---
## Business Context
### Why This Matters
{{business_value}}
### Production Reality
{{scale_requirements}}
{{compliance_requirements}}
{{urgency}}
---
## Acceptance Criteria
{{acceptance_criteria}}
---
## Tasks / Subtasks
{{tasks_subtasks}}
---
## Dev Notes
### Gap Analysis: Current State vs Requirements
**✅ IMPLEMENTED (Verified by Codebase Scan):**
{{implemented_components}}
**❌ MISSING (Required for AC Completion):**
{{missing_components}}
**⚠️ PARTIAL (Needs Enhancement):**
{{partial_components}}
### Architecture Compliance
{{architecture_patterns}}
### Library/Framework Requirements
**Current Dependencies:**
```json
{{current_dependencies}}
```
**Required Additions:**
```json
{{required_dependencies}}
```
### File Structure Requirements
**Completed Files:**
```
{{existing_files}}
```
**Required New Files:**
```
{{required_files}}
```
### Testing Requirements
**Current Test Coverage:** {{test_count}} tests passing
**Required Additional Tests:**
{{required_tests}}
**Target:** {{coverage_target}}
### Dev Agent Guardrails
{{guardrails}}
### Previous Story Intelligence
{{previous_story_learnings}}
### Project Structure Notes
{{structure_alignment}}
### References
{{references}}
---
## Definition of Done
### Code Quality (BLOCKING)
- [ ] Type check passes: `pnpm type-check` (zero errors)
- [ ] Zero `any` types in new code
- [ ] Lint passes: `pnpm lint` (zero errors in new code)
- [ ] Build succeeds: `pnpm build`
### Testing (BLOCKING)
- [ ] Unit tests: {{coverage_target}} coverage
- [ ] Integration tests: Key workflows validated
- [ ] All tests pass: New + existing (zero regressions)
### Security (BLOCKING)
- [ ] Dependency scan: `pnpm audit` (zero high/critical)
- [ ] No hardcoded secrets
- [ ] Input validation on all endpoints
- [ ] Auth checks on protected endpoints
- [ ] Audit logging on mutations
### Architecture Compliance (BLOCKING)
- [ ] Multi-tenant isolation: dealerId in all queries
- [ ] Cache namespacing: Cache keys include siteId
- [ ] Performance: External APIs cached, no N+1 queries
- [ ] Error handling: No silent failures
- [ ] Follows patterns from playbooks
### Deployment Validation (BLOCKING)
- [ ] Service starts: `pnpm dev` runs successfully
- [ ] Health check: `/health` returns 200
- [ ] Smoke test: Primary functionality verified
### Documentation (BLOCKING)
- [ ] API docs: Swagger decorators on endpoints
- [ ] Inline comments: Complex logic explained
- [ ] Story file: Dev Agent Record complete
---
## Dev Agent Record
### Agent Model Used
(To be filled by dev agent)
### Implementation Summary
(To be filled by dev agent)
### File List
(To be filled by dev agent)
### Test Results
(To be filled by dev agent)
### Completion Notes
(To be filled by dev agent)
---
**Generated by:** /create-story-with-gap-analysis
**Date:** {{date}}


@ -0,0 +1,286 @@
# Create Story with Gap Analysis v3.0 - Verified Story Generation
<purpose>
Regenerate story with VERIFIED codebase gap analysis.
Uses Glob/Read tools to determine what actually exists vs what's missing.
Checkboxes reflect reality, not guesses.
</purpose>
<philosophy>
**Truth from Codebase, Not Assumptions**
1. Scan codebase for actual implementations
2. Verify files exist, check for stubs/TODOs
3. Check test coverage
4. Generate story with checkboxes matching reality
5. No guessing—every checkbox has evidence
</philosophy>
<config>
name: create-story-with-gap-analysis
version: 3.0.0
verification_status:
verified: "[x]" # File exists, real implementation, tests exist
partial: "[~]" # File exists but stub/TODO or no tests
missing: "[ ]" # File does not exist
defaults:
update_sprint_status: true
create_report: false
</config>
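The three `verification_status` markers map mechanically onto scan evidence. A minimal sketch of that mapping (the boolean field names `exists`, `is_stub`, and `has_tests` are illustrative shorthand for the evidence gathered later in the scan, not part of the workflow config):

```python
def checkbox_for(exists: bool, is_stub: bool, has_tests: bool) -> str:
    """Map scan evidence to the checkbox markers defined in <config>."""
    if not exists:
        return "[ ]"  # missing: file not found by Glob
    if is_stub or not has_tests:
        return "[~]"  # partial: stub/TODO found, or no tests
    return "[x]"      # verified: real implementation with tests
```

Every checkbox the workflow emits should be reproducible from this function given the scan results, which is what makes the story auditable.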
<execution_context>
@patterns/verification.md
@patterns/hospital-grade.md
</execution_context>
<process>
<step name="initialize" priority="first">
**Identify story and load context**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔍 STORY REGENERATION WITH GAP ANALYSIS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
**Ask user for story:**
```
Which story should I regenerate with gap analysis?
Provide:
- Story number (e.g., "1.9" or "1-9")
- OR story filename
Your choice:
```
**Parse input:**
- Extract epic_num, story_num
- Locate story file
**Load existing story:**
```bash
Read: {{story_dir}}/story-{{epic_num}}.{{story_num}}.md
```
Extract:
- Story title
- User story (As a... I want... So that...)
- Acceptance criteria
- Tasks
- Dev Notes
**Load epic context:**
```bash
Read: {{planning_artifacts}}/epics.md
```
Extract:
- Epic business objectives
- Technical constraints
- Dependencies
**Determine target directories:**
From story title/requirements, identify which directories to scan.
```
✅ Story Context Loaded
Story: {{epic_num}}.{{story_num}} - {{title}}
Target directories:
{{#each directories}}
- {{this}}
{{/each}}
[C] Continue to Codebase Scan
```
</step>
<step name="codebase_scan">
**VERIFY what code actually exists**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔍 CODEBASE SCAN
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
**For each target directory:**
1. **List all source files:**
```bash
Glob: {{target_dir}}/src/**/*.ts
Glob: {{target_dir}}/src/**/*.tsx
```
2. **Check for specific required components:**
Based on story ACs, check if required files exist:
```bash
Glob: {{target_dir}}/src/auth/controllers/*oauth*.ts
# Result: ✅ EXISTS or ❌ MISSING
```
3. **Verify implementation depth:**
For files that exist, check quality:
```bash
Read: {{file}}
# Check for stubs
Grep: "MOCK|TODO|FIXME|Not implemented" {{file}}
# If found: ⚠️ STUB
```
4. **Check dependencies:**
```bash
Read: {{target_dir}}/package.json
# Required: axios - Found? ✅/❌
# Required: @aws-sdk/client-secrets-manager - Found? ✅/❌
```
5. **Check test coverage:**
```bash
Glob: {{target_dir}}/src/**/*.spec.ts
Glob: {{target_dir}}/test/**/*.test.ts
```
</step>
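The scan above (Glob for the file, read it, grep for stub markers) can be sketched as a single helper. This is a hedged illustration of the logic, not the workflow engine's implementation; the return-dict shape is an assumption:

```python
import re
from pathlib import Path

# Same stub markers the workflow greps for (case-sensitive, as above)
STUB_PATTERN = re.compile(r"MOCK|TODO|FIXME|Not implemented")

def scan_component(target_dir: str, glob_pattern: str) -> dict:
    """Return verified status for one required component."""
    matches = [p for p in Path(target_dir).glob(glob_pattern) if p.is_file()]
    if not matches:
        return {"status": "missing", "files": []}
    # A file counts as a stub if any stub marker appears in its text
    stubs = [str(p) for p in matches
             if STUB_PATTERN.search(p.read_text(errors="ignore"))]
    return {
        "status": "partial" if stubs else "implemented",
        "files": [str(p) for p in matches],
        "stubs": stubs,
    }
```

Each acceptance criterion's expected file maps to one `scan_component` call, and the resulting status feeds the gap analysis directly.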
<step name="generate_gap_analysis">
**Create verified gap analysis**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📊 GAP ANALYSIS RESULTS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ IMPLEMENTED (Verified):
{{#each implemented}}
{{@index}}. **{{name}}**
- File: {{file}} ✅ EXISTS
- Status: {{status}}
- Tests: {{test_count}} tests
{{/each}}
❌ MISSING (Verified):
{{#each missing}}
{{@index}}. **{{name}}**
- Expected: {{expected_file}} ❌ NOT FOUND
- Needed for: {{requirement}}
{{/each}}
⚠️ PARTIAL (Stub/Incomplete):
{{#each partial}}
{{@index}}. **{{name}}**
- File: {{file}} ✅ EXISTS
- Issue: {{issue}}
{{/each}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
</step>
<step name="generate_story">
**Generate story with verified checkboxes**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📝 GENERATING STORY
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
Use story template with:
- `[x]` for VERIFIED items (evidence: file exists, not stub, has tests)
- `[~]` for PARTIAL items (evidence: file exists but stub/no tests)
- `[ ]` for MISSING items (evidence: file not found)
**Write story file:**
```bash
Write: {{story_dir}}/story-{{epic_num}}.{{story_num}}.md
```
**Validate generated story:**
```bash
# Check 7 sections exist
grep "^## " {{story_file}} | wc -l
# Should be 7
# Check gap analysis section exists
grep "Gap Analysis" {{story_file}}
```
</step>
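The post-write validation above can be expressed as a small check. A sketch, assuming the seven required top-level sections are the ones named in the story template earlier in this commit:

```python
REQUIRED_SECTIONS = [
    "Story", "Business Context", "Acceptance Criteria", "Tasks / Subtasks",
    "Dev Notes", "Definition of Done", "Dev Agent Record",
]

def validate_story(markdown: str) -> list[str]:
    """Return a list of problems; an empty list means the story passes."""
    # Top-level sections use "## " in the template; "###" lines are excluded
    headings = [line[3:].strip() for line in markdown.splitlines()
                if line.startswith("## ")]
    problems = [f"missing section: {s}" for s in REQUIRED_SECTIONS
                if s not in headings]
    if "Gap Analysis" not in markdown:
        problems.append("missing Gap Analysis subsection")
    return problems
```

A non-empty result here would block the workflow from reporting "Sections: 7/7".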
<step name="update_sprint_status" if="update_sprint_status">
**Update sprint-status.yaml**
```bash
Read: {{sprint_status}}
# Update story status to "ready-for-dev" if was "backlog"
# Preserve comments and structure
Write: {{sprint_status}}
```
</step>
<step name="final_summary">
**Report completion**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ STORY REGENERATED WITH GAP ANALYSIS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Story: {{epic_num}}.{{story_num}} - {{title}}
File: {{story_file}}
Sections: 7/7 ✅
Gap Analysis Summary:
- ✅ {{implemented_count}} components VERIFIED complete
- ❌ {{missing_count}} components VERIFIED missing
- ⚠️ {{partial_count}} components PARTIAL (stub/no tests)
Checkboxes reflect VERIFIED codebase state.
Next Steps:
1. Review story for accuracy
2. Use /dev-story to implement missing components
3. Story provides complete context for implementation
[N] Regenerate next story
[Q] Quit
[R] Review generated story
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
**If [N]:** Loop back to initialize with next story.
**If [R]:** Display story content, then show menu.
</step>
</process>
<examples>
```bash
# Regenerate specific story
/create-story-with-gap-analysis
> Which story? 1.9
# With explicit story file
/create-story-with-gap-analysis story_file=docs/sprint-artifacts/story-1.9.md
```
</examples>
<failure_handling>
**Story not found:** HALT with clear error.
**Target directory not found:** Warn, scan available directories.
**Glob/Read fails:** Log warning, count as MISSING.
**Write fails:** Report error, display generated content.
</failure_handling>
<success_criteria>
- [ ] Codebase scanned for all story requirements
- [ ] Gap analysis generated with evidence
- [ ] Story written with verified checkboxes
- [ ] 7 sections present
- [ ] Sprint status updated (if enabled)
</success_criteria>


@ -14,10 +14,9 @@ story_dir: "{implementation_artifacts}"
# Workflow components
installed_path: "{project-root}/_bmad/bmm/workflows/4-implementation/create-story-with-gap-analysis"
template: "{installed_path}/template.md"
-instructions: "{installed_path}/step-01-initialize.md"
+instructions: "{installed_path}/workflow.md"
-# Variables and inputs
+# Variables
variables:
sprint_status: "{implementation_artifacts}/sprint-status.yaml"
epics_file: "{planning_artifacts}/epics.md"
@ -28,12 +27,6 @@ project_context: "**/project-context.md"
default_output_file: "{story_dir}/{{story_key}}.md"
-# Workflow steps (processed in order)
-steps:
-  - step-01-initialize.md
-  - step-02-codebase-scan.md
-  - step-03-generate-story.md
standalone: true
web_bundle: false



@ -1,625 +0,0 @@
# Detect Ghost Features - Reverse Gap Analysis (Who You Gonna Call?)
<critical>The workflow execution engine is governed by: {project-root}/_bmad/core/tasks/workflow.xml</critical>
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
<workflow>
<step n="1" goal="Load all stories in scope">
<action>Determine scan scope based on parameters:</action>
<check if="scan_scope == 'epic' AND epic_number provided">
<action>Read {sprint_status}</action>
<action>Filter stories starting with "{{epic_number}}-"</action>
<action>Store as: stories_in_scope</action>
<output>🔍 Scanning Epic {{epic_number}} stories for documented features...</output>
</check>
<check if="scan_scope == 'sprint'">
<action>Read {sprint_status}</action>
<action>Get ALL story keys (exclude epics and retrospectives)</action>
<action>Store as: stories_in_scope</action>
<output>🔍 Scanning entire sprint for documented features...</output>
</check>
<check if="scan_scope == 'codebase'">
<action>Set stories_in_scope = ALL stories found in {sprint_artifacts}</action>
<output>🔍 Scanning entire codebase for documented features...</output>
</check>
<action>For each story in stories_in_scope:</action>
<action> Read story file</action>
<action> Extract documented artifacts:</action>
<action> - File List (all paths mentioned)</action>
<action> - Tasks (all file/component/service names mentioned)</action>
<action> - ACs (all features/functionality mentioned)</action>
<action> Store in: documented_artifacts[story_key] = {files, components, services, apis, features}</action>
<output>
✅ Loaded {{stories_in_scope.length}} stories
📋 Documented artifacts extracted from {{total_sections}} sections
</output>
</step>
<step n="2" goal="Scan codebase for actual implementations">
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
👻 SCANNING FOR GHOST FEATURES
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Looking for: Components, APIs, Services, DB Tables, Models
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
<substep n="2a" title="Scan for React/Vue/Angular components">
<check if="scan_for.components == true">
<action>Use Glob to find component files:</action>
<action> - **/*.component.{tsx,jsx,ts,js,vue} (Angular/Vue pattern)</action>
<action> - **/components/**/*.{tsx,jsx} (React pattern)</action>
<action> - **/src/**/*{Component,View,Screen,Page}.{tsx,jsx} (Named pattern)</action>
<action>For each found component file:</action>
<action> Extract component name from filename or export</action>
<action> Check file size (ignore files under 50 lines as trivial)</action>
<action> Read file to determine if it's a significant feature</action>
<action>Store as: codebase_components = [{name, path, size, purpose}]</action>
<output>📦 Found {{codebase_components.length}} components</output>
</check>
</substep>
<substep n="2b" title="Scan for API endpoints">
<check if="scan_for.api_endpoints == true">
<action>Use Glob to find API files:</action>
<action> - **/api/**/*.{ts,js} (Next.js/Express pattern)</action>
<action> - **/*.controller.{ts,js} (NestJS pattern)</action>
<action> - **/routes/**/*.{ts,js} (Generic routes)</action>
<action>Use Grep to find endpoint definitions:</action>
<action> - @Get|@Post|@Put|@Delete decorators (NestJS)</action>
<action> - export async function GET|POST|PUT|DELETE (Next.js App Router)</action>
<action> - router.get|post|put|delete (Express)</action>
<action> - app.route (Flask/FastAPI if Python)</action>
<action>For each endpoint found:</action>
<action> Extract: HTTP method, path, handler name</action>
<action> Read file to understand functionality</action>
<action>Store as: codebase_apis = [{method, path, handler, file}]</action>
<output>🌐 Found {{codebase_apis.length}} API endpoints</output>
</check>
</substep>
<substep n="2c" title="Scan for database tables">
<check if="scan_for.database_tables == true">
<action>Use Glob to find schema files:</action>
<action> - **/prisma/schema.prisma (Prisma)</action>
<action> - **/*.entity.{ts,js} (TypeORM)</action>
<action> - **/models/**/*.{ts,js} (Mongoose/Sequelize)</action>
<action> - **/*-table.ts (Custom)</action>
<action>Use Grep to find table definitions:</action>
<action> - model (Prisma)</action>
<action> - @Entity (TypeORM)</action>
<action> - createTable (Migrations)</action>
<action>For each table found:</action>
<action> Extract: table name, columns, relationships</action>
<action>Store as: codebase_tables = [{name, file, columns}]</action>
<output>🗄️ Found {{codebase_tables.length}} database tables</output>
</check>
</substep>
<substep n="2d" title="Scan for services/modules">
<check if="scan_for.services == true">
<action>Use Glob to find service files:</action>
<action> - **/*.service.{ts,js}</action>
<action> - **/services/**/*.{ts,js}</action>
<action> - **/*Service.{ts,js}</action>
<action>For each service found:</action>
<action> Extract: service name, key methods, dependencies</action>
<action> Ignore trivial services (under 100 lines)</action>
<action>Store as: codebase_services = [{name, file, methods}]</action>
<output>⚙️ Found {{codebase_services.length}} services</output>
</check>
</substep>
</step>
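Substep 2b's endpoint grep reduces to a few regexes over each file's text. A sketch of that extraction (the two patterns below cover only the NestJS-decorator and Express-router cases named above and are illustrative, not exhaustive):

```python
import re

ENDPOINT_PATTERNS = [
    # NestJS: @Get('path'), @Post("path"), ...
    re.compile(r"@(Get|Post|Put|Delete)\(\s*['\"]?([^'\")]*)"),
    # Express: router.get('/path', handler), ...
    re.compile(r"router\.(get|post|put|delete)\(\s*['\"]([^'\"]+)"),
]

def find_endpoints(source: str) -> list[tuple[str, str]]:
    """Extract (HTTP method, path) pairs from one source file's text."""
    found = []
    for pattern in ENDPOINT_PATTERNS:
        for method, path in pattern.findall(source):
            found.append((method.upper(), path))
    return found
```

Running this over every file matched by the Glob patterns yields the `codebase_apis` list that Step 3 cross-references against stories.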
<step n="3" goal="Cross-reference codebase artifacts with stories">
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔍 CROSS-REFERENCING CODEBASE ↔ STORIES
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
<action>Initialize: orphaned_features = []</action>
<substep n="3a" title="Check components">
<iterate>For each component in codebase_components:</iterate>
<action>Search all stories for mentions of:</action>
<action> - Component name in File Lists</action>
<action> - Component name in Task descriptions</action>
<action> - Component file path in File Lists</action>
<action> - Feature described by component in ACs</action>
<check if="NO stories mention this component">
<action>Add to orphaned_features:</action>
<action>
type: "component"
name: {{component.name}}
path: {{component.path}}
size: {{component.size}} lines
purpose: {{inferred_purpose_from_code}}
severity: "HIGH" # Significant orphan
</action>
<output> 👻 ORPHAN: {{component.name}} ({{component.path}})</output>
</check>
<check if="stories mention this component">
<output> ✅ Documented: {{component.name}} → {{story_keys}}</output>
</check>
</substep>
<substep n="3b" title="Check API endpoints">
<iterate>For each API in codebase_apis:</iterate>
<action>Search all stories for mentions of:</action>
<action> - Endpoint path (e.g., "/api/users")</action>
<action> - HTTP method + resource (e.g., "POST users")</action>
<action> - Handler file in File Lists</action>
<action> - API functionality in ACs (e.g., "Users can create account")</action>
<check if="NO stories mention this API">
<action>Add to orphaned_features:</action>
<action>
type: "api"
method: {{api.method}}
path: {{api.path}}
handler: {{api.handler}}
file: {{api.file}}
severity: "CRITICAL" # APIs are critical functionality
</action>
<output> 👻 ORPHAN: {{api.method}} {{api.path}} ({{api.file}})</output>
</check>
</substep>
<substep n="3c" title="Check database tables">
<iterate>For each table in codebase_tables:</iterate>
<action>Search all stories for mentions of:</action>
<action> - Table name</action>
<action> - Migration file in File Lists</action>
<action> - Data model in Tasks</action>
<check if="NO stories mention this table">
<action>Add to orphaned_features:</action>
<action>
type: "database"
name: {{table.name}}
file: {{table.file}}
columns: {{table.columns.length}}
severity: "HIGH" # Database changes are significant
</action>
<output> 👻 ORPHAN: Table {{table.name}} ({{table.file}})</output>
</check>
</substep>
<substep n="3d" title="Check services">
<iterate>For each service in codebase_services:</iterate>
<action>Search all stories for mentions of:</action>
<action> - Service name or class name</action>
<action> - Service file in File Lists</action>
<action> - Service functionality in Tasks/ACs</action>
<check if="NO stories mention this service">
<action>Add to orphaned_features:</action>
<action>
type: "service"
name: {{service.name}}
file: {{service.file}}
methods: {{service.methods.length}}
severity: "MEDIUM" # Services are business logic
</action>
<output> 👻 ORPHAN: {{service.name}} ({{service.file}})</output>
</check>
</substep>
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Cross-Reference Complete
👻 Orphaned Features: {{orphaned_features.length}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
</step>
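The cross-reference in Step 3 is, at its core, a set-membership check: an artifact is orphaned when no story's extracted text mentions its name or path. A minimal sketch under that assumption (the dict shapes mirror the `documented_artifacts` and `codebase_*` stores built in Steps 1-2):

```python
def find_orphans(codebase_artifacts: list[dict],
                 documented_artifacts: dict) -> list[dict]:
    """Return codebase artifacts that no story mentions.

    documented_artifacts maps story_key -> set of names/paths pulled from
    that story's File List, Tasks, and ACs (built in Step 1).
    """
    orphans = []
    for artifact in codebase_artifacts:
        mentioned_by = [
            story_key
            for story_key, mentions in documented_artifacts.items()
            if artifact["name"] in mentions or artifact["path"] in mentions
        ]
        if not mentioned_by:
            orphans.append(artifact)
    return orphans
```

Real matching would also want fuzzy checks (feature descriptions in ACs rather than literal names), which is why the workflow searches prose as well as file lists.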
<step n="4" goal="Analyze and categorize orphans">
<action>Group orphans by type and severity:</action>
<action>
- critical_orphans (APIs, auth, payment)
- high_orphans (Components, DB tables, services)
- medium_orphans (Utilities, helpers)
- low_orphans (Config files, constants)
</action>
<action>Estimate complexity for each orphan:</action>
<action> Based on file size, dependencies, test coverage</action>
<action>Suggest epic assignment based on functionality:</action>
<action> - Auth components → Epic focusing on authentication</action>
<action> - UI components → Epic focusing on frontend</action>
<action> - API endpoints → Epic for that resource type</action>
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
👻 GHOST FEATURES DETECTED
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
**Total Orphans:** {{orphaned_features.length}}
**By Severity:**
- 🔴 CRITICAL: {{critical_orphans.length}} (APIs, security-critical)
- 🟠 HIGH: {{high_orphans.length}} (Components, DB, services)
- 🟡 MEDIUM: {{medium_orphans.length}} (Utilities, helpers)
- 🟢 LOW: {{low_orphans.length}} (Config, constants)
**By Type:**
- Components: {{component_orphans.length}}
- API Endpoints: {{api_orphans.length}}
- Database Tables: {{db_orphans.length}}
- Services: {{service_orphans.length}}
- Other: {{other_orphans.length}}
---
**CRITICAL Orphans (Immediate Action Required):**
{{#each critical_orphans}}
{{@index + 1}}. **{{type | uppercase}}**: {{name}}
File: {{file}}
Purpose: {{inferred_purpose}}
Risk: {{why_critical}}
Suggested Epic: {{suggested_epic}}
{{/each}}
---
**HIGH Priority Orphans:**
{{#each high_orphans}}
{{@index + 1}}. **{{type | uppercase}}**: {{name}}
File: {{file}}
Size: {{size}} lines / {{complexity}} complexity
Suggested Epic: {{suggested_epic}}
{{/each}}
---
**Detection Confidence:**
- Artifacts scanned: {{total_artifacts_scanned}}
- Stories cross-referenced: {{stories_in_scope.length}}
- Documentation coverage: {{documented_pct}}%
- Orphan rate: {{orphan_rate}}%
{{#if orphan_rate > 20}}
⚠️ **HIGH ORPHAN RATE** - Over 20% of codebase is undocumented!
Recommend: Comprehensive backfill story creation session
{{/if}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
</step>
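The confidence figures in the summary above follow directly from the counts. A sketch of the arithmetic, using the 20% threshold stated in the warning:

```python
def documentation_stats(total_artifacts: int, orphan_count: int) -> dict:
    """Compute the coverage figures reported at the end of Step 4."""
    if total_artifacts == 0:
        return {"documented_pct": 100.0, "orphan_rate": 0.0,
                "high_orphan_rate": False}
    orphan_rate = 100.0 * orphan_count / total_artifacts
    return {
        "documented_pct": round(100.0 - orphan_rate, 1),
        "orphan_rate": round(orphan_rate, 1),
        "high_orphan_rate": orphan_rate > 20,  # triggers the backfill warning
    }
```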
<step n="5" goal="Propose backfill stories">
<check if="create_backfill_stories == false">
<output>
Backfill story creation disabled. To create stories for orphans, run:
/detect-ghost-features create_backfill_stories=true
</output>
<action>Jump to Step 7 (Generate Report)</action>
</check>
<check if="orphaned_features.length == 0">
<output>✅ No orphans found - all code is documented in stories!</output>
<action>Jump to Step 7</action>
</check>
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📝 PROPOSING BACKFILL STORIES ({{orphaned_features.length}})
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
<iterate>For each orphaned feature (prioritized by severity):</iterate>
<substep n="5a" title="Generate backfill story draft">
<action>Analyze orphan to understand functionality:</action>
<action> - Read implementation code</action>
<action> - Identify dependencies and related files</action>
<action> - Determine what it does (infer from code)</action>
<action> - Find tests (if any) to understand use cases</action>
<action>Generate story draft:</action>
<action>
Story Title: "Document existing {{name}} {{type}}"
Story Description:
This is a BACKFILL STORY documenting existing functionality found in the codebase
that was not tracked in any story (likely vibe-coded or manually added).
Business Context:
{{inferred_business_purpose_from_code}}
Current State:
**Implementation EXISTS:** {{file}}
- {{description_of_what_it_does}}
- {{key_features_or_methods}}
{{#if has_tests}}✅ Tests exist: {{test_files}}{{else}}❌ No tests found{{/if}}
Acceptance Criteria:
{{#each inferred_acs_from_code}}
- [ ] {{this}}
{{/each}}
Tasks:
- [x] {{name}} implementation (ALREADY EXISTS - {{file}})
{{#if missing_tests}}- [ ] Add tests for {{name}}{{/if}}
{{#if missing_docs}}- [ ] Add documentation for {{name}}{{/if}}
- [ ] Verify functionality works as expected
- [ ] Add to relevant epic or create new epic for backfills
Definition of Done:
- [x] Implementation exists and works
{{#if has_tests}}- [x] Tests exist{{else}}- [ ] Tests added{{/if}}
- [ ] Documented in story (this story)
- [ ] Assigned to appropriate epic
Story Type: BACKFILL (documenting existing code)
</action>
<output>
📄 Generated backfill story draft for: {{name}}
{{story_draft_preview}}
---
</output>
</substep>
<substep n="5b" title="Ask user if they want to create this backfill story">
<check if="auto_create == true">
<action>Create backfill story automatically</action>
<output>✅ Auto-created: {{story_filename}}</output>
</check>
<check if="auto_create == false">
<ask>
Create backfill story for {{name}}?
**Type:** {{type}}
**File:** {{file}}
**Suggested Epic:** {{suggested_epic}}
**Complexity:** {{complexity_estimate}}
[Y] Yes - Create this backfill story
[A] Auto - Create this and all remaining backfill stories
[E] Edit - Let me adjust the story draft first
[S] Skip - Don't create story for this orphan
[H] Halt - Stop backfill story creation
Your choice:
</ask>
<check if="choice == 'Y'">
<action>Create backfill story file: {sprint_artifacts}/backfill-{{type}}-{{name}}.md</action>
<action>Add to backfill_stories_created list</action>
<output>✅ Created: {{story_filename}}</output>
</check>
<check if="choice == 'A'">
<action>Set auto_create = true</action>
<action>Create this story and auto-create remaining</action>
</check>
<check if="choice == 'E'">
<ask>Provide your adjusted story content or instructions for modifications:</ask>
<action>Apply user's edits to story draft</action>
<action>Create modified backfill story</action>
</check>
<check if="choice == 'S'">
<action>Add to skipped_backfills list</action>
<output>⏭️ Skipped</output>
</check>
<check if="choice == 'H'">
<action>Exit backfill story creation loop</action>
<action>Jump to Step 6</action>
</check>
</check>
</substep>
<check if="add_to_sprint_status AND backfill_stories_created.length > 0">
<action>Load {sprint_status} file</action>
<iterate>For each created backfill story:</iterate>
<action> Add entry: {{backfill_story_key}}: backlog # BACKFILL - documents existing {{name}}</action>
<action>Save sprint-status.yaml</action>
<output>✅ Added {{backfill_stories_created.length}} backfill stories to sprint-status.yaml</output>
</check>
</step>
<step n="6" goal="Suggest epic organization for orphans">
<check if="backfill_stories_created.length > 0">
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📊 BACKFILL STORY ORGANIZATION
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
<action>Group backfill stories by suggested epic:</action>
<iterate>For each suggested_epic:</iterate>
<output>
**{{suggested_epic}}:**
{{#each backfill_stories_for_epic}}
- {{story_key}}: {{name}} ({{type}})
{{/each}}
</output>
<output>
---
**Recommendations:**
1. **Option A: Create "Epic-Backfill" for all orphans**
- Single epic containing all backfill stories
- Easy to track undocumented code
- Clear separation from feature work
2. **Option B: Distribute to existing epics**
- Add each backfill story to its logical epic
- Better thematic grouping
- May inflate epic story counts
3. **Option C: Leave in backlog**
- Don't assign to epics yet
- Review and assign during next planning
**Your choice:**
[A] Create Epic-Backfill (recommended)
[B] Distribute to existing epics
[C] Leave in backlog for manual assignment
[S] Skip epic assignment
</output>
<ask>How should backfill stories be organized?</ask>
<check if="choice == 'A'">
<action>Create epic-backfill.md in epics directory</action>
<action>Update sprint-status.yaml with epic-backfill entry</action>
<action>Assign all backfill stories to epic-backfill</action>
</check>
<check if="choice == 'B'">
<iterate>For each backfill story:</iterate>
<action> Assign to suggested_epic in sprint-status.yaml</action>
<action> Update story_key to match epic (e.g., 2-11-backfill-userprofile)</action>
</check>
<check if="choice == 'C' OR choice == 'S'">
<action>Leave stories in backlog</action>
</check>
</check>
</step>
<step n="7" goal="Generate comprehensive report">
<check if="create_report == true">
<action>Write report to: {sprint_artifacts}/ghost-features-report-{{timestamp}}.md</action>
<action>Report structure:</action>
<action>
# Ghost Features Report (Reverse Gap Analysis)
**Generated:** {{timestamp}}
**Scope:** {{scan_scope}} {{#if epic_number}}(Epic {{epic_number}}){{/if}}
## Executive Summary
**Codebase Artifacts Scanned:** {{total_artifacts_scanned}}
**Stories Cross-Referenced:** {{stories_in_scope.length}}
**Orphaned Features Found:** {{orphaned_features.length}}
**Documentation Coverage:** {{documented_pct}}%
**Backfill Stories Created:** {{backfill_stories_created.length}}
## Orphaned Features Detail
### CRITICAL Orphans ({{critical_orphans.length}})
[Full list with files, purposes, risks]
### HIGH Priority Orphans ({{high_orphans.length}})
[Full list]
### MEDIUM Priority Orphans ({{medium_orphans.length}})
[Full list]
## Backfill Stories Created
{{#each backfill_stories_created}}
- {{story_key}}: {{story_file}}
{{/each}}
## Recommendations
[Epic assignment suggestions, next steps]
## Appendix: Scan Methodology
[How detection worked, patterns used, confidence levels]
</action>
<output>📄 Full report: {{report_path}}</output>
</check>
</step>
<step n="8" goal="Final summary and next steps">
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ GHOST FEATURE DETECTION COMPLETE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
**Scan Scope:** {{scan_scope}} {{#if epic_number}}(Epic {{epic_number}}){{/if}}
**Results:**
- 👻 Orphaned Features: {{orphaned_features.length}}
- 📝 Backfill Stories Created: {{backfill_stories_created.length}}
- ⏭️ Skipped: {{skipped_backfills.length}}
- 📊 Documentation Coverage: {{documented_pct}}%
{{#if orphaned_features.length == 0}}
**EXCELLENT!** All code is documented in stories.
Your codebase and story backlog are in perfect sync.
{{/if}}
{{#if orphaned_features.length > 0 AND backfill_stories_created.length == 0}}
**Action Required:**
Run with create_backfill_stories=true to generate stories for orphans
{{/if}}
{{#if backfill_stories_created.length > 0}}
**Next Steps:**
1. **Review backfill stories** - Check generated stories for accuracy
2. **Assign to epics** - Organize backfills (or create Epic-Backfill)
3. **Update sprint-status.yaml** - Already updated with {{backfill_stories_created.length}} new entries
4. **Prioritize** - Decide when to implement tests/docs for orphans
5. **Run revalidation** - Verify orphans work as expected
**Quick Commands:**
```bash
# Revalidate a backfill story to verify functionality
/revalidate-story story_file={{backfill_stories_created[0].file}}
# Process backfill stories (add tests/docs)
/batch-super-dev filter_by_epic=backfill
```
{{/if}}
{{#if create_report}}
**Detailed Report:** {{report_path}}
{{/if}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
💡 **Pro Tip:** Run this periodically (e.g., end of each sprint) to catch
vibe-coded features before they become maintenance nightmares.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
</step>
</workflow>


@ -1,367 +0,0 @@
<workflow>
<critical>The workflow execution engine is governed by: {project-root}/_bmad/core/tasks/workflow.xml</critical>
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
<critical>Communicate all responses in {communication_language}</critical>
<step n="1" goal="Find and load story file">
<check if="{{story_file}} is provided by user">
<action>Use {{story_file}} directly</action>
<action>Read COMPLETE story file</action>
<action>Extract story_key from filename or metadata</action>
<goto anchor="gap_analysis" />
</check>
<!-- Ask user for story to validate -->
<output>🔍 **Gap Analysis - Story Task Validation**
This workflow validates story tasks against your actual codebase.
**Use Cases:**
- Audit "done" stories to verify they match reality
- Validate story tasks before starting development
- Check if completed work was actually implemented
**Provide story to validate:**
</output>
<ask>Enter story file path, story key (e.g., "1-2-auth"), or status to scan (e.g., "done", "review", "in-progress"):</ask>
<check if="user provides file path">
<action>Use provided file path as {{story_file}}</action>
<action>Read COMPLETE story file</action>
<action>Extract story_key from filename</action>
<goto anchor="gap_analysis" />
</check>
<check if="user provides story key (e.g., 1-2-auth)">
<action>Search {story_dir} for file matching pattern {{story_key}}.md</action>
<action>Set {{story_file}} to found file path</action>
<action>Read COMPLETE story file</action>
<goto anchor="gap_analysis" />
</check>
<check if="user provides status (e.g., done, review, in-progress)">
<output>🔎 Scanning sprint-status.yaml for stories with status: {{user_input}}...</output>
<check if="{{sprint_status}} file exists">
<action>Load the FULL file: {{sprint_status}}</action>
<action>Parse development_status section</action>
<action>Find all stories where status equals {{user_input}}</action>
<check if="no stories found with that status">
<output>📋 No stories found with status: {{user_input}}
Available statuses: backlog, ready-for-dev, in-progress, review, done
</output>
<action>HALT</action>
</check>
<check if="multiple stories found">
<output>Found {{count}} stories with status {{user_input}}:
{{list_of_stories}}
</output>
<ask>Which story would you like to validate? [Enter story key or 'all']:</ask>
<check if="user says 'all'">
<action>Set {{batch_mode}} = true</action>
<action>Store list of all story keys to validate</action>
<action>Set {{story_file}} to first story in list</action>
<action>Read COMPLETE story file</action>
<goto anchor="gap_analysis" />
</check>
<check if="user provides specific story key">
<action>Set {{story_file}} to selected story path</action>
<action>Read COMPLETE story file</action>
<goto anchor="gap_analysis" />
</check>
</check>
<check if="single story found">
<action>Set {{story_file}} to found story path</action>
<action>Read COMPLETE story file</action>
<goto anchor="gap_analysis" />
</check>
</check>
<check if="{{sprint_status}} file does NOT exist">
<output>⚠️ No sprint-status.yaml found. Please provide direct story file path.</output>
<action>HALT</action>
</check>
</check>
<anchor id="gap_analysis" />
</step>
<step n="2" goal="Perform gap analysis">
<critical>🔍 CODEBASE REALITY CHECK - Validate tasks against actual code!</critical>
<output>📊 **Analyzing Story: {{story_key}}**
Scanning codebase to validate tasks...
</output>
<!-- Extract story context -->
<action>Parse story sections: Story, Acceptance Criteria, Tasks/Subtasks, Dev Notes, Status</action>
<action>Extract all tasks and subtasks from story file</action>
<action>Identify technical areas mentioned in tasks (files, classes, functions, services, components)</action>
<!-- SCAN PHASE: Analyze actual codebase -->
<action>Determine scan targets from task descriptions:</action>
<action>- For "Create X" tasks: Check if X already exists</action>
<action>- For "Implement Y" tasks: Search for Y functionality</action>
<action>- For "Add Z" tasks: Verify Z is missing</action>
<action>- For test tasks: Check for existing test files</action>
<action>Use Glob to find relevant files matching patterns from tasks (e.g., **/*.ts, **/*.tsx, **/*.test.ts)</action>
<action>Use Grep to search for specific classes, functions, or components mentioned in tasks</action>
<action>Use Read to verify implementation details and functionality in key discovered files</action>
<!-- ANALYSIS PHASE: Compare tasks to reality -->
<action>Document scan results:</action>
**CODEBASE REALITY:**
<action>✅ What Exists:
- List verified files, classes, functions, services found
- Note implementation completeness (partial vs full)
- Identify code that tasks claim to create but already exists
</action>
<action>❌ What's Missing:
- List features mentioned in tasks but NOT found in codebase
- Identify claimed implementations that don't exist
- Note tasks marked complete but code missing
</action>
<!-- TASK VALIDATION PHASE -->
<action>For each task in the story, determine:</action>
<action>- ACCURATE: Task matches reality (code exists if task is checked, missing if unchecked)</action>
<action>- FALSE POSITIVE: Task checked [x] but code doesn't exist (BS detection!)</action>
<action>- FALSE NEGATIVE: Task unchecked [ ] but code already exists</action>
<action>- NEEDS UPDATE: Task description doesn't match current implementation</action>
<action>Generate validation report with:</action>
<action>- Tasks that are accurate</action>
<action>- Tasks that are false positives (marked done but not implemented) ⚠️</action>
<action>- Tasks that are false negatives (not marked but already exist)</action>
<action>- Recommended task updates</action>
</step>
<step n="3" goal="Present findings and recommendations">
<critical>📋 SHOW TRUTH - Compare story claims vs codebase reality</critical>
<output>
📊 **Gap Analysis Results: {{story_key}}**
**Story Status:** {{story_status}}
---
**Codebase Scan Results:**
✅ **What Actually Exists:**
{{list_of_existing_files_features_with_details}}
❌ **What's Actually Missing:**
{{list_of_missing_elements_despite_claims}}
---
**Task Validation:**
{{if_any_accurate_tasks}}
✅ **Accurate Tasks** ({{count}}):
{{list_tasks_that_match_reality}}
{{endif}}
{{if_any_false_positives}}
⚠️ **FALSE POSITIVES** ({{count}}) - Marked done but NOT implemented:
{{list_tasks_marked_complete_but_code_missing}}
**WARNING:** These tasks claim completion but code doesn't exist!
{{endif}}
{{if_any_false_negatives}}
**FALSE NEGATIVES** ({{count}}) - Not marked but ALREADY exist:
{{list_tasks_unchecked_but_code_exists}}
{{endif}}
{{if_any_needs_update}}
🔄 **NEEDS UPDATE** ({{count}}) - Task description doesn't match implementation:
{{list_tasks_needing_description_updates}}
{{endif}}
---
📝 **Proposed Story Updates:**
{{if_false_positives_found}}
**CRITICAL - Uncheck false positives:**
{{list_tasks_to_uncheck_with_reasoning}}
{{endif}}
{{if_false_negatives_found}}
**Check completed work:**
{{list_tasks_to_check_with_verification}}
{{endif}}
{{if_task_updates_needed}}
**Update task descriptions:**
{{list_task_description_updates}}
{{endif}}
{{if_gap_analysis_section_missing}}
**Add Gap Analysis section** documenting findings
{{endif}}
---
**Story Accuracy Score:** {{percentage_of_accurate_tasks}}% ({{accurate_count}}/{{total_count}})
</output>
<check if="story status is 'done' or 'review'">
<check if="false positives found">
<output>🚨 **WARNING:** This story is marked {{story_status}} but has FALSE POSITIVES!
{{count}} task(s) claim completion but code doesn't exist.
This story may have been prematurely marked complete.
**Recommendation:** Update story status to 'in-progress' and complete missing work.
</output>
</check>
</check>
</step>
<step n="4" goal="Get user decision">
<ask>**What would you like to do?**
Options:
[U] Update - Apply proposed changes to story file
[A] Audit Report - Save findings to report file without updating story
[N] No Changes - Just show me the findings
[R] Review Details - Show me more details about specific findings
[C] Continue to Next - Move to next story (batch mode only)
[Q] Quit - Exit gap analysis
</ask>
<!-- UPDATE OPTION -->
<check if="user chooses Update (U)">
<action>Update story file with proposed changes:</action>
<action>- Uncheck false positive tasks</action>
<action>- Check false negative tasks</action>
<action>- Update task descriptions as needed</action>
<action>- Add or update "Gap Analysis" section with findings</action>
<action>- Add Change Log entry: "Gap analysis performed - tasks validated against codebase ({{date}})"</action>
<check if="false positives found AND story status is done or review">
<ask>Story has false positives. Update status to 'in-progress'? [Y/n]:</ask>
<check if="user approves">
<action>Update story Status to 'in-progress'</action>
<check if="sprint_status file exists">
<action>Update sprint-status.yaml status for this story to 'in-progress'</action>
</check>
</check>
</check>
<output>✅ Story file updated with gap analysis findings.
- {{changes_count}} task(s) updated
- Gap Analysis section added/updated
- Accuracy score: {{accuracy_percentage}}%
**File:** {{story_file}}
</output>
<check if="batch_mode is true">
<ask>Continue to next story? [Y/n]:</ask>
<check if="user approves">
<action>Load next story from batch list</action>
<goto step="2">Analyze next story</goto>
</check>
</check>
<action>HALT - Gap analysis complete</action>
</check>
<!-- AUDIT REPORT OPTION -->
<check if="user chooses Audit Report (A)">
<action>Generate audit report file: {{story_dir}}/gap-analysis-report-{{story_key}}-{{date}}.md</action>
<action>Include full findings, accuracy scores, recommendations</action>
<output>📄 Audit report saved: {{report_file}}
This report can be shared with team for review.
Story file was NOT modified.
</output>
<check if="batch_mode is true">
<ask>Continue to next story? [Y/n]:</ask>
<check if="user approves">
<action>Load next story from batch list</action>
<goto step="2">Analyze next story</goto>
</check>
</check>
<action>HALT - Gap analysis complete</action>
</check>
<!-- NO CHANGES OPTION -->
<check if="user chooses No Changes (N)">
<output>ℹ️ Findings displayed only. No files modified.</output>
<action>HALT - Gap analysis complete</action>
</check>
<!-- REVIEW DETAILS OPTION -->
<check if="user chooses Review Details (R)">
<ask>Which findings would you like more details about? (specify task numbers, file names, or areas):</ask>
<action>Provide detailed analysis of requested areas using Read tool for deeper code inspection</action>
<action>After review, re-present the decision options</action>
<action>Continue based on user's subsequent choice</action>
</check>
<!-- CONTINUE TO NEXT (batch mode) -->
<check if="user chooses Continue (C) AND batch_mode is true">
<action>Load next story from batch list</action>
<goto step="2">Analyze next story</goto>
</check>
<check if="user chooses Continue (C) AND batch_mode is NOT true">
<output>⚠️ Not in batch mode. Only one story to validate.</output>
<action>HALT</action>
</check>
<!-- QUIT OPTION -->
<check if="user chooses Quit (Q)">
<output>👋 Gap analysis session ended.
{{if batch_mode}}Processed {{processed_count}}/{{total_count}} stories.{{endif}}
</output>
<action>HALT</action>
</check>
</step>
<step n="5" goal="Completion summary">
<output>✅ **Gap Analysis Complete, {user_name}!**
{{if_single_story}}
**Story Analyzed:** {{story_key}}
**Accuracy Score:** {{accuracy_percentage}}%
**Actions Taken:** {{actions_summary}}
{{endif}}
{{if_batch_mode}}
**Batch Analysis Summary:**
- Stories analyzed: {{processed_count}}
- Average accuracy: {{avg_accuracy}}%
- False positives found: {{total_false_positives}}
- Stories updated: {{updated_count}}
{{endif}}
**Next Steps:**
- Review updated stories
- Address any false positives found
- Run dev-story for stories needing work
</output>
</step>
</workflow>


@ -0,0 +1,246 @@
# Gap Analysis v3.0 - Verify Story Tasks Against Codebase
<purpose>
Validate story checkbox claims against actual codebase reality.
Find false positives (checked but not done) and false negatives (done but unchecked).
Interactive workflow with options to update, audit, or review.
</purpose>
<philosophy>
**Evidence-Based Verification**
Checkboxes lie. Code doesn't.
- Search codebase for implementation evidence
- Check for stubs, TODOs, empty functions
- Verify tests exist for claimed features
- Report accuracy of story completion claims
</philosophy>
<config>
name: gap-analysis
version: 3.0.0
defaults:
auto_update: false
create_audit_report: true
strict_mode: false # If true, stubs count as incomplete
output:
update_story: "Modify checkbox state to match reality"
audit_report: "Generate detailed gap analysis document"
no_changes: "Display results only"
</config>
<execution_context>
@patterns/verification.md
@patterns/hospital-grade.md
</execution_context>
<process>
<step name="load_story" priority="first">
**Load and parse story file**
```bash
STORY_FILE="{{story_file}}"
[ -f "$STORY_FILE" ] || { echo "❌ story file not found: $STORY_FILE"; exit 1; }
```
Use Read tool on story file. Extract:
- All `- [ ]` and `- [x]` items
- File references from Dev Agent Record
- Task descriptions with expected artifacts
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔍 GAP ANALYSIS: {{story_key}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Tasks: {{total_tasks}}
Currently checked: {{checked_count}}
```
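The checkbox tally above can be sketched as a small shell snippet (a minimal illustration with inline sample data; the `grep` patterns assume standard Markdown task-list syntax, and real story files may use indented subtasks that need a looser pattern):

```sh
# Sketch: count total and checked task boxes in a story file
STORY_FILE=$(mktemp)
printf -- '- [x] Task A\n- [ ] Task B\n- [x] Task C\n' > "$STORY_FILE"

# '- [ ]' or '- [x]' at line start counts as a task
total=$(grep -c '^- \[[ x]\]' "$STORY_FILE")
checked=$(grep -c '^- \[x\]' "$STORY_FILE")

echo "Tasks: $total, checked: $checked"
rm -f "$STORY_FILE"
```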
</step>
<step name="verify_each_task">
**Verify each task against codebase**
For each task item:
1. **Extract artifacts** - File names, component names, function names
2. **Search codebase:**
```bash
# Check file exists
Glob: {{expected_file}}
# Check function/component exists
Grep: "{{function_or_component_name}}"
```
3. **If file exists, check quality:**
```bash
# Check for stubs
Grep: "TODO|FIXME|Not implemented|throw new Error" {{file}}
# Check for tests
Glob: {{file_base}}.test.* OR {{file_base}}.spec.*
```
4. **Determine status:**
- **VERIFIED:** File exists, not a stub, tests exist
- **PARTIAL:** File exists but stub/TODO or no tests
- **MISSING:** File does not exist
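As a rough illustration, the VERIFIED/PARTIAL/MISSING decision could look like the helper below (the `classify` function and its stub markers are assumptions for the sketch, not part of the workflow; the tests-exist check is omitted for brevity):

```sh
# Sketch: classify a claimed artifact by existence and stub markers
classify() {
  file="$1"
  # Missing file means the task's claimed artifact was never created
  if [ ! -f "$file" ]; then echo "MISSING"; return; fi
  # A stub marker downgrades an existing file to PARTIAL
  if grep -Eq 'TODO|FIXME|Not implemented' "$file"; then echo "PARTIAL"; return; fi
  echo "VERIFIED"
}
```

In practice a VERIFIED verdict would also require the tests-exist check from step 3 above.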
</step>
<step name="calculate_accuracy">
**Compare claimed vs actual**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📊 GAP ANALYSIS RESULTS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Tasks analyzed: {{total}}
By Status:
- ✅ Verified Complete: {{verified}} ({{verified_pct}}%)
- ⚠️ Partial: {{partial}} ({{partial_pct}}%)
- ❌ Missing: {{missing}} ({{missing_pct}}%)
Accuracy Analysis:
- Checked & Verified: {{correct_checked}}
- Checked but MISSING: {{false_positives}} ← FALSE POSITIVES
- Unchecked but DONE: {{false_negatives}} ← FALSE NEGATIVES
Checkbox Accuracy: {{accuracy}}%
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
**If false positives found:**
```
⚠️ FALSE POSITIVES DETECTED
The following tasks are marked done but code is missing:
{{#each false_positives}}
- [ ] {{task}} — Expected: {{expected_file}} — ❌ NOT FOUND
{{/each}}
```
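The checkbox-accuracy metric can be sketched as follows (the formula — tasks whose checkbox state matches codebase reality, divided by total tasks — is an assumption inferred from the false-positive/false-negative counts above):

```sh
# Sketch: checkbox accuracy from false-positive/false-negative counts
total=20       # tasks analyzed
false_pos=3    # checked but code missing
false_neg=2    # unchecked but code verified

accurate=$((total - false_pos - false_neg))
accuracy=$((100 * accurate / total))
echo "Checkbox accuracy: ${accuracy}%"
```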
</step>
<step name="present_options">
**Ask user how to proceed**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📋 OPTIONS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
[U] Update - Fix checkboxes to match reality
[A] Audit Report - Generate detailed report file
[N] No Changes - Display only (already done)
[R] Review Details - Show full evidence for each task
Your choice:
```
</step>
<step name="option_update" if="choice == U">
**Update story file checkboxes**
For false positives:
- Change `[x]` to `[ ]` for tasks with missing code
For false negatives:
- Change `[ ]` to `[x]` for tasks with verified code
Use Edit tool to make changes.
```
✅ Story checkboxes updated
- {{fp_count}} false positives unchecked
- {{fn_count}} false negatives checked
- New completion: {{new_pct}}%
```
</step>
<step name="option_audit" if="choice == A">
**Generate audit report**
Write to: `{{story_dir}}/gap-analysis-{{story_key}}-{{timestamp}}.md`
Include:
- Executive summary
- Detailed task-by-task evidence
- False positive/negative lists
- Recommendations
```
✅ Audit report generated: {{report_path}}
```
</step>
<step name="option_review" if="choice == R">
**Show detailed evidence**
For each task:
```
Task: {{task_text}}
Checkbox: {{checked_state}}
Evidence:
- File: {{file}} - {{exists ? "✅ EXISTS" : "❌ MISSING"}}
{{#if exists}}
- Stub check: {{is_stub ? "⚠️ STUB DETECTED" : "✅ Real implementation"}}
- Tests: {{has_tests ? "✅ Tests exist" : "❌ No tests"}}
{{/if}}
Verdict: {{status}}
```
After review, return to options menu.
</step>
<step name="final_summary">
**Display completion**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ GAP ANALYSIS COMPLETE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Story: {{story_key}}
Verified Completion: {{verified_pct}}%
Checkbox Accuracy: {{accuracy}}%
{{#if updated}}
✅ Checkboxes updated to match reality
{{/if}}
{{#if report_generated}}
📄 Report: {{report_path}}
{{/if}}
{{#if false_positives > 0}}
⚠️ {{false_positives}} tasks need implementation work
{{/if}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
</step>
</process>
<examples>
```bash
# Quick gap analysis of single story
/gap-analysis story_file=docs/sprint-artifacts/2-5-auth.md
# With auto-update enabled
/gap-analysis story_file=docs/sprint-artifacts/2-5-auth.md auto_update=true
```
</examples>
<failure_handling>
**Story file not found:** HALT with clear error.
**Search fails:** Log warning, count as MISSING.
**Edit fails:** Report error, suggest manual update.
</failure_handling>
<success_criteria>
- [ ] All tasks verified against codebase
- [ ] False positives/negatives identified
- [ ] Accuracy metrics calculated
- [ ] User choice executed (update/audit/review)
</success_criteria>


@ -11,7 +11,7 @@ story_dir: "{implementation_artifacts}"
# Workflow components
installed_path: "{project-root}/_bmad/bmm/workflows/4-implementation/gap-analysis"
instructions: "{installed_path}/instructions.xml"
instructions: "{installed_path}/workflow.md"
# Variables
story_file: "" # User provides story file path or auto-discover


@ -1,957 +0,0 @@
# Migrate to GitHub - Production-Grade Story Migration
<critical>The workflow execution engine is governed by: {project-root}/_bmad/core/tasks/workflow.xml</critical>
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
<critical>RELIABILITY FIRST: This workflow prioritizes data integrity over speed</critical>
<workflow>
<step n="0" goal="Pre-Flight Safety Checks">
<critical>MUST verify all prerequisites before ANY migration operations</critical>
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🛡️ PRE-FLIGHT SAFETY CHECKS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
<substep n="0a" title="Verify GitHub MCP access">
<action>Test GitHub MCP connection:</action>
<action>Call: mcp__github__get_me()</action>
<check if="API call fails">
<output>
❌ CRITICAL: GitHub MCP not accessible
Cannot proceed with migration without GitHub API access.
Possible causes:
- GitHub MCP server not configured
- Authentication token missing or invalid
- Network connectivity issues
Fix:
1. Ensure GitHub MCP is configured in Claude settings
2. Verify token has required permissions:
- repo (full control)
- write:discussion (for comments)
3. Test connection: Try any GitHub MCP command
HALTING - Cannot migrate without GitHub access.
</output>
<action>HALT</action>
</check>
<action>Extract current user info:</action>
<action> - username: {{user.login}}</action>
<action> - user_id: {{user.id}}</action>
<output>✅ GitHub MCP connected (@{{username}})</output>
</substep>
<substep n="0b" title="Verify repository access">
<action>Verify github_owner and github_repo parameters provided</action>
<check if="parameters missing">
<output>
❌ ERROR: GitHub repository not specified
Required parameters:
github_owner: GitHub username or organization
github_repo: Repository name
Usage:
/migrate-to-github github_owner=jschulte github_repo=myproject
/migrate-to-github github_owner=jschulte github_repo=myproject mode=execute
HALTING
</output>
<action>HALT</action>
</check>
<action>Test repository access:</action>
<action>Call: mcp__github__list_issues({
owner: {{github_owner}},
repo: {{github_repo}},
per_page: 1
})</action>
<check if="repository not found or access denied">
<output>
❌ CRITICAL: Cannot access repository {{github_owner}}/{{github_repo}}
Possible causes:
- Repository doesn't exist
- Token lacks access to this repository
- Repository is private and token doesn't have permission
Verify:
1. Repository exists: <https://github.com/{{github_owner}}/{{github_repo}}>
2. Token has write access to issues
3. Repository name is spelled correctly
HALTING
</output>
<action>HALT</action>
</check>
<output>✅ Repository accessible ({{github_owner}}/{{github_repo}})</output>
</substep>
<substep n="0c" title="Verify local files exist">
<action>Check sprint-status.yaml exists:</action>
<action>test -f {{sprint_status}}</action>
<check if="file not found">
<output>
❌ ERROR: sprint-status.yaml not found at {{sprint_status}}
Cannot migrate without sprint status file.
Run /sprint-planning to generate it first.
HALTING
</output>
<action>HALT</action>
</check>
<action>Read and parse sprint-status.yaml</action>
<action>Count total stories to migrate</action>
<output>✅ Found {{total_stories}} stories in sprint-status.yaml</output>
<action>Verify story files exist:</action>
<action>For each story, try multiple naming patterns to find file</action>
<action>Report:</action>
<output>
📊 Story File Status:
- ✅ Files found: {{stories_with_files}}
- ❌ Files missing: {{stories_without_files}}
{{#if stories_without_files > 0}}
Missing: {{missing_story_keys}}
{{/if}}
</output>
<check if="stories_without_files > 0">
<ask>
⚠️ {{stories_without_files}} stories have no files
Options:
[C] Continue (only migrate stories with files)
[S] Skip these stories (add to skip list)
[H] Halt (fix missing files first)
Choice:
</ask>
<check if="choice == 'H'">
<action>HALT</action>
</check>
</check>
</substep>
<substep n="0d" title="Check for existing migration">
<action>Check if state file exists: {{state_file}}</action>
<check if="state file exists">
<action>Read migration state</action>
<action>Extract: stories_migrated, issues_created, last_completed, timestamp</action>
<output>
⚠️ Previous migration detected
Last migration:
- Date: {{migration_timestamp}}
- Stories migrated: {{stories_migrated.length}}
- Issues created: {{issues_created.length}}
- Last completed: {{last_completed}}
- Status: {{migration_status}}
Options:
[R] Resume (continue from where it left off)
[F] Fresh (start over, may create duplicates if not careful)
[V] View (show what was migrated)
[D] Delete state (clear and start fresh)
Choice:
</output>
<ask>How to proceed?</ask>
<check if="choice == 'R'">
<action>Set resume_mode = true</action>
<action>Load list of already-migrated stories</action>
<action>Filter them out from migration queue</action>
<output>✅ Resuming from story: {{last_completed}}</output>
</check>
<check if="choice == 'F'">
<output>⚠️ WARNING: Fresh start may create duplicate issues if stories were already migrated.</output>
<ask>Confirm fresh start (will check for duplicates)? (yes/no):</ask>
<check if="not confirmed">
<action>HALT</action>
</check>
</check>
<check if="choice == 'V'">
<action>Display migration state details</action>
<action>Then re-prompt for choice</action>
</check>
<check if="choice == 'D'">
<action>Delete state file</action>
<action>Set resume_mode = false</action>
<output>✅ State cleared</output>
</check>
</check>
</substep>
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ PRE-FLIGHT CHECKS PASSED
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
- GitHub MCP: Connected
- Repository: Accessible
- Sprint status: Loaded ({{total_stories}} stories)
- Story files: {{stories_with_files}} found
- Mode: {{mode}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
</step>
<step n="1" goal="Dry-run mode - Preview migration plan">
<check if="mode != 'dry-run'">
<action>Skip to Step 2 (Execute mode)</action>
</check>
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔍 DRY-RUN MODE (Preview Only - No Changes)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
This will show what WOULD happen without actually creating issues.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
<action>For each story in sprint-status.yaml:</action>
<iterate>For each story_key:</iterate>
<substep n="1a" title="Check if issue already exists">
<action>Search GitHub: mcp__github__search_issues({
query: "repo:{{github_owner}}/{{github_repo}} label:story:{{story_key}}"
})</action>
<check if="issue found">
<action>would_update = {{update_existing}}</action>
<output>
📝 Story {{story_key}}:
GitHub: Issue #{{existing_issue.number}} EXISTS
Action: {{#if would_update}}Would UPDATE{{else}}Would SKIP{{/if}}
Current labels: {{existing_issue.labels}}
Current assignee: {{existing_issue.assignee || "none"}}
</output>
</check>
<check if="issue not found">
<action>would_create = true</action>
<action>Read local story file</action>
<action>Parse: title, ACs, tasks, epic, status</action>
<output>
📝 Story {{story_key}}:
GitHub: NOT FOUND
Action: Would CREATE
Proposed Issue:
- Title: "Story {{story_key}}: {{parsed_title}}"
- Labels: type:story, story:{{story_key}}, status:{{status}}, epic:{{epic_number}}, complexity:{{complexity}}
- Milestone: Epic {{epic_number}}
- Acceptance Criteria: {{ac_count}} items
- Tasks: {{task_count}} items
- Assignee: {{#if status == 'in-progress'}}@{{infer_from_git_log}}{{else}}none{{/if}}
</output>
</check>
</substep>
<action>Count actions:</action>
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📊 DRY-RUN SUMMARY
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
**Total Stories:** {{total_stories}}
**Actions:**
- ✅ Would CREATE: {{would_create_count}} new issues
- 🔄 Would UPDATE: {{would_update_count}} existing issues
- ⏭️ Would SKIP: {{would_skip_count}} (existing, no update)
**Epics/Milestones:**
- Would CREATE: {{epic_milestones_to_create.length}} milestones
- Already exist: {{epic_milestones_existing.length}}
**Estimated API Calls:**
- Issue searches: {{total_stories}} (check existing)
- Issue creates: {{would_create_count}}
- Issue updates: {{would_update_count}}
- Milestone operations: {{milestone_operations}}
- **Total:** ~{{total_api_calls}} API calls
**Rate Limit Impact:**
- Authenticated limit: 5000/hour
- This migration: ~{{total_api_calls}} calls
- Remaining after: ~{{5000 - total_api_calls}}
- Safe: {{#if total_api_calls < 1000}}YES{{else}}Borderline (consider smaller batches){{/if}}
**Estimated Duration:** {{estimated_minutes}} minutes
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
⚠️ This was a DRY-RUN. No issues were created.
To execute the migration:
/migrate-to-github mode=execute github_owner={{github_owner}} github_repo={{github_repo}}
To migrate only Epic 2:
/migrate-to-github mode=execute filter_by_epic=2 github_owner={{github_owner}} github_repo={{github_repo}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
<action>Exit workflow (dry-run complete)</action>
</step>
<step n="2" goal="Execute mode - Perform migration with atomic operations">
<check if="mode != 'execute'">
<action>Skip to Step 3 (Verify mode)</action>
</check>
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
⚡ EXECUTE MODE (Migrating Stories to GitHub)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
**SAFETY GUARANTEES:**
✅ Idempotent - Can re-run safely (checks for duplicates)
✅ Atomic - Each story fully succeeds or rolls back
✅ Verified - Reads back each created issue
✅ Resumable - Saves state after each story
✅ Reversible - Creates rollback manifest
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
<ask>
⚠️ FINAL CONFIRMATION
You are about to create ~{{would_create_count}} GitHub Issues.
This operation:
- WILL create issues in {{github_owner}}/{{github_repo}}
- WILL modify your GitHub repository
- CAN be rolled back (we'll create rollback manifest)
- CANNOT be undone automatically after issues are created
Have you:
- [ ] Run dry-run mode to preview?
- [ ] Verified repository is correct?
- [ ] Backed up sprint-status.yaml?
- [ ] Confirmed you want to proceed?
Type "I understand and want to proceed" to continue:
</ask>
<check if="confirmation != 'I understand and want to proceed'">
<output>❌ Migration cancelled - confirmation not received</output>
<action>HALT</action>
</check>
<action>Initialize migration state:</action>
<action>
migration_state = {
started_at: {{timestamp}},
mode: "execute",
github_owner: {{github_owner}},
github_repo: {{github_repo}},
total_stories: {{total_stories}},
stories_migrated: [],
issues_created: [],
issues_updated: [],
issues_failed: [],
rollback_manifest: [],
last_completed: null
}
</action>
<action>Save initial state to {{state_file}}</action>
<action>Initialize rollback manifest (for safety):</action>
<action>rollback_manifest = {
created_at: {{timestamp}},
github_owner: {{github_owner}},
github_repo: {{github_repo}},
created_issues: [] # Will track issue numbers for rollback
}</action>
<iterate>For each story in sprint-status.yaml:</iterate>
<substep n="2a" title="Migrate single story (ATOMIC)">
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📦 Migrating {{current_index}}/{{total_stories}}: {{story_key}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
<action>Read local story file</action>
<check if="file not found">
<output> ⏭️ SKIP - No file found</output>
<action>Add to migration_state.issues_failed with reason: "File not found"</action>
<action>Continue to next story</action>
</check>
<action>Parse story file:</action>
<action> - Extract all 12 sections</action>
<action> - Parse Acceptance Criteria (convert to checkboxes)</action>
<action> - Parse Tasks (convert to checkboxes)</action>
<action> - Extract metadata: epic_number, complexity</action>
<action>Check if issue already exists (idempotent check):</action>
<action>Call: mcp__github__search_issues({
query: "repo:{{github_owner}}/{{github_repo}} label:story:{{story_key}}"
})</action>
<check if="issue exists AND update_existing == false">
<output> ✅ EXISTS - Issue #{{existing_issue.number}} (skipping, update_existing=false)</output>
<action>Add to migration_state.stories_migrated (already done)</action>
<action>Continue to next story</action>
</check>
<check if="issue exists AND update_existing == true">
<output> 🔄 EXISTS - Issue #{{existing_issue.number}} (updating)</output>
<action>ATOMIC UPDATE with retry:</action>
<action>
attempt = 0
max_attempts = {{max_retries}} + 1
WHILE attempt < max_attempts:
TRY:
# Update issue
result = mcp__github__issue_write({
method: "update",
owner: {{github_owner}},
repo: {{github_repo}},
issue_number: {{existing_issue.number}},
title: "Story {{story_key}}: {{parsed_title}}",
body: {{convertStoryToIssueBody(parsed)}},
labels: {{generateLabels(story_key, status, parsed)}}
})
# Verify update succeeded (read back)
sleep 1 second # GitHub eventual consistency
verification = mcp__github__issue_read({
method: "get",
owner: {{github_owner}},
repo: {{github_repo}},
issue_number: {{existing_issue.number}}
})
# Check verification
IF verification.title != expected_title:
THROW "Write verification failed"
# Success!
output: " ✅ UPDATED and VERIFIED - Issue #{{existing_issue.number}}"
BREAK
CATCH error:
attempt++
IF attempt < max_attempts:
sleep {{retry_backoff_ms[attempt]}}
output: " ⚠️ Retry {{attempt}}/{{max_retries}} after error: {{error}}"
ELSE:
output: " ❌ FAILED after {{max_retries}} retries: {{error}}"
add to migration_state.issues_failed
IF halt_on_critical_error:
HALT
ELSE:
CONTINUE to next story
</action>
<action>Add to migration_state.issues_updated</action>
</check>
<check if="issue does NOT exist">
<output> 🆕 CREATING new issue...</output>
<action>Generate issue body from story file:</action>
<action>
issue_body = """
**Story File:** [{{story_key}}.md]({{file_path_in_repo}})
**Epic:** {{epic_number}}
**Complexity:** {{complexity}} ({{task_count}} tasks)
## Business Context
{{parsed.businessContext}}
## Acceptance Criteria
{{#each parsed.acceptanceCriteria}}
- [ ] AC{{@index + 1}}: {{this}}
{{/each}}
## Tasks
{{#each parsed.tasks}}
- [ ] {{this}}
{{/each}}
## Technical Requirements
{{parsed.technicalRequirements}}
## Definition of Done
{{#each parsed.definitionOfDone}}
- [ ] {{this}}
{{/each}}
---
_Migrated from BMAD local files_
_Sync timestamp: {{timestamp}}_
_Local file: `{{story_file_path}}`_
"""
</action>
<action>Generate labels:</action>
<action>
labels = [
"type:story",
"story:{{story_key}}",
"status:{{current_status}}",
"epic:{{epic_number}}",
"complexity:{{complexity}}"
]
{{#if has_high_risk_keywords}}
labels.push("risk:high")
{{/if}}
</action>
<action>ATOMIC CREATE with retry and verification:</action>
<action>
attempt = 0
WHILE attempt < max_attempts:
TRY:
# Create issue
created_issue = mcp__github__issue_write({
method: "create",
owner: {{github_owner}},
repo: {{github_repo}},
title: "Story {{story_key}}: {{parsed_title}}",
body: {{issue_body}},
labels: {{labels}}
})
issue_number = created_issue.number
# CRITICAL: Verify creation succeeded (read back)
sleep 2 seconds # GitHub eventual consistency
verification = mcp__github__issue_read({
method: "get",
owner: {{github_owner}},
repo: {{github_repo}},
issue_number: {{issue_number}}
})
# Verify all fields
IF verification.title != expected_title:
THROW "Title mismatch after create"
IF NOT verification.labels.includes("story:{{story_key}}"):
THROW "Story label missing after create"
# Success - record for rollback capability
output: " ✅ CREATED and VERIFIED - Issue #{{issue_number}}"
rollback_manifest.created_issues.push({
story_key: {{story_key}},
issue_number: {{issue_number}},
created_at: {{timestamp}}
})
migration_state.issues_created.push({
story_key: {{story_key}},
issue_number: {{issue_number}}
})
BREAK
CATCH error:
attempt++
# Check if issue was created despite error (orphaned issue)
check_result = mcp__github__search_issues({
query: "repo:{{github_owner}}/{{github_repo}} label:story:{{story_key}}"
})
IF check_result.length > 0:
# Issue was created, verification failed - treat as success
output: " ✅ CREATED (verification had transient error)"
BREAK
IF attempt < max_attempts:
sleep {{retry_backoff_ms[attempt]}}
output: " ⚠️ Retry {{attempt}}/{{max_retries}}"
ELSE:
output: " ❌ FAILED after {{max_retries}} retries: {{error}}"
migration_state.issues_failed.push({
story_key: {{story_key}},
error: {{error}},
attempts: {{attempt}}
})
IF halt_on_critical_error:
output: "HALTING - Critical error during migration"
save migration_state
HALT
ELSE:
output: "Continuing despite failure (continue_on_failure=true)"
CONTINUE to next story
</action>
</check>
<action>Update migration state:</action>
<action>migration_state.stories_migrated.push({{story_key}})</action>
<action>migration_state.last_completed = {{story_key}}</action>
<check if="save_state_after_each == true">
<action>Save migration state to {{state_file}}</action>
<action>Save rollback manifest to {{output_folder}}/migration-rollback-{{timestamp}}.yaml</action>
</check>
<check if="current_index % 10 == 0">
<output>
📊 Progress: {{current_index}}/{{total_stories}} migrated
Created: {{issues_created.length}}
Updated: {{issues_updated.length}}
Failed: {{issues_failed.length}}
</output>
</check>
</substep>
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ MIGRATION COMPLETE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
**Total:** {{total_stories}} stories processed
**Created:** {{issues_created.length}} new issues
**Updated:** {{issues_updated.length}} existing issues
**Failed:** {{issues_failed.length}} errors
**Duration:** {{actual_duration}}
{{#if issues_failed.length > 0}}
**Failed Stories:**
{{#each issues_failed}}
- {{story_key}}: {{error}}
{{/each}}
Recommendation: Fix errors and re-run migration (will skip already-migrated stories)
{{/if}}
**Rollback Manifest:** {{rollback_manifest_path}}
(Use this file to delete created issues if needed)
**State File:** {{state_file}}
(Tracks migration progress for resume capability)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
<action>Continue to Step 3 (Verify)</action>
</step>
<step n="3" goal="Verify mode - Double-check migration accuracy">
<check if="mode != 'verify' AND mode != 'execute'">
<action>Skip to Step 4</action>
</check>
<check if="mode == 'execute'">
<ask>
Migration complete. Run verification to double-check accuracy? (yes/no):
</ask>
<check if="response != 'yes'">
<action>Skip to Step 5 (Report)</action>
</check>
</check>
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔍 VERIFICATION MODE (Double-Checking Migration)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
<action>Load migration state from {{state_file}}</action>
<iterate>For each migrated story in migration_state.stories_migrated:</iterate>
<action>Fetch issue from GitHub:</action>
<action>Search: label:story:{{story_key}}</action>
<check if="issue not found">
<output> ❌ VERIFICATION FAILED: {{story_key}} - Issue not found in GitHub</output>
<action>Add to verification_failures</action>
</check>
<check if="issue found">
<action>Verify fields match expected:</action>
<action> - Title contains story_key ✓</action>
<action> - Label "story:{{story_key}}" exists ✓</action>
<action> - Status label matches sprint-status.yaml ✓</action>
<action> - AC count matches local file ✓</action>
<check if="all fields match">
<output> ✅ VERIFIED: {{story_key}} → Issue #{{issue_number}}</output>
</check>
<check if="fields mismatch">
<output> ⚠️ MISMATCH: {{story_key}} → Issue #{{issue_number}}</output>
<output> Expected: {{expected}}</output>
<output> Actual: {{actual}}</output>
<action>Add to verification_warnings</action>
</check>
</check>
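The field checks above can be sketched as a small comparison helper; the issue shape (`status_label`, `labels`) is an illustrative assumption, and the AC-count check is omitted for brevity:

```python
def verify_issue(issue, expected):
    # Compare a fetched GitHub issue against expected local story state.
    # Returns a list of mismatched field names (empty list = verified).
    mismatches = []
    if expected["story_key"] not in issue["title"]:
        mismatches.append("title")
    if f"story:{expected['story_key']}" not in issue["labels"]:
        mismatches.append("label")
    if issue.get("status_label") != expected["status"]:
        mismatches.append("status")
    return mismatches
```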
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📊 VERIFICATION RESULTS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
**Stories Checked:** {{stories_migrated.length}}
**✅ Verified Correct:** {{verified_count}}
**⚠️ Warnings:** {{verification_warnings.length}}
**❌ Failures:** {{verification_failures.length}}
{{#if verification_failures.length > 0}}
**Verification Failures:**
{{#each verification_failures}}
- {{this}}
{{/each}}
❌ Migration has errors - issues may be missing or incorrect
{{else}}
✅ All migrated stories verified in GitHub
{{/if}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
</step>
<step n="4" goal="Rollback mode - Delete created issues">
<check if="mode != 'rollback'">
<action>Skip to Step 5 (Report)</action>
</check>
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
⚠️ ROLLBACK MODE (Delete Migrated Issues)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
<action>Load rollback manifest from {{output_folder}}/migration-rollback-*.yaml</action>
<check if="manifest not found">
<output>
❌ ERROR: No rollback manifest found
Cannot rollback without manifest file.
Rollback manifests are in: {{output_folder}}/migration-rollback-*.yaml
HALTING
</output>
<action>HALT</action>
</check>
<output>
**Rollback Manifest:**
- Created: {{manifest.created_at}}
- Repository: {{manifest.github_owner}}/{{manifest.github_repo}}
- Issues to delete: {{manifest.created_issues.length}}
**WARNING:** This will PERMANENTLY DELETE these issues from GitHub:
{{#each manifest.created_issues}}
- Issue #{{issue_number}}: {{story_key}}
{{/each}}
This operation CANNOT be undone!
</output>
<ask>
Type "DELETE ALL ISSUES" to proceed with rollback:
</ask>
<check if="confirmation != 'DELETE ALL ISSUES'">
<output>❌ Rollback cancelled</output>
<action>HALT</action>
</check>
<iterate>For each issue in manifest.created_issues:</iterate>
<action>Delete issue (GitHub API doesn't support true deletion, so close + label):</action>
<action>
# GitHub doesn't allow issue deletion via API
# Best we can do: close the issue and add labels "migrated:rolled-back" and "do-not-use"
mcp__github__issue_write({
method: "update",
issue_number: {{issue_number}},
state: "closed",
labels: ["migrated:rolled-back", "do-not-use"],
state_reason: "not_planned"
})
# Add comment explaining
mcp__github__add_issue_comment({
issue_number: {{issue_number}},
body: "Issue closed - migration was rolled back. Do not use."
})
</action>
<output> ✅ Rolled back: Issue #{{issue_number}}</output>
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ ROLLBACK COMPLETE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
**Issues Rolled Back:** {{manifest.created_issues.length}}
Note: GitHub API doesn't support issue deletion.
Issues were closed with label "migrated:rolled-back" instead.
To fully delete (manual, requires admin):
1. Open each closed issue in the browser
2. Use "Delete issue" at the bottom of the page
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
</step>
<step n="5" goal="Generate comprehensive migration report">
<action>Calculate final statistics:</action>
<action>
final_stats = {
total_stories: {{total_stories}},
migrated_successfully: {{issues_created.length + issues_updated.length}},
failed: {{issues_failed.length}},
success_rate: ({{migrated_successfully}} / {{total_stories}}) * 100,
duration: {{end_time - start_time}},
avg_time_per_story: {{duration / total_stories}}
}
</action>
<check if="create_migration_report == true">
<action>Write comprehensive report to {{report_path}}</action>
<action>Report structure:</action>
<action>
# GitHub Migration Report
**Date:** {{timestamp}}
**Repository:** {{github_owner}}/{{github_repo}}
**Mode:** {{mode}}
## Executive Summary
- **Total Stories:** {{total_stories}}
- **✅ Migrated:** {{migrated_successfully}} ({{success_rate}}%)
- **❌ Failed:** {{failed}}
- **Duration:** {{duration}}
- **Avg per story:** {{avg_time_per_story}}
## Created Issues
{{#each issues_created}}
- Story {{story_key}} → Issue #{{issue_number}}
URL: <https://github.com/{{github_owner}}/{{github_repo}}/issues/{{issue_number}}>
{{/each}}
## Updated Issues
{{#each issues_updated}}
- Story {{story_key}} → Issue #{{issue_number}} (updated)
{{/each}}
## Failed Migrations
{{#if issues_failed.length > 0}}
{{#each issues_failed}}
- Story {{story_key}}: {{error}}
Attempts: {{attempts}}
{{/each}}
**Recovery Steps:**
1. Fix underlying issues (check error messages)
2. Re-run migration (will skip already-migrated stories)
{{else}}
None - all stories migrated successfully!
{{/if}}
## Rollback Information
**Rollback Manifest:** {{rollback_manifest_path}}
To rollback this migration:
```bash
/migrate-to-github mode=rollback
```
## Next Steps
1. **Verify migration:** /migrate-to-github mode=verify
2. **Test story checkout:** /checkout-story story_key=2-5-auth
3. **Enable GitHub sync:** Update workflow.yaml with github_sync_enabled=true
4. **Product Owner setup:** Share GitHub Issues URL with PO team
## Migration Details
**API Calls Made:** ~{{total_api_calls}}
**Rate Limit Used:** {{api_calls_used}}/5000
**Errors Encountered:** {{error_count}}
**Retries Performed:** {{retry_count}}
---
_Generated by BMAD migrate-to-github workflow_
</action>
<output>📄 Migration report: {{report_path}}</output>
</check>
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ MIGRATION WORKFLOW COMPLETE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
**Mode:** {{mode}}
**Success Rate:** {{success_rate}}%
{{#if mode == 'execute'}}
**✅ {{migrated_successfully}} stories now in GitHub Issues**
View in GitHub:
<https://github.com/{{github_owner}}/{{github_repo}}/issues?q=is:issue+label:type:story>
**Next Steps:**
1. Verify migration: /migrate-to-github mode=verify
2. Test workflows with GitHub sync enabled
3. Share Issues URL with Product Owner team
{{#if issues_failed.length > 0}}
⚠️ {{issues_failed.length}} stories failed - re-run to retry
{{/if}}
{{/if}}
{{#if mode == 'dry-run'}}
**This was a preview. No issues were created.**
To execute: /migrate-to-github mode=execute
{{/if}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
</step>
</workflow>


@ -1,188 +0,0 @@
# Multi-Agent Code Review
**Purpose:** Perform unbiased code review using multiple specialized AI agents in FRESH CONTEXT, with agent count based on story complexity.
## Overview
**Key Principle: FRESH CONTEXT**
- Review happens in NEW session (not the agent that wrote the code)
- Prevents bias from implementation decisions
- Provides truly independent perspective
**Variable Agent Count by Complexity:**
- **MICRO** (2 agents): Security + Code Quality - Quick sanity check
- **STANDARD** (4 agents): + Architecture + Testing - Balanced review
- **COMPLEX** (6 agents): + Performance + Domain Expert - Comprehensive analysis
**Available Specialized Agents:**
- **Security Agent**: Identifies vulnerabilities and security risks
- **Code Quality Agent**: Reviews style, maintainability, and best practices
- **Architecture Agent**: Reviews system design, patterns, and structure
- **Testing Agent**: Evaluates test coverage and quality
- **Performance Agent**: Analyzes efficiency and optimization opportunities
- **Domain Expert**: Validates business logic and domain constraints
## Workflow
### Step 1: Determine Agent Count
Based on {complexity_level}:
```
If complexity_level == "micro":
agent_count = 2
agents = ["security", "code_quality"]
Display: 🔍 MICRO Review (2 agents: Security + Code Quality)
Else if complexity_level == "standard":
agent_count = 4
agents = ["security", "code_quality", "architecture", "testing"]
Display: 📋 STANDARD Review (4 agents: Multi-perspective)
Else if complexity_level == "complex":
agent_count = 6
agents = ["security", "code_quality", "architecture", "testing", "performance", "domain_expert"]
Display: 🔬 COMPLEX Review (6 agents: Comprehensive analysis)
```
### Step 2: Load Story Context
```bash
# Read story file
story_file="{story_file}"
test -f "$story_file" || { echo "❌ Story file not found: $story_file" >&2; exit 1; }
```
Read the story file to understand:
- What was supposed to be implemented
- Acceptance criteria
- Tasks and subtasks
- File list
### Step 3: Invoke Multi-Agent Review Skill (Fresh Context + Smart Agent Selection)
**CRITICAL:** This review MUST happen in a FRESH CONTEXT (new session, different agent).
**Smart Agent Selection:**
- Skill analyzes changed files and selects MOST RELEVANT agents
- Touching payments code? → Add financial-security agent
- Touching auth code? → Add auth-security agent
- Touching file uploads? → Add file-security agent
- Touching performance-critical code? → Add performance agent
- Agent count determined by complexity, but agents chosen by code analysis
```xml
<invoke-skill skill="multi-agent-review">
<parameter name="story_id">{story_id}</parameter>
<parameter name="base_branch">{base_branch}</parameter>
<parameter name="max_agents">{agent_count}</parameter>
<parameter name="agent_selection">smart</parameter>
<parameter name="fresh_context">true</parameter>
</invoke-skill>
```
The skill will:
1. Create fresh context (unbiased review session)
2. Analyze changed files in the story
3. Detect code categories (auth, payments, file handling, etc.)
4. Select {agent_count} MOST RELEVANT specialized agents
5. Run parallel reviews from selected agents
6. Each agent reviews from their expertise perspective
7. Aggregate findings with severity ratings
8. Return comprehensive review report
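One way to sketch the smart-selection logic above (the category keywords and agent names are illustrative assumptions, not the skill's actual implementation):

```python
# Map code categories (inferred from changed file paths) to specialist agents.
CATEGORY_AGENTS = {
    "auth": "auth-security",
    "payments": "financial-security",
    "upload": "file-security",
}
BASELINE = ["security", "code_quality"]

def select_agents(changed_files, max_agents):
    # Baseline agents always run; specialists are appended per detected
    # category, then the list is capped at the complexity-derived count.
    agents = list(BASELINE)
    for path in changed_files:
        for keyword, agent in CATEGORY_AGENTS.items():
            if keyword in path and agent not in agents:
                agents.append(agent)
    return agents[:max_agents]
```

This keeps the agent count fixed by complexity while letting the code categories decide which specialists fill the slots.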
### Step 4: Save Review Report
```bash
# The skill returns a review report
# Save it to: {review_report}
```
Display summary:
```
🤖 MULTI-AGENT CODE REVIEW COMPLETE
Agents Used: {agent_count}
{selected_agents_list}
Findings:
- 🔴 CRITICAL: {critical_count}
- 🟠 HIGH: {high_count}
- 🟡 MEDIUM: {medium_count}
- 🔵 LOW: {low_count}
- INFO: {info_count}
Report saved to: {review_report}
```
### Step 5: Present Findings
For each finding, display:
```
[{severity}] {title}
Agent: {agent_name}
Location: {file}:{line}
{description}
Recommendation:
{recommendation}
---
```
### Step 6: Next Steps
Suggest actions based on findings:
```
📋 RECOMMENDED NEXT STEPS:
If CRITICAL findings exist:
⚠️ MUST FIX before proceeding
- Address all critical security/correctness issues
- Re-run review after fixes
If only HIGH/MEDIUM findings:
✅ Story may proceed
- Consider addressing high-priority items
- Create follow-up tasks for medium items
- Document LOW items as tech debt
If only LOW/INFO findings:
✅ Code quality looks good
- Optional: Address style/optimization suggestions
- Proceed to completion
```
## Integration with Super-Dev-Pipeline
This workflow is designed to be called from super-dev-pipeline step 7 (code review) when the story complexity is COMPLEX or when the user explicitly requests multi-agent review.
**When to Use:**
- Complex stories (≥16 tasks or high-risk keywords)
- Stories involving security-sensitive code
- Stories with significant architectural changes
- When single-agent review has been inconclusive
- User explicitly requests comprehensive review
**When NOT to Use:**
- Micro stories (≤3 tasks)
- Standard stories with simple changes
- Stories that passed adversarial review cleanly
## Output Files
- `{review_report}`: Full review findings in markdown
- Integrated into story completion summary
- Referenced in audit trail
## Error Handling
If multi-agent-review skill fails:
- Fall back to adversarial code review
- Log the failure reason
- Continue pipeline with warning


@ -1,549 +0,0 @@
<workflow>
<critical>The workflow execution engine is governed by: {project-root}/_bmad/core/tasks/workflow.xml</critical>
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
<critical>Communicate all responses in {communication_language}</critical>
<critical>📝 PUSH-ALL - Stage, commit, and push changes with comprehensive safety validation</critical>
<!-- TARGETED vs ALL FILES MODE -->
<critical>⚡ PARALLEL AGENT MODE: When {{target_files}} is provided:
- ONLY stage and commit the specified files
- Do NOT use `git add .` or `git add -A`
- Use `git add [specific files]` instead
- This prevents committing work from other parallel agents
</critical>
<critical>📋 ALL FILES MODE: When {{target_files}} is empty:
- Stage ALL changes with `git add .`
- Original behavior for single-agent execution
</critical>
<step n="1" goal="Analyze repository changes">
<output>🔄 **Analyzing Repository Changes**
Scanning for changes to commit and push...
</output>
<!-- ANALYZE CHANGES PHASE -->
<action>Run git commands in parallel:</action>
<action>- git status - Show modified/added/deleted/untracked files</action>
<action>- git diff --stat - Show change statistics</action>
<action>- git log -1 --oneline - Show recent commit for message style</action>
<action>- git branch --show-current - Confirm current branch</action>
<action>Parse git status output to identify:
- Modified files
- Added files
- Deleted files
- Untracked files
- Total insertion/deletion counts
</action>
<check if="no changes detected">
<output> **No Changes to Commit**
Working directory is clean.
Nothing to push.
</output>
<action>HALT - No work to do</action>
</check>
</step>
<step n="2" goal="Safety validation">
<critical>🔒 SAFETY CHECKS - Validate changes before committing</critical>
<action>Scan all changed files for dangerous patterns:</action>
**Secret Detection:**
<action>Check for files matching secret patterns:
- .env*, *.key, *.pem, credentials.json, secrets.yaml
- id_rsa, *.p12, *.pfx, *.cer
- Any file containing: _API_KEY=, _SECRET=, _TOKEN= with real values (not placeholders)
</action>
<action>Validate API keys are placeholders only:</action>
<action>✅ Acceptable placeholders:
- API_KEY=your-api-key-here
- SECRET=placeholder
- TOKEN=xxx
- API_KEY=${{YOUR_KEY}}
- SECRET_KEY=&lt;your-key&gt;
</action>
<action>❌ BLOCK real keys:
- OPENAI_API_KEY=sk-proj-xxxxx (real OpenAI key)
- AWS_SECRET_KEY=AKIA... (real AWS key)
- STRIPE_API_KEY=sk_live_... (real Stripe key)
- Any key with recognizable provider prefix + actual value
</action>
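A minimal sketch of the placeholder-vs-real-key heuristic described above; the patterns and placeholder markers are illustrative assumptions (a real scanner such as gitleaks ships far more rules):

```python
import re

# Provider-prefixed patterns that suggest a real credential, not a placeholder.
REAL_KEY_PATTERNS = [
    re.compile(r"sk-proj-[A-Za-z0-9]{20,}"),  # OpenAI project key (illustrative)
    re.compile(r"AKIA[0-9A-Z]{16}"),          # AWS access key ID
    re.compile(r"sk_live_[A-Za-z0-9]{16,}"),  # Stripe live key (illustrative)
]
PLACEHOLDERS = ("your-", "placeholder", "xxx", "${", "<your")

def looks_like_real_secret(value):
    # Placeholder markers win first; otherwise match provider prefixes.
    v = value.strip().strip("'\"")
    if any(marker in v.lower() for marker in PLACEHOLDERS):
        return False
    return any(pat.search(v) for pat in REAL_KEY_PATTERNS)
```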
**File Size Check:**
<action>Check for files >10MB without Git LFS configuration</action>
**Build Artifacts:**
<action>Check for unwanted directories/files that should be gitignored:
- node_modules/, dist/, build/, .next/, __pycache__/, *.pyc, .venv/
- .DS_Store, Thumbs.db, *.swp, *.tmp, *.log (in root)
- *.class, target/, bin/ (Java)
- vendor/ (unless dependency managed)
</action>
**Git State:**
<action>Verify:
- .gitignore exists and properly configured
- No unresolved merge conflicts
- Git repository initialized
</action>
<!-- SAFETY DECISION -->
<check if="secrets detected OR real API keys found">
<output>🚨 **DANGER: Secrets Detected!**
The following sensitive data was found:
{{list_detected_secrets_with_files}}
❌ **BLOCKED:** Cannot commit secrets to version control.
**Actions Required:**
1. Move secrets to .env file (add to .gitignore)
2. Use environment variables: process.env.API_KEY
3. Remove secrets from tracked files: git rm --cached [file]
4. Update code to load from environment
**Example:**
```
// Before (UNSAFE):
const apiKey = 'sk-proj-xxxxx';
// After (SAFE):
const apiKey = process.env.OPENAI_API_KEY;
```
Halting workflow for safety.
</output>
<action>HALT - Cannot proceed with secrets</action>
</check>
<check if="large files detected without Git LFS">
<output>⚠️ **Warning: Large Files Detected**
Files >10MB found:
{{list_large_files_with_sizes}}
**Recommendation:** Set up Git LFS
```
git lfs install
git lfs track "*.{file_extension}"
git add .gitattributes
```
</output>
<ask>Proceed with large files anyway? [y/n]:</ask>
<check if="user says n">
<output>Halting. Please configure Git LFS first.</output>
<action>HALT</action>
</check>
</check>
<check if="build artifacts detected">
<output>⚠️ **Warning: Build Artifacts Detected**
These files should be in .gitignore:
{{list_build_artifacts}}
**Update .gitignore:**
```
node_modules/
dist/
build/
.DS_Store
```
</output>
<ask>Commit build artifacts anyway? [y/n]:</ask>
<check if="user says n">
<output>Halting. Update .gitignore and git rm --cached [files]</output>
<action>HALT</action>
</check>
</check>
<check if="current branch is main or master">
<output>⚠️ **Warning: Pushing to {{branch_name}}**
You're committing directly to {{branch_name}}.
**Recommendation:** Use feature branch workflow:
1. git checkout -b feature/my-changes
2. Make and commit changes
3. git push -u origin feature/my-changes
4. Create PR for review
</output>
<ask>Push directly to {{branch_name}}? [y/n]:</ask>
<check if="user says n">
<output>Halting. Create a feature branch instead.</output>
<action>HALT</action>
</check>
</check>
<output>✅ **Safety Checks Passed**
All validations completed successfully.
</output>
</step>
<step n="3" goal="Present summary and get confirmation">
<output>
📊 **Changes Summary**
**Files:**
- Modified: {{modified_count}}
- Added: {{added_count}}
- Deleted: {{deleted_count}}
- Untracked: {{untracked_count}}
**Total:** {{total_file_count}} files
**Changes:**
- Insertions: +{{insertion_count}} lines
- Deletions: -{{deletion_count}} lines
**Safety:**
{{if_all_safe}}
✅ No secrets detected
✅ No large files (or approved)
✅ No build artifacts (or approved)
✅ .gitignore configured
{{endif}}
{{if_warnings_approved}}
⚠️ Warnings acknowledged and approved
{{endif}}
**Git:**
- Branch: {{current_branch}}
- Remote: origin/{{current_branch}}
- Last commit: {{last_commit_message}}
---
**I will execute:**
1. `git add .` - Stage all changes
2. `git commit -m "[generated message]"` - Create commit
3. `git push` - Push to remote
</output>
<ask>**Proceed with commit and push?**
Options:
[yes] - Proceed with commit and push
[no] - Cancel (leave changes unstaged)
[review] - Show detailed diff first
</ask>
<check if="user says review">
<action>Execute: git diff --stat</action>
<action>Execute: git diff | head -100 (show first 100 lines of changes)</action>
<output>
{{diff_output}}
(Use 'git diff' to see full changes)
</output>
<ask>After reviewing, proceed with commit and push? [yes/no]:</ask>
</check>
<check if="user says no">
<output>❌ **Push-All Cancelled**
Changes remain unstaged. No git operations performed.
You can:
- Review changes: git status, git diff
- Commit manually: git add [files] && git commit
- Discard changes: git checkout -- [files]
</output>
<action>HALT - User cancelled</action>
</check>
</step>
<step n="4" goal="Stage changes">
<!-- TARGETED MODE: Only stage specified files -->
<check if="{{target_files}} is provided and not empty">
<output>📎 **Targeted Commit Mode** (parallel agent safe)
Staging only files from this story/task:
{{target_files}}
</output>
<action>Execute: git add {{target_files}}</action>
<action>Execute: git status</action>
<output>✅ **Targeted Files Staged**
Ready for commit ({{target_file_count}} files):
{{list_staged_files}}
Note: Other uncommitted changes in repo are NOT included.
</output>
</check>
<!-- ALL FILES MODE: Original behavior -->
<check if="{{target_files}} is empty or not provided">
<action>Execute: git add .</action>
<action>Execute: git status</action>
<output>✅ **All Changes Staged**
Ready for commit:
{{list_staged_files}}
</output>
</check>
</step>
<step n="5" goal="Generate commit message">
<critical>📝 COMMIT MESSAGE - Generate conventional commit format</critical>
<action>Analyze changes to determine commit type:</action>
<action>- feat: New features (new files with functionality)</action>
<action>- fix: Bug fixes (fixing broken functionality)</action>
<action>- docs: Documentation only (*.md, comments)</action>
<action>- style: Formatting, missing semicolons (no code change)</action>
<action>- refactor: Code restructuring (no feature/fix)</action>
<action>- test: Adding/updating tests</action>
<action>- chore: Tooling, configs, dependencies</action>
<action>- perf: Performance improvements</action>
<action>Determine scope (optional):
- Component/feature name if changes focused on one area
- Omit if changes span multiple areas
</action>
<action>Generate message summary (max 72 chars):
- Use imperative mood: "add feature" not "added feature"
- Lowercase except proper nouns
- No period at end
</action>
<action>Generate message body (if changes >5 files):
- List key changes as bullet points
- Max 3-5 bullets
- Keep concise
</action>
<action>Reference recent commits for style consistency</action>
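A hedged sketch of the type-inference and subject-formatting rules above; the file-pattern heuristics are illustrative simplifications, not the workflow's exact logic:

```python
def infer_commit_type(changed_files):
    # Narrow checks (docs-only, test-only) before generic fallbacks.
    if all(f.endswith(".md") for f in changed_files):
        return "docs"
    if all("test" in f for f in changed_files):
        return "test"
    if any(f.endswith((".json", ".yaml", ".yml", ".toml")) for f in changed_files):
        return "chore"
    return "feat"  # default; a diff analysis would distinguish feat vs fix

def format_subject(ctype, summary, scope=None):
    # Conventional commit subject: type(scope): summary, capped at 72 chars.
    prefix = f"{ctype}({scope}): " if scope else f"{ctype}: "
    return (prefix + summary)[:72]
```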
<output>📝 **Generated Commit Message:**
```
{{generated_commit_message}}
```
Based on:
- {{commit_type}} commit type
- {{file_count}} files changed
- {{change_summary}}
</output>
<ask>**Use this commit message?**
Options:
[yes] - Use generated message
[edit] - Let me write custom message
[cancel] - Cancel push-all (leave staged)
</ask>
<check if="user says edit">
<ask>Enter your commit message (use conventional commit format if possible):</ask>
<action>Store user input as {{commit_message}}</action>
<output>✅ Using custom commit message</output>
</check>
<check if="user says cancel">
<output>❌ Push-all cancelled
Changes remain staged.
Run: git reset to unstage
</output>
<action>HALT</action>
</check>
<check if="user says yes">
<action>Use {{generated_commit_message}} as {{commit_message}}</action>
</check>
</step>
<step n="6" goal="Commit changes">
<action>Execute git commit with heredoc for multi-line message safety:
git commit -m "$(cat &lt;&lt;'EOF'
{{commit_message}}
EOF
)"
</action>
<check if="commit fails">
<output>❌ **Commit Failed**
Error: {{commit_error}}
**Common Causes:**
- Pre-commit hooks failing (linting, tests)
- Missing git config (user.name, user.email)
- Locked files or permissions
- Empty commit (no actual changes)
**Fix and try again:**
- Check pre-commit output
- Set git config: git config user.name "Your Name"
- Verify file permissions
</output>
<action>HALT - Fix errors before proceeding</action>
</check>
<action>Parse commit output for hash</action>
<output>✅ **Commit Created**
Commit: {{commit_hash}}
Message: {{commit_subject}}
</output>
</step>
<step n="7" goal="Push to remote">
<output>🚀 **Pushing to Remote**
Pushing {{current_branch}} to origin...
</output>
<action>Execute: git push</action>
<!-- HANDLE COMMON PUSH FAILURES -->
<check if="push fails with rejected (non-fast-forward)">
<output>⚠️ **Push Rejected - Remote Has New Commits**
Remote branch has commits you don't have locally.
Attempting to rebase and retry...
</output>
<action>Execute: git pull --rebase</action>
<check if="rebase has conflicts">
<output>❌ **Merge Conflicts During Rebase**
Conflicts found:
{{list_conflicted_files}}
**Manual resolution required:**
1. Resolve conflicts in listed files
2. git add [resolved files]
3. git rebase --continue
4. git push
Halting for manual conflict resolution.
</output>
<action>HALT - Resolve conflicts manually</action>
</check>
<action>Execute: git push</action>
</check>
<check if="push fails with no upstream branch">
<output> **No Upstream Branch Set**
First push to origin for this branch.
Setting upstream...
</output>
<action>Execute: git push -u origin {{current_branch}}</action>
</check>
<check if="push fails with protected branch">
<output>❌ **Push to Protected Branch Blocked**
Branch {{current_branch}} is protected on remote.
**Use PR workflow instead:**
1. Ensure you're on a feature branch
2. Push feature branch: git push -u origin feature-branch
3. Create PR for review
Changes are committed locally but not pushed.
</output>
<action>HALT - Use PR workflow for protected branches</action>
</check>
<check if="push fails with authentication">
<output>❌ **Authentication Failed**
Git push requires authentication.
**Fix authentication:**
- GitHub: Set up SSH key or Personal Access Token
- Check: git remote -v (verify remote URL)
- Docs: https://docs.github.com/authentication
Changes are committed locally but not pushed.
</output>
<action>HALT - Fix authentication</action>
</check>
<check if="push fails with other error">
<output>❌ **Push Failed**
Error: {{push_error}}
Your changes are committed locally but not pushed to remote.
**Troubleshoot:**
- Check network connection
- Verify remote exists: git remote -v
- Check permissions on remote repository
- Try manual push: git push
Halting for manual resolution.
</output>
<action>HALT - Manual push required</action>
</check>
<!-- SUCCESS -->
<check if="push succeeds">
<output>✅ **Successfully Pushed to Remote!**
**Commit:** {{commit_hash}} - {{commit_subject}}
**Branch:** {{current_branch}} → origin/{{current_branch}}
**Files changed:** {{file_count}} (+{{insertions}}, -{{deletions}})
---
Your changes are now on the remote repository.
</output>
<action>Execute: git log -1 --oneline --decorate</action>
<output>
**Latest commit:** {{git_log_output}}
</output>
</check>
</step>
<step n="8" goal="Completion summary">
<output>🎉 **Push-All Complete, {user_name}!**
**Summary:**
- ✅ {{file_count}} files committed
- ✅ Pushed to origin/{{current_branch}}
- ✅ All safety checks passed
**Commit Details:**
- Hash: {{commit_hash}}
- Message: {{commit_subject}}
- Changes: +{{insertions}}, -{{deletions}}
**Next Steps:**
- Verify on remote (GitHub/GitLab/etc)
- Create PR if working on feature branch
- Notify team if appropriate
**Git State:**
- Working directory: clean
- Branch: {{current_branch}}
- In sync with remote
</output>
</step>
</workflow>


@ -0,0 +1,366 @@
# Push All v3.0 - Safe Git Staging, Commit, and Push
<purpose>
Safely stage, commit, and push changes with comprehensive validation.
Detects secrets, large files, build artifacts. Handles push failures gracefully.
Supports targeted mode for specific files (parallel agent coordination).
</purpose>
<philosophy>
**Safe by Default, No Surprises**
- Validate BEFORE committing (secrets, size, artifacts)
- Show exactly what will be committed
- Handle push failures with recovery options
- Never force push without explicit confirmation
</philosophy>
<config>
name: push-all
version: 3.0.0
modes:
full: "Stage all changes (default)"
targeted: "Only stage specified files"
defaults:
max_file_size_kb: 500
check_secrets: true
check_build_artifacts: true
auto_push: false
allow_force_push: false
secret_patterns:
- "AKIA[0-9A-Z]{16}" # AWS Access Key
- "sk-[a-zA-Z0-9]{48}" # OpenAI Key
- "ghp_[a-zA-Z0-9]{36}" # GitHub Personal Token
- "xox[baprs]-[a-zA-Z0-9-]+" # Slack Token
- "-----BEGIN.*PRIVATE KEY" # Private Keys
- "password\\s*=\\s*['\"][^'\"]{8,}" # Hardcoded passwords
build_artifacts:
- "node_modules/"
- "dist/"
- "build/"
- ".next/"
- "*.min.js"
- "*.bundle.js"
</config>
<execution_context>
@patterns/hospital-grade.md
</execution_context>
<process>
<step name="check_git_state" priority="first">
**Verify git repository state**
```bash
# Check we're in a git repo
git rev-parse --is-inside-work-tree || { echo "❌ Not a git repository"; exit 1; }
# Get current branch
git branch --show-current
# Check for uncommitted changes
git status --porcelain
```
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📦 PUSH-ALL: {{mode}} mode
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Branch: {{branch}}
Mode: {{mode}}
{{#if targeted}}Files: {{file_list}}{{/if}}
```
**If no changes:**
```
✅ Working directory clean - nothing to commit
```
Exit successfully.
</step>
<step name="scan_changes">
**Identify files to be staged**
**Full mode:**
```bash
# cut -c4- handles paths containing spaces (porcelain format is "XY path")
git status --porcelain | cut -c4-
```
**Targeted mode:**
Only include files specified in `target_files` parameter.
**Categorize changes:**
- New files (A)
- Modified files (M)
- Deleted files (D)
- Renamed files (R)
</step>
<step name="secret_scan" if="check_secrets">
**Scan for secrets in staged content**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔍 SECRET SCAN
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
For each file to be staged:
```bash
# Check for secret patterns
Grep: "{{pattern}}" {{file}}
```
**If secrets found:**
```
❌ POTENTIAL SECRETS DETECTED
{{#each secrets}}
File: {{file}}
Line {{line}}: {{preview}} (pattern: {{pattern_name}})
{{/each}}
⚠️ BLOCKING COMMIT
Remove secrets before proceeding.
Options:
[I] Ignore (I know what I'm doing)
[E] Exclude these files
[H] Halt
```
**If [I] selected:** Require explicit confirmation text.
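The `secret_patterns` from the config can be applied per file roughly like this Python stand-in for the Grep tool call (pattern names are illustrative):

```python
import re

SECRET_PATTERNS = {
    "AWS Access Key": r"AKIA[0-9A-Z]{16}",
    "GitHub Token": r"ghp_[a-zA-Z0-9]{36}",
    "Private Key": r"-----BEGIN.*PRIVATE KEY",
}

def scan_text(text):
    # Return (line_number, pattern_name) for every match, feeding the
    # blocking report shown above.
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if re.search(pattern, line):
                hits.append((lineno, name))
    return hits
```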
</step>
<step name="size_scan">
**Check for oversized files**
```bash
# Find files larger than max_file_size_kb
find . -type f -size +{{max_file_size_kb}}k -not -path "./.git/*"
```
**If large files found:**
```
⚠️ LARGE FILES DETECTED
{{#each large_files}}
- {{file}} ({{size_kb}}KB)
{{/each}}
Options:
[I] Include anyway
[E] Exclude large files
[H] Halt
```
</step>
<step name="artifact_scan" if="check_build_artifacts">
**Check for build artifacts**
```bash
# Check if any staged files match artifact patterns
git status --porcelain | grep -E "{{artifact_pattern}}"
```
**If artifacts found:**
```
⚠️ BUILD ARTIFACTS DETECTED
{{#each artifacts}}
- {{file}}
{{/each}}
These should typically be in .gitignore.
Options:
[E] Exclude artifacts (recommended)
[I] Include anyway
[H] Halt
```
</step>
<step name="preview_commit">
**Show what will be committed**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📋 COMMIT PREVIEW
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Files to commit: {{count}}
Added ({{added_count}}):
{{#each added}}
+ {{file}}
{{/each}}
Modified ({{modified_count}}):
{{#each modified}}
M {{file}}
{{/each}}
Deleted ({{deleted_count}}):
{{#each deleted}}
- {{file}}
{{/each}}
{{#if excluded}}
Excluded: {{excluded_count}} files
{{/if}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
</step>
<step name="get_commit_message">
**Generate or request commit message**
**If commit_message provided:** Use it.
**Otherwise, generate from changes:**
```
Analyzing changes to generate commit message...
Changes detected:
- {{summary_of_changes}}
Suggested message:
"{{generated_message}}"
[Y] Use this message
[E] Edit message
[C] Custom message
```
If user selects [C] or [E], prompt for message.
</step>
<step name="execute_commit">
**Stage and commit changes**
```bash
# Stage files (targeted or full)
{{#if targeted}}
git add {{#each target_files}}{{this}} {{/each}}
{{else}}
git add -A
{{/if}}
# Commit with message
git commit -m "{{commit_message}}"
```
**Verify commit:**
```bash
# Check commit was created
git log -1 --oneline
```
```
✅ Commit created: {{commit_hash}}
```
</step>
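The stage/commit/verify sequence, sketched end to end in a throwaway repository (identity values are placeholders):

```shell
# Scratch repo with one file, full staging, commit, then verification.
workdir=$(mktemp -d)
cd "$workdir"
git init -q .
git config user.email dev@example.com
git config user.name Dev
printf 'hello\n' > README.md

git add -A                       # full staging (non-targeted mode)
git commit -q -m "docs: add README"
commit_hash=$(git log -1 --format=%h)   # verify the commit exists
echo "commit_hash=$commit_hash"
```

A non-empty `commit_hash` is the success signal reported back to the user.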
<step name="push_to_remote" if="auto_push OR user_confirms_push">
**Push to remote with error handling**
```bash
git push origin {{branch}}
```
**If push fails:**
**Case: Behind remote**
```
⚠️ Push rejected - branch is behind remote
Options:
[P] Pull and retry (git pull --rebase)
[F] Force push (DESTRUCTIVE - overwrites remote)
[H] Halt (commit preserved locally)
```
**Case: No upstream**
```
⚠️ No upstream branch
Setting upstream and pushing:
git push -u origin {{branch}}
```
**Case: Auth failure**
```
❌ Authentication failed
Check:
1. SSH key configured?
2. Token valid?
3. Repository access?
```
**Case: Protected branch**
```
❌ Cannot push to protected branch
Use pull request workflow instead:
gh pr create --title "{{commit_message}}"
```
</step>
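The "no upstream" case can be exercised locally with a bare repository standing in for the remote. A minimal sketch, assuming no real origin is configured:

```shell
# Bare repo acts as the remote.
remote=$(mktemp -d)
git init -q --bare "$remote"

# Working repo with one commit and no upstream yet.
workdir=$(mktemp -d)
cd "$workdir"
git init -q .
git config user.email dev@example.com
git config user.name Dev
printf 'v1\n' > file.txt
git add -A && git commit -q -m "feat: initial commit"

git remote add origin "$remote"
branch=$(git rev-parse --abbrev-ref HEAD)
git push -q -u origin "$branch"   # -u sets the upstream on first push
echo "pushed branch: $branch"
```

After `-u`, subsequent `git push` calls need no explicit refspec.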
<step name="final_summary">
**Display completion status**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ PUSH-ALL COMPLETE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Branch: {{branch}}
Commit: {{commit_hash}}
Files: {{file_count}}
{{#if pushed}}
Remote: ✅ Pushed to origin/{{branch}}
{{else}}
Remote: ⏸️ Not pushed (commit preserved locally)
{{/if}}
{{#if excluded_count > 0}}
Excluded: {{excluded_count}} files (secrets/artifacts/size)
{{/if}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
</step>
</process>
<examples>
```bash
# Stage all, commit, and push
/push-all commit_message="feat: add user authentication" auto_push=true
# Targeted mode - only specific files
/push-all mode=targeted target_files="src/auth.ts,src/auth.test.ts" commit_message="fix: auth bug"
# Dry run - see what would be committed
/push-all auto_push=false
```
</examples>
<failure_handling>
**Secrets detected:** BLOCK commit, require explicit override.
**Large files:** Warn, allow exclude or include.
**Build artifacts:** Warn, recommend exclude.
**Push rejected:** Offer pull/rebase, force push (with confirmation), or halt.
**Auth failure:** Report, suggest troubleshooting.
</failure_handling>
<success_criteria>
- [ ] Changes validated (secrets, size, artifacts)
- [ ] Files staged correctly
- [ ] Commit created with message
- [ ] Push successful (if requested)
- [ ] No unintended files included
</success_criteria>


@@ -9,7 +9,7 @@ communication_language: "{config_source}:communication_language"
# Workflow components
installed_path: "{project-root}/_bmad/bmm/workflows/4-implementation/push-all"
instructions: "{installed_path}/instructions.xml"
instructions: "{installed_path}/workflow.md"
# Target files to commit (for parallel agent execution)
# When empty/not provided: commits ALL changes (original behavior)


@@ -1,306 +0,0 @@
# Sprint Status Recovery - Instructions
**Workflow:** recover-sprint-status
**Purpose:** Fix sprint-status.yaml when tracking has drifted for days/weeks
---
## What This Workflow Does
Analyzes multiple sources to rebuild accurate sprint-status.yaml:
1. **Story File Quality** - Validates size (>=10KB), task lists, checkboxes
2. **Explicit Status: Fields** - Reads story Status: when present
3. **Git Commits** - Searches last 30 days for story references
4. **Autonomous Reports** - Checks .epic-*-completion-report.md files
5. **Task Completion Rate** - Analyzes checkbox completion in story files
**Infers Status Based On:**
- Explicit Status: field (highest priority)
- Git commits referencing story (strong signal)
- Autonomous completion reports (very high confidence)
- Task checkbox completion rate (90%+ = done)
- File quality (poor quality prevents "done" marking)
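The task-completion signal can be sketched as a checkbox count over a story file. This is a minimal illustration with a made-up story; the real script also weighs the other signals above:

```shell
# Derive a completion percentage from markdown checkboxes.
# The 90%+ threshold mirrors the "90%+ = done" heuristic.
workdir=$(mktemp -d)
cat > "$workdir/story.md" <<'EOF'
## Tasks
- [x] Task one
- [x] Task two
- [x] Task three
- [ ] Task four
EOF

checked=$(grep -c '^- \[x\]' "$workdir/story.md")
total=$(grep -c '^- \[.\]' "$workdir/story.md")
pct=$(( checked * 100 / total ))
echo "completion=${pct}%"
```

Here 3 of 4 boxes are checked, so the story stays below the "done" threshold.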
---
## Step 1: Run Recovery Analysis
```bash
Execute: {recovery_script} --dry-run
```
**This will:**
- Analyze all story files (quality, tasks, status)
- Search git commits for completion evidence
- Check autonomous completion reports
- Infer status from all evidence
- Report recommendations with confidence levels
**No changes** made in dry-run mode - just analysis.
---
## Step 2: Review Recommendations
**Check the output for:**
### High Confidence Updates (Safe)
- Stories with explicit Status: fields
- Stories in autonomous completion reports
- Stories with 3+ git commits + 90%+ tasks complete
### Medium Confidence Updates (Verify)
- Stories with 1-2 git commits
- Stories with 50-90% tasks complete
- Stories with file size >=10KB
### Low Confidence Updates (Question)
- Stories with no Status: field, no commits
- Stories with file size <10KB
- Stories with <5 tasks total
---
## Step 3: Choose Recovery Mode
### Conservative Mode (Safest)
```bash
Execute: {recovery_script} --conservative
```
**Only updates:**
- High/very high confidence stories
- Explicit Status: fields honored
- Git commits with 3+ references
- Won't infer or guess
**Best for:** Quick fixes, first-time recovery, risk-averse
---
### Aggressive Mode (Thorough)
```bash
Execute: {recovery_script} --aggressive --dry-run # Preview first!
Execute: {recovery_script} --aggressive # Then apply
```
**Updates:**
- Medium+ confidence stories
- Infers from git commits (even 1 commit)
- Uses task completion rate
- Pre-fills brownfield checkboxes
**Best for:** Major drift (30+ days), comprehensive recovery
---
### Interactive Mode (Recommended)
```bash
Execute: {recovery_script}
```
**Process:**
1. Shows all recommendations
2. Groups by confidence level
3. Asks for confirmation before each batch
4. Allows selective application
**Best for:** First-time use, learning the tool
---
## Step 4: Validate Results
```bash
Execute: ./scripts/sync-sprint-status.sh --validate
```
**Should show:**
- "✓ sprint-status.yaml is up to date!" (success)
- OR discrepancy count (if issues remain)
---
## Step 5: Commit Changes
```bash
git add docs/sprint-artifacts/sprint-status.yaml
git add .sprint-status-backups/ # Include backup for audit trail
git commit -m "fix(tracking): Recover sprint-status.yaml - {MODE} recovery"
```
---
## Recovery Scenarios
### Scenario 1: Autonomous Epic Completed, Tracking Not Updated
**Symptoms:**
- Autonomous completion report exists
- Git commits show work done
- sprint-status.yaml shows "in-progress" or "backlog"
**Solution:**
```bash
{recovery_script} --aggressive
# Will find completion report, mark all stories done
```
---
### Scenario 2: Manual Work Over Past Week Not Tracked
**Symptoms:**
- Story Status: fields updated to "done"
- sprint-status.yaml not synced
- Git commits exist
**Solution:**
```bash
./scripts/sync-sprint-status.sh
# Standard sync (reads Status: fields)
```
---
### Scenario 3: Story Files Missing Status: Fields
**Symptoms:**
- 100+ stories with no Status: field
- Some completed, some not
- No autonomous reports
**Solution:**
```bash
{recovery_script} --aggressive --dry-run # Preview inference
# Review recommendations carefully
{recovery_script} --aggressive # Apply if satisfied
```
---
### Scenario 4: Complete Chaos (Mix of All Above)
**Symptoms:**
- Some stories have Status:, some don't
- Autonomous reports for some epics
- Manual work on others
- sprint-status.yaml very outdated
**Solution:**
```bash
# Step 1: Run recovery in dry-run
{recovery_script} --aggressive --dry-run
# Step 2: Review /tmp/recovery_results.json
# Step 3: Apply in conservative mode first (safest updates)
{recovery_script} --conservative
# Step 4: Manually review remaining stories
# Update Status: fields for known completed work
# Step 5: Run sync to catch manual updates
./scripts/sync-sprint-status.sh
# Step 6: Final validation
./scripts/sync-sprint-status.sh --validate
```
---
## Quality Gates
**Recovery script will DOWNGRADE status if:**
- Story file < 10KB (not properly detailed)
- Story file has < 5 tasks (incomplete story)
- No git commits found (no evidence of work)
- Explicit Status: contradicts other evidence
**Recovery script will UPGRADE status if:**
- Autonomous completion report lists story as done
- 3+ git commits + 90%+ tasks checked
- Explicit Status: field says "done"
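The downgrade gates reduce to two cheap checks. A minimal sketch, assuming a deliberately undersized story file:

```shell
# A story smaller than 10KB or with fewer than 5 tasks is not
# eligible for "done", whatever the other signals say.
workdir=$(mktemp -d)
printf -- '- [x] only task\n' > "$workdir/story.md"   # tiny file, 1 task

size_kb=$(( $(wc -c < "$workdir/story.md") / 1024 ))
task_count=$(grep -c '^- \[' "$workdir/story.md")

eligible_for_done=yes
[ "$size_kb" -lt 10 ]    && eligible_for_done=no
[ "$task_count" -lt 5 ]  && eligible_for_done=no
echo "eligible_for_done=$eligible_for_done"
```

Either gate failing is enough to block the upgrade.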
---
## Post-Recovery Checklist
After running recovery:
- [ ] Run validation: `./scripts/sync-sprint-status.sh --validate`
- [ ] Review backup: Check `.sprint-status-backups/` for before state
- [ ] Check epic statuses: Verify epic-level status matches story completion
- [ ] Spot-check 5-10 stories: Confirm inferred status is accurate
- [ ] Commit changes: Add recovery to version control
- [ ] Document issues: Note why drift occurred, prevent recurrence
---
## Preventing Future Drift
**After recovery:**
1. **Use workflows properly**
- `/create-story` - Adds to sprint-status.yaml automatically
- `/dev-story` - Updates both Status: and sprint-status.yaml
- Autonomous workflows - Now update tracking
2. **Run sync regularly**
- Weekly: `pnpm sync:sprint-status:dry-run` (check health)
- After manual Status: updates: `pnpm sync:sprint-status`
3. **CI/CD validation** (coming soon)
- Blocks PRs with out-of-sync tracking
- Forces sync before merge
---
## Troubleshooting
### "Recovery script shows 0 updates"
**Possible causes:**
- sprint-status.yaml already accurate
- Story files all have proper Status: fields
- No git commits found (check date range)
**Action:** Run `--dry-run` to see analysis, check `/tmp/recovery_results.json`
---
### "Low confidence on stories I know are done"
**Possible causes:**
- Story file < 10KB (not properly detailed)
- No git commits (work done outside git)
- No explicit Status: field
**Action:** Manually add Status: field to story, then run standard sync
---
### "Recovery marks incomplete stories as done"
**Possible causes:**
- Git commits exist but work abandoned
- Autonomous report lists story but implementation failed
- Tasks pre-checked incorrectly (brownfield error)
**Action:** Use conservative mode, manually verify, fix story files
---
## Output Files
**Created during recovery:**
- `.sprint-status-backups/sprint-status-recovery-{timestamp}.yaml` - Backup
- `/tmp/recovery_results.json` - Detailed analysis
- Updated `sprint-status.yaml` - Recovered status
---
**Last Updated:** 2026-01-02
**Status:** Production Ready
**Works On:** ANY BMAD project with sprint-status.yaml tracking


@@ -1,273 +0,0 @@
# Revalidate Epic - Batch Story Revalidation with Semaphore Pattern
<critical>The workflow execution engine is governed by: {project-root}/_bmad/core/tasks/workflow.xml</critical>
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
<workflow>
<step n="1" goal="Load sprint status and find epic stories">
<action>Verify epic_number parameter provided</action>
<check if="epic_number not provided">
<output>❌ ERROR: epic_number parameter required
Usage:
/revalidate-epic epic_number=2
/revalidate-epic epic_number=2 fill_gaps=true
/revalidate-epic epic_number=2 fill_gaps=true max_concurrent=5
</output>
<action>HALT</action>
</check>
<action>Read {sprint_status} file</action>
<action>Parse development_status map</action>
<action>Filter stories starting with "{{epic_number}}-" (e.g., "2-1-", "2-2-", etc.)</action>
<action>Exclude epics (keys starting with "epic-") and retrospectives</action>
<action>Store as: epic_stories (list of story keys)</action>
<check if="epic_stories is empty">
<output>❌ No stories found for Epic {{epic_number}}
Check sprint-status.yaml to verify epic number is correct.
</output>
<action>HALT</action>
</check>
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔍 EPIC {{epic_number}} REVALIDATION
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
**Stories Found:** {{epic_stories.length}}
**Mode:** {{#if fill_gaps}}Verify & Fill Gaps{{else}}Verify Only{{/if}}
**Max Concurrent:** {{max_concurrent}} agents
**Pattern:** Semaphore (continuous worker pool)
**Stories to Revalidate:**
{{#each epic_stories}}
{{@index + 1}}. {{this}}
{{/each}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
<ask>Proceed with revalidation? (yes/no):</ask>
<check if="response != 'yes'">
<output>❌ Revalidation cancelled</output>
<action>Exit workflow</action>
</check>
</step>
<step n="2" goal="Initialize semaphore pattern for parallel revalidation">
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🚀 Starting Parallel Revalidation (Semaphore Pattern)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
<action>Initialize worker pool state:</action>
<action>
- story_queue = epic_stories
- active_workers = {}
- completed_stories = []
- failed_stories = []
- verification_results = {}
- next_story_index = 0
- max_workers = {{max_concurrent}}
</action>
<action>Fill initial worker slots:</action>
<iterate>While next_story_index < min(max_workers, story_queue.length):</iterate>
<action>
story_key = story_queue[next_story_index]
story_file = {sprint_artifacts}/{{story_key}}.md # Try multiple naming patterns if needed
worker_id = next_story_index + 1
Spawn Task agent:
- subagent_type: "general-purpose"
- description: "Revalidate story {{story_key}}"
- prompt: "Execute revalidate-story workflow for {{story_key}}.
CRITICAL INSTRUCTIONS:
1. Load workflow: _bmad/bmm/workflows/4-implementation/revalidate-story/workflow.yaml
2. Parameters: story_file={{story_file}}, fill_gaps={{fill_gaps}}
3. Clear all checkboxes
4. Verify each AC/Task/DoD against codebase
5. Re-check verified items
6. Report gaps
{{#if fill_gaps}}7. Fill gaps and commit{{/if}}
8. Return verification summary"
- run_in_background: true
Store in active_workers[worker_id]:
story_key: {{story_key}}
task_id: {{returned_task_id}}
started_at: {{timestamp}}
</action>
<action>Increment next_story_index</action>
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ {{active_workers.size}} workers active
📋 {{story_queue.length - next_story_index}} stories queued
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
</step>
<step n="3" goal="Maintain worker pool until all stories revalidated">
<critical>SEMAPHORE PATTERN: Keep {{max_workers}} agents running continuously</critical>
<iterate>While active_workers.size > 0 OR next_story_index < story_queue.length:</iterate>
<action>Poll for completed workers (non-blocking):</action>
<iterate>For each worker_id in active_workers:</iterate>
<action>Check worker status using TaskOutput(task_id, block=false)</action>
<check if="worker completed successfully">
<action>Get verification results from worker output</action>
<action>Parse: verified_pct, gaps_found, gaps_filled</action>
<action>Store in verification_results[story_key]</action>
<action>Add to completed_stories</action>
<action>Remove from active_workers</action>
<output>✅ Worker {{worker_id}}: {{story_key}} → {{verified_pct}}% verified{{#if gaps_filled > 0}}, {{gaps_filled}} gaps filled{{/if}}</output>
<check if="next_story_index < story_queue.length">
<action>Refill slot with next story (same pattern as batch-super-dev)</action>
<output>🔄 Worker {{worker_id}} refilled: {{next_story_key}}</output>
</check>
</check>
<check if="worker failed">
<action>Add to failed_stories with error</action>
<action>Remove from active_workers</action>
<output>❌ Worker {{worker_id}}: {{story_key}} failed</output>
<check if="continue_on_failure AND next_story_index < story_queue.length">
<action>Refill slot despite failure</action>
</check>
</check>
<action>Display live progress every 30 seconds:</action>
<output>
📊 Live Progress: {{completed_stories.length}} completed, {{active_workers.size}} active, {{story_queue.length - next_story_index}} queued
</output>
<action>Sleep 5 seconds before next poll</action>
</step>
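The semaphore behavior (a fixed-size pool whose slots refill from the queue as workers finish) can be approximated in one line with `xargs -P`. A minimal sketch with placeholder story keys, not the actual agent-spawning mechanism:

```shell
# At most 3 "workers" run concurrently; a finished worker's slot is
# immediately refilled from the remaining queue.
log=$(mktemp)
printf '%s\n' 2-1-auth 2-2-profile 2-3-billing 2-4-search 2-5-export \
  | xargs -P 3 -I{} sh -c 'sleep 0.1; echo "revalidated {}"' >> "$log"

completed=$(wc -l < "$log")
echo "completed=$completed"
```

All five queued items complete even though only three ever run at once, which is exactly the continuous-pool property the workflow relies on.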
<step n="4" goal="Generate epic-level summary">
<action>Aggregate verification results across all stories:</action>
<action>
epic_total_items = sum of all items across stories
epic_verified = sum of verified items
epic_partial = sum of partial items
epic_missing = sum of missing items
epic_gaps_filled = sum of gaps filled
epic_verified_pct = (epic_verified / epic_total_items) × 100
</action>
<action>Group stories by verification percentage:</action>
<action>
- complete_stories (≥95% verified)
- mostly_complete_stories (80-94% verified)
- partial_stories (50-79% verified)
- incomplete_stories (<50% verified)
</action>
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📊 EPIC {{epic_number}} REVALIDATION SUMMARY
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
**Total Stories:** {{epic_stories.length}}
**Completed:** {{completed_stories.length}}
**Failed:** {{failed_stories.length}}
**Epic-Wide Verification:**
- ✅ Verified: {{epic_verified}}/{{epic_total_items}} ({{epic_verified_pct}}%)
- 🔶 Partial: {{epic_partial}}/{{epic_total_items}}
- ❌ Missing: {{epic_missing}}/{{epic_total_items}}
{{#if fill_gaps}}- 🔧 Gaps Filled: {{epic_gaps_filled}}{{/if}}
**Story Health:**
- ✅ Complete (≥95%): {{complete_stories.length}} stories
- 🔶 Mostly Complete (80-94%): {{mostly_complete_stories.length}} stories
- ⚠️ Partial (50-79%): {{partial_stories.length}} stories
- ❌ Incomplete (<50%): {{incomplete_stories.length}} stories
---
**Complete Stories (≥95% verified):**
{{#each complete_stories}}
- {{story_key}}: {{verified_pct}}% verified
{{/each}}
{{#if mostly_complete_stories.length > 0}}
**Mostly Complete Stories (80-94%):**
{{#each mostly_complete_stories}}
- {{story_key}}: {{verified_pct}}% verified ({{gaps_count}} gaps{{#if gaps_filled > 0}}, {{gaps_filled}} filled{{/if}})
{{/each}}
{{/if}}
{{#if partial_stories.length > 0}}
**⚠️ Partial Stories (50-79%):**
{{#each partial_stories}}
- {{story_key}}: {{verified_pct}}% verified ({{gaps_count}} gaps{{#if gaps_filled > 0}}, {{gaps_filled}} filled{{/if}})
{{/each}}
Recommendation: Continue development on these stories
{{/if}}
{{#if incomplete_stories.length > 0}}
**❌ Incomplete Stories (<50%):**
{{#each incomplete_stories}}
- {{story_key}}: {{verified_pct}}% verified ({{gaps_count}} gaps{{#if gaps_filled > 0}}, {{gaps_filled}} filled{{/if}})
{{/each}}
Recommendation: Re-implement these stories from scratch
{{/if}}
{{#if failed_stories.length > 0}}
**❌ Failed Revalidations:**
{{#each failed_stories}}
- {{story_key}}: {{error}}
{{/each}}
{{/if}}
---
**Epic Health Score:** {{epic_verified_pct}}/100
{{#if epic_verified_pct >= 95}}
✅ Epic is COMPLETE and verified
{{else if epic_verified_pct >= 80}}
🔶 Epic is MOSTLY COMPLETE ({{epic_missing}} items need attention)
{{else if epic_verified_pct >= 50}}
⚠️ Epic is PARTIALLY COMPLETE (significant gaps remain)
{{else}}
❌ Epic is INCOMPLETE (major rework needed)
{{/if}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
<check if="create_epic_report == true">
<action>Write epic summary to: {sprint_artifacts}/revalidation-epic-{{epic_number}}-{{timestamp}}.md</action>
<output>📄 Epic report: {{report_path}}</output>
</check>
<check if="update_sprint_status == true">
<action>Update sprint-status.yaml with revalidation timestamp and results</action>
<action>Add comment to epic entry: # Revalidated: {{epic_verified_pct}}% verified ({{timestamp}})</action>
</check>
</step>
</workflow>


@@ -1,510 +0,0 @@
# Revalidate Story - Verify Checkboxes Against Codebase Reality
<critical>The workflow execution engine is governed by: {project-root}/_bmad/core/tasks/workflow.xml</critical>
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
<workflow>
<step n="1" goal="Load story and backup current state">
<action>Verify story_file parameter provided</action>
<check if="story_file not provided">
<output>❌ ERROR: story_file parameter required
Usage:
/revalidate-story story_file=path/to/story.md
/revalidate-story story_file=path/to/story.md fill_gaps=true
</output>
<action>HALT</action>
</check>
<action>Read COMPLETE story file: {{story_file}}</action>
<action>Parse sections: Acceptance Criteria, Tasks/Subtasks, Definition of Done, Dev Agent Record</action>
<action>Extract story_key from filename (e.g., "2-7-image-file-handling")</action>
<action>Create backup of current checkbox state:</action>
<action>Count currently checked items:
- ac_checked_before = count of [x] in Acceptance Criteria
- tasks_checked_before = count of [x] in Tasks/Subtasks
- dod_checked_before = count of [x] in Definition of Done
- total_checked_before = sum of above
</action>
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔍 STORY REVALIDATION STARTED
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
**Story:** {{story_key}}
**File:** {{story_file}}
**Mode:** {{#if fill_gaps}}Verify & Fill Gaps{{else}}Verify Only{{/if}}
**Current State:**
- Acceptance Criteria: {{ac_checked_before}}/{{ac_total}} checked
- Tasks: {{tasks_checked_before}}/{{tasks_total}} checked
- Definition of Done: {{dod_checked_before}}/{{dod_total}} checked
- **Total:** {{total_checked_before}}/{{total_items}} ({{pct_before}}%)
**Action:** Clearing all checkboxes and re-verifying against codebase...
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
</step>
<step n="2" goal="Clear all checkboxes">
<output>🧹 Clearing all checkboxes to start fresh verification...</output>
<action>Use Edit tool to replace all [x] with [ ] in Acceptance Criteria section</action>
<action>Use Edit tool to replace all [x] with [ ] in Tasks/Subtasks section</action>
<action>Use Edit tool to replace all [x] with [ ] in Definition of Done section</action>
<action>Save story file with all boxes unchecked</action>
<output>✅ All checkboxes cleared. Starting verification from clean slate...</output>
</step>
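The clearing pass is a single substitution. A minimal sketch using `sed` on a made-up story file (the workflow itself uses the Edit tool):

```shell
# Reset every checked box to unchecked so verification starts clean.
workdir=$(mktemp -d)
cat > "$workdir/story.md" <<'EOF'
- [x] AC1: renders profile
- [ ] AC2: handles errors
- [x] Task: add tests
EOF

sed -i.bak 's/^\(- \)\[x\]/\1[ ]/' "$workdir/story.md"   # -i.bak is GNU/BSD portable
remaining=$(grep -c '^- \[x\]' "$workdir/story.md" || true)
echo "checked_remaining=$remaining"
```

After the pass, no `[x]` boxes remain and the `.bak` file preserves the pre-clear state.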
<step n="3" goal="Verify Acceptance Criteria against codebase">
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📋 VERIFYING ACCEPTANCE CRITERIA ({{ac_total}} items)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
<action>Extract all AC items from Acceptance Criteria section</action>
<iterate>For each AC item:</iterate>
<substep n="3a" title="Parse AC and determine what should exist">
<action>Extract AC description and identify artifacts:
- File mentions (e.g., "UserProfile component")
- Function names (e.g., "updateUser function")
- Features (e.g., "dark mode toggle")
- Test requirements (e.g., "unit tests covering edge cases")
</action>
<output>Verifying AC{{@index}}: {{ac_description}}</output>
</substep>
<substep n="3b" title="Search codebase for evidence">
<action>Use Glob to find relevant files:
- If AC mentions specific file: glob for that file
- If AC mentions component: glob for **/*ComponentName*
- If AC mentions feature: glob for files in related directories
</action>
<action>Use Grep to search for symbols/functions/features</action>
<action>Read found files to verify:</action>
<action>- NOT a stub (check for "TODO", "Not implemented", "throw new Error")</action>
<action>- Has actual implementation (not just empty function)</action>
<action>- Tests exist (search for *.test.* or *.spec.* files)</action>
<action>- Tests pass (if --fill-gaps mode, run tests)</action>
</substep>
<substep n="3c" title="Determine verification status">
<check if="all evidence found AND no stubs AND tests exist">
<action>verification_status = VERIFIED</action>
<action>Check box [x] in story file for this AC</action>
<action>Record evidence: "✅ VERIFIED: {{files_found}}, tests: {{test_files}}"</action>
<output> ✅ AC{{@index}}: VERIFIED</output>
</check>
<check if="partial evidence OR stubs found OR tests missing">
<action>verification_status = PARTIAL</action>
<action>Check box [~] in story file for this AC</action>
<action>Record gap: "🔶 PARTIAL: {{what_exists}}, missing: {{what_is_missing}}"</action>
<output> 🔶 AC{{@index}}: PARTIAL ({{what_is_missing}})</output>
<action>Add to gaps_list with details</action>
</check>
<check if="no evidence found">
<action>verification_status = MISSING</action>
<action>Leave box unchecked [ ] in story file</action>
<action>Record gap: "❌ MISSING: No implementation found for {{ac_description}}"</action>
<output> ❌ AC{{@index}}: MISSING</output>
<action>Add to gaps_list with details</action>
</check>
</substep>
<action>Save story file after each AC verification</action>
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Acceptance Criteria Verification Complete
✅ Verified: {{ac_verified}}
🔶 Partial: {{ac_partial}}
❌ Missing: {{ac_missing}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
</step>
<step n="4" goal="Verify Tasks/Subtasks against codebase">
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📋 VERIFYING TASKS ({{tasks_total}} items)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
<action>Extract all Task items from Tasks/Subtasks section</action>
<iterate>For each Task item (same verification logic as ACs):</iterate>
<action>Parse task description for artifacts</action>
<action>Search codebase with Glob/Grep</action>
<action>Read and verify (check for stubs, tests)</action>
<action>Determine status: VERIFIED | PARTIAL | MISSING</action>
<action>Update checkbox: [x] | [~] | [ ]</action>
<action>Record evidence or gap</action>
<action>Save story file</action>
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Tasks Verification Complete
✅ Verified: {{tasks_verified}}
🔶 Partial: {{tasks_partial}}
❌ Missing: {{tasks_missing}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
</step>
<step n="5" goal="Verify Definition of Done against codebase">
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📋 VERIFYING DEFINITION OF DONE ({{dod_total}} items)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
<action>Extract all DoD items from Definition of Done section</action>
<iterate>For each DoD item:</iterate>
<action>Parse DoD requirement:
- "Type check passes" → Run type checker
- "Unit tests 90%+ coverage" → Run coverage report
- "Linting clean" → Run linter
- "Build succeeds" → Run build
- "All tests pass" → Run test suite
</action>
<action>Execute verification for this DoD item</action>
<check if="verification passes">
<action>Check box [x]</action>
<action>Record: "✅ VERIFIED: {{verification_result}}"</action>
</check>
<check if="verification fails or N/A">
<action>Leave unchecked [ ] or partial [~]</action>
<action>Record gap if applicable</action>
</check>
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Definition of Done Verification Complete
✅ Verified: {{dod_verified}}
🔶 Partial: {{dod_partial}}
❌ Missing: {{dod_missing}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
</step>
<step n="6" goal="Generate revalidation report">
<action>Calculate overall completion:</action>
<action>
total_verified = ac_verified + tasks_verified + dod_verified
total_partial = ac_partial + tasks_partial + dod_partial
total_missing = ac_missing + tasks_missing + dod_missing
total_items = ac_total + tasks_total + dod_total
verified_pct = (total_verified / total_items) × 100
completion_pct = ((total_verified + total_partial) / total_items) × 100
</action>
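The summary arithmetic above, sketched with assumed counts (integer division floors the percentages, matching shell arithmetic):

```shell
# Hypothetical tallies from a revalidation run.
total_verified=18; total_partial=3; total_missing=3
total_items=$(( total_verified + total_partial + total_missing ))

verified_pct=$(( total_verified * 100 / total_items ))
completion_pct=$(( (total_verified + total_partial) * 100 / total_items ))
echo "verified=${verified_pct}% completion=${completion_pct}%"
```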
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📊 REVALIDATION SUMMARY
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
**Story:** {{story_key}}
**File:** {{story_file}}
**Verification Results:**
- ✅ Verified Complete: {{total_verified}}/{{total_items}} ({{verified_pct}}%)
- 🔶 Partially Complete: {{total_partial}}/{{total_items}}
- ❌ Missing/Incomplete: {{total_missing}}/{{total_items}}
**Breakdown:**
- Acceptance Criteria: {{ac_verified}}✅ {{ac_partial}}🔶 {{ac_missing}}❌ / {{ac_total}} total
- Tasks: {{tasks_verified}}✅ {{tasks_partial}}🔶 {{tasks_missing}}❌ / {{tasks_total}} total
- Definition of Done: {{dod_verified}}✅ {{dod_partial}}🔶 {{dod_missing}}❌ / {{dod_total}} total
**Status Assessment:**
{{#if verified_pct >= 95}}
✅ Story is COMPLETE ({{verified_pct}}% verified)
{{else if verified_pct >= 80}}
🔶 Story is MOSTLY COMPLETE ({{verified_pct}}% verified, {{total_missing}} gaps)
{{else if verified_pct >= 50}}
⚠️ Story is PARTIALLY COMPLETE ({{verified_pct}}% verified, {{total_missing}} gaps)
{{else}}
❌ Story is INCOMPLETE ({{verified_pct}}% verified, significant work missing)
{{/if}}
**Before Revalidation:** {{total_checked_before}}/{{total_items}} checked ({{pct_before}}%)
**After Revalidation:** {{total_verified}}/{{total_items}} verified ({{verified_pct}}%)
**Accuracy:** {{#if pct_before == verified_pct}}Perfect match{{else if pct_before > verified_pct}}{{pct_before - verified_pct}}% over-reported{{else}}{{verified_pct - pct_before}}% under-reported{{/if}}
{{#if total_missing > 0}}
---
**Gaps Found ({{total_missing}}):**
{{#each gaps_list}}
{{@index + 1}}. {{item_type}} - {{item_description}}
Status: {{status}}
Missing: {{what_is_missing}}
{{#if evidence}}Evidence checked: {{evidence}}{{/if}}
{{/each}}
---
{{/if}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
<check if="create_report == true">
<action>Write detailed report to: {sprint_artifacts}/revalidation-{{story_key}}-{{timestamp}}.md</action>
<action>Include: verification results, gaps list, evidence for each item, recommendations</action>
<output>📄 Detailed report: {{report_path}}</output>
</check>
</step>
<step n="7" goal="Decide on gap filling">
<check if="fill_gaps == false">
<output>
✅ Verification complete (verify-only mode)
{{#if total_missing > 0}}
**To fill the {{total_missing}} gaps, run:**
/revalidate-story story_file={{story_file}} fill_gaps=true
{{else}}
No gaps found - story is complete!
{{/if}}
</output>
<action>Exit workflow</action>
</check>
<check if="fill_gaps == true AND total_missing == 0">
<output>✅ No gaps to fill - story is already complete!</output>
<action>Exit workflow</action>
</check>
<check if="fill_gaps == true AND total_missing > 0">
<check if="total_missing > max_gaps_to_fill">
<output>
⚠️ TOO MANY GAPS: {{total_missing}} gaps found (max: {{max_gaps_to_fill}})
This story has too many missing items for automatic gap filling.
Consider:
1. Re-implementing the story from scratch with /dev-story
2. Manually implementing the gaps
3. Increasing max_gaps_to_fill in workflow.yaml (use cautiously)
Gap filling HALTED for safety.
</output>
<action>HALT</action>
</check>
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔧 GAP FILLING MODE ({{total_missing}} gaps to fill)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
<action>Continue to Step 8</action>
</check>
</step>
<step n="8" goal="Fill gaps (implement missing items)">
<iterate>For each gap in gaps_list:</iterate>
<substep n="8a" title="Confirm gap filling">
<check if="require_confirmation == true">
<ask>
Fill this gap?
**Item:** {{item_description}}
**Type:** {{item_type}} ({{section}})
**Missing:** {{what_is_missing}}
[Y] Yes - Implement this item
[A] Auto-fill - Implement this and all remaining gaps without asking
[S] Skip - Leave this gap unfilled
[H] Halt - Stop gap filling
Your choice:
</ask>
<check if="choice == 'A'">
<action>Set require_confirmation = false (auto-fill remaining)</action>
</check>
<check if="choice == 'S'">
<action>Continue to next gap</action>
</check>
<check if="choice == 'H'">
<action>Exit gap filling loop</action>
<action>Jump to Step 9 (Summary)</action>
</check>
</check>
</substep>
<substep n="8b" title="Implement missing item">
<output>🔧 Implementing: {{item_description}}</output>
<action>Load story context (Technical Requirements, Architecture Compliance, Dev Notes)</action>
<action>Implement missing item following story specifications</action>
<action>Write tests if required</action>
<action>Run tests to verify implementation</action>
<action>Verify linting/type checking passes</action>
<check if="implementation succeeds AND tests pass">
<action>Check box [x] for this item in story file</action>
<action>Update File List with new/modified files</action>
<action>Add to Dev Agent Record: "Gap filled: {{item_description}}"</action>
<output> ✅ Implemented and verified</output>
<check if="commit_strategy == 'per_gap'">
<action>Stage files for this gap</action>
<action>Commit: "fix({{story_key}}): fill gap - {{item_description}}"</action>
<output> ✅ Committed</output>
</check>
</check>
<check if="implementation fails">
<output> ❌ Failed to implement: {{error_message}}</output>
<action>Leave box unchecked</action>
<action>Record failure in gaps_list</action>
<action>Add to failed_gaps</action>
</check>
</substep>
<action>After all gaps processed:</action>
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Gap Filling Complete
✅ Filled: {{gaps_filled}}
❌ Failed: {{gaps_failed}}
⏭️ Skipped: {{gaps_skipped}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
</step>
<step n="9" goal="Re-verify filled gaps and finalize">
<check if="gaps_filled > 0">
<output>🔍 Re-verifying filled gaps...</output>
<iterate>For each filled gap:</iterate>
<action>Re-run verification for that item</action>
<action>Ensure still VERIFIED after all changes</action>
<output>✅ All filled gaps re-verified</output>
</check>
<action>Calculate final completion:</action>
<action>
final_verified = count of [x] across all sections
final_partial = count of [~] across all sections
final_missing = count of [ ] across all sections
final_pct = (final_verified / total_items) × 100
</action>
<check if="commit_strategy == 'all_at_once' AND gaps_filled > 0">
<action>Stage all changed files</action>
<action>Commit: "fix({{story_key}}): fill {{gaps_filled}} gaps from revalidation"</action>
<output>✅ All gaps committed</output>
</check>
<check if="update_sprint_status == true">
<action>Load {sprint_status} file</action>
<action>Update entry with current progress:</action>
<action>Format: {{story_key}}: {{current_status}} # Revalidated: {{final_verified}}/{{total_items}} ({{final_pct}}%) verified</action>
<action>Save sprint-status.yaml</action>
<output>✅ Sprint status updated with revalidation results</output>
</check>
<check if="update_dev_agent_record == true">
<action>Add to Dev Agent Record in story file:</action>
<action>
## Revalidation Record ({{timestamp}})
**Revalidation Mode:** {{#if fill_gaps}}Verify & Fill{{else}}Verify Only{{/if}}
**Results:**
- Verified: {{final_verified}}/{{total_items}} ({{final_pct}}%)
- Gaps Found: {{total_missing}}
- Gaps Filled: {{gaps_filled}}
**Evidence:**
{{#each verification_evidence}}
- {{item}}: {{evidence}}
{{/each}}
{{#if gaps_filled > 0}}
**Gaps Filled:**
{{#each filled_gaps}}
- {{item}}: {{what_was_implemented}}
{{/each}}
{{/if}}
{{#if failed_gaps.length > 0}}
**Failed to Fill:**
{{#each failed_gaps}}
- {{item}}: {{error}}
{{/each}}
{{/if}}
</action>
<action>Save story file</action>
</check>
</step>
<step n="10" goal="Final summary and recommendations">
<output>
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ REVALIDATION COMPLETE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
**Story:** {{story_key}}
**Final Status:**
- ✅ Verified Complete: {{final_verified}}/{{total_items}} ({{final_pct}}%)
- 🔶 Partially Complete: {{final_partial}}/{{total_items}}
- ❌ Missing/Incomplete: {{final_missing}}/{{total_items}}
{{#if fill_gaps}}
**Gap Filling Results:**
- Filled: {{gaps_filled}}
- Failed: {{gaps_failed}}
- Skipped: {{gaps_skipped}}
{{/if}}
**Accuracy Check:**
- Before revalidation: {{pct_before}}% checked
- After revalidation: {{final_pct}}% verified
- Checkbox accuracy: {{#if pct_before == final_pct}}✅ Perfect (0% discrepancy){{else if pct_before > final_pct}}⚠️ {{pct_before - final_pct}}% over-reported (checkboxes were optimistic){{else}}🔶 {{final_pct - pct_before}}% under-reported (work done but not checked){{/if}}
{{#if final_pct >= 95}}
**Recommendation:** Story is COMPLETE - mark as "done" or "review"
{{else if final_pct >= 80}}
**Recommendation:** Story is mostly complete - finish remaining {{final_missing}} items then mark "review"
{{else if final_pct >= 50}}
**Recommendation:** Story has significant gaps - continue development with /dev-story
{{else}}
**Recommendation:** Story is mostly incomplete - consider re-implementing with /dev-story or /super-dev-pipeline
{{/if}}
{{#if failed_gaps.length > 0}}
**⚠️ Manual attention needed for {{failed_gaps.length}} items that failed to fill automatically**
{{/if}}
{{#if create_report}}
**Detailed Report:** {sprint_artifacts}/revalidation-{{story_key}}-{{timestamp}}.md
{{/if}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</output>
</step>
</workflow>


@ -1,283 +0,0 @@
# Super-Dev-Story Workflow
**Enhanced story development with comprehensive quality validation**
## What It Does
Super-dev-story is `/dev-story` on steroids - it includes ALL standard development steps PLUS additional quality gates:
```
Standard dev-story:
1-8. Development cycle → Mark "review"
Super-dev-story:
1-8. Development cycle
9.5. Post-dev gap analysis (verify work complete)
9.6. Automated code review (catch issues)
→ Fix issues if found (loop back to step 5)
9. Mark "review" (only after all validation passes)
```
## When to Use
### Use `/super-dev-story` for:
- ✅ Security-critical features (auth, payments, PII handling)
- ✅ Complex business logic with many edge cases
- ✅ Stories you want bulletproof before human review
- ✅ High-stakes features (production releases, customer-facing)
- ✅ When you want to minimize review cycles
### Use standard `/dev-story` for:
- Documentation updates
- Simple UI tweaks
- Configuration changes
- Low-risk experimental features
- When speed matters more than extra validation
## Cost vs Benefit
| Aspect | dev-story | super-dev-story |
|--------|-----------|-----------------|
| **Tokens** | 50K-100K | 80K-150K (+30-50%) |
| **Time** | Normal | +20-30% |
| **Quality** | Good | Excellent |
| **Review cycles** | 1-3 iterations | 0-1 iterations |
| **False completions** | Possible | Prevented |
**ROI:** Extra 30K tokens (~$0.09) prevents hours of rework and multiple review cycles
## What Gets Validated
### Step 9.5: Post-Dev Gap Analysis
**Checks:**
- Tasks marked [x] → Code actually exists and works?
- Required files → Actually created?
- Claimed tests → Actually exist and pass?
- Partial implementations → Marked complete prematurely?
**Catches:**
- ❌ "Created auth service" → File doesn't exist
- ❌ "Added tests with 90% coverage" → Only 60% actual
- ❌ "Implemented login" → Function exists but incomplete
**Actions if issues found:**
- Unchecks false positive tasks
- Adds tasks for missing work
- Loops back to implementation
### Step 9.6: Automated Code Review
**Reviews:**
- ✅ Correctness (logic errors, edge cases)
- ✅ Security (vulnerabilities, input validation)
- ✅ Architecture (pattern compliance, SOLID principles)
- ✅ Performance (inefficiencies, optimization opportunities)
- ✅ Testing (coverage gaps, test quality)
- ✅ Code Quality (readability, maintainability)
**Actions if issues found:**
- Adds review findings as tasks
- Loops back to implementation
- Continues until issues resolved
## Usage
### Basic Usage
```bash
# Load any BMAD agent
/super-dev-story
# Follows same flow as dev-story, with extra validation
```
### Specify Story
```bash
/super-dev-story _bmad-output/implementation-artifacts/story-1.2.md
```
### Expected Flow
```
1. Pre-dev gap analysis
├─ "Approve task updates? [Y/A/n/e/s/r]"
└─ Select option
2. Development (standard TDD cycle)
└─ Implements all tasks
3. Post-dev gap analysis
├─ Scans codebase
├─ If gaps: adds tasks, loops back
└─ If clean: proceeds
4. Code review
├─ Analyzes all changes
├─ If issues: adds tasks, loops back
└─ If clean: proceeds
5. Story marked "review"
└─ Truly complete!
```
## Fix Iteration Safety
Super-dev has a **max iteration limit** (default: 3) to prevent infinite loops:
```yaml
# workflow.yaml
super_dev_settings:
max_fix_iterations: 3 # Stop after 3 fix cycles
fail_on_critical_issues: true # HALT if critical security issues
```
If exceeded:
```
🛑 Maximum Fix Iterations Reached
Attempted 3 fix cycles.
Manual intervention required.
Issues remaining:
- [List of unresolved issues]
```
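The guard itself is simple counter logic. A minimal shell sketch, with illustrative variable names rather than the actual workflow engine's:

```shell
# Illustrative iteration-guard logic - not the real workflow engine
max_fix_iterations=3
iteration=0
issues=5   # pretend validation keeps finding issues

while [ "$issues" -gt 0 ]; do
  iteration=$((iteration + 1))
  if [ "$iteration" -gt "$max_fix_iterations" ]; then
    echo "Maximum fix iterations reached - manual intervention required"
    break
  fi
  # ...run a fix cycle, then re-validate...
  issues=$((issues - 1))  # each cycle resolves one issue in this sketch
done
```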
## Examples
### Example 1: Perfect First Try
```
/super-dev-story
Pre-gap: ✅ Tasks accurate
Development: ✅ 8 tasks completed
Post-gap: ✅ All work verified
Code review: ✅ No issues
→ Story complete! (45 minutes, 85K tokens)
```
### Example 2: Post-Dev Catches Incomplete Work
```
/super-dev-story
Pre-gap: ✅ Tasks accurate
Development: ✅ 8 tasks completed
Post-gap: ⚠️ Tests claim 90% coverage, actual 65%
→ Adds task: "Increase test coverage to 90%"
→ Implements missing tests
→ Post-gap: ✅ Now 92% coverage
→ Code review: ✅ No issues
→ Story complete! (52 minutes, 95K tokens)
```
### Example 3: Code Review Finds Security Issue
```
/super-dev-story
Pre-gap: ✅ Tasks accurate
Development: ✅ 10 tasks completed
Post-gap: ✅ All work verified
Code review: 🚨 CRITICAL - SQL injection vulnerability
→ Adds task: "Fix SQL injection in user search"
→ Implements parameterized queries
→ Post-gap: ✅ Verified
→ Code review: ✅ Security issue resolved
→ Story complete! (58 minutes, 110K tokens)
```
## Comparison to Standard Workflow
### Standard Flow (dev-story)
```
Day 1: Develop story (30 min)
Day 2: Human review finds 3 issues
Day 3: Fix issues (20 min)
Day 4: Human review again
Day 5: Approved
Total: 5 days, 2 review cycles
```
### Super-Dev Flow
```
Day 1: Super-dev-story
- Development (30 min)
- Post-gap finds 1 issue (auto-fix 5 min)
- Code review finds 2 issues (auto-fix 15 min)
- Complete (50 min total)
Day 2: Human review
Day 3: Approved (minimal/no changes needed)
Total: 3 days, 1 review cycle
```
**Savings:** 2 days, 1 fewer review cycle, higher initial quality
## Troubleshooting
### "Super-dev keeps looping forever"
**Cause:** Each validation finds new issues
**Solution:** This indicates deeper quality problems. Review the max_fix_iterations setting or intervene manually.
### "Post-dev gap analysis keeps failing"
**Cause:** Dev agent marking tasks complete prematurely
**Solution:** This is expected! Super-dev catches this. The loop ensures actual completion.
### "Code review too strict"
**Cause:** Reviewing for issues standard dev-story would miss
**Solution:** This is intentional. For less strict review, use standard dev-story.
### "Too many tokens/too slow"
**Cause:** Multi-stage validation adds overhead
**Solution:** Use standard dev-story for non-critical stories. Reserve super-dev for important work.
## Best Practices
1. **Reserve for important stories** - Don't use for trivial changes
2. **Trust the process** - Fix iterations mean it's working correctly
3. **Review limits** - Adjust max_fix_iterations if stories are complex
4. **Monitor costs** - Track token usage vs review cycle savings
5. **Learn patterns** - Code review findings inform future architecture
## Configuration Reference
```yaml
# _bmad/bmm/config.yaml or _bmad/bmgd/config.yaml
# Per-project settings
super_dev_settings:
post_dev_gap_analysis: true # Enable post-dev validation
auto_code_review: true # Enable automatic code review
fail_on_critical_issues: true # HALT on security vulnerabilities
max_fix_iterations: 3 # Maximum fix cycles before manual intervention
auto_fix_minor_issues: false # Auto-fix LOW severity without asking
```
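Tooling that reads these settings does not need a full YAML parser. A grep-based sketch, with the file contents and key names assumed from the example above:

```shell
# Naive grep-based read of one setting - a sketch, not a YAML parser
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
super_dev_settings:
  post_dev_gap_analysis: true
  max_fix_iterations: 3
EOF
# Strip everything up to the value, then any trailing comment
max_iters=$(grep 'max_fix_iterations:' "$cfg" | sed -e 's/.*: *//' -e 's/ *#.*//')
echo "max_fix_iterations=$max_iters"
rm -f "$cfg"
```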
## See Also
- [dev-story workflow](../dev-story/) - Standard development workflow
- [gap-analysis workflow](../gap-analysis/) - Standalone audit tool
- [Gap Analysis Guide](../../../../docs/gap-analysis.md) - Complete documentation
- [Super-Dev Mode Concept](../../../../docs/super-dev-mode.md) - Vision and roadmap
---
**Super-Dev-Story: Because "done" should mean DONE** ✅


@ -1,299 +0,0 @@
<workflow>
<critical>The workflow execution engine is governed by: {project-root}/_bmad/core/tasks/workflow.xml</critical>
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
<critical>Communicate all responses in {communication_language} and language MUST be tailored to {user_skill_level}</critical>
<critical>Generate all documents in {document_output_language}</critical>
<critical>🚀 SUPER-DEV MODE: Enhanced quality workflow with post-implementation validation and automated code review</critical>
<critical>This workflow orchestrates existing workflows with additional validation steps</critical>
<!-- ═══════════════════════════════════════════════════════════════ -->
<!-- STEP 1: INVOKE STANDARD DEV-STORY WORKFLOW -->
<!-- ═══════════════════════════════════════════════════════════════ -->
<step n="1" goal="Execute standard dev-story workflow">
<critical>🎯 RUN DEV-STORY - Complete all standard development steps</critical>
<note>This includes: story loading, pre-dev gap analysis, development, testing, and task completion</note>
<output>🚀 **Super-Dev-Story: Enhanced Quality Workflow**
Running standard dev-story workflow (Steps 1-8)...
This includes:
✅ Story loading and validation
✅ Pre-dev gap analysis
✅ TDD implementation cycle
✅ Comprehensive testing
✅ Task completion validation
After dev-story completes, super-dev will add:
✅ Post-dev gap analysis
✅ Automated code review
✅ Auto push-all
</output>
<invoke-workflow path="{project-root}/_bmad/bmm/workflows/4-implementation/dev-story/workflow.yaml">
<input name="story_file" value="{{story_file}}" />
<input name="auto_accept_gap_analysis" value="{{auto_accept_gap_analysis}}" />
<note>Pass through any user-provided story file path and auto-accept setting</note>
</invoke-workflow>
<check if="dev-story completed successfully">
<output>✅ Dev-story complete - all tasks implemented and tested
Proceeding to super-dev enhancements...
</output>
</check>
<check if="dev-story failed or halted">
<output>❌ Dev-story did not complete successfully
Cannot proceed with super-dev enhancements.
Fix issues and retry.
</output>
<action>HALT - dev-story must complete first</action>
</check>
</step>
<!-- ═══════════════════════════════════════════════════════════════ -->
<!-- STEP 2: POST-DEV GAP ANALYSIS (Super-Dev Enhancement) -->
<!-- ═══════════════════════════════════════════════════════════════ -->
<step n="2" goal="Post-development gap analysis">
<critical>🔍 POST-DEV VALIDATION - Verify all work actually completed!</critical>
<note>This catches incomplete implementations that were prematurely marked done</note>
<output>
🔎 **Post-Development Gap Analysis**
All tasks marked complete. Verifying against codebase reality...
</output>
<!-- Re-scan codebase with fresh eyes -->
<action>Re-read story file to get requirements and tasks</action>
<action>Extract all tasks marked [x] complete</action>
<action>For each completed task, identify what should exist in codebase</action>
<!-- SCAN PHASE -->
<action>Use Glob to find files that should have been created</action>
<action>Use Grep to search for functions/classes that should exist</action>
<action>Use Read to verify implementation completeness (not just existence)</action>
<action>Run tests to verify claimed test coverage actually exists and passes</action>
<!-- ANALYSIS PHASE -->
<action>Compare claimed work vs actual implementation:</action>
**POST-DEV VERIFICATION:**
<action>✅ Verified Complete:
- List tasks where code fully exists and works
- Confirm tests exist and pass
- Verify implementation matches requirements
</action>
<action>❌ False Positives Detected:
- List tasks marked [x] but code missing or incomplete
- Identify claimed tests that don't exist or fail
- Note partial implementations marked as complete
</action>
<!-- DECISION PHASE -->
<check if="false positives found">
<output>
⚠️ **Post-Dev Gaps Detected!**
**Tasks marked complete but implementation incomplete:**
{{list_false_positives_with_details}}
These issues must be addressed before story can be marked complete.
</output>
<action>Uncheck false positive tasks in story file</action>
<action>Add new tasks for missing work</action>
<action>Update Gap Analysis section with post-dev findings</action>
<output>🔄 Re-invoking dev-story to complete missing work...</output>
<invoke-workflow path="{project-root}/_bmad/bmm/workflows/4-implementation/dev-story/workflow.yaml">
<input name="story_file" value="{{story_file}}" />
<input name="auto_accept_gap_analysis" value="{{auto_accept_gap_analysis}}" />
<note>Resume with added tasks for missing work</note>
</invoke-workflow>
<output>✅ Missing work completed. Proceeding to code review...</output>
</check>
<check if="no gaps found">
<output>✅ **Post-Dev Validation Passed**
All tasks verified complete against codebase.
Proceeding to code review...
</output>
<action>Update Gap Analysis section with post-dev verification results</action>
</check>
</step>
<!-- ═══════════════════════════════════════════════════════════════ -->
<!-- STEP 3: AUTOMATED CODE REVIEW (Super-Dev Enhancement) -->
<!-- ═══════════════════════════════════════════════════════════════ -->
<step n="3" goal="Automated code review">
<critical>👀 AUTO CODE REVIEW - Independent quality validation</critical>
<output>
🔍 **Running Automated Code Review**
Analyzing implementation for issues...
</output>
<invoke-workflow path="{project-root}/_bmad/bmm/workflows/4-implementation/code-review/workflow.yaml">
<input name="story_file" value="{{story_file}}" />
<note>Run code review on completed story</note>
</invoke-workflow>
<action>Parse code review results from story file "Code Review" section</action>
<action>Extract issues by severity (Critical, High, Medium, Low)</action>
<action>Count total issues found</action>
<check if="critical or high severity issues found">
<output>🚨 **Code Review Found Issues Requiring Fixes**
Issues found: {{total_issue_count}}
- Critical: {{critical_count}}
- High: {{high_count}}
- Medium: {{medium_count}}
- Low: {{low_count}}
Adding review findings to story tasks and re-running dev-story...
</output>
<action>Add code review findings as tasks in story file</action>
<invoke-workflow path="{project-root}/_bmad/bmm/workflows/4-implementation/dev-story/workflow.yaml">
<input name="story_file" value="{{story_file}}" />
<input name="auto_accept_gap_analysis" value="{{auto_accept_gap_analysis}}" />
<note>Fix code review issues</note>
</invoke-workflow>
<output>✅ Code review issues resolved. Proceeding to push...</output>
</check>
<check if="only medium or low issues found">
<output>⚠️ **Code Review Found Minor Issues**
- Medium: {{medium_count}}
- Low: {{low_count}}
</output>
<ask>Auto-fix these minor issues? [Y/n/skip]:</ask>
<check if="user approves Y">
<action>Add review findings as tasks</action>
<invoke-workflow path="{project-root}/_bmad/bmm/workflows/4-implementation/dev-story/workflow.yaml">
<input name="story_file" value="{{story_file}}" />
<input name="auto_accept_gap_analysis" value="{{auto_accept_gap_analysis}}" />
</invoke-workflow>
</check>
<check if="user says skip">
<action>Document issues in story file</action>
<output>⚠️ Minor issues documented. Proceeding to push...</output>
</check>
</check>
<check if="no issues found">
<output>✅ **Code Review Passed**
No issues found. Implementation meets quality standards.
Proceeding to push...
</output>
</check>
</step>
<!-- ═══════════════════════════════════════════════════════════════ -->
<!-- STEP 4: PUSH ALL CHANGES (Super-Dev Enhancement) -->
<!-- ═══════════════════════════════════════════════════════════════ -->
<step n="4" goal="Commit and push story changes">
<critical>📝 PUSH-ALL - Stage, commit, and push with safety validation</critical>
<critical>⚡ TARGETED COMMIT: Only commit files from THIS story's File List (safe for parallel agents)</critical>
<!-- Extract File List from story file -->
<action>Read story file and extract the "File List" section</action>
<action>Parse all file paths listed (relative to repo root)</action>
<action>Also include the story file itself in the list</action>
<action>Store as {{story_files}} - space-separated list of all files</action>
<output>📝 **Committing Story Changes**
Files from this story:
{{story_files}}
Running push-all with targeted file list (parallel-agent safe)...
</output>
<invoke-workflow path="{project-root}/_bmad/bmm/workflows/4-implementation/push-all/workflow.yaml">
<input name="target_files" value="{{story_files}}" />
<input name="story_key" value="{{story_key}}" />
<note>Only commit files changed by this story</note>
</invoke-workflow>
<check if="push-all succeeded">
<output>✅ Changes pushed to remote successfully</output>
</check>
<check if="push-all failed">
<output>⚠️ Push failed but story is complete locally
You can push manually when ready.
</output>
</check>
</step>
<!-- ═══════════════════════════════════════════════════════════════ -->
<!-- STEP 5: COMPLETION -->
<!-- ═══════════════════════════════════════════════════════════════ -->
<step n="5" goal="Super-dev completion summary">
<output>🎉 **SUPER-DEV STORY COMPLETE, {user_name}!**
**Quality Gates Passed:**
✅ Pre-dev gap analysis - Tasks validated before work
✅ Development - All tasks completed with TDD
✅ Post-dev gap analysis - Implementation verified
✅ Code review - Quality and security validated
✅ Pushed to remote - Changes backed up
**Story File:** {{story_file}}
**Status:** review (ready for human review)
---
**What Super-Dev Validated:**
1. 🔍 Tasks matched codebase reality before starting
2. 💻 Implementation completed per requirements
3. ✅ No false positive completions (all work verified)
4. 👀 Code quality and security validated
5. 📝 Changes committed and pushed to remote
**Next Steps:**
- Review the completed story
- Verify business requirements met
- Merge when approved
**Note:** This story went through enhanced quality validation.
It should require minimal human review.
</output>
<action>Based on {user_skill_level}, ask if user needs explanations about implementation, decisions, or findings</action>
<check if="user asks for explanations">
<action>Provide clear, contextual explanations</action>
</check>
<output>💡 **Tip:** This story was developed with super-dev-story for enhanced quality.
For faster development, use standard `dev-story` workflow.
For maximum quality, continue using `super-dev-story`.
</output>
</step>
</workflow>


@ -0,0 +1,311 @@
# Super Dev Story v3.0 - Development with Quality Gates
<purpose>
Complete story development pipeline: dev-story → validation → code review → push.
Automatically re-invokes dev-story if gaps or review issues found.
Ensures production-ready code before pushing.
</purpose>
<philosophy>
**Quality Over Speed**
Don't just implement—verify, review, fix.
- Run dev-story for implementation
- Validate with gap analysis
- Code review for quality
- Fix issues before pushing
- Only push when truly ready
</philosophy>
<config>
name: super-dev-story
version: 3.0.0
stages:
- dev-story: "Implement the story"
- validate: "Run gap analysis"
- review: "Code review"
- push: "Safe commit and push"
defaults:
max_rework_loops: 3
auto_push: false
review_depth: "standard" # quick | standard | deep
validation_depth: "quick"
quality_gates:
validation_threshold: 90 # % tasks must be verified
review_threshold: "pass" # pass | pass_with_warnings
</config>
<execution_context>
@patterns/verification.md
@patterns/hospital-grade.md
</execution_context>
<process>
<step name="initialize" priority="first">
**Load story and prepare pipeline**
```bash
STORY_FILE="{{story_file}}"
[ -f "$STORY_FILE" ] || { echo "❌ story_file required"; exit 1; }
```
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🚀 SUPER DEV STORY PIPELINE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Story: {{story_key}}
Stages: dev-story → validate → review → push
Quality Gates:
- Validation: ≥{{validation_threshold}}% verified
- Review: {{review_threshold}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
Initialize:
- rework_count = 0
- stage = "dev-story"
</step>
<step name="stage_dev_story">
**Stage 1: Implement the story**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📝 STAGE 1: DEV-STORY
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
Invoke dev-story workflow:
```
/dev-story story_file={{story_file}}
```
Wait for completion. Capture:
- files_created
- files_modified
- tasks_completed
```
✅ Dev-story complete
Files: {{file_count}} created/modified
Tasks: {{tasks_completed}}/{{total_tasks}}
```
</step>
<step name="stage_validate">
**Stage 2: Validate implementation**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔍 STAGE 2: VALIDATION
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
Invoke validation:
```
/validate scope=story target={{story_file}} depth={{validation_depth}}
```
Capture results:
- verified_pct
- false_positives
- category
**Check quality gate:**
```
if verified_pct < validation_threshold:
REWORK_NEEDED = true
reason = "Validation below {{validation_threshold}}%"
if false_positives > 0:
REWORK_NEEDED = true
reason = "{{false_positives}} tasks marked done but missing"
```
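The gate above translates directly to shell. Variable names follow the pseudocode; the values here are illustrative:

```shell
# Validation quality gate - values are illustrative, not real results
verified_pct=87
validation_threshold=90
false_positives=2
REWORK_NEEDED=false

if [ "$verified_pct" -lt "$validation_threshold" ]; then
  REWORK_NEEDED=true
  reason="Validation below ${validation_threshold}%"
fi
if [ "$false_positives" -gt 0 ]; then
  REWORK_NEEDED=true
  reason="${false_positives} tasks marked done but missing"
fi
echo "REWORK_NEEDED=$REWORK_NEEDED ($reason)"
```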
```
{{#if REWORK_NEEDED}}
⚠️ Validation failed: {{reason}}
{{else}}
✅ Validation passed: {{verified_pct}}% verified
{{/if}}
```
</step>
<step name="stage_review">
**Stage 3: Code review**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📋 STAGE 3: CODE REVIEW
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
Invoke code review:
```
/multi-agent-review files={{files_modified}} depth={{review_depth}}
```
Capture results:
- verdict (PASS, PASS_WITH_WARNINGS, NEEDS_REWORK)
- issues
**Check quality gate:**
```
if verdict == "NEEDS_REWORK":
REWORK_NEEDED = true
reason = "Code review found blocking issues"
if review_threshold == "pass" AND verdict == "PASS_WITH_WARNINGS":
REWORK_NEEDED = true
reason = "Warnings not allowed in strict mode"
```
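The same pattern works for the review gate. The verdict strings are assumed from the review workflow's output:

```shell
# Review quality gate - verdict strings assumed, values illustrative
verdict="PASS_WITH_WARNINGS"
review_threshold="pass"
REWORK_NEEDED=false

if [ "$verdict" = "NEEDS_REWORK" ]; then
  REWORK_NEEDED=true
  reason="Code review found blocking issues"
elif [ "$review_threshold" = "pass" ] && [ "$verdict" = "PASS_WITH_WARNINGS" ]; then
  REWORK_NEEDED=true
  reason="Warnings not allowed in strict mode"
fi
echo "REWORK_NEEDED=$REWORK_NEEDED"
```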
```
{{#if REWORK_NEEDED}}
⚠️ Review failed: {{reason}}
Issues: {{issues}}
{{else}}
✅ Review passed: {{verdict}}
{{/if}}
```
</step>
<step name="handle_rework" if="REWORK_NEEDED">
**Handle rework loop**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔄 REWORK REQUIRED (Loop {{rework_count + 1}}/{{max_rework_loops}})
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Reason: {{reason}}
{{#if validation_issues}}
Validation Issues:
{{#each validation_issues}}
- {{this}}
{{/each}}
{{/if}}
{{#if review_issues}}
Review Issues:
{{#each review_issues}}
- {{this}}
{{/each}}
{{/if}}
```
**Check loop limit:**
```
rework_count++
if rework_count > max_rework_loops:
echo "❌ Max rework loops exceeded"
echo "Manual intervention required"
HALT
```
**Re-invoke dev-story with issues:**
```
/dev-story story_file={{story_file}} fix_issues={{issues}}
```
After dev-story completes, return to validation stage.
</step>
<step name="stage_push" if="NOT REWORK_NEEDED">
**Stage 4: Push changes**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📦 STAGE 4: PUSH
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
**Generate commit message from story:**
```
feat({{epic}}): {{story_title}}
- Implemented {{task_count}} tasks
- Verified: {{verified_pct}}%
- Review: {{verdict}}
Story: {{story_key}}
```
**If auto_push:**
```
/push-all commit_message="{{message}}" auto_push=true
```
**Otherwise, ask:**
```
Ready to push?
[Y] Yes, push now
[N] No, keep local (can push later)
[R] Review changes first
```
</step>
<step name="final_summary">
**Display pipeline results**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ SUPER DEV STORY COMPLETE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Story: {{story_key}}
Pipeline Results:
- Dev-Story: ✅ Complete
- Validation: ✅ {{verified_pct}}% verified
- Review: ✅ {{verdict}}
- Push: {{pushed ? "✅ Pushed" : "⏸️ Local only"}}
Rework Loops: {{rework_count}}
Files Changed: {{file_count}}
Commit: {{commit_hash}}
{{#if pushed}}
Branch: {{branch}}
Ready for PR: gh pr create
{{/if}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
</step>
</process>
<examples>
```bash
# Standard pipeline
/super-dev-story story_file=docs/sprint-artifacts/2-5-auth.md
# With auto-push
/super-dev-story story_file=docs/sprint-artifacts/2-5-auth.md auto_push=true
# Strict review mode
/super-dev-story story_file=docs/sprint-artifacts/2-5-auth.md review_threshold=pass
```
</examples>
<failure_handling>
**Dev-story fails:** Report error, halt pipeline.
**Validation below threshold:** Enter rework loop.
**Review finds blocking issues:** Enter rework loop.
**Max rework loops exceeded:** Halt, require manual intervention.
**Push fails:** Report error, commit preserved locally.
</failure_handling>
<success_criteria>
- [ ] Dev-story completed
- [ ] Validation ≥ threshold
- [ ] Review passed
- [ ] Changes committed
- [ ] Pushed (if requested)
- [ ] Story status updated
</success_criteria>


@ -14,7 +14,7 @@ date: system-generated
# Workflow components
installed_path: "{project-root}/_bmad/bmm/workflows/4-implementation/super-dev-story"
instructions: "{installed_path}/instructions.xml"
instructions: "{installed_path}/workflow.md"
validation: "{installed_path}/checklist.md"
story_file: "" # Explicit story path; auto-discovered if empty