feat: unified workflow format for remaining implementation workflows

Converted 6 workflows to GSD-style unified format:
- multi-agent-review (188 → 197 lines)
- recover-sprint-status (306 → 172 lines, 44% reduction)
- revalidate-epic (273 → 189 lines, 31% reduction)
- revalidate-story (510 → 225 lines, 56% reduction)
- detect-ghost-features (625 → 278 lines, 56% reduction)
- migrate-to-github (957 → 279 lines, 71% reduction)

All use semantic tags, explicit commands, and @patterns references.
Jonah Schulte 2026-01-27 00:35:03 -05:00
parent e93d00a7d7
commit cff4770c74
12 changed files with 2680 additions and 0 deletions


@ -0,0 +1,278 @@
# Detect Ghost Features v3.0 - Reverse Gap Analysis
<purpose>
Find undocumented code (components, APIs, services, tables) that exist in codebase
but aren't tracked in any story. "Who you gonna call?" - Ghost Features.
</purpose>
<philosophy>
**Reverse Gap Analysis**
Normal gap analysis: story says X should exist → does it?
Reverse gap analysis: X exists in code → is it documented?
Undocumented features become maintenance nightmares.
Find them, create backfill stories, restore traceability.
</philosophy>
<config>
name: detect-ghost-features
version: 3.0.0
scan_scope:
epic: "Filter to specific epic number"
sprint: "All stories in sprint-status.yaml"
codebase: "All stories in sprint-artifacts"
scan_for:
components: true
api_endpoints: true
database_tables: true
services: true
severity:
critical: "APIs, auth, payment (undocumented = high risk)"
high: "Components, DB tables, services"
medium: "Utilities, helpers"
low: "Config files, constants"
defaults:
create_backfill_stories: false
auto_create: false
add_to_sprint_status: true
create_report: true
</config>
<execution_context>
@patterns/hospital-grade.md
</execution_context>
<process>
<step name="load_stories" priority="first">
**Load documented artifacts from stories**
Based on scan_scope (epic/sprint/codebase):
```bash
# Get all story files
STORIES=$(ls docs/sprint-artifacts/*.md | grep -v "epic-")
```
For each story:
1. Read story file
2. Extract documented artifacts:
- File List (all paths mentioned)
- Tasks (file/component/service names)
- ACs (features/functionality)
3. Store in: documented_artifacts[story_key]
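A minimal extraction sketch, assuming File List entries are backtick-quoted paths (adjust the pattern to your story template):
```bash
# Hypothetical helper: collect backtick-quoted paths from one story file
extract_artifacts() {
  local story="$1"
  grep -oE '`[^`]+\.[a-zA-Z]+`' "$story" | tr -d '`' | sort -u
}
extract_artifacts docs/sprint-artifacts/2-5-auth.md
```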
</step>
<step name="scan_codebase">
**Scan codebase for actual implementations**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
👻 SCANNING FOR GHOST FEATURES
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
**Components:**
```bash
# Find React/Vue/Angular components
find . -name "*.tsx" -o -name "*.jsx" | xargs grep -l "export.*function\|export.*const"
```
**API Endpoints:**
```bash
# Find Next.js/Express routes
find . -path "*/api/*" -name "*.ts"
grep -r "export.*GET\|export.*POST\|router\.\(get\|post\)" .
```
**Database Tables:**
```bash
# Find Prisma/TypeORM models
grep -r "^model " prisma/schema.prisma
find . -name "*.entity.ts"
```
**Services:**
```bash
find . -name "*.service.ts" -o -name "*Service.ts"
```
</step>
<step name="cross_reference">
**Compare codebase artifacts to story documentation**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔍 CROSS-REFERENCING
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
For each codebase artifact:
1. Search all stories for mentions of:
- Component/file name
- File path
- Feature description
2. If NO stories mention it → ORPHAN (ghost feature)
3. If stories mention it → Documented
Track orphans with:
- type (component/api/db/service)
- name and path
- severity
- inferred purpose
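In shell terms, the orphan check could be sketched like this (the artifact list file is illustrative):
```bash
# Flag artifacts that no story mentions by basename
while IFS= read -r artifact; do
  name=$(basename "$artifact")
  if ! grep -rq --include='*.md' "$name" docs/sprint-artifacts/; then
    echo "ORPHAN: $artifact"
  fi
done < /tmp/codebase_artifacts.txt
```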
</step>
<step name="categorize_orphans">
**Analyze and prioritize ghost features**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
👻 GHOST FEATURES DETECTED
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Total Orphans: {{count}}
By Severity:
- 🔴 CRITICAL: {{critical}} (APIs, security)
- 🟠 HIGH: {{high}} (Components, DB, services)
- 🟡 MEDIUM: {{medium}} (Utilities)
- 🟢 LOW: {{low}} (Config)
By Type:
- Components: {{components}}
- API Endpoints: {{apis}}
- Database Tables: {{tables}}
- Services: {{services}}
Documentation Coverage: {{documented_pct}}%
Orphan Rate: {{orphan_pct}}%
{{#if orphan_pct > 20}}
⚠️ HIGH ORPHAN RATE - Over 20% undocumented!
{{/if}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
</step>
<step name="create_backfill_stories" if="create_backfill_stories">
**Generate stories for orphaned features**
For each orphan (prioritized by severity):
1. **Analyze orphan** - Read implementation, find tests, understand purpose
2. **Generate story draft:**
```markdown
# Story: Document existing {{name}}
**Type:** BACKFILL (documenting existing code)
## Business Context
{{inferred_from_code}}
## Current State
✅ Implementation EXISTS: {{file}}
{{#if has_tests}}✅ Tests exist{{else}}❌ No tests{{/if}}
## Acceptance Criteria
{{inferred_acs}}
## Tasks
- [x] {{name}} implementation (ALREADY EXISTS)
{{#if !has_tests}}- [ ] Add tests{{/if}}
- [ ] Verify functionality
- [ ] Assign to epic
```
3. **Ask user** (unless auto_create):
- [Y] Create story
- [A] Auto-create all remaining
- [S] Skip this orphan
- [H] Halt
4. **Write story file:** `docs/sprint-artifacts/backfill-{{type}}-{{name}}.md`
5. **Update sprint-status.yaml** (if enabled)
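A slug sketch for step 4's filename (the lowercase-and-dash convention is an assumption):
```bash
# e.g. name="UserAvatar", type="component" -> backfill-component-useravatar.md
slug=$(echo "$name" | tr '[:upper:]' '[:lower:]' | sed -E 's/[^a-z0-9]+/-/g; s/^-|-$//g')
echo "docs/sprint-artifacts/backfill-${type}-${slug}.md"
```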
</step>
<step name="suggest_organization" if="backfill_stories_created">
**Recommend epic assignment**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📊 BACKFILL ORGANIZATION
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Options:
[A] Create Epic-Backfill (recommended)
- Single epic for all backfill stories
- Clear separation from feature work
[B] Distribute to existing epics
- Add each to its logical epic
[C] Leave in backlog
- Manual assignment later
```
</step>
<step name="generate_report" if="create_report">
**Write comprehensive ghost features report**
Write to: `docs/sprint-artifacts/ghost-features-report-{{timestamp}}.md`
Include:
- Executive summary
- Full orphan list by severity
- Backfill stories created
- Recommendations
- Scan methodology
</step>
<step name="final_summary">
**Display results and next steps**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ GHOST FEATURE DETECTION COMPLETE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Orphans Found: {{orphan_count}}
Backfill Stories Created: {{backfill_count}}
Documentation Coverage: {{documented_pct}}%
{{#if orphan_count == 0}}
✅ All code is documented in stories!
{{else}}
Next Steps:
1. Review backfill stories for accuracy
2. Assign to epics
3. Add tests/docs for orphans
4. Run revalidation to verify
{{/if}}
💡 Pro Tip: Run this periodically to catch
vibe-coded features before they become
maintenance nightmares.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
</step>
</process>
<failure_handling>
**No stories found:** Check scan_scope, verify sprint-artifacts exists.
**Scan fails:** Report which scan type failed, continue others.
**Backfill creation fails:** Skip, continue to next orphan.
</failure_handling>
<success_criteria>
- [ ] All artifact types scanned
- [ ] Cross-reference completed
- [ ] Orphans categorized by severity
- [ ] Backfill stories created (if enabled)
- [ ] Report generated
</success_criteria>


@ -0,0 +1,279 @@
# Migrate to GitHub v3.0 - Production-Grade Story Migration
<purpose>
Migrate BMAD stories to GitHub Issues with full safety guarantees.
Idempotent, atomic, verified, resumable, and reversible.
</purpose>
<philosophy>
**Reliability First, Data Integrity Over Speed**
- Idempotent: Can re-run safely (checks for duplicates)
- Atomic: Each story fully succeeds or rolls back
- Verified: Reads back each created issue
- Resumable: Saves state after each story
- Reversible: Creates rollback manifest
</philosophy>
<config>
name: migrate-to-github
version: 3.0.0
modes:
dry-run: {description: "Preview only, no changes", default: true}
execute: {description: "Actually create issues"}
verify: {description: "Double-check migration accuracy"}
rollback: {description: "Close migrated issues"}
defaults:
update_existing: false
halt_on_critical_error: true
save_state_after_each: true
max_retries: 3
retry_backoff_ms: [1000, 3000, 10000]
labels:
- "type:story"
- "story:{{story_key}}"
- "status:{{status}}"
- "epic:{{epic_number}}"
- "complexity:{{complexity}}"
</config>
<execution_context>
@patterns/hospital-grade.md
</execution_context>
<process>
<step name="preflight_checks" priority="first">
**Verify all prerequisites before ANY operations**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🛡️ PRE-FLIGHT SAFETY CHECKS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
**1. Verify GitHub MCP access:**
```
Call: mcp__github__get_me()
If fails: HALT - Cannot proceed without GitHub API access
```
**2. Verify repository access:**
```
Call: mcp__github__list_issues(owner, repo, per_page=1)
If fails: HALT - Repository not accessible
```
**3. Verify local files exist:**
```bash
[ -f "docs/sprint-artifacts/sprint-status.yaml" ] || { echo "HALT"; exit 1; }
```
**4. Check for existing migration:**
- If state file exists: offer Resume/Fresh/View/Delete
- If resuming: load already-migrated stories, filter from queue
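A resume-check sketch (the state file path is an assumption, not a fixed contract):
```bash
STATE_FILE="docs/sprint-artifacts/.github-migration-state.json"
if [ -f "$STATE_FILE" ]; then
  echo "Previous migration state found - offer Resume/Fresh/View/Delete"
fi
```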
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ PRE-FLIGHT CHECKS PASSED
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
</step>
<step name="dry_run" if="mode == dry-run">
**Preview migration plan without making changes**
For each story:
1. Search GitHub for existing issue with label `story:{{story_key}}`
2. If exists: mark as "Would UPDATE" or "Would SKIP"
3. If not exists: mark as "Would CREATE"
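If the GitHub CLI is installed, the duplicate check can be spot-verified by hand:
```bash
# Manual spot check: does any issue already carry this story's label?
gh issue list --label "story:2-5-auth" --state all --limit 5
```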
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📊 DRY-RUN SUMMARY
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Would CREATE: {{create_count}} new issues
Would UPDATE: {{update_count}} existing issues
Would SKIP: {{skip_count}}
Estimated API Calls: ~{{total_calls}}
Rate Limit Impact: Safe (< 1000 calls)
⚠️ This was a DRY-RUN. No issues created.
To execute: /migrate-to-github mode=execute
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
</step>
<step name="execute" if="mode == execute">
**Perform migration with atomic operations**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
⚡ EXECUTE MODE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
**Final confirmation:**
```
Type "I understand and want to proceed" to continue:
```
Initialize migration state and rollback manifest.
For each story:
**1. Check if exists (idempotent):**
```
Search: label:story:{{story_key}}
If exists AND update_existing=false: SKIP
```
**2. Generate issue body:**
```markdown
**Story File:** [{{story_key}}.md](path)
**Epic:** {{epic_number}}
## Business Context
{{parsed.businessContext}}
## Acceptance Criteria
{{#each ac}}
- [ ] {{this}}
{{/each}}
## Tasks
{{#each tasks}}
- [ ] {{this}}
{{/each}}
```
**3. Create/update with retry and verification:**
```
attempt = 0
WHILE attempt < max_retries:
TRY:
result = mcp__github__issue_write(create/update)
sleep 2 seconds # GitHub eventual consistency
verification = mcp__github__issue_read(issue_number)
IF verification.title != expected:
THROW "Verification failed"
SUCCESS - add to rollback manifest
BREAK
CATCH:
attempt++
IF attempt < max_retries:
sleep backoff_ms[attempt - 1]  # 1s after first failure, 3s after second
ELSE:
FAIL - add to issues_failed
```
**4. Save state after each story**
**5. Progress updates every 10 stories:**
```
📊 Progress: {{index}}/{{total}}
Created: {{created}}, Updated: {{updated}}, Failed: {{failed}}
```
</step>
<step name="verify" if="mode == verify">
**Double-check migration accuracy**
For each migrated story:
1. Fetch issue from GitHub
2. Verify title, labels, AC count match
3. Report mismatches
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📊 VERIFICATION RESULTS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Verified Correct: {{verified}}
Warnings: {{warnings}}
Failures: {{failures}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
</step>
<step name="rollback" if="mode == rollback">
**Close migrated issues (the GitHub REST API cannot delete issues)**
Load rollback manifest. For each created issue:
```
mcp__github__issue_write({
issue_number: {{number}},
state: "closed",
labels: ["migrated:rolled-back"],
state_reason: "not_planned"
})
mcp__github__add_issue_comment({
body: "Issue closed - migration was rolled back."
})
```
</step>
<step name="generate_report">
**Create comprehensive migration report**
Write to: `docs/sprint-artifacts/github-migration-{{timestamp}}.md`
Include:
- Executive summary
- Created/updated/failed issues
- GitHub URLs for each issue
- Rollback instructions
- Next steps
</step>
<step name="final_summary">
**Display completion status**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ MIGRATION COMPLETE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Total: {{total}} stories
Created: {{created}}
Updated: {{updated}}
Failed: {{failed}}
Success Rate: {{success_pct}}%
View in GitHub:
https://github.com/{{owner}}/{{repo}}/issues?q=label:type:story
Rollback Manifest: {{rollback_path}}
State File: {{state_path}}
Next Steps:
1. Verify: /migrate-to-github mode=verify
2. Enable GitHub sync in workflow.yaml
3. Share Issues URL with Product Owner
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
</step>
</process>
<failure_handling>
**GitHub MCP unavailable:** HALT - Cannot proceed.
**Repository not accessible:** HALT - Check permissions.
**Issue create fails:** Retry with backoff, then fail story.
**Verification fails:** Log warning, continue.
**All stories fail:** Report systemic issue, HALT.
</failure_handling>
<success_criteria>
- [ ] Pre-flight checks passed
- [ ] All stories processed
- [ ] Issues verified after creation
- [ ] State and rollback manifest saved
- [ ] Report generated
</success_criteria>


@ -0,0 +1,197 @@
# Multi-Agent Code Review v3.0
<purpose>
Perform unbiased code review using multiple specialized AI agents in fresh context.
Agent count scales with story complexity. Independent perspective prevents bias.
</purpose>
<philosophy>
**Fresh Context, Multiple Perspectives**
- Review happens in NEW session (not the agent that wrote the code)
- Prevents bias from implementation decisions
- Agent count determined by complexity, agents chosen by code analysis
- Smart selection: touching auth code → security agent, etc.
</philosophy>
<config>
name: multi-agent-review
version: 3.0.0
agent_selection:
micro: {count: 2, agents: [security, code_quality]}
standard: {count: 4, agents: [security, code_quality, architecture, testing]}
complex: {count: 6, agents: [security, code_quality, architecture, testing, performance, domain_expert]}
available_agents:
security: "Identifies vulnerabilities and security risks"
code_quality: "Reviews style, maintainability, best practices"
architecture: "Reviews system design, patterns, structure"
testing: "Evaluates test coverage and quality"
performance: "Analyzes efficiency and optimization"
domain_expert: "Validates business logic and domain constraints"
</config>
<execution_context>
@patterns/security-checklist.md
@patterns/hospital-grade.md
@patterns/agent-completion.md
</execution_context>
<process>
<step name="determine_agent_count" priority="first">
**Select agents based on complexity**
```
If complexity_level == "micro":
agents = ["security", "code_quality"]
Display: 🔍 MICRO Review (2 agents)
Else if complexity_level == "standard":
agents = ["security", "code_quality", "architecture", "testing"]
Display: 📋 STANDARD Review (4 agents)
Else if complexity_level == "complex":
agents = ALL 6 agents
Display: 🔬 COMPLEX Review (6 agents)
```
</step>
<step name="load_story_context">
**Load story file and understand requirements**
```bash
STORY_FILE="{{story_file}}"
[ -f "$STORY_FILE" ] || { echo "❌ Story file not found"; exit 1; }
```
Use Read tool on story file. Extract:
- What was supposed to be implemented
- Acceptance criteria
- Tasks and subtasks
- File list
</step>
<step name="invoke_review_agents">
**Spawn review agents in fresh context**
For each agent in selected agents, spawn Task agent:
```
Task({
subagent_type: "general-purpose",
description: "{{agent_type}} review for {{story_key}}",
prompt: `
You are the {{AGENT_TYPE}} reviewer for story {{story_key}}.
<execution_context>
@patterns/security-checklist.md
@patterns/hospital-grade.md
</execution_context>
<context>
Story: [inline story content]
Changed files: [git diff output]
</context>
<objective>
Review from your {{agent_type}} perspective. Find issues, be thorough.
</objective>
<success_criteria>
- [ ] All relevant files reviewed
- [ ] Issues categorized by severity (CRITICAL/HIGH/MEDIUM/LOW)
- [ ] Return ## AGENT COMPLETE with findings
</success_criteria>
`
})
```
Wait for all agents to complete. Aggregate findings.
</step>
<step name="aggregate_findings">
**Collect and categorize all findings**
Merge findings from all agents:
- CRITICAL: Security vulnerabilities, data loss risks
- HIGH: Production bugs, logic errors
- MEDIUM: Technical debt, maintainability
- LOW: Nice-to-have improvements
</step>
<step name="present_report">
**Display review summary**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🤖 MULTI-AGENT CODE REVIEW COMPLETE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Agents Used: {{agent_count}}
- Security Agent
- Code Quality Agent
[...]
Findings:
- 🔴 CRITICAL: {{critical_count}}
- 🟠 HIGH: {{high_count}}
- 🟡 MEDIUM: {{medium_count}}
- 🔵 LOW: {{low_count}}
```
For each finding, display:
- Severity and title
- Agent that found it
- Location (file:line)
- Description and recommendation
</step>
<step name="recommend_actions">
**Suggest next steps based on findings**
```
📋 RECOMMENDED NEXT STEPS:
If CRITICAL findings exist:
⚠️ MUST FIX before proceeding
- Address all critical security/correctness issues
- Re-run review after fixes
If only HIGH/MEDIUM findings:
✅ Story may proceed
- Consider addressing high-priority items
- Create follow-up tasks for medium items
If only LOW/INFO findings:
✅ Code quality looks good
- Optional: Address style/optimization suggestions
```
</step>
</process>
<integration>
**When to use:**
- Complex stories (≥16 tasks or high-risk keywords)
- Security-sensitive code
- Significant architectural changes
- When single-agent review was inconclusive
**When NOT to use:**
- Micro stories (≤3 tasks)
- Standard stories with simple changes
- Stories that passed adversarial review cleanly
</integration>
<failure_handling>
**Review agent fails:** Fall back to adversarial code review.
**API error:** Log failure, continue pipeline with warning.
</failure_handling>
<success_criteria>
- [ ] All selected agents completed review
- [ ] Findings aggregated and categorized
- [ ] Report displayed with recommendations
</success_criteria>


@ -0,0 +1,172 @@
# Recover Sprint Status v3.0
<purpose>
Fix sprint-status.yaml when tracking has drifted. Analyzes multiple sources
(story files, git commits, completion reports) to rebuild accurate status.
</purpose>
<philosophy>
**Multiple Evidence Sources, Conservative Updates**
1. Story file quality (size, tasks, checkboxes)
2. Explicit Status: fields in stories
3. Git commits (last 30 days)
4. Autonomous completion reports
5. Task completion rate
Explicit Status: fields carry the most weight; any status change requires supporting evidence.
</philosophy>
<config>
name: recover-sprint-status
version: 3.0.0
modes:
dry-run: {description: "Analysis only, no changes", default: true}
conservative: {description: "High confidence updates only"}
aggressive: {description: "Medium+ confidence, infers from git"}
interactive: {description: "Ask before each batch"}
confidence_levels:
very_high: {sources: [explicit_status, completion_report]}
high: {sources: [3+ git_commits, 90% tasks_complete]}
medium: {sources: [1-2 git_commits, 50-90% tasks_complete]}
low: {sources: [no_status, no_commits, small_file]}
</config>
<execution_context>
@patterns/hospital-grade.md
</execution_context>
<process>
<step name="analyze_sources" priority="first">
**Scan all evidence sources**
```bash
# Find story files
SPRINT_ARTIFACTS="docs/sprint-artifacts"
STORIES=$(ls $SPRINT_ARTIFACTS/*.md 2>/dev/null | grep -v "epic-")
# Get recent git commits
git log --oneline --since="30 days ago" > /tmp/recent_commits.txt
```
For each story:
1. Read story file, extract Status: field if present
2. Check file size (≥10KB = properly detailed)
3. Count tasks and checkbox completion
4. Search git commits for story references
5. Check for completion reports (.epic-*-completion-report.md)
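Counting commit references per story might look like this (story keys appearing in commit messages is an assumption):
```bash
# How many recent commits mention this story key?
commit_count=$(grep -c "$story_key" /tmp/recent_commits.txt)
echo "$story_key: $commit_count commit(s)"
```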
</step>
<step name="calculate_confidence">
**Determine confidence level for each story**
| Evidence | Confidence | Action |
|----------|------------|--------|
| Explicit Status: done | Very High | Trust it |
| Completion report lists story | Very High | Mark done |
| 3+ git commits + 90% checked | High | Mark done |
| 1-2 commits OR 50-90% checked | Medium | Mark in-progress |
| No commits, <50% checked | Low | Leave as-is |
| File <10KB | Low | Downgrade if done |
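A checkbox-rate sketch, assuming the standard `- [x]` / `- [ ]` Markdown style:
```bash
checked=$(grep -c '^[[:space:]]*- \[x\]' "$story")
total=$(grep -c '^[[:space:]]*- \[[x ]\]' "$story")
pct=$(( total > 0 ? checked * 100 / total : 0 ))
```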
</step>
<step name="preview_changes" if="mode == dry-run">
**Show recommendations without applying**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔍 RECOVERY ANALYSIS (Dry Run)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
High Confidence Updates:
- 2-5-auth: backlog → done (explicit Status:, 3 commits)
- 2-6-profile: in-progress → done (completion report)
Medium Confidence Updates:
- 2-7-settings: backlog → in-progress (2 commits)
Low Confidence (verify manually):
- 2-8-dashboard: no Status:, no commits, <10KB file
```
Exit after preview. No changes made.
</step>
<step name="apply_conservative" if="mode == conservative">
**Apply only high/very-high confidence updates**
For each high+ confidence story:
1. Backup current sprint-status.yaml
2. Use Edit tool to update status
3. Log change
```bash
# Backup (define the path and create the backups directory first)
SPRINT_STATUS="docs/sprint-artifacts/sprint-status.yaml"
mkdir -p .sprint-status-backups
cp "$SPRINT_STATUS" ".sprint-status-backups/sprint-status-recovery-$(date +%Y%m%d).yaml"
```
Skip medium/low confidence stories.
</step>
<step name="apply_aggressive" if="mode == aggressive">
**Apply medium+ confidence updates**
Includes:
- Inferring from git commits (even 1 commit)
- Using task completion rate
- Pre-filling brownfield checkboxes
```
⚠️ AGGRESSIVE mode may make incorrect inferences.
Review results carefully.
```
</step>
<step name="validate_results">
**Verify recovery worked**
```bash
./scripts/sync-sprint-status.sh --validate
```
Should show:
- "✓ sprint-status.yaml is up to date!" (success)
- OR discrepancy count (if issues remain)
</step>
<step name="commit_changes" if="changes_made">
**Commit the recovery**
Use Bash to commit:
```bash
git add docs/sprint-artifacts/sprint-status.yaml
git add .sprint-status-backups/
git commit -m "fix(tracking): Recover sprint-status.yaml - {{mode}} recovery"
```
</step>
</process>
<failure_handling>
**No changes detected:** sprint-status.yaml already accurate.
**Low confidence on known-done stories:** Add Status: field manually, re-run.
**Recovery marks incomplete as done:** Use conservative mode, verify manually.
</failure_handling>
<post_recovery_checklist>
- [ ] Run validation: `./scripts/sync-sprint-status.sh --validate`
- [ ] Review backup in `.sprint-status-backups/`
- [ ] Spot-check 5-10 stories for accuracy
- [ ] Commit changes
- [ ] Document why drift occurred
</post_recovery_checklist>
<success_criteria>
- [ ] All evidence sources analyzed
- [ ] Changes applied based on confidence threshold
- [ ] Validation passes
- [ ] Backup created
</success_criteria>


@ -0,0 +1,189 @@
# Revalidate Epic v3.0 - Batch Story Revalidation
<purpose>
Batch revalidate all stories in an epic using parallel agents (semaphore pattern).
Clears checkboxes, verifies against codebase, re-checks verified items.
</purpose>
<philosophy>
**Parallel Verification, Continuous Worker Pool**
- Spawn up to N workers, refill as each completes
- Each story gets fresh context verification
- Aggregate results into epic-level health score
- Optionally fill gaps found during verification
</philosophy>
<config>
name: revalidate-epic
version: 3.0.0
defaults:
max_concurrent: 3
fill_gaps: false
continue_on_failure: true
create_epic_report: true
update_sprint_status: true
</config>
<execution_context>
@patterns/verification.md
@patterns/hospital-grade.md
@revalidate-story/workflow.md
</execution_context>
<process>
<step name="load_epic_stories" priority="first">
**Find all stories for the epic**
```bash
EPIC_NUMBER="{{epic_number}}"
[ -n "$EPIC_NUMBER" ] || { echo "ERROR: epic_number required"; exit 1; }
# Filter stories from sprint-status.yaml
grep "^${EPIC_NUMBER}-" docs/sprint-artifacts/sprint-status.yaml
```
Use Read tool on sprint-status.yaml. Filter stories starting with `{epic_number}-`.
Exclude epics and retrospectives.
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔍 EPIC {{epic_number}} REVALIDATION
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Stories Found: {{count}}
Mode: {{fill_gaps ? "Verify & Fill Gaps" : "Verify Only"}}
Max Concurrent: {{max_concurrent}} agents
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
Use AskUserQuestion: Proceed with revalidation? (yes/no)
</step>
<step name="spawn_worker_pool">
**Initialize semaphore pattern for parallel revalidation**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🚀 Starting Parallel Revalidation
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
Initialize state:
- story_queue = epic_stories
- active_workers = {}
- completed_stories = []
- failed_stories = []
Fill initial worker slots (up to max_concurrent):
```
Task({
subagent_type: "general-purpose",
description: "Revalidate story {{story_key}}",
prompt: `
Execute revalidate-story workflow for {{story_key}}.
<execution_context>
@revalidate-story/workflow.md
</execution_context>
Parameters:
- story_file: {{story_file}}
- fill_gaps: {{fill_gaps}}
Return verification summary with verified_pct, gaps_found, gaps_filled.
`,
run_in_background: true
})
```
</step>
<step name="maintain_worker_pool">
**Keep workers running until all stories done**
While active_workers > 0 OR stories remaining in queue:
1. Poll for completed workers (non-blocking with TaskOutput)
2. When worker completes:
- Parse verification results
- Add to completed_stories
- If more stories in queue: spawn new worker in that slot
3. Display progress every 30 seconds:
```
📊 Progress: {{completed}} completed, {{active}} active, {{queued}} queued
```
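The pool is driven by Task agents, but the control flow is the classic shell job semaphore; a rough equivalent, with `revalidate_story` as a stand-in worker:
```bash
MAX=3
for story in "${STORIES[@]}"; do
  while (( $(jobs -rp | wc -l) >= MAX )); do
    wait -n   # block until any worker exits (bash 4.3+)
  done
  revalidate_story "$story" &   # hypothetical worker function
done
wait   # drain the remaining workers
```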
</step>
<step name="aggregate_results">
**Generate epic-level summary**
Calculate totals across all stories:
- epic_verified = sum of verified items
- epic_partial = sum of partial items
- epic_missing = sum of missing items
- epic_verified_pct = (verified / total) × 100
Group stories by health:
- Complete (≥95% verified)
- Mostly Complete (80-94%)
- Partial (50-79%)
- Incomplete (<50%)
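The health buckets map directly to a threshold function, for example:
```bash
# Bucket one story by its verified percentage
bucket() {
  local pct=$1
  if   (( pct >= 95 )); then echo "Complete"
  elif (( pct >= 80 )); then echo "Mostly Complete"
  elif (( pct >= 50 )); then echo "Partial"
  else                       echo "Incomplete"
  fi
}
```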
</step>
<step name="display_summary">
**Show epic revalidation results**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📊 EPIC {{epic_number}} REVALIDATION SUMMARY
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Total Stories: {{count}}
Completed: {{completed_count}}
Failed: {{failed_count}}
Epic-Wide Verification:
- ✅ Verified: {{verified}}/{{total}} ({{pct}}%)
- 🔶 Partial: {{partial}}/{{total}}
- ❌ Missing: {{missing}}/{{total}}
Epic Health Score: {{epic_verified_pct}}/100
{{#if pct >= 95}}
✅ Epic is COMPLETE and verified
{{else if pct >= 80}}
🔶 Epic is MOSTLY COMPLETE
{{else if pct >= 50}}
⚠️ Epic is PARTIALLY COMPLETE
{{else}}
❌ Epic is INCOMPLETE (major rework needed)
{{/if}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
</step>
<step name="update_tracking" if="update_sprint_status">
**Update sprint-status with revalidation results**
Use Edit tool to add comment to epic entry:
```
epic-{{epic_number}}: done # Revalidated: {{pct}}% verified ({{timestamp}})
```
</step>
</process>
<failure_handling>
**Worker fails:** Log error, refill slot if continue_on_failure=true.
**All stories fail:** Report systemic issue, halt batch.
**Story file missing:** Skip with warning.
</failure_handling>
<success_criteria>
- [ ] All epic stories processed
- [ ] Results aggregated
- [ ] Epic health score calculated
- [ ] Sprint status updated (if enabled)
</success_criteria>


@ -0,0 +1,225 @@
# Revalidate Story v3.0 - Verify Checkboxes Against Codebase
<purpose>
Clear all checkboxes and re-verify each item against actual codebase reality.
Detects over-reported completion and identifies real gaps.
Optionally fills gaps by implementing missing items.
</purpose>
<philosophy>
**Trust But Verify, Evidence Required**
1. Clear all checkboxes (fresh start)
2. For each AC/Task/DoD: search codebase for evidence
3. Only re-check if evidence found AND not a stub
4. Report accuracy: was completion over-reported or under-reported?
</philosophy>
<config>
name: revalidate-story
version: 3.0.0
defaults:
fill_gaps: false
max_gaps_to_fill: 10
commit_strategy: "all_at_once" # or "per_gap"
create_report: true
update_sprint_status: true
verification_status:
verified: {checkbox: "[x]", evidence: "found, not stub, tests exist"}
partial: {checkbox: "[~]", evidence: "partial implementation or missing tests"}
missing: {checkbox: "[ ]", evidence: "not found in codebase"}
</config>
<execution_context>
@patterns/verification.md
@patterns/hospital-grade.md
</execution_context>
<process>
<step name="load_and_backup" priority="first">
**Load story and backup current state**
```bash
STORY_FILE="{{story_file}}"
[ -f "$STORY_FILE" ] || { echo "ERROR: story_file required"; exit 1; }
```
Use Read tool on story file. Count current checkboxes:
- ac_checked_before
- tasks_checked_before
- dod_checked_before
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔍 STORY REVALIDATION STARTED
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Story: {{story_key}}
Mode: {{fill_gaps ? "Verify & Fill Gaps" : "Verify Only"}}
Current State:
- Acceptance Criteria: {{ac_checked}}/{{ac_total}} checked
- Tasks: {{tasks_checked}}/{{tasks_total}} checked
- DoD: {{dod_checked}}/{{dod_total}} checked
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
</step>
<step name="clear_checkboxes">
**Clear all checkboxes for fresh verification**
Use Edit tool (replace_all: true):
- `[x]` → `[ ]` in the Acceptance Criteria section
- `[x]` → `[ ]` in the Tasks section
- `[x]` → `[ ]` in the Definition of Done section
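In shell terms the blanket version is one line, though the Edit tool lets you scope it to just those three sections:
```bash
# Blanket sketch - clears EVERY checkbox in the file, not only the three sections
sed -i 's/- \[x\]/- [ ]/g' "$STORY_FILE"
```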
</step>
<step name="verify_acceptance_criteria">
**Verify each AC against codebase**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📋 VERIFYING ACCEPTANCE CRITERIA
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
For each AC item:
1. **Parse AC** - Extract file/component/feature mentions
2. **Search codebase** - Use Glob/Grep to find evidence
3. **Verify implementation** - Read files, check for:
- NOT a stub (no "TODO", "Not implemented", empty function)
- Has actual implementation
- Tests exist (*.test.* or *.spec.*)
4. **Determine status:**
- VERIFIED: Evidence found, not stub, tests exist → check [x]
- PARTIAL: Partial evidence or missing tests → check [~]
- MISSING: No evidence found → leave [ ]
5. **Record evidence or gap**
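A stub heuristic might look like this (the marker list is an assumption; tune it for your codebase):
```bash
is_stub() {
  grep -qE 'TODO|FIXME|Not implemented|NotImplementedError' "$1"
}
has_tests() {
  local base; base=$(basename "$1" | sed 's/\.[^.]*$//')
  find . -name "${base}.test.*" -o -name "${base}.spec.*" | grep -q .
}
```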
</step>
<step name="verify_tasks">
**Verify each task against codebase**
Same process as ACs:
- Parse task description for artifacts
- Search codebase with Glob/Grep
- Read and verify (check for stubs, tests)
- Update checkbox based on evidence
</step>
<step name="verify_definition_of_done">
**Verify DoD items**
For common DoD items, run actual checks:
- "Type check passes" → `npm run type-check`
- "Unit tests pass" → `npm test`
- "Linting clean" → `npm run lint`
- "Build succeeds" → `npm run build`
</step>
<step name="generate_report">
**Calculate and display results**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📊 REVALIDATION SUMMARY
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Story: {{story_key}}
Verification Results:
- ✅ Verified: {{verified}}/{{total}} ({{pct}}%)
- 🔶 Partial: {{partial}}/{{total}}
- ❌ Missing: {{missing}}/{{total}}
Accuracy Check:
- Before: {{pct_before}}% checked
- After: {{verified_pct}}% verified
- {{pct_before > verified_pct ? "Over-reported" : pct_before < verified_pct ? "Under-reported" : "Accurate"}}
{{#if missing > 0}}
Gaps Found ({{missing}}):
[list gaps with what's missing]
{{/if}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
</step>
<step name="fill_gaps" if="fill_gaps AND gaps_found">
**Implement missing items**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔧 GAP FILLING MODE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
Safety check:
```
if gaps_found > max_gaps_to_fill:
echo "⚠️ TOO MANY GAPS ({{gaps}} > {{max}})"
echo "Consider re-implementing with /dev-story"
HALT
```
For each gap:
1. Load story context
2. Implement missing item
3. Write tests
4. Run tests to verify
5. Check box [x] if successful
6. Commit if commit_strategy == "per_gap"
</step>
<step name="finalize">
**Re-verify and commit**
If gaps were filled:
1. Re-run verification on filled gaps
2. Commit all changes (if commit_strategy == "all_at_once")
Update sprint-status.yaml with revalidation result:
```
{{story_key}}: {{status}} # Revalidated: {{pct}}% ({{timestamp}})
```
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ REVALIDATION COMPLETE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Final: {{verified}}/{{total}} verified ({{pct}}%)
Recommendation:
{{#if pct >= 95}}
✅ Story is COMPLETE - mark as "done"
{{else if pct >= 80}}
🔶 Mostly complete - finish remaining items
{{else if pct >= 50}}
⚠️ Significant gaps - continue with /dev-story
{{else}}
❌ Mostly incomplete - consider re-implementing
{{/if}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
</step>
</process>
<failure_handling>
**File not found:** HALT with error.
**Verification fails:** Record gap, continue to next item.
**Gap fill fails:** Leave unchecked, record failure.
**Too many gaps:** HALT, recommend re-implementation.
</failure_handling>
<success_criteria>
- [ ] All items verified against codebase
- [ ] Checkboxes reflect actual implementation
- [ ] Accuracy comparison displayed
- [ ] Gaps filled (if enabled)
- [ ] Sprint status updated
</success_criteria>

View File

@ -0,0 +1,278 @@
# Detect Ghost Features v3.0 - Reverse Gap Analysis
<purpose>
Find undocumented code (components, APIs, services, tables) that exist in codebase
but aren't tracked in any story. "Who you gonna call?" - Ghost Features.
</purpose>
<philosophy>
**Reverse Gap Analysis**
Normal gap analysis: story says X should exist → does it?
Reverse gap analysis: X exists in code → is it documented?
Undocumented features become maintenance nightmares.
Find them, create backfill stories, restore traceability.
</philosophy>
<config>
name: detect-ghost-features
version: 3.0.0
scan_scope:
epic: "Filter to specific epic number"
sprint: "All stories in sprint-status.yaml"
codebase: "All stories in sprint-artifacts"
scan_for:
components: true
api_endpoints: true
database_tables: true
services: true
severity:
critical: "APIs, auth, payment (undocumented = high risk)"
high: "Components, DB tables, services"
medium: "Utilities, helpers"
low: "Config files, constants"
defaults:
create_backfill_stories: false
auto_create: false
add_to_sprint_status: true
create_report: true
</config>
<execution_context>
@patterns/hospital-grade.md
</execution_context>
<process>
<step name="load_stories" priority="first">
**Load documented artifacts from stories**
Based on scan_scope (epic/sprint/codebase):
```bash
# Get all story files
STORIES=$(ls docs/sprint-artifacts/*.md | grep -v "epic-")
```
For each story:
1. Read story file
2. Extract documented artifacts:
- File List (all paths mentioned)
- Tasks (file/component/service names)
- ACs (features/functionality)
3. Store in: documented_artifacts[story_key]
</step>
<step name="scan_codebase">
**Scan codebase for actual implementations**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
👻 SCANNING FOR GHOST FEATURES
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
**Components:**
```bash
# Find React/Vue/Angular components
find . -name "*.tsx" -o -name "*.jsx" | xargs grep -l "export.*function\|export.*const"
```
**API Endpoints:**
```bash
# Find Next.js/Express routes
find . -path "*/api/*" -name "*.ts"
grep -r "export.*GET\|export.*POST\|router\.\(get\|post\)" .
```
**Database Tables:**
```bash
# Find Prisma/TypeORM models
grep -r "^model " prisma/schema.prisma
find . -name "*.entity.ts"
```
**Services:**
```bash
find . -name "*.service.ts" -o -name "*Service.ts"
```
</step>
<step name="cross_reference">
**Compare codebase artifacts to story documentation**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔍 CROSS-REFERENCING
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
For each codebase artifact:
1. Search all stories for mentions of:
- Component/file name
- File path
- Feature description
2. If NO stories mention it → ORPHAN (ghost feature)
3. If stories mention it → Documented
Track orphans with:
- type (component/api/db/service)
- name and path
- severity
- inferred purpose
</step>
<step name="categorize_orphans">
**Analyze and prioritize ghost features**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
👻 GHOST FEATURES DETECTED
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Total Orphans: {{count}}
By Severity:
- 🔴 CRITICAL: {{critical}} (APIs, security)
- 🟠 HIGH: {{high}} (Components, DB, services)
- 🟡 MEDIUM: {{medium}} (Utilities)
- 🟢 LOW: {{low}} (Config)
By Type:
- Components: {{components}}
- API Endpoints: {{apis}}
- Database Tables: {{tables}}
- Services: {{services}}
Documentation Coverage: {{documented_pct}}%
Orphan Rate: {{orphan_pct}}%
{{#if orphan_pct > 20}}
⚠️ HIGH ORPHAN RATE - Over 20% undocumented!
{{/if}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
</step>
<step name="create_backfill_stories" if="create_backfill_stories">
**Generate stories for orphaned features**
For each orphan (prioritized by severity):
1. **Analyze orphan** - Read implementation, find tests, understand purpose
2. **Generate story draft:**
```markdown
# Story: Document existing {{name}}
**Type:** BACKFILL (documenting existing code)
## Business Context
{{inferred_from_code}}
## Current State
✅ Implementation EXISTS: {{file}}
{{#if has_tests}}✅ Tests exist{{else}}❌ No tests{{/if}}
## Acceptance Criteria
{{inferred_acs}}
## Tasks
- [x] {{name}} implementation (ALREADY EXISTS)
{{#if !has_tests}}- [ ] Add tests{{/if}}
- [ ] Verify functionality
- [ ] Assign to epic
```
3. **Ask user** (unless auto_create):
- [Y] Create story
- [A] Auto-create all remaining
- [S] Skip this orphan
- [H] Halt
4. **Write story file:** `docs/sprint-artifacts/backfill-{{type}}-{{name}}.md`
5. **Update sprint-status.yaml** (if enabled)
</step>
<step name="suggest_organization" if="backfill_stories_created">
**Recommend epic assignment**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📊 BACKFILL ORGANIZATION
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Options:
[A] Create Epic-Backfill (recommended)
- Single epic for all backfill stories
- Clear separation from feature work
[B] Distribute to existing epics
- Add each to its logical epic
[C] Leave in backlog
- Manual assignment later
```
</step>
<step name="generate_report" if="create_report">
**Write comprehensive ghost features report**
Write to: `docs/sprint-artifacts/ghost-features-report-{{timestamp}}.md`
Include:
- Executive summary
- Full orphan list by severity
- Backfill stories created
- Recommendations
- Scan methodology
</step>
<step name="final_summary">
**Display results and next steps**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ GHOST FEATURE DETECTION COMPLETE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Orphans Found: {{orphan_count}}
Backfill Stories Created: {{backfill_count}}
Documentation Coverage: {{documented_pct}}%
{{#if orphan_count == 0}}
✅ All code is documented in stories!
{{else}}
Next Steps:
1. Review backfill stories for accuracy
2. Assign to epics
3. Add tests/docs for orphans
4. Run revalidation to verify
{{/if}}
💡 Pro Tip: Run this periodically to catch
vibe-coded features before they become
maintenance nightmares.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
</step>
</process>
<failure_handling>
**No stories found:** Check scan_scope, verify sprint-artifacts exists.
**Scan fails:** Report which scan type failed, continue others.
**Backfill creation fails:** Skip, continue to next orphan.
</failure_handling>
<success_criteria>
- [ ] All artifact types scanned
- [ ] Cross-reference completed
- [ ] Orphans categorized by severity
- [ ] Backfill stories created (if enabled)
- [ ] Report generated
</success_criteria>

View File

@ -0,0 +1,279 @@
# Migrate to GitHub v3.0 - Production-Grade Story Migration
<purpose>
Migrate BMAD stories to GitHub Issues with full safety guarantees.
Idempotent, atomic, verified, resumable, and reversible.
</purpose>
<philosophy>
**Reliability First, Data Integrity Over Speed**
- Idempotent: Can re-run safely (checks for duplicates)
- Atomic: Each story fully succeeds or rolls back
- Verified: Reads back each created issue
- Resumable: Saves state after each story
- Reversible: Creates rollback manifest
</philosophy>
<config>
name: migrate-to-github
version: 3.0.0
modes:
dry-run: {description: "Preview only, no changes", default: true}
execute: {description: "Actually create issues"}
verify: {description: "Double-check migration accuracy"}
rollback: {description: "Close migrated issues"}
defaults:
update_existing: false
halt_on_critical_error: true
save_state_after_each: true
max_retries: 3
retry_backoff_ms: [1000, 3000, 10000]
labels:
- "type:story"
- "story:{{story_key}}"
- "status:{{status}}"
- "epic:{{epic_number}}"
- "complexity:{{complexity}}"
</config>
<execution_context>
@patterns/hospital-grade.md
</execution_context>
<process>
<step name="preflight_checks" priority="first">
**Verify all prerequisites before ANY operations**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🛡️ PRE-FLIGHT SAFETY CHECKS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
**1. Verify GitHub MCP access:**
```
Call: mcp__github__get_me()
If fails: HALT - Cannot proceed without GitHub API access
```
**2. Verify repository access:**
```
Call: mcp__github__list_issues(owner, repo, per_page=1)
If fails: HALT - Repository not accessible
```
**3. Verify local files exist:**
```bash
[ -f "docs/sprint-artifacts/sprint-status.yaml" ] || { echo "HALT"; exit 1; }
```
**4. Check for existing migration:**
- If state file exists: offer Resume/Fresh/View/Delete
- If resuming: load already-migrated stories, filter from queue
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ PRE-FLIGHT CHECKS PASSED
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
</step>
<step name="dry_run" if="mode == dry-run">
**Preview migration plan without making changes**
For each story:
1. Search GitHub for existing issue with label `story:{{story_key}}`
2. If exists: mark as "Would UPDATE" or "Would SKIP"
3. If not exists: mark as "Would CREATE"
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📊 DRY-RUN SUMMARY
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Would CREATE: {{create_count}} new issues
Would UPDATE: {{update_count}} existing issues
Would SKIP: {{skip_count}}
Estimated API Calls: ~{{total_calls}}
Rate Limit Impact: Safe (< 1000 calls)
⚠️ This was a DRY-RUN. No issues created.
To execute: /migrate-to-github mode=execute
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
</step>
<step name="execute" if="mode == execute">
**Perform migration with atomic operations**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
⚡ EXECUTE MODE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
**Final confirmation:**
```
Type "I understand and want to proceed" to continue:
```
Initialize migration state and rollback manifest.
For each story:
**1. Check if exists (idempotent):**
```
Search: label:story:{{story_key}}
If exists AND update_existing=false: SKIP
```
**2. Generate issue body:**
```markdown
**Story File:** [{{story_key}}.md](path)
**Epic:** {{epic_number}}
## Business Context
{{parsed.businessContext}}
## Acceptance Criteria
{{#each ac}}
- [ ] {{this}}
{{/each}}
## Tasks
{{#each tasks}}
- [ ] {{this}}
{{/each}}
```
**3. Create/update with retry and verification:**
```
attempt = 0
WHILE attempt < max_retries:
TRY:
result = mcp__github__issue_write(create/update)
sleep 2 seconds # GitHub eventual consistency
verification = mcp__github__issue_read(issue_number)
IF verification.title != expected:
THROW "Verification failed"
SUCCESS - add to rollback manifest
BREAK
CATCH:
attempt++
IF attempt < max_retries:
sleep backoff_ms[attempt]
ELSE:
FAIL - add to issues_failed
```
**4. Save state after each story**
**5. Progress updates every 10 stories:**
```
📊 Progress: {{index}}/{{total}}
Created: {{created}}, Updated: {{updated}}, Failed: {{failed}}
```
</step>
<step name="verify" if="mode == verify">
**Double-check migration accuracy**
For each migrated story:
1. Fetch issue from GitHub
2. Verify title, labels, AC count match
3. Report mismatches
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📊 VERIFICATION RESULTS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Verified Correct: {{verified}}
Warnings: {{warnings}}
Failures: {{failures}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
</step>
<step name="rollback" if="mode == rollback">
**Close migrated issues (GitHub API doesn't support delete)**
Load rollback manifest. For each created issue:
```
mcp__github__issue_write({
issue_number: {{number}},
state: "closed",
labels: ["migrated:rolled-back"],
state_reason: "not_planned"
})
mcp__github__add_issue_comment({
body: "Issue closed - migration was rolled back."
})
```
</step>
<step name="generate_report">
**Create comprehensive migration report**
Write to: `docs/sprint-artifacts/github-migration-{{timestamp}}.md`
Include:
- Executive summary
- Created/updated/failed issues
- GitHub URLs for each issue
- Rollback instructions
- Next steps
</step>
<step name="final_summary">
**Display completion status**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ MIGRATION COMPLETE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Total: {{total}} stories
Created: {{created}}
Updated: {{updated}}
Failed: {{failed}}
Success Rate: {{success_pct}}%
View in GitHub:
https://github.com/{{owner}}/{{repo}}/issues?q=label:type:story
Rollback Manifest: {{rollback_path}}
State File: {{state_path}}
Next Steps:
1. Verify: /migrate-to-github mode=verify
2. Enable GitHub sync in workflow.yaml
3. Share Issues URL with Product Owner
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
</step>
</process>
<failure_handling>
**GitHub MCP unavailable:** HALT - Cannot proceed.
**Repository not accessible:** HALT - Check permissions.
**Issue create fails:** Retry with backoff, then fail story.
**Verification fails:** Log warning, continue.
**All stories fail:** Report systemic issue, HALT.
</failure_handling>
<success_criteria>
- [ ] Pre-flight checks passed
- [ ] All stories processed
- [ ] Issues verified after creation
- [ ] State and rollback manifest saved
- [ ] Report generated
</success_criteria>

View File

@ -0,0 +1,197 @@
# Multi-Agent Code Review v3.0
<purpose>
Perform unbiased code review using multiple specialized AI agents in fresh context.
Agent count scales with story complexity. Independent perspective prevents bias.
</purpose>
<philosophy>
**Fresh Context, Multiple Perspectives**
- Review happens in NEW session (not the agent that wrote the code)
- Prevents bias from implementation decisions
- Agent count determined by complexity, agents chosen by code analysis
- Smart selection: touching auth code → auth-security agent, etc.
</philosophy>
<config>
name: multi-agent-review
version: 3.0.0
agent_selection:
micro: {count: 2, agents: [security, code_quality]}
standard: {count: 4, agents: [security, code_quality, architecture, testing]}
complex: {count: 6, agents: [security, code_quality, architecture, testing, performance, domain_expert]}
available_agents:
security: "Identifies vulnerabilities and security risks"
code_quality: "Reviews style, maintainability, best practices"
architecture: "Reviews system design, patterns, structure"
testing: "Evaluates test coverage and quality"
performance: "Analyzes efficiency and optimization"
domain_expert: "Validates business logic and domain constraints"
</config>
<execution_context>
@patterns/security-checklist.md
@patterns/hospital-grade.md
@patterns/agent-completion.md
</execution_context>
<process>
<step name="determine_agent_count" priority="first">
**Select agents based on complexity**
```
If complexity_level == "micro":
agents = ["security", "code_quality"]
Display: 🔍 MICRO Review (2 agents)
Else if complexity_level == "standard":
agents = ["security", "code_quality", "architecture", "testing"]
Display: 📋 STANDARD Review (4 agents)
Else if complexity_level == "complex":
agents = ALL 6 agents
Display: 🔬 COMPLEX Review (6 agents)
```
</step>
<step name="load_story_context">
**Load story file and understand requirements**
```bash
STORY_FILE="{{story_file}}"
[ -f "$STORY_FILE" ] || { echo "❌ Story file not found"; exit 1; }
```
Use Read tool on story file. Extract:
- What was supposed to be implemented
- Acceptance criteria
- Tasks and subtasks
- File list
</step>
<step name="invoke_review_agents">
**Spawn review agents in fresh context**
For each agent in selected agents, spawn Task agent:
```
Task({
subagent_type: "general-purpose",
description: "{{agent_type}} review for {{story_key}}",
prompt: `
You are the {{AGENT_TYPE}} reviewer for story {{story_key}}.
<execution_context>
@patterns/security-checklist.md
@patterns/hospital-grade.md
</execution_context>
<context>
Story: [inline story content]
Changed files: [git diff output]
</context>
<objective>
Review from your {{agent_type}} perspective. Find issues, be thorough.
</objective>
<success_criteria>
- [ ] All relevant files reviewed
- [ ] Issues categorized by severity (CRITICAL/HIGH/MEDIUM/LOW)
- [ ] Return ## AGENT COMPLETE with findings
</success_criteria>
`
})
```
Wait for all agents to complete. Aggregate findings.
</step>
<step name="aggregate_findings">
**Collect and categorize all findings**
Merge findings from all agents:
- CRITICAL: Security vulnerabilities, data loss risks
- HIGH: Production bugs, logic errors
- MEDIUM: Technical debt, maintainability
- LOW: Nice-to-have improvements
</step>
<step name="present_report">
**Display review summary**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🤖 MULTI-AGENT CODE REVIEW COMPLETE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Agents Used: {{agent_count}}
- Security Agent
- Code Quality Agent
[...]
Findings:
- 🔴 CRITICAL: {{critical_count}}
- 🟠 HIGH: {{high_count}}
- 🟡 MEDIUM: {{medium_count}}
- 🔵 LOW: {{low_count}}
```
For each finding, display:
- Severity and title
- Agent that found it
- Location (file:line)
- Description and recommendation
</step>
<step name="recommend_actions">
**Suggest next steps based on findings**
```
📋 RECOMMENDED NEXT STEPS:
If CRITICAL findings exist:
⚠️ MUST FIX before proceeding
- Address all critical security/correctness issues
- Re-run review after fixes
If only HIGH/MEDIUM findings:
✅ Story may proceed
- Consider addressing high-priority items
- Create follow-up tasks for medium items
If only LOW/INFO findings:
✅ Code quality looks good
- Optional: Address style/optimization suggestions
```
</step>
</process>
<integration>
**When to use:**
- Complex stories (≥16 tasks or high-risk keywords)
- Security-sensitive code
- Significant architectural changes
- When single-agent review was inconclusive
**When NOT to use:**
- Micro stories (≤3 tasks)
- Standard stories with simple changes
- Stories that passed adversarial review cleanly
</integration>
<failure_handling>
**Review agent fails:** Fall back to adversarial code review.
**API error:** Log failure, continue pipeline with warning.
</failure_handling>
<success_criteria>
- [ ] All selected agents completed review
- [ ] Findings aggregated and categorized
- [ ] Report displayed with recommendations
</success_criteria>

View File

@ -0,0 +1,172 @@
# Recover Sprint Status v3.0
<purpose>
Fix sprint-status.yaml when tracking has drifted. Analyzes multiple sources
(story files, git commits, completion reports) to rebuild accurate status.
</purpose>
<philosophy>
**Multiple Evidence Sources, Conservative Updates**
1. Story file quality (size, tasks, checkboxes)
2. Explicit Status: fields in stories
3. Git commits (last 30 days)
4. Autonomous completion reports
5. Task completion rate
Trust explicit Status: fields highest. Require evidence for status changes.
</philosophy>
<config>
name: recover-sprint-status
version: 3.0.0
modes:
dry-run: {description: "Analysis only, no changes", default: true}
conservative: {description: "High confidence updates only"}
aggressive: {description: "Medium+ confidence, infers from git"}
interactive: {description: "Ask before each batch"}
confidence_levels:
very_high: {sources: [explicit_status, completion_report]}
high: {sources: [3+ git_commits, 90% tasks_complete]}
medium: {sources: [1-2 git_commits, 50-90% tasks_complete]}
low: {sources: [no_status, no_commits, small_file]}
</config>
<execution_context>
@patterns/hospital-grade.md
</execution_context>
<process>
<step name="analyze_sources" priority="first">
**Scan all evidence sources**
```bash
# Find story files
SPRINT_ARTIFACTS="docs/sprint-artifacts"
STORIES=$(ls $SPRINT_ARTIFACTS/*.md 2>/dev/null | grep -v "epic-")
# Get recent git commits
git log --oneline --since="30 days ago" > /tmp/recent_commits.txt
```
For each story:
1. Read story file, extract Status: field if present
2. Check file size (≥10KB = properly detailed)
3. Count tasks and checkbox completion
4. Search git commits for story references
5. Check for completion reports (.epic-*-completion-report.md)
</step>
<step name="calculate_confidence">
**Determine confidence level for each story**
| Evidence | Confidence | Action |
|----------|------------|--------|
| Explicit Status: done | Very High | Trust it |
| Completion report lists story | Very High | Mark done |
| 3+ git commits + 90% checked | High | Mark done |
| 1-2 commits OR 50-90% checked | Medium | Mark in-progress |
| No commits, <50% checked | Low | Leave as-is |
| File <10KB | Low | Downgrade if done |
</step>
<step name="preview_changes" if="mode == dry-run">
**Show recommendations without applying**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔍 RECOVERY ANALYSIS (Dry Run)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
High Confidence Updates:
- 2-5-auth: backlog → done (explicit Status:, 3 commits)
- 2-6-profile: in-progress → done (completion report)
Medium Confidence Updates:
- 2-7-settings: backlog → in-progress (2 commits)
Low Confidence (verify manually):
- 2-8-dashboard: no Status:, no commits, <10KB file
```
Exit after preview. No changes made.
</step>
<step name="apply_conservative" if="mode == conservative">
**Apply only high/very-high confidence updates**
For each high+ confidence story:
1. Backup current sprint-status.yaml
2. Use Edit tool to update status
3. Log change
```bash
# Backup
cp $SPRINT_STATUS .sprint-status-backups/sprint-status-recovery-$(date +%Y%m%d).yaml
```
Skip medium/low confidence stories.
</step>
<step name="apply_aggressive" if="mode == aggressive">
**Apply medium+ confidence updates**
Includes:
- Inferring from git commits (even 1 commit)
- Using task completion rate
- Pre-filling brownfield checkboxes
```
⚠️ AGGRESSIVE mode may make incorrect inferences.
Review results carefully.
```
</step>
<step name="validate_results">
**Verify recovery worked**
```bash
./scripts/sync-sprint-status.sh --validate
```
Should show:
- "✓ sprint-status.yaml is up to date!" (success)
- OR discrepancy count (if issues remain)
</step>
<step name="commit_changes" if="changes_made">
**Commit the recovery**
Use Bash to commit:
```bash
git add docs/sprint-artifacts/sprint-status.yaml
git add .sprint-status-backups/
git commit -m "fix(tracking): Recover sprint-status.yaml - {{mode}} recovery"
```
</step>
</process>
<failure_handling>
**No changes detected:** sprint-status.yaml already accurate.
**Low confidence on known-done stories:** Add Status: field manually, re-run.
**Recovery marks incomplete as done:** Use conservative mode, verify manually.
</failure_handling>
<post_recovery_checklist>
- [ ] Run validation: `./scripts/sync-sprint-status.sh --validate`
- [ ] Review backup in `.sprint-status-backups/`
- [ ] Spot-check 5-10 stories for accuracy
- [ ] Commit changes
- [ ] Document why drift occurred
</post_recovery_checklist>
<success_criteria>
- [ ] All evidence sources analyzed
- [ ] Changes applied based on confidence threshold
- [ ] Validation passes
- [ ] Backup created
</success_criteria>

View File

@ -0,0 +1,189 @@
# Revalidate Epic v3.0 - Batch Story Revalidation
<purpose>
Batch revalidate all stories in an epic using parallel agents (semaphore pattern).
Clears checkboxes, verifies against codebase, re-checks verified items.
</purpose>
<philosophy>
**Parallel Verification, Continuous Worker Pool**
- Spawn up to N workers, refill as each completes
- Each story gets fresh context verification
- Aggregate results into epic-level health score
- Optionally fill gaps found during verification
</philosophy>
<config>
name: revalidate-epic
version: 3.0.0
defaults:
max_concurrent: 3
fill_gaps: false
continue_on_failure: true
create_epic_report: true
update_sprint_status: true
</config>
<execution_context>
@patterns/verification.md
@patterns/hospital-grade.md
@revalidate-story/workflow.md
</execution_context>
<process>
<step name="load_epic_stories" priority="first">
**Find all stories for the epic**
```bash
EPIC_NUMBER="{{epic_number}}"
[ -n "$EPIC_NUMBER" ] || { echo "ERROR: epic_number required"; exit 1; }
# Filter stories from sprint-status.yaml (drop retrospective entries)
grep "^${EPIC_NUMBER}-" docs/sprint-artifacts/sprint-status.yaml | grep -v "retrospective"
```
Use Read tool on sprint-status.yaml. Filter stories starting with `{{epic_number}}-`.
Exclude epics and retrospectives.
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔍 EPIC {{epic_number}} REVALIDATION
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Stories Found: {{count}}
Mode: {{fill_gaps ? "Verify & Fill Gaps" : "Verify Only"}}
Max Concurrent: {{max_concurrent}} agents
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
Use AskUserQuestion: Proceed with revalidation? (yes/no)
</step>
<step name="spawn_worker_pool">
**Initialize semaphore pattern for parallel revalidation**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🚀 Starting Parallel Revalidation
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
Initialize state:
- story_queue = epic_stories
- active_workers = {}
- completed_stories = []
- failed_stories = []
Fill initial worker slots (up to max_concurrent):
```
Task({
subagent_type: "general-purpose",
description: "Revalidate story {{story_key}}",
prompt: `
Execute revalidate-story workflow for {{story_key}}.
<execution_context>
@revalidate-story/workflow.md
</execution_context>
Parameters:
- story_file: {{story_file}}
- fill_gaps: {{fill_gaps}}
Return verification summary with verified_pct, gaps_found, gaps_filled.
`,
run_in_background: true
})
```
</step>
<step name="maintain_worker_pool">
**Keep workers running until all stories done**
While active_workers > 0 OR stories remaining in queue:
1. Poll for completed workers (non-blocking with TaskOutput)
2. When worker completes:
- Parse verification results
- Add to completed_stories
- If more stories in queue: spawn new worker in that slot
3. Display progress every 30 seconds:
```
📊 Progress: {{completed}} completed, {{active}} active, {{queued}} queued
```
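A conceptual shell analogue of the refill loop, as a sketch (the real mechanism is the Task spawn above with TaskOutput polling; `revalidate_story` and `STORIES` are stand-ins):
```bash
MAX_CONCURRENT=3
for story in "${STORIES[@]}"; do
  # Block while the pool is full; a slot refills as soon as a job exits
  while [ "$(jobs -rp | wc -l)" -ge "$MAX_CONCURRENT" ]; do sleep 5; done
  revalidate_story "$story" &
done
wait  # drain remaining workers
```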
</step>
<step name="aggregate_results">
**Generate epic-level summary**
Calculate totals across all stories:
- epic_verified = sum of verified items
- epic_partial = sum of partial items
- epic_missing = sum of missing items
- epic_verified_pct = (verified / total) × 100
Group stories by health:
- Complete (≥95% verified)
- Mostly Complete (80-94%)
- Partial (50-79%)
- Incomplete (<50%)
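The thresholds above reduce to a small helper; a sketch (`health_bucket` is an illustrative name):
```bash
# Map a story's verified percentage to the health buckets above
health_bucket() {
  local pct="$1"
  if   [ "$pct" -ge 95 ]; then echo "Complete"
  elif [ "$pct" -ge 80 ]; then echo "Mostly Complete"
  elif [ "$pct" -ge 50 ]; then echo "Partial"
  else                         echo "Incomplete"
  fi
}
```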
</step>
<step name="display_summary">
**Show epic revalidation results**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📊 EPIC {{epic_number}} REVALIDATION SUMMARY
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Total Stories: {{count}}
Completed: {{completed_count}}
Failed: {{failed_count}}
Epic-Wide Verification:
- ✅ Verified: {{verified}}/{{total}} ({{pct}}%)
- 🔶 Partial: {{partial}}/{{total}}
- ❌ Missing: {{missing}}/{{total}}
Epic Health Score: {{epic_verified_pct}}/100
{{#if pct >= 95}}
✅ Epic is COMPLETE and verified
{{else if pct >= 80}}
🔶 Epic is MOSTLY COMPLETE
{{else if pct >= 50}}
⚠️ Epic is PARTIALLY COMPLETE
{{else}}
❌ Epic is INCOMPLETE (major rework needed)
{{/if}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
</step>
<step name="update_tracking" if="update_sprint_status">
**Update sprint-status with revalidation results**
Use Edit tool to add comment to epic entry:
```
epic-{{epic_number}}: done # Revalidated: {{pct}}% verified ({{timestamp}})
```
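If the update is scripted rather than applied via the Edit tool, a GNU sed sketch works (assumes the epic key starts its line and appears exactly once; `PCT` is assumed to hold the epic-wide verified percentage):
```bash
sed -i "s|^epic-${EPIC_NUMBER}:.*|epic-${EPIC_NUMBER}: done # Revalidated: ${PCT}% verified ($(date +%F))|" \
  docs/sprint-artifacts/sprint-status.yaml
```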
</step>
</process>
<failure_handling>
**Worker fails:** Log error, refill slot if continue_on_failure=true.
**All stories fail:** Report systemic issue, halt batch.
**Story file missing:** Skip with warning.
</failure_handling>
<success_criteria>
- [ ] All epic stories processed
- [ ] Results aggregated
- [ ] Epic health score calculated
- [ ] Sprint status updated (if enabled)
</success_criteria>

View File

@ -0,0 +1,225 @@
# Revalidate Story v3.0 - Verify Checkboxes Against Codebase
<purpose>
Clear all checkboxes and re-verify each item against actual codebase reality.
Detects over-reported completion and identifies real gaps.
Optionally fills gaps by implementing missing items.
</purpose>
<philosophy>
**Trust But Verify, Evidence Required**
1. Clear all checkboxes (fresh start)
2. For each AC/Task/DoD: search codebase for evidence
3. Only re-check if evidence found AND not a stub
4. Report accuracy: was completion over-reported or under-reported?
</philosophy>
<config>
name: revalidate-story
version: 3.0.0
defaults:
fill_gaps: false
max_gaps_to_fill: 10
commit_strategy: "all_at_once" # or "per_gap"
create_report: true
update_sprint_status: true
verification_status:
verified: {checkbox: "[x]", evidence: "found, not stub, tests exist"}
partial: {checkbox: "[~]", evidence: "partial implementation or missing tests"}
missing: {checkbox: "[ ]", evidence: "not found in codebase"}
</config>
<execution_context>
@patterns/verification.md
@patterns/hospital-grade.md
</execution_context>
<process>
<step name="load_and_backup" priority="first">
**Load story and backup current state**
```bash
STORY_FILE="{{story_file}}"
[ -f "$STORY_FILE" ] || { echo "ERROR: story file not found: $STORY_FILE"; exit 1; }
```
Use Read tool on story file. Count current checkboxes:
- ac_checked_before
- tasks_checked_before
- dod_checked_before
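A shell equivalent for the counts, plus the backup this step promises (backup suffix is illustrative; these counts are file-wide rather than per-section):
```bash
cp "$STORY_FILE" "${STORY_FILE}.pre-revalidation.bak"  # backup before clearing
CHECKED_BEFORE=$(grep -c '\[x\]' "$STORY_FILE" || true)
TOTAL=$(grep -c '\[[x ]\]' "$STORY_FILE" || true)
```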
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔍 STORY REVALIDATION STARTED
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Story: {{story_key}}
Mode: {{fill_gaps ? "Verify & Fill Gaps" : "Verify Only"}}
Current State:
- Acceptance Criteria: {{ac_checked}}/{{ac_total}} checked
- Tasks: {{tasks_checked}}/{{tasks_total}} checked
- DoD: {{dod_checked}}/{{dod_total}} checked
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
</step>
<step name="clear_checkboxes">
**Clear all checkboxes for fresh verification**
Use Edit tool (replace_all: true):
- `[x]``[ ]` in Acceptance Criteria section
- `[x]``[ ]` in Tasks section
- `[x]``[ ]` in Definition of Done section
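If scripted instead, the same reset via GNU sed (note this clears every checkbox in the file, a simplification of the section-scoped Edit calls):
```bash
sed -i 's/\[x\]/[ ]/g' "$STORY_FILE"
```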
</step>
<step name="verify_acceptance_criteria">
**Verify each AC against codebase**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📋 VERIFYING ACCEPTANCE CRITERIA
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
For each AC item:
1. **Parse AC** - Extract file/component/feature mentions
2. **Search codebase** - Use Glob/Grep to find evidence
3. **Verify implementation** - Read files, check for:
- NOT a stub (no "TODO", "Not implemented", empty function)
- Has actual implementation
- Tests exist (*.test.* or *.spec.*)
4. **Determine status:**
- VERIFIED: Evidence found, not stub, tests exist → check [x]
- PARTIAL: Partial evidence or missing tests → check [~]
- MISSING: No evidence found → leave [ ]
5. **Record evidence or gap**
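Step 3's stub and test checks as hedged shell predicates (the stub markers and test-file globs are assumptions, not project conventions):
```bash
# Heuristic: does the file contain common stub markers?
is_stub() {
  grep -Eq 'TODO|FIXME|Not implemented|NotImplementedError' "$1"
}

# Heuristic: does a sibling test file exist for this source file?
has_tests() {
  local base="${1%.*}"
  compgen -G "${base}.test.*" > /dev/null || compgen -G "${base}.spec.*" > /dev/null
}
```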
</step>
<step name="verify_tasks">
**Verify each task against codebase**
Same process as ACs:
- Parse task description for artifacts
- Search codebase with Glob/Grep
- Read and verify (check for stubs, tests)
- Update checkbox based on evidence
</step>
<step name="verify_definition_of_done">
**Verify DoD items**
For common DoD items, run actual checks:
- "Type check passes" → `npm run type-check`
- "Unit tests pass" → `npm test`
- "Linting clean" → `npm run lint`
- "Build succeeds" → `npm run build`
</step>
<step name="generate_report">
**Calculate and display results**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📊 REVALIDATION SUMMARY
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Story: {{story_key}}
Verification Results:
- ✅ Verified: {{verified}}/{{total}} ({{pct}}%)
- 🔶 Partial: {{partial}}/{{total}}
- ❌ Missing: {{missing}}/{{total}}
Accuracy Check:
- Before: {{pct_before}}% checked
- After: {{verified_pct}}% verified
- {{pct_before > verified_pct ? "Over-reported" : pct_before < verified_pct ? "Under-reported" : "Accurate"}}
{{#if missing > 0}}
Gaps Found ({{missing}}):
[list gaps with what's missing]
{{/if}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
</step>
<step name="fill_gaps" if="fill_gaps AND gaps_found">
**Implement missing items**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔧 GAP FILLING MODE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
Safety check:
```bash
if [ "$GAPS_FOUND" -gt "$MAX_GAPS_TO_FILL" ]; then
  echo "⚠️ TOO MANY GAPS ($GAPS_FOUND > $MAX_GAPS_TO_FILL)"
  echo "Consider re-implementing with /dev-story"
  exit 1  # HALT
fi
```
For each gap:
1. Load story context
2. Implement missing item
3. Write tests
4. Run tests to verify
5. Check box [x] if successful
6. Commit if commit_strategy == "per_gap"
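For the per-gap strategy, each iteration might end with a commit like this sketch (the message shape is illustrative and `{{gap_summary}}` is a hypothetical placeholder):
```bash
git add -A
git commit -m "fix({{story_key}}): fill gap - {{gap_summary}}"
```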
</step>
<step name="finalize">
**Re-verify and commit**
If gaps were filled:
1. Re-run verification on filled gaps
2. Commit all changes (if commit_strategy == "all_at_once")
Update sprint-status.yaml with revalidation result:
```
{{story_key}}: {{status}} # Revalidated: {{pct}}% ({{timestamp}})
```
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ REVALIDATION COMPLETE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Final: {{verified}}/{{total}} verified ({{pct}}%)
Recommendation:
{{#if pct >= 95}}
✅ Story is COMPLETE - mark as "done"
{{else if pct >= 80}}
🔶 Mostly complete - finish remaining items
{{else if pct >= 50}}
⚠️ Significant gaps - continue with /dev-story
{{else}}
❌ Mostly incomplete - consider re-implementing
{{/if}}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
</step>
</process>
<failure_handling>
**File not found:** HALT with error.
**Verification fails:** Record gap, continue to next item.
**Gap fill fails:** Leave unchecked, record failure.
**Too many gaps:** HALT, recommend re-implementation.
</failure_handling>
<success_criteria>
- [ ] All items verified against codebase
- [ ] Checkboxes reflect actual implementation
- [ ] Accuracy comparison displayed
- [ ] Gaps filled (if enabled)
- [ ] Sprint status updated
</success_criteria>