# Batch Super-Dev - Interactive Story Selector
## AKA: "Mend the Gap"
**Primary Use Case:** Gap analysis and reconciliation workflow
This workflow helps you "mind the gap" between story requirements and codebase reality, then "mend the gap" by building only what's truly missing.
### What This Workflow Does
1. **Scans codebase** to verify what's actually implemented vs what stories claim
2. **Finds the gap** between story requirements and reality
3. **Mends the gap** by building ONLY what's truly missing (no duplicate work)
4. **Updates tracking** to reflect actual completion status (check boxes, sprint-status)
### Common Use Cases
**Reconciliation Mode (Most Common):**
- Work was done but not properly tracked
- Stories say "build X" but X is 60-80% already done
- Need second set of eyes to find real gaps
- Update story checkboxes to match reality
**Greenfield Mode:**
- Story says "build X", nothing exists
- Build 100% from scratch with full quality gates
**Brownfield Mode:**
- Story says "modify X", X exists
- Refactor carefully, add only new requirements
### Execution Modes
**Sequential (Recommended for Gap Analysis):**
- Process stories ONE-BY-ONE in THIS SESSION
- After each story: verify existing code → build only gaps → check boxes → move to next
- Easier to monitor, can intervene if issues found
- Best for reconciliation work
**Parallel (For Greenfield Batch Implementation):**
- Spawn autonomous Task agents to process stories concurrently
- Faster completion but harder to monitor
- Best when stories are independent and greenfield
The workflow execution engine is governed by: {project-root}/_bmad/core/tasks/workflow.xml
You MUST have already loaded and processed: {project-root}/_bmad/bmm/workflows/4-implementation/batch-super-dev/workflow.yaml
Read {sprint_status} file
Parse metadata: project, project_key, tracking_system
Parse development_status map
Filter stories with status = "ready-for-dev"
Exclude entries that are epics (keys starting with "epic-") or retrospectives (keys ending with "-retrospective")
Further filter stories to only include those starting with "{filter_by_epic}-"
If filter_by_epic = "3", only include stories like "3-1-...", "3-2-...", etc.
Sort filtered stories by epic number, then story number (e.g., 1-1, 1-2, 2-1, 3-1)
Store as: ready_for_dev_stories (list of story keys)
If ready_for_dev_stories is empty: Exit workflow (nothing to process)
Read comment field for each story from sprint-status.yaml (text after # on the same line)
For each story, verify story file exists using multiple naming patterns:
Try in order: 1) {sprint_artifacts}/{story_key}.md, 2) {sprint_artifacts}/story-{story_key}.md, 3) {sprint_artifacts}/{story_key_with_dots}.md
Mark stories as: ✅ (file exists), ❌ (file missing), 🔍 (already implemented but not marked done)
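The fallback resolution order might look like this in Python. The dotted-key variant is an assumed reading of `{story_key_with_dots}`, and the `exists` predicate is injectable purely to make the sketch testable:

```python
from pathlib import Path

def resolve_story_file(sprint_artifacts, story_key, exists=Path.is_file):
    """Return the first candidate story-file path that exists, or None.
    The dotted variant ("3.1.signup.md") is an assumed interpretation of
    {story_key_with_dots}; adjust to your actual naming convention."""
    candidates = [
        Path(sprint_artifacts) / f"{story_key}.md",                   # 3-1-signup.md
        Path(sprint_artifacts) / f"story-{story_key}.md",             # story-3-1-signup.md
        Path(sprint_artifacts) / f"{story_key.replace('-', '.')}.md", # 3.1.signup.md (assumed)
    ]
    for path in candidates:
        if exists(path):
            return path
    return None
```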
For each story in ready_for_dev_stories:
Check if story file exists (already done in Step 2)
If the story file is missing, ask: "Create story file with gap analysis? (yes/no)"
If yes:
Mark story for removal from selection
Add to skipped_stories list with reason: "Story creation requires manual workflow (agents cannot invoke /create-story)"
Add to manual_actions_required list: "Regenerate {{story_key}} with /create-story-with-gap-analysis"
If no:
Mark story for removal from selection
Add to skipped_stories list with reason: "User declined story creation"
Read story file: {{file_path}}
Parse sections and validate BMAD format
Check for all 12 required sections:
1. Business Context
2. Current State
3. Acceptance Criteria
4. Tasks and Subtasks
5. Technical Requirements
6. Architecture Compliance
7. Testing Requirements
8. Dev Agent Guardrails
9. Definition of Done
10. References
11. Dev Agent Record
12. Change Log
Count sections present: sections_found
Check Current State content length (word count)
Check Acceptance Criteria item count: ac_count
Count unchecked tasks ([ ]) in Tasks/Subtasks: task_count
Look for gap analysis markers (✅/❌) in Current State
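A rough sketch of the validation pass, assuming standard markdown `##` headings and `- [ ]` checkboxes (the real workflow may parse more carefully):

```python
import re

REQUIRED_SECTIONS = [
    "Business Context", "Current State", "Acceptance Criteria",
    "Tasks and Subtasks", "Technical Requirements", "Architecture Compliance",
    "Testing Requirements", "Dev Agent Guardrails", "Definition of Done",
    "References", "Dev Agent Record", "Change Log",
]

def validate_story(markdown: str) -> dict:
    """Collect the validation signals the workflow checks for one story file."""
    headings = {h.strip().lower() for h in re.findall(r"^#{2,4}\s+(.*)$", markdown, re.MULTILINE)}
    missing = [s for s in REQUIRED_SECTIONS if s.lower() not in headings]
    task_count = len(re.findall(r"^\s*[-*]\s*\[ \]", markdown, re.MULTILINE))
    return {
        "sections_found": len(REQUIRED_SECTIONS) - len(missing),
        "missing_sections": missing,
        "task_count": task_count,
        "valid": not missing and task_count >= 3,  # minimum-task rule from this step
    }
```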
Mark story for removal from selection
Add to skipped_stories list with reason: "INVALID - Only {{task_count}} tasks (need ≥3)"
Ask: "Regenerate story with codebase scan? (yes/no)"
If yes:
Mark story for removal from selection
Add to skipped_stories list with reason: "Story regeneration requires manual workflow (agents cannot invoke /create-story)"
Add to manual_actions_required list: "Regenerate {{story_key}} with /create-story-with-gap-analysis"
If no:
Mark story for removal from selection
Add to skipped_stories list with reason: "User declined regeneration"
If validation passes, mark story as validated (no action needed if it was already validated)
Remove skipped stories from ready_for_dev_stories
Update count of available stories
If no stories remain: Exit workflow
For each validated story:
Read story file: {{file_path}}
Count unchecked tasks ([ ]) at top level only in Tasks/Subtasks section → task_count
(See workflow.yaml complexity.task_counting.method = "top_level_only")
Set {{story_key}}.complexity = {level: "INVALID", score: 0, task_count: {{task_count}}, reason: "Insufficient tasks ({{task_count}}/3 minimum)"}
Continue to next story
Extract file paths mentioned in tasks → file_count
Scan story title and task descriptions for risk keywords using rules from workflow.yaml:
- Case insensitive matching (require_word_boundaries: true)
- Include keyword variants (e.g., "authentication" matches "auth")
- Scan: story_title, task_descriptions, subtask_descriptions
Calculate complexity score:
- Base score = task_count
- Add 5 for each HIGH risk keyword match (auth, security, payment, migration, database, schema, encryption)
- Add 2 for each MEDIUM risk keyword match (api, integration, external, third-party, cache)
- Add 0 for LOW risk keywords (ui, style, config, docs, test)
- Count each keyword only once (no duplicates)
Assign complexity level using mutually exclusive decision tree (priority order):
1. Check COMPLEX first (highest priority):
IF (task_count ≥ 16 OR complexity_score ≥ 20 OR has ANY HIGH risk keyword)
THEN level = COMPLEX
2. Else check MICRO (lowest complexity):
ELSE IF (task_count ≤ 3 AND complexity_score ≤ 5 AND file_count ≤ 5)
THEN level = MICRO
3. Else default to STANDARD:
ELSE level = STANDARD
This ensures no overlaps:
- Story with HIGH keyword → COMPLEX (never MICRO or STANDARD)
- Story with 4-15 tasks or >5 files → STANDARD (not MICRO or COMPLEX)
- Story with ≤3 tasks, ≤5 files, no HIGH keywords → MICRO
Store complexity_level for story: {{story_key}}.complexity = {level, score, task_count, risk_keywords}
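The scoring rules and decision tree combine into a small classifier. Prefix matching for keyword variants (so "authentication" matches "auth") is one possible reading of the variant rule; the real matching rules live in workflow.yaml:

```python
import re

HIGH = {"auth", "security", "payment", "migration", "database", "schema", "encryption"}
MEDIUM = {"api", "integration", "external", "third-party", "cache"}

def classify(story_text: str, task_count: int, file_count: int) -> dict:
    """Score a story and assign a mutually exclusive complexity level."""
    words = set(re.findall(r"[a-z0-9-]+", story_text.lower()))
    # Assumption: variants are handled via prefix matching ("authentication" -> "auth").
    high_hits = {k for k in HIGH if any(w.startswith(k) for w in words)}
    med_hits = {k for k in MEDIUM if any(w.startswith(k) for w in words)}
    score = task_count + 5 * len(high_hits) + 2 * len(med_hits)  # each keyword counted once

    # Priority order guarantees the levels never overlap.
    if task_count >= 16 or score >= 20 or high_hits:
        level = "COMPLEX"
    elif task_count <= 3 and score <= 5 and file_count <= 5:
        level = "MICRO"
    else:
        level = "STANDARD"
    return {"level": level, "score": score, "risk_keywords": sorted(high_hits | med_hits)}
```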
Group stories by complexity level
Filter out INVALID stories (those with level="INVALID"):
For each INVALID story, add to skipped_stories with reason from complexity object
Remove INVALID stories from complexity_groups and ready_for_dev_stories
If no valid stories remain: Exit workflow
**Select stories to process:**
Enter story numbers to process (examples):
- Single: `1`
- Multiple: `1,3,5`
- Range: `1-5` (processes 1,2,3,4,5)
- Mixed: `1,3-5,8` (processes 1,3,4,5,8)
- All: `all` (processes all {{count}} stories)
Or:
- `cancel` - Exit without processing
**Your selection:**
Parse user input
If selection is `cancel`: Exit workflow
If selection is `all`: Set selected_stories = all ready_for_dev_stories
Parse selection (handle commas, ranges)
Input "1,3-5,8" → indexes [1,3,4,5,8] → map to story keys
Map selected indexes to story keys from ready_for_dev_stories
Store as: selected_stories
Truncate selected_stories to first max_stories entries
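One way to implement the selection parser: expand commas and ranges into 1-based indexes, de-duplicate, and (as an assumption) silently drop out-of-range entries:

```python
def parse_selection(text: str, count: int) -> list[int]:
    """Expand '1,3-5,8' into sorted, de-duplicated 1-based indexes.
    Indexes outside 1..count are dropped (assumed behavior)."""
    indexes: set[int] = set()
    for part in text.replace(" ", "").split(","):
        if "-" in part:
            lo, hi = part.split("-", 1)
            indexes.update(range(int(lo), int(hi) + 1))  # range "3-5" -> 3,4,5
        elif part:
            indexes.add(int(part))
    return sorted(i for i in indexes if 1 <= i <= count)
```

The caller then maps each index to `ready_for_dev_stories[i - 1]` to build `selected_stories`.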
Display confirmation
**How should these stories be processed?**
Options:
- **sequential**: Run stories one-by-one in this session (slower, easier to monitor)
- **parallel**: Spawn Task agents to process stories concurrently (faster, autonomous)
Enter: sequential or parallel
Capture response as: execution_mode
If execution_mode is sequential:
Set parallel_count = 1
Set use_task_agents = false
If execution_mode is parallel:
Set use_task_agents = true
**How many agents should run in parallel?**
Options:
- **2**: Conservative (low resource usage, easier debugging)
- **4**: Moderate (balanced performance, recommended)
- **8**: Aggressive (higher throughput)
- **10**: Maximum (10 agent limit for safety)
- **all**: One agent per story (capped at the 10-agent limit)
Enter number (2-10) or 'all':
Capture response as: parallel_count
If parallel_count == 'all': set parallel_count = min(count of selected_stories, 10)
If parallel_count > 10: set parallel_count = 10 (safety limit)
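The two normalization rules reduce to a few lines:

```python
def normalize_parallel_count(raw: str, selected_count: int, cap: int = 10) -> int:
    """'all' means one worker per selected story; everything is bounded by the safety cap."""
    if raw == "all":
        return min(selected_count, cap)
    return min(int(raw), cap)
```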
Confirm execution plan? (yes/no):
If no: Exit workflow
Initialize counters: completed=0, failed=0, failed_stories=[], reconciliation_warnings=[], reconciliation_warnings_count=0
Set start_time = current timestamp
If execution_mode is parallel: Jump to Step 4-Parallel (Task Agent execution)
If execution_mode is sequential: Continue to Step 4-Sequential (In-session execution)
For each story in selected_stories:
Invoke workflow: /bmad:bmm:workflows:super-dev-pipeline
Parameters: mode=batch, story_key={{story_key}}, complexity_level={{story_key}}.complexity.level
Execute Step 4.5: Smart Story Reconciliation
Load reconciliation instructions: {installed_path}/step-4.5-reconcile-story-status.md
Execute reconciliation with story_key={{story_key}}
If reconciliation succeeds:
Increment completed counter
If reconciliation fails:
Increment completed counter (implementation was successful)
Add to reconciliation_warnings: {story_key: {{story_key}}, warning_message: "Reconciliation failed - manual verification needed"}
Increment reconciliation_warnings_count
If implementation fails:
Increment failed counter
Add story_key to failed_stories list
Jump to Step 5 (Summary)
Wait {{pause_between_stories}} seconds
After all stories processed, jump to Step 5 (Summary)
Initialize worker pool state:
- story_queue = selected_stories (all stories to process)
- active_workers = {} (map of worker_id → {story_key, task_id, started_at})
- completed_stories = []
- failed_stories = []
- next_story_index = 0
- max_workers = {{parallel_count}}
Spawn first {{max_workers}} agents (or fewer if there are fewer stories):
While next_story_index < min(max_workers, story_queue.length):
story_key = story_queue[next_story_index]
worker_id = next_story_index + 1
Spawn Task agent:
- subagent_type: "general-purpose"
- description: "Implement story {{story_key}}"
- prompt: "Execute super-dev-pipeline workflow for story {{story_key}}.
CRITICAL INSTRUCTIONS:
1. Load workflow.xml: _bmad/core/tasks/workflow.xml
2. Load workflow config: _bmad/bmm/workflows/4-implementation/super-dev-pipeline/workflow.yaml
3. Execute in BATCH mode with story_key={{story_key}} and complexity_level={{story_key}}.complexity.level
4. Follow all 7 pipeline steps (init, pre-gap, implement, post-validate, code-review, complete, summary)
5. Commit changes when complete
6. Report final status (done/failed) with file list
Story file will be auto-resolved from multiple naming conventions."
- run_in_background: true (non-blocking - critical for semaphore pattern)
Store in active_workers[worker_id]:
story_key: {{story_key}}
task_id: {{returned_task_id}}
started_at: {{timestamp}}
status: "running"
Increment next_story_index
After spawning initial workers:
SEMAPHORE PATTERN: Keep {{max_workers}} agents running continuously
While active_workers.size > 0 OR next_story_index < story_queue.length:
Poll for completed workers (check task outputs non-blocking):
For each worker_id in active_workers:
Check if worker task completed using TaskOutput(task_id, block=false)
If still running: continue to next worker (don't wait)
If worker completed successfully:
Get worker details: story_key = active_workers[worker_id].story_key
Execute Step 4.5: Smart Story Reconciliation
Load reconciliation instructions: {installed_path}/step-4.5-reconcile-story-status.md
Execute reconciliation with story_key={{story_key}}
If reconciliation succeeds: Add to completed_stories
If reconciliation fails:
Add to completed_stories (implementation successful)
Add to reconciliation_warnings: {story_key: {{story_key}}, warning_message: "Reconciliation failed - manual verification needed"}
Remove worker_id from active_workers (free the slot)
IMMEDIATELY refill slot if stories remain:
story_key = story_queue[next_story_index]
Spawn new Task agent for this worker_id (same parameters as init)
Update active_workers[worker_id] with new task_id and story_key
Increment next_story_index
If worker failed:
Get worker details: story_key = active_workers[worker_id].story_key
Add to failed_stories
Remove worker_id from active_workers (free the slot)
If the batch is aborted (fatal error or user stop):
Kill all active workers
Clear story_queue
Break worker pool loop
If stories remain, refill the freed slot:
story_key = story_queue[next_story_index]
Spawn new Task agent for this worker_id
Update active_workers[worker_id] with new task_id and story_key
Increment next_story_index
Display live progress every 30 seconds:
Sleep 5 seconds before next poll (prevents tight loop)
After worker pool drains (all stories processed), jump to Step 5 (Summary)
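The semaphore pattern above can be sketched with a thread pool standing in for Task agents. `run_story` is a stand-in for invoking the super-dev pipeline; this sketch only demonstrates the fill / non-blocking-poll / refill loop, not the real agent mechanics:

```python
import time
from concurrent.futures import Future, ThreadPoolExecutor

def run_worker_pool(story_queue, run_story, max_workers=4, poll_interval=0.05):
    """Keep max_workers stories in flight; refill a slot as soon as its
    worker finishes, until the queue drains."""
    completed, failed = [], []
    active: dict[str, Future] = {}
    next_i = 0
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        while active or next_i < len(story_queue):
            # Fill free slots immediately (the "semaphore" refill step).
            while len(active) < max_workers and next_i < len(story_queue):
                key = story_queue[next_i]
                active[key] = pool.submit(run_story, key)
                next_i += 1
            # Non-blocking poll of in-flight workers.
            for key, fut in list(active.items()):
                if fut.done():
                    ok = fut.exception() is None and fut.result() == "done"
                    (completed if ok else failed).append(key)
                    del active[key]  # free the slot
            time.sleep(poll_interval)  # avoid a tight loop
    return completed, failed

# Stub runner for demonstration (real workers would run the pipeline and commit):
def stub_runner(story_key):
    return "failed" if story_key.endswith("-x") else "done"
```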
Calculate end_time and total_duration
Calculate success_rate = (completed / total_count) * 100
Save batch log to {batch_log}
Log contents: start_time, end_time, total_duration, selected_stories, completed_stories, failed_stories, success_rate
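A minimal sketch of the summary step, assuming a JSON log format (the real {batch_log} schema may differ):

```python
import json
import time

def write_batch_log(path, start_time, completed_stories, failed_stories, selected_stories):
    """Assemble the Step 5 summary and persist it as JSON (format assumed)."""
    end_time = time.time()
    total = len(selected_stories)
    summary = {
        "start_time": start_time,
        "end_time": end_time,
        "total_duration_s": round(end_time - start_time, 1),
        "selected_stories": selected_stories,
        "completed_stories": completed_stories,
        "failed_stories": failed_stories,
        "success_rate": round(100 * len(completed_stories) / total, 1) if total else 0.0,
    }
    with open(path, "w") as fh:
        json.dump(summary, fh, indent=2)
    return summary
```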