# Batch Super-Dev - Interactive Story Selector
## AKA: "Mend the Gap" π
**Primary Use Case:** Gap analysis and reconciliation workflow
This workflow helps you "mind the gap" between story requirements and codebase reality, then "mend the gap" by building only what's truly missing.
### What This Workflow Does
1. **Scans codebase** to verify what's actually implemented vs what stories claim
2. **Finds the gap** between story requirements and reality
3. **Mends the gap** by building ONLY what's truly missing (no duplicate work)
4. **Updates tracking** to reflect actual completion status (check boxes, sprint-status)
### Common Use Cases
**Reconciliation Mode (Most Common):**
- Work was done but not properly tracked
- Stories say "build X" but X is 60-80% already done
- Need second set of eyes to find real gaps
- Update story checkboxes to match reality
**Greenfield Mode:**
- Story says "build X", nothing exists
- Build 100% from scratch with full quality gates
**Brownfield Mode:**
- Story says "modify X", X exists
- Refactor carefully, add only new requirements
### Execution Modes
**Sequential (Recommended for Gap Analysis):**
- Process stories ONE-BY-ONE in THIS SESSION
- After each story: verify existing code → build only gaps → check boxes → move to next
- Easier to monitor, can intervene if issues found
- Best for reconciliation work
**Parallel (For Greenfield Batch Implementation):**
- Spawn autonomous Task agents to process stories concurrently
- Faster completion but harder to monitor
- Best when stories are independent and greenfield
The workflow execution engine is governed by: {project-root}/_bmad/core/tasks/workflow.xml
You MUST have already loaded and processed: {project-root}/_bmad/bmm/workflows/4-implementation/batch-super-dev/workflow.yaml
HOSPITAL-GRADE CODE STANDARDS
This code may be used in healthcare settings where LIVES ARE AT STAKE.
Every line of code must meet hospital-grade reliability standards.
QUALITY >> SPEED. Take 5 hours to do it right, not 1 hour to do it poorly.
Read {sprint_status} file
Parse metadata: project, project_key, tracking_system
Parse development_status map
Filter stories with status = "ready-for-dev" OR "backlog"
Exclude entries that are epics (keys starting with "epic-") or retrospectives (keys ending with "-retrospective")
Group by status: ready_for_dev_stories, backlog_stories
Further filter stories to only include those starting with "{filter_by_epic}-"
If filter_by_epic = "3", only include stories like "3-1-...", "3-2-...", etc.
Sort filtered stories by epic number, then story number (e.g., 1-1, 1-2, 2-1, 3-1)
Store as: ready_for_dev_stories (list of story keys)
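For illustration only (the workflow is executed by the orchestrating agent, not by code), a minimal Python sketch of the epic filter and numeric sort, assuming story keys follow the `epic-story[-slug]` shape described in the next step; the function name is invented:

```python
import re

def filter_and_sort(story_keys, filter_by_epic=None):
    """Keep only stories in the selected epic, then sort by epic number, then story number."""
    if filter_by_epic:
        story_keys = [k for k in story_keys if k.startswith(f"{filter_by_epic}-")]

    def sort_key(key):
        m = re.match(r"^(\d+)-(\d+)", key)
        return (int(m.group(1)), int(m.group(2))) if m else (0, 0)

    return sorted(story_keys, key=sort_key)

# filter_and_sort(["3-2-b", "1-1-a", "3-1-c"], filter_by_epic="3") -> ["3-1-c", "3-2-b"]
```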
Exit workflow
Combine both lists: available_stories = ready_for_dev_stories + backlog_stories
Read comment field for each story from sprint-status.yaml (text after # on the same line)
For each story, verify story file exists using COMPREHENSIVE naming pattern detection:
Parse story_key (e.g., "20-9-megamenu-navigation" or "20-9") to extract:
- epic_num: first number (e.g., "20")
- story_num: second number (e.g., "9")
- optional_suffix: everything after second number (e.g., "-megamenu-navigation" or empty)
Input: "20-9-megamenu-navigation" β epic=20, story=9, suffix="-megamenu-navigation"
Input: "20-11" β epic=20, story=11, suffix=""
🚨 ONE CANONICAL FORMAT - NO VARIATIONS
CANONICAL FORMAT: {story_key}.md
20-9-megamenu-navigation.md (epic-story-slug, NO prefix)
18-1-charge-model-state-machine.md (epic-story-slug, NO prefix)
Check if file exists: {sprint_artifacts}/{story_key}.md
Set file_status = ✅ EXISTS
Store file_path = {sprint_artifacts}/{story_key}.md
Set file_status = ❌ MISSING
Check for legacy wrong-named files:
Search for: story-{story_key}.md (wrong - has "story-" prefix)
Rename: mv story-{story_key}.md {story_key}.md
Verify rename worked
Set file_status = ✅ EXISTS (after rename)
Store file_path = {sprint_artifacts}/{story_key}.md
file_status = ❌ MISSING (genuinely missing)
Mark stories as: ✅ (file exists), ❌ (file missing), 🔄 (already implemented but not marked done)
For each story in available_stories (ready_for_dev + backlog):
Check if story file exists (already done in Step 2)
Mark story as needs_story_creation = true
Mark story.creation_workflow = "/create-story" (lightweight, no gap analysis)
Mark story as validated (will create in next step)
Mark story for removal from selection
Add to skipped_stories list with reason: "Story file missing (status ready-for-dev but no file)"
Read story file: {{file_path}}
Parse sections and validate BMAD format
Check for all 12 required sections:
1. Business Context
2. Current State
3. Acceptance Criteria
4. Tasks and Subtasks
5. Technical Requirements
6. Architecture Compliance
7. Testing Requirements
8. Dev Agent Guardrails
9. Definition of Done
10. References
11. Dev Agent Record
12. Change Log
Count sections present: sections_found
Check Current State content length (word count)
Check Acceptance Criteria item count: ac_count
Count unchecked tasks ([ ]) in Tasks/Subtasks: task_count
Look for gap analysis markers (✅/❌) in Current State
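A partial sketch of these validation checks (section count, unchecked task count, gap markers), assuming the story file is plain markdown; the word-count and AC-count checks are omitted for brevity and the helper name is invented:

```python
import re

REQUIRED_SECTIONS = [
    "Business Context", "Current State", "Acceptance Criteria", "Tasks and Subtasks",
    "Technical Requirements", "Architecture Compliance", "Testing Requirements",
    "Dev Agent Guardrails", "Definition of Done", "References",
    "Dev Agent Record", "Change Log",
]

def validate_story(md_text: str) -> dict:
    """Rough mirror of the checks above; thresholds come from this step."""
    sections_found = sum(
        1 for s in REQUIRED_SECTIONS
        if re.search(rf"^#+\s*.*{re.escape(s)}", md_text, re.MULTILINE)
    )
    task_count = len(re.findall(r"^- \[ \]", md_text, re.MULTILINE))
    has_gap_markers = ("✅" in md_text) or ("❌" in md_text)
    return {
        "sections_found": sections_found,    # need 12
        "task_count": task_count,            # need >= 3 unchecked tasks
        "has_gap_markers": has_gap_markers,  # evidence of gap analysis in Current State
    }
```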
Mark story for removal from selection
Add to skipped_stories list with reason: "INVALID - Only {{task_count}} tasks (need β₯3)"
Regenerate story with codebase scan? (yes/no):
Mark story for removal from selection
Add to skipped_stories list with reason: "Story regeneration requires manual workflow (agents cannot invoke /create-story)"
Add to manual_actions_required list: "Regenerate {{story_key}} with /create-story-with-gap-analysis"
Mark story for removal from selection
Add to skipped_stories list with reason: "User declined regeneration"
Mark story as validated
Mark story as validated (already done)
Remove skipped stories from ready_for_dev_stories
Update count of available stories
Exit workflow
For each validated story:
Read story file: {{file_path}}
Count unchecked tasks ([ ]) at top level only in Tasks/Subtasks section β task_count
(See workflow.yaml complexity.task_counting.method = "top_level_only")
Set {{story_key}}.complexity = {level: "INVALID", score: 0, task_count: {{task_count}}, reason: "Insufficient tasks ({{task_count}}/3 minimum)"}
Continue to next story
Extract file paths mentioned in tasks β file_count
Scan story title and task descriptions for risk keywords using rules from workflow.yaml:
- Case insensitive matching (require_word_boundaries: true)
- Include keyword variants (e.g., "authentication" matches "auth")
- Scan: story_title, task_descriptions, subtask_descriptions
Calculate complexity score:
- Base score = task_count
- Add 5 for each HIGH risk keyword match (auth, security, payment, migration, database, schema, encryption)
- Add 2 for each MEDIUM risk keyword match (api, integration, external, third-party, cache)
- Add 0 for LOW risk keywords (ui, style, config, docs, test)
- Count each keyword only once (no duplicates)
Assign complexity level using mutually exclusive decision tree (priority order):
1. Check COMPLEX first (highest priority):
IF (task_count ≥ 16 OR complexity_score ≥ 20 OR has ANY HIGH risk keyword)
THEN level = COMPLEX
2. Else check MICRO (lowest complexity):
ELSE IF (task_count ≤ 3 AND complexity_score ≤ 5 AND file_count ≤ 5)
THEN level = MICRO
3. Else default to STANDARD:
ELSE level = STANDARD
This ensures no overlaps:
- Story with HIGH keyword → COMPLEX (never MICRO or STANDARD)
- Story with 4-15 tasks or >5 files → STANDARD (not MICRO or COMPLEX)
- Story with ≤3 tasks, ≤5 files, no HIGH keywords → MICRO
Store complexity_level for story: {{story_key}}.complexity = {level, score, task_count, risk_keywords}
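The scoring and classification rules above can be read as one small function. This sketch uses the keyword lists and thresholds from this step but simplifies keyword matching to whole-word hits (the variant matching and word-boundary rules from workflow.yaml are not reproduced):

```python
HIGH = {"auth", "security", "payment", "migration", "database", "schema", "encryption"}
MEDIUM = {"api", "integration", "external", "third-party", "cache"}

def classify(task_count: int, file_count: int, text: str) -> dict:
    """text = story title + task/subtask descriptions; each keyword is counted once."""
    words = set(text.lower().replace(",", " ").split())
    high_hits = HIGH & words
    medium_hits = MEDIUM & words
    score = task_count + 5 * len(high_hits) + 2 * len(medium_hits)

    if task_count >= 16 or score >= 20 or high_hits:        # COMPLEX checked first
        level = "COMPLEX"
    elif task_count <= 3 and score <= 5 and file_count <= 5:  # then MICRO
        level = "MICRO"
    else:                                                     # everything else
        level = "STANDARD"
    return {"level": level, "score": score, "task_count": task_count,
            "risk_keywords": sorted(high_hits | medium_hits)}
```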
Group stories by complexity level
Filter out INVALID stories (those with level="INVALID"):
For each INVALID story, add to skipped_stories with reason from complexity object
Remove INVALID stories from complexity_groups and ready_for_dev_stories
Exit workflow
**Select stories to process:**
Enter story numbers to process (examples):
- Single: `1`
- Multiple: `1,3,5`
- Range: `1-5` (processes 1,2,3,4,5)
- Mixed: `1,3-5,8` (processes 1,3,4,5,8)
- All: `all` (processes all {{count}} stories)
Or:
- `cancel` - Exit without processing
**Your selection:**
Parse user input
Exit workflow
Set selected_stories = all ready_for_dev_stories
Parse selection (handle commas, ranges)
Input "1,3-5,8" β indexes [1,3,4,5,8] β map to story keys
Map selected indexes to story keys from ready_for_dev_stories
Store as: selected_stories
Truncate selected_stories to first max_stories entries
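A minimal sketch of the selection parsing (single numbers, comma lists, ranges, `all`, `cancel`), with 1-based indexes as in the menu above; the function name is illustrative:

```python
def parse_selection(raw: str, stories: list):
    """stories = ready_for_dev story keys in menu order; returns selected keys or None on cancel."""
    raw = raw.strip().lower()
    if raw == "cancel":
        return None
    if raw == "all":
        return list(stories)
    indexes = set()
    for part in raw.split(","):
        if "-" in part:
            lo, hi = part.split("-", 1)
            indexes.update(range(int(lo), int(hi) + 1))
        else:
            indexes.add(int(part))
    return [stories[i - 1] for i in sorted(indexes) if 1 <= i <= len(stories)]

# parse_selection("1,3-5,8", keys) -> stories 1, 3, 4, 5, 8 (then truncated to max_stories)
```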
Display confirmation
For each selected story:
Read story file: {{story_file_path}}
Mark story as needs_creation
Continue to next story
Add to validation_failures list
Continue to next story
Validate story completeness:
- Count sections (need 12)
- Check Current State word count (need β₯100)
- Check gap analysis markers (β
/β)
- Count Acceptance Criteria (need β₯3)
- Count unchecked tasks (need β₯3)
Add to validation_failures: "{{story_key}}: Only {{task_count}} tasks"
Add to validation_warnings: "{{story_key}}: Needs regeneration"
Add to validated_stories list
Remove failed stories and continue? (yes/no):
Remove validation_failures from selected_stories
Exit workflow
Continue with these stories anyway? (yes/no):
Exit workflow
Jump to Step 3
Create these {{needs_creation.length}} story files now? (yes/no):
Remove needs_creation stories from selected_stories
Exit workflow
Spawn Task agents in PARALLEL (send all Task calls in SINGLE message):
For each story in needs_creation:
Task tool call:
- subagent_type: "general-purpose"
- description: "Create story {{story_key}}"
- prompt: "Create basic story file for {{story_key}}.
INSTRUCTIONS:
1. Read epic description from docs/epics.md (Epic {{epic_num}})
2. Read PRD requirements (docs/prd-art-collective-tenants.md)
3. Read architecture (docs/architecture-space-rentals.md)
4. Extract FRs for this story from PRD
5. Break down into 3-7 tasks with subtasks
6. Create story file at: docs/sprint-artifacts/{{story_key}}.md
7. Use template from: _bmad/bmm/workflows/4-implementation/create-story/template.md
8. NO gap analysis (defer to implementation)
9. Commit story file when complete
10. Report: story file path
Mode: batch (lightweight, no codebase scanning)"
- Store returned agent_id for tracking
Wait for ALL agents to complete (blocking)
Check each agent output:
Parse agent output for {{story_key}}
Verify file exists at docs/sprint-artifacts/{{story_key}}.md
Mark story.needs_story_creation = false
Add to failed_creations list
Remove from selected_stories
**Choose execution mode:**
[I] INTERACTIVE CHECKPOINT MODE
- After each story completes, pause for your review
- You approve before proceeding to next story
- Allows course correction if issues detected
- Best for: When you want to monitor progress
[A] FULLY AUTONOMOUS MODE
- Process all selected stories without pausing
- No human interaction until completion
- Best for: When stories are well-defined and you trust the process
Which mode? [I/A]:
Read user input
Set execution_mode = "interactive_checkpoint"
Set execution_mode = "fully_autonomous"
Activate hospital_grade_mode = true
Set quality_multiplier = 1.5
For each selected story:
Read story file Tasks section
Analyze task descriptions for dependencies on other selected stories:
Dependency detection rules:
- Look for mentions of other story keys (e.g., "18-1", "18-2")
- Look for phrases like "requires", "depends on", "needs", "after"
- Look for file paths that other stories create
- Look for models/services that other stories define
Build dependency map:
story_key: {
depends_on: [list of story keys this depends on],
blocks: [list of story keys that depend on this]
}
Compute waves using topological sort:
Wave 1: Stories with no dependencies (can start immediately)
Wave 2: Stories that only depend on Wave 1
Wave 3: Stories that depend on Wave 2
...
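A sketch of the wave computation as a level-by-level (Kahn-style) topological sort over the dependency map above; the cycle fallback is an assumption, since this step does not say how cycles are handled:

```python
def compute_waves(dep_map: dict) -> list:
    """dep_map: story_key -> {"depends_on": [...]}; returns waves of story keys."""
    remaining = {k: set(v["depends_on"]) for k, v in dep_map.items()}
    waves = []
    while remaining:
        ready = [k for k, deps in remaining.items() if not deps & set(remaining)]
        if not ready:                       # dependency cycle: lump the rest into one wave
            waves.append(sorted(remaining))
            break
        waves.append(sorted(ready))
        for k in ready:
            del remaining[k]
    return waves

# compute_waves({"18-1": {"depends_on": []}, "18-2": {"depends_on": ["18-1"]}})
#   -> [["18-1"], ["18-2"]]
```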
**How should these stories be processed?**
Options:
- **S**: Sequential - Run stories one-by-one (Task agent finishes before next starts)
- **P**: Parallel - Run stories concurrently (Multiple Task agents running simultaneously)
**Note:** Both modes use Task agents to keep story context out of the main thread.
The only difference is the number running at once.
Enter: S or P
Capture response as: execution_strategy
Set execution_mode = "sequential"
Set parallel_count = 1
Set use_task_agents = true
Set execution_mode = "parallel"
Set use_task_agents = true
**How many agents should run in parallel?**
Options:
- **2**: Conservative (low resource usage, easier debugging)
- **4**: Moderate (balanced performance, recommended)
- **8**: Aggressive (higher throughput)
- **10**: Maximum (10 agent limit for safety)
- **all**: Use all stories (max 10 agents)
Enter number (2-10) or 'all':
Capture response as: parallel_count
If parallel_count == 'all': set parallel_count = min(count of selected_stories, 10)
If parallel_count > 10: set parallel_count = 10 (safety limit)
Confirm execution plan? (yes/no):
Exit workflow
Initialize counters: completed=0, failed=0, failed_stories=[], reconciliation_warnings=[], reconciliation_warnings_count=0
Set start_time = current timestamp
Jump to Step 4-Sequential (Task agents, one at a time)
Jump to Step 4-Wave (Task agents, wave-based parallel)
Jump to Step 4-Parallel (Task agents, multiple concurrent)
Set abort_batch = false
For each wave in waves (in order):
Initialize wave worker pool state:
- wave_queue = stories
- Resolve wave_queue items to full story objects by matching story_key in selected_stories (include complexity_level, story_file_path)
- active_workers = {} (map of worker_id β {story_key, task_id, started_at})
- completed_wave_stories = []
- failed_wave_stories = []
- next_story_index = 0
- max_workers = min(parallel_count, wave_queue.length)
Spawn the first {{max_workers}} agents (or fewer if fewer stories remain):
While next_story_index < min(max_workers, wave_queue.length):
story_key = wave_queue[next_story_index].story_key
complexity_level = wave_queue[next_story_index].complexity_level
story_file_path = wave_queue[next_story_index].story_file_path
worker_id = next_story_index + 1
Spawn Task agent:
- subagent_type: "general-purpose"
- description: "Implement story {{story_key}}"
- prompt: "Execute super-dev-pipeline workflow for story {{story_key}}.
Story file: docs/sprint-artifacts/{{story_key}}.md
Complexity: {{complexity_level}}
Mode: batch
Load workflow: /Users/jonahschulte/git/BMAD-METHOD/src/modules/bmm/workflows/4-implementation/super-dev-pipeline
Follow the multi-agent pipeline (builder, inspector, reviewer, fixer).
Commit when complete, update story status, report results."
- run_in_background: true (non-blocking)
Store in active_workers[worker_id]:
story_key: {{story_key}}
task_id: {{returned_task_id}}
started_at: {{timestamp}}
status: "running"
Increment next_story_index
WAVE BARRIER: Complete all stories in this wave before starting next wave
While active_workers.size > 0 OR next_story_index < wave_queue.length:
Poll for completed workers (check task outputs non-blocking):
For each worker_id in active_workers:
Check if worker task completed using TaskOutput(task_id, block=false)
Continue to next worker (don't wait)
Get worker details: story_key = active_workers[worker_id].story_key
Execute Step 4.5: Smart Story Reconciliation
Load reconciliation instructions: {installed_path}/step-4.5-reconcile-story-status.md
Execute reconciliation with story_key={{story_key}}
🚨 MANDATORY STORY FILE VERIFICATION - MAIN ORCHESTRATOR MUST RUN BASH
STORY_FILE="docs/sprint-artifacts/{{story_key}}.md"
echo "π Verifying story file: {{story_key}}"
# Note: grep -c prints the count (including 0) itself; only default to 0 when the file is missing
CHECKED_COUNT=$(grep -c "^- \[x\]" "$STORY_FILE" 2>/dev/null); CHECKED_COUNT=${CHECKED_COUNT:-0}
TOTAL_COUNT=$(grep -c "^- \[.\]" "$STORY_FILE" 2>/dev/null); TOTAL_COUNT=${TOTAL_COUNT:-0}
echo " Checked tasks: $CHECKED_COUNT/$TOTAL_COUNT"
RECORD_FILLED=$(grep -A 20 "^### Dev Agent Record" "$STORY_FILE" 2>/dev/null | grep -c "Claude Sonnet"); RECORD_FILLED=${RECORD_FILLED:-0}
echo " Dev Agent Record: $RECORD_FILLED"
echo "$CHECKED_COUNT" > /tmp/checked_{{story_key}}.txt
echo "$RECORD_FILLED" > /tmp/record_{{story_key}}.txt
AUTO-FIX PROCEDURE:
1. Read agent's commit to see what files were created/modified
2. Read story Tasks section to see what was supposed to be built
3. For each task, check if corresponding code exists in the commit (matching heuristic sketched below)
4. If code exists, check off the task using Edit tool
5. Fill in Dev Agent Record with commit details
6. Verify fixes worked (re-count checked tasks)
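As a rough illustration of steps 1-4, the sketch below marks a task as done when a file changed in the story's commit shares a name token with the task text. The token-overlap heuristic is an assumption; the orchestrator actually performs these edits with the Edit tool rather than a script:

```python
import re, subprocess

def autofix_checkboxes(story_path: str, commit_sha: str) -> int:
    """Check off unchecked tasks whose text overlaps a file changed in the story's commit."""
    changed = subprocess.run(
        ["git", "diff", "--name-only", f"{commit_sha}~1", commit_sha],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    tokens = {t.lower() for f in changed for t in re.split(r"[/._-]", f) if len(t) > 3}

    with open(story_path) as f:
        lines = f.read().splitlines()
    fixed = 0
    for i, line in enumerate(lines):
        if line.startswith("- [ ]") and tokens & set(re.findall(r"[a-z]{4,}", line.lower())):
            lines[i] = line.replace("- [ ]", "- [x]", 1)  # check the box
            fixed += 1
    with open(story_path, "w") as f:
        f.write("\n".join(lines) + "\n")
    return fixed
```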
Continue with story completion
Override story status to "in-progress"
Add to reconciliation_warnings with detailed diagnostic
Continue (do NOT kill workers)
Increment completed counter
Add to completed_wave_stories
Add to reconciliation_warnings: {story_key: {{story_key}}, warning_message: "Only {{task_completion_pct}}% tasks checked - manual verification needed"}
Remove worker_id from active_workers (free the slot)
IMMEDIATELY refill slot if stories remain in this wave:
story_key = wave_queue[next_story_index].story_key
complexity_level = wave_queue[next_story_index].complexity_level
story_file_path = wave_queue[next_story_index].story_file_path
Spawn new Task agent for this worker_id (same parameters as init)
Update active_workers[worker_id] with new task_id and story_key
Increment next_story_index
Get worker details: story_key = active_workers[worker_id].story_key
Increment failed counter
Add story_key to failed_stories list
Add to failed_wave_stories
Remove worker_id from active_workers (free the slot)
Kill all active workers
Clear active_workers
Set abort_batch = true
Break worker pool loop
story_key = wave_queue[next_story_index].story_key
complexity_level = wave_queue[next_story_index].complexity_level
story_file_path = wave_queue[next_story_index].story_file_path
Spawn new Task agent for this worker_id
Update active_workers[worker_id] with new task_id and story_key
Increment next_story_index
Break worker pool loop
Display live progress every 30 seconds:
Sleep 5 seconds before next poll (prevents tight loop)
Jump to Step 5 (Summary)
After all waves processed, jump to Step 5 (Summary)
For each story in selected_stories:
Use Task tool to spawn agent:
- subagent_type: "general-purpose"
- description: "Implement story {{story_key}}"
- prompt: "Execute super-dev-pipeline workflow for story {{story_key}}.
Story file: docs/sprint-artifacts/{{story_key}}.md
Complexity: {{complexity_level}}
Mode: batch
Load workflow: /Users/jonahschulte/git/BMAD-METHOD/src/modules/bmm/workflows/4-implementation/super-dev-pipeline
Follow the multi-agent pipeline (builder, inspector, reviewer, fixer).
Commit when complete, update story status, report results."
- Store agent_id
WAIT for agent to complete (blocking call)
🚨 STORY RECONCILIATION - ORCHESTRATOR DOES THIS NOW (NOT AGENTS)
YOU (orchestrator) must use Bash tool NOW with this command:
STORY_FILE="docs/sprint-artifacts/{{story_key}}.md"
echo "Verifying story file: $STORY_FILE"
# grep -c prints the count (including 0) itself; only default to 0 when the file is missing
CHECKED_COUNT=$(grep -c "^- \[x\]" "$STORY_FILE" 2>/dev/null); CHECKED_COUNT=${CHECKED_COUNT:-0}
TOTAL_COUNT=$(grep -c "^- \[.\]" "$STORY_FILE" 2>/dev/null); TOTAL_COUNT=${TOTAL_COUNT:-0}
echo "Checked tasks: $CHECKED_COUNT/$TOTAL_COUNT"
RECORD_FILLED=$(grep -A 20 "^### Dev Agent Record" "$STORY_FILE" 2>/dev/null | grep -c "Claude Sonnet"); RECORD_FILLED=${RECORD_FILLED:-0}
echo "Dev Agent Record: $RECORD_FILLED"
echo "checked_count=$CHECKED_COUNT"
echo "record_filled=$RECORD_FILLED"
After running Bash tool, read the output and extract checked_count and record_filled values
MANDATORY AUTO-FIX - MAIN ORCHESTRATOR MUST EXECUTE THIS
AUTO-FIX PROCEDURE (YOU MUST DO THIS):
# Step 1: Get commit for this story
COMMIT_SHA=$(git log -1 --grep="{{story_key}}" --pretty=format:"%H" 2>/dev/null)
if [ -z "$COMMIT_SHA" ]; then
# Try finding by story key pattern
COMMIT_SHA=$(git log -5 --pretty=format:"%H %s" | grep -i "{{story_key}}" | head -1 | cut -d' ' -f1)
fi
echo "Found commit: $COMMIT_SHA"
# Step 2: Get files changed
git diff ${COMMIT_SHA}~1 $COMMIT_SHA --name-only | grep -v "test/" | grep -v "__tests__"
Step 3: Read story file to get Tasks section:
docs/sprint-artifacts/{{story_key}}.md
Step 4: For EACH task in Tasks section:
For each line starting with "- [ ]":
- Extract task description
- Check if git diff contains related file/function
- If YES: Use Edit tool to change "- [ ]" to "- [x]"
- Verify edit: Run bash grep to confirm checkbox is now checked
Step 5: Fill Dev Agent Record using Edit tool:
- Find "### Dev Agent Record" section
- Replace with actual data:
* Agent Model: Claude Sonnet 4.5 (multi-agent pipeline)
* Date: {{current_date}}
* Files: {{files_from_git_diff}}
* Notes: {{from_commit_message}}
Step 6: Re-run verification bash commands:
CHECKED_COUNT=$(grep -c "^- \[x\]" "$STORY_FILE")
RECORD_FILLED=$(grep -A 20 "^### Dev Agent Record" "$STORY_FILE" | grep -c "Claude Sonnet")
echo "After auto-fix:"
echo " Checked tasks: $CHECKED_COUNT"
echo " Dev Agent Record: $RECORD_FILLED"
if [ "$CHECKED_COUNT" -eq 0 ]; then
echo "β AUTO-FIX FAILED: Story file still not updated"
exit 1
fi
echo "β
AUTO-FIX SUCCESS"
Continue with story as completed
Update sprint-status to "in-progress" instead of "done"
Add to failed_stories list
Continue to next story (if continue_on_failure)
Override story status to "in-progress"
Override sprint-status to "in-progress"
Add to reconciliation_warnings
Increment completed counter
PAUSE FOR USER REVIEW
Read user input
Display story file, test results, review findings
Read user input
Jump to Step 5 (Summary)
Jump to Step 5 (Summary)
Increment completed counter (implementation was successful)
Add to reconciliation_warnings: {story_key: {{story_key}}, warning_message: "Reconciliation failed - manual verification needed"}
Increment reconciliation_warnings_count
Increment failed counter
Add story_key to failed_stories list
Jump to Step 5 (Summary)
Wait {{pause_between_stories}} seconds
After all stories processed, jump to Step 5 (Summary)
Initialize worker pool state:
- story_queue = selected_stories (all stories to process)
- active_workers = {} (map of worker_id β {story_key, task_id, started_at})
- completed_stories = []
- failed_stories = []
- next_story_index = 0
- max_workers = {{parallel_count}}
Spawn the first {{max_workers}} agents (or fewer if fewer stories remain):
While next_story_index < min(max_workers, story_queue.length):
story_key = story_queue[next_story_index]
worker_id = next_story_index + 1
Spawn Task agent:
- subagent_type: "general-purpose"
- description: "Implement story {{story_key}}"
- prompt: "Execute super-dev-pipeline workflow for story {{story_key}}.
Story file: docs/sprint-artifacts/{{story_key}}.md
Complexity: {{complexity_level}}
Mode: batch
Load workflow: /Users/jonahschulte/git/BMAD-METHOD/src/modules/bmm/workflows/4-implementation/super-dev-pipeline
Follow the multi-agent pipeline (builder, inspector, reviewer, fixer).
Commit when complete, update story status, report results."
- run_in_background: true (non-blocking - critical for semaphore pattern)
Store in active_workers[worker_id]:
story_key: {{story_key}}
task_id: {{returned_task_id}}
started_at: {{timestamp}}
status: "running"
Increment next_story_index
After spawning initial workers:
SEMAPHORE PATTERN: Keep {{max_workers}} agents running continuously
While active_workers.size > 0 OR next_story_index < story_queue.length:
Poll for completed workers (check task outputs non-blocking):
For each worker_id in active_workers:
Check if worker task completed using TaskOutput(task_id, block=false)
Continue to next worker (don't wait)
Get worker details: story_key = active_workers[worker_id].story_key
Execute Step 4.5: Smart Story Reconciliation
Load reconciliation instructions: {installed_path}/step-4.5-reconcile-story-status.md
Execute reconciliation with story_key={{story_key}}
🚨 MANDATORY STORY FILE VERIFICATION - MAIN ORCHESTRATOR MUST RUN BASH
STORY_FILE="docs/sprint-artifacts/{{story_key}}.md"
echo "π Verifying story file: {{story_key}}"
# grep -c prints the count (including 0) itself; only default to 0 when the file is missing
CHECKED_COUNT=$(grep -c "^- \[x\]" "$STORY_FILE" 2>/dev/null); CHECKED_COUNT=${CHECKED_COUNT:-0}
TOTAL_COUNT=$(grep -c "^- \[.\]" "$STORY_FILE" 2>/dev/null); TOTAL_COUNT=${TOTAL_COUNT:-0}
echo " Checked tasks: $CHECKED_COUNT/$TOTAL_COUNT"
RECORD_FILLED=$(grep -A 20 "^### Dev Agent Record" "$STORY_FILE" 2>/dev/null | grep -c "Claude Sonnet"); RECORD_FILLED=${RECORD_FILLED:-0}
echo " Dev Agent Record: $RECORD_FILLED"
echo "$CHECKED_COUNT" > /tmp/checked_{{story_key}}.txt
echo "$RECORD_FILLED" > /tmp/record_{{story_key}}.txt
AUTO-FIX PROCEDURE:
1. Read agent's commit to see what files were created/modified
2. Read story Tasks section to see what was supposed to be built
3. For each task, check if corresponding code exists in commit
4. If code exists, check off the task using Edit tool
5. Fill in Dev Agent Record with commit details
6. Verify fixes worked (re-count checked tasks)
Continue with story completion
Override story status to "in-progress"
Add to reconciliation_warnings with detailed diagnostic
Continue (do NOT kill workers)
Add to completed_stories
Add to reconciliation_warnings: {story_key: {{story_key}}, warning_message: "Only {{task_completion_pct}}% tasks checked - manual verification needed"}
Remove worker_id from active_workers (free the slot)
IMMEDIATELY refill slot if stories remain:
story_key = story_queue[next_story_index]
Spawn new Task agent for this worker_id (same parameters as init)
Update active_workers[worker_id] with new task_id and story_key
Increment next_story_index
Get worker details: story_key = active_workers[worker_id].story_key
Add to failed_stories
Remove worker_id from active_workers (free the slot)
Kill all active workers
Clear story_queue
Break worker pool loop
story_key = story_queue[next_story_index]
Spawn new Task agent for this worker_id
Update active_workers[worker_id] with new task_id and story_key
Increment next_story_index
Display live progress every 30 seconds:
Sleep 5 seconds before next poll (prevents tight loop)
After worker pool drains (all stories processed), jump to Step 5 (Summary)
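A generic sketch of the semaphore-style worker pool described above. `spawn_task` and `poll_task` are hypothetical stand-ins for the Task tool call and the non-blocking TaskOutput poll, not real APIs:

```python
import time

def run_pool(stories, max_workers, spawn_task, poll_task):
    """Keep max_workers tasks running; refill each slot as soon as it frees up."""
    queue, active, completed, failed = list(stories), {}, [], []
    while queue or active:
        while queue and len(active) < max_workers:     # keep all slots full
            story = queue.pop(0)
            active[spawn_task(story)] = story          # task_id -> story_key
        for task_id in list(active):                   # non-blocking poll
            result = poll_task(task_id)
            if result is None:
                continue                               # still running
            story = active.pop(task_id)                # free the slot
            (completed if result.get("ok") else failed).append(story)
        time.sleep(5)                                  # avoid a tight polling loop
    return completed, failed
```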
Calculate end_time and total_duration
Calculate success_rate = (completed / total_count) * 100
Save batch log to {batch_log}
Log contents: start_time, end_time, total_duration, selected_stories, completed_stories, failed_stories, success_rate
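As an illustration, a summary writer covering the fields listed above; the JSON format, the denominator used for success_rate, and the function name are assumptions:

```python
import json, time

def write_batch_log(path, start_time, selected, completed, failed):
    """Write the batch summary; start_time is an epoch timestamp captured at Step 3."""
    end_time = time.time()
    summary = {
        "start_time": start_time,
        "end_time": end_time,
        "total_duration_sec": round(end_time - start_time, 1),
        "selected_stories": selected,
        "completed_stories": completed,
        "failed_stories": failed,
        "success_rate_pct": round(100 * len(completed) / max(len(selected), 1), 1),
    }
    with open(path, "w") as f:
        json.dump(summary, f, indent=2)
    return summary
```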