feat: add story automator skills

parent c91db0db4b
commit df856fad44

@@ -0,0 +1,6 @@
---
name: bmad-story-automator-go
description: 'Automates the Phase 4 story loop across story creation, implementation, guardrail testing, review, and retrospective using tmux-managed child sessions. Use when you want hands-off multi-story implementation after sprint planning on macOS or Linux.'
---

Follow the instructions in ./workflow.md.

Binary file not shown. (4 files)
@@ -0,0 +1,36 @@
#!/usr/bin/env bash

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
ARTIFACT_ROOT="$SCRIPT_DIR/../artifacts/story-automator/bin"

detect_platform() {
  local os arch
  os="$(uname -s)"
  arch="$(uname -m)"

  case "$os" in
    Darwin) os="darwin" ;;
    Linux) os="linux" ;;
    *) echo "Unsupported OS: $os" >&2; exit 1 ;;
  esac

  case "$arch" in
    x86_64) arch="amd64" ;;
    arm64|aarch64) arch="arm64" ;;
    *) echo "Unsupported architecture: $arch" >&2; exit 1 ;;
  esac

  printf '%s-%s' "$os" "$arch"
}

PLATFORM="$(detect_platform)"
TARGET="$ARTIFACT_ROOT/$PLATFORM/story-automator"

if [ ! -x "$TARGET" ]; then
  echo "Missing story-automator binary for $PLATFORM: $TARGET" >&2
  exit 1
fi

exec "$TARGET" "$@"
@@ -0,0 +1,102 @@
# Adaptive Retry Strategy

**Purpose:** Handle dev-story failures intelligently based on progress patterns and agent switching.

**Version:** 2.0.0

**See also:** `retry-fallback-strategy.md` for the universal retry/fallback pattern.

---

## Agent Alternation

This strategy works WITH the retry-fallback pattern:
- Odd attempts (1, 3, 5): use the primary agent
- Even attempts (2, 4): use the fallback agent (if configured)
- Plateau detection applies ACROSS agents (the same task stalling on both agents indicates a complexity issue)
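The alternation rule above can be sketched in shell. This is an illustrative sketch only — the function name and argument order are hypothetical, not the orchestrator's actual implementation:

```shell
# Pick the agent for a given attempt number by parity.
# Even attempts use the fallback when one is configured; otherwise the primary.
select_agent() {
  local attempt="$1" primary="$2" fallback="${3:-}"
  if [ $((attempt % 2)) -eq 0 ] && [ -n "$fallback" ]; then
    echo "$fallback"
  else
    echo "$primary"
  fi
}

select_agent 1 claude codex   # → claude
select_agent 2 claude codex   # → codex
select_agent 3 claude codex   # → claude
```

With no fallback configured, every attempt stays on the primary agent.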
---

## Progress Tracking

Track failure patterns across retries (per agent):

```
attempt_1_progress = {agent: primary, tasks: 5/9}
attempt_2_progress = {agent: fallback, tasks: 4/9}
attempt_3_progress = {agent: primary, tasks: 5/9}   # same as attempt 1
attempt_4_progress = {agent: fallback, tasks: 5/9}  # plateau detected
attempt_5_progress = {agent: primary, tasks: 5/9}   # confirmed plateau
```

---

## Decision Logic

| Attempt | Condition | Action |
|---------|-----------|--------|
| 1 | FAILURE | Switch to fallback agent, retry |
| 2 | FAILURE, progress > attempt 1 | Switch back to primary, retry with 2x poll interval |
| 2 | FAILURE, progress ≤ attempt 1 | Switch back to primary, analyze whether it stalled at the same plateau point |
| 3 | FAILURE, plateau at same task (any agent) | Continue to attempt 4 (confirm with the other agent) |
| 4 | FAILURE, plateau confirmed across agents | **DEFER** story (complexity/context limit hit) |
| 4 | FAILURE, variable progress | One more retry with extended timeout |
| 5 | FAILURE, plateau confirmed | **DEFER** story |
| 5 | FAILURE, zero progress on all attempts | **ESCALATE** (likely API/connection issue) |
| 5 | FAILURE, variable but incomplete | **ESCALATE** (all retries exhausted) |
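The table compresses to a small decision function. A minimal sketch, assuming plateau status has already been computed from the progress records (the function name and boolean-flag interface are hypothetical):

```shell
# Map (attempt number, plateau confirmed?) to the next action.
# Attempts 1-3 always retry; attempt 4+ defers on a confirmed plateau;
# attempt 5 with no plateau escalates (retries exhausted or zero progress).
decide() {
  local attempt="$1" plateau="$2"
  if [ "$attempt" -lt 4 ]; then
    echo "RETRY"
  elif [ "$plateau" = "true" ]; then
    echo "DEFER"
  elif [ "$attempt" -ge 5 ]; then
    echo "ESCALATE"
  else
    echo "RETRY"   # attempt 4, variable progress: one more extended-timeout try
  fi
}

decide 2 false   # → RETRY
decide 4 true    # → DEFER
decide 5 false   # → ESCALATE
```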
---

## Plateau Detection

If `tasks_completed` is identical across 2+ attempts AND the session crashed/stopped at the same task, this indicates a complexity or context limit.

**Indicators:**
- Same task number across multiple attempts
- Session crashes at the same point
- No progress despite retries

**Action:** Mark the story as "deferred" and continue with the next story.
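A confirmed plateau (the 5-attempt schedule above) reduces to comparing the last three completed-task counts. A sketch under that assumption — the counts would really come from the progress table, and the same-stopping-task check is elided here:

```shell
# Confirmed plateau: the same completed-task count on the last three
# attempts (both agents plus one confirmation run).
is_plateau() {
  [ "$1" = "$2" ] && [ "$2" = "$3" ]
}

is_plateau 5 5 5 && echo "plateau: defer story"
is_plateau 4 5 5 || echo "no plateau: keep retrying"
```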
---

## DEFER Action

When a story is deferred (not failed):

1. **Update state:** Mark story as "deferred" in the progress table
2. **Log:** "Story {N} deferred - dev-story hit complexity limit at {tasks_completed}/{tasks_total}"
3. **Continue:** Proceed to the next story (do not escalate to the user unless custom instructions say otherwise)

**Why defer vs fail?**
- Deferred stories can be revisited manually
- Doesn't block automation of remaining stories
- Distinguishes from actual errors (API failures, etc.)

---

## Integration with Crash Recovery

Adaptive retry works WITH crash recovery AND agent fallback:

| Type | Trigger | Handling |
|------|---------|----------|
| **Adaptive Retry** | Session completed but FAILED (wrong output, tests failed) | Progress-based retry with agent alternation |
| **Crash Recovery** | Session DIED unexpectedly (context limit, API error, kill) | Switch agent, retry with new session |
| **Agent Fallback** | Primary agent fails | Automatic switch to fallback agent on next attempt |

All three mechanisms work together:
1. Primary crashes → switch to fallback, new session
2. Fallback fails at task 5 → switch to primary, retry
3. Primary fails at task 5 → plateau detected across agents → DEFER

**Single attempt counter across all failure types.**

---

## Network Error Handling

On network-related failures (see `retry-fallback-strategy.md`):
- Sleep 60 seconds before next attempt
- Network errors do NOT count toward plateau detection
- Always retry after network error (up to max attempts)
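A sketch of the rules above. The error-message patterns are illustrative assumptions — the real classification lives in `retry-fallback-strategy.md` — and the actual 60-second sleep is only noted in a comment:

```shell
# Crude network-error classifier over the captured error text (patterns
# are examples, not an exhaustive list).
is_network_error() {
  case "$1" in
    *"connection refused"*|*timeout*|*ECONNRESET*|*getaddrinfo*) return 0 ;;
    *) return 1 ;;
  esac
}

# Returns 0 for network errors (retry after backoff, excluded from plateau
# tracking) and 1 for normal failures (feed the adaptive retry table).
handle_failure() {
  local err="$1"
  if is_network_error "$err"; then
    echo "network error: backing off before retry"   # sleep 60 in the real flow
    return 0
  fi
  return 1
}
```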
@@ -0,0 +1,4 @@
{
  "version": "1.0.0",
  "presets": []
}
@@ -0,0 +1,199 @@
# Agent Configuration Prompts

---

## 🚨 PREREQUISITE (MUST BE MET BEFORE DISPLAYING)

Before showing agent configuration prompts, you MUST have:

1. ✅ **Complexity Matrix displayed** - User has seen the story complexity breakdown
2. ✅ **`stories_json` populated** - Programmatic complexity data from `bin/story-automator parse-story --rules`
3. ✅ **Complexity summary available** - Counts of Low/Medium/High stories

**If these are not met, DO NOT proceed with agent configuration. Go back and complete step 3.**

---

## Agent Configuration Display (v6.0.0)

**IMPORTANT:** This prompt MUST reference the actual complexity data. Do not show generic prompts.

**IMPORTANT:** Select the correct table variant based on `skip_automate`:
- If `skip_automate` is **false**: show the **WITH auto** table
- If `skip_automate` is **true**: show the **WITHOUT auto** table

**IMPORTANT:** Before displaying options, check for saved presets:
```bash
presets_result=$("{buildStateDoc}" agent-config list --file "{agentConfigPresets}")
preset_count=$(echo "$presets_result" | jq -r '.count')
```
- If `preset_count > 0`: include **[L]oad saved** option in the menu
- If `preset_count == 0`: omit the [L] option (show only S/U/C)
### Variant A: WITH auto column (skip_automate=false)

```
**AI Agent Configuration (Based on Your Complexity Analysis)**

Your stories by complexity:
- Low: {low_count} stories
- Medium: {medium_count} stories
- High: {high_count} stories

**Agent Details:**
- **Claude:** `claude --dangerously-skip-permissions` + `bmad-` command prefix
- **Codex:** `codex exec --full-auto` + natural language prompt (no command prefix)

**Suggested Complexity-Based Configuration:**

| Complexity | create | dev | auto | review | Rationale |
|------------|--------|-----|------|--------|-----------|
| Low | claude | claude | claude | claude | Claude handles simple tasks well |
| Medium | codex | codex | codex | codex | Codex for moderate complexity (Claude fallback) |
| High | codex | codex | codex | codex | Codex for complex work (Claude fallback) |
| Retro | claude | - | - | - | Retrospectives always use Claude |

**Options:**
1. **[S]uggested** - Apply complexity-based defaults above
2. **[U]niform** - Same agent for ALL stories (you specify which)
3. **[C]ustom** - Define your own per-complexity or per-task settings
{IF_PRESETS}4. **[L]oad saved** - Use a previously saved configuration{END_IF_PRESETS}

Enter choice ({IF_PRESETS}S/U/C/L{ELSE}S/U/C{END_IF}) or provide custom overrides:
```

**Conditional display rule:** `{IF_PRESETS}` blocks render only when `preset_count > 0`.
### Variant B: WITHOUT auto column (skip_automate=true)

```
**AI Agent Configuration (Based on Your Complexity Analysis)**

Your stories by complexity:
- Low: {low_count} stories
- Medium: {medium_count} stories
- High: {high_count} stories

**Agent Details:**
- **Claude:** `claude --dangerously-skip-permissions` + `bmad-` command prefix
- **Codex:** `codex exec --full-auto` + natural language prompt (no command prefix)

**Suggested Complexity-Based Configuration:**

| Complexity | create | dev | review | Rationale |
|------------|--------|-----|--------|-----------|
| Low | claude | claude | claude | Claude handles simple tasks well |
| Medium | codex | codex | codex | Codex for moderate complexity (Claude fallback) |
| High | codex | codex | codex | Codex for complex work (Claude fallback) |
| Retro | claude | - | - | Retrospectives always use Claude |

**Options:**
1. **[S]uggested** - Apply complexity-based defaults above
2. **[U]niform** - Same agent for ALL stories (you specify which)
3. **[C]ustom** - Define your own per-complexity or per-task settings
{IF_PRESETS}4. **[L]oad saved** - Use a previously saved configuration{END_IF_PRESETS}

Enter choice ({IF_PRESETS}S/U/C/L{ELSE}S/U/C{END_IF}) or provide custom overrides:
```
## Load Saved Preset Prompt (Option L)

**Prerequisite:** `preset_count > 0` (checked before displaying main menu).

```bash
presets_result=$("{buildStateDoc}" agent-config list --file "{agentConfigPresets}")
```

Display:
```
**Saved Agent Configurations:**

{numbered list from presets_result, e.g.:}
1. all-claude (saved 2026-03-10)
2. codex-heavy (saved 2026-03-08)

[D]elete a preset

Enter preset number to load, or [B]ack to return to options:
```

**Wait.**

**IF number selected:**
```bash
preset_name="{selected preset name}"
loaded=$("{buildStateDoc}" agent-config load --file "{agentConfigPresets}" --name "$preset_name")
agent_config_json=$(echo "$loaded" | jq -r '.config')
```
Display the loaded config summary, then proceed with this as `agent_config_json`.

**IF D selected:**
Ask which preset number to delete, then:
```bash
"{buildStateDoc}" agent-config delete --file "{agentConfigPresets}" --name "$delete_name"
```
Redisplay this prompt (or return to main options if no presets remain).

**IF B selected:** Return to the main S/U/C/L menu.

---

## Save Configuration Prompt

**When to show:** After the user completes a **[C]ustom** or **[U]niform** configuration (NOT after [S]uggested or [L]oad).

```
**Save this configuration for future runs?**

Enter a name to save (e.g., `all-claude`, `codex-heavy`) or [N]o to skip:
```

**Wait.**

**IF name provided:**
```bash
"{buildStateDoc}" agent-config save --file "{agentConfigPresets}" --name "$save_name" --config-json "$agent_config_json"
```
Display: "Configuration saved as **{save_name}**."

**IF N or empty:** Skip, continue.

---
## Uniform Agent Prompt (Option U)

```
**Uniform Agent Configuration**

Use the same agent for ALL {total_count} stories regardless of complexity.

Which agent for all tasks?
- `claude` - Claude for everything (more capable, slower)
- `codex` - Codex for everything (faster, simpler)
- `claude, false` - Claude only, no fallback
- `codex, claude` - Codex primary, Claude fallback

Enter agent config:
```

## Custom Configuration Prompt (Option C)

```
**Custom Agent Configuration**

Define agents per complexity level and/or per task.

**Per-Complexity Format:** `complexity.task: primary, fallback`
- `low.dev: claude, false` → Claude for low-complexity dev, no fallback
- `medium.create: codex, claude` → Codex for medium-complexity create
- `high.review: claude, false` → Claude for high-complexity review

**Per-Task Format (applies to all complexities):** `task: primary, fallback`
- `review: claude, false` → Claude for ALL reviews
- `dev: codex, claude` → Codex for ALL dev

**Complexity levels:** low, medium, high
**Tasks:** create, dev, auto, review

Enter overrides (comma-separated):
```
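Parsing one override entry in the formats above can be sketched in shell. The `parse_override` helper and its pipe-delimited output are hypothetical — only the `key: primary, fallback` entry format comes from the prompt text:

```shell
# Parse one override entry of the form "key: primary, fallback",
# where key is either "task" or "complexity.task".
parse_override() {
  local entry="$1" key spec primary fallback
  key="${entry%%:*}"
  spec="${entry#*: }"
  primary="${spec%%,*}"
  fallback="${spec#*, }"
  [ "$fallback" = "$spec" ] && fallback=""   # no fallback given
  printf '%s|%s|%s\n' "$key" "$primary" "$fallback"
}

parse_override "low.dev: claude, false"   # → low.dev|claude|false
parse_override "review: codex, claude"    # → review|codex|claude
```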
@@ -0,0 +1,179 @@
# Agent Fallback Troubleshooting

### Issue: Session spawns Claude instead of Codex

**Symptoms:**
- Output shows Claude-specific messages (e.g., "You've used 84% of your weekly limit")
- Expected Codex but got Claude

**Cause:** The `--agent` flag must be passed to `story-automator tmux-wrapper spawn`, not to `build-cmd`.

**Correct Usage (v1.4.0+):**
```bash
# Method 1: Use --agent flag on spawn (RECOMMENDED)
session=$("$scripts" tmux-wrapper spawn dev "$epic" "$story_id" \
  --agent codex \
  --command "$("$scripts" tmux-wrapper build-cmd dev "$story_id")")

# Method 2: Set environment variable before spawn
export AI_AGENT="codex"
session=$("$scripts" tmux-wrapper spawn dev "$epic" "$story_id" \
  --command "$("$scripts" tmux-wrapper build-cmd dev "$story_id")")
```

**Wrong Usage:**
```bash
# WRONG - this doesn't work
session=$("$scripts" tmux-wrapper spawn dev "$epic" "$story_id" \
  --command "$("$scripts" tmux-wrapper build-cmd dev "$story_id" --agent codex)")
```

### Issue: Monitor reports "stuck" but Codex is active

**Symptoms:**
- `story-automator monitor-session` returns `stuck` state after 4 polls
- Manual inspection shows Codex still producing output (no prompt, output continues to grow)

**Cause:** The monitoring script relied on marker detection instead of output freshness.

**Fixed in v2.4.0:**
- Output freshness tracking (no marker reliance)
- `CODEX_OUTPUT_STALE_SECONDS` controls how long Codex can be silent before "stuck"
- Codex still gets a 6-poll grace period before "stuck"

**Verification:**
```bash
# Check if session has AI_AGENT set
tmux show-environment -t "session-name" AI_AGENT

# Manual session status check
"$scripts" tmux-status-check "session-name" --project-root "$PWD"
```

### Issue: log command error when using --agent flag

**Symptoms:**
```
log: Unknown subcommand 'Codex agent detected - applying 1.5x timeout (90min)'
```

**Cause:** macOS ships a `/usr/bin/log` system command. If the `log()` bash function isn't defined before its first use, bash falls through to the system command.

**Fixed in v1.4.0:** The `log()` function is now defined before argument parsing in `story-automator monitor-session`.
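The shape of the fix is simply ordering: define the function before any code path can call `log`. A minimal sketch (the timestamp format is an assumption, not the script's actual one):

```shell
# Define log() first, so bash resolves `log` to this function instead of
# falling through to macOS's /usr/bin/log system command.
log() { printf '[%s] %s\n' "$(date '+%H:%M:%S')" "$*" >&2; }

# Safe to call from here on, including during argument parsing:
log "Codex agent detected - applying 1.5x timeout (90min)"
```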
### Issue: Manual polling required as workaround

**If monitoring still fails**, use this manual polling approach:
```bash
for i in {1..60}; do
  sleep 30
  # Check if session still exists
  if ! tmux has-session -t "session-name" 2>/dev/null; then
    echo "Session ended"
    break
  fi
  # Check for shell prompt (completion indicator)
  last_line=$(tmux capture-pane -t "session-name" -p | tail -1)
  if echo "$last_line" | grep -qE '❯$|\$$|#$'; then
    echo "Session complete (shell prompt detected)"
    break
  fi
done
```

### Issue: Codex sessions explore files but don't execute full workflow (v1.4.0)

**Symptoms:**
- Session output shows file exploration (`sed`, `rg`, `cat` commands)
- No actual review findings or story updates
- Sprint-status never changes from "review" to "done"
- Session completes but workflow steps 1-5 weren't followed

**Cause:** Codex uses natural language prompts and may not follow structured workflow instructions as reliably as Claude.

**Mitigation strategies:**
1. **Use Claude for code-review by default** - More reliable at following multi-step workflows
2. **Add explicit step markers** - Tell Codex to output "STEP 1 COMPLETE", "STEP 2 COMPLETE", etc.
3. **Verify after session** - Check the story file Status field, not just sprint-status

**Recommended agent configuration for reliability:**
```yaml
agentConfig:
  defaultPrimary: "claude"
  defaultFallback: "codex"
  perTask:
    # create-story: Either agent works well
    create:
      primary: "claude"
    # dev-story: Either agent works, Codex may be faster for simple tasks
    dev:
      primary: "codex"
    # code-review: Claude recommended - more reliable at following workflow
    review:
      primary: "claude"
      fallback: false
```
### Issue: Code-review doesn't update sprint-status.yaml

**Symptoms:**
- Code-review session completes
- Story file shows review was done (Dev Agent Record updated)
- But sprint-status.yaml still shows "review" instead of "done"

**Cause:** Code-review workflow step 5 updates sprint-status, but the session may not reach step 5 or may use the wrong story key format.

**Verification (v1.4.0):**
```bash
# Check story file status directly
"$scripts" orchestrator-helper story-file-status 8.2

# Compare with sprint-status
"$scripts" orchestrator-helper sprint-status get "8-2-flipside-crypto-provider"

# If story file shows "done" but sprint-status doesn't, manually sync:
# Edit _bmad-output/implementation-artifacts/sprint-status.yaml and change "8-2-story-name: review" to "done"
```

### When to manually intervene

**Intervene immediately if:**
1. **5 code-review cycles with no progress** - Agent likely stuck in a loop
2. **Story file shows "done" but sprint-status doesn't** - Sync issue, manual fix is faster
3. **Tests passing but review keeps finding issues** - May be false positives
4. **Codex sessions consistently incomplete** - Switch to Claude for that workflow

**Steps for manual intervention:**
```bash
# 1. Check actual story status
"$scripts" orchestrator-helper story-file-status {story_id}

# 2. Run tests to verify code quality
go test ./src/... || npm test

# 3. If tests pass, manually update sprint-status
# Edit: _bmad-output/implementation-artifacts/sprint-status.yaml
# Change: "8-2-story-name: review" to "8-2-story-name: done"

# 4. Resume orchestration - it will see "done" and proceed to commit
```

### Debugging Agent Detection

```bash
# Check current agent type detection
"$scripts" tmux-wrapper agent-type

# Check what CLI command would be used
"$scripts" tmux-wrapper agent-cli

# Check what command prefix would be used
"$scripts" tmux-wrapper skill-prefix

# View session environment
tmux show-environment -t "session-name"

# Check story key normalization (v1.4.0)
"$scripts" orchestrator-helper normalize-key "8.2"
"$scripts" orchestrator-helper normalize-key "8-2-flipside-crypto-provider"
```
@@ -0,0 +1,138 @@
# Agent Fallback Strategy (v3.0.0)

**Multi-Agent Support:** The orchestrator can use Claude or Codex as AI coding agents, with automatic fallback on failure.

## Configuration

From state document (v3.0.0):
```yaml
agentConfig:
  defaultPrimary: "claude"
  defaultFallback: "codex"
  perTask:
    dev:
      primary: "codex"
      fallback: "claude"
  complexityOverrides:
    low:
      dev:
        primary: "claude"
        fallback: false
```

Agent selection is resolved via the deterministic agents file created in preflight:
`_bmad-output/story-automator/agents/agents-{state_filename}.md`
## Agent Differences

| Agent | CLI | Prompt Style | Timeout | Todo Tracking |
|-------|-----|--------------|---------|---------------|
| Claude | `claude --dangerously-skip-permissions` | `bmad-` command syntax | 60min | ☒/☐ checkboxes |
| Codex | `codex exec --full-auto` | Natural language prompt | 90min (1.5x) | Not supported |

**CRITICAL: Claude and Codex use DIFFERENT prompt styles:**
- **Claude:** `bmad-dev-story 6.1` (command syntax)
- **Codex:** Natural language explaining the workflow to execute

The `story-automator tmux-wrapper build-cmd` function automatically generates the correct prompt format based on the `AI_AGENT` environment variable.

**See `workflow-commands.md` for complete Codex prompt templates.**

## Fallback Behavior

**When to fall back:**
- Primary agent session crashes (non-zero exit)
- Retries exhausted with primary agent
- `fallback` is configured for the task and not disabled ("false")

**Fallback procedure:**
1. Log: "Primary agent ({primary}) failed after {retries} attempts. Trying fallback ({fallback})..."
2. Set environment: `AI_AGENT={fallback}`
3. Respawn session with fallback agent
4. Monitor as normal (timeouts auto-adjust based on agent type)
5. If fallback also fails → CRITICAL escalation

**Environment Variable:**
```bash
# Set before spawning session
export AI_AGENT="codex"  # or "claude"

# story-automator tmux-wrapper reads this automatically and generates correct prompt format
session=$("$scripts" tmux-wrapper spawn dev {epic} {story_id} \
  --command "$("$scripts" tmux-wrapper build-cmd dev {story_id})")
```
## Codex Monitoring Notes

- **No todo checkboxes:** Codex doesn't use ☒/☐ - `todos_done` and `todos_total` will be 0
- **Longer waits:** Status check script returns a 90s wait estimate for Codex (vs 60s for Claude)
- **Different activity detection:** Uses output freshness + heartbeat (no marker reliance)
- **Output staleness window:** `CODEX_OUTPUT_STALE_SECONDS` (default: 300)
- **1.5x timeout multiplier:** `story-automator monitor-session` applies a 1.5x multiplier when `--agent codex`
- **Fake todo progress (v2.2):** When Codex is idle after activity, reports `1/1` to indicate "work done, needs verification"
- **Idle vs Completed (v2.2):** Codex sessions report "idle" instead of "completed" when the CLI stops but no terminal markers are present

## ⚠️ Codex Code-Review Limitations (v1.5.0)

**CRITICAL: Codex is NOT recommended for the code-review workflow.**

### Known Issue: Sprint-Status Not Updated

Codex code-review sessions often complete (CLI exits) WITHOUT updating `sprint-status.yaml` to "done". This causes:
- Monitor reports "completed" but sprint-status unchanged
- Orchestrator loops indefinitely, spawning new review cycles
- 8+ cycles with 0 progress (observed in Story 8.2)

### Root Cause

Codex runs non-interactively via `codex exec`. When it finishes:
1. Tmux session goes idle (no active CLI process)
2. Monitor sees "idle" and marks it "completed"
3. But workflow step 5 (update sprint-status) may not have executed
4. There is no way to verify the workflow actually finished

### Recommended Configuration

```yaml
agentConfig:
  defaultPrimary: "codex"
  defaultFallback: "claude"
  perTask:
    review:
      primary: "claude"  # Never use Codex for code-review
      fallback: false
```
### "incomplete" State (v2.2)

The monitoring system now detects when Codex finishes but sprint-status wasn't updated:
- `final_state: "completed"` → Verified: sprint-status shows "done"
- `final_state: "incomplete"` → Session idle but sprint-status NOT "done"

When "incomplete" is detected:
- **Do NOT retry automatically** (prevents infinite loop)
- Escalate to user with options:
  1. Manual fix (update sprint-status yourself)
  2. Run code-review with Claude
  3. Skip this story

### Verification Command (v2.2)

Check if code-review actually completed:
```bash
"$scripts" orchestrator-helper verify-code-review {story_id}
# Returns: {"verified":true/false, "sprint_status":"...", ...}
```

## Backwards Compatibility

- If `agentConfig` is missing, default to Claude-only (no fallback)
- If `aiCommand` is set (legacy), use it directly with the `bmad-` prefix
- New orchestrations should use `agentConfig` instead of `aiCommand`
- Agents file is authoritative when present
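These rules can be sketched in shell, assuming the relevant state fields have been exported as JSON (the `state_json` sample is hypothetical; field names come from the examples in this file):

```shell
# Hypothetical JSON export of the state document's agent fields:
state_json='{"aiCommand":null,"agentConfig":{"defaultPrimary":"codex","defaultFallback":"claude"}}'

# Legacy aiCommand wins when set; otherwise agentConfig, defaulting to Claude-only.
legacy=$(echo "$state_json" | jq -r '.aiCommand // empty')
primary=$(echo "$state_json" | jq -r '.agentConfig.defaultPrimary // "claude"')
fallback=$(echo "$state_json" | jq -r '.agentConfig.defaultFallback // empty')

if [ -n "$legacy" ]; then
  echo "legacy aiCommand: $legacy (bmad- prefix, no fallback)"
else
  echo "primary=$primary fallback=${fallback:-none}"
fi
```

Requires `jq`; a missing `agentConfig` falls through to `claude` with no fallback, matching the first rule above.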
---

## Troubleshooting

See `agent-fallback-troubleshooting.md` for detailed troubleshooting steps.
@@ -0,0 +1,163 @@
# Code Review Loop Pattern (v2.3)

**Purpose:** Code review loop execution using script-based automation with per-task agent configuration.

---

## Configuration

```
reviewCycle = 1
maxCycles = 5
```
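These two values bound a simple retry loop. A skeleton sketch with the session handling stubbed out (the `final_state` placeholder stands in for the real monitor result):

```shell
reviewCycle=1
maxCycles=5

while [ "$reviewCycle" -le "$maxCycles" ]; do
  # 1. spawn review session  2. monitor  3. parse output  4. verify status
  final_state="completed"          # placeholder for the monitor result
  [ "$final_state" = "completed" ] && break
  reviewCycle=$((reviewCycle + 1))
done
echo "exited after cycle $reviewCycle"
```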
---

## Agent Selection (v3.0)

Code-review uses **deterministic agent selection** from the agents file, same as all other workflow steps.

```bash
# Resolve agent for review task (uses agents file)
resolve_agent_for_task "review" "$state_file" "{story_id}"
review_agent="$primary_agent"
review_fallback="$fallback_agent"

echo "Code review using: primary=$review_agent, fallback=$review_fallback"
```

**Per-task override example in state document:**
```yaml
agentConfig:
  defaultPrimary: "codex"
  defaultFallback: "claude"
  perTask:
    review:
      primary: "claude"  # Override: use Claude for reviews
      fallback: false    # Disable fallback for reviews
```

**Note on Codex:** If Codex is configured for reviews and fails to update sprint-status, the `story-automator monitor-session --workflow review` verification catches this and returns `final_state: "incomplete"`, triggering the escalation path below.

---
## Loop Execution

**WHILE reviewCycle ≤ maxCycles:**

### 1. Spawn Review Session

```bash
scripts="{project_root}/_bmad/bmm/4-implementation/bmad-story-automator-go/bin/story-automator"

# ⚠️ CRITICAL: --command is REQUIRED - without it, no command runs → never_active failure!
# Spawn with story-automator tmux-wrapper (handles naming, state cleanup, env vars)
session_name=$("$scripts" tmux-wrapper spawn review {epic} {story_id} \
  --agent "$review_agent" \
  --cycle $reviewCycle \
  --command "$("$scripts" tmux-wrapper build-cmd review {story_id} --agent "$review_agent")")
```

### 2. Monitor Session with Verification (v2.2)

```bash
# Single call replaces 14+ API roundtrips
# Pass --workflow and --story-key for completion verification
result=$("$scripts" monitor-session "$session_name" --json --verbose \
  --agent "$review_agent" \
  --workflow review --story-key {story_id})
final_state=$(echo "$result" | jq -r '.final_state')
output_file=$(echo "$result" | jq -r '.output_file')
```

**Note:** The `--workflow review --story-key` parameters enable sprint-status verification before marking complete.

### 3. Parse Output

```bash
# Sub-agent parsing (haiku, 99% cheaper than main context)
parsed=$("$scripts" orchestrator-helper parse-output "$output_file" review)
```

### 4. Verify Sprint Status

```bash
status=$("$scripts" orchestrator-helper sprint-status get {story_key})
is_done=$(echo "$status" | jq -r '.done')
```
---

## Decision Logic

### Handle final_state (v2.2)

**IF final_state == "completed":**
- Session verified complete (sprint-status shows "done")
- Log "Code review passed, story marked done"
- Cleanup: `"$scripts" tmux-wrapper kill "$session_name"`
- **EXIT LOOP** → proceed to Git Commit

**IF final_state == "incomplete":** (v2.2 - Codex-specific)
- Session idle but sprint-status NOT updated
- Cleanup: `"$scripts" tmux-wrapper kill "$session_name"`
- Count this as a failed attempt and **retry** until `reviewCycle == maxCycles`
- **After maxCycles exhausted:** Escalate with CRITICAL priority (Trigger #8)
- Present options:
  1. **[1] Manual Fix** - Update sprint-status.yaml yourself
  2. **[2] Run with Claude** - Re-run code-review with the Claude agent
  3. **[3] Skip Story** - Mark story as skipped and continue
- **HALT** — wait for user choice

**IF final_state == "crashed" or "stuck":**
- Log "Review session failed: $final_state"
- Cleanup: `"$scripts" tmux-wrapper kill "$session_name"`
- Increment reviewCycle
- **CONTINUE** (retry with new session)

### Handle is_done check

**IF is_done == true:**
- Log "Sprint-status verified done"
- **EXIT LOOP** → proceed to Git Commit

**IF is_done == false AND final_state == "completed":**
- This shouldn't happen with v2.2 verification
- Fallback: check the story file status
- If the story file shows "done", treat as complete

**IF reviewCycle > maxCycles:**
- Check escalation: `"$scripts" orchestrator-helper escalate review-loop "cycles=$reviewCycle"`
- **HALT** — wait for user choice

---

## Sprint-Status Verification (v3.0)

Status is determined by **CRITICAL issues remaining** after auto-fix:
- "done" → 0 CRITICAL issues, proceed to commit
- "in-progress" → 1+ CRITICAL issues, new review cycle

HIGH/MEDIUM/LOW issues are tracked as action items but don't block automation.
|
||||
|
||||
---
|
||||
|
||||
## Output Verification Fallback (v1.4.0)
|
||||
|
||||
If `output_verified == false` or output truncated, use story file fallback:
|
||||
|
||||
```bash
|
||||
file_status=$("$scripts" orchestrator-helper story-file-status {story_id})
|
||||
# If status == "done", skip parsing - story is complete
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Verification Command (v2.2)
|
||||
|
||||
Check if code-review actually completed:
|
||||
|
||||
```bash
|
||||
"$scripts" orchestrator-helper verify-code-review {story_id}
|
||||
# Returns: {"verified":true/false, "sprint_status":"...", ...}
|
||||
```
|
||||
|
|
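The dispatch above can be sketched as one shell function. Variable names (`final_state`, `reviewCycle`, `maxCycles`, `$scripts`) mirror the snippets in this document, but the return-code convention (0 = exit loop, 1 = retry, 2 = escalate) is an assumption for illustration, not the orchestrator's exact code:

```shell
# Sketch of the final_state dispatch described above. Assumes final_state,
# session_name, reviewCycle, maxCycles, and "$scripts" are already set.
handle_final_state() {
  case "$final_state" in
    completed)
      echo "Code review passed, story marked done"
      "$scripts" tmux-wrapper kill "$session_name"
      return 0                      # exit loop -> Git Commit
      ;;
    incomplete)
      # Session idle but sprint-status not updated: count as a failed attempt
      "$scripts" tmux-wrapper kill "$session_name"
      reviewCycle=$((reviewCycle + 1))
      if [ "$reviewCycle" -ge "$maxCycles" ]; then
        return 2                    # escalate with CRITICAL priority (Trigger #8)
      fi
      return 1                      # retry
      ;;
    crashed|stuck)
      echo "Review session failed: $final_state"
      "$scripts" tmux-wrapper kill "$session_name"
      reviewCycle=$((reviewCycle + 1))
      return 1                      # retry with a new session
      ;;
  esac
}
```

The caller would loop on return code 1 and run the `is_done` check after a 0.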
@ -0,0 +1,246 @@
{
  "version": "2.0",
  "thresholds": {
    "low_max": 3,
    "medium_max": 7
  },
  "structural_rules": {
    "ac_count_medium": 6,
    "ac_count_high": 10,
    "ac_count_medium_score": 1,
    "ac_count_high_score": 2,
    "dependency_score": 1,
    "large_story_word_threshold": 400,
    "large_story_score": 1
  },
  "rules": [
    {
      "id": "external_api",
      "label": "External API integration",
      "pattern": "whatsapp|oauth|stripe|payment|third[- ]party|external api|twilio|sendgrid|mailgun|slack api|discord api|shopify|salesforce|hubspot|zapier|plaid|aws sdk|gcp sdk|azure sdk",
      "score": 2
    },
    {
      "id": "webhook_async",
      "label": "Webhook/async processing",
      "pattern": "webhook|async handler|asynchronous|message queue|queue worker|background job|event listener|pub.?sub|kafka|rabbitmq|sqs|nats|event.?driven|callback url",
      "score": 2
    },
    {
      "id": "realtime",
      "label": "Real-time communication",
      "pattern": "websocket|web socket|socket\\.io|sse|server.sent events|real.?time update|live update|push notification|long polling",
      "score": 2
    },
    {
      "id": "db_migration",
      "label": "Database schema changes",
      "pattern": "migration|schema change|new table|alter table|add column|database table|create index|foreign key|database schema|modify schema",
      "score": 1
    },
    {
      "id": "db_complex_query",
      "label": "Complex database operations",
      "pattern": "complex quer|join.*join|subquer|aggregate|group by|window function|recursive.*query|materialized view|stored procedure|database transaction|deadlock|connection pool",
      "score": 2
    },
    {
      "id": "data_transform",
      "label": "Data transformation/ETL",
      "pattern": "data transform|etl|data pipeline|data migration|bulk import|bulk export|csv.*(import|export|parse)|data mapping|data sync|batch process|normalize data|denormalize",
      "score": 2
    },
    {
      "id": "caching",
      "label": "Caching layer",
      "pattern": "cache|redis|memcache|cdn|invalidat|cache.?bust|stale.?while|cache.?strategy|in.?memory store|session store",
      "score": 1
    },
    {
      "id": "search_index",
      "label": "Search/indexing",
      "pattern": "elasticsearch|full.?text search|search index|algolia|typesense|meilisearch|solr|vector search|semantic search|fuzzy search|search engine",
      "score": 2
    },
    {
      "id": "file_storage",
      "label": "File upload/storage",
      "pattern": "file upload|s3|blob storage|image upload|media upload|file processing|pdf generat|csv generat|document generat|file download|cloud storage|presigned url",
      "score": 1
    },
    {
      "id": "auth_system",
      "label": "Authentication system",
      "pattern": "authenticat|login flow|sign.?up flow|session management|jwt|token refresh|password reset|magic link|sso|single sign|two.?factor|2fa|mfa|social login|auth middleware|auth guard",
      "score": 2
    },
    {
      "id": "authorization",
      "label": "Authorization/permissions",
      "pattern": "authori[zs]|rbac|role.?based|permission|access control|acl|policy engine|guard|middleware.*auth|protect.*route|tenant.*isol|multi.?tenant|row.?level security",
      "score": 2
    },
    {
      "id": "encryption",
      "label": "Encryption/security",
      "pattern": "encrypt|decrypt|hash|bcrypt|argon|hmac|digital signature|certificate|ssl|tls|secret.*management|vault|key.*rotation|sanitiz|xss|csrf|sql injection|security header|cors config",
      "score": 1
    },
    {
      "id": "state_management",
      "label": "Complex state management",
      "pattern": "state management|redux|zustand|recoil|jotai|context.*provider|global state|state machine|finite state|xstate|event sourc|cqrs|saga pattern|optimistic update",
      "score": 1
    },
    {
      "id": "backend_frontend",
      "label": "Backend + Frontend combined",
      "pattern": "backend.*frontend|frontend.*backend|full.?stack|api.*and.*ui|server.*and.*client|both.*api.*and|endpoint.*and.*page|controller.*and.*component",
      "score": 2
    },
    {
      "id": "microservice",
      "label": "Service communication",
      "pattern": "microservice|service.to.service|grpc|inter.?service|api gateway|service mesh|service discover|distributed|cross.?service|orchestrat.*service",
      "score": 2
    },
    {
      "id": "infrastructure",
      "label": "Infrastructure changes",
      "pattern": "docker|kubernetes|k8s|terraform|ci.?cd|pipeline|deploy|nginx|caddy|load balanc|auto.?scal|infrastructure|server config|environment variable|env config|systemd|reverse proxy",
      "score": 2
    },
    {
      "id": "error_handling",
      "label": "Complex error handling",
      "pattern": "error handling|error boundar|retry logic|circuit.?break|graceful.?degrad|fallback.*strateg|dead.?letter|error recover|exception handling|rollback|compensat.*transaction|idempoten",
      "score": 1
    },
    {
      "id": "transaction",
      "label": "Transaction management",
      "pattern": "transaction|atomic.*operation|two.?phase|eventual.?consisten|distributed.*lock|optimistic.*lock|pessimistic.*lock|conflict.*resolut|concurren.*control|race condition",
      "score": 2
    },
    {
      "id": "performance",
      "label": "Performance optimization",
      "pattern": "performance|optimiz|pagination|infinite scroll|virtual.*list|lazy load|code split|bundle.*size|lighthouse|core web vital|throttl|debounc|memoiz|profil",
      "score": 1
    },
    {
      "id": "rate_limiting",
      "label": "Rate limiting/throttling",
      "pattern": "rate limit|throttl|quota|usage.*limit|api.*limit|request.*limit|cooldown|backoff|exponential.*back",
      "score": 1
    },
    {
      "id": "batch_processing",
      "label": "Batch/bulk operations",
      "pattern": "batch.*process|bulk.*operat|mass.*update|bulk.*insert|batch.*job|scheduled.*task|cron|periodic.*task|bulk.*delete|queue.*process",
      "score": 1
    },
    {
      "id": "complex_form",
      "label": "Complex forms",
      "pattern": "multi.?step form|form wizard|dynamic form|form validation|conditional field|nested form|form builder|file.*input.*form|complex.*form|form.*state",
      "score": 1
    },
    {
      "id": "visualization",
      "label": "Charts/visualization",
      "pattern": "chart|graph|d3|visualization|dashboard.*widget|data.*viz|sparkline|heatmap|treemap|pie.*chart|bar.*chart|line.*chart|recharts|plotly|canvas.*draw",
      "score": 1
    },
    {
      "id": "drag_drop",
      "label": "Drag and drop",
      "pattern": "drag.?and.?drop|dnd|sortable|reorder|draggable|droppable|kanban.*board|drag.*handle",
      "score": 1
    },
    {
      "id": "accessibility",
      "label": "Accessibility requirements",
      "pattern": "accessib|a11y|screen reader|aria|wcag|keyboard.*navigat|focus.*management|tab.*order|assistive|color.*contrast",
      "score": 1
    },
    {
      "id": "i18n",
      "label": "Internationalization",
      "pattern": "i18n|internationali[zs]|locali[zs]|translat|multi.?language|rtl|right.?to.?left|locale|plural.*form|number.*format|date.*format.*locale",
      "score": 1
    },
    {
      "id": "integration_test",
      "label": "Integration testing required",
      "pattern": "integration test|e2e test|end.to.end|playwright|cypress|selenium|test.*api.*endpoint|test.*database|test.*external|contract.*test|smoke.*test",
      "score": 1
    },
    {
      "id": "test_fixtures",
      "label": "Complex test setup",
      "pattern": "test fixture|mock.*service|stub.*api|seed.*data|test.*factory|test.*database|test.*container|docker.*test|test.*environment|test.*isolation",
      "score": 1
    },
    {
      "id": "email_notification",
      "label": "Email/notification system",
      "pattern": "email.*send|notification.*system|push.*notif|sms.*send|in.?app.*notif|notification.*preference|email.*template|mailer|notification.*queue|alert.*system",
      "score": 1
    },
    {
      "id": "logging_monitoring",
      "label": "Logging/monitoring/observability",
      "pattern": "logging.*system|monitoring|observab|telemetry|tracing|distributed.*trace|log.*aggregat|metrics.*collect|health.*check|alerting|sentry|datadog|newrelic",
      "score": 1
    },
    {
      "id": "config_system",
      "label": "Configuration/feature flags",
      "pattern": "feature.*flag|feature.*toggle|config.*system|dynamic.*config|a.?b.*test|experiment|remote.*config|launch.*darkly|unleash|posthog.*flag",
      "score": 1
    },
    {
      "id": "frontend_only",
      "label": "Frontend only (no backend)",
      "pattern": "frontend only|ui only|css only|layout only|style only|cosmetic|visual.*only|markup.*only|static.*page|presentation.*only",
      "score": -1
    },
    {
      "id": "simple_crud",
      "label": "Simple CRUD operations",
      "pattern": "simple crud|basic crud|create read update delete|simple.*list|basic.*form|standard.*rest|straightforward|simple.*endpoint|basic.*page|simple.*component",
      "score": -1
    },
    {
      "id": "documentation_only",
      "label": "Documentation/config only",
      "pattern": "documentation only|readme|config.*change only|env.*update only|update.*docs|comment.*only|rename only|typo|text.*change only",
      "score": -2
    },
    {
      "id": "refactor_only",
      "label": "Pure refactor (no behavior change)",
      "pattern": "refactor only|code.*cleanup|rename|extract.*method|move.*file|reorgani[zs]e|restructure|no.*behavior.*change|no.*functional.*change",
      "score": -1
    },
    {
      "id": "simple_bugfix",
      "label": "Simple/isolated bug fix",
      "pattern": "simple.*fix|minor.*bug|typo.*fix|off.?by.?one|null.*check|missing.*import|syntax.*error|small.*patch|hotfix|one.?line.*fix",
      "score": -1
    },
    {
      "id": "uncertainty",
      "label": "Uncertain/research-heavy scope",
      "pattern": "research|investigate|spike|prototype|proof of concept|poc|tbd|to be determined|unclear|explore|experiment.*with|evaluate.*option|might.*need|may.*require",
      "score": 1
    },
    {
      "id": "breaking_change",
      "label": "Breaking/migration change",
      "pattern": "breaking.*change|backward.*compat|deprecat|migration.*guide|version.*bump.*major|api.*v\\d|legacy.*support|upgrade.*path",
      "score": 2
    }
  ]
}
@ -0,0 +1,153 @@
# Story Complexity Scoring (v2.0.0)

Estimate each story's complexity to predict dev-story success likelihood and inform agent selection. Scoring combines **regex-based pattern matching** (detecting domain signals in story text) with **structural analysis** (measuring story size and shape).

---

## How Scoring Works

The Go binary (`bin/story-automator parse-story --rules`) performs two passes:

### Pass 1: Pattern Matching (regex rules)

Each rule in `complexity-rules.json` has a regex pattern tested case-insensitively against the concatenation of the story's **title + description + acceptance criteria**. When a rule matches, its score is added (positive = complexity, negative = simplicity).

### Pass 2: Structural Analysis

The parser also examines the story's **structure**, independent of text content:

| Structural Factor | Condition | Score | Reason |
|---|---|---|---|
| Acceptance Criteria count (medium) | AC lines > 6 | +1 | More ACs = more surface area to implement and verify |
| Acceptance Criteria count (high) | AC lines > 10 | +2 | (replaces medium; not additive) Large AC count signals multi-faceted story |
| Explicit dependency | Story references dependency on another story | +1 | Cross-story dependencies add coordination overhead |
| Large story | Word count > 400 | +1 | Verbose stories indicate broader scope |

### Final Score

`final_score = sum(matched_rule_scores) + structural_bonus`

---
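The two passes can be illustrated with a small, self-contained sketch. The three inlined rules are a subset of `complexity-rules.json`, and the structural pass is reduced to the word-count bonus for brevity; the real binary reads the full rule set:

```shell
# Sketch of the two-pass scoring (illustrative subset, not the real binary).
story="Add OAuth login flow with JWT token refresh and a new users table migration"

# Pass 1: case-insensitive regex rules (3 of the rules from complexity-rules.json)
patterns=("oauth|jwt|sso|two.?factor" "migration|new table|alter table" "simple crud|basic crud")
scores=(2 1 -1)
score=0
for i in "${!patterns[@]}"; do
  if printf '%s' "$story" | grep -Eiq "${patterns[$i]}"; then
    score=$((score + scores[i]))
  fi
done

# Pass 2: structural bonus (word count only, for brevity)
words=$(printf '%s' "$story" | wc -w)
if [ "$words" -gt 400 ]; then
  score=$((score + 1))
fi

echo "score=$score"   # auth rule (+2) and migration rule (+1) match here
```

With this story text, the auth and migration rules match and the word count stays under 400, giving a final score of 3 (Low).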
## Rule Categories (40 rules)

### External Integration (+2 each)

| Rule | Detects |
|---|---|
| External API integration | Third-party services (Stripe, Twilio, WhatsApp, AWS SDK, etc.) |
| Webhook/async processing | Webhooks, message queues, pub/sub, background jobs, event-driven patterns |
| Real-time communication | WebSockets, SSE, push notifications, live updates, long polling |

### Database & Data (+1 to +2)

| Rule | Score | Detects |
|---|---|---|
| Database schema changes | +1 | Migrations, new tables, index creation, foreign keys |
| Complex database operations | +2 | Complex queries, joins, subqueries, aggregates, stored procedures, transactions |
| Data transformation/ETL | +2 | Data pipelines, bulk import/export, CSV parsing, data sync, normalization |
| Caching layer | +1 | Redis, memcache, CDN, cache invalidation, session stores |
| Search/indexing | +2 | Elasticsearch, Algolia, full-text search, vector search |
| File upload/storage | +1 | S3, blob storage, file processing, PDF/CSV generation, presigned URLs |

### Security & Auth (+1 to +2)

| Rule | Score | Detects |
|---|---|---|
| Authentication system | +2 | Login flows, JWT, password reset, SSO, 2FA/MFA, social login |
| Authorization/permissions | +2 | RBAC, ACL, row-level security, multi-tenant isolation, route guards |
| Encryption/security | +1 | Encryption, hashing, CSRF/XSS protection, security headers, CORS |

### State & Architecture (+1 to +2)

| Rule | Score | Detects |
|---|---|---|
| Complex state management | +1 | Redux, Zustand, state machines, CQRS, event sourcing, optimistic updates |
| Backend + Frontend combined | +2 | Full-stack changes touching both API and UI layers |
| Service communication | +2 | Microservices, gRPC, API gateway, service mesh, distributed systems |
| Infrastructure changes | +2 | Docker, Kubernetes, CI/CD, reverse proxies, deployment, auto-scaling |

### Error Handling & Resilience (+1 to +2)

| Rule | Score | Detects |
|---|---|---|
| Complex error handling | +1 | Error boundaries, retry logic, circuit breakers, graceful degradation, idempotency |
| Transaction management | +2 | Atomic operations, distributed locks, conflict resolution, race conditions |

### Performance (+1)

| Rule | Score | Detects |
|---|---|---|
| Performance optimization | +1 | Pagination, lazy loading, code splitting, memoization, Core Web Vitals |
| Rate limiting/throttling | +1 | Rate limits, quotas, backoff strategies, cooldowns |
| Batch/bulk operations | +1 | Batch processing, bulk inserts/updates, cron jobs, scheduled tasks |

### UI/UX Complexity (+1)

| Rule | Score | Detects |
|---|---|---|
| Complex forms | +1 | Multi-step forms, wizards, dynamic forms, conditional fields |
| Charts/visualization | +1 | D3, Recharts, dashboards, heatmaps, canvas drawing |
| Drag and drop | +1 | DnD, sortable lists, Kanban boards, reorderable UI |
| Accessibility | +1 | WCAG, ARIA, screen reader support, keyboard navigation |
| Internationalization | +1 | i18n, translations, RTL support, locale-aware formatting |

### Testing Signals (+1)

| Rule | Score | Detects |
|---|---|---|
| Integration testing required | +1 | E2E tests, Playwright, Cypress, contract tests, API endpoint tests |
| Complex test setup | +1 | Test fixtures, service mocks, seed data, test containers |

### Cross-Cutting (+1)

| Rule | Score | Detects |
|---|---|---|
| Email/notification system | +1 | Email sending, push notifications, SMS, in-app notifications |
| Logging/monitoring | +1 | Observability, telemetry, distributed tracing, Sentry, Datadog |
| Configuration/feature flags | +1 | Feature toggles, A/B tests, remote config, LaunchDarkly |

### Simplicity Reducers (-1 to -2)

| Rule | Score | Detects |
|---|---|---|
| Frontend only | -1 | UI-only, CSS-only, layout-only, static pages |
| Simple CRUD | -1 | Basic CRUD, standard REST, straightforward endpoints |
| Documentation/config only | -2 | README updates, config changes, doc-only changes |
| Pure refactor | -1 | Code cleanup, renames, restructuring with no behavior change |
| Simple bug fix | -1 | Typo fixes, null checks, missing imports, one-line patches |

### Risk/Uncertainty Signals (+1 to +2)

| Rule | Score | Detects |
|---|---|---|
| Uncertain scope | +1 | Research spikes, prototypes, POCs, TBD items, exploratory work |
| Breaking change | +2 | Breaking changes, deprecations, major version bumps, migration guides |

---

## Complexity Levels

| Score | Level | Meaning | Agent Recommendation |
|---|---|---|---|
| ≤ 3 | **Low** | High success probability | Claude handles well autonomously |
| 4–7 | **Medium** | Normal execution, moderate risk | Codex primary with Claude fallback |
| ≥ 8 | **High** | Consider longer timeouts; may need intervention | Codex primary with Claude fallback, monitor closely |

---

## Why This Matters

**Session 3 learning:** Backend WhatsApp stories (6.5-6.8) consistently failed dev-story, while frontend i18n stories (7.1-7.2) succeeded. The original 8-rule system couldn't distinguish these patterns.

**v2.0 improvements:**
- 40 rules across 10 categories (was 8 rules, 1 category)
- Structural analysis adds AC count, dependency, and story-size signals
- 5 simplicity reducers (was 2) prevent over-scoring simple work
- Expanded regex patterns catch contextual signals, not just exact keywords
- Recalibrated thresholds account for the higher score range

**Without accurate complexity scoring:**
- Agent configuration cannot be informed by actual story difficulty
- Simple stories get over-provisioned (waste) or complex stories get under-provisioned (failure)
- The orchestration may fail or produce suboptimal results
@ -0,0 +1,174 @@
# Crash Recovery Pattern

**Purpose:** Handle sessions that crash or disappear unexpectedly.

---

## Detection

The status script returns `session_state` in CSV column 6:

- `crashed` - Session exited with a non-zero exit code (column 5 = exit code, column 4 = output file)
- `not_found` - Session disappeared (killed, or crashed without a trace)

---

## Recovery Logic

| Condition | Action |
|-----------|--------|
| `crashed` with output file | Read output, check partial progress, retry |
| `not_found` (no output) | Session died silently, retry immediately |
| Retry 1 failed | Retry with `-r2` suffix in session name |
| Retry 2 failed | Escalate to user with diagnostics |

---

## Retry Pattern

```bash
# On crash/not_found, spawn retry with unique suffix
project_slug=$(basename "$PWD" | tr '[:upper:]' '[:lower:]' | tr -cd '[:alnum:]' | cut -c1-8)
timestamp=$(date +%y%m%d-%H%M%S)
session_name="sa-${project_slug}-${timestamp}-e{epic}-s{story_suffix}-{step}-r2"

# Clear stale state (project-scoped v2.0).
# Portable MD5: md5sum on Linux, md5 -q on macOS.
if command -v md5sum >/dev/null 2>&1; then
  PROJECT_HASH=$(printf '%s' "$PWD" | md5sum | cut -c1-8)
else
  PROJECT_HASH=$(printf '%s' "$PWD" | md5 -q | cut -c1-8)
fi
rm -f "/tmp/.sa-${PROJECT_HASH}-session-${session_name}-state.json"
# ... spawn and monitor as normal
```

---

## Agent Fallback (v3.0.0)

**Before escalating**, check whether a fallback agent is configured:

```bash
# Resolve agents for this story/task from the agents file
selection=$("$scripts" orchestrator-helper agents-resolve \
  --state-file "$state_file" --story "{story_id}" --task "{task}")
primary=$(echo "$selection" | jq -r '.primary')
fallback=$(echo "$selection" | jq -r '.fallback')

if [ "$fallback" != "false" ] && [ -n "$fallback" ]; then
  if [ "$current_agent" = "$primary" ]; then
    export AI_AGENT="$fallback"
    retry_count=0
    session=$("$scripts" tmux-wrapper spawn dev {epic} {story_id} \
      --command "$("$scripts" tmux-wrapper build-cmd dev {story_id})")
    # Continue monitoring...
  fi
fi
```

**Fallback flow:**
1. Primary agent crashes after 2 retries
2. IF `fallback != "false"` AND the fallback hasn't been tried yet
3. Switch `AI_AGENT` to the fallback agent
4. Reset the retry counter to 0
5. Retry with the fallback agent (it gets 2 more attempts)
6. IF the fallback also fails after 2 retries → CRITICAL escalation

**Log message:**
"Primary agent (claude) failed after 2 attempts. Switching to fallback agent (codex)..."
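A minimal sketch of this flow, assuming a hypothetical `spawn_and_monitor` helper that stands in for the spawn/monitor steps shown earlier and returns 0 on success:

```shell
# Sketch of the agent-fallback flow. spawn_and_monitor is a hypothetical
# stand-in for the real spawn/monitor steps; not the orchestrator's exact code.
run_with_fallback() {
  local primary="$1" fallback="$2" max_retries=2 agent attempt
  for agent in "$primary" "$fallback"; do
    if [ "$agent" = "false" ] || [ -z "$agent" ]; then
      break                            # no fallback configured
    fi
    export AI_AGENT="$agent"           # switch the active agent
    for attempt in $(seq 1 "$max_retries"); do
      if spawn_and_monitor; then
        echo "success with $agent (attempt $attempt)"
        return 0
      fi
    done
    echo "$agent failed after $max_retries attempts" >&2
  done
  return 1                             # both agents exhausted -> CRITICAL escalation
}
```

The retry counter resets implicitly when the inner loop restarts for the fallback agent, matching step 4 above.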
---

## Escalation (after exhausting all retries)

Display:
```
**Session crashed for Story {N}**

Primary agent: {primary} - Failed after 2 attempts
Fallback agent: {fallback} - Failed after 2 attempts

Exit code: {exit_code}
Partial progress: {tasks_completed}/{tasks_total}

[R]etry with primary
[F]allback retry
[S]kip story (mark deferred)
[A]bort orchestration
```

Show any partial output captured for diagnostics.

---

## Integration with Adaptive Retry

Crash recovery is SEPARATE from adaptive retry:
- **Adaptive retry** = the session completed but FAILED (wrong output, tests failed)
- **Crash recovery** = the session DIED unexpectedly (context limit, API error, kill)

Both can occur: a session might crash on attempt 1, then fail normally on attempt 2.
Track both counters independently.

---

## Orchestrator Monitoring Task Crash (v1.9.0)

### The Problem

When the orchestrator uses background tasks (e.g., Bash with `run_in_background`) to monitor tmux sessions, the monitoring task itself can crash. This is **different** from the tmux session crashing.

**Observed failure mode:**
1. Orchestrator spawns a background task to run the create+dev+monitor loop
2. The background task crashes after dev-story completes
3. TaskOutput shows "running" but the task is dead
4. The tmux session actually completed successfully
5. Orchestrator waits forever on the dead monitoring task
6. Code-review never runs because monitoring never returned

### Detection

Signs that your monitoring task has crashed (not the tmux session):

| Signal | Meaning |
|--------|---------|
| `TaskOutput` returns empty 2+ times | Task may be dead |
| Output file path doesn't exist | Task never wrote results |
| "running" status but no progress | Task is stuck or dead |
| Background task ID invalid | Task crashed |

### Recovery Sequence

**See `monitoring-fallback.md` for detailed fallback patterns.**

Quick reference:
1. Stop waiting on the dead monitoring task
2. Find tmux sessions: `tmux list-sessions | grep "sa-.*e{epic}-s{story}"`
3. Check session status directly: `story-automator tmux-status-check`
4. Verify the source of truth: story file, sprint-status.yaml
5. Resume based on the verified state

### Prevention

**NEVER chain multiple workflow steps in a single background task:**

```bash
# ❌ WRONG - If this task crashes, all subsequent steps are lost
for step in create dev review; do
  session=$(...spawn...)
  result=$(...monitor...)
done

# ✅ CORRECT - Each step is monitored separately
# Step 1
session=$(...spawn create...)
result=$(...monitor...)
# Verify state

# Step 2 (only after Step 1 verified)
session=$(...spawn dev...)
result=$(...monitor...)
# Verify state
```

### Key Principle

**The tmux session is the source of truth for session state.**
**The story file and sprint-status.yaml are the source of truth for workflow state.**

Monitoring is just observation - if monitoring fails, verify from the source of truth and continue.
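The recovery sequence can be sketched as a small verification helper. The commands mirror ones shown in this document (`tmux list-sessions`, the `sa-` session naming, `orchestrator-helper story-file-status`), but treat the exact flags and JSON fields as assumptions:

```shell
# Sketch: when the monitoring task dies, verify state from the sources of
# truth instead of the dead monitor. Assumes "$scripts" is already set.
verify_story_state() {
  local epic="$1" story="$2" status
  # 1. Is the tmux session still alive? (session state source of truth)
  if tmux list-sessions 2>/dev/null | grep -q "sa-.*e${epic}-s${story}"; then
    echo "session-running"
    return 0
  fi
  # 2. Session gone: ask the workflow sources of truth.
  status=$("$scripts" orchestrator-helper story-file-status "$story" | jq -r '.status')
  if [ "$status" = "done" ]; then
    echo "completed"        # resume at the next workflow step
  else
    echo "needs-retry"      # re-enter crash recovery for this step
  fi
}
```

The orchestrator would call this instead of polling the dead monitoring task, then resume based on the returned state.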
@ -0,0 +1,100 @@
# Data File Index (v1.9.0)

**Purpose:** Explicit guidance on when to load each data file during execution.

---

## Loading Rules

1. **LOAD ONCE** = Read at step initialization, keep in context
2. **LOAD ON TRIGGER** = Read only when a specific condition occurs
3. **NEVER LOAD** = Reference/debug files, not for execution

---

## Step 03: Execute - File Loading Guide

### LOAD ONCE (at step start)

| File | Why |
|------|-----|
| `orchestrator-rules.md` | Core rules for orchestrator behavior |
| `execution-patterns.md` | FORBIDDEN patterns - must know before any execution |
| `scripts-reference.md` | Script usage patterns |

### LOAD ON TRIGGER

| File | When to Load |
|------|--------------|
| `retry-fallback-strategy.md` | When a step FAILS and you need retry logic |
| `monitoring-fallback.md` | When monitoring FAILS (TaskOutput empty/error 2+ times) |
| `crash-recovery.md` | When a session CRASHES (not just fails) |
| `code-review-loop.md` | When entering the code review phase (Step D) |
| `escalation-triggers.md` | When considering escalation to the user |
| `escalation-messages-core.md` | When displaying an escalation message (triggers 1-4) |
| `escalation-messages-extended.md` | When displaying an escalation message (triggers 5-8) |
| `agent-fallback.md` | When switching from the primary to the fallback agent |
| `agent-fallback-troubleshooting.md` | When the fallback agent also fails |
| `adaptive-retry.md` | When the same task fails 3+ times (plateau detection) |
| `subagent-prompts.md` | When parsing session output with a sub-agent |
| `monitoring-codex.md` | When using the Codex agent (not Claude) |

### NEVER LOAD DURING EXECUTION

| File | Purpose |
|------|---------|
| `tmux-commands.md` | Reference doc - use scripts instead |
| `tmux-long-command-*.md` | Debug/testing docs |
| `complexity-scoring.md` | Used during preflight, not execution |
| `preflight-prompts.md` | Used in step-02, not step-03 |
| `stop-hook-*.md` | Setup docs, not execution |
| `marker-file-format.md` | Internal format reference |
| `success-patterns.md` | Output pattern reference |
| `workflow-commands.md` | Reference doc |
| `wrapup-templates.md` | Used in step-04, not step-03 |
| `retrospective-*.md` | Used in the step-03b retrospective section only |

---

## Quick Decision Tree

```
Starting execution?
  → Load: orchestrator-rules.md, execution-patterns.md, scripts-reference.md

Step failed?
  → Load: retry-fallback-strategy.md
  → If 3+ same failures: Load adaptive-retry.md

Monitoring not responding?
  → Load: monitoring-fallback.md

Session crashed?
  → Load: crash-recovery.md

Entering code review?
  → Load: code-review-loop.md

Need to escalate?
  → Load: escalation-triggers.md, then escalation-messages-*.md

Using Codex?
  → Load: monitoring-codex.md
```

---

## Anti-Pattern: Loading Everything

**WRONG:**
```
Load ALL data files at start of step-03
```

**WHY WRONG:** Loading everything bloats the context, increases confusion, and wastes tokens.

**CORRECT:**
```
Load 3 core files at start
Load additional files ONLY when their trigger condition occurs
```
@ -0,0 +1,100 @@
# Escalation Message Templates

Use these templates when an escalation trigger fires.

## 1. Code Review Loop Exceeded

**Pre-Escalation Verification:**
```bash
file_status=$("$scripts" orchestrator-helper story-file-status {story_id})
file_done=$(echo "$file_status" | jq -r '.status')
if [ "$file_done" = "done" ]; then
  echo "✅ Story file shows done - sprint-status out of sync"
fi

test_result=$(cd "$PROJECT_ROOT" && go test ./src/... 2>&1 || npm test 2>&1 || true)
tests_pass=$([[ "$test_result" != *"FAIL"* ]] && echo "true" || echo "false")
```

**Diagnostic Summary (required):**

| Cycle | Agent | Issues Found | Fixed | Duration |
|-------|-------|--------------|-------|----------|
{cycle_history_table}

**Escalation message:**
```
🔔 DECISION NEEDED: Code Review Loop (5 cycles exhausted)

Story: {story_name}
Story ID: {story_id}
```

---

## 2. Cannot Parse Session Output

**Escalation message:**
```
🔔 DECISION NEEDED: Ambiguous Session Output

Story: {story_name}
Step: {step_name}
Session: {session_id}

Unable to determine if step succeeded or failed.

Last 20 lines of output:
{output_snippet}

Options:
[1] Mark as success and proceed
[2] Mark as failure and retry
[3] View full session output
[4] Pause for manual inspection

Select option:
```

---

## 3. Session Spawn Failure

**Escalation message:**
```
🔔 DECISION NEEDED: Session Spawn Failed

Story: {story_name}
Step: {step_name}
Error: {error_message}

Unable to spawn tmux session after retry.

Options:
[1] Retry again
[2] Skip this step
[3] Abort story
[4] Pause orchestration

Select option:
```

---

## 4. Git Commit Failure

**Escalation message:**
```
🔔 DECISION NEEDED: Git Commit Failed

Story: {story_name}
Error: {error_message}

Unable to commit changes for this story.

Options:
[1] Retry commit
[2] Skip commit and proceed (changes remain uncommitted)
[3] Pause for manual git resolution
[4] Abort story

Select option:
```

---
@@ -0,0 +1,76 @@
# Escalation Message Templates (Extended)

## 5. Unexpected Error

**Escalation message:**

```
🔔 DECISION NEEDED: Unexpected Error

Story: {story_name}
Step: {step_name}
Error: {error_message}

An unexpected error occurred during orchestration.

Options:
[1] Retry current step
[2] Skip current step
[3] Abort story and continue with next
[4] Pause orchestration for investigation

Select option:
```

---

## 6. Dependency Conflict

**Escalation message:**

```
🔔 DECISION NEEDED: Potential Dependency Conflict

Stories in parallel: {story_list}
Detected conflict: {conflict_description}

These stories may have conflicting changes.

Options:
[1] Continue in parallel (accept risk)
[2] Run sequentially instead
[3] Pause for manual review

Select option:
```

---

## 7. Dev-Story Implementation Failure

**Pre-escalation behavior:**

1. Check blocking status (conservative if uncertain)
2. If BLOCKING: retry up to 3 times
3. If NOT BLOCKING: retry once
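
The blocking-dependent retry budget above can be sketched as a small helper. This is illustrative only: `max_attempts_for` is a hypothetical name, not a story-automator subcommand, and uncertain blocking status is treated conservatively as blocking, per step 1.

```bash
# Hedged sketch of the dev-story pre-escalation retry budget.
# Blocking (or unknown) stories get 3 attempts; non-blocking get 1.
max_attempts_for() {
  blocking="$1"   # "yes", "no", or "unknown"
  case "$blocking" in
    no) echo "1" ;;   # not blocking: retry once, then PREFERENCE escalation
    *)  echo "3" ;;   # blocking or unknown: retry up to 3, then CRITICAL
  esac
}
```

The orchestrator would compare the current attempt count against this budget before deciding whether to retry or escalate.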

**Escalation message:**

```
🔔 DECISION NEEDED: Dev-Story Implementation Failure

Story: {story_name}
Step: dev-story
Attempts: {attempt_count}
Blocking: {yes/no} (affects stories: {list or "none"})

Latest error:
{error_summary}

Options:
[1] Retry dev-story - Spawn new session to fix
[2] Manual fix - Pause orchestration so you can fix it
[3] View session output - See full output
[4] Skip story - Move to next (only if not blocking)
[5] Abort orchestration - Stop entire build cycle

Select option:
```

**Note:** Option [4] is only valid if the story is NOT blocking.

@ -0,0 +1,5 @@
|
|||
# Escalation Message Templates
|
||||
|
||||
See:
|
||||
- `escalation-messages-core.md` (Triggers 1-4)
|
||||
- `escalation-messages-extended.md` (Triggers 5-7)
|
||||
|
|
@@ -0,0 +1,114 @@
# Escalation Triggers

**Purpose:** Conditions that require human decision and cannot be resolved autonomously.

## Escalation Categories

### CRITICAL Escalations

**Definition:** Automation CANNOT proceed - requires human decision.

**Behavior:**
1. Delete marker file: `rm "{marker_file}"`
2. Update state: set status to PAUSED in state document
3. Present options (stop hook won't interfere)
4. Wait for user input
5. On resume: recreate marker, set IN_PROGRESS, continue

**Triggers in this category:**
- Code Review Loop Exceeded (#1)
- Session Spawn Failure (#3)
- Git Commit Failure (#4)
- Unexpected Error (#5)
- Dev-Story Implementation Failure (#7) when blocking + retries exhausted
- Session Incomplete (#8) - session finished but workflow not verified complete (v2.2)

### PREFERENCE Escalations

**Definition:** Automation COULD proceed either way - user chooses direction.

**Behavior:**
1. Keep marker file (automation still "active")
2. Present options
3. Act on selection immediately

**Triggers in this category:**
- Cannot Parse Session Output (#2)
- Dependency Conflict (#6)
- Dev-Story Implementation Failure (#7) when NOT blocking

---

## Escalation Protocol

When an escalation trigger is hit:

1. Categorize: CRITICAL or PREFERENCE
2. If CRITICAL: delete marker, set status to PAUSED
3. Notify: sound/notification
4. Present: situation + numbered options
5. Wait: halt until user responds
6. Log: record decision in action log
7. Resume: if CRITICAL, recreate marker, set IN_PROGRESS, continue

---

## Trigger Index

Each trigger includes its escalation message template in:
- `data/escalation-messages-core.md` (Triggers 1-4)
- `data/escalation-messages-extended.md` (Triggers 5-7)

### 1. Code Review Loop Exceeded (CRITICAL)
**Trigger:** Code review has run 5 cycles without clean status.
**See:** `escalation-messages-core.md#1-code-review-loop-exceeded`

### 2. Cannot Parse Session Output (PREFERENCE)
**Trigger:** Output doesn't match success/failure patterns.
**See:** `escalation-messages-core.md#2-cannot-parse-session-output`

### 3. Session Spawn Failure (CRITICAL)
**Trigger:** tmux session failed to spawn after retries.
**See:** `escalation-messages-core.md#3-session-spawn-failure`

### 4. Git Commit Failure (CRITICAL)
**Trigger:** Git commit failed (conflict, hook error, etc.).
**See:** `escalation-messages-core.md#4-git-commit-failure`

### 5. Unexpected Error (CRITICAL)
**Trigger:** Unhandled exception or unexpected condition.
**See:** `escalation-messages-extended.md#5-unexpected-error`

### 6. Dependency Conflict (PREFERENCE)
**Trigger:** Parallel execution detects a potential conflict.
**See:** `escalation-messages-extended.md#6-dependency-conflict`

### 7. Dev-Story Implementation Failure (CRITICAL or PREFERENCE)
**Trigger:** dev-story completes with errors after retries.
**See:** `escalation-messages-extended.md#7-dev-story-implementation-failure`

### 8. Session Incomplete (CRITICAL) [v2.2]
**Trigger:** `story-automator monitor-session` returns `final_state: "incomplete"` **after maxCycles exhausted**.
**Condition:** Session finished (idle/exited) but workflow verification failed across all retry attempts.
**Typical cause:** Codex code-review session ended without updating sprint-status.

**Why CRITICAL (not PREFERENCE):**
- Automated retries already exhausted
- Human must decide: manual fix, use Claude, or skip story

**Options:**
1. **[1] Manual Fix** - Update sprint-status.yaml yourself
2. **[2] Run with Claude** - Re-run code-review with Claude agent
3. **[3] Skip Story** - Mark story as skipped and continue
4. **[X] Pause** - Stop orchestration for investigation

**Verification command:**
```bash
"$scripts" orchestrator-helper verify-code-review {story_id}
```

---

## Non-Escalation Conditions

Handled automatically (no escalation):
- Optional step (automate) skipped by override → log and continue
- Session completes with clear success → continue
- Session completes with clear failure → retry once, then escalate if still fails

@@ -0,0 +1,59 @@
# Execution Patterns (v1.9.0)

**Purpose:** Critical execution patterns and anti-patterns for the orchestrator.

---

## 🚨 FORBIDDEN EXECUTION PATTERNS (NO EXCEPTIONS)

### NEVER Chain Multiple Workflow Steps

**FORBIDDEN:**
```bash
# ❌ WRONG - Chaining steps in a loop bypasses per-step error handling
for step in create dev; do
  session=$("$scripts" tmux-wrapper spawn "$step" ...)
  result=$("$scripts" monitor-session "$session" ...)
done
```

**WHY:** If the monitoring task crashes mid-loop, ALL subsequent steps are lost. The orchestrator loses visibility even though tmux sessions may have completed successfully.

**REQUIRED:**
```bash
# ✅ CORRECT - Each step is a separate operation with its own error handling
# Step A: Create
session=$("$scripts" tmux-wrapper spawn create ...)
result=$("$scripts" monitor-session "$session" ...)
"$scripts" tmux-wrapper kill "$session"
# VERIFY state before proceeding

# Step B: Dev (only after create verified)
session=$("$scripts" tmux-wrapper spawn dev ...)
result=$("$scripts" monitor-session "$session" ...)
"$scripts" tmux-wrapper kill "$session"
# VERIFY state before proceeding
```

---

## ALWAYS Verify State After Each Step

After each workflow step completes (create/dev/auto/review), **VERIFY state from source of truth** before proceeding to the next step:

1. **Story file exists and has expected content** (for create-story)
2. **Sprint-status.yaml shows correct status** (for dev-story, code-review)
3. **DO NOT rely solely on monitoring output** - if monitoring fails, verify directly

---

## IF Monitoring Fails

If `story-automator monitor-session` or background task monitoring fails:

1. Check if tmux session still exists: `tmux list-sessions | grep {pattern}`
2. Check session status directly: `"$scripts" tmux-status-check "$session"`
3. Verify story file / sprint-status regardless of monitoring output
4. Only escalate after direct verification confirms failure

**See also:** `monitoring-fallback.md` for detailed fallback patterns.

@@ -0,0 +1,63 @@
# Marker File Format

**Location:** `.claude/.story-automator-active` (relative to project root)

**Purpose:** Enables the Stop hook to prevent premature stopping during orchestration.

---

## JSON Structure

```json
{
  "epic": "{epic_id}",
  "currentStory": "{first_story_id}",
  "storiesRemaining": {story_count},
  "stateFile": "{path_to_state_document}",
  "startedAt": "{timestamp}",
  "heartbeat": "{timestamp}",
  "pid": {process_id},
  "projectSlug": "{project_slug}"
}
```
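
Consumers such as the stop hook can read individual fields back with `jq`. The sketch below uses illustrative sample values, not real marker data, and assumes `jq` is available (it is already used elsewhere in these docs).

```bash
# Hedged sketch: extracting marker fields with jq (sample values only).
marker='{"epic":"5","currentStory":"5.3","storiesRemaining":2,"pid":12345}'

epic=$(echo "$marker" | jq -r '.epic')
story=$(echo "$marker" | jq -r '.currentStory')
remaining=$(echo "$marker" | jq -r '.storiesRemaining')

echo "$epic $story $remaining"   # → 5 5.3 2
```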

---

## Field Descriptions

| Field | Description |
|-------|-------------|
| `epic` | Epic identifier (e.g., "5") |
| `currentStory` | Current story being processed (e.g., "5.3") |
| `storiesRemaining` | Count of stories left in queue |
| `stateFile` | Path to orchestration state document |
| `startedAt` | Orchestration start timestamp (ISO 8601) |
| `heartbeat` | Last activity timestamp, updated periodically |
| `pid` | Process ID of orchestrator (crash detection) |
| `projectSlug` | (v2.0) Project identifier for session naming |

---

## Heartbeat Updates

The orchestrator should update the heartbeat timestamp periodically during long-running operations. This prevents the marker from going stale when the orchestrator is still running but spending a long time on a complex story.

**Staleness threshold:** 30 minutes (see the story-automator stop-hook)
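
The 30-minute rule can be sketched on epoch seconds, as below. Converting the marker's ISO 8601 heartbeat to an epoch is platform-specific (GNU `date -d` vs BSD `date -j`), so that conversion step is deliberately omitted; `marker_is_stale` is an illustrative name, not part of the stop hook.

```bash
# Hedged sketch of the staleness check, operating on epoch seconds.
STALE_AFTER=$((30 * 60))   # 30 minutes, per the stop-hook threshold

marker_is_stale() {
  # $1 = heartbeat epoch, $2 = current epoch
  if [ $(( $2 - $1 )) -gt "$STALE_AFTER" ]; then
    echo "stale"    # stop hook may allow stopping
  else
    echo "fresh"    # orchestration considered active
  fi
}
```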

---

## Creation Command

```bash
project_slug=$("{deriveProjectSlug}" derive-project-slug --project-root "{project-root}" | jq -r '.slug')
"{stateHelper}" orchestrator-helper marker create --epic "$epic_id" --story "$first_story_id" \
  --remaining "$selected_count" --state-file "$state_path" \
  --project-slug "$project_slug" --pid "$$" --heartbeat "{timestamp}"
```

---

## Related Documentation

- **Stop Hook:** See `stop-hook-config.md` for hook behavior
- **Troubleshooting:** See `stop-hook-troubleshooting.md` for known issues

@@ -0,0 +1,66 @@
# Codex-Specific Monitoring (v2.4.0)

**Purpose:** Special handling for Codex CLI sessions in `story-automator monitor-session`.

---

## Agent Detection

Codex sessions are detected by:
1. `AI_AGENT` environment variable (most reliable)
2. Explicit Codex CLI identifiers: `OpenAI Codex`, `codex exec`, `codex-cli`, `gpt-*-codex`, `tokens used`

---

## Session States for Codex

| State | Meaning | Detection |
|-------|---------|-----------|
| `in_progress` | Codex actively working | Heartbeat alive OR output changed recently |
| `idle` | Session alive but no prompt yet | Heartbeat idle + output stale (pre-stuck window) |
| `completed` | CLI has exited | Prompt returned, pane exited, or `tokens used` |
| `stuck` | No recent output for too long | Output stale beyond threshold |

**Key Difference:** For Codex, "idle" is NOT the same as "completed". The CLI may have stopped, but the workflow might not have finished.

---

## Output Freshness vs Completed Detection

```
output_fresh(): Output hash changed within CODEX_OUTPUT_STALE_SECONDS
codex_completed(): Prompt returned, pane exited, or "tokens used"
```

**Priority:** `completed` > `active` > `idle` > `stuck`

### Output Staleness Window

`CODEX_OUTPUT_STALE_SECONDS` (default: 300) defines how long Codex can be silent
before the session is considered `stuck`. Any output change refreshes the timer.
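
The priority rule and staleness window combine into a single state resolution, which can be sketched as below. The inputs are pre-computed flags rather than real pane checks, and `resolve_codex_state` is an illustrative name, not a story-automator subcommand.

```bash
# Hedged sketch of the priority rule: completed > active (fresh) > idle > stuck.
resolve_codex_state() {
  completed="$1"    # "yes" if prompt returned / pane exited / "tokens used" seen
  fresh="$2"        # "yes" if output changed within CODEX_OUTPUT_STALE_SECONDS
  idle_grace="$3"   # "yes" while still inside the pre-stuck window
  if   [ "$completed" = "yes" ];  then echo "completed"
  elif [ "$fresh" = "yes" ];      then echo "in_progress"
  elif [ "$idle_grace" = "yes" ]; then echo "idle"
  else                                 echo "stuck"
  fi
}
```

Because `completed` is checked first, a finished CLI is never misreported as `stuck` just because its output went quiet.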

---

## Code-Review Workflow Verification

For code-review with Codex, `story-automator monitor-session` verifies completion:

```bash
# Must pass --workflow and --story-key for verification
result=$("$scripts" monitor-session "$session" --json \
  --workflow review --story-key {story_id})
```

**Verification checks:**
1. Sprint-status.yaml shows "done" for the story
2. OR the story file's Status field shows "done"
3. If neither → `final_state: "incomplete"`

---

## Fake Todo Progress

Codex doesn't use TodoWrite, so `story-automator tmux-status-check` fakes progress:
- Start: `todos_total=1, todos_done=0`
- While running: keep `0/1`
- On idle after activity: set `1/1` (signals "done, needs verification")

@@ -0,0 +1,85 @@
# Monitoring Failure Fallback (v1.9.0)

**Purpose:** Recovery patterns for when primary monitoring fails.

---

## When Primary Monitoring Fails

Primary monitoring can fail in several ways:
- Background task crashes (TaskOutput returns empty/error)
- Network timeout during monitoring
- Process killed unexpectedly
- Output file missing or corrupted

**Key insight:** The tmux session may have completed successfully even if monitoring died.

---

## Fallback Sequence

When `story-automator monitor-session` fails or the background monitoring task dies:

```bash
# STEP 1: Check if tmux session still exists
sessions=$(tmux list-sessions 2>/dev/null | grep "sa-.*{story_pattern}" || true)

# STEP 2: If session exists, check its status directly
if [ -n "$sessions" ]; then
  for session in $sessions; do
    status=$("$scripts" tmux-status-check "$session")
    session_state=$(echo "$status" | cut -d',' -f6)
    # Act based on direct status
  done
fi

# STEP 3: ALWAYS verify source of truth regardless of session status
# Story file check:
story_file=$(ls _bmad-output/implementation-artifacts/{story_prefix}-*.md 2>/dev/null | head -1)
if [ -f "$story_file" ]; then
  : # Story file exists - check its Status field
fi

# Sprint-status check:
status=$("$scripts" orchestrator-helper sprint-status get "{story_key}")
is_done=$(echo "$status" | jq -r '.done')
```

---

## Detection: Monitoring Task Crashed

Signs that your monitoring task has crashed:

| Signal | Meaning |
|--------|---------|
| `TaskOutput` returns empty 2+ times | Task may be dead |
| Output file path doesn't exist | Task never wrote results |
| "running" status but no progress | Task is stuck or dead |

**Recovery:**
1. Do NOT wait indefinitely for a dead monitoring task
2. After 2+ empty TaskOutput results, switch to direct verification
3. Use tmux session checks + source of truth verification
4. Resume the workflow based on verified state, not monitoring state

---

## Integration with Retry Logic

**If fallback verification shows the step succeeded:**
- Proceed to the next step (monitoring failed but the workflow succeeded)
- Log: "Monitoring failed but direct verification confirmed success"

**If fallback verification shows the step failed/incomplete:**
- Apply the normal retry/fallback strategy
- Do NOT treat monitoring failure as step failure

---

## Key Principle

**The tmux session is the source of truth for session state.**
**The story file and sprint-status.yaml are the source of truth for workflow state.**

Monitoring is just observation - if monitoring fails, verify from the source of truth and continue.

@@ -0,0 +1,27 @@
# Monitoring Pattern: Parsing & Review Handling

## Sub-Agent Pattern

**ALWAYS use a sub-agent for output parsing:**

```bash
# Correct: Let haiku parse
parsed=$("$scripts" orchestrator-helper parse-output "$output_file" dev)
action=$(echo "$parsed" | jq -r '.next_action')

# WRONG: Parse yourself
# content=$(cat "$output_file") # DON'T DO THIS
# if grep -q "SUCCESS" ... # DON'T DO THIS
```

**Why:** A sub-agent costs ~200 tokens; the main context is ~50k+. Parsing in the main context is vastly more expensive.

---

## Code Review Special Handling

See `code-review-loop.md` for review cycle logic. Key points:

- Auto-fix via instruction: `code-review ${story_id} auto-fix all issues without prompting`
- No menu detection needed - the instruction handles it
- After completion, verify sprint-status before proceeding

@@ -0,0 +1,190 @@
# Session Monitoring Pattern

## Quick Reference

**All monitoring is handled by the story-automator binary. DO NOT manually construct tmux commands.**

### Binary Location

```
bin/
└── story-automator   # single Go binary (use subcommands below)
```

---

## 🚨 FORBIDDEN PATTERNS (NO EXCEPTIONS)

| Pattern | Why Forbidden |
|---------|---------------|
| `tmux capture-pane` directly | Context bloat, use the status script |
| `while true` loops in LLM context | Session crash, use story-automator monitor-session |
| Manual session name construction | Error-prone, use story-automator tmux-wrapper |
| Parsing raw output yourself | Use story-automator orchestrator-helper parse-output |

---

## Standard Workflow: Spawn + Monitor + Parse

```bash
# STEP 1: Spawn session (use story-automator tmux-wrapper)
session_name=$("$scripts" tmux-wrapper spawn create 5 5.3 \
  --command "$("$scripts" tmux-wrapper build-cmd create 5.3)")

# STEP 2: Monitor until completion (SINGLE API CALL)
result=$("$scripts" monitor-session "$session_name" --verbose --json)

# STEP 3: Parse output with sub-agent
output_file=$(echo "$result" | jq -r '.output_file')
parsed=$("$scripts" orchestrator-helper parse-output "$output_file" create)

# STEP 4: Act on parsed result
next_action=$(echo "$parsed" | jq -r '.next_action')

# STEP 5: ALWAYS clean up the session (v1.2.0)
"$scripts" tmux-wrapper kill "$session_name"
```

**Context savings:** This entire cycle is 5 bash calls instead of 15+ API roundtrips.

**Session Cleanup (v1.2.0):** ALWAYS kill the session after processing, regardless of success or failure. Orphaned sessions consume resources and cause confusion.

---

## Script Quick Reference

### story-automator tmux-wrapper

```bash
# Spawn session
story-automator tmux-wrapper spawn <step> <epic> <story_id> [--command "..."] [--cycle N]

# Generate session name only
story-automator tmux-wrapper name <step> <epic> <story_id> [--cycle N]

# Build workflow command
story-automator tmux-wrapper build-cmd <step> <story_id> [extra_instruction]

# List/kill sessions
story-automator tmux-wrapper list [--project-only]
story-automator tmux-wrapper kill <session_name>
story-automator tmux-wrapper kill-all [--project-only]
```

### story-automator monitor-session

```bash
# Monitor until completion (returns when session ends)
story-automator monitor-session <session_name> [options]

# Options:
#   --max-polls N    Maximum iterations (default: 30)
#   --timeout MIN    Overall timeout in minutes (default: 60)
#   --verbose        Print progress to stderr
#   --json           Output as JSON instead of CSV

# Output (JSON):
# {"final_state":"completed|crashed|stuck|timeout","output_file":"/tmp/...","exit_reason":"..."}
```

### story-automator orchestrator-helper

```bash
# Check sprint status
story-automator orchestrator-helper sprint-status get <story_key>

# Parse session output with sub-agent (haiku)
story-automator orchestrator-helper parse-output <file> <step_type>

# Marker file operations
story-automator orchestrator-helper marker create --epic E --story S --remaining N
story-automator orchestrator-helper marker remove
story-automator orchestrator-helper marker check

# Escalation checks
story-automator orchestrator-helper escalate <trigger> <context>
```

### story-automator validate-story-creation

```bash
# Count before session
before=$(story-automator validate-story-creation count 5.3)

# ... run create-story session ...

# Count after and validate
after=$(story-automator validate-story-creation count 5.3)
story-automator validate-story-creation check 5.3 --before "$before" --after "$after"
```

---

## Decision Flow

After `story-automator monitor-session` returns:

| final_state | Action |
|-------------|--------|
| `completed` | Parse output → act on `next_action` |
| `incomplete` | **(v2.2)** Session idle but workflow NOT verified → escalate immediately |
| `crashed` | Check retry count → retry or escalate |
| `stuck` | Get output → investigate → may need restart |
| `timeout` | Get output → escalate to user |
| `not_found` | Session gone → check for partial work |

---

## Monitoring Failure Fallback (v1.9.0)

**See `monitoring-fallback.md` for complete fallback patterns when monitoring fails.**

Key points:
- If monitoring crashes, the tmux session may have completed successfully
- Fall back to direct session checks + source of truth verification
- Do NOT treat monitoring failure as step failure

---

## Statusline Time Gate (v2.6.0)

**Purpose:** Prevent ALL false "stuck" escalations by using the Claude Code statusline as definitive proof-of-life.

### How It Works

Claude Code displays a statusline at the bottom of the terminal:
```
folder | ctx(N%) | HH:MM:SS
                   ^^^^^^^^ <- This time updates continuously while Claude runs
```

The `story-automator tmux-status-check` script:
1. Parses the statusline time from the tmux pane
2. Stores it in the session state file
3. Compares it with the previous poll's time
4. **If the time has advanced → session is ALIVE → DO NOT escalate**

### Decision Matrix

| Previous Time | Current Time | Other Checks Say | Result |
|---------------|--------------|------------------|--------|
| 10:00:00 | 10:01:00 | stuck | `just_started` (time advanced = alive) |
| 10:00:00 | 10:00:00 | stuck | `stuck` (time unchanged) |
| (none) | 10:00:00 | stuck | `just_started` (first observation = alive) |
| (none) | (none) | stuck | `stuck` (no statusline data) |

### Key Principle

**The statusline time gate is the FINAL AUTHORITY.** Even if all other detection methods (process checks, activity indicators, heartbeat) suggest the session is stuck, if the statusline time has advanced, the session is definitively alive and MUST NOT be escalated.

This prevents false escalations for:
- Complex sessions in long thinking phases
- Sessions with unusual output patterns
- Edge cases where other detection fails

---

## References

- **Codex monitoring details:** `monitoring-codex.md`
- **Output parsing + review handling:** `monitoring-pattern-parsing.md`

@@ -0,0 +1,86 @@
# Orchestrator Rules Appendix

## Session Naming

**See `tmux-commands.md` for complete session naming documentation.**

Pattern: `sa-{project_slug}-{timestamp}-e{epic}-s{N}-{type}` where type = `create`, `dev`, `auto`, `review-{cycle}`
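
For illustration, the pattern maps onto a `printf` as below. Real names must come from `story-automator tmux-wrapper name` (per the forbidden-patterns list), so this builder exists only to make the pattern concrete; the sample slug and timestamp are invented.

```bash
# Hedged sketch of the naming pattern; do not hand-construct names in practice.
build_session_name() {
  slug="$1"; ts="$2"; epic="$3"; story="$4"; type="$5"
  printf 'sa-%s-%s-e%s-s%s-%s' "$slug" "$ts" "$epic" "$story" "$type"
}

build_session_name myproj 20240101120000 5 5.3 dev
# → sa-myproj-20240101120000-e5-s5.3-dev
```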

## Workflow Command Arguments

**CRITICAL:** ALWAYS pass required positional arguments to BMAD workflows.

### Story ID Requirement

**create-story, dev-story, code-review, testarch-automate** — all require the story ID as a positional argument.

**WRONG:**
```bash
bmad-create-story
```
This causes create-story to create ALL stories in the epic, not just one.

**CORRECT:**
```bash
bmad-create-story 5.3
```
This creates ONLY story 5.3.

### Validation After create-story

**After the create-story session completes:**
1. Count story files BEFORE spawning the session
2. Count story files AFTER the session completes
3. Verify exactly ONE new file was created
4. IF 0 or >1 new files → escalate with the file list

**This prevents runaway story creation,** where create-story creates 5.3, 5.4, 5.5, etc. instead of just the requested story.
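
The exactly-one-new-file rule can be sketched as below. The real check is `story-automator validate-story-creation check`; `validate_creation` is an illustrative stand-in that only encodes the decision rule.

```bash
# Hedged sketch of the before/after story-file count check.
validate_creation() {
  before="$1"; after="$2"
  created=$((after - before))
  if [ "$created" -eq 1 ]; then
    echo "ok"
  else
    echo "escalate: expected 1 new story file, got $created"
  fi
}
```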

## State Updates

After EVERY action:
1. Update `currentStep` in the state document
2. Log the action with a timestamp
3. Update the story progress table

## Escalation Protocol

**See `data/escalation-triggers.md` for complete trigger definitions and behavior.**

### Quick Reference

| Category | Marker Action | State | When |
|----------|---------------|-------|------|
| CRITICAL | **DELETE** | PAUSED | Cannot proceed (retries exhausted) |
| PREFERENCE | Keep | IN_PROGRESS | Could proceed either way |

### CRITICAL Escalation (Key Steps)

1. Delete marker: `rm "{project_root}/.claude/.story-automator-active"`
2. Set state to PAUSED
3. Present menu (stop hook won't interfere)
4. On resume: recreate marker, set IN_PROGRESS

### Dev-Story Smart Retry

Before escalating, check whether the story is blocking:
- **Blocking:** Retry up to 3 times → then CRITICAL
- **Not blocking:** Retry once → then PREFERENCE (can skip)

## Session Monitoring & Output Parsing

**CRITICAL:** These topics have dedicated reference files. Load them when needed:

- **Session Monitoring:** See `data/monitoring-pattern.md`
  - FORBIDDEN patterns (capture-pane, etc.)
  - Status script usage and CSV format
  - Decision tree for poll results
  - Polling loop with state tracking

- **Output Parsing:** See `data/monitoring-pattern.md` (Sub-Agent Invocation section)
  - NEVER parse output yourself
  - ALWAYS use sub-agents (Task tool, haiku)
  - Verification checkpoint before proceeding

- **Sub-Agent Prompts:** See `data/subagent-prompts.md`
  - Session Output Parser
  - Code Review Analyzer (also see `subagent-prompts-analysis.md`)

@ -0,0 +1,180 @@
|
|||
# Orchestrator Rules
|
||||
|
||||
Load once at workflow start. Do not re-read in subsequent steps.
|
||||
|
||||
---
|
||||
|
||||
## Your Role
|
||||
|
||||
You are the **Build Cycle Orchestrator** — an autonomous coordinator that:
|
||||
- Spawns T-Mux sessions for each workflow step
|
||||
- Monitors progress and parses outputs
|
||||
- Handles code review loops until clean
|
||||
- Commits after each completed story
|
||||
- Escalates to user ONLY when decisions are needed
|
||||
|
||||
## Ground Truth: sprint-status.yaml
|
||||
|
||||
**CRITICAL:** `_bmad-output/implementation-artifacts/sprint-status.yaml` is the single source of truth.
|
||||
|
||||
### 🚨 ABSOLUTE RULE: NEVER UPDATE sprint-status.yaml 🚨
|
||||
|
||||
**YOU (the orchestrator) MUST NEVER, EVER write to sprint-status.yaml.**
|
||||
|
||||
- ❌ NEVER use Edit tool on sprint-status.yaml
|
||||
- ❌ NEVER use Write tool on sprint-status.yaml
|
||||
- ❌ NEVER use Bash to modify sprint-status.yaml
|
||||
- ❌ NEVER "fix" mismatches by updating sprint-status.yaml
|
||||
|
||||
**WHO updates it:** The T-Mux sessions running dev-story, code-review, etc.
|
||||
|
||||
**IF MISMATCH DETECTED:**
|
||||
1. Do NOT "correct" sprint-status.yaml
|
||||
2. Re-run the workflow that SHOULD update it (dev-story, code-review)
|
||||
3. The session will update sprint-status.yaml as part of its workflow
|
||||
|
||||
**When to READ (read-only):**
|
||||
- At initialization — check if earlier stories are incomplete
|
||||
- When resuming — verify current state matches
|
||||
- After each story "completes" — verify sprint-status shows `done`
|
||||
|
||||
**Initialization/Resume check:**
|
||||
- If earlier stories in range are not `done`, ask user: "Stories X, Y are not complete. Process them first?"
|
||||
- If yes → add them to queue before requested stories
|
||||
|
||||
**Post-story verification:**
|
||||
- After code review passes and commit succeeds, check sprint-status.yaml
|
||||
- If story is NOT marked `done` → re-run code-review (it will update sprint-status)
|
||||
- Only proceed to next story when sprint-status confirms `done`
|
||||

### Sprint-Status "done" from Dev-Story (Session 22 Note)

**IMPORTANT:** If dev-story marks sprint-status as "done" but code-review later finds HIGH issues:
- This is EXPECTED behavior - dev-story completed successfully, but code-review found additional issues
- The code-review workflow will update sprint-status appropriately
- Do NOT trust a "done" status from dev-story alone
- ALWAYS run code-review to verify the implementation quality

## Custom Instructions

User-provided instructions are flexible and may apply to:
- The orchestrator itself (e.g., "prioritize story 3")
- Specific sessions (e.g., "always run tests" → pass to dev sessions)
- Conditional situations (e.g., "always run tests after changes")

**Interpret intelligently** — don't mechanically inject instructions everywhere. Apply judgment about when and how each instruction is relevant.

## Core Rules

1. **Coordinate, don't implement** — Spawn sessions, don't write code yourself
2. **Log everything** — Update the state document after every action
3. **Escalate, don't decide** — When uncertain, ask the user
4. **Use sub-agents for parsing** — Don't bloat context with raw output
5. **Follow the sequence** — Don't skip or reorder steps
6. **Sprint-status is truth** — Always sync with sprint-status.yaml
7. **Always clean up sessions** — Kill tmux sessions after completion (v1.2.0)
8. **Verify state after each step** — Check the source of truth, not just monitoring output (v1.9.0)

---

## State Verification After Each Step (v1.9.0)

### 🚨 CRITICAL: Verify Before Proceeding

After **EVERY** workflow step completes (create/dev/auto/review), you MUST verify state from the **source of truth** before proceeding to the next step.

**DO NOT rely solely on monitoring output.** Monitoring can fail, crash, or lose its connection. The source of truth is:
- **Story files** in `_bmad-output/implementation-artifacts/`
- **sprint-status.yaml** in `_bmad-output/implementation-artifacts/`

### Verification Sequence

After each step:

```bash
# 1. Get the monitoring result (may be incomplete/failed)
result=$("$scripts" monitor-session "$session" --json)
final_state=$(echo "$result" | jq -r '.final_state')

# 2. ALWAYS verify from the source of truth regardless of the monitoring result
#    - create-story: verify the story file exists
#    - dev-story: verify sprint-status was updated
#    - code-review: verify sprint-status shows "done"

# 3. Only proceed when the source of truth confirms success
```

### Monitoring Failure Fallback

**See `monitoring-fallback.md` for complete fallback patterns.**

Quick reference:
1. Check if the session exists: `tmux list-sessions | grep {session_pattern}`
2. Check session status directly: `"$scripts" tmux-status-check "$session"`
3. Verify the source of truth: story file / sprint-status.yaml
4. Proceed based on verified state, not monitoring state
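
As a sketch, the four quick-reference steps fold into one recovery check. `$scripts` and the artifact path follow this document's conventions; treat the exact arguments as assumptions:

```shell
# Recover from lost monitoring: consult tmux and the source of truth directly.
verify_after_monitor_loss() {
  local session="$1" artifact="$2"

  # Steps 1-2: is the session still there? If so, ask the status helper.
  if tmux has-session -t "$session" 2>/dev/null; then
    "$scripts" tmux-status-check "$session" || true
  fi

  # Step 3: the artifact (story file / sprint-status entry) is the real verdict.
  if [ -f "$artifact" ]; then
    echo "verified"    # Step 4: proceed on verified state, not monitoring state
  else
    echo "unverified"
  fi
}
```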

### Why This Matters

Observed failure mode: the orchestrator's monitoring task crashed after dev-story completed. The tmux session had actually succeeded, but the orchestrator lost visibility and never ran code-review. **Direct state verification would have recovered from this.**

---

## Agent Fallback Strategy

**See `agent-fallback.md` for complete multi-agent documentation.**
**Troubleshooting:** `agent-fallback-troubleshooting.md`

**Quick Reference:**
- Primary/fallback agents are configurable (Claude or Codex)
- Different CLI commands and prompt styles per agent
- Automatic fallback on crash after retries are exhausted
- Codex has 1.5x timeouts and no todo tracking

---

### 🚨 ABSOLUTE RULE: NEVER Change Working Directory 🚨

**YOU (the orchestrator) MUST NEVER use the `cd` command.**

- ❌ NEVER use `cd backend && ...`
- ❌ NEVER use `cd /path/to/dir`
- ❌ NEVER change the working directory for ANY reason
- ✅ ALWAYS use absolute paths for all file operations
- ✅ ALWAYS use absolute paths for script invocations

**Why?** When you `cd` to a different directory, all relative paths break:
- Status script: `./bin/story-automator tmux-status-check` → "no such file"
- Validation patterns: `_bmad-output/...` → wrong location
- All monitoring fails, causing fallback to FORBIDDEN patterns

**Example - WRONG:**
```bash
cd backend && go test ./internal/api/...
```

**Example - CORRECT:**
```bash
go test {project_root}/backend/internal/api/...
```

### 🚨 ABSOLUTE RULE: NEVER Edit Source Code Directly 🚨

**YOU (the orchestrator) MUST NEVER use Edit/Write tools on source code.**

- ❌ NEVER use the Edit tool on `.go`, `.ts`, `.tsx`, `.js`, `.py`, etc.
- ❌ NEVER use the Write tool to create source code files
- ❌ NEVER "fix issues" by modifying code directly
- ✅ ALWAYS spawn a tmux session (dev-story) to make code changes
- ✅ ALWAYS delegate code fixes to child sessions

**Why?** The orchestrator's role is COORDINATION, not implementation. All code changes must go through proper workflow sessions that:
- Have full project context
- Run tests after changes
- Update sprint-status appropriately
- Can be reviewed and audited

## Appendix

See `orchestrator-rules-appendix.md` for session naming, workflow command arguments, monitoring, and output parsing details.

@@ -0,0 +1,140 @@
# Pre-flight Prompts

Reference prompts for the pre-flight configuration step.

---

## Context Gathering Questions

Present these questions to gather implementation context:

```
**Context Gathering:**

To help the implementation sessions succeed, please clarify:

1. **Technical Context:** Are there any architectural decisions, patterns, or conventions the dev sessions should follow?

2. **Testing Requirements:** Any specific testing frameworks or coverage expectations?

3. **Dependencies:** Are there external services, APIs, or packages that need to be set up first?

4. **Known Challenges:** Any tricky areas or things that previous attempts struggled with?

5. **Anything Else:** Any other context that would help the sessions succeed?

Feel free to answer as much or as little as you'd like. You can also say 'none' if the stories are self-explanatory.
```

**After user responds:**
- Think about their response before continuing
- If the response raises new questions, ask 1-2 follow-up questions
- Continue until context is sufficient

---

## Agent Configuration (v1.2.0)

```
**AI Agent Selection:**

Which AI coding agent should run your workflows?

| Agent | CLI Command | Command Prefix | Best For |
|-------|-------------|----------------|----------|
| **Claude** | `claude --dangerously-skip-permissions` | `bmad-` | BMAD workflows |
| **Codex** | `codex exec --full-auto` | natural language (no prefix) | OpenAI Codex users |

**Primary Agent:** (default: claude)
**Fallback Agent:** (default: codex) - Used when the primary fails after retries
**Enable Fallback:** (default: yes)

Examples:
- `claude` → Claude primary, Codex fallback (default)
- `codex` → Codex primary, Claude fallback
- `claude, none` → Claude only, no fallback
- `codex, claude` → Codex primary, Claude fallback

Enter agent config or press Enter for defaults:
```

Store response as `agentConfig` (v3.0.0):
```yaml
agentConfig:
  defaultPrimary: "claude"
  defaultFallback: "codex"
  perTask: {}
  complexityOverrides: {}
```

---

## Legacy AI Command Configuration (Deprecated)

```
**AI Command:**
What command invokes Claude Code (or your AI CLI) in the terminal?

Examples:
- `claude --dangerously-skip-permissions` (default - autonomous mode, no prompts)
- `claude` (interactive mode - will prompt for permissions)
- `cursor` (Cursor IDE)
- `/usr/local/bin/claude --dangerously-skip-permissions` (full path)

Enter command or press Enter for default (`claude --dangerously-skip-permissions`):
```

Store response as `aiCommand`. **Note:** This is deprecated as of v1.2.0. Use `agentConfig` instead.

---

## Execution Overrides

```
**Execution Overrides:**

By default, the orchestrator will:
- Run all steps: create-story → dev-story → automate → code-review
- Run stories sequentially (one at a time)
- Commit after each completed story

**Would you like to change any defaults?**

| Option | Default | Your Choice |
|--------|---------|-------------|
| Skip `automate` (guardrail tests) | No | ? |
| Max parallel stories | 1 | ? |

Enter changes (e.g., `skip automate, max parallel 2`) or `defaults` to keep all defaults:
```

---

## Configuration Review Template

```
**Pre-flight Complete. Here's your configuration:**

**Project Context Loaded:**
- Product Brief: {loaded/not found}
- PRD: {loaded/not found}
- Architecture: {loaded/not found}
- Other docs: {list or 'None'}

**Epic:** {epic_name}
**Stories:** {story_range} ({count} stories)

**Stories to implement:**
{story_list_with_titles}

**AI Command:** `{aiCommand}`

**Overrides:**
- Skip automate: {yes/no}
- Max parallel: {number}

**Additional Context from Conversation:**
{context_summary_or_'None provided'}

**Does this look correct?** I'll create the state document and we can begin execution.
```

@@ -0,0 +1,74 @@
# Preflight Requirements (v1.10.0)

> **🚨 CRITICAL:** Load and internalize these requirements BEFORE executing any preflight steps.

---

## MANDATORY Sequence (NO EXCEPTIONS)

Steps 1-3 MUST be completed IN ORDER using the Go binary BEFORE proceeding to steps 4-7:

1. **Steps 1-2:** Request and parse epic(s) → `bin/story-automator parse-epic`
2. **Step 3:** Parse ALL stories with complexity scoring → `bin/story-automator parse-story --rules`
3. **GATE:** Verify `stories_json` is populated with programmatic complexity data
4. **Step 4:** Display the Complexity Matrix (from step 3 data)
5. **Steps 5-7:** Custom instructions, agent config, execution settings

---

## 🛑 FORBIDDEN PATTERNS

- ❌ **NEVER** skip step 3 (complexity scoring)
- ❌ **NEVER** manually assess complexity by reading epic/story content
- ❌ **NEVER** proceed to agent configuration without displaying the Complexity Matrix
- ❌ **NEVER** guess complexity levels - they MUST come from `parse-story --rules`
- ❌ **NEVER** create the state document without `stories_json` containing complexity data

---

## ✅ REQUIRED Verification

Before step 5 (Configure Agent), you MUST have:
- [ ] The `stories_json` variable populated with complexity data from the Go binary
- [ ] The Complexity Matrix displayed to the user, showing all stories with levels/scores
- [ ] The user has seen the complexity breakdown before being asked about agents

---

## Why This Matters

Without programmatic complexity scoring:
- Agent configuration cannot be informed by actual story difficulty
- The user cannot make informed decisions about which agents to use
- The orchestration may fail or produce suboptimal results

The Go binary (`bin/story-automator parse-story --rules`) applies consistent, deterministic rules from `data/complexity-rules.json` to score each story. This data MUST be gathered before agent configuration.

---

## Complexity Matrix Display Template

After gathering complexity data, you MUST display:

```
**Story Complexity Matrix**

| Story | Title | Score | Level | Reasons |
|-------|-------|-------|-------|---------|
| {storyId} | {title} | {score} | {level} | {reasons or "-"} |
...

**Summary:**
- Low: {count} stories
- Medium: {count} stories
- High: {count} stories
```

---

## Verification Gate (Step 3d)

Before proceeding to step 4 (Custom Instructions), verify:
- `stories_json` contains complexity data for ALL selected stories
- The Complexity Matrix has been displayed to the user
- If either is missing, DO NOT PROCEED - re-run step 3
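
The gate can be made mechanical with a jq check. The exact shape of `stories_json` (an array of objects carrying a `.complexity.score` field) is an assumption about the parse-story output:

```shell
# Pass only when every selected story carries a programmatic complexity score.
gate_check() {
  local stories_json="${1:-[]}"
  local missing
  missing=$(echo "$stories_json" | jq '[.[] | select(.complexity.score == null)] | length')
  [ "$missing" -eq 0 ]
}

if ! gate_check "${stories_json:-}"; then
  echo "GATE FAILED: re-run step 3 (parse-story --rules)" >&2
fi
```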

@@ -0,0 +1,30 @@
# Validation Report Retention Policy

Purpose: keep the workflow repo lean while preserving historical validation evidence.

## Policy

- Keep the latest 10 validation reports in `validation-reports/` as `.md`.
- Archive older reports into `validation-reports/archive/` as `.md.gz`.
- Keep `validation-report-*-current.md` files unarchived.
- Never delete archived `.md.gz` files automatically.

## Suggested Maintenance Command

Run from the workflow root:

```bash
mkdir -p validation-reports/archive
ls -1t validation-reports/validation-report-*.md \
  | rg -v -- '-current\.md$' \
  | awk 'NR>10' \
  | while read -r f; do
      gzip -c "$f" > "validation-reports/archive/$(basename "$f").gz" && rm "$f"
    done
```

## Operational Notes

- This policy applies to historical reports only.
- Current run artifacts remain readable markdown.
- Archival is optional during active development, recommended during wrap-up.

@@ -0,0 +1,139 @@
# Retrospective Automation Data

This file provides instructions for running retrospectives in YOLO mode (fully automated, no user input expected).

---

## YOLO Mode Principles

1. **No User Input Expected**: The retrospective must complete autonomously
2. **Data-Driven Decisions**: All decisions are based on sprint-status, story files, and artifacts
3. **Safe Failure**: If anything goes wrong, log and skip - never escalate
4. **Claude Only**: Retrospectives DO NOT support Codex - always use the Claude agent

---

## Agent Constraints

### MUST Use Claude

Retrospectives have complex multi-agent "party mode" interactions that require:
- Natural language dialogue synthesis
- Multi-step reasoning across story analysis
- Document generation with rich context

Codex is **not compatible** with these requirements. Always spawn retrospective sessions with `--agent "claude"`.

### Timeout Configuration

Retrospectives analyze all stories in an epic and generate comprehensive reports:
- **Base timeout**: 60 minutes (3600000 ms)
- **Extended timeout for large epics (>10 stories)**: 90 minutes (5400000 ms)

---

## YOLO Mode Prompt Template

When spawning a retrospective in YOLO mode, use this prompt:

```
bmad-retrospective {epic_number}

Run the retrospective in #YOLO mode.
Assume the user will NOT provide any input to the retrospective directly.
For ALL prompts that expect user input, make reasonable autonomous decisions based on:
- Sprint status data
- Story files and their dev notes
- Previous retrospective if available
- Architecture and PRD documents

Key behaviors:
- When asked to confirm epic number: auto-confirm based on sprint-status
- When asked for observations: synthesize from story analysis
- When asked for decisions: make data-driven choices
- When presented menus: select the most appropriate option based on context
- Skip all "WAIT for user" instructions - continue autonomously

After the retrospective has run and created documents, you MUST:
1. Create a list of documentation that may need updates based on implementation learnings
2. For each doc in the list, verify whether updates are actually needed by:
   - Reading the current doc content
   - Comparing against actual implementation code
   - Checking for discrepancies between doc and code
3. Update docs that have verified discrepancies
4. Discard proposed updates where code matches docs

Focus on these doc types:
- Architecture decisions that changed during implementation
- API documentation that diverged from specs
- README files with outdated instructions
- Configuration documentation

EVERYTHING SHOULD BE AUTOMATED. THIS IS NOT A SESSION WHERE YOU SHOULD BE EXPECTING USER INPUT.
```

---

## Multi-Epic Support

When multiple epics are provided to story-automator:

### Tracking Multiple Epics

The state document should track:
```yaml
epics:
  - epicNumber: 1
    storyRange: ["1-1", "1-2", "1-3"]
    status: "completed"
    retrospectiveStatus: "completed"
  - epicNumber: 2
    storyRange: ["2-1", "2-2"]
    status: "in_progress"
    retrospectiveStatus: "pending"
```

### Aggregation Rules

1. **Complete epics during the run**: If epic N completes while stories from epic N+1 are being processed, trigger the retrospective for epic N
2. **Batch retrospectives**: After all stories complete, run retrospectives for all completed epics in order
3. **Independent failures**: If the retrospective for epic N fails, continue to the epic N+1 retrospective

### Safe Skip on Failure

If a retrospective fails:
1. Log: `⚠️ Retrospective for Epic {N} skipped: {reason}`
2. Update state: `retrospectives.epic-{N}.status = "skipped"`
3. Update state: `retrospectives.epic-{N}.reason = "{reason}"`
4. Continue to the next epic - **NEVER ESCALATE**

---

## Documentation Verification

See `retrospective-doc-verification.md` for doc verification patterns and output parsing.

## Error Handling

### Network Errors

If a retrospective session fails due to a network issue:
1. Wait 60 seconds
2. Retry once
3. If the retry fails, mark as skipped

### Session Crashes

If a retrospective session crashes:
1. Check the output file for partial progress
2. If the retro doc was partially created, mark as partial
3. Log the crash reason
4. Skip to the next epic

### Timeout

If a retrospective exceeds its timeout:
1. Check whether the core analysis completed
2. If a retro doc exists, mark as partial success
3. Skip the doc verification phase
4. Continue to the next epic

@@ -0,0 +1,94 @@
# Retrospective Doc Verification

Companion to `retrospective-automation.md`. Contains doc verification patterns and output parsing guidance.

## Doc Verification Patterns

After the retrospective generates documents, verify updates against code:

### Documents to Check

| Doc Type | Pattern | Verification Method |
|----------|---------|---------------------|
| Architecture | `*architecture*.md` | Compare decisions against implementation |
| API Docs | `*api*.md`, `*openapi*.yaml` | Verify endpoints match code |
| README | `README.md` | Check setup/usage instructions |
| Config Docs | `*config*.md` | Verify env vars and settings |

### Verification Prompt Template

```
Verify whether this documentation update is needed:

**Document:** {doc_path}
**Proposed Change:** {change_summary}
**Reason:** {reason}

Instructions:
1. Read the current document at {doc_path}
2. Read the relevant implementation code referenced
3. Compare doc against actual implementation
4. Determine if update is genuinely needed

Output JSON:
{
  "should_update": true|false,
  "confidence": "high"|"medium"|"low",
  "reason": "explanation",
  "discrepancies": ["list", "of", "specific", "issues"]
}

If discrepancies exist, apply the fix directly.
```

### Confidence Thresholds

- **High confidence**: Auto-apply the update
- **Medium confidence**: Auto-apply with a log note
- **Low confidence**: Skip the update, log for manual review
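
Given a verdict in the JSON shape above, the thresholds reduce to a small dispatch; a sketch, with illustrative action names:

```shell
# Map a doc-verification verdict to an action per the confidence thresholds.
handle_verdict() {
  local verdict="$1" should conf
  should=$(echo "$verdict" | jq -r '.should_update')
  conf=$(echo "$verdict" | jq -r '.confidence')

  if [ "$should" != "true" ]; then
    echo "skip"
    return 0
  fi
  case "$conf" in
    high)   echo "apply" ;;
    medium) echo "apply-with-note" ;;
    *)      echo "manual-review" ;;   # low or unknown: never auto-apply
  esac
}
```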

---

## Output Parsing

### Parse Doc Proposals from Retrospective Output

Look for sections like this in the retrospective output:

```
## Documentation Updates Needed

### {doc_path}
- **Change:** {summary}
- **Reason:** {reason}
- **Impact:** {impact}
```

Extract them into a structured format:
```json
{
  "proposals": [
    {
      "path": "{doc_path}",
      "summary": "{summary}",
      "reason": "{reason}",
      "impact": "{impact}"
    }
  ]
}
```

### Retrospective Completion Markers

Successful completion indicators:
- "Retrospective Complete" in the output
- An `epic-{N}-retro-*.md` file created
- Sprint status updated with the retrospective done

Failure indicators:
- Session timeout
- Error messages in the output
- No retro file created after 30+ minutes
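
One possible way to turn these markers into a verdict is sketched below; the retro-file naming pattern is taken from above, while the directory argument is illustrative:

```shell
# Classify a finished retrospective session from its artifacts.
# $1: captured session output file, $2: epic number, $3: retro doc directory.
retro_outcome() {
  local output_file="$1" epic="$2" retro_dir="$3"

  if ls "$retro_dir"/epic-"$epic"-retro-*.md >/dev/null 2>&1; then
    echo "complete"
  elif grep -qi "Retrospective Complete" "$output_file" 2>/dev/null; then
    echo "partial"   # claimed done but no retro file: treat as partial
  else
    echo "skipped"
  fi
}
```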

---

@@ -0,0 +1,83 @@
# Retrospective Prompts

Prompts used by step-05-retrospective for automated retrospective execution.

---

## YOLO Mode Retrospective Prompt

Use this prompt when spawning the retrospective session:

```
bmad-retrospective {epic_number}

Run the retrospective in #YOLO mode.
Assume the user will NOT provide any input to the retrospective directly.
For ALL prompts that expect user input, make reasonable autonomous decisions based on:
- Sprint status data
- Story files and their dev notes
- Previous retrospective if available
- Architecture and PRD documents

Key behaviors:
- When asked to confirm epic number: auto-confirm based on sprint-status
- When asked for observations: synthesize from story analysis
- When asked for decisions: make data-driven choices
- When presented menus: select the most appropriate option based on context
- Skip all "WAIT for user" instructions - continue autonomously

After the retrospective has run and created documents, you MUST:
1. Create a list of documentation that may need updates based on implementation learnings
2. For each doc in the list, verify whether updates are actually needed by:
   - Reading the current doc content
   - Comparing against actual implementation code
   - Checking for discrepancies between doc and code
3. Update docs that have verified discrepancies
4. Discard proposed updates where code matches docs

Focus on these doc types:
- Architecture decisions that changed during implementation
- API documentation that diverged from specs
- README files with outdated instructions
- Configuration documentation

EVERYTHING SHOULD BE AUTOMATED. THIS IS NOT A SESSION WHERE YOU SHOULD BE EXPECTING USER INPUT.
```

---

## Doc Verification Prompt

Use this prompt when spawning doc verification subagents:

```
Verify whether this documentation update is needed:

**Document:** ${proposed_doc.path}
**Proposed Change:** ${proposed_doc.summary}
**Reason:** ${proposed_doc.reason}

Instructions:
1. Read the current document at ${proposed_doc.path}
2. Read the relevant implementation code referenced
3. Compare doc against actual implementation
4. Determine if update is genuinely needed

Output JSON:
{
  "should_update": true|false,
  "confidence": "high"|"medium"|"low",
  "reason": "explanation",
  "discrepancies": ["list", "of", "specific", "issues"]  // only if should_update
}

If discrepancies exist, apply the fix directly. Output should_update=true only if you made changes.
```

---

## Usage Notes

- **YOLO Prompt:** Replace `{epic_number}` with the actual epic number
- **Doc Verification Prompt:** Replace the `${proposed_doc.*}` variables with actual values
- Both prompts are designed for fully automated execution (no user input expected)

@@ -0,0 +1,101 @@
# Retry & Fallback Implementation Examples

**Purpose:** Detailed implementation wrapper and step-specific validation patterns.

---

## Implementation Pattern

```bash
# Universal retry wrapper with deterministic agent resolution
task_type="{step}"  # create, dev, auto, or review
resolve_agent_for_task "$task_type" "$state_file" "{story_id}"
# Now primary_agent and fallback_agent are set for this story/task

max_attempts=5
attempt=0
success=false

while [ $attempt -lt $max_attempts ] && [ "$success" = "false" ]; do
  attempt=$((attempt + 1))

  # Alternate agents: odd attempts = primary, even = fallback (if available)
  if [ $((attempt % 2)) -eq 1 ] || [ -z "$fallback_agent" ]; then
    current_agent="$primary_agent"
  else
    current_agent="$fallback_agent"
  fi

  # Delay logic (after the first attempt)
  if [ $attempt -gt 1 ]; then
    if [ $attempt -ge 4 ] || [ "$last_was_network_error" = "true" ]; then
      echo "Waiting 60s before retry (attempt $attempt)..."
      sleep 60
    fi
  fi

  # Execute the workflow step
  session=$("$scripts" tmux-wrapper spawn {step} {epic} {story_id} \
    --agent "$current_agent" \
    --command "$("$scripts" tmux-wrapper build-cmd {step} {story_id} --agent "$current_agent")")
  result=$("$scripts" monitor-session "$session" --json --agent "$current_agent")

  # Clean up the session
  "$scripts" tmux-wrapper kill "$session"

  # Check for network errors
  last_was_network_error="false"
  if echo "$result" | grep -qiE "(connection refused|timeout|rate limit|503|502|never_active)"; then
    last_was_network_error="true"
  fi
  if [ "$(echo "$result" | jq -r '.final_state')" = "crashed" ]; then
    output_size=$(wc -c < "$(echo "$result" | jq -r '.output_file')" 2>/dev/null || echo "0")
    [ "$output_size" -lt 100 ] && last_was_network_error="true"
  fi

  # Check success (step-specific validation)
  # ... validation logic here ...

  if [ "$validation_passed" = "true" ]; then
    success=true
  else
    echo "Attempt $attempt failed (agent: $current_agent). $([ $attempt -lt $max_attempts ] && echo "Retrying..." || echo "Escalating.")"
  fi
done

if [ "$success" = "false" ]; then
  # All attempts exhausted - NOW escalate
  escalate_to_user "Step failed after $max_attempts attempts"
fi
```

---

## Step-Specific Validation

### Create Story
```bash
after=$("$scripts" validate-story-creation count {story_id})
validation=$("$scripts" validate-story-creation check {story_id} --before $before --after $after)
validation_passed=$(echo "$validation" | jq -r '.valid')
```

### Dev Story
```bash
parsed=$("$scripts" orchestrator-helper parse-output "$output_file" dev)
next_action=$(echo "$parsed" | jq -r '.next_action')
validation_passed=$([ "$next_action" = "proceed" ] && echo "true" || echo "false")
```

### Automate
```bash
parsed=$("$scripts" orchestrator-helper parse-output "$output_file" auto)
# Non-blocking: log a warning but continue
validation_passed="true"  # Always proceed (automate is non-blocking)
```

### Code Review
```bash
# See code-review-loop.md for specific review cycle handling
# Reviews have their own internal retry loop
```
@ -0,0 +1,131 @@
|
|||
# Retry & Fallback Strategy
|
||||
|
||||
**Purpose:** Universal retry and fallback agent pattern for all workflow steps (create, dev, auto, review).
|
||||
|
||||
**Version:** 2.0.0
|
||||
|
||||
---
|
||||
|
||||
## Core Principle
|
||||
|
**NEVER escalate to the user on first failure.** Exhaust all retry options first:

1. Try the fallback agent (if configured for this task)
2. Retry with alternating agents, up to 5 total attempts
3. Sleep between retries if network issues are detected
4. Only escalate after all attempts are exhausted

---

## Agent Configuration (v3.0.0)

**Deterministic agent resolution via agents file:**

```bash
# Resolve agent for a specific task (create, dev, auto, review)
# Uses the agents file generated during preflight (complexity-aware)
resolve_agent_for_task() {
  local task="$1"
  local state_file="$2"
  local story_id="$3"

  result=$("$scripts" orchestrator-helper agents-resolve \
    --state-file "$state_file" \
    --story "$story_id" \
    --task "$task")

  primary_agent=$(echo "$result" | jq -r '.primary')
  fallback_agent=$(echo "$result" | jq -r '.fallback')

  # Handle "false"/null meaning disabled
  if [ "$fallback_agent" = "false" ] || [ "$fallback_agent" = "null" ]; then
    fallback_agent=""
  fi
}

# Usage:
resolve_agent_for_task "review" "$state_file" "{story_id}"
echo "Review task: primary=$primary_agent, fallback=$fallback_agent"
```

**Fallback behavior:**
- If `fallback_agent` is empty, "false", or the same as the primary → retry with primary only
- If `fallback_agent` differs → alternate between agents on retries
- Complexity overrides win first, then per-task overrides, then defaults

---

## Retry Sequence (5 Attempts Max)

| Attempt | Agent | Delay Before | Notes |
|---------|-------|--------------|-------|
| 1 | primary | none | Initial attempt |
| 2 | fallback | 0-60s | Switch agent; delay if network error |
| 3 | primary | 0-60s | Back to primary |
| 4 | fallback | 60s | Always delay by attempt 4 |
| 5 | primary | 60s | Final attempt |

**If no fallback configured:** All 5 attempts use the primary agent.
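
As a rough sketch of the table above (the helper names `pick_agent_for_attempt` and `delay_for_attempt` are illustrative, not part of the shipped binary):

```shell
# Hypothetical helpers sketching the retry table; the real retry loop
# lives in the orchestrator instructions, not in a shipped script.
pick_agent_for_attempt() {
  local attempt="$1" primary="$2" fallback="$3"
  # No usable fallback -> every attempt uses the primary agent.
  if [ -z "$fallback" ] || [ "$fallback" = "$primary" ]; then
    printf '%s' "$primary"
    return 0
  fi
  # Odd attempts (1, 3, 5) use primary; even attempts (2, 4) use fallback.
  if [ $(( attempt % 2 )) -eq 1 ]; then
    printf '%s' "$primary"
  else
    printf '%s' "$fallback"
  fi
}

delay_for_attempt() {
  local attempt="$1" network_error="$2"
  # Attempts 4 and 5 always wait 60s; earlier retries wait only on network errors.
  if [ "$attempt" -ge 4 ] || [ "$network_error" = "true" ]; then
    printf '60'
  else
    printf '0'
  fi
}
```

With primary `claude` and fallback `codex`, attempts 1 through 5 map to claude, codex, claude, codex, claude, matching the table.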

---

## Network Error Detection

**Indicators of network/transient issues:**
- Session output contains: "connection refused", "timeout", "rate limit", "503", "502"
- Session crashed with zero output
- `story-automator monitor-session` returns `final_state: "crashed"` with empty output
- Session stuck at "never_active" state (no response from API)

**On network error detection:**
- Sleep 60 seconds before next attempt
- Log: "Network issue detected, waiting 60s before retry..."
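
A minimal grep sketch of this detection, with the pattern list taken from the indicators above (the function name is illustrative):

```shell
# Returns 0 (transient/network error) when the output matches a known
# indicator, or when the session produced no output at all.
is_network_error() {
  local output="$1"
  if [ -z "$output" ]; then
    return 0  # crashed with zero output counts as transient
  fi
  printf '%s' "$output" | grep -qiE 'connection refused|timeout|rate limit|503|502'
}
```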

---

## Implementation & Validation Examples

Detailed bash patterns and step-specific validation examples have moved to:

- **`retry-fallback-implementation.md`** (implementation wrapper + per-step validation)

---

## Escalation (After All Attempts)

Only after exhausting all 5 attempts:

1. Update state: `status = "AWAITING_DECISION"`
2. Log all attempt details:
```
[timestamp] ESCALATION: {step} failed after 5 attempts
- Attempt 1 (primary): {result}
- Attempt 2 (fallback): {result}
- Attempt 3 (primary): {result}
- Attempt 4 (fallback): {result}
- Attempt 5 (primary): {result}
```
3. Present options to the user:
   - Retry with different settings
   - Skip this story
   - Abort orchestration

---

## Integration with Adaptive Retry

This strategy **replaces** the simple retry logic. The adaptive-retry.md plateau detection still applies within this framework:

- If a same-task plateau is detected across 3+ attempts → DEFER instead of escalating
- Plateau detection runs AFTER agent switching (so both agents hit the same wall)

---

## Logging

All retry attempts should be logged in the action log:
```
[timestamp] {step} attempt {N}/{max} with {agent}: {result}
```

On success after retry:
```
[timestamp] {step} succeeded on attempt {N} with {agent} (after {N-1} failures)
```
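
A small illustrative helper that emits entries in this format (the function name is an assumption; the real log destination is the state document's action log):

```shell
# Append a retry-attempt entry in the action-log format shown above.
log_attempt() {
  local step="$1" attempt="$2" max="$3" agent="$4" result="$5"
  printf '[%s] %s attempt %s/%s with %s: %s\n' \
    "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$step" "$attempt" "$max" "$agent" "$result"
}
```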
@ -0,0 +1,93 @@
# Command Reference

All operations use the `story-automator` binary. **DO NOT construct tmux commands manually.**

## Core Commands

| Script | Purpose |
|--------|---------|
| `story-automator tmux-wrapper` | Session spawning, naming, lifecycle |
| `story-automator monitor-session` | Batched polling (14+ API calls → 1) |
| `story-automator tmux-status-check` | Context-efficient status checking (v2.4.0) |
| `story-automator codex-status-check` | Codex-specific status with heartbeat (v2.4.0) |
| `story-automator heartbeat-check` | CPU-based process heartbeat detection |
| `story-automator orchestrator-helper` | Sprint-status, parsing, markers |
| `story-automator orchestrator-helper agents-build` | Deterministic agents file generation |
| `story-automator orchestrator-helper agents-resolve` | Agent lookup per story/task |
| `story-automator validate-story-creation` | Story file count validation |
| `story-automator commit-story` | Deterministic git commit with JSON output |

## Usage Pattern

> **⚠️ CRITICAL: `--command` IS REQUIRED**
> You MUST pass `--command` with the built command string to `spawn`.
> Without `--command`, the tmux session will be created but NO command runs → `never_active` failure.

```bash
scripts="{scriptsDir}"

# ⚠️ --command is REQUIRED - without it, the session sits idle!
# Spawn session
session=$("$scripts" tmux-wrapper spawn {type} {epic} {story_id} \
  --agent "$agent" \
  --command "$("$scripts" tmux-wrapper build-cmd {type} {story_id} --agent "$agent")")

# Monitor session
result=$("$scripts" monitor-session "$session" --json --agent "$agent")

# Parse output
parsed=$("$scripts" orchestrator-helper parse-output "$(echo "$result" | jq -r '.output_file')" {type})

# Cleanup
"$scripts" tmux-wrapper kill "$session"
```

## Deterministic Agent Selection

Agent selection is driven by the agents file created during preflight:
`_bmad-output/story-automator/agents/agents-{state_filename}.md`

To resolve agents for a specific story/task:
```bash
selection=$("$scripts" orchestrator-helper agents-resolve --state-file "$state_file" --story "{story_id}" --task "{task}")
primary=$(echo "$selection" | jq -r '.primary')
fallback=$(echo "$selection" | jq -r '.fallback')
```

## Step Types

| Type | Description | Agent Support |
|------|-------------|---------------|
| `create` | Create story from epic | Claude, Codex |
| `dev` | Implement story tasks | Claude, Codex |
| `auto` | Test automation | Claude, Codex |
| `review` | Code review with auto-fix | Claude, Codex |
| `retro` | Retrospective (YOLO mode) | **Claude ONLY** |

## Retrospective Commands (v1.5.0)

**CRITICAL:** Retrospectives use a special step type that:
- Always uses Claude (Codex is not supported)
- Returns the full YOLO-mode prompt with doc verification instructions
- Uses epic_number instead of story_id

```bash
# For retro, the "story_id" parameter is actually the epic_number
cmd=$("$scripts" tmux-wrapper build-cmd retro {epic_number} --agent "claude")
session=$("$scripts" tmux-wrapper spawn retro "" {epic_number} --agent "claude" --command "$cmd")

# Monitor (retrospectives never block; failures are just logged)
result=$("$scripts" monitor-session "$session" --json --agent "claude")
"$scripts" tmux-wrapper kill "$session"
```

The `build-cmd retro` command automatically includes:
- The bmad-retrospective command invocation
- Full YOLO-mode instructions (no user input expected)
- Key autonomous behaviors for menus/prompts
- Doc verification instructions with subagent patterns
- Instructions to update docs that have verified discrepancies

## Binary Location

The binary lives at `../bin/story-automator` relative to the step files.
@ -0,0 +1,142 @@
# Stop Hook Configuration

This document defines the Claude Code Stop hook required for the story-automator to prevent premature stopping during orchestration.

**Related:** See `stop-hook-troubleshooting.md` for child session handling, manual override, and troubleshooting.

---

## Overview

The Stop hook uses a **marker file approach**:
1. When story-automator starts → creates a marker file with orchestration context
2. When Claude tries to stop → the hook script checks the marker file
3. If no marker or completed → allow stop (normal Claude usage)
4. If the marker exists with pending stories → block stop with continuation guidance
5. When story-automator completes → removes the marker file

**Important (v2 fix):** The hook intentionally does NOT check the `stop_hook_active` flag. This flag stays `true` for the entire session after one blocked stop, which caused premature exits in long orchestrations. The marker file alone is the source of truth.

---

## Multi-Project Support (v2.0)

**CRITICAL:** The marker file is now PROJECT-SCOPED to support running story-automator on multiple projects simultaneously.

**Old location (DEPRECATED):** `/tmp/.story-automator-active`
**New location:** `{project_root}/.claude/.story-automator-active`

### Why Project-Scoped?

When running story-automator on multiple projects at the same time:
- Old: all projects shared `/tmp/.story-automator-active` → cross-project interference
- New: each project has its own marker in `.claude/` → full isolation

### How It Works

1. The stop hook uses `$PWD` to determine the current project root
2. The marker file is read from `{PWD}/.claude/.story-automator-active`
3. Project A's stop hook only sees Project A's marker
4. Project B's stop hook only sees Project B's marker

### State Files Also Scoped

The status check script state files are also project-scoped:
- **Old:** `/tmp/.tmux-session-{SESSION}-state.json`
- **New:** `/tmp/.sa-{project_hash}-session-{SESSION}-state.json`

Where `project_hash` is the first 8 characters of the MD5 hash of the project root path.
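
Sketched in shell (assuming GNU `md5sum`; macOS would use `md5 -q` instead), the hash and the resulting state-file path look like:

```shell
# First 8 chars of the MD5 of the project root path.
project_hash() {
  printf '%s' "$1" | md5sum | cut -c1-8
}

# Project-scoped state file for a given tmux session name.
state_file_for_session() {
  printf '/tmp/.sa-%s-session-%s-state.json' "$(project_hash "$1")" "$2"
}
```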

---

## Hook Configuration

Add this to the target project's `.claude/settings.json`:

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "/absolute/path/to/bin/story-automator stop-hook",
            "timeout": 10
          }
        ]
      }
    ]
  }
}
```

### Binary Path is Always Absolute

**The stop hook binary resolves itself to an absolute path via `os.Executable()`.** Regardless of how the caller passes the `--command` argument (relative, project-relative, or absolute), the binary self-resolves and stores a consistent absolute path in settings.json.

This prevents the inconsistency where the AI agent resolves frontmatter paths differently across sessions, which previously caused repeated hook installations and unnecessary restart loops.

**Migration:** If an existing settings.json contains a relative or project-relative path, `ensure-stop-hook` will normalize it to absolute in place without triggering a restart (`reason: "path_normalized"`).

**When the hook fails with "no such file or directory":**
- Verify BMAD is installed in the target project
- Check that the binary exists: `ls -la _bmad/bmm/4-implementation/bmad-story-automator-go/bin/story-automator`
- Ensure the binary is executable: `chmod +x _bmad/bmm/4-implementation/bmad-story-automator-go/bin/story-automator`

---

## Marker File Format

**Location (v2.0):** `{project_root}/.claude/.story-automator-active`

*Note: Ensure `.claude/.story-automator-active` is in your `.gitignore`.*

Content (JSON, v1.2.0 with heartbeat):
```json
{
  "epic": "epic-01",
  "currentStory": "story-01",
  "storiesRemaining": 3,
  "stateFile": "/path/to/orchestration-epic01.md",
  "startedAt": "2026-01-13T10:00:00Z",
  "heartbeat": "2026-01-13T10:30:00Z",
  "pid": 12345
}
```

### Fields (v1.2.0)
- `heartbeat`: last activity timestamp, updated periodically during execution
- `pid`: process ID of the orchestrator (helps detect crashed sessions)

### Staleness Check

The stop hook checks whether the marker heartbeat is older than 30 minutes (stale = orchestrator crashed). If stale, it allows the stop. See `story-automator stop-hook` for the implementation.
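
The rule could be sketched like this (assumes `jq` and GNU `date`; the real check is compiled into the Go binary, and the function name is illustrative):

```shell
# Succeeds (exit 0) when the marker heartbeat is older than 30 minutes.
marker_is_stale() {
  local marker="$1" heartbeat
  heartbeat=$(jq -r '.heartbeat // empty' "$marker" 2>/dev/null) || heartbeat=""
  if [ -z "$heartbeat" ]; then
    return 0  # missing or unreadable heartbeat is treated as stale
  fi
  [ $(( $(date -u +%s) - $(date -u -d "$heartbeat" +%s) )) -gt 1800 ]
}
```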

---

## Verification Logic

The orchestrator verifies hook installation at startup:

```
1. Check if .claude/settings.json exists
2. Parse the JSON and look for the hooks.Stop array
3. Check if any hook command contains "story-automator stop-hook"

IF found → Continue
IF not found → Add hook, instruct restart
```
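
A shell approximation of that check using `jq` (the function name is illustrative; the orchestrator performs this verification internally):

```shell
# Succeeds when settings.json already wires up the stop hook.
stop_hook_installed() {
  local settings="$1"
  [ -f "$settings" ] || return 1
  jq -r '(.hooks.Stop // [])[] | (.hooks // [])[] | .command // empty' "$settings" 2>/dev/null \
    | grep -q 'story-automator stop-hook'
}
```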

---

## Hook Behavior

| Scenario | Action |
|----------|--------|
| `STORY_AUTOMATOR_CHILD=true` | `exit 0` → Always allow (child session) |
| No marker file | `exit 0` → Allow stop |
| Marker exists, `storiesRemaining=0` | `exit 0` → Allow stop |
| Marker exists, `storiesRemaining > 0` | Output JSON → Block stop with reason |

**Key fix (Session 10):** The hook no longer checks `stop_hook_active`. This flag was causing premature exits in long orchestrations because it stays `true` for the entire session after the first blocked stop.
@ -0,0 +1,87 @@
# Stop Hook Recovery

**You are reading this because the Stop hook blocked your attempt to stop.**

---

## Decision Matrix

**Find your scenario and follow the action exactly:**

| Scenario | Action | Why |
|----------|--------|-----|
| Just finished a story, more in queue | **CONTINUE** → Process next story | Stopping between stories wastes progress |
| Session completed successfully | **CONTINUE** → Check state, process output | Session done ≠ orchestration done |
| Session failed/errored | **CONTINUE** → Follow retry/fallback strategy | Retries not exhausted yet |
| Feeling uncertain what to do | **CONTINUE** → Check state document, follow step instructions | Uncertainty is not a blocker |
| Want to "check in" with user | **CONTINUE** → User will interrupt if needed | User doesn't want status updates |
| Waiting at a menu for user input | **DECIDE** → Use sub-agent to gather context, make informed decision | Don't block on user, keep moving |
| Multiple valid paths forward | **DECIDE** → Pick the path most likely to finish stories in order | Your job is to finish, not ask |
| Unrecoverable error (all retries exhausted) | **STOP** → Follow stop procedure below | Cannot proceed without intervention |
| External dependency down (API, service) | **RETRY** → Sleep with increasing delay (1m, 2m, 4m, 8m, 16m), max 5 attempts | Often recovers on its own |
| User explicitly requested stop earlier | **STOP** → Follow stop procedure below | Honoring user intent |

---

## Quick Reference

### CONTINUE (Most Scenarios)

1. Read your state document
2. Check `currentStep`, `storyQueue`, `currentStoryId`
3. Resume execution from where you left off
4. Process the next story in the queue

### DECIDE (Autonomous Decision Required)

1. Spawn a sub-agent to analyze the current context
2. Gather: state document, recent session output, story requirements
3. Make the decision most likely to complete stories in order
4. Document the decision in the state log, then execute
5. **Never wait for the user** - your job is to keep moving

### RETRY (External Dependency Issues)

1. Log the failure with a timestamp
2. Sleep with exponential backoff: 1m → 2m → 4m → 8m → 16m
3. Retry the operation after each sleep
4. After 5 failed attempts → treat as unrecoverable → STOP
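
The RETRY steps above can be sketched as follows (function names are illustrative; `retry_with_backoff` runs whatever command it is given):

```shell
# Delay in seconds before retrying after failed attempt N: 1m, 2m, 4m, 8m, 16m.
backoff_delay() {
  echo $(( 60 * (1 << ($1 - 1)) ))
}

# Run "$@" up to 5 times, sleeping the backoff delay between failures.
retry_with_backoff() {
  local attempt
  for attempt in 1 2 3 4 5; do
    if "$@"; then
      return 0
    fi
    if [ "$attempt" -lt 5 ]; then
      sleep "$(backoff_delay "$attempt")"
    fi
  done
  return 1  # unrecoverable -> follow the STOP procedure
}
```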

### STOP (Genuinely Unrecoverable)

1. Update the state document status to `STOPPED`
2. Delete the marker: `rm "{project_root}/.claude/.story-automator-active"`
3. Log WHY you stopped (be specific)
4. Exit cleanly

---

## Stop Hook Messages ≠ User Input

The stop hook fires when Claude pauses, not just when explicitly stopping.

**If you see this while at a decision point:**
- This message is NOT telling you what to choose
- Use a sub-agent to gather context and DECIDE autonomously
- Do NOT wait for the user - make the call yourself

---

## Core Principle

**Your job is to finish every story in the correct order.**

Do whatever it takes. Make autonomous decisions. Only stop when genuinely unrecoverable (all retries exhausted, or the user explicitly requested a stop).

---

## Common Mistakes to Avoid

| Mistake | Correct Behavior |
|---------|------------------|
| Stopping to report progress | Continue silently; the user sees the state doc |
| Stopping after one story completes | Continue to the next story |
| Stopping because a session errored | Follow the retry strategy first |
| Waiting for the user at decision points | Decide autonomously, keep moving |
| Stopping on the first API/service failure | Retry with exponential backoff (5 attempts) |
| Asking the user which path to take | Pick the path that finishes stories in order |
@ -0,0 +1,99 @@
# Stop Hook Troubleshooting

**Related:** See `stop-hook-config.md` for core configuration.

---

## Child Session Handling (Session 19 Fix)

**CRITICAL:** The stop hook is installed at the PROJECT level. When the orchestrator spawns tmux sessions (create-story, dev-story, code-review), those child Claude instances:
1. Run in the same project directory
2. Read the same `.claude/settings.json`
3. Have the same stop hook configured
4. See the same marker file

**Problem:** Without a way to tell them apart, the stop hook blocks child sessions from completing, creating infinite loops.

**Solution:** All tmux child sessions MUST be spawned with:

```bash
tmux new-session -d -s "SESSION_NAME" -e STORY_AUTOMATOR_CHILD=true
```

The `-e STORY_AUTOMATOR_CHILD=true` flag exports the environment variable to the session. The stop hook checks this FIRST and immediately allows the stop if it is set.

**Who gets blocked vs. allowed:**

| Session Type | STORY_AUTOMATOR_CHILD | Stop Hook Behavior |
|--------------|----------------------|-------------------|
| Orchestrator | not set | BLOCKED (if marker + stories remaining) |
| create-story | `true` | ALLOWED (always) |
| dev-story | `true` | ALLOWED (always) |
| code-review | `true` | ALLOWED (always) |
| testarch-automate | `true` | ALLOWED (always) |
| Internal scripts (e.g., haiku calls) | `true` | ALLOWED (always) |

---

## Internal Claude Calls (Session 20 Fix)

**CRITICAL:** Scripts that internally call `claude` (like `story-automator tmux-status-check` using Haiku for wait estimation) MUST prefix the call with the environment variable:

```bash
# WRONG - will hang when the stop hook blocks the claude exit
RESULT=$(claude -p --model haiku "..." 2>/dev/null)

# CORRECT - allows claude to exit normally
RESULT=$(STORY_AUTOMATOR_CHILD=true claude -p --model haiku "..." 2>/dev/null)
```

**Why:** Even non-interactive `claude -p` calls trigger the stop hook when they exit. Without the env var, the hook sees the marker file and blocks, causing the script to hang indefinitely.

---

## Stop Hook Messages Are NOT User Input

**When you present a menu and wait for user input, the stop hook may fire with messages like:**
> "Story Automator is running with N stories remaining. Continue processing..."

**THIS IS NOT USER INPUT.** Do not interpret stop hook feedback as a menu selection.

- NEVER treat "continue processing" as selecting [R]esume
- NEVER proceed past a menu because the stop hook fired
- ALWAYS wait for ACTUAL user input (a typed response)
- Stop hook messages are about STOPPING behavior only

**Why this happens:** The stop hook fires when Claude pauses, not just when explicitly stopping. During menu waits, it may fire repeatedly. Ignore these messages while waiting for user input.

---

## Manual Override

If the orchestrator gets stuck, users can:
1. Remove the marker file: `rm .claude/.story-automator-active` (from the project root)
2. Stop Claude normally
3. Resume later with the continue flow

**For multi-project cleanup:**
```bash
# Remove the marker for the current project only
rm -f .claude/.story-automator-active

# Clean up project-scoped state files (optional)
PROJECT_HASH=$(echo -n "$PWD" | md5sum | cut -c1-8)
rm -f /tmp/.sa-${PROJECT_HASH}-session-*
rm -f /tmp/sa-${PROJECT_HASH}-output-*
```

---

## Troubleshooting

| Issue | Check |
|-------|-------|
| Hook not running | Valid JSON in settings? Script executable? Session restarted? |
| "no such file" | BMAD installed? Path correct? `ls -la _bmad/.../bin/` |
| Premature stops | Marker exists? `storiesRemaining > 0`? v2 fix applied? |
| Child sessions blocked | `STORY_AUTOMATOR_CHILD=true` set? Check the spawn command. |
| Script hangs | Internal claude calls missing the env var? See the Session 20 Fix. |
| Hook fires during menus | Normal behavior - ignore the messages, wait for real input. |
@ -0,0 +1,87 @@
# Sub-Agent Analysis Prompts

**Purpose:** Analysis-focused prompt templates for sub-agents spawned during story-automator execution.

**Related:** See `subagent-prompts.md` for core execution prompts (parser, reader, updater).

---

## Code Review Analyzer

**Use:** Analyze code review output to determine review status and next steps.

**Prompt:**
```
You are a code review analyzer. Analyze the code review session output.

Story: {story_name}
Review cycle: {cycle_number} of 3
Review output:
---
{review_output}
---

Determine the review outcome by looking for:
1. "Story Status: done" or "Story Status: in-progress"
2. "Issues Fixed: N" count
3. "Issues Found: N High, N Medium, N Low"

Return:
{
  "storyStatus": "done|in-progress|unknown",
  "issuesFixed": N,
  "highIssues": N,
  "mediumIssues": N,
  "lowIssues": N,
  "recommendation": "proceed|retry|escalate",
  "summary": "brief description of outcome"
}
```

**Decision logic:**
- storyStatus == "done" → proceed (exit review loop)
- storyStatus == "in-progress" → retry (new review cycle needed)
- storyStatus == "unknown" → check sprint-status.yaml directly

**CRITICAL:** The orchestrator MUST verify sprint-status.yaml after review completes. The sub-agent analysis is advisory; sprint-status.yaml is the source of truth.
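
The decision logic above, sketched with `jq` (the function name is illustrative; the `check-sprint-status` branch stands in for reading sprint-status.yaml directly):

```shell
# Map the analyzer's JSON reply to the orchestrator's next action.
review_next_action() {
  local status
  status=$(printf '%s' "$1" | jq -r '.storyStatus // "unknown"')
  case "$status" in
    done)        echo "proceed" ;;
    in-progress) echo "retry" ;;
    *)           echo "check-sprint-status" ;;
  esac
}
```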

---

## Dependency Analyzer

**Use:** Analyze stories for parallel execution safety.

**Prompt:**
```
You are a dependency analyzer. Determine if these stories can safely run in parallel.

Stories to analyze:
{stories_list}

For each pair of stories, check for:
- File conflicts (modifying the same files)
- Logical dependencies (one builds on another)
- Resource conflicts (same database tables, API endpoints)
- Test conflicts (interfering test data)

Return:
{
  "parallelSafe": true|false,
  "conflicts": [
    {
      "story1": "...",
      "story2": "...",
      "conflictType": "file|logical|resource|test",
      "description": "..."
    }
  ],
  "recommendation": "parallel|sequential|partial",
  "suggestedOrder": ["story order if sequential needed"]
}
```

**Parallel safety indicators:**
- Different feature areas → likely safe
- Same component/module → check files
- Database migrations → sequential only
- Shared test fixtures → check for conflicts
@ -0,0 +1,153 @@
# Sub-Agent Prompt Templates

**Purpose:** Core prompt templates for sub-agents spawned during story-automator execution.

**Related:** See `subagent-prompts-analysis.md` for analysis prompts (code review, dependency).

---

## Session Output Parser

**Use:** Parse tmux session output to determine success/failure status.

**Prompt (v1.2.0 - strengthened):**
```
You are a session output parser. Your job is CRITICAL - incorrect parsing leads to workflow failures.

## MANDATORY STEPS (do these IN ORDER):

1. **READ THE ENTIRE FILE FIRST** - Use the Read tool to load the complete file
2. **COUNT LINES** - Note the total line count. If <50 lines, output may be truncated
3. **SCAN FOR KEY MARKERS** - Look for these patterns:
   - SUCCESS: "✅", "complete", "done", "Story file created", "Tests passed"
   - FAILURE: "❌", "error", "failed", "Exception", "panic"
   - TRUNCATED: File ends mid-sentence, no clear conclusion

4. **ANALYZE TASK PROGRESS** - Look for todo markers:
   - "☒" = completed task
   - "☐" = pending task
   - Extract: tasks_completed / tasks_total

5. **DETERMINE STATUS:**
   - SUCCESS: Clear completion markers AND file not truncated
   - FAILURE: Error markers OR crash indicators
   - AMBIGUOUS: Truncated output OR no clear markers (recommend escalate)

Session: {session_id}
Step: {step_name}
Story: {story_name}

Output file: {output_file_path}

## RESPONSE FORMAT (strict JSON):
{
  "status": "SUCCESS|FAILURE|AMBIGUOUS",
  "summary": "1-2 sentence description",
  "tasks_completed": 0,
  "tasks_total": 0,
  "issues": ["list any errors found"],
  "nextAction": "proceed|retry|escalate",
  "confidence": "high|medium|low",
  "line_count": 0,
  "reasoning": "brief explanation of how you determined status"
}

## CRITICAL RULES:
- If output appears truncated (ends abruptly), set status="AMBIGUOUS" and nextAction="escalate"
- NEVER guess status - if unclear, use AMBIGUOUS
- Include line_count to verify you read the whole file
- For dev-story: tasks_completed < tasks_total with an idle session = FAILURE (session crashed)
```

**Context for the parser:**
- For create-story: look for "Story file created" or a file path in the output. Verify the file exists.
- For dev-story: look for "Implementation complete", "Status: review/done", and test pass indicators
- For code-review: look for issue counts by severity (CRITICAL, HIGH, MEDIUM, LOW)
- For automate: look for test file creation confirmation

**Why strengthened (Session 3):** The sub-agent sometimes returned incomplete analysis because it didn't read the entire file or missed truncation indicators.

---

## Story Reader

**Use:** Read a story file and produce a structured summary for pre-flight context.

**Prompt:**
```
You are a story reader. Analyze the following story file and extract key information for orchestration.

Story file: {story_file_path}

Content:
---
{story_content}
---

Extract and return:
{
  "storyId": "...",
  "title": "...",
  "type": "feature|bugfix|refactor|test|docs",
  "complexity": "simple|moderate|complex",
  "dependencies": ["list of dependencies or blockers"],
  "acceptanceCriteria": ["list of key acceptance criteria"],
  "technicalNotes": "any technical implementation hints",
  "estimatedSteps": ["create-story", "dev-story", "automate?", "code-review"],
  "parallelSafe": true|false,
  "parallelReason": "why parallel execution is safe or not"
}
```

---

## State Document Updater

**Use:** Generate state document update entries.

**Prompt:**
```
You are a state document updater. Generate the appropriate update for the orchestration state.

Action type: {action_type}
Story: {story_name}
Step: {step_name}
Result: {result}
Details: {details}

Generate:
1. Action log entry (timestamped)
2. Progress table update (if applicable)
3. Session reference update (if applicable)

Return:
{
  "actionLogEntry": "timestamp | story | step | action | result",
  "progressUpdate": {
    "story": "...",
    "column": "...",
    "value": "..."
  },
  "sessionRef": {
    "sessionId": "...",
    "status": "...",
    "completedAt": "..."
  }
}
```

---

## Usage Notes

1. **Context Isolation:** Each sub-agent runs in its own context. Pass only the necessary information.

2. **Return Format:** Always expect JSON responses for easy parsing.

3. **Error Handling:** If a sub-agent response doesn't parse, escalate to the user.

4. **Timeout:** Sub-agent calls should complete within 60 seconds by default, adjusted to the task and context. On timeout, retry once, then escalate.

5. **Logging:** Log all sub-agent calls and responses to the action log for debugging.

6. **Analysis Prompts:** For code review and dependency analysis prompts, see `subagent-prompts-analysis.md`.
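
For note 3, a guard like the following could sit in front of any consumer of a sub-agent reply (illustrative sketch; assumes `jq` is available):

```shell
# Succeeds only when the response is valid JSON containing the required key.
validate_subagent_json() {
  local response="$1" required_key="$2"
  printf '%s' "$response" | jq -e --arg k "$required_key" 'has($k)' >/dev/null 2>&1
}
```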
@ -0,0 +1,93 @@
# Success Patterns

**Purpose:** Patterns for detecting when each workflow step has completed successfully.

---

## create-story

**Success indicators:**
- Story file created at the expected path
- Story file contains the required sections (title, acceptance criteria, etc.)
- Session output contains "Story created" or a similar confirmation

**Failure indicators:**
- Error messages in the session output
- Story file not found after the session completes
- Session exits with a non-zero code
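
Marker checks like these can be sketched as a single classifier (the function name and pattern lists are illustrative, mirroring the indicator lists in this file):

```shell
# Classify captured session output as SUCCESS / FAILURE / AMBIGUOUS.
classify_session_output() {
  local output="$1"
  if printf '%s' "$output" | grep -qiE 'error|failed|exception|panic'; then
    echo "FAILURE"
  elif printf '%s' "$output" | grep -qiE 'story created|implementation complete|tests pass'; then
    echo "SUCCESS"
  else
    echo "AMBIGUOUS"  # no clear markers -> escalate or inspect further
  fi
}
```

Checking failure markers first is deliberate: output that mentions both a completion phrase and an error should be treated as suspect rather than as a clean pass.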

---

## dev-story

**Success indicators:**
- Code changes committed or staged
- Tests pass (if applicable)
- Session output contains "Implementation complete" or similar
- No unresolved errors in the session output

**Failure indicators:**
- Test failures
- Unresolved compilation/lint errors
- Session output contains error messages
- Session times out or crashes

---

## automate (guardrail tests)

**Success indicators:**
- Test files created
- Tests pass when run
- Session output confirms test generation is complete

**Failure indicators:**
- Test generation errors
- Generated tests fail immediately
- Session output contains errors

---

## code-review

**Success indicators (clean):**
- "No issues found" or "LGTM" in the session output
- Zero blocking issues reported
- Only informational/optional suggestions remain

**Success indicators (issues found):**
- Clear list of issues with file:line references
- Issues categorized by severity
- Actionable fix suggestions provided

**Failure indicators:**
- Unable to complete the review
- Session crashes or times out
- Ambiguous output that can't be parsed

---

## git-commit

**Success indicators:**
- Commit created successfully
- Commit message follows the convention
- No uncommitted changes remain (for the story scope)

**Failure indicators:**
- Git errors (merge conflicts, etc.)
- Commit hook failures
- Unable to stage changes

---

## retrospective

**Success indicators:**
- Retrospective session completes
- Summary document generated
- Learnings captured

**Failure indicators:**
- Session incomplete
- Unable to generate summary
|
||||

@ -0,0 +1,190 @@

# tmux Commands Reference

**Related:** See `workflow-commands.md` for BMAD workflow invocation commands.

---

## Session Names

**Pattern (v3.0 - MULTI-PROJECT):** `sa-{project_slug}-{YYMMDD}-{HHMMSS}-e{epic}-s{story}-{step}`

**Examples:**
- `sa-myproj-260114-223045-e6-s64-dev` (Project "myproject", Epic 6, Story 6.4, dev step)
- `sa-webapp-260114-223512-e6-s64-review-1` (Project "webapp", review cycle 1)

### Project Slug for Multi-Project Support

**Why project slug (v3.0):**
- **Isolates sessions per project** - List only current project's sessions
- **Prevents cross-project interference** - Won't kill another project's sessions
- **Enables parallel orchestration** - Run story-automator on multiple projects simultaneously

**Generate project slug:**
```bash
# First 8 chars of project directory name (lowercase, alphanumeric only)
project_slug=$(basename "$PWD" | tr '[:upper:]' '[:lower:]' | tr -cd '[:alnum:]' | cut -c1-8)
```

**Example:** Project at `/home/user/my-awesome-project` → `project_slug="myawesom"`

**Why timestamps with seconds (v2.1):**
- Prevents collisions when multiple sessions spawn in same minute
- Easier debugging across multiple orchestration runs
- Session names are unique even if re-running same story
- Can identify stale sessions from crashed runs

**Generate full session name:**
```bash
project_slug=$(basename "$PWD" | tr '[:upper:]' '[:lower:]' | tr -cd '[:alnum:]' | cut -c1-8)
timestamp=$(date +%y%m%d-%H%M%S)   # e.g. "260114-223045"
session_name="sa-${project_slug}-${timestamp}-e{epic}-s{story_suffix}-{step}"
```

### Listing/Killing Project-Specific Sessions

**List only current project's sessions:**
```bash
project_slug=$(basename "$PWD" | tr '[:upper:]' '[:lower:]' | tr -cd '[:alnum:]' | cut -c1-8)
tmux list-sessions 2>/dev/null | grep "^sa-${project_slug}-"
```

**Kill only current project's sessions:**
```bash
project_slug=$(basename "$PWD" | tr '[:upper:]' '[:lower:]' | tr -cd '[:alnum:]' | cut -c1-8)
tmux list-sessions -F '#{session_name}' 2>/dev/null | grep "^sa-${project_slug}-" | xargs -I {} tmux kill-session -t {}
```

### No Dots in Session Names

**tmux session names CANNOT contain dots (`.`).** Story IDs like "6.2" must be converted to hyphens.

```bash
# Story ID to session name conversion
# Story ID "6.2" → session suffix "s6-2" (NOT "s6.2")
session_suffix=$(echo "{story_id}" | tr '.' '-')
```

**WRONG:** `sa-epic6-s6.2-review-1` ← Will fail with "can't find pane" error
**RIGHT:** `sa-epic6-s6-2-review-1` ← Works correctly

---

## Status Check Script (PREFERRED)

**ALWAYS use the status check script instead of raw pane capture.**

Script: `{project_root}/_bmad/bmm/4-implementation/bmad-story-automator-go/bin/story-automator tmux-status-check`

```bash
# ALWAYS use absolute path - relative paths break when directory changes
{project_root}/_bmad/bmm/4-implementation/bmad-story-automator-go/bin/story-automator tmux-status-check "SESSION_NAME"
```

**Returns CSV:** `status,todos_done,todos_total,active_task,wait_estimate,session_state`

```
active,3,7,Running tests,90,in_progress
idle,0,0,,0,just_started
idle,0,0,,0,completed
not_found,0,0,,0,not_found
error,0,0,capture_failed,30,error
```

**CSV Columns:**
1. `status` - "active" | "idle" | "not_found" | "error" | "crashed"
2. `todos_done` - completed todo count (Claude only; Codex returns 0)
3. `todos_total` - total todo count (Claude only; Codex returns 0)
4. `active_task` - current task (truncated, no commas) OR output file path (for --full/crashed)
5. `wait_estimate` - seconds to wait before next check (heuristic-based). For crashed: exit code.
6. `session_state` - **KEY COLUMN** for decision making:
   - `just_started` - Session spawned, agent loading
   - `in_progress` - Actively working
   - `completed` - Was active, now finished cleanly
   - `crashed` - Session exited with non-zero status (v2)
   - `stuck` - Never became active after multiple polls
   - `not_found` / `error` - Problem states
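
Because column 4 is guaranteed comma-free, the six columns split cleanly with `read` in bash. A minimal parsing sketch (the sample line is one of the examples shown for this CSV format):

```shell
# Split one status line into named variables (column order per the CSV spec)
line="active,3,7,Running tests,90,in_progress"
IFS=',' read -r status todos_done todos_total active_task wait_estimate session_state <<< "$line"
echo "$session_state"   # in_progress
```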

**Agent Detection (v1.3.0):**
The status check script automatically detects Claude vs Codex sessions:
- **Claude:** Looks for `ctrl+c to interrupt`, `☒`/`☐` checkboxes
- **Codex:** Looks for `OpenAI Codex`, `codex exec`, `codex-cli`, `gpt-*-codex`, `tokens used`
- **Codex completion cues:** `tokens used` line, shell prompt return (e.g., `❯`, `$`, `#`), or clean tmux exit
- Codex sessions get 1.5x longer wait estimates (90s vs 60s default); "succeeded" alone is not treated as active
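
A rough bash equivalent of this classification, using only the cue strings listed above (the real detection lives in the Go binary; `detect_agent` is an illustrative name):

```shell
# Classify a pane capture as claude/codex/unknown by cue strings (sketch)
detect_agent() {
  local capture="$1"
  if grep -qE 'ctrl\+c to interrupt|☒|☐' <<< "$capture"; then
    echo "claude"
  elif grep -qE 'OpenAI Codex|codex exec|codex-cli|tokens used' <<< "$capture"; then
    echo "codex"
  else
    echo "unknown"
  fi
}
```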

**For full output (when completed/stuck):**
```bash
./bin/story-automator tmux-status-check "SESSION_NAME" --full
```
Returns: `idle,0,0,/tmp/sa-output-SESSION_NAME.txt,0,completed`

---

## Polling Pattern (for step-03-execute)

**Use `wait_estimate` from CSV - heuristic estimates optimal interval.**

| status | Action |
|--------|--------|
| `active` | Log: "{todos_done}/{todos_total} - {active_task}". Sleep `wait_estimate` seconds, re-poll |
| `idle` | Run `--full`, parse output per success-patterns.md |
| `crashed` | Session crashed! Column 4 = output file, Column 5 = exit code. Apply adaptive retry strategy. |
| `not_found` | Session ended unexpectedly, escalate |
| `error` | Retry once, then escalate |

**Crashed vs Completed (v2):**
- `completed` = session was active, then exited cleanly (exit code 0)
- `crashed` = session exited with non-zero exit code (context limit, API error, etc.)
- Always check session_state to distinguish between success and failure!
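
The decision table amounts to a simple loop. A simplified sketch (here `check_status` is a stand-in for the real `tmux-status-check` invocation, and the `error` retry-once rule is collapsed into the sleep branch):

```shell
# Poll until the session reaches a terminal state (sketch)
poll_session() {
  local session="$1" status wait_s session_state
  while true; do
    IFS=',' read -r status _ _ _ wait_s session_state <<< "$(check_status "$session")"
    case "$session_state" in
      completed)          echo "done";   return 0 ;;
      crashed|not_found)  echo "failed"; return 1 ;;
      *)                  sleep "$wait_s" ;;   # just_started / in_progress / error
    esac
  done
}
```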

---

## Core Commands

### Create Session + Run Command

**CRITICAL: All child sessions MUST set `STORY_AUTOMATOR_CHILD=true`**

This environment variable tells the stop hook to allow the session to complete normally.
Without it, the stop hook will block child sessions from stopping, causing infinite loops.

```bash
# CRITICAL: Always use -x 200 -y 50 for wide terminal (prevents line-wrap issues with long commands)
tmux new-session -d -s "SESSION_NAME" -x 200 -y 50 -c "PROJECT_PATH" -e STORY_AUTOMATOR_CHILD=true
tmux send-keys -t "SESSION_NAME" "COMMAND_HERE" Enter
```

**Terminal Dimensions:** The `-x 200 -y 50` flags create a wider terminal window. This is **REQUIRED** for commands longer than 80 characters (e.g., YOLO mode retrospective prompts ~1500 chars). Without this, line-wrapping causes shell parsing failures and silent command execution failures.

**Long Command Script Files:** Commands exceeding 500 characters are written to `/tmp/sa-cmd-{session}.sh` and executed via `bash /tmp/sa-cmd-{session}.sh`. The `bash` prefix is critical — without it, the shell receives a raw path and silently fails. These script files are not auto-cleaned; they persist in `/tmp/` until system cleanup.
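
The spawn-side logic can be sketched like this (the 500-character threshold and `/tmp/sa-cmd-{session}.sh` pattern come from the paragraph above; `send_command` is an illustrative name, not the binary's API):

```shell
# Send a command to a session, falling back to a script file past 500 chars (sketch)
send_command() {
  local session="$1" cmd="$2" script
  if [ "${#cmd}" -le 500 ]; then
    tmux send-keys -t "$session" "$cmd" Enter
  else
    script="/tmp/sa-cmd-${session}.sh"
    printf '%s\n' "$cmd" > "$script"
    # The "bash " prefix matters: a bare path sent via send-keys is not executed
    tmux send-keys -t "$session" "bash $script" Enter
  fi
}
```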

See `data/tmux-long-command-debugging.md` for detailed troubleshooting.

### Other Commands

```bash
tmux has-session -t "SESSION" 2>/dev/null   # Check exists
tmux kill-session -t "SESSION"              # Kill session
tmux list-sessions                          # List all
tmux capture-pane -t "SESSION" -p -S -100   # Raw capture (use sparingly)
```

---

## Variables

**Agent Configuration (v1.3.0):**

| Variable | Claude | Codex |
|----------|--------|-------|
| CLI | `claude --dangerously-skip-permissions` | `codex exec --full-auto` |
| Prompt Style | `/bmad-bmb-workflow` command | Natural language |
| Timeout Multiplier | 1x (60min) | 1.5x (90min) |
| Todo Tracking | ☒/☐ checkboxes | Not supported |

**Environment Variables:**
- `AI_AGENT` = `claude` or `codex` (used by story-automator tmux-wrapper and story-automator monitor-session)
- `AI_COMMAND` = Full CLI (legacy, deprecated)

`{projectPath}` = project root

*See `workflow-commands.md` for BMAD workflow command patterns (including Codex natural language prompts).*

@ -0,0 +1,138 @@

# Tmux Long Command Debugging Guide

**Created:** 2026-01-21
**Context:** Debugging retrospective session failures in story-automator
**Root Cause:** Terminal width causes line-wrap corruption of long commands

**Related:** See `tmux-long-command-testing.md` for detailed investigation steps and test scripts.

---

## Problem Summary

Tmux sessions spawned via `tmux send-keys` were failing silently when commands exceeded ~1000 characters. Sessions would spawn successfully but the command would never execute, resulting in `stuck/never_active` status.

**Symptoms:**
- Session spawns successfully (tmux session exists)
- Command appears in terminal output (visible in capture-pane)
- No child processes running (Claude never starts)
- No error messages visible
- Monitor reports `stuck` or `never_active`

---

## Root Cause

**Default tmux terminal dimensions:** 80 columns × 24 rows

When `tmux send-keys` sends a command longer than the terminal width:
1. The command wraps across multiple lines in the terminal buffer
2. The shell receives the wrapped input as if it were multiple lines
3. Shell parsing fails or behaves unexpectedly with multi-line wrapped input
4. The command silently fails or produces syntax errors

**Critical insight:** This is NOT a tmux bug or a shell bug individually - it's an interaction problem between how `tmux send-keys` delivers characters and how the shell's line editor handles wrapped input.

---

## Solution

Add explicit dimensions when creating tmux sessions:

```bash
# Before (BROKEN for long commands):
tmux new-session -d -s "$session_name" -c "$PROJECT_ROOT"

# After (FIXED):
tmux new-session -d -s "$session_name" -x 200 -y 50 -c "$PROJECT_ROOT"
```

**Why 200×50:**
- 200 columns handles commands up to ~3000 chars without wrapping
- 50 rows provides adequate scrollback for monitoring
- These dimensions don't affect the actual terminal the user might attach to

---

## Key Insights

### 1. Silent Failures are Deceptive

The command appears in the terminal output but never executes. This makes debugging difficult because:
- `tmux capture-pane` shows the command was "sent"
- No error message is visible
- The session exists and appears healthy

**Lesson:** Always verify command execution by checking for child processes or activity indicators, not just command presence.

### 2. Length Threshold is Approximate

The exact failure point depends on:
- Terminal width (obviously)
- Command content (special characters, quotes)
- Shell type (bash vs zsh)
- tmux version

**Lesson:** Use generous margins. If your longest expected command is 1500 chars, use 200+ column width.

### 3. Quote Escaping is NOT the Issue

Initial hypothesis was that escaped quotes (`\"`) or special characters caused parsing failures. Testing proved this wrong:

```bash
# This works fine with wide terminal:
cmd='claude "test with \"quotes\" inside"'
tmux send-keys -t "$sess" "$cmd" Enter   # SUCCESS at 200 cols
```

**Lesson:** Don't chase red herrings. Test the simplest hypothesis (length/width) before investigating complex escaping issues.

### 4. Process Detection is Reliable

The most reliable way to verify command execution:

```bash
PANE_PID=$(tmux display -t "$session" -p '#{pane_pid}')
if pgrep -P "$PANE_PID" >/dev/null 2>&1; then
  echo "Command is running"
else
  echo "No child processes - command failed"
fi
```

---

## Checklist for Future Debugging

When tmux commands fail silently:

- [ ] Check command length: `echo ${#cmd}`
- [ ] Check terminal dimensions: `tmux display -t "$sess" -p '#{pane_width}'`
- [ ] Test with wider terminal: `-x 200 -y 50`
- [ ] Verify with process check: `pgrep -P $PANE_PID`
- [ ] Check pane status: `tmux display -t "$sess" -p '#{pane_dead}'`
- [ ] Capture full output: `tmux capture-pane -t "$sess" -p -S -100`

---

## Bug: Script File Path Not Executed (2026-02-09)

**Symptoms identical to the terminal-width issue**, but with a different root cause.

When `spawn` receives a command longer than 500 characters, it writes the command to a script file (`/tmp/sa-cmd-{session}.sh`) and sends the path via `tmux send-keys`. However, the path was sent **without the `bash` prefix**, so the shell received a raw file path instead of an executable command.

**Affected commands:** Retrospective prompts (~1577 chars) — all other steps (create-story, dev-story, code-review) are under 500 chars and use direct `send-keys`.

**Fix:** `tmux_cmds.go` — changed `tmuxSendKeys(sessionName, scriptFile, true)` to `tmuxSendKeys(sessionName, "bash "+scriptFile, true)`. Also added error handling for `os.WriteFile` and `tmuxSendKeys` (previously silently discarded with `_ =`).

**Lesson:** Two independent failure modes can produce identical symptoms (`never_active`). The `-x 200 -y 50` fix handles line-wrapping for direct `send-keys`, but the script-file fallback path had its own bug. Always check both paths when debugging.

---

## Related Files

- `bin/story-automator tmux-wrapper` - Session spawning with `-x 200 -y 50` fix + script file `bash` prefix fix
- `bin/story-automator monitor-session` - Polling loop that detects stuck sessions
- `bin/story-automator tmux-status-check` - Status detection with activity indicators
- `data/monitoring-pattern.md` - Overall monitoring architecture
- `data/tmux-long-command-testing.md` - Detailed investigation and test scripts

@ -0,0 +1,184 @@

# Tmux Long Command Testing & Investigation

**Related:** See `tmux-long-command-debugging.md` for root cause analysis and solution.

---

## Investigation Process

### Step 1: Verify Command Syntax

First, confirm the command itself is valid:

```bash
# Build the command
cmd=$("$scripts" tmux-wrapper build-cmd retro 2 --agent "claude")

# Check for syntax issues
echo "$cmd" | od -c | head -20   # Look for unexpected characters

# Test parsing
bash -n -c "$cmd"   # Syntax check only
```

**Finding:** Command syntax was correct. Quotes and escapes were properly formed.

### Step 2: Test Progressive Lengths

Binary search to find the breaking point:

```bash
test_length() {
  local len=$1
  local sess="test-len-$len-$$"
  local prompt="bmad-retrospective 2 $(printf 'x%.0s' $(seq 1 $len))"

  tmux new-session -d -s "$sess"
  tmux send-keys -t "$sess" "claude --dangerously-skip-permissions \"$prompt\"" Enter
  sleep 5

  local capture=$(tmux capture-pane -t "$sess" -p)
  tmux kill-session -t "$sess" 2>/dev/null

  if echo "$capture" | grep -qiE "interrupt|Working|Running"; then
    echo "Length $len: SUCCESS"
  else
    echo "Length $len: FAILED"
  fi
}

# Test different lengths
test_length 200    # SUCCESS
test_length 500    # SUCCESS
test_length 800    # SUCCESS
test_length 1000   # SUCCESS
test_length 1200   # FAILED
```

**Finding:** Commands failed around 1000-1200 characters.

### Step 3: Test Terminal Width Hypothesis

```bash
# Default dimensions
sess="test-default-$$"
tmux new-session -d -s "$sess"
tmux display -t "$sess" -p 'cols:#{pane_width} rows:#{pane_height}'
# Output: cols:80 rows:24

# Send long command
tmux send-keys -t "$sess" "$long_cmd" Enter
sleep 10
# Result: FAILED - no activity

# Wide terminal
sess="test-wide-$$"
tmux new-session -d -s "$sess" -x 200 -y 50
tmux display -t "$sess" -p 'cols:#{pane_width} rows:#{pane_height}'
# Output: cols:200 rows:50

# Send same long command
tmux send-keys -t "$sess" "$long_cmd" Enter
sleep 10
# Result: SUCCESS - Claude running!
```

**Finding:** Wide terminal (200 cols) prevents the failure.

### Step 4: Understand the Mechanism

The shell's line editor (readline/zle) handles input differently when lines wrap:

1. **Normal input:** Characters arrive, shell builds command buffer
2. **Wrapped input:** Terminal sends characters that visually wrap
3. **Problem:** Some shell/terminal combinations mishandle the wrap points
4. **Result:** Command buffer corruption or premature execution

This is why the command "appears" in the terminal (tmux captured it) but doesn't execute properly (shell didn't parse it correctly).

---

## Testing Methodology

### Quick Smoke Test

```bash
#!/bin/bash
# smoke-test-tmux-command.sh

cmd="$1"
cmd_len=${#cmd}

echo "Testing command of length: $cmd_len"

# Test with default dimensions
sess="smoke-default-$$"
tmux new-session -d -s "$sess"
tmux send-keys -t "$sess" "$cmd" Enter
sleep 5
if tmux capture-pane -t "$sess" -p | grep -qiE "interrupt|Working|Running|Read"; then
  echo "Default (80x24): SUCCESS"
else
  echo "Default (80x24): FAILED"
fi
tmux kill-session -t "$sess" 2>/dev/null

# Test with wide dimensions
sess="smoke-wide-$$"
tmux new-session -d -s "$sess" -x 200 -y 50
tmux send-keys -t "$sess" "$cmd" Enter
sleep 5
if tmux capture-pane -t "$sess" -p | grep -qiE "interrupt|Working|Running|Read"; then
  echo "Wide (200x50): SUCCESS"
else
  echo "Wide (200x50): FAILED"
fi
tmux kill-session -t "$sess" 2>/dev/null
```

### Comprehensive Test

```bash
#!/bin/bash
# test-tmux-long-commands.sh

test_at_width() {
  local width=$1
  local cmd_len=$2
  local sess="test-w${width}-l${cmd_len}-$$"

  # Generate command of specific length
  local padding=$(printf 'x%.0s' $(seq 1 $cmd_len))
  local cmd="echo \"test $padding\""

  tmux new-session -d -s "$sess" -x "$width" -y 24
  tmux send-keys -t "$sess" "$cmd" Enter
  sleep 2

  local output=$(tmux capture-pane -t "$sess" -p)
  tmux kill-session -t "$sess" 2>/dev/null

  if echo "$output" | grep -q "test xxx"; then
    echo "Width $width, Length $cmd_len: PASS"
    return 0
  else
    echo "Width $width, Length $cmd_len: FAIL"
    return 1
  fi
}

# Test matrix
for width in 80 120 160 200; do
  for len in 500 1000 1500 2000; do
    test_at_width $width $len
  done
done
```

---

## References

- tmux manual: `man tmux` (see `new-session` options)
- Shell line editing: readline (bash) / zle (zsh)
- Related issue: Commands with many arguments or long strings failing in tmux

@ -0,0 +1,174 @@

# Workflow Commands Reference

**Related:** See `tmux-commands.md` for session naming and management.

---

## Multi-Agent Support (v1.3.0)

| Agent | CLI Command | Prompt Style |
|-------|-------------|--------------|
| **Claude** | `claude --dangerously-skip-permissions` | Command syntax: `/bmad-bmb-workflow` |
| **Codex** | `codex exec --full-auto` | Natural language prompt |

**CRITICAL: Claude and Codex use DIFFERENT prompt styles:**
- **Claude:** `bmad-create-story 6.1` (command syntax)
- **Codex:** Natural language explaining what workflow to run (see below)

**Why Codex is different:** Codex doesn't use slash commands like Claude. It takes plain text prompts and figures out what to do.

---

## Command Syntax

### Claude Syntax

**Commands take POSITIONAL ARGUMENTS, not flags. MUST be quoted.**

```bash
claude --dangerously-skip-permissions "bmad-command-name ARG1 ARG2"
```

**WRONG:** `claude bmad-dev-story --story file.md` (flags don't exist)
**WRONG:** `claude bmad-dev-story file.md` (missing quotes - args not passed)
**RIGHT:** `claude "bmad-dev-story file.md"` (quoted - args passed correctly)

### Codex Syntax (v1.3.0)

**Codex uses natural language prompts that explain the workflow to execute.**

```bash
codex exec "Execute the BMAD workflow-name workflow for story STORY_ID.

Workflow location: _bmad/bmm/workflows/path/to/workflow/
Story file: _bmad-output/implementation-artifacts/STORY_PREFIX-*.md
[Additional instructions specific to the workflow]

Story ID: STORY_ID" --full-auto
```

**CRITICAL:** The prompt must include:
1. Which workflow to execute
2. Where the workflow files are located
3. Where to find/create story files
4. The story ID

---

## dev-story

**Claude:**
```bash
tmux send-keys -t "SESSION" 'claude --dangerously-skip-permissions "bmad-dev-story STORY_ID"' Enter
```

**Codex (v1.3.0):**
```bash
codex exec "Execute the BMAD dev-story workflow for story STORY_ID.

Workflow location: _bmad/bmm/4-implementation/bmad-dev-story/
Story file: _bmad-output/implementation-artifacts/STORY_PREFIX-*.md
Implement all tasks marked [ ]. Run tests. Update checkboxes.

Story ID: STORY_ID" --full-auto
```

---

## code-review (REQUIRED after dev-story)

**MUST use BMAD /code-review workflow. Do NOT use Task agent for reviews.**

**CRITICAL (v2.0):** Include auto-fix instruction to prevent menu prompts.

**Claude:**
```bash
tmux send-keys -t "SESSION" 'claude --dangerously-skip-permissions "bmad-story-automator-review STORY_ID auto-fix all issues without prompting"' Enter
```

**Codex (v1.3.0):**
```bash
codex exec "Execute the BMAD code-review workflow for story STORY_ID.

Workflow location: _bmad/bmm/4-implementation/bmad-story-automator-review/
Story file: _bmad-output/implementation-artifacts/STORY_PREFIX-*.md
Review implementation, find issues, fix them automatically.
auto-fix all issues without prompting

Story ID: STORY_ID" --full-auto
```

**Why `auto-fix all issues without prompting`:** The code-review workflow normally presents a findings menu. This instruction tells it to automatically fix issues without prompting.

---

## create-story

**Requires story ID as positional argument.**

**Claude:**
```bash
tmux send-keys -t "SESSION" 'claude --dangerously-skip-permissions "bmad-create-story STORY_ID"' Enter
```

**Codex (v1.3.0):**
```bash
codex exec "Execute the BMAD create-story workflow for story STORY_ID.

Workflow location: _bmad/bmm/4-implementation/bmad-create-story/
- Read workflow.yaml for the process
- Use template.md as the output template
- Follow instructions.xml for detailed steps

Create story file at: _bmad-output/implementation-artifacts/STORY_PREFIX-*.md

Story ID: STORY_ID" --full-auto
```

**CRITICAL:** Always pass the story ID (e.g., "5.3") to ensure create-story only creates that ONE story.

---

## testarch-automate

**Claude:**
```bash
tmux send-keys -t "SESSION" 'claude --dangerously-skip-permissions "bmad-tea-testarch-automate STORY_ID"' Enter
```

**Codex (v1.3.0):**
```bash
codex exec "Execute the BMAD testarch-automate workflow for story STORY_ID.

Workflow location: _bmad/tea/workflows/testarch/automate/
Story file: _bmad-output/implementation-artifacts/STORY_PREFIX-*.md
Generate test automation for the implemented story.

Story ID: STORY_ID" --full-auto
```

---

## Variables

**Agent Configuration (v1.3.0):**

| Agent | CLI Command | Prompt Style |
|-------|-------------|--------------|
| Claude | `claude --dangerously-skip-permissions` | `/bmad-bmb-workflow` command syntax |
| Codex | `codex exec --full-auto` | Natural language (see examples above) |

`{projectPath}` = project root
`STORY_PREFIX` = story ID with dots replaced by hyphens (e.g., 6.1 → 6-1)

**Environment Variables (for scripts):**
- `AI_AGENT` = `claude` or `codex`
- `AI_COMMAND` = Full CLI command (legacy, deprecated)

---

## Notes

- Retrospectives are manual-only. Do not spawn in automated sessions.
- All commands assume session already created with `STORY_AUTOMATOR_CHILD=true`
- See `tmux-commands.md` for session creation patterns

@ -0,0 +1,131 @@

# Wrapup Templates

Templates for the wrapup step summary, learnings, and recommendations.

---

## Summary Report Template

```
**📊 Build Cycle Summary**

**Epic:** {epic_name}
**Stories:** {story_range} ({completed}/{total} completed)
**Duration:** {start_time} to {end_time}

---

**Story Results:**

| Story | Title | Status | Review Cycles | Notes |
|-------|-------|--------|---------------|-------|
{story_results_table}

---

**Execution Statistics:**

| Metric | Value |
|--------|-------|
| Stories Completed | {count} |
| Stories Skipped/Aborted | {count} |
| Total Code Review Cycles | {count} |
| Escalations | {count} |
| Git Commits | {count} |

---

**Session Summary:**

| Session Type | Count | Avg Duration |
|--------------|-------|--------------|
| create-story | {count} | {avg} |
| dev-story | {count} | {avg} |
| automate | {count} | {avg} |
| code-review | {count} | {avg} |

---

**Escalations Encountered:**
{escalation_list_or_'None'}

**Issues Resolved:**
{issues_resolved_list_or_'None'}
```

---

## Learnings Entry Template

Append this to the sidecar learnings file:

```markdown
## Run: {timestamp}

**Epic:** {epic_name}
**Stories:** {story_range}

### Patterns Observed
- {pattern_1}
- {pattern_2}

### Code Review Insights
- Common issues: {list}
- Average cycles to clean: {avg}

### Timing Estimates
- create-story: ~{avg_time}
- dev-story: ~{avg_time}
- code-review: ~{avg_time} per cycle

### Recommendations for Future Runs
- {recommendation_1}
- {recommendation_2}
```

**Patterns to capture:**
- Common code review issues (what kept failing?)
- Steps that frequently needed escalation
- Stories that took longer than expected
- Successful patterns (what worked well?)

---

## Recommendations Template

```
**💡 Recommendations**

Based on this build cycle run:

**For Future Runs:**
{recommendations_based_on_patterns}

**Process Improvements:**
{suggestions_for_workflow_improvements}

**Technical Debt:**
{any_tech_debt_identified}

**Documentation Needs:**
{any_docs_that_should_be_updated}
```

---

## Completion Message Template

```
**✅ Story Automator Complete**

**Results saved to:**
- State document: `{state_document_path}`
- Learnings: `{sidecarFile}`

**Stories implemented:** {count}
**Git commits made:** {count}

Thank you for using Story Automator. The state document contains full history for reference.

To run another build cycle, invoke the story-automator workflow again.
```

@ -0,0 +1,7 @@

#!/bin/bash
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
BIN="$SCRIPT_DIR/../bin/story-automator"

exec "$BIN" build-state-doc "$@"

@ -0,0 +1,7 @@
#!/bin/bash
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
BIN="$SCRIPT_DIR/../bin/story-automator"

exec "$BIN" codex-status-check "$@"

@ -0,0 +1,7 @@
#!/bin/bash
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
BIN="$SCRIPT_DIR/../bin/story-automator"

exec "$BIN" commit-story "$@"

@ -0,0 +1,7 @@
#!/bin/bash
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
BIN="$SCRIPT_DIR/../bin/story-automator"

exec "$BIN" derive-project-slug "$@"

@ -0,0 +1,7 @@
#!/bin/bash
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
BIN="$SCRIPT_DIR/../bin/story-automator"

exec "$BIN" ensure-marker-gitignore "$@"

@ -0,0 +1,7 @@
#!/bin/bash
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
BIN="$SCRIPT_DIR/../bin/story-automator"

exec "$BIN" ensure-stop-hook "$@"

@ -0,0 +1,7 @@
#!/bin/bash
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
BIN="$SCRIPT_DIR/../bin/story-automator"

exec "$BIN" epic-complete "$@"

@ -0,0 +1,7 @@
#!/bin/bash
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
BIN="$SCRIPT_DIR/../bin/story-automator"

exec "$BIN" heartbeat-check "$@"

@ -0,0 +1,7 @@
#!/bin/bash
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
BIN="$SCRIPT_DIR/../bin/story-automator"

exec "$BIN" list-sessions "$@"

@ -0,0 +1,7 @@
#!/bin/bash
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
BIN="$SCRIPT_DIR/../bin/story-automator"

exec "$BIN" monitor-session "$@"

@ -0,0 +1,7 @@
#!/bin/bash
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
BIN="$SCRIPT_DIR/../bin/story-automator"

exec "$BIN" orchestrator-helper "$@"

@ -0,0 +1,7 @@
#!/bin/bash
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
BIN="$SCRIPT_DIR/../bin/story-automator"

exec "$BIN" parse-epic "$@"

@ -0,0 +1,7 @@
#!/bin/bash
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
BIN="$SCRIPT_DIR/../bin/story-automator"

exec "$BIN" parse-story-range "$@"

@ -0,0 +1,7 @@
#!/bin/bash
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
BIN="$SCRIPT_DIR/../bin/story-automator"

exec "$BIN" parse-story "$@"

@ -0,0 +1,7 @@
#!/bin/bash
set -euo pipefail
||||
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
|
||||
BIN="$SCRIPT_DIR/../bin/story-automator"
|
||||
|
||||
exec "$BIN" sprint-compare "$@"
|
||||
|
|
@ -0,0 +1,7 @@
|
|||
#!/bin/bash
|
||||
set -euo pipefail
|
||||
|
||||
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
|
||||
BIN="$SCRIPT_DIR/../bin/story-automator"
|
||||
|
||||
exec "$BIN" state-metrics "$@"
|
||||
|
|
@ -0,0 +1,7 @@
|
|||
#!/bin/bash
|
||||
set -euo pipefail
|
||||
|
||||
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
|
||||
BIN="$SCRIPT_DIR/../bin/story-automator"
|
||||
|
||||
exec "$BIN" stop-hook "$@"
|
||||
|
|
@ -0,0 +1,7 @@
|
|||
#!/bin/bash
|
||||
set -euo pipefail
|
||||
|
||||
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
|
||||
BIN="$SCRIPT_DIR/../bin/story-automator"
|
||||
|
||||
exec "$BIN" tmux-status-check "$@"
|
||||
|
|
@ -0,0 +1,7 @@
|
|||
#!/bin/bash
|
||||
set -euo pipefail
|
||||
|
||||
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
|
||||
BIN="$SCRIPT_DIR/../bin/story-automator"
|
||||
|
||||
exec "$BIN" tmux-wrapper "$@"
|
||||
|
|
@ -0,0 +1,7 @@
|
|||
#!/bin/bash
|
||||
set -euo pipefail
|
||||
|
||||
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
|
||||
BIN="$SCRIPT_DIR/../bin/story-automator"
|
||||
|
||||
exec "$BIN" validate-state "$@"
|
||||
|
|
@ -0,0 +1,7 @@
|
|||
#!/bin/bash
|
||||
set -euo pipefail
|
||||
|
||||
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
|
||||
BIN="$SCRIPT_DIR/../bin/story-automator"
|
||||
|
||||
exec "$BIN" validate-story-creation "$@"
|
||||
|
|
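Every wrapper above is the same seven-line script with only the final subcommand changed. As a sketch (the output directory and the three subcommands shown are illustrative assumptions), a throwaway generator could stamp them out:

```shell
#!/usr/bin/env bash
# Sketch: generate thin wrappers that exec the shared story-automator binary.
# Output directory and the subcommand list here are illustrative assumptions.
set -euo pipefail

outdir="${1:-scripts}"
mkdir -p "$outdir"

for sub in build-state-doc commit-story parse-epic; do
  # Unquoted heredoc delimiter so $sub expands; runtime variables are escaped.
  cat > "$outdir/$sub.sh" <<EOF
#!/bin/bash
set -euo pipefail

SCRIPT_DIR="\$(cd "\$(dirname "\$0")" && pwd)"
BIN="\$SCRIPT_DIR/../bin/story-automator"

exec "\$BIN" $sub "\$@"
EOF
  chmod +x "$outdir/$sub.sh"
done
```

Each generated file matches the committed wrappers byte for byte apart from the subcommand name.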
@ -0,0 +1,226 @@
package main

import (
	"encoding/json"
	"os"
	"strings"
)

type agentConfigPreset struct {
	Name      string         `json:"name"`
	CreatedAt string         `json:"createdAt"`
	Config    map[string]any `json:"config"`
}

type agentConfigPresetsFile struct {
	Version string              `json:"version"`
	Presets []agentConfigPreset `json:"presets"`
}

func cmdAgentConfig(args []string) int {
	if len(args) < 1 {
		writeJSON(map[string]any{"ok": false, "error": "missing_subcommand"})
		return 1
	}

	sub := args[0]
	subArgs := args[1:]

	switch sub {
	case "list":
		return agentConfigList(subArgs)
	case "save":
		return agentConfigSave(subArgs)
	case "load":
		return agentConfigLoad(subArgs)
	case "delete":
		return agentConfigDelete(subArgs)
	default:
		writeJSON(map[string]any{"ok": false, "error": "unknown_subcommand", "subcommand": sub})
		return 1
	}
}

func parseAgentConfigArgs(args []string) (file, name, configJSON string) {
	for i := 0; i < len(args); i++ {
		switch args[i] {
		case "--file":
			if i+1 < len(args) {
				file = args[i+1]
				i++
			}
		case "--name":
			if i+1 < len(args) {
				name = args[i+1]
				i++
			}
		case "--config-json":
			if i+1 < len(args) {
				configJSON = args[i+1]
				i++
			}
		}
	}
	return
}

func loadPresetsFile(path string) (agentConfigPresetsFile, error) {
	data := agentConfigPresetsFile{Version: "1.0.0", Presets: []agentConfigPreset{}}
	if !fileExists(path) {
		return data, nil
	}
	raw, err := os.ReadFile(path)
	if err != nil {
		return data, err
	}
	if err := json.Unmarshal(raw, &data); err != nil {
		return data, err
	}
	if data.Presets == nil {
		data.Presets = []agentConfigPreset{}
	}
	return data, nil
}

func savePresetsFile(path string, data agentConfigPresetsFile) error {
	b, err := json.MarshalIndent(data, "", "  ")
	if err != nil {
		return err
	}
	b = append(b, '\n')
	return writeFileAtomic(path, b)
}

func agentConfigList(args []string) int {
	file, _, _ := parseAgentConfigArgs(args)
	if file == "" {
		writeJSON(map[string]any{"ok": false, "error": "missing_file"})
		return 1
	}

	data, err := loadPresetsFile(file)
	if err != nil {
		writeJSON(map[string]any{"ok": false, "error": "read_failed"})
		return 1
	}

	summaries := make([]map[string]string, 0, len(data.Presets))
	for _, p := range data.Presets {
		summaries = append(summaries, map[string]string{
			"name":      p.Name,
			"createdAt": p.CreatedAt,
		})
	}

	writeJSON(map[string]any{"ok": true, "presets": summaries, "count": len(summaries)})
	return 0
}

func agentConfigSave(args []string) int {
	file, name, configJSON := parseAgentConfigArgs(args)
	if file == "" || strings.TrimSpace(name) == "" || strings.TrimSpace(configJSON) == "" {
		writeJSON(map[string]any{"ok": false, "error": "missing_args"})
		return 1
	}

	var config map[string]any
	if err := json.Unmarshal([]byte(configJSON), &config); err != nil {
		writeJSON(map[string]any{"ok": false, "error": "invalid_config_json"})
		return 1
	}

	data, err := loadPresetsFile(file)
	if err != nil {
		writeJSON(map[string]any{"ok": false, "error": "read_failed"})
		return 1
	}

	action := "created"
	found := false
	for i, p := range data.Presets {
		if strings.EqualFold(p.Name, name) {
			data.Presets[i].Config = config
			data.Presets[i].CreatedAt = nowUTC().Format("2006-01-02T15:04:05Z")
			found = true
			action = "updated"
			break
		}
	}
	if !found {
		data.Presets = append(data.Presets, agentConfigPreset{
			Name:      name,
			CreatedAt: nowUTC().Format("2006-01-02T15:04:05Z"),
			Config:    config,
		})
	}

	if err := savePresetsFile(file, data); err != nil {
		writeJSON(map[string]any{"ok": false, "error": "write_failed"})
		return 1
	}

	writeJSON(map[string]any{"ok": true, "name": name, "action": action})
	return 0
}

func agentConfigLoad(args []string) int {
	file, name, _ := parseAgentConfigArgs(args)
	if file == "" || strings.TrimSpace(name) == "" {
		writeJSON(map[string]any{"ok": false, "error": "missing_args"})
		return 1
	}

	data, err := loadPresetsFile(file)
	if err != nil {
		writeJSON(map[string]any{"ok": false, "error": "read_failed"})
		return 1
	}

	for _, p := range data.Presets {
		if strings.EqualFold(p.Name, name) {
			writeJSON(map[string]any{"ok": true, "name": p.Name, "config": p.Config})
			return 0
		}
	}

	writeJSON(map[string]any{"ok": false, "error": "preset_not_found", "name": name})
	return 1
}

func agentConfigDelete(args []string) int {
	file, name, _ := parseAgentConfigArgs(args)
	if file == "" || strings.TrimSpace(name) == "" {
		writeJSON(map[string]any{"ok": false, "error": "missing_args"})
		return 1
	}

	data, err := loadPresetsFile(file)
	if err != nil {
		writeJSON(map[string]any{"ok": false, "error": "read_failed"})
		return 1
	}

	filtered := make([]agentConfigPreset, 0, len(data.Presets))
	found := false
	for _, p := range data.Presets {
		if strings.EqualFold(p.Name, name) {
			found = true
			continue
		}
		filtered = append(filtered, p)
	}

	if !found {
		writeJSON(map[string]any{"ok": false, "error": "preset_not_found", "name": name})
		return 1
	}

	data.Presets = filtered
	if err := savePresetsFile(file, data); err != nil {
		writeJSON(map[string]any{"ok": false, "error": "write_failed"})
		return 1
	}

	writeJSON(map[string]any{"ok": true, "name": name, "action": "deleted"})
	return 0
}
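For reference, the presets file that `agent-config` reads and writes has this shape (field names come from the structs above; the preset name and config values shown are hypothetical):

```json
{
  "version": "1.0.0",
  "presets": [
    {
      "name": "default-agents",
      "createdAt": "2026-01-01T00:00:00Z",
      "config": {
        "defaultPrimary": "claude",
        "defaultFallback": "codex"
      }
    }
  ]
}
```

Preset names are matched case-insensitively (`strings.EqualFold`), so `save` with an existing name updates that entry in place rather than appending a duplicate.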
@ -0,0 +1,453 @@
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"os"
	"path/filepath"
	"strconv"
	"strings"
)

func cmdDeriveProjectSlug(args []string) int {
	projectRoot := getPWD()
	for i := 0; i < len(args); i++ {
		switch args[i] {
		case "--project-root":
			if i+1 < len(args) {
				projectRoot = args[i+1]
				i++
			}
		}
	}

	base := filepath.Base(projectRoot)
	lower := strings.ToLower(base)
	var b strings.Builder
	for _, r := range lower {
		if (r >= 'a' && r <= 'z') || (r >= '0' && r <= '9') {
			b.WriteRune(r)
		}
	}
	slug := b.String()
	if len(slug) > 8 {
		slug = slug[:8]
	}
	if slug == "" {
		slug = "project"
	}

	writeJSON(map[string]any{"ok": true, "slug": slug, "projectRoot": projectRoot})
	return 0
}

func cmdEnsureMarkerGitignore(args []string) int {
	gitignorePath := ""
	entry := ""

	for i := 0; i < len(args); i++ {
		switch args[i] {
		case "--gitignore":
			if i+1 < len(args) {
				gitignorePath = args[i+1]
				i++
			}
		case "--entry":
			if i+1 < len(args) {
				entry = args[i+1]
				i++
			}
		}
	}

	if gitignorePath == "" || entry == "" {
		writeJSON(map[string]any{"ok": false, "error": "missing_args"})
		return 1
	}

	if !fileExists(gitignorePath) {
		if err := os.WriteFile(gitignorePath, []byte(""), 0o644); err != nil {
			writeJSON(map[string]any{"ok": false, "error": "touch_failed"})
			return 1
		}
	}

	content, err := readFile(gitignorePath)
	if err != nil {
		writeJSON(map[string]any{"ok": false, "error": "read_failed"})
		return 1
	}

	if strings.Contains(content, entry) {
		writeJSON(map[string]any{"ok": true, "changed": false, "path": gitignorePath})
		return 0
	}

	f, err := os.OpenFile(gitignorePath, os.O_APPEND|os.O_WRONLY, 0o644)
	if err != nil {
		writeJSON(map[string]any{"ok": false, "error": "append_failed"})
		return 1
	}
	defer f.Close()
	if _, err := f.WriteString(entry + "\n"); err != nil {
		writeJSON(map[string]any{"ok": false, "error": "append_failed"})
		return 1
	}

	writeJSON(map[string]any{"ok": true, "changed": true, "path": gitignorePath})
	return 0
}

func cmdEnsureStopHook(args []string) int {
	settingsPath := ""
	commandPath := ""
	timeout := 10

	for i := 0; i < len(args); i++ {
		switch args[i] {
		case "--settings":
			if i+1 < len(args) {
				settingsPath = args[i+1]
				i++
			}
		case "--command":
			if i+1 < len(args) {
				commandPath = args[i+1]
				i++
			}
		case "--timeout":
			if i+1 < len(args) {
				if v, err := strconv.Atoi(args[i+1]); err == nil {
					timeout = v
				}
				i++
			}
		}
	}

	if settingsPath == "" || commandPath == "" {
		writeJSON(map[string]any{"ok": false, "error": "missing_required_args"})
		return 1
	}

	// Resolve command binary to absolute path using own executable.
	// The AI agent inconsistently resolves relative frontmatter paths
	// (../bin/story-automator) — sometimes relative, sometimes absolute,
	// sometimes project-relative. Self-resolving via os.Executable()
	// guarantees a consistent absolute path every time.
	cmdParts := strings.Fields(commandPath)
	if len(cmdParts) >= 1 {
		exe, err := os.Executable()
		if err == nil {
			resolved, err := filepath.EvalSymlinks(exe)
			if err == nil {
				exe = resolved
			}
			cmdParts[0] = exe
			commandPath = strings.Join(cmdParts, " ")
		}
	}

	if err := ensureDir(filepath.Dir(settingsPath)); err != nil {
		writeJSON(map[string]any{"ok": false, "error": "mkdir_failed"})
		return 1
	}

	if !fileExists(settingsPath) {
		payload := map[string]any{
			"hooks": map[string]any{
				"Stop": []any{
					map[string]any{
						"hooks": []any{
							map[string]any{
								"type":    "command",
								"command": commandPath,
								"timeout": timeout,
							},
						},
					},
				},
			},
		}
		b, _ := json.MarshalIndent(payload, "", "  ")
		if err := os.WriteFile(settingsPath, b, 0o644); err != nil {
			writeJSON(map[string]any{"ok": false, "error": "write_failed"})
			return 1
		}
		writeJSON(map[string]any{"ok": true, "changed": true, "reason": "created", "path": settingsPath})
		return 0
	}

	raw, err := os.ReadFile(settingsPath)
	if err != nil {
		writeJSON(map[string]any{"ok": false, "error": "read_failed", "path": settingsPath})
		return 1
	}

	var root map[string]any
	if err := json.Unmarshal(raw, &root); err != nil {
		writeJSON(map[string]any{"ok": false, "error": "invalid_json", "path": settingsPath})
		return 1
	}

	hooks, _ := root["hooks"].(map[string]any)
	if hooks == nil {
		hooks = map[string]any{}
		root["hooks"] = hooks
	}
	stopHooks, _ := hooks["Stop"].([]any)
	if stopHooks == nil {
		stopHooks = []any{}
	}

	exists := false
	needsPathUpdate := false
	for _, entry := range stopHooks {
		entryMap, ok := entry.(map[string]any)
		if !ok {
			continue
		}
		inner, _ := entryMap["hooks"].([]any)
		for _, h := range inner {
			m, ok := h.(map[string]any)
			if !ok {
				continue
			}
			if cmd, ok := m["command"].(string); ok {
				if cmd == commandPath {
					exists = true
					break
				}
				// Flexible match: any command referencing story-automator stop-hook
				// regardless of path format (relative, absolute, project-relative).
				if strings.Contains(cmd, "story-automator") && strings.Contains(cmd, "stop-hook") {
					exists = true
					if cmd != commandPath {
						// Migrate stale path to resolved absolute path in-place.
						m["command"] = commandPath
						needsPathUpdate = true
					}
					break
				}
			}
		}
		if exists {
			break
		}
	}

	if exists && !needsPathUpdate {
		writeJSON(map[string]any{"ok": true, "changed": false, "reason": "already_configured", "path": settingsPath})
		return 0
	}

	if exists && needsPathUpdate {
		// Path normalized to absolute — write updated settings.
		// Return changed:false because the hook functionally existed;
		// no session restart is needed.
		b, _ := json.MarshalIndent(root, "", "  ")
		if err := writeFileAtomic(settingsPath, b); err != nil {
			writeJSON(map[string]any{"ok": false, "error": "write_failed", "path": settingsPath})
			return 1
		}
		writeJSON(map[string]any{"ok": true, "changed": false, "reason": "path_normalized", "path": settingsPath})
		return 0
	}

	newEntry := map[string]any{
		"hooks": []any{
			map[string]any{
				"type":    "command",
				"command": commandPath,
				"timeout": timeout,
			},
		},
	}
	stopHooks = append(stopHooks, newEntry)
	hooks["Stop"] = stopHooks

	b, _ := json.MarshalIndent(root, "", "  ")
	if err := writeFileAtomic(settingsPath, b); err != nil {
		writeJSON(map[string]any{"ok": false, "error": "write_failed", "path": settingsPath})
		return 1
	}

	writeJSON(map[string]any{"ok": true, "changed": true, "reason": "added", "path": settingsPath})
	return 0
}

func cmdStopHook(_ []string) int {
	_, _ = ioReadAll(os.Stdin)

	if strings.ToLower(os.Getenv("STORY_AUTOMATOR_CHILD")) == "true" {
		return 0
	}

	markerFile := filepath.Join(getPWD(), ".claude", ".story-automator-active")
	if !fileExists(markerFile) {
		return 0
	}

	content, err := os.ReadFile(markerFile)
	if err != nil {
		return 0
	}

	var marker map[string]any
	if err := json.Unmarshal(content, &marker); err != nil {
		return 0
	}

	storiesRemaining := 0
	if val, ok := marker["storiesRemaining"]; ok {
		switch v := val.(type) {
		case float64:
			storiesRemaining = int(v)
		case int:
			storiesRemaining = v
		case string:
			if n, err := strconv.Atoi(v); err == nil {
				storiesRemaining = n
			}
		}
	}

	if storiesRemaining == 0 {
		return 0
	}

	reason := fmt.Sprintf("Story Automator active (%d stories remaining). Read _bmad/bmm/4-implementation/bmad-story-automator-go/data/stop-hook-recovery.md", storiesRemaining)
	fmt.Printf("{\n  \"decision\": \"block\",\n  \"reason\": %q\n}\n", reason)
	return 0
}

func cmdCommitStory(args []string) int {
	repo := ""
	storyID := ""
	title := ""

	for i := 0; i < len(args); i++ {
		switch args[i] {
		case "--repo":
			if i+1 < len(args) {
				repo = args[i+1]
				i++
			}
		case "--story":
			if i+1 < len(args) {
				storyID = args[i+1]
				i++
			}
		case "--title":
			if i+1 < len(args) {
				title = args[i+1]
				i++
			}
		}
	}

	if repo == "" || storyID == "" || title == "" {
		writeJSON(map[string]any{"ok": false, "error": "missing_args"})
		return 1
	}
	if !dirExists(repo) {
		writeJSON(map[string]any{"ok": false, "error": "repo_not_found"})
		return 1
	}

	statusOut, err := runCmd("git", "-C", repo, "status", "--porcelain")
	if err != nil {
		writeJSON(map[string]any{"ok": false, "error": "git_status_failed"})
		return 1
	}

	lines := strings.Split(strings.TrimSpace(statusOut), "\n")
	changes := 0
	if len(lines) == 1 && strings.TrimSpace(lines[0]) == "" {
		changes = 0
	} else if strings.TrimSpace(statusOut) != "" {
		changes = len(lines)
	}

	if changes == 0 {
		writeJSON(map[string]any{"ok": false, "error": "no_changes"})
		return 0
	}

	_, err = runCmd("git", "-C", repo, "add", "-A")
	if err != nil {
		writeJSON(map[string]any{"ok": false, "error": "git_add_failed"})
		return 1
	}

	msg := fmt.Sprintf("feat(story-%s): %s", storyID, title)
	_, err = runCmd("git", "-C", repo, "commit", "-m", msg)
	if err != nil {
		writeJSON(map[string]any{"ok": false, "error": "commit_failed"})
		return 1
	}

	sha, _ := runCmd("git", "-C", repo, "rev-parse", "HEAD")
	sha = strings.TrimSpace(sha)

	writeJSON(map[string]any{"ok": true, "commit": sha})
	return 0
}

func cmdListSessions(args []string) int {
	slug := ""
	for i := 0; i < len(args); i++ {
		switch args[i] {
		case "--slug":
			if i+1 < len(args) {
				slug = args[i+1]
				i++
			}
		}
	}

	if slug == "" {
		writeJSON(map[string]any{"ok": false, "error": "missing_slug"})
		return 1
	}

	if _, err := execLookPath("tmux"); err != nil {
		writeJSON(map[string]any{"ok": false, "error": "tmux_not_found", "sessions": []string{}, "count": 0})
		return 0
	}

	out, err := runCmd("tmux", "list-sessions", "-F", "#{session_name}")
	if err != nil {
		writeJSON(map[string]any{"ok": true, "sessions": []string{}, "count": 0})
		return 0
	}

	var sessions []string
	prefix := "sa-" + slug + "-"
	for _, line := range trimLines(out) {
		if strings.HasPrefix(line, prefix) {
			sessions = append(sessions, line)
		}
	}

	writeJSON(map[string]any{"ok": true, "sessions": sessions, "count": len(sessions)})
	return 0
}

func ioReadAll(r *os.File) ([]byte, error) {
	buf := make([]byte, 0, 4096)
	for {
		tmp := make([]byte, 4096)
		n, err := r.Read(tmp)
		if n > 0 {
			buf = append(buf, tmp[:n]...)
		}
		if err != nil {
			if err == io.EOF {
				return buf, nil
			}
			return buf, err
		}
	}
}
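The settings file that `ensure-stop-hook` creates (or merges into) ends up with this nested Stop-hook shape, built from the payload above (the command path shown is hypothetical; at runtime it is replaced by the binary's resolved absolute path):

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "/abs/path/to/story-automator stop-hook",
            "timeout": 10
          }
        ]
      }
    ]
  }
}
```

The double nesting (`Stop` is a list of matcher entries, each holding its own `hooks` list) is why the existence check walks two levels before comparing `command` strings.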
@ -0,0 +1,98 @@
package main

import (
	"fmt"
	"os"
)

func main() {
	if len(os.Args) < 2 {
		usage()
		os.Exit(1)
	}

	cmd := os.Args[1]
	args := os.Args[2:]
	var code int

	switch cmd {
	case "derive-project-slug":
		code = cmdDeriveProjectSlug(args)
	case "ensure-marker-gitignore":
		code = cmdEnsureMarkerGitignore(args)
	case "ensure-stop-hook":
		code = cmdEnsureStopHook(args)
	case "stop-hook":
		code = cmdStopHook(args)
	case "build-state-doc":
		code = cmdBuildStateDoc(args)
	case "commit-story":
		code = cmdCommitStory(args)
	case "parse-epic":
		code = cmdParseEpic(args)
	case "parse-story":
		code = cmdParseStory(args)
	case "parse-story-range":
		code = cmdParseStoryRange(args)
	case "epic-complete":
		code = cmdEpicComplete(args)
	case "sprint-compare":
		code = cmdSprintCompare(args)
	case "state-metrics":
		code = cmdStateMetrics(args)
	case "validate-state":
		code = cmdValidateState(args)
	case "validate-story-creation":
		code = cmdValidateStoryCreation(args)
	case "list-sessions":
		code = cmdListSessions(args)
	case "tmux-wrapper":
		code = cmdTmuxWrapper(args)
	case "heartbeat-check":
		code = cmdHeartbeatCheck(args)
	case "codex-status-check":
		code = cmdCodexStatusCheck(args)
	case "tmux-status-check":
		code = cmdTmuxStatusCheck(args)
	case "monitor-session":
		code = cmdMonitorSession(args)
	case "orchestrator-helper":
		code = cmdOrchestratorHelper(args)
	case "agent-config":
		code = cmdAgentConfig(args)
	default:
		fmt.Fprintf(os.Stderr, "Unknown command: %s\n", cmd)
		usage()
		code = 1
	}

	os.Exit(code)
}

func usage() {
	fmt.Fprintln(os.Stderr, "story-automator <command> [args]")
	fmt.Fprintln(os.Stderr, "")
	fmt.Fprintln(os.Stderr, "Commands:")
	fmt.Fprintln(os.Stderr, "  derive-project-slug")
	fmt.Fprintln(os.Stderr, "  ensure-marker-gitignore")
	fmt.Fprintln(os.Stderr, "  ensure-stop-hook")
	fmt.Fprintln(os.Stderr, "  stop-hook")
	fmt.Fprintln(os.Stderr, "  build-state-doc")
	fmt.Fprintln(os.Stderr, "  commit-story")
	fmt.Fprintln(os.Stderr, "  parse-epic")
	fmt.Fprintln(os.Stderr, "  parse-story")
	fmt.Fprintln(os.Stderr, "  parse-story-range")
	fmt.Fprintln(os.Stderr, "  epic-complete")
	fmt.Fprintln(os.Stderr, "  sprint-compare")
	fmt.Fprintln(os.Stderr, "  state-metrics")
	fmt.Fprintln(os.Stderr, "  validate-state")
	fmt.Fprintln(os.Stderr, "  validate-story-creation")
	fmt.Fprintln(os.Stderr, "  list-sessions")
	fmt.Fprintln(os.Stderr, "  tmux-wrapper")
	fmt.Fprintln(os.Stderr, "  heartbeat-check")
	fmt.Fprintln(os.Stderr, "  codex-status-check")
	fmt.Fprintln(os.Stderr, "  tmux-status-check")
	fmt.Fprintln(os.Stderr, "  monitor-session")
	fmt.Fprintln(os.Stderr, "  orchestrator-helper")
	fmt.Fprintln(os.Stderr, "  agent-config")
}
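The slug rule dispatched above as `derive-project-slug` (lowercase the directory basename, keep only `[a-z0-9]`, truncate to 8 characters, fall back to `project` when nothing survives) can be sketched as a pure shell function, assuming a bash-compatible shell:

```shell
# Sketch of the derive-project-slug rule from cmdDeriveProjectSlug.
derive_slug() {
  local base slug
  base="$(basename "$1")"
  # Lowercase, then drop everything outside [a-z0-9].
  slug="$(printf '%s' "$base" | tr '[:upper:]' '[:lower:]' | tr -cd 'a-z0-9')"
  # Truncate to 8 characters; empty results fall back to "project".
  slug="${slug:0:8}"
  [ -n "$slug" ] || slug="project"
  printf '%s' "$slug"
}
```

For example, `derive_slug /tmp/My-Project_X` yields `myprojec`, matching the tmux session prefix `sa-myprojec-` that `list-sessions` would then filter on.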
@ -0,0 +1,390 @@
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
	"regexp"
	"strings"
)

type agentTaskConfig struct {
	Primary  string `json:"primary"`
	Fallback any    `json:"fallback"`
}

type agentConfigResolved struct {
	DefaultPrimary      string
	DefaultFallback     string
	PerTask             map[string]agentTaskConfig
	ComplexityOverrides map[string]map[string]agentTaskConfig
}

type complexityStory struct {
	StoryID    string `json:"storyId"`
	Title      string `json:"title"`
	Complexity struct {
		Level string `json:"level"`
		Score int    `json:"score"`
	} `json:"complexity"`
}

type complexityFile struct {
	Stories []complexityStory `json:"stories"`
}

type agentsStory struct {
	StoryID    string                     `json:"storyId"`
	Title      string                     `json:"title"`
	Complexity string                     `json:"complexity"`
	Tasks      map[string]agentTaskConfig `json:"tasks"`
}

type agentsFile struct {
	Version   string        `json:"version"`
	StateFile string        `json:"stateFile"`
	Epic      string        `json:"epic"`
	EpicName  string        `json:"epicName"`
	CreatedAt string        `json:"createdAt"`
	Stories   []agentsStory `json:"stories"`
}

func orchestratorAgentsBuild(args []string) int {
	stateFile := ""
	complexityFilePath := ""
	outputPath := ""
	configJSON := ""

	for i := 0; i < len(args); i++ {
		switch args[i] {
		case "--state-file":
			if i+1 < len(args) {
				stateFile = args[i+1]
				i++
			}
		case "--complexity-file":
			if i+1 < len(args) {
				complexityFilePath = args[i+1]
				i++
			}
		case "--output":
			if i+1 < len(args) {
				outputPath = args[i+1]
				i++
			}
		case "--config-json":
			if i+1 < len(args) {
				configJSON = args[i+1]
				i++
			}
		}
	}

	if stateFile == "" || complexityFilePath == "" || outputPath == "" || strings.TrimSpace(configJSON) == "" {
		writeJSON(map[string]any{"ok": false, "error": "missing_args"})
		return 1
	}
	if !fileExists(stateFile) || !fileExists(complexityFilePath) {
		writeJSON(map[string]any{"ok": false, "error": "file_not_found"})
		return 1
	}

	cfg, err := parseAgentConfigJSON(configJSON)
	if err != nil {
		writeJSON(map[string]any{"ok": false, "error": "invalid_config"})
		return 1
	}

	raw, err := readFile(complexityFilePath)
	if err != nil {
		writeJSON(map[string]any{"ok": false, "error": "complexity_read_failed"})
		return 1
	}
	var comp complexityFile
	if err := json.Unmarshal([]byte(raw), &comp); err != nil {
		writeJSON(map[string]any{"ok": false, "error": "complexity_parse_failed"})
		return 1
	}

	epic := findFrontmatterValue(stateFile, "epic")
	epicName := findFrontmatterValue(stateFile, "epicName")

	tasks := []string{"create", "dev", "auto", "review"}
	stories := []agentsStory{}

	for _, story := range comp.Stories {
		level := strings.ToLower(strings.TrimSpace(story.Complexity.Level))
		if level == "" {
			level = "medium"
		}
		taskMap := map[string]agentTaskConfig{}
		for _, task := range tasks {
			primary, fallback := resolveAgentForTask(cfg, level, task)
			fallbackVal := any(fallback)
			if strings.ToLower(strings.TrimSpace(fallback)) == "false" {
				fallbackVal = false
			}
			taskMap[task] = agentTaskConfig{
				Primary:  primary,
				Fallback: fallbackVal,
			}
		}
		stories = append(stories, agentsStory{
			StoryID:    story.StoryID,
			Title:      story.Title,
			Complexity: level,
			Tasks:      taskMap,
		})
	}

	payload := agentsFile{
		Version:   "1.0.0",
		StateFile: stateFile,
		Epic:      epic,
		EpicName:  epicName,
		CreatedAt: nowUTC().Format("2006-01-02T15:04:05Z"),
		Stories:   stories,
	}

	jsonBytes, err := json.MarshalIndent(payload, "", "  ")
	if err != nil {
		writeJSON(map[string]any{"ok": false, "error": "agents_json_failed"})
		return 1
	}

	header := fmt.Sprintf("---\nstateFile: %q\ncreatedAt: %q\n---\n\n# Agents Plan: %s\n\n", payload.StateFile, payload.CreatedAt, payload.EpicName)
	content := header + "```json\n" + string(jsonBytes) + "\n```\n"

	if err := ensureDir(filepath.Dir(outputPath)); err != nil {
		writeJSON(map[string]any{"ok": false, "error": "output_dir_failed"})
		return 1
	}
	if err := os.WriteFile(outputPath, []byte(content), 0o644); err != nil {
		writeJSON(map[string]any{"ok": false, "error": "agents_write_failed"})
		return 1
	}

	writeJSON(map[string]any{"ok": true, "path": outputPath, "stories": len(stories)})
	return 0
}

func orchestratorAgentsResolve(args []string) int {
	stateFile := ""
	agentsPath := ""
	storyID := ""
	task := ""

	for i := 0; i < len(args); i++ {
		switch args[i] {
		case "--state-file":
			if i+1 < len(args) {
				stateFile = args[i+1]
				i++
			}
		case "--agents-file":
			if i+1 < len(args) {
				agentsPath = args[i+1]
				i++
			}
		case "--story":
			if i+1 < len(args) {
				storyID = args[i+1]
				i++
			}
		case "--task":
			if i+1 < len(args) {
				task = args[i+1]
				i++
			}
		}
	}

	if stateFile == "" || storyID == "" || task == "" {
		writeJSON(map[string]any{"ok": false, "error": "missing_args"})
		return 1
	}

	if agentsPath == "" {
		agentsPath = findFrontmatterValue(stateFile, "agentsFile")
	}
	if agentsPath == "" || !fileExists(agentsPath) {
		writeJSON(map[string]any{"ok": false, "error": "agents_file_not_found"})
		return 1
	}

	text, err := readFile(agentsPath)
	if err != nil {
		writeJSON(map[string]any{"ok": false, "error": "agents_read_failed"})
		return 1
	}
	jsonBlock := extractJSONBlock(text)
	if jsonBlock == "" {
		writeJSON(map[string]any{"ok": false, "error": "agents_json_missing"})
		return 1
	}

	var payload agentsFile
	if err := json.Unmarshal([]byte(jsonBlock), &payload); err != nil {
		writeJSON(map[string]any{"ok": false, "error": "agents_json_invalid"})
		return 1
	}

	for _, story := range payload.Stories {
		if story.StoryID != storyID {
			continue
		}
		selection, ok := story.Tasks[task]
		if !ok {
			writeJSON(map[string]any{"ok": false, "error": "task_not_found"})
			return 1
		}
		fallback := normalizeFallbackValue(selection.Fallback)
		writeJSON(map[string]any{
			"ok":         true,
			"story":      storyID,
			"task":       task,
			"primary":    selection.Primary,
			"fallback":   fallback,
			"complexity": story.Complexity,
		})
		return 0
	}

	writeJSON(map[string]any{"ok": false, "error": "story_not_found"})
	return 1
}

func parseAgentConfigJSON(raw string) (agentConfigResolved, error) {
	cfg := agentConfigResolved{
		DefaultPrimary:      "claude",
		DefaultFallback:     "codex",
		PerTask:             map[string]agentTaskConfig{},
		ComplexityOverrides: map[string]map[string]agentTaskConfig{},
	}

	var data map[string]any
	if err := json.Unmarshal([]byte(raw), &data); err != nil {
		return cfg, err
	}

	if v, ok := data["defaultPrimary"].(string); ok && v != "" {
|
||||
cfg.DefaultPrimary = v
|
||||
} else if v, ok := data["primary"].(string); ok && v != "" {
|
||||
cfg.DefaultPrimary = v
|
||||
}
|
||||
if v, ok := data["defaultFallback"].(string); ok && v != "" {
|
||||
cfg.DefaultFallback = v
|
||||
} else if v, ok := data["fallback"].(string); ok && v != "" {
|
||||
cfg.DefaultFallback = v
|
||||
}
|
||||
|
||||
cfg.PerTask = parseAgentTaskMap(data["perTask"])
|
||||
if rawOverrides, ok := data["complexityOverrides"].(map[string]any); ok {
|
||||
for level, rawMap := range rawOverrides {
|
||||
cfg.ComplexityOverrides[level] = parseAgentTaskMap(rawMap)
|
||||
}
|
||||
}
|
||||
|
||||
// Accept complexity levels at root level (step-02a format)
|
||||
for _, level := range []string{"low", "medium", "high"} {
|
||||
if _, exists := cfg.ComplexityOverrides[level]; exists {
|
||||
continue // complexityOverrides takes precedence
|
||||
}
|
||||
if rawMap, ok := data[level]; ok {
|
||||
parsed := parseAgentTaskMap(rawMap)
|
||||
if len(parsed) > 0 {
|
||||
cfg.ComplexityOverrides[level] = parsed
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return cfg, nil
|
||||
}
|
||||
|
||||
func parseAgentTaskMap(raw any) map[string]agentTaskConfig {
|
||||
out := map[string]agentTaskConfig{}
|
||||
taskMap, ok := raw.(map[string]any)
|
||||
if !ok {
|
||||
return out
|
||||
}
|
||||
for task, val := range taskMap {
|
||||
entry, ok := val.(map[string]any)
|
||||
if !ok {
|
||||
continue
|
||||
}
|
||||
cfg := agentTaskConfig{}
|
||||
if v, ok := entry["primary"].(string); ok {
|
||||
cfg.Primary = v
|
||||
}
|
||||
if v, ok := entry["fallback"]; ok {
|
||||
cfg.Fallback = v
|
||||
}
|
||||
out[task] = cfg
|
||||
}
|
||||
return out
|
||||
}
|
||||
|
||||
func resolveAgentForTask(cfg agentConfigResolved, complexity string, task string) (string, string) {
|
||||
primary := cfg.DefaultPrimary
|
||||
fallback := cfg.DefaultFallback
|
||||
|
||||
if per, ok := cfg.PerTask[task]; ok {
|
||||
if per.Primary != "" {
|
||||
primary = per.Primary
|
||||
}
|
||||
if per.Fallback != nil {
|
||||
fallback = normalizeFallbackValue(per.Fallback)
|
||||
}
|
||||
}
|
||||
|
||||
if byLevel, ok := cfg.ComplexityOverrides[complexity]; ok {
|
||||
if per, ok := byLevel[task]; ok {
|
||||
if per.Primary != "" {
|
||||
primary = per.Primary
|
||||
}
|
||||
if per.Fallback != nil {
|
||||
fallback = normalizeFallbackValue(per.Fallback)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if strings.TrimSpace(primary) == "" {
|
||||
primary = "claude"
|
||||
}
|
||||
if strings.TrimSpace(fallback) == "" {
|
||||
fallback = "codex"
|
||||
}
|
||||
return primary, fallback
|
||||
}
|
||||
|
||||
func normalizeFallbackValue(raw any) string {
|
||||
switch v := raw.(type) {
|
||||
case string:
|
||||
lower := strings.ToLower(strings.TrimSpace(v))
|
||||
if lower == "false" || lower == "none" || lower == "null" {
|
||||
return "false"
|
||||
}
|
||||
return v
|
||||
case bool:
|
||||
if !v {
|
||||
return "false"
|
||||
}
|
||||
return "true"
|
||||
default:
|
||||
return ""
|
||||
}
|
||||
}
|
||||
|
||||
func extractJSONBlock(text string) string {
|
||||
re := regexp.MustCompile("(?s)```json\\s*(\\{.*?\\})\\s*```")
|
||||
m := re.FindStringSubmatch(text)
|
||||
if m != nil {
|
||||
return m[1]
|
||||
}
|
||||
trimmed := strings.TrimSpace(text)
|
||||
if strings.HasPrefix(trimmed, "{") && strings.HasSuffix(trimmed, "}") {
|
||||
return trimmed
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
|
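The agent-selection precedence above (config defaults, then per-task entries, then complexity overrides, with fallback values normalized so `"false"`/`"none"`/`"null"` and boolean `false` all mean "no fallback") can be sketched standalone. The types here are simplified stand-ins for the commit's `agentConfigResolved`/`agentTaskConfig`, not the real definitions:

```go
package main

import "fmt"

// taskCfg is a simplified stand-in for agentTaskConfig.
type taskCfg struct {
	Primary  string
	Fallback any // string, bool, or nil (meaning "unset")
}

// resolvedCfg is a simplified stand-in for agentConfigResolved.
type resolvedCfg struct {
	DefaultPrimary  string
	DefaultFallback string
	PerTask         map[string]taskCfg
	Overrides       map[string]map[string]taskCfg
}

// normalizeFallback mirrors normalizeFallbackValue: the strings
// "false"/"none"/"null" and boolean false collapse to the sentinel "false".
func normalizeFallback(raw any) string {
	switch v := raw.(type) {
	case string:
		switch v {
		case "false", "none", "null":
			return "false"
		}
		return v
	case bool:
		if !v {
			return "false"
		}
		return "true"
	}
	return ""
}

// resolve applies defaults first, then per-task config, then the
// per-complexity override, so the most specific layer wins.
func resolve(cfg resolvedCfg, complexity, task string) (string, string) {
	primary, fallback := cfg.DefaultPrimary, cfg.DefaultFallback
	if per, ok := cfg.PerTask[task]; ok {
		if per.Primary != "" {
			primary = per.Primary
		}
		if per.Fallback != nil {
			fallback = normalizeFallback(per.Fallback)
		}
	}
	if byLevel, ok := cfg.Overrides[complexity]; ok {
		if per, ok := byLevel[task]; ok {
			if per.Primary != "" {
				primary = per.Primary
			}
			if per.Fallback != nil {
				fallback = normalizeFallback(per.Fallback)
			}
		}
	}
	return primary, fallback
}

func main() {
	cfg := resolvedCfg{
		DefaultPrimary:  "claude",
		DefaultFallback: "codex",
		PerTask:         map[string]taskCfg{"dev": {Primary: "codex"}},
		Overrides: map[string]map[string]taskCfg{
			"high": {"dev": {Primary: "claude", Fallback: false}},
		},
	}
	p, f := resolve(cfg, "high", "dev")
	fmt.Println(p, f) // prints "claude false": the complexity override wins
}
```

The layering means a high-complexity story can force a different primary agent (and disable the fallback entirely) without touching the per-task defaults.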
@@ -0,0 +1,85 @@
package main

import (
	"fmt"
	"os"
)

func cmdOrchestratorHelper(args []string) int {
	if len(args) == 0 {
		return orchestratorUsage()
	}
	action := args[0]
	args = args[1:]

	switch action {
	case "sprint-status":
		return orchestratorSprintStatus(args)
	case "parse-output":
		return orchestratorParseOutput(args)
	case "marker":
		return orchestratorMarker(args)
	case "state-list":
		return orchestratorStateList(args)
	case "state-latest":
		return orchestratorStateLatest(args)
	case "state-latest-incomplete":
		return orchestratorStateLatestIncomplete(args)
	case "state-summary":
		return orchestratorStateSummary(args)
	case "state-update":
		return orchestratorStateUpdate(args)
	case "escalate":
		return orchestratorEscalate(args)
	case "commit-ready":
		return orchestratorCommitReady(args)
	case "normalize-key":
		return orchestratorNormalizeKey(args)
	case "story-file-status":
		return orchestratorStoryFileStatus(args)
	case "verify-code-review":
		return orchestratorVerifyCodeReview(args)
	case "check-epic-complete":
		return orchestratorCheckEpicComplete(args)
	case "get-epic-stories":
		return orchestratorGetEpicStories(args)
	case "check-blocking":
		return orchestratorCheckBlocking(args)
	case "agents-build":
		return orchestratorAgentsBuild(args)
	case "agents-resolve":
		return orchestratorAgentsResolve(args)
	default:
		return orchestratorUsage()
	}
}

func orchestratorUsage() int {
	fmt.Fprintln(os.Stderr, "Usage: orchestrator-helper <action> [args]")
	fmt.Fprintln(os.Stderr, "")
	fmt.Fprintln(os.Stderr, "Actions:")
	fmt.Fprintln(os.Stderr, "  sprint-status get <story_key>")
	fmt.Fprintln(os.Stderr, "  sprint-status exists")
	fmt.Fprintln(os.Stderr, "  sprint-status check-epic <epic>")
	fmt.Fprintln(os.Stderr, "  parse-output <file> <step>")
	fmt.Fprintln(os.Stderr, "  marker create --epic E --story S --remaining N --state-file F")
	fmt.Fprintln(os.Stderr, "  marker remove")
	fmt.Fprintln(os.Stderr, "  marker check")
	fmt.Fprintln(os.Stderr, "  marker heartbeat")
	fmt.Fprintln(os.Stderr, "  state-list <folder>")
	fmt.Fprintln(os.Stderr, "  state-latest <folder> [status]")
	fmt.Fprintln(os.Stderr, "  state-latest-incomplete <folder>")
	fmt.Fprintln(os.Stderr, "  state-summary <file>")
	fmt.Fprintln(os.Stderr, "  state-update <file> --set k=v")
	fmt.Fprintln(os.Stderr, "  escalate <trigger> <context>")
	fmt.Fprintln(os.Stderr, "  commit-ready <story_id>")
	fmt.Fprintln(os.Stderr, "  normalize-key <input> [--to id|key|prefix|json]")
	fmt.Fprintln(os.Stderr, "  story-file-status <story>")
	fmt.Fprintln(os.Stderr, "  verify-code-review <story>")
	fmt.Fprintln(os.Stderr, "  check-epic-complete <epic> <story> [--state-file path]")
	fmt.Fprintln(os.Stderr, "  get-epic-stories <epic> [--state-file path]")
	fmt.Fprintln(os.Stderr, "  check-blocking <story_id>")
	fmt.Fprintln(os.Stderr, "  agents-build --state-file path --complexity-file path --output path --config-json '{}'")
	fmt.Fprintln(os.Stderr, "  agents-resolve --state-file path --story ID --task create|dev|auto|review [--agents-file path]")
	return 1
}
@@ -0,0 +1,247 @@
package main

import (
	"fmt"
	"path/filepath"
	"regexp"
	"sort"
	"strconv"
	"strings"
)

func orchestratorGetEpicStories(args []string) int {
	if len(args) < 1 {
		writeJSON(map[string]any{"ok": false, "error": "epic_number_required"})
		return 1
	}
	epic := args[0]
	args = args[1:]

	stateFile := ""
	for i := 0; i < len(args); i++ {
		if args[i] == "--state-file" && i+1 < len(args) {
			stateFile = args[i+1]
			i++
		}
	}

	if stateFile != "" && fileExists(stateFile) {
		storyRange := readStoryRangeFromState(stateFile)
		epicStories := filterEpicStories(storyRange, epic)
		if len(epicStories) > 0 {
			writeJSON(map[string]any{"ok": true, "epic": epic, "stories": epicStories, "count": len(epicStories), "source": "state_file"})
			return 0
		}
	}

	statusFile := sprintStatusFile(getProjectRoot())
	stories, _ := sprintStatusEpic(statusFile, epic)
	if len(stories) > 0 {
		writeJSON(map[string]any{"ok": true, "epic": epic, "stories": stories, "count": len(stories), "source": "sprint_status"})
		return 0
	}

	epicFile := findEpicFile(getProjectRoot(), epic)
	if epicFile != "" {
		content, _ := readFile(epicFile)
		re := regexp.MustCompile(`\b` + regexp.QuoteMeta(epic) + `\.[0-9]+`)
		ids := re.FindAllString(content, -1)
		ids = uniqueSortedStories(ids)
		if len(ids) > 0 {
			writeJSON(map[string]any{"ok": true, "epic": epic, "stories": ids, "count": len(ids), "source": "epic_file"})
			return 0
		}
	}

	writeJSON(map[string]any{"ok": false, "epic": epic, "error": "no_stories_found", "count": 0})
	return 0
}

func orchestratorCheckBlocking(args []string) int {
	if len(args) < 1 {
		writeJSON(map[string]any{"ok": false, "error": "story_id_required"})
		return 1
	}
	storyInput := args[0]
	projectRoot := getProjectRoot()

	norm, ok := normalizeStoryKey(projectRoot, storyInput)
	if !ok || norm.ID == "" {
		writeJSON(map[string]any{"ok": false, "error": "could_not_normalize_key", "input": storyInput})
		return 1
	}

	epicNumber := strings.SplitN(norm.ID, ".", 2)[0]
	epicFile := findEpicFile(projectRoot, epicNumber)
	if epicFile == "" {
		writeJSON(map[string]any{"ok": true, "blocking": true, "story": norm.ID, "epic": epicNumber, "dependents": []string{}, "reason": "epic_file_not_found", "source": "unknown"})
		return 0
	}

	dependents := findEpicDependents(epicFile, norm.ID, norm.Prefix)
	if len(dependents) > 0 {
		writeJSON(map[string]any{"ok": true, "blocking": true, "story": norm.ID, "epic": epicNumber, "dependents": dependents, "reason": "dependent_stories", "source": "epic_file"})
		return 0
	}

	writeJSON(map[string]any{"ok": true, "blocking": false, "story": norm.ID, "epic": epicNumber, "dependents": []string{}, "reason": "no_dependents_found", "source": "epic_file"})
	return 0
}

func orchestratorCheckEpicComplete(args []string) int {
	if len(args) < 2 {
		writeJSON(map[string]any{"ok": false, "error": "epic_number and story_id required"})
		return 1
	}
	epicNumber := args[0]
	storyID := args[1]
	args = args[2:]
	stateFile := ""
	for i := 0; i < len(args); i++ {
		if args[i] == "--state-file" && i+1 < len(args) {
			stateFile = args[i+1]
			i++
		}
	}

	storyEpic := strings.SplitN(storyID, ".", 2)[0]
	if storyEpic != epicNumber {
		writeJSON(map[string]any{"ok": true, "isLastStory": false, "epic": mustAtoi(epicNumber), "storyId": storyID, "reason": "story_not_in_epic"})
		return 0
	}

	if stateFile != "" && fileExists(stateFile) {
		storyRange := readStoryRangeFromState(stateFile)
		epicStories := filterEpicStories(storyRange, epicNumber)
		if len(epicStories) > 0 {
			last := epicStories[len(epicStories)-1]
			if storyID == last {
				writeJSON(map[string]any{"ok": true, "isLastStory": true, "epic": mustAtoi(epicNumber), "storyId": storyID, "lastInEpic": last, "epicStoryCount": len(epicStories), "source": "state_file"})
			} else {
				writeJSON(map[string]any{"ok": true, "isLastStory": false, "epic": mustAtoi(epicNumber), "storyId": storyID, "lastInEpic": last, "source": "state_file"})
			}
			return 0
		}
	}

	sprintFile := sprintStatusFile(getProjectRoot())
	if fileExists(sprintFile) {
		content, _ := readFile(sprintFile)
		re := regexp.MustCompile(`(?m)^\s*` + regexp.QuoteMeta(epicNumber) + `\.[0-9]+:`)
		matches := re.FindAllString(content, -1)
		stories := []string{}
		for _, line := range matches {
			id := strings.TrimSpace(strings.SplitN(line, ":", 2)[0])
			stories = append(stories, id)
		}
		stories = uniqueSortedStories(stories)
		if len(stories) > 0 {
			last := stories[len(stories)-1]
			if storyID == last {
				writeJSON(map[string]any{"ok": true, "isLastStory": true, "epic": mustAtoi(epicNumber), "storyId": storyID, "lastInEpic": last, "epicStoryCount": len(stories), "source": "sprint_status"})
			} else {
				writeJSON(map[string]any{"ok": true, "isLastStory": false, "epic": mustAtoi(epicNumber), "storyId": storyID, "lastInEpic": last, "source": "sprint_status"})
			}
			return 0
		}
	}

	epicsDir := filepath.Join(getProjectRoot(), "_bmad-output", "implementation-artifacts")
	matches, _ := filepath.Glob(filepath.Join(epicsDir, fmt.Sprintf("epic-%s-*.md", epicNumber)))
	if len(matches) > 0 {
		content, _ := readFile(matches[0])
		re := regexp.MustCompile(regexp.QuoteMeta(epicNumber) + `\.[0-9]+`)
		ids := re.FindAllString(content, -1)
		ids = uniqueSortedStories(ids)
		if len(ids) > 0 {
			last := ids[len(ids)-1]
			if storyID == last {
				writeJSON(map[string]any{"ok": true, "isLastStory": true, "epic": mustAtoi(epicNumber), "storyId": storyID, "lastInEpic": last, "epicStoryCount": len(ids), "source": "epic_file"})
			} else {
				writeJSON(map[string]any{"ok": true, "isLastStory": false, "epic": mustAtoi(epicNumber), "storyId": storyID, "lastInEpic": last, "source": "epic_file"})
			}
			return 0
		}
	}

	writeJSON(map[string]any{"ok": true, "isLastStory": false, "epic": mustAtoi(epicNumber), "storyId": storyID, "reason": "could_not_determine", "source": "fallback"})
	return 0
}

func findEpicFile(projectRoot, epicNumber string) string {
	if projectRoot == "" || epicNumber == "" {
		return ""
	}
	paths := []string{
		filepath.Join(projectRoot, "_bmad-output", "implementation-artifacts", fmt.Sprintf("epic-%s-*.md", epicNumber)),
		filepath.Join(projectRoot, "docs", "epics", fmt.Sprintf("epic-%s-*.md", epicNumber)),
	}
	for _, pattern := range paths {
		matches, _ := filepath.Glob(pattern)
		if len(matches) > 0 {
			return matches[0]
		}
	}
	return ""
}

func findEpicDependents(epicFile, targetID, targetPrefix string) []string {
	content, err := readFile(epicFile)
	if err != nil {
		return nil
	}
	lines := trimLines(content)
	storyRe := regexp.MustCompile(`^###\s+Story\s+(\d+\.\d+):`)
	depRe := regexp.MustCompile(`(?i)Dependencies:|\*\*Dependencies\*\*:`)
	idRe := regexp.MustCompile(`\b` + regexp.QuoteMeta(targetID) + `\b`)
	prefixRe := regexp.MustCompile(`\b` + regexp.QuoteMeta(targetPrefix) + `\b`)
	currentStory := ""
	dependents := map[string]bool{}

	for _, line := range lines {
		if m := storyRe.FindStringSubmatch(line); m != nil {
			currentStory = m[1]
			continue
		}
		if currentStory != "" && depRe.MatchString(line) {
			if idRe.MatchString(line) || (targetPrefix != "" && prefixRe.MatchString(line)) {
				dependents[currentStory] = true
			}
		}
	}

	list := []string{}
	for id := range dependents {
		list = append(list, id)
	}
	return uniqueSortedStories(list)
}

func filterEpicStories(storyRange []string, epicNumber string) []string {
	stories := []string{}
	for _, sid := range storyRange {
		if strings.HasPrefix(sid, epicNumber+".") {
			stories = append(stories, sid)
		}
	}
	return uniqueSortedStories(stories)
}

func uniqueSortedStories(ids []string) []string {
	set := map[string]bool{}
	for _, id := range ids {
		set[id] = true
	}
	list := []string{}
	for id := range set {
		list = append(list, id)
	}
	sort.Slice(list, func(i, j int) bool {
		a := strings.SplitN(list[i], ".", 2)
		b := strings.SplitN(list[j], ".", 2)
		ai, _ := strconv.Atoi(a[1])
		bi, _ := strconv.Atoi(b[1])
		return ai < bi
	})
	return list
}
@@ -0,0 +1,105 @@
package main

import (
	"fmt"
	"os"
	"regexp"
	"strconv"
	"strings"
)

func orchestratorEscalate(args []string) int {
	if len(args) < 1 {
		writeJSON(map[string]any{"escalate": false, "reason": "Unknown trigger"})
		return 0
	}
	trigger := args[0]
	context := ""
	if len(args) > 1 {
		context = args[1]
	}

	switch trigger {
	case "review-loop":
		cycles := parseContextInt(context, "cycles")
		maxCycles := envInt("MAX_REVIEW_CYCLES", 5)
		if cycles >= maxCycles {
			writeJSON(map[string]any{"escalate": true, "reason": fmt.Sprintf("Review loop exceeded max cycles (%d/%d)", cycles, maxCycles)})
			return 0
		}
		writeJSON(map[string]any{"escalate": false})
		return 0

	case "session-crash":
		retries := parseContextInt(context, "retries")
		maxRetries := envInt("MAX_CRASH_RETRIES", 2)
		if retries >= maxRetries {
			writeJSON(map[string]any{"escalate": true, "reason": fmt.Sprintf("Session crashed after %d retries", retries)})
			return 0
		}
		writeJSON(map[string]any{"escalate": false, "action": "retry"})
		return 0

	case "story-validation":
		created := parseContextInt(context, "created")
		if created == 0 {
			writeJSON(map[string]any{"escalate": true, "reason": "No story file created"})
			return 0
		}
		if created > 1 {
			writeJSON(map[string]any{"escalate": true, "reason": fmt.Sprintf("Runaway creation: %d files", created)})
			return 0
		}
		writeJSON(map[string]any{"escalate": false})
		return 0
	default:
		writeJSON(map[string]any{"escalate": false, "reason": "Unknown trigger"})
		return 0
	}
}

func orchestratorCommitReady(args []string) int {
	if len(args) < 1 {
		writeJSON(map[string]any{"ready": false, "reason": "story_id required"})
		return 1
	}
	storyID := args[0]
	projectRoot := getProjectRoot()
	statusFile := sprintStatusFile(projectRoot)
	status := sprintStatusGet(statusFile, storyID)
	if status.Done {
		out, _ := runCmd("git", "-C", projectRoot, "status", "--porcelain")
		changes := 0
		if strings.TrimSpace(out) != "" {
			changes = len(strings.Split(strings.TrimSpace(out), "\n"))
		}
		if changes > 0 {
			writeJSON(map[string]any{"ready": true, "story": storyID, "status": "done", "uncommitted_changes": true})
			return 0
		}
		writeJSON(map[string]any{"ready": false, "reason": "No uncommitted changes", "story": storyID})
		return 0
	}

	writeJSON(map[string]any{"ready": false, "reason": "Story not done yet", "story": storyID, "current_status": status.Status})
	return 0
}

func parseContextInt(context, key string) int {
	re := regexp.MustCompile(key + `=([0-9]+)`)
	m := re.FindStringSubmatch(context)
	if m == nil {
		return 0
	}
	val, _ := strconv.Atoi(m[1])
	return val
}

func envInt(key string, def int) int {
	if v := os.Getenv(key); v != "" {
		if n, err := strconv.Atoi(v); err == nil {
			return n
		}
	}
	return def
}
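The escalation triggers above all follow one shape: pull a counter out of a `key=N` context string, compare it against an environment-configurable threshold, and escalate when the counter reaches it. A standalone sketch of that pattern (the context string here is a made-up example):

```go
package main

import (
	"fmt"
	"os"
	"regexp"
	"strconv"
)

// contextInt extracts "key=N" from a free-form context string,
// as parseContextInt does; a missing key reads as 0.
func contextInt(context, key string) int {
	m := regexp.MustCompile(key + `=([0-9]+)`).FindStringSubmatch(context)
	if m == nil {
		return 0
	}
	n, _ := strconv.Atoi(m[1])
	return n
}

// envOrDefault reads an integer environment variable with a fallback,
// as envInt does; non-numeric values fall back to the default.
func envOrDefault(key string, def int) int {
	if v := os.Getenv(key); v != "" {
		if n, err := strconv.Atoi(v); err == nil {
			return n
		}
	}
	return def
}

func main() {
	cycles := contextInt("story=3.2 cycles=5", "cycles")
	max := envOrDefault("MAX_REVIEW_CYCLES", 5)
	fmt.Println(cycles >= max) // prints "true": the review loop would escalate
}
```

Keeping the thresholds in environment variables lets an operator loosen `MAX_REVIEW_CYCLES` or `MAX_CRASH_RETRIES` per run without changing the helper binary.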
@@ -0,0 +1,122 @@
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"regexp"
	"strings"
)

func orchestratorMarker(args []string) int {
	if len(args) == 0 {
		fmt.Fprintln(os.Stderr, "Usage: orchestrator-helper marker <create|remove|check|heartbeat> [args]")
		return 1
	}
	action := args[0]
	args = args[1:]

	projectRoot := getProjectRoot()
	markerFile := filepath.Join(projectRoot, ".claude", ".story-automator-active")

	switch action {
	case "create":
		epic := ""
		story := ""
		remaining := "0"
		stateFile := ""
		projectSlug := ""
		pid := "0"
		heartbeat := ""

		for i := 0; i < len(args); i++ {
			switch args[i] {
			case "--epic":
				if i+1 < len(args) {
					epic = args[i+1]
					i++
				}
			case "--story":
				if i+1 < len(args) {
					story = args[i+1]
					i++
				}
			case "--remaining":
				if i+1 < len(args) {
					remaining = args[i+1]
					i++
				}
			case "--state-file":
				if i+1 < len(args) {
					stateFile = args[i+1]
					i++
				}
			case "--project-slug":
				if i+1 < len(args) {
					projectSlug = args[i+1]
					i++
				}
			case "--pid":
				if i+1 < len(args) {
					pid = args[i+1]
					i++
				}
			case "--heartbeat":
				if i+1 < len(args) {
					heartbeat = args[i+1]
					i++
				}
			}
		}

		_ = ensureDir(filepath.Dir(markerFile))

		if heartbeat == "" {
			heartbeat = nowUTC().Format("2006-01-02T15:04:05Z")
		}

		payload := fmt.Sprintf("{\n \"epic\": %q,\n \"currentStory\": %q,\n \"storiesRemaining\": %s,\n \"stateFile\": %q,\n \"createdAt\": %q,\n \"heartbeat\": %q,\n \"pid\": %s,\n \"projectSlug\": %q\n}\n",
			epic, story, remaining, stateFile, nowUTC().Format("2006-01-02T15:04:05Z"), heartbeat, pid, projectSlug)
		_ = os.WriteFile(markerFile, []byte(payload), 0o644)
		fmt.Printf("Marker created: %s\n", markerFile)
		return 0

	case "remove":
		_ = os.Remove(markerFile)
		fmt.Println("Marker removed")
		return 0

	case "check":
		if fileExists(markerFile) {
			fmt.Printf("{\"exists\":true,\"file\":%q}\n", markerFile)
			content, _ := readFile(markerFile)
			fmt.Print(content)
			if !strings.HasSuffix(content, "\n") {
				fmt.Println("")
			}
		} else {
			fmt.Println("{\"exists\":false}")
		}
		return 0

	case "heartbeat":
		if !fileExists(markerFile) {
			fmt.Println("No marker file to update")
			return 1
		}
		content, err := readFile(markerFile)
		if err != nil {
			fmt.Println("No marker file to update")
			return 1
		}
		newHeartbeat := nowUTC().Format("2006-01-02T15:04:05Z")
		// Match only the quoted value so the trailing comma survives; without
		// (?m), a bare `"heartbeat":.*$` never matches mid-file.
		updated := regexp.MustCompile(`(?m)("heartbeat":\s*)"[^"]*"`).ReplaceAllString(content, fmt.Sprintf(`${1}"%s"`, newHeartbeat))
		_ = os.WriteFile(markerFile, []byte(updated), 0o644)
		fmt.Printf("Heartbeat updated: %s\n", newHeartbeat)
		return 0

	default:
		fmt.Fprintln(os.Stderr, "Usage: orchestrator-helper marker <create|remove|check|heartbeat> [args]")
		return 1
	}
}
@@ -0,0 +1,97 @@
package main

import (
	"fmt"
	"os"
	"os/exec"
	"regexp"
	"strings"
)

func orchestratorParseOutput(args []string) int {
	if len(args) < 2 {
		fmt.Println("{\"status\":\"error\",\"reason\":\"output file not found or empty\"}")
		return 1
	}

	outputFile := args[0]
	stepType := args[1]

	if !fileExists(outputFile) {
		fmt.Println("{\"status\":\"error\",\"reason\":\"output file not found or empty\"}")
		return 1
	}

	content, err := readFile(outputFile)
	if err != nil {
		fmt.Println("{\"status\":\"error\",\"reason\":\"output file not found or empty\"}")
		return 1
	}

	lines := trimLines(content)
	if len(lines) > 150 {
		lines = lines[:150]
	}
	content = strings.Join(lines, "\n")

	prompt := buildParsePrompt(stepType, content)

	cmd := exec.Command("claude", "-p", "--model", "haiku", prompt)
	env := []string{}
	for _, e := range os.Environ() {
		if !strings.HasPrefix(e, "CLAUDECODE=") {
			env = append(env, e)
		}
	}
	cmd.Env = append(env, "STORY_AUTOMATOR_CHILD=true")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Println("{\"status\":\"error\",\"reason\":\"sub-agent call failed\"}")
		return 1
	}

	result := string(out)
	jsonLine := extractJSONLine(result)
	if jsonLine != "" {
		fmt.Println(jsonLine)
	} else {
		fmt.Println(result)
	}
	return 0
}

func buildParsePrompt(stepType, content string) string {
	switch stepType {
	case "create":
		return "Analyze this create-story session output. Return JSON only:\n" +
			"{\"status\":\"SUCCESS|FAILURE|AMBIGUOUS\",\"story_created\":true/false,\"story_file\":\"path or null\",\"summary\":\"brief description\",\"next_action\":\"proceed|retry|escalate\"}\n\n" +
			"Session output:\n---\n" + content + "\n---"
	case "dev":
		return "Analyze this dev-story session output. Return JSON only:\n" +
			"{\"status\":\"SUCCESS|FAILURE|AMBIGUOUS\",\"tests_passed\":true/false,\"build_passed\":true/false,\"summary\":\"brief description\",\"next_action\":\"proceed|retry|escalate\"}\n\n" +
			"Session output:\n---\n" + content + "\n---"
	case "auto":
		return "Analyze this testarch-automate session output. Return JSON only:\n" +
			"{\"status\":\"SUCCESS|FAILURE|AMBIGUOUS\",\"tests_added\":N,\"coverage_improved\":true/false,\"summary\":\"brief description\",\"next_action\":\"proceed|retry|escalate\"}\n\n" +
			"Session output:\n---\n" + content + "\n---"
	case "review":
		return "Analyze this code-review session output. Return JSON only:\n" +
			"{\"status\":\"SUCCESS|FAILURE|AMBIGUOUS\",\"issues_found\":{\"critical\":N,\"high\":N,\"medium\":N,\"low\":N},\"all_fixed\":true/false,\"summary\":\"brief description\",\"next_action\":\"proceed|retry|escalate\"}\n\n" +
			"Session output:\n---\n" + content + "\n---"
	default:
		return "Analyze this session output. Return JSON only:\n" +
			"{\"status\":\"SUCCESS|FAILURE|AMBIGUOUS\",\"summary\":\"brief description\",\"next_action\":\"proceed|retry|escalate\"}\n\n" +
			"Session output:\n---\n" + content + "\n---"
	}
}

func extractJSONLine(result string) string {
	lines := trimLines(result)
	jsonRe := regexp.MustCompile(`\{.*\}`)
	for _, line := range lines {
		if jsonRe.MatchString(line) {
			return jsonRe.FindString(line)
		}
	}
	return ""
}
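The sub-agent's reply may mix banners or log noise around the single JSON line the orchestrator cares about; `extractJSONLine` simply returns the first `{...}` span found on any line. A standalone sketch of that scan, with made-up sample output:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// firstJSONLine returns the first {...} span found on any line of the
// mixed sub-agent output, mirroring extractJSONLine above. It assumes
// the JSON verdict fits on one line, as the prompts request.
func firstJSONLine(result string) string {
	re := regexp.MustCompile(`\{.*\}`)
	for _, line := range strings.Split(result, "\n") {
		if m := re.FindString(line); m != "" {
			return m
		}
	}
	return ""
}

func main() {
	out := "startup banner\n{\"status\":\"SUCCESS\",\"next_action\":\"proceed\"}\ntrailing noise"
	fmt.Println(firstJSONLine(out)) // prints {"status":"SUCCESS","next_action":"proceed"}
}
```

This is why every `buildParsePrompt` variant insists on "Return JSON only": a multi-line or pretty-printed reply would defeat the line-oriented scan and fall through to printing the raw output.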
@@ -0,0 +1,162 @@
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"regexp"
	"strings"
)

type sprintStatus struct {
	Found  bool
	Story  string
	Status string
	Done   bool
	Reason string
}

func orchestratorSprintStatus(args []string) int {
	if len(args) == 0 {
		fmt.Fprintln(os.Stderr, "Usage: orchestrator-helper sprint-status <get|exists|check-epic> [args]")
		return 1
	}

	action := args[0]
	args = args[1:]
	projectRoot := getProjectRoot()
	statusFile := sprintStatusFile(projectRoot)

	switch action {
	case "get":
		if len(args) == 0 {
			fmt.Fprintln(os.Stderr, "Usage: orchestrator-helper sprint-status get <story_key>")
			return 1
		}
		storyKey := args[0]
		status := sprintStatusGet(statusFile, storyKey)
		if !status.Found && status.Reason != "" {
			fmt.Printf("{\"found\":false,\"status\":%q,\"reason\":%q}\n", status.Status, status.Reason)
			return 0
		}
		if !status.Found {
			fmt.Printf("{\"found\":false,\"story\":%q,\"status\":%q}\n", storyKey, "not_found")
			return 0
		}
		fmt.Printf("{\"found\":true,\"story\":%q,\"status\":%q,\"done\":%t}\n", storyKey, status.Status, status.Done)
		return 0

	case "exists":
		if fileExists(statusFile) {
			fmt.Println("true")
		} else {
			fmt.Println("false")
		}
		return 0
	case "check-epic":
		if len(args) == 0 {
			fmt.Fprintln(os.Stderr, "Usage: orchestrator-helper sprint-status check-epic <epic>")
			return 1
		}
		epic := args[0]
		if !fileExists(statusFile) {
			writeJSON(map[string]any{"ok": false, "epic": epic, "allStoriesDone": false, "reason": "sprint-status.yaml not found", "count": 0})
			return 0
		}
		stories, done := sprintStatusEpic(statusFile, epic)
		if len(stories) == 0 {
			writeJSON(map[string]any{"ok": false, "epic": epic, "allStoriesDone": false, "reason": "no_stories_found", "count": 0})
			return 0
		}
		allDone := done == len(stories)
		writeJSON(map[string]any{"ok": true, "epic": epic, "allStoriesDone": allDone, "total": len(stories), "done": done, "count": len(stories), "stories": stories})
		return 0
	default:
		fmt.Fprintln(os.Stderr, "Usage: orchestrator-helper sprint-status <get|exists|check-epic> [args]")
		return 1
	}
}

func sprintStatusFile(projectRoot string) string {
	preferred := filepath.Join(projectRoot, "_bmad-output", "implementation-artifacts", "sprint-status.yaml")
	if fileExists(preferred) {
		return preferred
	}
	legacy := filepath.Join(projectRoot, "_bmad-output", "sprint-status.yaml")
	if fileExists(legacy) {
		return legacy
	}
	return preferred
}

func sprintStatusGet(statusFile, storyKey string) sprintStatus {
	if !fileExists(statusFile) {
		return sprintStatus{Found: false, Status: "unknown", Reason: "sprint-status.yaml not found"}
	}

	content, err := readFile(statusFile)
	if err != nil {
		return sprintStatus{Found: false, Status: "unknown", Reason: "sprint-status.yaml not found"}
	}

	re := regexp.MustCompile(`(?m)^\s*` + regexp.QuoteMeta(storyKey) + `:\s*(\S+)`)
	m := re.FindStringSubmatch(content)
	if m == nil {
		prefix := storyKey
		if strings.Contains(storyKey, ".") {
			prefix = strings.ReplaceAll(storyKey, ".", "-")
		} else if regexp.MustCompile(`^[0-9]+-[0-9]+-`).MatchString(storyKey) {
			parts := strings.SplitN(storyKey, "-", 3)
			prefix = parts[0] + "-" + parts[1]
		}
		if regexp.MustCompile(`^[0-9]+-[0-9]+$`).MatchString(prefix) {
			prefixRe := regexp.MustCompile(`(?m)^\s*(` + regexp.QuoteMeta(prefix) + `-[^\s:]+)\s*:\s*(\S+)`)
			pm := prefixRe.FindStringSubmatch(content)
			if pm != nil {
				status := strings.TrimSpace(pm[2])
				done := status == "done"
				return sprintStatus{Found: true, Story: pm[1], Status: status, Done: done}
			}
		}
		return sprintStatus{Found: false, Story: storyKey, Status: "not_found"}
	}

	status := strings.TrimSpace(m[1])
	done := status == "done"
	return sprintStatus{Found: true, Story: storyKey, Status: status, Done: done}
}

func sprintStatusEpic(statusFile, epic string) ([]string, int) {
	if !fileExists(statusFile) {
		return nil, 0
	}
	content, err := readFile(statusFile)
	if err != nil {
		return nil, 0
	}
	stories := []string{}
	seen := map[string]bool{}
	doneCount := 0
	for _, line := range trimLines(content) {
		line = strings.TrimSpace(line)
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		if strings.HasPrefix(line, epic+".") || strings.HasPrefix(line, epic+"-") {
|
||||
parts := strings.SplitN(line, ":", 2)
|
||||
if len(parts) < 2 {
|
||||
continue
|
||||
}
|
||||
key := strings.TrimSpace(parts[0])
|
||||
status := strings.Fields(strings.TrimSpace(parts[1]))
|
||||
if !seen[key] {
|
||||
stories = append(stories, key)
|
||||
seen[key] = true
|
||||
}
|
||||
if len(status) > 0 && status[0] == "done" {
|
||||
doneCount++
|
||||
}
|
||||
}
|
||||
}
|
||||
return stories, doneCount
|
||||
}
|
||||
|
|
@ -0,0 +1,224 @@
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"regexp"
	"strings"
)

// orchestratorStateList lists orchestration state files in a folder together
// with their status and lastUpdated frontmatter values.
func orchestratorStateList(args []string) int {
	if len(args) < 1 {
		writeJSON(map[string]any{"ok": false, "error": "folder_not_found", "files": []any{}})
		return 1
	}
	folder := args[0]
	if !dirExists(folder) {
		writeJSON(map[string]any{"ok": false, "error": "folder_not_found", "files": []any{}})
		return 1
	}

	matches, _ := filepath.Glob(filepath.Join(folder, "orchestration-*.md"))
	files := []map[string]string{}
	for _, f := range matches {
		status := findFrontmatterValue(f, "status")
		updated := findFrontmatterValue(f, "lastUpdated")
		files = append(files, map[string]string{"path": f, "status": defaultString(status, "unknown"), "lastUpdated": defaultString(updated, "unknown")})
	}
	writeJSON(map[string]any{"ok": true, "files": files})
	return 0
}

// orchestratorStateLatest returns the most recently updated state file,
// optionally filtered by status.
func orchestratorStateLatest(args []string) int {
	if len(args) < 1 {
		writeJSON(map[string]any{"ok": false, "error": "folder_not_found"})
		return 1
	}
	folder := args[0]
	statusFilter := ""
	if len(args) > 1 {
		statusFilter = args[1]
	}
	if !dirExists(folder) {
		writeJSON(map[string]any{"ok": false, "error": "folder_not_found"})
		return 1
	}

	matches, _ := filepath.Glob(filepath.Join(folder, "orchestration-*.md"))
	latest := ""
	latestTime := ""
	for _, f := range matches {
		status := findFrontmatterValue(f, "status")
		if statusFilter != "" && status != statusFilter {
			continue
		}
		updated := findFrontmatterValue(f, "lastUpdated")
		// Lexicographic comparison is sufficient for ISO-8601 timestamps.
		if latestTime == "" || updated > latestTime {
			latest = f
			latestTime = updated
		}
	}

	if latest == "" {
		writeJSON(map[string]any{"ok": false, "error": "no_match"})
		return 0
	}

	writeJSON(map[string]any{"ok": true, "path": latest, "lastUpdated": latestTime})
	return 0
}

// orchestratorStateLatestIncomplete returns the most recently updated state
// file whose status is not COMPLETE.
func orchestratorStateLatestIncomplete(args []string) int {
	if len(args) < 1 {
		writeJSON(map[string]any{"ok": false, "error": "folder_not_found"})
		return 1
	}
	folder := args[0]
	if !dirExists(folder) {
		writeJSON(map[string]any{"ok": false, "error": "folder_not_found"})
		return 1
	}

	matches, _ := filepath.Glob(filepath.Join(folder, "orchestration-*.md"))
	latest := ""
	latestTime := ""
	latestStatus := ""
	for _, f := range matches {
		status := findFrontmatterValue(f, "status")
		// Skip COMPLETE states - we want incomplete ones
		if status == "COMPLETE" {
			continue
		}
		updated := findFrontmatterValue(f, "lastUpdated")
		if latestTime == "" || updated > latestTime {
			latest = f
			latestTime = updated
			latestStatus = status
		}
	}

	if latest == "" {
		writeJSON(map[string]any{"ok": false, "error": "no_incomplete_state"})
		return 0
	}

	writeJSON(map[string]any{"ok": true, "path": latest, "lastUpdated": latestTime, "status": latestStatus})
	return 0
}

// orchestratorStateSummary prints the key frontmatter fields of a state file
// plus its most recent action-log entry.
func orchestratorStateSummary(args []string) int {
	if len(args) < 1 {
		writeJSON(map[string]any{"ok": false, "error": "file_not_found"})
		return 1
	}
	file := args[0]
	if !fileExists(file) {
		writeJSON(map[string]any{"ok": false, "error": "file_not_found"})
		return 1
	}

	epic := findFrontmatterValue(file, "epic")
	epicName := findFrontmatterValue(file, "epicName")
	currentStory := findFrontmatterValue(file, "currentStory")
	currentStep := findFrontmatterValue(file, "currentStep")
	status := findFrontmatterValue(file, "status")
	lastUpdated := findFrontmatterValue(file, "lastUpdated")
	lastAction := extractLastAction(file)

	writeJSON(map[string]any{
		"ok":           true,
		"epic":         epic,
		"epicName":     epicName,
		"currentStory": currentStory,
		"currentStep":  currentStep,
		"status":       status,
		"lastUpdated":  lastUpdated,
		"lastAction":   lastAction,
	})
	return 0
}

// orchestratorStateUpdate rewrites top-level "key: value" lines in a state
// file for each --set key=value argument.
func orchestratorStateUpdate(args []string) int {
	if len(args) < 1 {
		writeJSON(map[string]any{"ok": false, "error": "file_not_found"})
		return 1
	}
	file := args[0]
	args = args[1:]
	if !fileExists(file) {
		writeJSON(map[string]any{"ok": false, "error": "file_not_found"})
		return 1
	}

	updatedKeys := []string{}
	content, err := readFile(file)
	if err != nil {
		writeJSON(map[string]any{"ok": false, "error": "file_not_found"})
		return 1
	}
	lines := trimLines(content)

	for i := 0; i < len(args); i++ {
		if args[i] == "--set" && i+1 < len(args) {
			kv := args[i+1]
			i++
			parts := strings.SplitN(kv, "=", 2)
			if len(parts) != 2 {
				continue
			}
			key := parts[0]
			val := parts[1]
			for idx, line := range lines {
				if strings.HasPrefix(line, key+":") {
					lines[idx] = fmt.Sprintf("%s: %s", key, val)
				}
			}
			updatedKeys = append(updatedKeys, key)
		}
	}

	if err := os.WriteFile(file, []byte(strings.Join(lines, "\n")), 0o644); err != nil {
		writeJSON(map[string]any{"ok": false, "error": "write_failed"})
		return 1
	}
	writeJSON(map[string]any{"ok": true, "updated": updatedKeys})
	return 0
}

// readStoryRangeFromState extracts the storyRange YAML list from a state
// file's frontmatter, falling back to scanning the whole file.
func readStoryRangeFromState(stateFile string) []string {
	content, err := readFile(stateFile)
	if err != nil {
		return nil
	}
	blocks := []string{extractFrontmatter(content), content}
	for _, block := range blocks {
		if strings.TrimSpace(block) == "" {
			continue
		}
		lines := trimLines(block)
		storyRange := []string{}
		inRange := false
		for _, line := range lines {
			if strings.HasPrefix(strings.TrimSpace(line), "storyRange:") {
				inRange = true
				if strings.HasSuffix(strings.TrimSpace(line), "[]") {
					storyRange = []string{}
				}
				continue
			}
			if inRange && strings.HasPrefix(strings.TrimSpace(line), "-") {
				val := strings.TrimSpace(strings.TrimPrefix(strings.TrimSpace(line), "-"))
				val = strings.Trim(val, "\"")
				if val != "" {
					storyRange = append(storyRange, val)
				}
				continue
			}
			// A new top-level key ends the list.
			if inRange && regexp.MustCompile(`^\S+:`).MatchString(line) && !strings.HasPrefix(strings.TrimSpace(line), "-") {
				inRange = false
			}
		}
		if len(storyRange) > 0 {
			return storyRange
		}
	}
	return nil
}

@ -0,0 +1,175 @@
package main

import (
	"fmt"
	"path/filepath"
	"regexp"
	"strings"
)

type reviewVerification struct {
	Verified bool
	Reason   string
	Status   string
	Note     string
}

// orchestratorNormalizeKey normalizes a story identifier ("3.2", "3-2", or a
// full "3-2-slug" key) and prints it in the format requested via --to.
func orchestratorNormalizeKey(args []string) int {
	if len(args) < 1 {
		writeJSON(map[string]any{"ok": false, "error": "input required"})
		return 1
	}
	input := args[0]
	format := "auto"
	if len(args) >= 3 && args[1] == "--to" {
		format = args[2]
	}

	result, ok := normalizeStoryKey(getProjectRoot(), input)
	if !ok {
		writeJSON(map[string]any{"ok": false, "error": "unrecognized format", "input": input})
		return 1
	}

	// fmt.Println (stdout) rather than the builtin println, which writes to stderr.
	switch format {
	case "id":
		fmt.Println(result.ID)
	case "prefix":
		fmt.Println(result.Prefix)
	case "key":
		fmt.Println(result.Key)
	default:
		writeJSON(map[string]any{"ok": true, "id": result.ID, "prefix": result.Prefix, "key": result.Key})
	}
	return 0
}

// orchestratorStoryFileStatus locates a story's markdown file and reports its
// Status and Title frontmatter.
func orchestratorStoryFileStatus(args []string) int {
	if len(args) < 1 {
		writeJSON(map[string]any{"ok": false, "error": "story input required"})
		return 1
	}
	storyInput := args[0]
	projectRoot := getProjectRoot()

	norm, ok := normalizeStoryKey(projectRoot, storyInput)
	if !ok || norm.Prefix == "" {
		writeJSON(map[string]any{"ok": false, "error": "could not normalize story key", "input": storyInput})
		return 1
	}

	artifactsDir := filepath.Join(projectRoot, "_bmad-output", "implementation-artifacts")
	matches, _ := filepath.Glob(filepath.Join(artifactsDir, norm.Prefix+"-*.md"))
	if len(matches) == 0 {
		writeJSON(map[string]any{"ok": false, "error": "story file not found", "prefix": norm.Prefix})
		return 1
	}
	storyFile := matches[0]

	status := findFrontmatterValueCase(storyFile, "Status")
	title := findFrontmatterValueCase(storyFile, "Title")
	writeJSON(map[string]any{"ok": true, "story_key": norm.Key, "file": storyFile, "status": defaultString(status, "unknown"), "title": title})
	return 0
}

// orchestratorVerifyCodeReview reports whether a story's review workflow is
// complete, checking sprint-status.yaml first and the story file second.
func orchestratorVerifyCodeReview(args []string) int {
	if len(args) < 1 {
		writeJSON(map[string]any{"verified": false, "reason": "story_key_required"})
		return 1
	}

	storyInput := args[0]
	projectRoot := getProjectRoot()

	norm, ok := normalizeStoryKey(projectRoot, storyInput)
	if !ok || norm.ID == "" {
		writeJSON(map[string]any{"verified": false, "reason": "could_not_normalize_key", "input": storyInput})
		return 1
	}

	statusFile := sprintStatusFile(projectRoot)
	status := sprintStatusGet(statusFile, norm.ID)
	if status.Done {
		writeJSON(map[string]any{"verified": true, "story": norm.Key, "sprint_status": "done", "source": "sprint-status.yaml"})
		return 0
	}

	storyStatus, ok := storyFileStatus(projectRoot, storyInput)
	if ok && storyStatus == "done" {
		writeJSON(map[string]any{"verified": true, "story": norm.Key, "sprint_status": status.Status, "story_file_status": "done", "source": "story-file", "note": "sprint_status_not_updated"})
		return 0
	}

	writeJSON(map[string]any{"verified": false, "story": norm.Key, "sprint_status": status.Status, "story_file_status": defaultString(storyStatus, "unknown"), "reason": "workflow_not_complete"})
	return 1
}

// verifyCodeReviewCompletion is the in-process equivalent of
// orchestratorVerifyCodeReview.
func verifyCodeReviewCompletion(projectRoot, storyKey string) reviewVerification {
	statusFile := sprintStatusFile(projectRoot)
	status := sprintStatusGet(statusFile, storyKey)
	if status.Done {
		return reviewVerification{Verified: true, Status: "done"}
	}
	storyStatus, ok := storyFileStatus(projectRoot, storyKey)
	if ok && storyStatus == "done" {
		return reviewVerification{Verified: true, Status: status.Status, Note: "story_file_done_but_sprint_status_not_updated"}
	}
	return reviewVerification{Verified: false, Status: status.Status, Reason: "workflow_not_complete"}
}

// normalizeStoryKey converts any supported story identifier into its dotted ID
// ("3.2"), its file prefix ("3-2"), and its full slug key ("3-2-…"), resolving
// the slug from the artifacts folder or sprint-status.yaml when needed.
func normalizeStoryKey(projectRoot, input string) (struct{ ID, Prefix, Key string }, bool) {
	result := struct{ ID, Prefix, Key string }{}
	if regexp.MustCompile(`^[0-9]+\.[0-9]+$`).MatchString(input) {
		result.ID = input
		result.Prefix = strings.ReplaceAll(input, ".", "-")
	} else if regexp.MustCompile(`^[0-9]+-[0-9]+$`).MatchString(input) {
		result.Prefix = input
		result.ID = strings.ReplaceAll(input, "-", ".")
	} else if regexp.MustCompile(`^[0-9]+-[0-9]+-`).MatchString(input) {
		result.Key = input
		parts := strings.SplitN(input, "-", 3)
		result.Prefix = parts[0] + "-" + parts[1]
		result.ID = strings.ReplaceAll(result.Prefix, "-", ".")
	} else {
		return result, false
	}

	if result.Key == "" {
		artifactsDir := filepath.Join(projectRoot, "_bmad-output", "implementation-artifacts")
		matches, _ := filepath.Glob(filepath.Join(artifactsDir, result.Prefix+"-*.md"))
		if len(matches) > 0 {
			result.Key = strings.TrimSuffix(filepath.Base(matches[0]), ".md")
		}
	}

	if result.Key == "" {
		statusFile := sprintStatusFile(projectRoot)
		if fileExists(statusFile) {
			content, _ := readFile(statusFile)
			re := regexp.MustCompile(`(?m)^\s*` + regexp.QuoteMeta(result.Prefix) + `-`) // full key
			lines := re.FindAllString(content, -1)
			if len(lines) > 0 {
				result.Key = strings.TrimSpace(strings.SplitN(lines[0], ":", 2)[0])
			}
		}
	}

	if result.Key == "" {
		result.Key = result.Prefix
	}

	return result, true
}

// storyFileStatus returns the Status frontmatter value of a story's file.
func storyFileStatus(projectRoot, storyInput string) (string, bool) {
	norm, ok := normalizeStoryKey(projectRoot, storyInput)
	if !ok {
		return "", false
	}
	artifactsDir := filepath.Join(projectRoot, "_bmad-output", "implementation-artifacts")
	matches, _ := filepath.Glob(filepath.Join(artifactsDir, norm.Prefix+"-*.md"))
	if len(matches) == 0 {
		return "", false
	}
	status := findFrontmatterValueCase(matches[0], "Status")
	return status, true
}

@ -0,0 +1,65 @@
package main

import (
	"os/exec"
	"strings"
)

// findFrontmatterValue returns the value of a frontmatter key with surrounding
// quotes stripped, or "" when the key is absent.
func findFrontmatterValue(path, key string) string {
	content, err := readFile(path)
	if err != nil {
		return ""
	}
	front := extractFrontmatter(content)
	for _, line := range trimLines(front) {
		if strings.HasPrefix(line, key+":") {
			return strings.Trim(strings.TrimSpace(strings.TrimPrefix(line, key+":")), "\"")
		}
	}
	return ""
}

// findFrontmatterValueCase is like findFrontmatterValue but matches the key
// case-insensitively, so "Status:" and "status:" both resolve. (The original
// body duplicated findFrontmatterValue verbatim; the name implies this was the
// intended distinction.)
func findFrontmatterValueCase(path, key string) string {
	content, err := readFile(path)
	if err != nil {
		return ""
	}
	front := extractFrontmatter(content)
	for _, line := range trimLines(front) {
		parts := strings.SplitN(line, ":", 2)
		if len(parts) == 2 && strings.EqualFold(strings.TrimSpace(parts[0]), key) {
			return strings.Trim(strings.TrimSpace(parts[1]), "\"")
		}
	}
	return ""
}

// extractLastAction returns the first entry under the "## Action Log" heading
// (the line two below the heading, with list markers stripped).
func extractLastAction(path string) string {
	content, err := readFile(path)
	if err != nil {
		return ""
	}
	lines := trimLines(content)
	for i := 0; i < len(lines); i++ {
		if strings.HasPrefix(lines[i], "## Action Log") {
			if i+2 < len(lines) {
				line := lines[i+2]
				return strings.TrimLeft(strings.TrimSpace(line), "* ")
			}
			break
		}
	}
	return ""
}

// defaultString returns def when val is empty.
func defaultString(val, def string) string {
	if val == "" {
		return def
	}
	return val
}

func execCommand(name string, args ...string) *exec.Cmd {
	return exec.Command(name, args...)
}

@ -0,0 +1,480 @@
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"regexp"
	"sort"
	"strconv"
	"strings"
)

type complexityRule struct {
	Label   string `json:"label"`
	Pattern string `json:"pattern"`
	Score   int    `json:"score"`
}

type structuralRules struct {
	ACCountMedium           int `json:"ac_count_medium"`
	ACCountHigh             int `json:"ac_count_high"`
	ACCountMediumScore      int `json:"ac_count_medium_score"`
	ACCountHighScore        int `json:"ac_count_high_score"`
	DependencyScore         int `json:"dependency_score"`
	LargeStoryWordThreshold int `json:"large_story_word_threshold"`
	LargeStoryScore         int `json:"large_story_score"`
}

type complexityRules struct {
	Thresholds struct {
		LowMax    int `json:"low_max"`
		MediumMax int `json:"medium_max"`
	} `json:"thresholds"`
	StructuralRules structuralRules  `json:"structural_rules"`
	Rules           []complexityRule `json:"rules"`
}

// cmdParseEpic extracts every "### Story N.M: Title" heading from an epic
// markdown file, tracking the enclosing "## Epic N:" title as it goes.
func cmdParseEpic(args []string) int {
	epicFile := ""
	for i := 0; i < len(args); i++ {
		switch args[i] {
		case "--file":
			if i+1 < len(args) {
				epicFile = args[i+1]
				i++
			}
		}
	}

	if epicFile == "" || !fileExists(epicFile) {
		writeJSON(map[string]any{"ok": false, "error": "epic_file_not_found"})
		return 1
	}

	content, err := readFile(epicFile)
	if err != nil {
		writeJSON(map[string]any{"ok": false, "error": "epic_file_not_found"})
		return 1
	}

	lines := trimLines(content)
	epicTitle := ""
	for _, line := range lines {
		if strings.HasPrefix(line, "# ") {
			epicTitle = strings.TrimSpace(strings.TrimPrefix(line, "# "))
			break
		}
	}

	storyRe := regexp.MustCompile(`^###\s+Story\s+(\d+)\.(\d+):\s*(.*)$`)
	epicRe := regexp.MustCompile(`^##\s+Epic\s+(\d+):\s*(.*)$`)

	currentEpicTitle := ""
	stories := make([]map[string]string, 0)

	for _, line := range lines {
		if m := epicRe.FindStringSubmatch(line); m != nil {
			currentEpicTitle = strings.TrimSpace(m[2])
			continue
		}
		if m := storyRe.FindStringSubmatch(line); m != nil {
			epicNum := m[1]
			storyNum := m[2]
			storyTitle := strings.TrimSpace(m[3])
			storyID := fmt.Sprintf("%s.%s", epicNum, storyNum)
			stories = append(stories, map[string]string{
				"epicNum":   epicNum,
				"epicTitle": currentEpicTitle,
				"storyNum":  storyNum,
				"storyId":   storyID,
				"title":     storyTitle,
			})
		}
	}

	writeJSON(map[string]any{
		"ok":        true,
		"epicTitle": epicTitle,
		"stories":   stories,
		"count":     len(stories),
		"file":      epicFile,
	})
	return 0
}

// cmdParseStory extracts one story's description, acceptance criteria, and
// dependencies from an epic file, then scores its complexity against a JSON
// rules file.
func cmdParseStory(args []string) int {
	epicFile := ""
	storyID := ""
	rulesFile := ""

	for i := 0; i < len(args); i++ {
		switch args[i] {
		case "--epic":
			if i+1 < len(args) {
				epicFile = args[i+1]
				i++
			}
		case "--story":
			if i+1 < len(args) {
				storyID = args[i+1]
				i++
			}
		case "--rules":
			if i+1 < len(args) {
				rulesFile = args[i+1]
				i++
			}
		}
	}

	if epicFile == "" || storyID == "" || !fileExists(epicFile) {
		writeJSON(map[string]any{"ok": false, "error": "missing_epic_or_story"})
		return 1
	}
	if rulesFile == "" || !fileExists(rulesFile) {
		writeJSON(map[string]any{"ok": false, "error": "rules_file_not_found"})
		return 1
	}

	content, err := readFile(epicFile)
	if err != nil {
		writeJSON(map[string]any{"ok": false, "error": "missing_epic_or_story"})
		return 1
	}
	lines := trimLines(content)

	headPattern := regexp.MustCompile(`^###\s+Story\s+` + regexp.QuoteMeta(storyID) + `:\s*(.*)$`)
	headLine := ""
	title := ""
	startIndex := -1
	for idx, line := range lines {
		if m := headPattern.FindStringSubmatch(line); m != nil {
			headLine = line
			title = strings.TrimSpace(m[1])
			startIndex = idx
			break
		}
	}

	if headLine == "" {
		writeJSON(map[string]any{"ok": false, "error": "story_not_found"})
		return 1
	}

	descriptionLines := []string{}
	acLines := []string{}
	dependencies := ""

	inAC := false
	for i := startIndex + 1; i < len(lines); i++ {
		line := lines[i]
		// Stop at the next story or epic heading.
		if strings.HasPrefix(line, "### Story ") || strings.HasPrefix(line, "## Epic ") {
			break
		}

		if strings.Contains(line, "Acceptance Criteria") {
			inAC = true
			continue
		}

		trimmed := strings.TrimSpace(line)
		if trimmed != "" {
			if inAC {
				acLines = append(acLines, trimmed)
			} else {
				descriptionLines = append(descriptionLines, trimmed)
			}
		}

		if dependencies == "" {
			if strings.Contains(line, "Dependencies:") || strings.Contains(line, "**Dependencies**:") {
				dep := line
				dep = strings.ReplaceAll(dep, "**Dependencies**:", "")
				dep = strings.ReplaceAll(dep, "Dependencies:", "")
				dependencies = strings.TrimSpace(dep)
			}
		}
	}

	description := strings.Join(descriptionLines, " ")
	description = strings.Join(strings.Fields(description), " ")

	rulesRaw, err := os.ReadFile(rulesFile)
	if err != nil {
		writeJSON(map[string]any{"ok": false, "error": "rules_file_not_found"})
		return 1
	}

	var rules complexityRules
	if err := json.Unmarshal(rulesRaw, &rules); err != nil {
		writeJSON(map[string]any{"ok": false, "error": "rules_file_not_found"})
		return 1
	}

	contentForScore := strings.TrimSpace(strings.Join([]string{title, description, strings.Join(acLines, " ")}, " "))
	score := 0
	reasons := []string{}
	for _, rule := range rules.Rules {
		re, err := regexp.Compile("(?i)" + rule.Pattern)
		if err != nil {
			continue
		}
		if re.MatchString(contentForScore) {
			score += rule.Score
			reasons = append(reasons, rule.Label)
		}
	}

	// Structural analysis pass
	sr := rules.StructuralRules
	structuralReasons := []string{}

	// AC count scoring (high replaces medium, not additive)
	if sr.ACCountHigh > 0 && len(acLines) > sr.ACCountHigh {
		score += sr.ACCountHighScore
		structuralReasons = append(structuralReasons, fmt.Sprintf("High AC count (%d)", len(acLines)))
	} else if sr.ACCountMedium > 0 && len(acLines) > sr.ACCountMedium {
		score += sr.ACCountMediumScore
		structuralReasons = append(structuralReasons, fmt.Sprintf("Elevated AC count (%d)", len(acLines)))
	}

	// Dependency scoring
	if sr.DependencyScore > 0 && dependencies != "" && strings.ToLower(dependencies) != "none" {
		score += sr.DependencyScore
		structuralReasons = append(structuralReasons, "Has explicit dependencies")
	}

	// Large story scoring (word count)
	if sr.LargeStoryWordThreshold > 0 {
		wordCount := len(strings.Fields(contentForScore))
		if wordCount > sr.LargeStoryWordThreshold {
			score += sr.LargeStoryScore
			structuralReasons = append(structuralReasons, fmt.Sprintf("Large story (%d words)", wordCount))
		}
	}

	reasons = append(reasons, structuralReasons...)

	level := "High"
	if score <= rules.Thresholds.LowMax {
		level = "Low"
	} else if score <= rules.Thresholds.MediumMax {
		level = "Medium"
	}

	writeJSON(map[string]any{
		"ok":                 true,
		"storyId":            storyID,
		"title":              title,
		"description":        description,
		"acceptanceCriteria": acLines,
		"dependencies":       dependencies,
		"complexity": map[string]any{
			"score":   score,
			"level":   level,
			"reasons": reasons,
		},
	})
	return 0
}

// cmdParseStoryRange expands a selection string ("all", "1,3", "2-5") into
// sorted 1-based indices, optionally mapping them onto a CSV list of IDs.
func cmdParseStoryRange(args []string) int {
	input := ""
	total := 0
	idsCSV := ""

	for i := 0; i < len(args); i++ {
		switch args[i] {
		case "--input":
			if i+1 < len(args) {
				input = args[i+1]
				i++
			}
		case "--total":
			if i+1 < len(args) {
				if v, err := strconv.Atoi(args[i+1]); err == nil {
					total = v
				}
				i++
			}
		case "--ids":
			if i+1 < len(args) {
				idsCSV = args[i+1]
				i++
			}
		}
	}

	if input == "" || total <= 0 {
		writeJSON(map[string]any{"ok": false, "error": "missing_input_or_total"})
		return 1
	}

	idsArr := []string{}
	if idsCSV != "" {
		for _, part := range strings.Split(idsCSV, ",") {
			idsArr = append(idsArr, strings.TrimSpace(part))
		}
	}

	selected := map[int]bool{}
	addSelected := func(n int) {
		if !selected[n] {
			selected[n] = true
		}
	}

	normalized := strings.ToLower(strings.ReplaceAll(input, " ", ""))
	if normalized == "all" {
		for i := 1; i <= total; i++ {
			addSelected(i)
		}
	} else {
		parts := strings.Split(normalized, ",")
		for _, part := range parts {
			part = strings.ReplaceAll(part, " ", "")
			if part == "" {
				continue
			}
			if strings.Contains(part, "-") {
				bounds := strings.SplitN(part, "-", 2)
				start, err1 := strconv.Atoi(bounds[0])
				end, err2 := strconv.Atoi(bounds[1])
				if err1 != nil || err2 != nil {
					continue
				}
				// Accept ranges given in either order.
				if start <= end {
					for i := start; i <= end; i++ {
						addSelected(i)
					}
				} else {
					for i := end; i <= start; i++ {
						addSelected(i)
					}
				}
			} else {
				n, err := strconv.Atoi(part)
				if err == nil {
					addSelected(n)
				}
			}
		}
	}

	indices := []int{}
	for n := range selected {
		if n >= 1 && n <= total {
			indices = append(indices, n)
		}
	}

	sort.Ints(indices)

	storyIDs := []string{}
	if idsCSV != "" && len(idsArr) > 0 {
		for _, idx := range indices {
			if idx-1 >= 0 && idx-1 < len(idsArr) {
				storyIDs = append(storyIDs, idsArr[idx-1])
			}
		}
	}

	writeJSON(map[string]any{"ok": true, "indices": indices, "storyIds": storyIDs, "count": len(indices)})
	return 0
}

// cmdEpicComplete reports whether the highest story ID in a range CSV matches
// the highest story ID defined in the epic file.
func cmdEpicComplete(args []string) int {
	epicFile := ""
	rangeCSV := ""
	for i := 0; i < len(args); i++ {
		switch args[i] {
		case "--epic":
			if i+1 < len(args) {
				epicFile = args[i+1]
				i++
			}
		case "--range":
			if i+1 < len(args) {
				rangeCSV = args[i+1]
				i++
			}
		}
	}

	if epicFile == "" || !fileExists(epicFile) {
		writeJSON(map[string]any{"ok": false, "error": "epic_file_not_found"})
		return 1
	}

	content, err := readFile(epicFile)
	if err != nil {
		writeJSON(map[string]any{"ok": false, "error": "epic_file_not_found"})
		return 1
	}

	storyRe := regexp.MustCompile(`^###\s+Story\s+(\d+)\.(\d+):`)
	lines := trimLines(content)

	storyIDs := []string{}
	for _, line := range lines {
		if m := storyRe.FindStringSubmatch(line); m != nil {
			storyIDs = append(storyIDs, fmt.Sprintf("%s.%s", m[1], m[2]))
		}
	}

	if len(storyIDs) == 0 {
		writeJSON(map[string]any{"ok": false, "error": "no_stories_found"})
		return 1
	}

	maxEpicNum := 0
	maxStoryNum := 0
	for _, sid := range storyIDs {
		parts := strings.SplitN(sid, ".", 2)
		if len(parts) != 2 {
			continue
		}
		epicNum, _ := strconv.Atoi(parts[0])
		storyNum, _ := strconv.Atoi(parts[1])
		if epicNum > maxEpicNum || (epicNum == maxEpicNum && storyNum > maxStoryNum) {
			maxEpicNum = epicNum
			maxStoryNum = storyNum
		}
	}

	maxEpicID := fmt.Sprintf("%d.%d", maxEpicNum, maxStoryNum)

	maxRangeEpic := 0
	maxRangeStory := 0
	for _, sid := range strings.Split(rangeCSV, ",") {
		sid = strings.TrimSpace(sid)
		if sid == "" {
			continue
		}
		parts := strings.SplitN(sid, ".", 2)
		if len(parts) != 2 {
			continue
		}
		epicNum, _ := strconv.Atoi(parts[0])
		storyNum, _ := strconv.Atoi(parts[1])
		if epicNum > maxRangeEpic || (epicNum == maxRangeEpic && storyNum > maxRangeStory) {
			maxRangeEpic = epicNum
			maxRangeStory = storyNum
		}
	}

	maxRangeID := fmt.Sprintf("%d.%d", maxRangeEpic, maxRangeStory)
	epicComplete := maxRangeID == maxEpicID

	writeJSON(map[string]any{"ok": true, "epicComplete": epicComplete, "maxEpicStory": maxEpicID})
	return 0
}

@@ -0,0 +1,624 @@
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
	"regexp"
	"sort"
	"strings"
)

func cmdBuildStateDoc(args []string) int {
	templatePath := ""
	outputFolder := ""
	configFile := ""
	configJSON := ""

	for i := 0; i < len(args); i++ {
		switch args[i] {
		case "--template":
			if i+1 < len(args) {
				templatePath = args[i+1]
				i++
			}
		case "--output-folder":
			if i+1 < len(args) {
				outputFolder = args[i+1]
				i++
			}
		case "--config-file":
			if i+1 < len(args) {
				configFile = args[i+1]
				i++
			}
		case "--config-json":
			if i+1 < len(args) {
				configJSON = args[i+1]
				i++
			}
		}
	}

	if templatePath == "" || !fileExists(templatePath) || outputFolder == "" {
		writeJSON(map[string]any{"ok": false, "error": "missing_template_or_output"})
		return 1
	}

	if configFile != "" && fileExists(configFile) {
		if raw, err := readFile(configFile); err == nil {
			configJSON = raw
		}
	}

	if strings.TrimSpace(configJSON) == "" {
		writeJSON(map[string]any{"ok": false, "error": "missing_config"})
		return 1
	}

	if err := ensureDir(outputFolder); err != nil {
		writeJSON(map[string]any{"ok": false, "error": "output_folder_failed"})
		return 1
	}

	var config map[string]any
	if err := json.Unmarshal([]byte(configJSON), &config); err != nil {
		writeJSON(map[string]any{"ok": false, "error": "missing_config"})
		return 1
	}

	now := nowUTC().Format("2006-01-02T15:04:05Z")
	stamp := nowUTC().Format("20060102-150405")

	epicID, _ := config["epic"].(string)
	if epicID == "" {
		epicID = "epic"
	}
	safeEpic := regexp.MustCompile(`[^a-zA-Z0-9]+`).ReplaceAllString(epicID, "-")
	safeEpic = strings.Trim(safeEpic, "-")
	if safeEpic == "" {
		safeEpic = "epic"
	}

	outputName := fmt.Sprintf("orchestration-%s-%s.md", safeEpic, stamp)
	outputPath := filepath.Join(outputFolder, outputName)

	text, err := readFile(templatePath)
	if err != nil {
		writeJSON(map[string]any{"ok": false, "error": "missing_template_or_output"})
		return 1
	}

	getString := func(key, def string) string {
		if v, ok := config[key].(string); ok {
			return v
		}
		return def
	}

	getSlice := func(key string) []any {
		if v, ok := config[key].([]any); ok {
			return v
		}
		return []any{}
	}

	replacements := map[string]any{
		"epic":           getString("epic", ""),
		"epicName":       getString("epicName", ""),
		"storyRange":     config["storyRange"],
		"status":         getString("status", "READY"),
		"currentStory":   config["currentStory"],
		"currentStep":    config["currentStep"],
		"stepsCompleted": getSlice("stepsCompleted"),
		"lastUpdated":    now,
		"createdAt":      now,
		"aiCommand":      getString("aiCommand", ""),
		"agentsFile":     getString("agentsFile", ""),
		"complexityFile": getString("complexityFile", ""),
	}

	if overrides, ok := config["overrides"].(map[string]any); ok {
		skip := false
		if v, ok := overrides["skipAutomate"].(bool); ok {
			skip = v
		}
		maxParallel := 1
		switch v := overrides["maxParallel"].(type) {
		case float64:
			maxParallel = int(v)
		case int:
			maxParallel = v
		}
		repl := fmt.Sprintf("overrides:\n  skipAutomate: %t\n  maxParallel: %d\n", skip, maxParallel)
		re := regexp.MustCompile(`(?m)^overrides:\n(?:(?:\s{2}.*\n)*)`)
		text = re.ReplaceAllString(text, repl)
	}

	if custom, ok := config["customInstructions"]; ok {
		b, _ := json.Marshal(custom)
		re := regexp.MustCompile(`(?m)^customInstructions:.*$`)
		text = re.ReplaceAllString(text, "customInstructions: "+string(b))
	}

	if agent, ok := config["agentConfig"].(map[string]any); ok {
		getFallback := func(v any) (string, bool) {
			switch val := v.(type) {
			case string:
				return val, true
			case bool:
				if !val {
					return "false", true
				}
				return "true", true
			default:
				return "", false
			}
		}
		formatFallback := func(v any) string {
			if s, ok := v.(string); ok {
				lower := strings.ToLower(strings.TrimSpace(s))
				if lower == "false" || lower == "none" || lower == "null" {
					return "false"
				}
				return mustJSON(s)
			}
			if b, ok := v.(bool); ok {
				if !b {
					return "false"
				}
				return "true"
			}
			return "false"
		}
		parseTaskOverrides := func(raw any) map[string]map[string]any {
			out := map[string]map[string]any{}
			taskMap, ok := raw.(map[string]any)
			if !ok {
				return out
			}
			for task, val := range taskMap {
				entry, ok := val.(map[string]any)
				if !ok {
					continue
				}
				primary := ""
				if v, ok := entry["primary"].(string); ok {
					primary = v
				}
				fallbackVal, hasFallback := getFallback(entry["fallback"])
				if primary == "" && !hasFallback {
					continue
				}
				out[task] = map[string]any{
					"primary":  primary,
					"fallback": fallbackVal,
				}
			}
			return out
		}
		defaultPrimary := "claude"
		defaultFallback := "codex"
		if v, ok := agent["defaultPrimary"].(string); ok && v != "" {
			defaultPrimary = v
		} else if v, ok := agent["primary"].(string); ok && v != "" {
			defaultPrimary = v
		}
		if v, ok := agent["defaultFallback"].(string); ok && v != "" {
			defaultFallback = v
		} else if v, ok := agent["fallback"].(string); ok && v != "" {
			defaultFallback = v
		}
		perTask := parseTaskOverrides(agent["perTask"])
		complexityOverrides := map[string]map[string]map[string]any{}
		if raw, ok := agent["complexityOverrides"].(map[string]any); ok {
			for level, v := range raw {
				complexityOverrides[level] = parseTaskOverrides(v)
			}
		}

		lines := []string{
			"agentConfig:",
			fmt.Sprintf("  defaultPrimary: %s", mustJSON(defaultPrimary)),
			fmt.Sprintf("  defaultFallback: %s", mustJSON(defaultFallback)),
		}

		if len(perTask) > 0 {
			lines = append(lines, "  perTask:")
			keys := make([]string, 0, len(perTask))
			for k := range perTask {
				keys = append(keys, k)
			}
			sort.Strings(keys)
			for _, task := range keys {
				entry := perTask[task]
				lines = append(lines, fmt.Sprintf("    %s:", task))
				if p, ok := entry["primary"].(string); ok && p != "" {
					lines = append(lines, fmt.Sprintf("      primary: %s", mustJSON(p)))
				}
				if f, ok := entry["fallback"]; ok {
					lines = append(lines, fmt.Sprintf("      fallback: %s", formatFallback(f)))
				}
			}
		}

		if len(complexityOverrides) > 0 {
			lines = append(lines, "  complexityOverrides:")
			levels := make([]string, 0, len(complexityOverrides))
			for level := range complexityOverrides {
				levels = append(levels, level)
			}
			sort.Strings(levels)
			for _, level := range levels {
				taskMap := complexityOverrides[level]
				if len(taskMap) == 0 {
					continue
				}
				lines = append(lines, fmt.Sprintf("    %s:", level))
				taskKeys := make([]string, 0, len(taskMap))
				for k := range taskMap {
					taskKeys = append(taskKeys, k)
				}
				sort.Strings(taskKeys)
				for _, task := range taskKeys {
					entry := taskMap[task]
					lines = append(lines, fmt.Sprintf("      %s:", task))
					if p, ok := entry["primary"].(string); ok && p != "" {
						lines = append(lines, fmt.Sprintf("        primary: %s", mustJSON(p)))
					}
					if f, ok := entry["fallback"]; ok {
						lines = append(lines, fmt.Sprintf("        fallback: %s", formatFallback(f)))
					}
				}
			}
		}

		block := strings.Join(lines, "\n") + "\n"
		re := regexp.MustCompile(`(?m)^agentConfig:\n(?:(?:\s{2}.*\n)*)`)
		text = re.ReplaceAllString(text, block)
	}

	for key, value := range replacements {
		b, _ := json.Marshal(value)
		re := regexp.MustCompile(`(?m)^` + regexp.QuoteMeta(key) + `:.*$`)
		text = re.ReplaceAllString(text, fmt.Sprintf("%s: %s", key, string(b)))
	}

	storyRange := []string{}
	if sr, ok := config["storyRange"].([]any); ok {
		for _, v := range sr {
			if s, ok := v.(string); ok {
				storyRange = append(storyRange, s)
			}
		}
	}

	overridesSkip := false
	overridesMax := 1
	if overrides, ok := config["overrides"].(map[string]any); ok {
		if v, ok := overrides["skipAutomate"].(bool); ok {
			overridesSkip = v
		}
		switch v := overrides["maxParallel"].(type) {
		case float64:
			overridesMax = int(v)
		case int:
			overridesMax = v
		}
	}

	bodyReplacements := map[string]string{
		"{{epicName}}":               getString("epicName", ""),
		"{{epic}}":                   getString("epic", ""),
		"{{storyRange}}":             strings.Join(storyRange, ", "),
		"{{createdAt}}":              now,
		"{{overrides.skipAutomate}}": fmt.Sprintf("%t", overridesSkip),
		"{{overrides.maxParallel}}":  fmt.Sprintf("%d", overridesMax),
		"{{customInstructions}}":     getString("customInstructions", ""),
	}

	for k, v := range bodyReplacements {
		text = strings.ReplaceAll(text, k, v)
	}

	rows := []string{}
	for _, sid := range storyRange {
		rows = append(rows, fmt.Sprintf("| %s | ⏳ | ⏳ | ⏳ | ⏳ | ⏳ | pending |", sid))
	}
	progressRows := strings.Join(rows, "\n")
	text = strings.ReplaceAll(text, "<!-- Progress rows will be appended here -->", progressRows)

	if err := os.WriteFile(outputPath, []byte(text), 0o644); err != nil {
		writeJSON(map[string]any{"ok": false, "error": "write_failed"})
		return 1
	}

	writeJSON(map[string]any{"ok": true, "path": outputPath, "createdAt": now})
	return 0
}

func cmdSprintCompare(args []string) int {
	stateFile := ""
	sprintFile := ""

	for i := 0; i < len(args); i++ {
		switch args[i] {
		case "--state":
			if i+1 < len(args) {
				stateFile = args[i+1]
				i++
			}
		case "--sprint":
			if i+1 < len(args) {
				sprintFile = args[i+1]
				i++
			}
		}
	}

	if stateFile == "" || !fileExists(stateFile) {
		writeJSON(map[string]any{"ok": false, "error": "state_not_found"})
		return 1
	}
	if sprintFile == "" || !fileExists(sprintFile) {
		writeJSON(map[string]any{"ok": false, "error": "sprint_not_found"})
		return 1
	}

	text, err := readFile(stateFile)
	if err != nil {
		writeJSON(map[string]any{"ok": false, "error": "state_not_found"})
		return 1
	}

	front := extractFrontmatter(text)
	lines := trimLines(front)

	storyRange := []string{}
	currentStory := ""
	key := ""
	for _, line := range lines {
		if strings.HasPrefix(line, "currentStory:") {
			val := strings.TrimSpace(strings.TrimPrefix(line, "currentStory:"))
			val = strings.Trim(val, "\"")
			if val != "null" && val != "" {
				currentStory = val
			}
		}
		if strings.HasPrefix(line, "storyRange:") {
			key = "storyRange"
			if strings.HasSuffix(strings.TrimSpace(line), "[]") {
				storyRange = []string{}
			}
			continue
		}
		if key == "storyRange" && strings.HasPrefix(strings.TrimSpace(line), "-") {
			storyRange = append(storyRange, strings.TrimSpace(strings.TrimPrefix(strings.TrimSpace(line), "-")))
			continue
		}
		if regexp.MustCompile(`^\S+:`).MatchString(line) && !strings.HasPrefix(line, "storyRange:") {
			key = ""
		}
	}

	before := []string{}
	if currentStory != "" {
		idx := -1
		for i, sid := range storyRange {
			if sid == currentStory {
				idx = i
				break
			}
		}
		if idx >= 0 {
			before = append(before, storyRange[:idx]...)
		} else {
			before = append(before, storyRange...)
		}
	} else {
		before = append(before, storyRange...)
	}

	statusText, err := readFile(sprintFile)
	if err != nil {
		writeJSON(map[string]any{"ok": false, "error": "sprint_not_found"})
		return 1
	}

	incomplete := []string{}
	for _, sid := range before {
		re := regexp.MustCompile(`(?m)^\s*` + regexp.QuoteMeta(sid) + `:\s*(\S+)`)
		m := re.FindStringSubmatch(statusText)
		if m == nil || m[1] != "done" {
			incomplete = append(incomplete, sid)
		}
	}

	writeJSON(map[string]any{"ok": true, "incomplete": incomplete, "checked": before})
	return 0
}

func cmdStateMetrics(args []string) int {
	stateFile := ""
	for i := 0; i < len(args); i++ {
		switch args[i] {
		case "--state":
			if i+1 < len(args) {
				stateFile = args[i+1]
				i++
			}
		}
	}

	if stateFile == "" || !fileExists(stateFile) {
		writeJSON(map[string]any{"ok": false, "error": "state_not_found"})
		return 1
	}

	text, err := readFile(stateFile)
	if err != nil {
		writeJSON(map[string]any{"ok": false, "error": "state_not_found"})
		return 1
	}

	lines := trimLines(text)
	inTable := false
	total := 0
	completed := 0
	for _, line := range lines {
		if strings.HasPrefix(line, "| Story ") {
			inTable = true
			continue
		}
		if inTable && regexp.MustCompile(`^\|[- ]*\|`).MatchString(line) {
			continue
		}
		if inTable && strings.HasPrefix(line, "|") {
			parts := strings.Split(line, "|")
			if len(parts) < 8 {
				continue
			}
			story := strings.TrimSpace(parts[1])
			status := strings.TrimSpace(parts[7])
			if story != "" {
				total++
				statusLower := strings.ToLower(status)
				// "complete" also matches "completed"
				if strings.Contains(statusLower, "done") || strings.Contains(statusLower, "complete") {
					completed++
				}
			}
			continue
		}
		if inTable && !strings.HasPrefix(line, "|") {
			inTable = false
		}
	}

	reviewCycles := countMatches(text, `(?i)review cycle|code review cycle`)
	escalations := countMatches(text, `(?i)escalation|escalated`)

	fmt.Printf("{\"ok\":true,\"storiesCompleted\":%d,\"total\":%d,\"reviewCycles\":%d,\"escalations\":%d}\n", completed, total, reviewCycles, escalations)
	return 0
}

func cmdValidateState(args []string) int {
	stateFile := ""
	for i := 0; i < len(args); i++ {
		switch args[i] {
		case "--state":
			if i+1 < len(args) {
				stateFile = args[i+1]
				i++
			}
		}
	}

	if stateFile == "" || !fileExists(stateFile) {
		writeJSON(map[string]any{"ok": false, "error": "state_not_found"})
		return 1
	}

	text, err := readFile(stateFile)
	if err != nil {
		writeJSON(map[string]any{"ok": false, "error": "state_not_found"})
		return 1
	}

	front := extractFrontmatter(text)
	lines := trimLines(front)

	fields := map[string]any{}
	currentKey := ""

	keyRe := regexp.MustCompile(`^\S[^:]*:`)
	for _, line := range lines {
		if strings.HasPrefix(strings.TrimSpace(line), "#") {
			continue
		}
		if keyRe.MatchString(line) {
			parts := strings.SplitN(line, ":", 2)
			key := strings.TrimSpace(parts[0])
			val := strings.TrimSpace(parts[1])
			if val == "" {
				fields[key] = []string{}
				currentKey = key
			} else {
				fields[key] = val
				currentKey = ""
			}
			continue
		}
		if currentKey != "" && strings.HasPrefix(strings.TrimSpace(line), "-") {
			items, _ := fields[currentKey].([]string)
			items = append(items, strings.TrimSpace(strings.TrimPrefix(strings.TrimSpace(line), "-")))
			fields[currentKey] = items
		}
	}

	issues := []string{}

	required := func(key string, check func(any) bool) {
		val, ok := fields[key]
		if !ok {
			issues = append(issues, "Missing or empty "+key)
			return
		}
		switch v := val.(type) {
		case string:
			if strings.TrimSpace(v) == "" {
				issues = append(issues, "Missing or empty "+key)
				return
			}
		case []string:
			if len(v) == 0 {
				issues = append(issues, "Missing or empty "+key)
				return
			}
		}
		if check != nil && !check(val) {
			issues = append(issues, "Invalid "+key)
		}
	}

	required("epic", nil)
	required("epicName", nil)
	required("storyRange", nil)
	required("status", func(v any) bool {
		if s, ok := v.(string); ok {
			allowed := map[string]bool{"INITIALIZING": true, "READY": true, "IN_PROGRESS": true, "PAUSED": true, "COMPLETE": true, "ABORTED": true}
			return allowed[s]
		}
		return false
	})
	required("lastUpdated", func(v any) bool {
		if s, ok := v.(string); ok {
			return regexp.MustCompile(`\d{4}-\d{2}-\d{2}T`).MatchString(s)
		}
		return false
	})
	required("aiCommand", nil)

	structure := "ok"
	if len(issues) > 0 {
		structure = "issues"
	}

	writeJSON(map[string]any{"ok": true, "structure": structure, "issues": issues})
	return 0
}

func extractFrontmatter(text string) string {
	if strings.HasPrefix(text, "---") {
		parts := strings.SplitN(text, "---", 3)
		if len(parts) >= 3 {
			return strings.TrimPrefix(parts[1], "\n")
		}
	}
	return ""
}

func countMatches(text, pattern string) int {
	re := regexp.MustCompile(pattern)
	return len(re.FindAllStringIndex(text, -1))
}
File diff suppressed because it is too large
@@ -0,0 +1,162 @@
package main

import (
	"bytes"
	"crypto/md5"
	"encoding/hex"
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"os"
	"os/exec"
	"path/filepath"
	"regexp"
	"strings"
	"time"
)

func mustJSON(v any) string {
	b, err := json.Marshal(v)
	if err != nil {
		return "{}"
	}
	return string(b)
}

func writeJSON(v any) {
	fmt.Println(mustJSON(v))
}

func writeJSONTo(w io.Writer, v any) {
	fmt.Fprintln(w, mustJSON(v))
}

func readFile(path string) (string, error) {
	b, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	return string(b), nil
}

func fileExists(path string) bool {
	info, err := os.Stat(path)
	return err == nil && !info.IsDir()
}

func dirExists(path string) bool {
	info, err := os.Stat(path)
	return err == nil && info.IsDir()
}

func ensureDir(path string) error {
	return os.MkdirAll(path, 0o755)
}

func getPWD() string {
	wd, err := os.Getwd()
	if err != nil {
		return ""
	}
	return wd
}

func md5Hex8(input string) string {
	h := md5.Sum([]byte(input))
	return hex.EncodeToString(h[:])[:8]
}

func runCmd(name string, args ...string) (string, error) {
	cmd := exec.Command(name, args...)
	var out bytes.Buffer
	cmd.Stdout = &out
	cmd.Stderr = &out
	err := cmd.Run()
	return out.String(), err
}

func runCmdExit(name string, args ...string) (string, int, error) {
	cmd := exec.Command(name, args...)
	var out bytes.Buffer
	cmd.Stdout = &out
	cmd.Stderr = &out
	err := cmd.Run()
	if err == nil {
		return out.String(), 0, nil
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return out.String(), exitErr.ExitCode(), err
	}
	return out.String(), 1, err
}

func execLookPath(bin string) (string, error) {
	return exec.LookPath(bin)
}

func writeFileAtomic(path string, data []byte) error {
	dir := filepath.Dir(path)
	tmp := filepath.Join(dir, fmt.Sprintf(".%s.tmp", filepath.Base(path)))
	if err := os.WriteFile(tmp, data, 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func filterInputBox(input string) string {
	lines := strings.Split(input, "\n")
	var out []string
	inBox := false
	startRe := regexp.MustCompile(`^\s*[╭┌]`)
	endRe := regexp.MustCompile(`^\s*[╰└]`)
	boxLineRe := regexp.MustCompile(`^\s*[│|]`)
	for _, line := range lines {
		if startRe.MatchString(line) {
			inBox = true
			continue
		}
		if endRe.MatchString(line) {
			inBox = false
			continue
		}
		if inBox && boxLineRe.MatchString(line) {
			continue
		}
		out = append(out, line)
	}
	return strings.Join(out, "\n")
}

func nowUTC() time.Time {
	return time.Now().UTC()
}

func trimLines(input string) []string {
	raw := strings.Split(input, "\n")
	lines := make([]string, 0, len(raw))
	for _, line := range raw {
		lines = append(lines, strings.TrimRight(line, "\r"))
	}
	return lines
}

func containsAnyPrefix(line string, prefixes []string) bool {
	for _, p := range prefixes {
		if strings.HasPrefix(line, p) {
			return true
		}
	}
	return false
}

func clampInt(val, min, max int) int {
	if val < min {
		return min
	}
	if val > max {
		return max
	}
	return val
}
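`filterInputBox` exists because scraping a child session's pane picks up the box-drawing input widget along with the real output. A self-contained sketch of that behavior (the function body is copied from util.go, lightly restructured into a switch; the sample input is illustrative):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// filterInputBox drops a ╭…╮ / ┌…┐ bordered input box, including its
// interior │-prefixed lines, from captured terminal output.
func filterInputBox(input string) string {
	lines := strings.Split(input, "\n")
	var out []string
	inBox := false
	startRe := regexp.MustCompile(`^\s*[╭┌]`)
	endRe := regexp.MustCompile(`^\s*[╰└]`)
	boxLineRe := regexp.MustCompile(`^\s*[│|]`)
	for _, line := range lines {
		switch {
		case startRe.MatchString(line):
			inBox = true // top border: start skipping
		case endRe.MatchString(line):
			inBox = false // bottom border: stop skipping
		case inBox && boxLineRe.MatchString(line):
			// interior box line: skip
		default:
			out = append(out, line)
		}
	}
	return strings.Join(out, "\n")
}

func main() {
	captured := "Story 1.2 complete\n╭──────╮\n│ >    │\n╰──────╯\nDone."
	fmt.Println(filterInputBox(captured))
	// prints:
	// Story 1.2 complete
	// Done.
}
```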
@@ -0,0 +1,164 @@
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
	"strings"
)

func cmdValidateStoryCreation(args []string) int {
	action := ""
	if len(args) > 0 {
		action = args[0]
		args = args[1:]
	}

	projectRoot := os.Getenv("PROJECT_ROOT")
	if projectRoot == "" {
		projectRoot = getPWD()
	}
	artifactsDir := filepath.Join(projectRoot, "_bmad-output", "implementation-artifacts")

	storyIDToPrefix := func(id string) string {
		return strings.ReplaceAll(id, ".", "-")
	}

	countStoryFiles := func(id string) int {
		prefix := storyIDToPrefix(id)
		matches, _ := filepath.Glob(filepath.Join(artifactsDir, prefix+"-*.md"))
		return len(matches)
	}

	validate := func(id string, before, after int) {
		created := after - before
		prefix := storyIDToPrefix(id)
		valid := false
		action := "escalate"
		reason := ""
		switch {
		case created == 1:
			valid = true
			action = "proceed"
			reason = "Exactly 1 story file created as expected"
		case created == 0:
			reason = "No story file created - session may have failed"
		case created < 0:
			reason = fmt.Sprintf("Story files decreased (%d) - unexpected deletion", created)
		default:
			reason = fmt.Sprintf("RUNAWAY CREATION: %d files created instead of 1", created)
		}

		fmt.Printf("{\"valid\":%t,\"created_count\":%d,\"expected\":1,\"before\":%d,\"after\":%d,\"prefix\":%q,\"action\":%q,\"reason\":%q}\n",
			valid, created, before, after, prefix, action, reason)
	}

	listStoryFiles := func(id string) {
		prefix := storyIDToPrefix(id)
		fmt.Printf("Story files matching %s-*.md:\n", prefix)
		matches, _ := filepath.Glob(filepath.Join(artifactsDir, prefix+"-*.md"))
		if len(matches) == 0 {
			fmt.Println("  (none found)")
			return
		}
		for _, m := range matches {
			info, _ := os.Stat(m)
			if info != nil {
				fmt.Printf("%s %d %s\n", info.Mode().String(), info.Size(), m)
			} else {
				fmt.Println(m)
			}
		}
	}

	switch action {
	case "count":
		if len(args) == 0 || args[0] == "" {
			fmt.Fprintln(os.Stderr, "Usage: validate-story-creation count <story_id>")
			return 1
		}
		storyID := args[0]
		for i := 1; i < len(args); i++ {
			if args[i] == "--artifacts-dir" && i+1 < len(args) {
				artifactsDir = args[i+1]
				i++
			}
		}
		fmt.Println(countStoryFiles(storyID))
		return 0

	case "check":
		if len(args) == 0 {
			fmt.Fprintln(os.Stderr, "Usage: validate-story-creation check <story_id> --before N --after N")
			return 1
		}
		storyID := args[0]
		before := -1
		after := -1
		for i := 1; i < len(args); i++ {
			switch args[i] {
			case "--before":
				if i+1 < len(args) {
					before, _ = strconv.Atoi(args[i+1])
					i++
				}
			case "--after":
				if i+1 < len(args) {
					after, _ = strconv.Atoi(args[i+1])
					i++
				}
			case "--artifacts-dir":
				if i+1 < len(args) {
					artifactsDir = args[i+1]
					i++
				}
			}
		}
		if storyID == "" || before < 0 || after < 0 {
			fmt.Fprintln(os.Stderr, "Usage: validate-story-creation check <story_id> --before N --after N")
			return 1
		}
		validate(storyID, before, after)
		return 0

	case "list":
		if len(args) == 0 || args[0] == "" {
			fmt.Fprintln(os.Stderr, "Usage: validate-story-creation list <story_id>")
			return 1
		}
		listStoryFiles(args[0])
		return 0

	case "prefix":
		if len(args) == 0 {
			return 1
		}
		fmt.Println(storyIDToPrefix(args[0]))
		return 0

	default:
		if action != "" && len(args) >= 2 {
			before, err1 := strconv.Atoi(args[0])
			after, err2 := strconv.Atoi(args[1])
			if err1 == nil && err2 == nil {
				validate(action, before, after)
				return 0
			}
		}
		fmt.Fprintln(os.Stderr, "Usage: validate-story-creation <action> [args]")
		fmt.Fprintln(os.Stderr, "")
		fmt.Fprintln(os.Stderr, "Actions:")
		fmt.Fprintln(os.Stderr, "  count <story_id>                       - Count current story files")
		fmt.Fprintln(os.Stderr, "  check <story_id> --before N --after N  - Validate creation")
		fmt.Fprintln(os.Stderr, "  list <story_id>                        - List matching files")
		fmt.Fprintln(os.Stderr, "  prefix <story_id>                      - Convert story ID to file prefix")
		return 1
	}
}
@@ -0,0 +1,3 @@
module story-automator-go

go 1.21
@@ -0,0 +1,123 @@
---
nextStep: './step-02-preflight.md'
continueStep: './step-01b-continue.md'
outputFolder: '{output_folder}/story-automator'
outputFile: '{outputFolder}/init-log-{timestamp}.md'
rules: '../data/orchestrator-rules.md'
markerFile: '{project-root}/.claude/.story-automator-active'
scripts: '../bin/story-automator'
ensureStopHook: '../bin/story-automator'
stateHelper: '../bin/story-automator'
settingsFile: '{project-root}/.claude/settings.json'
---

# Step 1: Initialize

**Goal:** Verify safeguards, check for existing state → resume or start fresh.

---

## Do

### 1. Verify Stop Hook Installation

**CRITICAL:** The Stop hook prevents premature stopping during orchestration.

Use the script to ensure the Stop hook exists:
```bash
result=$("{ensureStopHook}" ensure-stop-hook --settings "{settingsFile}" \
  --command "{scripts} stop-hook" --timeout 10)
ok=$(echo "$result" | jq -r '.ok')
changed=$(echo "$result" | jq -r '.changed')
```

**IF ok == false:** Report the error and STOP.

**IF changed == true:**
Display:
```
**Stop Hook Installed**

I've added the story-automator Stop hook to .claude/settings.json.
This prevents the orchestrator from randomly stopping mid-workflow.

⚠️ **Please restart this Claude session** for the hook to take effect.

After restarting, run the story-automator workflow again.
```
**HALT** - Do not proceed until the user restarts.

**IF changed == false:**
Display: "✓ Stop hook verified"
Continue to step 2.

### 2. Load Rules
Load `{rules}` once. These rules apply to all subsequent steps.

### 3. Check for Existing State
Search `{outputFolder}` for `orchestration-*.md` files.

Use the deterministic state listing:
```bash
state_list=$("{stateHelper}" orchestrator-helper state-list "{outputFolder}")
latest_incomplete=$(echo "$state_list" | jq -r '.files | map(select(.status == "COMPLETE" | not)) | sort_by(.lastUpdated) | last | .path // empty')
```

**IF latest_incomplete is non-empty:**
- Display: "**Found existing orchestration in progress.**"
- Show: epic name, current story, current step, last updated
- → Load `{continueStep}`
- **STOP** (don't continue below)

**IF none found:**
- Continue to step 4

### 4. Welcome
Display:
```
**Welcome to Story Automator.**

I'll automate story implementation by spawning isolated sessions,
handling code review loops, and committing completed stories.

Everything is logged for full resumability.
```

### 5. Check Sprint Status (MANDATORY)
```bash
has_status=$("{stateHelper}" orchestrator-helper sprint-status exists)
sprint_ok=$(echo "$has_status" | jq -r '.exists')
```

**IF sprint_ok == false:** ABORT immediately.

Display:
```
**❌ Sprint status file not found.**

Expected: `_bmad-output/implementation-artifacts/sprint-status.yaml`

This file is required before running the story automator.
Please run the **sprint-planning** workflow first to generate it.
```
**HALT** - Do not proceed.

**IF sprint_ok == true:**
- Store it for later reference during preflight
- It will be used to check whether earlier stories need completion

### 6. Setup
Ensure `{outputFolder}` exists.

Append an initialization entry to `{outputFile}`:
```bash
printf "[%s] init: stop-hook=%s existing_state=%s\n" \
  "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "${changed}" "${latest_incomplete}" >> "{outputFile}"
```

**Note:** The marker file (`{markerFile}`) is created in step-02b-preflight-finalize, after the epic/story context is established.

---

## Then
→ Load `{nextStep}`
@ -0,0 +1,194 @@
|
|||
---
|
||||
outputFolder: '{output_folder}/story-automator'
|
||||
outputFile: '{outputFolder}/orchestration-{epic_id}-{timestamp}.md'
|
||||
preflightStep: './step-02-preflight.md'
|
||||
preflightConfigStep: './step-02a-preflight-config.md'
|
||||
preflightFinalizeStep: './step-02b-preflight-finalize.md'
|
||||
executeStep: './step-03-execute.md'
|
||||
executeReviewStep: './step-03a-execute-review.md'
|
||||
executeFinishStep: './step-03b-execute-finish.md'
|
||||
executeCompleteStep: './step-03c-execute-complete.md'
|
||||
wrapupStep: './step-04-wrapup.md'
|
||||
markerFile: '{project-root}/.claude/.story-automator-active'
|
||||
stateFilePattern: '{outputFolder}/orchestration-*.md'
|
||||
stateHelper: '../bin/story-automator'
|
||||
deriveProjectSlug: '../bin/story-automator'
|
||||
listSessions: '../bin/story-automator'
|
||||
sprintCompare: '../bin/story-automator'
|
||||
tmuxCommands: '../data/tmux-commands.md'
|
||||
# Optional: provided by workflow.md when using Resume mode (skips state search)
|
||||
resumeStatePath: ''
|
||||
---
|
||||
|
||||
# Step 1b: Continue Previous Session
|
||||
|
||||
**Goal:** Load existing state and let user choose how to proceed.
|
||||
|
||||
---
|
||||
|
||||
## Do
|
||||
|
||||
### 1. Load State Document
|
||||
|
||||
**IF `{resumeStatePath}` is provided (from workflow.md Resume routing):**
|
||||
Use it directly: `state_file="{resumeStatePath}"`
|
||||
|
||||
**ELSE (called from step-01-init or no path provided):**
|
||||
Find the most recent incomplete state document using `{stateFilePattern}`:
|
||||
```bash
|
||||
result=$("{stateHelper}" orchestrator-helper state-latest-incomplete "{outputFolder}")
|
||||
state_file=$(echo "$result" | jq -r '.path // empty')
|
||||
```
|
||||
|
||||
**IF state_file is empty:** Display "No incomplete orchestration found." and HALT.
|
||||
|
||||
**Then extract from state_file:**
|
||||
- `epic`, `epicName`, `storyRange`
|
||||
- `currentStep`, `status`
|
||||
- `stepsCompleted`, `storiesCompleted`
|
||||
- Last action from action log
|
||||
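For illustration, a single frontmatter field can be pulled out with `sed` (a minimal sketch using an inline sample; the real state document has more fields and lives under `{outputFolder}`):

```bash
# Minimal sketch: extract one frontmatter field from a state document.
# The document here is inline sample data, not the real file.
state_doc='---
epic: 3
status: IN_PROGRESS
currentStep: step-03-execute
---'
status=$(printf '%s\n' "$state_doc" | sed -n 's/^status: //p')
echo "$status"   # → IN_PROGRESS
```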
|
||||
Use deterministic summary:
|
||||
```bash
|
||||
summary=$("{stateHelper}" orchestrator-helper state-summary "$state_file")
|
||||
```
|
||||
|
||||
### 2. Verify Against Sprint Status
|
||||
Load `_bmad-output/implementation-artifacts/sprint-status.yaml`.
|
||||
|
||||
**Compare with state document (run in parallel with session inventory):**
|
||||
- Check if earlier stories (before `currentStory`) are marked `done` in sprint-status
|
||||
- If any earlier stories are NOT `done`:
|
||||
```
|
||||
**Warning:** Stories {X, Y} are not complete in sprint-status.yaml.
|
||||
|
||||
[B]atch them first - Add to queue before continuing
|
||||
[S]kip - Continue from current story anyway
|
||||
```
|
||||
**Wait.**
|
||||
- If B: Add incomplete stories to beginning of queue
|
||||
- If S: Note skip in action log, continue
|
||||
|
||||
Use deterministic parallel baseline:
|
||||
```bash
|
||||
tmp_compare=$(mktemp)
|
||||
tmp_sessions=$(mktemp)
|
||||
|
||||
("{sprintCompare}" sprint-compare --state "$state_file" --sprint "_bmad-output/implementation-artifacts/sprint-status.yaml" > "$tmp_compare") &
|
||||
compare_pid=$!
|
||||
|
||||
project_slug=$("{deriveProjectSlug}" derive-project-slug --project-root "{project-root}" | jq -r '.slug')
|
||||
("{listSessions}" list-sessions --slug "$project_slug" > "$tmp_sessions") &
|
||||
sessions_pid=$!
|
||||
|
||||
wait "$compare_pid"
|
||||
wait "$sessions_pid"
|
||||
|
||||
compare=$(cat "$tmp_compare")
|
||||
sessions=$(cat "$tmp_sessions")
|
||||
rm -f "$tmp_compare" "$tmp_sessions"
|
||||
|
||||
incomplete=$(echo "$compare" | jq -r '.incomplete | join(", ")')
|
||||
session_count=$(echo "$sessions" | jq -r '.count')
|
||||
```
|
||||
|
||||
### 3. Check Active Sessions
|
||||
Using `{tmuxCommands}`, check for existing tmux sessions for THIS PROJECT ONLY.
|
||||
|
||||
**Generate project slug first:**
|
||||
```bash
|
||||
project_slug=$("{deriveProjectSlug}" derive-project-slug --project-root "{project-root}" | jq -r '.slug')
|
||||
```
|
||||
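The helper owns slug derivation; purely as a mental model (the real `derive-project-slug` algorithm may differ), a slug could be produced like this:

```bash
# Hypothetical slug derivation - the real derive-project-slug helper may differ.
# Lowercase the project directory basename and replace non-alphanumerics with '-'.
project_root="/home/me/My Project"
slug=$(basename "$project_root" | tr '[:upper:]' '[:lower:]' | tr -c 'a-z0-9\n' '-')
echo "$slug"   # → my-project
```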
|
||||
**Then list sessions matching:** `sa-{project_slug}-*`
|
||||
|
||||
This ensures we only see sessions spawned by THIS project's story-automator, not sessions from other projects.
|
||||
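The prefix filter itself is a plain `grep` over session names (sketch below uses sample data in place of live tmux output):

```bash
# Illustrative prefix filter over session names (sample data, not live tmux).
all_sessions='sa-myproj-3.1-dev
sa-otherproj-1.2-create
sa-myproj-3.2-review'
project_slug="myproj"
# Keep only sessions spawned for this project.
printf '%s\n' "$all_sessions" | grep "^sa-${project_slug}-"
```

This prints `sa-myproj-3.1-dev` and `sa-myproj-3.2-review`, dropping the other project's session.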
|
||||
Use `sessions` and `session_count` from step 2 parallel baseline.
|
||||
|
||||
### 4. Present Status
|
||||
```
|
||||
**Resuming: {epicName}**
|
||||
|
||||
Status: {status}
|
||||
Progress: {storiesCompleted}/{totalStories} stories
|
||||
Current: Story {N}, Step: {currentStep}
|
||||
Last action: {lastAction}
|
||||
|
||||
Active sessions: {count or 'None'}
|
||||
```
|
||||
|
||||
### 5. Present Options
|
||||
```
|
||||
[R]esume - Continue from where you left off
|
||||
[V]iew - See action log details
|
||||
[M]odify - Change overrides or context
|
||||
[S]tart Over - Restart this epic (keeps backup)
|
||||
[X] Abort - Cancel orchestration
|
||||
```
|
||||
|
||||
**Wait for user input.**
|
||||
|
||||
#### Menu Handling Logic:
|
||||
- IF R: Create marker file, then route based on `status` and `currentStep`:
|
||||
- READY → `{preflightFinalizeStep}`
|
||||
- INITIALIZING → `{preflightConfigStep}`
|
||||
- IN_PROGRESS / PAUSED → route by `currentStep`:
|
||||
- `step-03-execute` or `create` or `dev` → `{executeStep}`
|
||||
- `step-03a-execute-review` or `auto` or `review` → `{executeReviewStep}`
|
||||
- `step-03b-execute-finish` or `commit` or `retro` → `{executeFinishStep}`
|
||||
- `step-03c-execute-complete` → `{executeCompleteStep}`
|
||||
- (default) → `{executeStep}`
|
||||
- EXECUTION_COMPLETE → `{wrapupStep}`
|
||||
- COMPLETE → `{wrapupStep}`
|
||||
- ABORTED → display warning and redisplay this menu
|
||||
- IF V: Show last 20 action log entries, then redisplay this menu
|
||||
- IF M: Allow override changes, save, then redisplay this menu
|
||||
- IF S: Rename state to `.backup-{timestamp}` then load `{preflightStep}` (new state will be created at `{outputFile}`)
|
||||
- IF X: Set status="ABORTED", display confirmation, end workflow
|
||||
- IF Any other: help user respond, then redisplay this menu
|
||||
|
||||
#### EXECUTION RULES:
|
||||
- ALWAYS halt and wait for user input after presenting menu
|
||||
- ONLY route to a step after handling the selected option
|
||||
- After non-routing options, return to this menu
|
||||
- Keep prompts concise; if user is unsure, ask one clarifying question before redisplaying options
|
||||
|
||||
### 6. Handle Choice
|
||||
|
||||
| Choice | Action |
|
||||
|--------|--------|
|
||||
| **R** | **First:** Create marker file (see below), **then** route based on `status` |
|
||||
| **V** | Show last 20 action log entries → redisplay options |
|
||||
| **M** | Allow override changes, save → redisplay options |
|
||||
| **S** | Rename state to `.backup-{timestamp}` → `{preflightStep}` |
|
||||
| **X** | Set status="ABORTED", display confirmation, end workflow |
|
||||
|
||||
#### On [R]esume: Create Marker File BEFORE Routing
|
||||
|
||||
**CRITICAL:** Only create marker file when user confirms resume. This prevents stop hook from firing during menu wait.
|
||||
|
||||
Create `{markerFile}` with orchestration context:
|
||||
```json
|
||||
{
|
||||
"epic": "{epic}",
|
||||
"currentStory": "{currentStory}",
|
||||
"storiesRemaining": {remaining_count},
|
||||
"stateFile": "{state_document_path}",
|
||||
"startedAt": "{timestamp}"
|
||||
}
|
||||
```
|
||||
|
||||
Use deterministic marker creation:
|
||||
```bash
|
||||
"{stateHelper}" orchestrator-helper marker create --epic "{epic}" --story "{currentStory}" \
|
||||
--remaining {remaining_count} --state-file "{state_document_path}" \
|
||||
--project-slug "$project_slug" --pid "$$" --heartbeat "{timestamp}"
|
||||
```
|
||||
|
||||
**Then** route per Menu Handling Logic in section 5 above.
|
||||
|
||||
---
|
||||
|
||||
## Then
|
||||
→ Load appropriate step based on choice
|
||||
|
|
@ -0,0 +1,196 @@
|
|||
---
|
||||
nextStep: './step-02a-preflight-config.md'
|
||||
outputFolder: '{output_folder}/story-automator'
|
||||
outputFile: '{outputFolder}/preflight-{epic_id}-{timestamp}.md'
|
||||
parseEpic: '../bin/story-automator'
|
||||
parseStoryRange: '../bin/story-automator'
|
||||
parseStory: '../bin/story-automator'
|
||||
stateHelper: '../bin/story-automator'
|
||||
defaultEpicPath: '{output_folder}/planning-artifacts/epics.md'
|
||||
defaultSprintStatusFile: '{output_folder}/implementation-artifacts/sprint-status.yaml'
|
||||
complexityRules: '../data/complexity-rules.json'
|
||||
complexityScoring: '../data/complexity-scoring.md'
|
||||
preflightRequirements: '../data/preflight-requirements.md'
|
||||
---
|
||||
# Step 2: Pre-flight (Epic + Complexity)
|
||||
|
||||
**Goal:** Gather epic, story range, complexity analysis, and custom instructions.
|
||||
**Interaction mode:** Collaborative discovery and clarification.
|
||||
|
||||
---
|
||||
|
||||
## 🚨 BEFORE STARTING: Load Requirements
|
||||
|
||||
**CRITICAL:** Load and read `{preflightRequirements}` FIRST. It contains MANDATORY sequence rules, FORBIDDEN patterns, and verification gates that MUST be followed.
|
||||
|
||||
---
|
||||
|
||||
## Do
|
||||
|
||||
### 1. Confirm Epic File
|
||||
```
|
||||
**Epic source**
|
||||
|
||||
Default epic file: `{defaultEpicPath}`
|
||||
Use this file? [Y/n]
|
||||
```
|
||||
|
||||
If user confirms (Y/Enter), set `epic_path="{defaultEpicPath}"`.
|
||||
If user says no, ask for epic file path and set `epic_path` from response.
|
||||
If the confirmed default does not exist, tell the user and request an explicit path.
|
||||
|
||||
**Wait.**
|
||||
|
||||
### 2. Review Epic
|
||||
Parse epic file deterministically:
|
||||
```bash
|
||||
epic_json=$("{parseEpic}" parse-epic --file "{epic_path}")
|
||||
epic_name=$(echo "$epic_json" | jq -r '.epicTitle')
|
||||
story_count=$(echo "$epic_json" | jq -r '.count')
|
||||
story_titles=$(echo "$epic_json" | jq -r '.stories[] | "\(.storyId) \(.title)"')
|
||||
story_ids_csv=$(echo "$epic_json" | jq -r '.stories[] | .storyId' | paste -sd, -)
|
||||
sprint_exists=$("{stateHelper}" orchestrator-helper sprint-status exists | jq -r '.exists')
|
||||
story_status_rows="(sprint-status unavailable at {defaultSprintStatusFile})"
|
||||
if [ "$sprint_exists" = "true" ]; then
|
||||
story_status_rows=$(echo "$epic_json" | jq -r '.stories[] | .storyId' | while read -r sid; do
|
||||
status_json=$("{stateHelper}" orchestrator-helper sprint-status get "$sid")
|
||||
st=$(echo "$status_json" | jq -r '.status // "unknown"')
|
||||
printf -- "- %s | %s\n" "$sid" "$st"
|
||||
done)
|
||||
fi
|
||||
```
|
||||
|
||||
Display:
|
||||
```
|
||||
**Epic:** {epic_name}
|
||||
|
||||
Stories found:
|
||||
1. {storyId} {title}
|
||||
2. {storyId} {title}
|
||||
...
|
||||
|
||||
Total: {story_count}
|
||||
|
||||
Current sprint-status ({defaultSprintStatusFile}):
|
||||
{story_status_rows}
|
||||
|
||||
Which stories? (e.g., `1-3`, `all`, `1,3,5`)
|
||||
```
|
||||
If user hesitates, suggest `all` as default and confirm.
|
||||
|
||||
**Wait.**
|
||||
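Parsing is delegated to `parse-story-range`; for intuition only, the three accepted selection shapes expand roughly like this (hypothetical sketch over positional indices — the real parser also maps positions onto actual story IDs):

```bash
# Hypothetical sketch of the three selection shapes; the real
# parse-story-range helper is authoritative.
expand_selection() {
  sel="$1"; total="$2"
  case "$sel" in
    all) seq 1 "$total" ;;                 # every story
    *-*) seq "${sel%-*}" "${sel#*-}" ;;    # contiguous range like 1-3
    *)   printf '%s\n' "$sel" | tr ',' '\n' ;;  # explicit list like 1,3,5
  esac
}
expand_selection "1-3" 5 | paste -sd, -    # → 1,2,3
expand_selection "1,3,5" 5 | paste -sd, -  # → 1,3,5
```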
|
||||
### 3. Read Stories and Compute Complexity (MANDATORY - DO NOT SKIP)
|
||||
|
||||
> **🚨 CRITICAL:** This step MUST use the Go binary for complexity scoring. NEVER manually assess complexity by reading story content.
|
||||
|
||||
For each story in range, extract complexity **programmatically**:
|
||||
|
||||
**3a. Parse story range:**
|
||||
```bash
|
||||
range_json=$("{parseStoryRange}" parse-story-range --input "{user_selection}" --total "$story_count" --ids "$story_ids_csv")
|
||||
selected_ids=$(echo "$range_json" | jq -r '.storyIds[]')
|
||||
selected_count=$(echo "$range_json" | jq -r '.count')
|
||||
first_story_id=$(echo "$range_json" | jq -r '.storyIds[0]')
|
||||
epic_id=$(echo "$first_story_id" | cut -d. -f1)
|
||||
```
|
||||
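The `epic_id` derivation assumes dotted story IDs of the form `<epic>.<story>`; for example:

```bash
# Worked example: derive the epic id from a dotted story id.
first_story_id="3.2"
epic_id=$(echo "$first_story_id" | cut -d. -f1)
echo "$epic_id"   # → 3
```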
|
||||
**3b. Get complexity for EACH story using Go binary:**
|
||||
```bash
|
||||
# Initialize accumulator - REQUIRED
|
||||
stories_json='[]'
|
||||
|
||||
# For EACH story_id in selected_ids, run:
|
||||
story_json=$("{parseStory}" parse-story --epic "{epic_path}" --story "$story_id" --rules "{complexityRules}")
|
||||
|
||||
# Extract and accumulate - REQUIRED
|
||||
story_title=$(echo "$story_json" | jq -r '.title')
|
||||
story_level=$(echo "$story_json" | jq -r '.complexity.level')
|
||||
story_score=$(echo "$story_json" | jq -r '.complexity.score')
|
||||
story_reasons=$(echo "$story_json" | jq -c '.complexity.reasons // []')
|
||||
stories_json=$(echo "$stories_json" | jq -c --arg id "$story_id" --arg title "$story_title" --arg level "$story_level" --argjson score "$story_score" --argjson reasons "$story_reasons" \
|
||||
'. + [{storyId:$id,title:$title,complexity:{level:$level,score:$score,reasons:$reasons}}]')
|
||||
```
|
||||
|
||||
Refer to `{complexityScoring}` for scoring criteria and thresholds.
|
||||
|
||||
**Parallelism Policy (MANDATORY):**
|
||||
|
||||
- If `selected_count >= 4`: run per-story complexity parsing in parallel subprocesses (max 4 workers).
|
||||
- If `selected_count < 4`: run sequentially.
|
||||
- In both modes, return only summary fields to parent context: `storyId`, `title`, `complexity.level`, `complexity.score`, `complexity.reasons`.
|
||||
|
||||
```bash
|
||||
# Deterministic threshold
|
||||
if [ "$selected_count" -ge 4 ]; then
|
||||
# Parallel mode (max 4 workers)
|
||||
printf "%s\n" $selected_ids | xargs -I{} -P 4 sh -c '
|
||||
"{parseStory}" parse-story --epic "{epic_path}" --story "{}" --rules "{complexityRules}" \
|
||||
| jq -c "{storyId:.storyId,title:.title,complexity:.complexity}"
|
||||
' > /tmp/story-complexity.ndjson
|
||||
stories_json=$(jq -s '.' /tmp/story-complexity.ndjson)
|
||||
else
|
||||
# Sequential mode
|
||||
stories_json='[]'
|
||||
for story_id in $selected_ids; do
|
||||
story_json=$("{parseStory}" parse-story --epic "{epic_path}" --story "$story_id" --rules "{complexityRules}")
|
||||
stories_json=$(echo "$stories_json" | jq -c --argjson s "$(echo "$story_json" | jq -c '{storyId:.storyId,title:.title,complexity:.complexity}')" '. + [$s]')
|
||||
done
|
||||
fi
|
||||
```
|
||||
|
||||
**3c. Display Complexity Matrix (REQUIRED):**
|
||||
|
||||
Display the Complexity Matrix using the template from `{preflightRequirements}`.
|
||||
|
||||
**3d. VERIFICATION GATE:**
|
||||
|
||||
Follow the verification gate from `{preflightRequirements}` before proceeding.
|
||||
|
||||
---
|
||||
|
||||
### 4. Custom Instructions
|
||||
```
|
||||
**Any custom instructions?**
|
||||
|
||||
Examples:
|
||||
- "Always run tests after changes"
|
||||
- "Prioritize stories 3 and 5"
|
||||
- "Be extra careful with database migrations"
|
||||
- "Use strict typing throughout"
|
||||
|
||||
Enter instructions or 'none':
|
||||
```
|
||||
If user is unsure, recommend `none` and continue.
|
||||
|
||||
**Wait.**
|
||||
|
||||
Store response as `custom_instructions` (use "" for none).
|
||||
|
||||
### 5. Proceed to Configuration
|
||||
|
||||
Persist preflight snapshot before continuing:
|
||||
```bash
|
||||
mkdir -p "{outputFolder}"
|
||||
cat > "{outputFile}" <<EOF
|
||||
# Preflight Snapshot
|
||||
|
||||
- Timestamp: {timestamp}
|
||||
- Epic path: {epic_path}
|
||||
- Epic name: {epic_name}
|
||||
- Story count: {story_count}
|
||||
- Selected count: {selected_count}
|
||||
- Selected IDs: {selected_ids}
|
||||
- Custom instructions: {custom_instructions}
|
||||
|
||||
## Complexity Summary
|
||||
$(echo "$stories_json" | jq -r '.[] | "- \(.storyId) | \(.complexity.level) | score=\(.complexity.score)"')
|
||||
EOF
|
||||
```
|
||||
|
||||
Carry forward: `epic_path`, `epic_name`, `story_count`, `story_ids_csv`, `range_json`, `selected_ids`, `selected_count`, `stories_json`, `epic_id`, `first_story_id`, `custom_instructions`.
|
||||
|
||||
---
|
||||
|
||||
## Then
|
||||
→ Load and execute `{nextStep}`
|
||||
|
|
@ -0,0 +1,161 @@
|
|||
---
|
||||
nextStep: './step-02b-preflight-finalize.md'
|
||||
stateTemplate: '../templates/state-document.md'
|
||||
outputFolder: '{output_folder}/story-automator'
|
||||
outputFile: '{outputFolder}/orchestration-{epic_id}-{timestamp}.md'
|
||||
buildStateDoc: '../bin/story-automator'
|
||||
agentConfigPrompts: '../data/agent-config-prompts.md'
|
||||
agentConfigPresets: '../data/agent-config-presets.json'
|
||||
---
|
||||
# Step 2a: Pre-flight Configuration
|
||||
|
||||
**Goal:** Configure agents and execution settings, then create the orchestration state document.
|
||||
**Interaction mode:** Guided configuration (collaborative inputs, deterministic state creation).
|
||||
|
||||
---
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- Step 2 completed.
|
||||
- Variables available: `epic_id`, `epic_name`, `range_json`, `stories_json`, `selected_count`, `custom_instructions`.
|
||||
|
||||
---
|
||||
|
||||
## Do
|
||||
|
||||
### 1. Configure Execution Preferences
|
||||
|
||||
> **PREREQUISITE:** Step 2 (preflight) MUST be complete. The Complexity Matrix MUST have been displayed. If not, STOP and complete step 2 first.
|
||||
|
||||
```
|
||||
**Execution Settings:**
|
||||
|
||||
1. **Skip the 'automate' step (test automation)?** [N]o (default) / [Y]es
|
||||
2. **Max parallel sessions?** (tmux sessions running concurrently, default: 1)
|
||||
|
||||
Enter choices (e.g., `N 1` or `Y 3`):
|
||||
```
|
||||
|
||||
**Wait.**
|
||||
|
||||
Store responses as `skip_automate` (true/false) and `max_parallel` (integer).
|
||||
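One way to split the combined answer into the two variables (a minimal sketch; the leading Y/N letter and trailing integer are assumptions about the answer shape, matching the `N 1` / `Y 3` examples above):

```bash
# Minimal sketch: split an answer like "Y 3" into skip_automate / max_parallel.
answer="Y 3"
set -- $answer              # word-split the answer (intentional, unquoted)
case "$1" in
  Y|y) skip_automate=true ;;
  *)   skip_automate=false ;;
esac
max_parallel="${2:-1}"      # default to 1 parallel session when omitted
echo "$skip_automate $max_parallel"   # → true 3
```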
|
||||
### 2. Configure Agent (Complexity-Aware)
|
||||
|
||||
Using the complexity data from `stories_json`, present agent configuration options that reference the actual complexity breakdown.
|
||||
|
||||
**2a. Check for Saved Presets**
|
||||
|
||||
```bash
|
||||
presets_result=$("{buildStateDoc}" agent-config list --file "{agentConfigPresets}")
|
||||
preset_count=$(echo "$presets_result" | jq -r '.count')
|
||||
```
|
||||
|
||||
Store `preset_count` — this determines whether [L]oad option appears in the menu.
|
||||
|
||||
**2b. Present Complexity-Based Agent Options**
|
||||
|
||||
Display prompts from `{agentConfigPrompts}`, selecting the appropriate table variant:
|
||||
- If `skip_automate` is false: show table WITH `auto` column
|
||||
- If `skip_automate` is true: show table WITHOUT `auto` column
|
||||
- If `preset_count > 0`: include [L]oad saved option
|
||||
- If `preset_count == 0`: omit [L] option
|
||||
|
||||
**Wait.**
|
||||
|
||||
**2c. Handle Selection**
|
||||
|
||||
- **IF S:** Build `agent_config_json` from defaults (no save prompt).
|
||||
- **IF U or C:** Follow Uniform/Custom prompts from `{agentConfigPrompts}`, build `agent_config_json`, then proceed to **2d (Save Prompt)**.
|
||||
- **IF L:** Follow Load Saved Preset prompt from `{agentConfigPrompts}`. Load preset config as `agent_config_json` (no save prompt).
|
||||
|
||||
```bash
|
||||
# Example shape with complexity-based config (auto column included when not skipped)
|
||||
agent_config_json='{
|
||||
"complexityBased": true,
|
||||
"low": {"create":{"primary":"...","fallback":"..."},"dev":{...},"auto":{...},"review":{...}},
|
||||
"medium": {"create":{...},"dev":{...},"auto":{...},"review":{...}},
|
||||
"high": {"create":{...},"dev":{...},"auto":{...},"review":{...}},
|
||||
"retro": {"primary":"claude","fallback":false},
|
||||
"auto": {"skip": $skip_automate}
|
||||
}'
|
||||
```
|
||||
|
||||
Store:
|
||||
- `agent_config_json` = full config object
|
||||
- `primary_agent` = default primary (for backwards compatibility)
|
||||
|
||||
**2d. Save Prompt (U/C only)**
|
||||
|
||||
Only when user chose **[U]niform** or **[C]ustom**, follow the Save Configuration prompt from `{agentConfigPrompts}`:
|
||||
|
||||
```bash
|
||||
# If user provides a name:
|
||||
"{buildStateDoc}" agent-config save --file "{agentConfigPresets}" --name "$save_name" --config-json "$agent_config_json"
|
||||
```
|
||||
|
||||
### 3. Review
|
||||
|
||||
Display configuration summary:
|
||||
- Epic and story range
|
||||
- Custom instructions (if any)
|
||||
- Agent configuration
|
||||
- Execution settings
|
||||
|
||||
Pause for confirmation before starting execution.
|
||||
|
||||
### 3b. Confirm Autonomous Start (Optional Checkpoint)
|
||||
|
||||
Before creating state and launching autonomous phases, confirm:
|
||||
```
|
||||
Proceed with autonomous execution after preflight? [Y/n]
|
||||
```
|
||||
|
||||
**Wait.**
|
||||
|
||||
- If `Y`/Enter: continue.
|
||||
- If `n`: return to Step 1 (settings) for adjustments.
|
||||
|
||||
### 4. Create State Document
|
||||
|
||||
From `{stateTemplate}`:
|
||||
- Generate: `orchestration-{epic_id}-{timestamp}.md`
|
||||
- Fill frontmatter with all config
|
||||
- Initialize story progress table
|
||||
- Set status: "READY"
|
||||
- Save to `{outputFolder}`
|
||||
|
||||
Deterministic creation:
|
||||
```bash
|
||||
agent_cmd="claude --dangerously-skip-permissions"
|
||||
if [ "$primary_agent" = "codex" ]; then agent_cmd="codex exec --full-auto"; fi
|
||||
|
||||
config_json=$(jq -n \
|
||||
--arg epic "$epic_id" \
|
||||
--arg epicName "$epic_name" \
|
||||
--argjson storyRange "$(echo "$range_json" | jq '.storyIds')" \
|
||||
--arg status "READY" \
|
||||
--arg currentStep "preflight" \
|
||||
--arg aiCommand "$agent_cmd" \
|
||||
--arg customInstructions "$custom_instructions" \
|
||||
--argjson overrides "{\"skipAutomate\":$skip_automate,\"maxParallel\":$max_parallel}" \
|
||||
--argjson agentConfig "$agent_config_json" \
|
||||
'{epic:$epic,epicName:$epicName,storyRange:$storyRange,status:$status,currentStory:null,currentStep:$currentStep,aiCommand:$aiCommand,customInstructions:$customInstructions,overrides:$overrides,agentConfig:$agentConfig}'
|
||||
)
|
||||
|
||||
state_result=$("{buildStateDoc}" build-state-doc --template "{stateTemplate}" --output-folder "{outputFolder}" --config-json "$config_json")
|
||||
state_path=$(echo "$state_result" | jq -r '.path')
|
||||
```
|
||||
|
||||
Display: "**State document created.**"
|
||||
Record: `state_path` is the resolved `{outputFile}` for this run.
|
||||
|
||||
### 5. Auto-Proceed to Finalize
|
||||
|
||||
Persist any preflight notes to `{outputFile}`, update frontmatter (append `step-02-preflight` and `step-02a-preflight-config`, set `lastUpdated`).
|
||||
|
||||
---
|
||||
|
||||
## Then
|
||||
→ Load, read entire file, and execute `{nextStep}`
|
||||
|
|
@ -0,0 +1,76 @@
|
|||
---
|
||||
nextStep: './step-03-execute.md'
|
||||
outputFolder: '{output_folder}/story-automator'
|
||||
outputFile: '{outputFolder}/orchestration-{epic_id}-{timestamp}.md'
|
||||
stateHelper: '../bin/story-automator'
|
||||
ensureMarkerGitignore: '../bin/story-automator'
|
||||
deriveProjectSlug: '../bin/story-automator'
|
||||
markerFormat: '../data/marker-file-format.md'
|
||||
---
|
||||
|
||||
# Step 2b: Pre-flight Finalize
|
||||
|
||||
**Goal:** Finalize preflight artifacts, create marker, and start execution.
|
||||
**Interaction mode:** Deterministic auto-proceed.
|
||||
|
||||
---
|
||||
|
||||
## Do
|
||||
|
||||
### 1. Create Complexity + Agents Files
|
||||
|
||||
Derive deterministic filenames:
|
||||
```bash
|
||||
state_base=$(basename "{outputFile}" .md)
|
||||
complexity_path="{outputFolder}/complexity-${state_base}.json"
|
||||
agents_dir="{outputFolder}/agents"
|
||||
agents_path="$agents_dir/agents-${state_base}.md"
|
||||
```
|
||||
|
||||
Write complexity file:
|
||||
```bash
|
||||
mkdir -p "$(dirname "$complexity_path")"
|
||||
echo "$stories_json" | jq -c '{stories:.}' > "$complexity_path"
|
||||
```
|
||||
|
||||
Build deterministic agents file:
|
||||
```bash
|
||||
mkdir -p "$agents_dir"
|
||||
"{stateHelper}" orchestrator-helper agents-build \
|
||||
--state-file "{outputFile}" \
|
||||
--complexity-file "$complexity_path" \
|
||||
--output "$agents_path" \
|
||||
--config-json "$agent_config_json"
|
||||
```
|
||||
|
||||
Update state frontmatter with file paths:
|
||||
```bash
|
||||
agents_path_json=$(printf '%s' "$agents_path" | jq -R '.')
|
||||
complexity_path_json=$(printf '%s' "$complexity_path" | jq -R '.')
|
||||
"{stateHelper}" orchestrator-helper state-update "{outputFile}" \
|
||||
--set "agentsFile=$agents_path_json" \
|
||||
--set "complexityFile=$complexity_path_json"
|
||||
```
|
||||
|
||||
### 2. Create Marker and Begin Execution
|
||||
|
||||
**Create marker file** (see `{markerFormat}` for JSON structure):
|
||||
```bash
|
||||
# Ensure .claude/ exists and is gitignored
|
||||
mkdir -p .claude
|
||||
"{ensureMarkerGitignore}" ensure-marker-gitignore --gitignore ".gitignore" --entry ".claude/.story-automator-active"
|
||||
|
||||
# Create marker
|
||||
project_slug=$("{deriveProjectSlug}" derive-project-slug --project-root "{project-root}" | jq -r '.slug')
|
||||
"{stateHelper}" orchestrator-helper marker create --epic "$epic_id" --story "$first_story_id" \
|
||||
--remaining "$selected_count" --state-file "{outputFile}" \
|
||||
--project-slug "$project_slug" --pid "$$" --heartbeat "{timestamp}"
|
||||
```
|
||||
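The gitignore step is idempotent; conceptually it is equivalent to this sketch (the helper performs it deterministically; the temp file stands in for the real `.gitignore`):

```bash
# Sketch of an idempotent .gitignore append: add the entry only once.
entry=".claude/.story-automator-active"
gitignore="$(mktemp)"                      # stand-in for the real .gitignore
# Append only if an exact-match line is not already present; run twice to
# show the second append is skipped.
grep -qxF "$entry" "$gitignore" || printf '%s\n' "$entry" >> "$gitignore"
grep -qxF "$entry" "$gitignore" || printf '%s\n' "$entry" >> "$gitignore"
grep -cxF "$entry" "$gitignore"            # → 1
rm -f "$gitignore"
```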
|
||||
Set status="IN_PROGRESS", log "Execution started".
|
||||
Update frontmatter (append `step-02b-preflight-finalize`, set `lastUpdated`).
|
||||
|
||||
---
|
||||
|
||||
## Then
|
||||
→ Load, read entire file, and execute `{nextStep}`
|
||||
|
|
@ -0,0 +1,194 @@
|
|||
---
|
||||
nextStep: './step-03a-execute-review.md'
|
||||
dataFileIndex: '../data/data-file-index.md'
|
||||
scriptsDir: '../bin/story-automator'
|
||||
outputFolder: '{output_folder}/story-automator'
|
||||
stateFilePattern: '{outputFolder}/orchestration-*.md'
|
||||
outputFile: '{outputFolder}/orchestration-{epic_id}-{timestamp}.md'
|
||||
retryStrategy: '../data/retry-fallback-strategy.md'
|
||||
executionPatterns: '../data/execution-patterns.md'
|
||||
subagentPrompts: '../data/subagent-prompts.md'
|
||||
---
|
||||
|
||||
## 🚨 CRITICAL: Load Data File Index FIRST
|
||||
|
||||
**BEFORE ANY EXECUTION**, load and read `{dataFileIndex}` completely.
|
||||
**DO NOT proceed until you have read the index and loaded the required files.**
|
||||
|
||||
---
|
||||
Set: `scripts="{scriptsDir}"`
|
||||
|
||||
## 🚨 CRITICAL: CLI Contract Check (Interface Drift Guard)
|
||||
|
||||
Before running any story loop logic, verify required helper commands/flags still exist.
|
||||
|
||||
```bash
|
||||
# Core command availability
|
||||
"$scripts" tmux-wrapper --help >/dev/null
|
||||
"$scripts" monitor-session --help >/dev/null
|
||||
"$scripts" orchestrator-helper --help >/dev/null
|
||||
|
||||
# Required spawn contract: --command must exist
|
||||
"$scripts" tmux-wrapper spawn --help | grep -q -- "--command"
|
||||
|
||||
# Build command contract must be available
|
||||
"$scripts" tmux-wrapper build-cmd --help >/dev/null
|
||||
```
|
||||
|
||||
If any check fails: **STOP and escalate immediately** with "helper CLI contract changed".
|
||||
|
||||
---
|
||||
|
||||
# Step 3: Execute Build Cycle
|
||||
|
||||
**Goal:** Autonomously execute all stories. Escalate only when decisions needed.
|
||||
**Interaction mode:** Deterministic autonomous execution.
|
||||
|
||||
---
|
||||
|
||||
## Setup
|
||||
|
||||
Load from state document (located via `{stateFilePattern}`; output folder `{outputFolder}`; resolved path stored as `{outputFile}` for this run):
|
||||
- `storyRange`, `currentStory`, `currentStep`
|
||||
- `overrides` (skipAutomate, maxParallel)
|
||||
- `customInstructions`
|
||||
|
||||
Resolve agent configuration using deterministic agents file (see `{retryStrategy}` for full function):
|
||||
```bash
|
||||
state_file="{outputFile}"
|
||||
# resolve_agent_for_task "{task}" "$state_file" "{story_id}" -> sets primary_agent,fallback_agent
|
||||
```
|
||||
|
||||
**IF resuming** (currentStory set): Skip to that point in loop.
|
||||
**IF fresh**: Display "**Starting build cycle for {count} stories...**"
|
||||
|
||||
## 🚨 CRITICAL: Execution Patterns
|
||||
|
||||
**BEFORE executing any steps, read `{executionPatterns}` for:**
|
||||
- FORBIDDEN patterns (never chain multiple workflow steps)
|
||||
- REQUIRED patterns (verify state after each step)
|
||||
- Monitoring failure fallback sequence
|
||||
|
||||
**Key rule:** Each step (create/dev/auto/review) MUST be executed and monitored separately. NEVER chain steps in loops.
|
||||
|
||||
## Story Loop
|
||||
|
||||
> **⚠️ SPAWN PATTERN - READ THIS:**
|
||||
> Every `story-automator tmux-wrapper spawn` call **MUST** include `--command` with the built command:
|
||||
> ```bash
|
||||
> session=$("$scripts" tmux-wrapper spawn {step} {epic} {story_id} \
|
||||
> --agent "$agent" \
|
||||
> --command "$("$scripts" tmux-wrapper build-cmd {step} {story_id} --agent "$agent")")
|
||||
> ```
|
||||
> **Missing `--command` = session sits idle → `never_active` failure!**
|
||||
|
||||
**FOR EACH story in range:**
|
||||
|
||||
```bash
|
||||
"$scripts" orchestrator-helper state-update "$state_file" \
|
||||
--set currentStory={story_id} --set currentStep=step-03-execute \
|
||||
--set lastUpdated="$(date -u +%Y-%m-%dT%H:%M:%SZ)"
|
||||
echo "- **[$(date -u +%Y-%m-%dT%H:%M:%SZ)]** Starting story {story_id}" >> "$state_file"
|
||||
|
||||
# Initialize Story Progress row
|
||||
# Note: BSD (macOS) sed syntax; GNU sed on Linux takes -i with no '' argument
sed -i '' "/<!-- Progress rows/i\\
|
||||
| {story_id} | - | - | - | - | - | in-progress |" "$state_file"
|
||||
```
|
||||
|
||||
Display: "**Story {N}/{total}: {title}**"
|
||||
Use compact operator output format for routine progress:
|
||||
```text
|
||||
[story {N}/{total}] {step} -> {state} (agent={agent}, retries={attempts})
|
||||
```
|
||||
After any session completes (create/dev/auto/review): `"$scripts" tmux-wrapper kill "$session"`
|
||||
|
||||
**MANDATORY log pre-filter (all sessions):** Before any deep parsing, pre-filter logs with a single grep/regex pass and pass only focused output forward.
|
||||
```bash
|
||||
log_file=$(echo "$result" | jq -r '.output_file')
|
||||
log_focus=$(grep -nE "SUCCESS|FAIL|ERROR|CRITICAL|WARN|RETRY|ESCALATE" "$log_file" | head -n 120)
|
||||
if [ -z "$log_focus" ]; then
|
||||
log_focus=$(tail -n 120 "$log_file")
|
||||
fi
|
||||
```
|
||||
If multiple logs exist, run one grep/regex pass across all log files and forward only matched lines + file names.
|
||||
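Concretely, the multi-log pass can look like this (log paths and contents are sample data; `-H` keeps the file name on each match so only matched evidence is forwarded):

```bash
# One grep pass over several logs; output keeps file name + line number.
log1="$(mktemp)"; log2="$(mktemp)"
printf 'ok\nERROR: boom\n' > "$log1"
printf 'SUCCESS: done\n'   > "$log2"
grep -nHE "SUCCESS|FAIL|ERROR|CRITICAL|WARN|RETRY|ESCALATE" "$log1" "$log2" | head -n 120
rm -f "$log1" "$log2"
```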
|
||||
**Compact result contract (required):**
|
||||
- Return only: `next_action`, `confidence`, `error_class`, `retryable`, `reasons`, `session_id`
|
||||
- Do not pass full raw logs to parent flow unless escalation explicitly requires evidence payload
|
||||
|
||||
### A. Create Story
|
||||
*Skip if story file exists*
|
||||
|
||||
**Apply retry/fallback pattern from `{retryStrategy}`:** Up to 5 attempts, alternating agents, network-aware delays.
|
||||
|
||||
```bash
|
||||
before=$("$scripts" validate-story-creation count {story_id})
|
||||
# Retry loop: see {retryStrategy}
|
||||
session=$("$scripts" tmux-wrapper spawn create {epic} {story_id} \
|
||||
--agent "$current_agent" \
|
||||
--command "$("$scripts" tmux-wrapper build-cmd create {story_id} --agent "$current_agent")")
|
||||
result=$("$scripts" monitor-session "$session" --json --agent "$current_agent")
|
||||
"$scripts" tmux-wrapper kill "$session"
|
||||
after=$("$scripts" validate-story-creation count {story_id})
|
||||
validation=$("$scripts" validate-story-creation check {story_id} --before $before --after $after)
|
||||
```
|
||||
|
||||
- If `validation.valid == true`:
|
||||
```bash
|
||||
# Update Story Progress: mark create-story done
|
||||
sed -i '' "s/^| ${story_id} |.*$/| ${story_id} | done | - | - | - | - | in-progress |/" "$state_file"
|
||||
```
|
||||
→ proceed to B
|
||||
- If `validation.valid == false` AND attempts < 5 → retry with next agent (see `{retryStrategy}`)
|
||||
- If `validation.valid == false` AND attempts == 5 → escalate (all retries exhausted)
|
||||
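The alternation across attempts can be pictured as follows (a minimal sketch; the agent names and the odd/even alternation rule are assumptions here — `{retryStrategy}` is authoritative):

```bash
# Sketch: alternate primary/fallback agents across up to 5 attempts.
primary_agent="claude"; fallback_agent="codex"   # assumed agent names
attempt=1; max_attempts=5
while [ "$attempt" -le "$max_attempts" ]; do
  if [ $((attempt % 2)) -eq 1 ]; then
    current_agent="$primary_agent"     # odd attempts use the primary
  else
    current_agent="$fallback_agent"    # even attempts use the fallback
  fi
  echo "attempt $attempt -> $current_agent"
  attempt=$((attempt + 1))
done
```

This prints five lines, starting with the primary agent and alternating each attempt.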
|
||||
### B. Dev Story
|
||||
|
||||
**Apply retry/fallback pattern from `{retryStrategy}`:** Up to 5 attempts, alternating agents.
|
||||
|
||||
```bash
|
||||
# Retry loop with agent alternation: see {retryStrategy}
|
||||
session=$("$scripts" tmux-wrapper spawn dev {epic} {story_id} \
|
||||
--agent "$current_agent" \
|
||||
--command "$("$scripts" tmux-wrapper build-cmd dev {story_id} --agent "$current_agent")")
|
||||
result=$("$scripts" monitor-session "$session" --json --agent "$current_agent")
|
||||
"$scripts" tmux-wrapper kill "$session"
|
||||
```
|
||||
|
||||
**Session Parsing Contract (required):**
|
||||
- Preferred: use Session Output Parser prompt from `{subagentPrompts}` on `result.output_file`
|
||||
- Fallback: use local parser below
|
||||
- Return normalized schema only: `next_action`, `confidence`, `error_class`, `reasons`
|
||||
|
||||
```bash
|
||||
parsed=$("$scripts" orchestrator-helper parse-output "$(echo $result | jq -r '.output_file')" dev)
|
||||
next_action=$(echo "$parsed" | jq -r '.next_action')
|
||||
confidence=$(echo "$parsed" | jq -r '.confidence // 0.0')
|
||||
error_class=$(echo "$parsed" | jq -r '.error_class // "none"')
|
||||
reasons=$(echo "$parsed" | jq -c '.reasons // []')
|
||||
```
|
||||
|
||||
- If `next_action == "proceed"`:
|
||||
```bash
|
||||
# Update Story Progress: mark dev-story done
|
||||
sed -i '' "s/^| ${story_id} |.*$/| ${story_id} | done | done | - | - | - | in-progress |/" "$state_file"
|
||||
```
|
||||
→ proceed to C (next step)
|
||||
- If `next_action == "retry"` OR `result.final_state == "crashed"`:
|
||||
- Attempts < 5 → retry with next agent (see `{retryStrategy}`)
|
||||
- Plateau detected (same task 3x) → DEFER story, continue to next
|
||||
- Attempts == 5 → escalate (all retries exhausted)
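
Plateau detection ("same task 3x") can be sketched as a small counter keyed on the last failing task. This is a hypothetical helper; the task name passed in (e.g. from the parser output) is an assumption for illustration, not part of the documented schema:

```shell
# Hypothetical plateau detector: defer the story once the same task fails 3 times.
plateau_task=""
plateau_count=0

record_failure() {
  task="$1"   # e.g. the failing task name extracted from parser output (assumed)
  if [ "$task" = "$plateau_task" ]; then
    plateau_count=$(( plateau_count + 1 ))
  else
    plateau_task="$task"
    plateau_count=1
  fi
}

plateau_detected() {
  [ "$plateau_count" -ge 3 ]
}

record_failure "implement-auth"
record_failure "implement-auth"
record_failure "implement-auth"
plateau_detected && echo "plateau: defer story"
```

Resetting the counter whenever the failing task changes keeps genuine varied progress from being misread as a plateau.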

## Auto-Proceed to Review Phase

Display: "**Dev story complete. Proceeding to automate and code review...**"

```bash
"$scripts" orchestrator-helper state-update "$state_file" \
  --set currentStep=step-03a-execute-review \
  --set lastUpdated="$(date -u +%Y-%m-%dT%H:%M:%SZ)"
echo "- **[$(date -u +%Y-%m-%dT%H:%M:%SZ)]** Dev complete, proceeding to review phase" >> "$state_file"
```

## Then

→ Immediately load and execute `{nextStep}`

@@ -0,0 +1,114 @@

---
nextStep: './step-03b-execute-finish.md'
scriptsDir: '../bin/story-automator'
outputFile: '{output_folder}/story-automator/orchestration-{epic_id}-{timestamp}.md'
retryStrategy: '../data/retry-fallback-strategy.md'
reviewLoop: '../data/code-review-loop.md'
---

# Step 3a: Execute Review Phase

**Goal:** Run automate (guardrails) and the code review loop for the current story.
**Interaction mode:** Deterministic autonomous execution.

---

## Prerequisites

- Step 3 completed (create-story and dev-story done)
- State document updated with current story progress

Set: `scripts="{scriptsDir}"`

---

## Story Loop (Continue from Step 3)

### C. Automate (Guardrails)
*Skip if `overrides.skipAutomate`*

**Apply the retry/fallback pattern from `{retryStrategy}`:** non-blocking, but still retry on failure.

```bash
# --command required (see Spawn Pattern in step-03)
session=$("$scripts" tmux-wrapper spawn auto {epic} {story_id} \
  --agent "$current_agent" \
  --command "$("$scripts" tmux-wrapper build-cmd auto {story_id} --agent "$current_agent")")
result=$("$scripts" monitor-session "$session" --json --agent "$current_agent")
"$scripts" tmux-wrapper kill "$session"
```

- SUCCESS:
```bash
# Update Story Progress: mark automate done
sed -i '' "s/^| ${story_id} |.*$/| ${story_id} | done | done | done | - | - | in-progress |/" "{outputFile}"
```
  Display: `[story {N}/{total}] automate -> done`
  → proceed to D
- FAILURE → retry up to 3 attempts (non-blocking, so fewer retries), then log a warning:
```bash
# Update Story Progress: mark automate skipped
sed -i '' "s/^| ${story_id} |.*$/| ${story_id} | done | done | skip | - | - | in-progress |/" "{outputFile}"
```
  Display: `[story {N}/{total}] automate -> skip (non-blocking)`
  → proceed to D
### D. Code Review Loop

**See `{reviewLoop}` for the complete script-based review cycle with v2.3 per-task agent configuration.**

**MANDATORY log-summary contract (every review cycle):**
- Run a single grep/regex pass over the review output first.
- Return only compact fields to the parent flow: `next_action`, `confidence`, `error_class`, `issues_count`, `top_issues`.
- Do not carry full log payloads forward unless escalation requires raw evidence.

```bash
review_log=$(echo "$result" | jq -r '.output_file')
review_focus=$(grep -nE "SUCCESS|FAIL|ERROR|CRITICAL|WARN|RETRY|ESCALATE|ISSUE" "$review_log" | head -n 120)
if [ -z "$review_focus" ]; then
  review_focus=$(tail -n 120 "$review_log")
fi

# Compact subprocess-style summary contract for the parent flow
review_summary=$("$scripts" orchestrator-helper parse-output "$review_log" review | jq -c '
  {
    next_action: (.next_action // "retry"),
    confidence: (.confidence // 0),
    error_class: (.error_class // "unknown"),
    issues_count: ((.issues // []) | length),
    top_issues: ((.issues // [])[:3])
  }
')
```

Key points:
- Up to 5 cycles using `story-automator tmux-wrapper spawn review` + `story-automator monitor-session`
- **Agent:** Uses the per-task config from the state document (`resolve_agent_for_task "review"`)
- **Verification:** Uses `--workflow review --story-key` for sprint-status verification
- **States:**
  - `completed` (verified): update progress and proceed
```bash
# Update Story Progress: mark code-review done
sed -i '' "s/^| ${story_id} |.*$/| ${story_id} | done | done | done | done | - | in-progress |/" "{outputFile}"
```
    Display: `[story {N}/{total}] review -> done`
    → proceed to E
  - `incomplete` → count as a failed attempt, retry until maxCycles, then CRITICAL escalate (Trigger #8)
- Exit the loop when sprint-status shows "done"
- If `review_summary.next_action` is ambiguous, ask one clarifying question before escalating.
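
The bounded review cycle described above can be sketched as a simple capped loop. This is an illustration only: `run_review_cycle` is a stub standing in for the spawn/monitor/parse sequence, and the real cycle lives in `{reviewLoop}`:

```shell
# Hypothetical bounded review loop: stop on "done", escalate after maxCycles.
max_cycles=5
cycle=1
verdict="incomplete"

run_review_cycle() {  # stub standing in for: spawn review -> monitor -> parse
  if [ "$1" -ge 2 ]; then echo "done"; else echo "incomplete"; fi
}

while [ "$cycle" -le "$max_cycles" ]; do
  verdict=$(run_review_cycle "$cycle")
  if [ "$verdict" = "done" ]; then
    break
  fi
  cycle=$(( cycle + 1 ))
done

if [ "$verdict" = "done" ]; then
  echo "review -> done after $cycle cycle(s)"
else
  echo "CRITICAL escalate (Trigger #8): maxCycles reached" >&2
fi
```

In this trace the second cycle succeeds, so the loop exits well before the cap.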

---

## Auto-Proceed to Finalization

Display: "**Code review complete. Proceeding to finalize commits and status checks...**"

```bash
"$scripts" orchestrator-helper state-update "{outputFile}" \
  --set currentStep=step-03b-execute-finish \
  --set lastUpdated="$(date -u +%Y-%m-%dT%H:%M:%SZ)"
echo "- **[$(date -u +%Y-%m-%dT%H:%M:%SZ)]** Code review complete, proceeding to finalization" >> "{outputFile}"
```

---

## Then

→ Immediately load and execute `{nextStep}`

@@ -0,0 +1,166 @@

---
nextStep: './step-03c-execute-complete.md'
scriptsDir: '../bin/story-automator'
outputFile: '{output_folder}/story-automator/orchestration-{epic_id}-{timestamp}.md'
---

# Step 3b: Finalize Story + Wrap Execution

**Goal:** After code review completes for a story, commit changes, verify sprint status, update progress, and finish the loop.
**Interaction mode:** Deterministic autonomous execution.

---

## Story Loop (Continue from Step 3)

### E. Git Commit

**Required:** Commit after every story (do not skip).

```bash
commit=$("{scriptsDir}" commit-story --repo "{project-root}" --story {story_id} --title "{title}")
ok=$(echo "$commit" | jq -r '.ok')
```

- If `ok == true`:
```bash
# Update Story Progress: mark git-commit done
sed -i '' "s/^| ${story_id} |.*$/| ${story_id} | done | done | done | done | done | in-progress |/" "{outputFile}"
```
  → proceed to F
- If `ok == false` → log a warning and escalate

### F. Verify Sprint Status

```bash
# Check sprint-status with story file fallback (v1.4.0)
normalized=$("{scriptsDir}" orchestrator-helper normalize-key {story_id})
story_key=$(echo "$normalized" | jq -r '.key')
status=$("{scriptsDir}" orchestrator-helper sprint-status get "$story_key")
is_done=$(echo "$status" | jq -r '.done')

# Fallback: trust the story file if sprint-status disagrees
if [ "$is_done" != "true" ]; then
  file_done=$("{scriptsDir}" orchestrator-helper story-file-status {story_id} | jq -r '.status')
  [ "$file_done" = "done" ] && is_done="true"
fi
```

- If `is_done == false` → return to the Code Review Loop (Step 3, section D)
- If `is_done == true` → proceed to G

### G. Story Complete
Display: "**✅ Story {N} complete.**"
```bash
"{scriptsDir}" orchestrator-helper state-update "{outputFile}" \
  --set lastUpdated="$(date -u +%Y-%m-%dT%H:%M:%SZ)"
echo "- **[$(date -u +%Y-%m-%dT%H:%M:%SZ)]** Story {story_id}: ✅ complete (commit + sprint-status verified)" >> "{outputFile}"

# Update Story Progress: mark story done
sed -i '' "s/^| ${story_id} |.*$/| ${story_id} | done | done | done | done | done | done |/" "{outputFile}"
```
Display: `[story {N}/{total}] finalize -> done`

### H. Check Epic Completion & Trigger Retrospective (Multi-Epic Support)

After each story completes, check whether ALL stories in this epic are now done. The retrospective triggers only when every story in the epic has passed code review and sprint status confirms all are "done".

#### H.1 Check All Stories Done

```bash
# Run the epic-level check in parallel with the per-story checks
tmp_epic_status=$(mktemp)
("{scriptsDir}" orchestrator-helper sprint-status check-epic {epic_number} > "$tmp_epic_status") &
epic_status_pid=$!

# Get all stories for this epic and verify each is done
epic_stories=$("{scriptsDir}" orchestrator-helper get-epic-stories {epic_number} --state-file "{outputFile}")
stories_ok=$(echo "$epic_stories" | jq -r '.ok')
story_count=$(echo "$epic_stories" | jq -r '.count')
all_done=true

if [ "$stories_ok" != "true" ] || [ "$story_count" -eq 0 ]; then
  all_done=false
else
  tmp_story_checks=$(mktemp)
  # Pass each story key as $1 instead of splicing {} into the script body
  echo "$epic_stories" | jq -r '.stories[]' \
    | xargs -P 4 -I{} sh -c '
        story="$1"
        status=$("'"{scriptsDir}"'" orchestrator-helper sprint-status get "$story")
        done_flag=$(echo "$status" | jq -r ".done")
        [ "$done_flag" = "true" ] && echo "${story}|done" || echo "${story}|not_done"
      ' _ {} > "$tmp_story_checks"

  if grep -q '|not_done$' "$tmp_story_checks"; then
    all_done=false
  fi
  rm -f "$tmp_story_checks"
fi
```

#### H.2 Secondary Verification via Sprint Status

```bash
# Double-check: use the result from the parallel epic-level check
wait "$epic_status_pid"
epic_status=$(cat "$tmp_epic_status")
rm -f "$tmp_epic_status"

epic_complete=$(echo "$epic_status" | jq -r '.allStoriesDone')
epic_ok=$(echo "$epic_status" | jq -r '.ok')

# Both checks must pass
if [ "$all_done" = "true" ] && [ "$epic_ok" = "true" ] && [ "$epic_complete" = "true" ]; then
  trigger_retro=true
else
  trigger_retro=false
fi
```

#### H.3 Trigger Retrospective (Only When Epic Fully Complete)

**IF trigger_retro == true:**

1. Display: "**✅ Epic {epic_number} complete! All stories passed code review. Triggering retrospective (YOLO mode)...**"
2. Log: `- **[{timestamp}]** Epic {epic_number}: ALL STORIES DONE - triggering retrospective`

```bash
# CRITICAL: Use build-cmd to get the full YOLO prompt with doc verification
cmd=$("{scriptsDir}" tmux-wrapper build-cmd retro {epic_number} --agent "claude")
session=$("{scriptsDir}" tmux-wrapper spawn retro "" {epic_number} --agent "claude" --command "$cmd")

# Monitor with safe failure (never escalate on retro failure)
result=$("{scriptsDir}" monitor-session "$session" --json --agent "claude")
"{scriptsDir}" tmux-wrapper kill "$session"

retro_status=$(echo "$result" | jq -r '.final_state')

if [ "$retro_status" = "completed" ] || [ "$retro_status" = "success" ]; then
  echo "- **[{timestamp}]** Epic {epic_number} retrospective: completed successfully" >> "{outputFile}"
else
  echo "- **[{timestamp}]** Epic {epic_number} retrospective: skipped (reason: $retro_status)" >> "{outputFile}"
fi
```

3. Update the state document with the retrospective status:
```yaml
retrospectives:
  epic-{epic_number}:
    status: "completed" | "skipped"
    reason: "{reason_if_skipped}"
    timestamp: "{timestamp}"
```

4. **Continue to the next story regardless of the retrospective result** (retrospectives never block)

**IF trigger_retro == false:**
- Continue to the next story (the epic is not yet complete)

**IMPORTANT RULES:**
- **ALL stories must be done**: the retrospective only triggers when every story in the epic shows "done" in sprint status
- **Use `build-cmd retro` with Claude**: retrospectives do not support Codex
- **Never escalate; non-blocking**: if the retrospective fails for any reason, log a warning and continue

**END FOR EACH**

## Then

→ After all stories complete, load and execute `{nextStep}`

@@ -0,0 +1,66 @@

---
nextStep: './step-04-wrapup.md'
scriptsDir: '../bin/story-automator'
outputFile: '{output_folder}/story-automator/orchestration-{epic_id}-{timestamp}.md'
executionPatterns: '../data/execution-patterns.md'
retryStrategy: '../data/retry-fallback-strategy.md'
triggers: '../data/escalation-triggers.md'
---

# Step 3c: Execution Complete

**Goal:** Summarize results after all stories finish, persist final status, and transition to wrap-up.
**Interaction mode:** Deterministic auto-proceed.

---

## All Complete

Display:
```
**All {count} stories completed!**

If `{count} <= 10`:
| Story | Status |
|-------|--------|
{summary_table}

If `{count} > 10`:
- Completed: {completed_count}
- Warnings: {warning_count}
- Escalations: {escalation_count}
- See the state log for the full per-story table.

Proceeding to wrap-up...
```

```bash
"{scriptsDir}" orchestrator-helper state-update "{outputFile}" \
  --set status=EXECUTION_COMPLETE --set lastUpdated="$(date -u +%Y-%m-%dT%H:%M:%SZ)"
echo "- **[$(date -u +%Y-%m-%dT%H:%M:%SZ)]** All stories complete — execution finished" >> "{outputFile}"
```

## Parallelism & Escalation

**Parallelism:** When `overrides.maxParallel > 1`, batch independent stories into concurrent groups:
1. Check the story dependency graph — only stories with no shared file dependencies can run in parallel
2. Spawn up to `maxParallel` tmux sessions simultaneously (each runs steps A→F independently)
3. Wait for all sessions in the batch to complete before starting the next batch
4. The epic completion check (H) runs only after all batches finish

See `{executionPatterns}` for forbidden patterns and session isolation rules.
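
The batching rules above can be sketched as a batch-then-wait loop. This is a hypothetical illustration: it assumes dependency grouping (step 1) has already produced lines of mutually independent story ids, and `spawn_story` is a stub for the per-story A→F pipeline (both are assumptions, not documented helpers):

```shell
# Hypothetical sketch: run pre-grouped independent stories in batches of maxParallel.
max_parallel=2
batches="3-1 3-2
3-3 3-4"        # each line = one batch of stories with no shared file dependencies

spawn_story() { echo "story $1: steps A-F"; }   # stub for the per-story pipeline

echo "$batches" | while read -r batch; do
  running=0
  for story in $batch; do
    spawn_story "$story" &
    running=$(( running + 1 ))
    if [ "$running" -ge "$max_parallel" ]; then
      wait            # cap concurrency at maxParallel
      running=0
    fi
  done
  wait                # all sessions in this batch finish before the next batch
done
echo "all batches finished -> run epic completion check (H)"
```

The outer `wait` between batches is what enforces rule 3; the epic check in rule 4 only runs after the loop drains.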

**Escalation:** See `{triggers}` for trigger definitions and `{retryStrategy}` for retry/fallback patterns. Escalate only after exhausting all retry attempts.

## Auto-Proceed to Wrap-up

Display: "**Execution loop complete. Proceeding to wrap-up...**"

```bash
"{scriptsDir}" orchestrator-helper state-update "{outputFile}" \
  --set currentStep=step-04-wrapup \
  --set lastUpdated="$(date -u +%Y-%m-%dT%H:%M:%SZ)"
```

## Then

→ Immediately load and execute `{nextStep}`

@@ -0,0 +1,127 @@

---
learningsFile: '{output_folder}/story-automator/learnings.md'
templates: '../data/wrapup-templates.md'
markerFile: '{project-root}/.claude/.story-automator-active'
stateFilePattern: '{output_folder}/story-automator/orchestration-*.md'
outputFile: '{output_folder}/story-automator/orchestration-{epic_id}-{timestamp}.md'
stateMetrics: '../bin/story-automator'
reportRetentionPolicy: '../data/report-retention-policy.md'
---

# Step 4: Wrap-up

**Goal:** Generate the summary, capture learnings, finalize state.
**Interaction mode:** Structured wrap-up with recommendation output.

---

## Do

### 1. Load Final State
From the state document (located via `{stateFilePattern}`; the resolved path is stored as `{outputFile}` for this run), extract:
- Story progress table
- Action log
- Session references

Calculate:
- Stories completed vs total
- Code review cycles
- Escalations encountered

Use the existing state document path from execution, and derive `story_range_csv` from the frontmatter `storyRange`.

Deterministic metrics:
```bash
metrics=$("{stateMetrics}" state-metrics --state "{state_document_path}")
```

Parallel optimization (metrics + retention policy extraction):
```bash
tmp_metrics=$(mktemp)
tmp_retention=$(mktemp)

("{stateMetrics}" state-metrics --state "{state_document_path}" > "$tmp_metrics") &
metrics_pid=$!

(awk '/^```bash/{flag=1;next}/^```/{flag=0}flag{print}' "{reportRetentionPolicy}" > "$tmp_retention") &
retention_pid=$!

wait "$metrics_pid"
wait "$retention_pid"

metrics=$(cat "$tmp_metrics")
retention_cmds=$(cat "$tmp_retention")
rm -f "$tmp_metrics" "$tmp_retention"
```

**Optimization (data ops):** If the action log exceeds 200 lines, use a compact summary by default.
```bash
log_block=$(awk '/^## Action Log/{flag=1;next}/^## /{if(flag){exit}}flag{print}' "{state_document_path}")
log_lines=$(printf "%s\n" "$log_block" | wc -l | tr -d ' ')
if [ "$log_lines" -gt 200 ]; then
  log_focus=$(printf "%s\n" "$log_block" | tail -n 50)
else
  log_focus="$log_block"
fi
```

### 2. Generate Summary
From `{templates}`, use the **Summary Report Template**.

Fill in all stats and display them to the user.

### 3. Capture Learnings
Analyze the run for patterns:
- Common code review issues
- Steps needing escalation
- Timing patterns
- What worked well

**IF `{learningsFile}` exists:** Load and merge
**ELSE:** Create new

Append an entry using the **Learnings Entry Template** from `{templates}`.
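
The load-or-create step can be sketched as a simple initialize-then-append. This sketch assumes the learnings file is plain markdown with one `## Run {timestamp}` heading per run; that heading format and the placeholder bullet lines are assumptions for illustration, not mandated by `{templates}`:

```shell
# Hypothetical append-or-create for the learnings file.
learnings_file="$(mktemp -d)/learnings.md"     # stands in for the configured learnings path
entry_heading="## Run 2024-01-01T00:00:00Z"    # assumed per-run heading format

if [ ! -f "$learnings_file" ]; then
  printf '# Story Automator Learnings\n' > "$learnings_file"
fi

{
  printf '\n%s\n' "$entry_heading"
  printf -- '- Common review issues: ...\n'
  printf -- '- Escalations: ...\n'
} >> "$learnings_file"

grep -c '^## Run ' "$learnings_file"   # number of run entries captured so far
```

Because the file is created on first use and only ever appended to, repeated wrap-ups accumulate one entry per run.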

### 4. Recommendations
From `{templates}`, use the **Recommendations Template**.

Present actionable suggestions based on the patterns observed.

### 4b. Validation Report Housekeeping
Load `{reportRetentionPolicy}` and apply its retention guidance when needed.

If the validation report history is large, run the suggested maintenance command from that policy file.

### 5. Finalize State
Update the state document:
- `status = 'COMPLETE'`
- `completedAt = {timestamp}`
- Append the final summary to the action log

Display: "**State document finalized.**"

### 6. Remove Marker File
Delete `{markerFile}` to disable the Stop hook safeguard.

This allows Claude to stop normally after workflow completion.

### 7. Workflow Complete

Display:
```
**🎉 Story Automator workflow complete!**

All stories have been processed through the build cycle.
Retrospectives were triggered automatically when each epic completed (during the execution loop).

State document: {outputFile}
Learnings: {learningsFile}
```

Persist the final state to `{outputFile}`.

---

## End

**Workflow terminates here.** Retrospectives are now handled within the execution loop (step-03b) when each epic completes, not as a separate terminal step.

@@ -0,0 +1,171 @@

---
outputFolder: '{output_folder}/story-automator'
rules: '../data/orchestrator-rules.md'
stateFilePattern: '{outputFolder}/orchestration-*.md'
outputFile: '{outputFolder}/orchestration-{epic_id}-{timestamp}.md'
stateHelper: '../bin/story-automator'
validateStep: '../steps-v/step-v-01-check.md'
---

# Edit Step 1: Modify Orchestration

**Goal:** Load an existing orchestration state and allow configuration changes.

---

## Do

### 1. Load Rules
Load `{rules}` once for context.

### 2. Request State Document
```
**Which orchestration would you like to edit?**

Found state documents in `{outputFolder}`:
[List all orchestration-*.md files with: name, status, last updated]

Enter filename or number to select:
```

**Wait.**

Deterministic listing (matches `{stateFilePattern}`):
```bash
state_list=$("{stateHelper}" orchestrator-helper state-list "{outputFolder}")
```

### 3. Load Current State
Load the selected state document (resolved as `{outputFile}` for this run). Display the current configuration:

Deterministic summary:
```bash
summary=$("{stateHelper}" orchestrator-helper state-summary "{state_path}")
```

```
**Current Configuration: {epicName}**

**Status:** {status}
**Epic:** {epic}
**Story Range:** {storyRange}
**Current Position:** Story {currentStory}, Step {currentStep}

**Project Context:**
- Product Brief: {projectContext.productBrief}
- PRD: {projectContext.prd}
- Architecture: {projectContext.architecture}

**Execution Settings:**
- AI Command: {aiCommand}
- Max Parallel: {overrides.maxParallel}
- Skip Automate: {overrides.skipAutomate}

**Custom Context:**
{customContext or "None"}
```

### 4. Edit Menu

```
**What would you like to modify?**

[S]tatus - Change orchestration status
[R]ange - Modify story range
[O]verrides - Adjust execution settings
[T]ext Context - Update custom context
[I] Command - Change AI tool command
[D]ocs - Update project context paths
[X] Exit - Save and exit
```

**Wait.**

#### Menu Handling Logic:
- IF S: Update status, log change → redisplay menu
- IF R: Update story range, log change → redisplay menu
- IF O: Update overrides, log change → redisplay menu
- IF T: Update custom context, log change → redisplay menu
- IF I: Update AI command, log change → redisplay menu
- IF D: Update project doc paths, log change → redisplay menu
- IF X: Proceed to step 6
- IF any other input: help the user respond, then redisplay the menu

#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting the menu
- After non-exit options, return to this menu
- Keep prompts concise and progressive (one decision at a time)

### 5. Handle Edits

| Choice | Action |
|--------|--------|
| **S** | Present status options: READY, IN_PROGRESS, PAUSED → update, log change → redisplay menu |
| **R** | Show stories, ask for new range (e.g., "3-5", "all") → update, log change → redisplay menu |
| **O** | Show override settings, allow changes → update, log change → redisplay menu |
| **T** | Show current context, accept new text → update, log change → redisplay menu |
| **I** | Show current command, accept new (e.g., "cursor", "/path/to/ai") → update, log change → redisplay menu |
| **D** | Show current paths, allow updates → update, log change → redisplay menu |
| **X** | Proceed to step 6 |
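
Parsing the story-range input described in row **R** can be sketched as a small helper. This is hypothetical: it assumes inputs arrive as `"3-5"`, a single number, or `"all"`, the only formats the example in the table shows; anything else is treated as invalid:

```shell
# Hypothetical story-range parser: "3-5" -> 3 4 5, "4" -> 4, "all" -> every story.
parse_story_range() {
  input="$1"
  total="$2"
  case "$input" in
    all)         seq 1 "$total" ;;
    *-*)         seq "${input%-*}" "${input#*-}" ;;
    ''|*[!0-9]*) echo "invalid range: $input" >&2; return 1 ;;
    *)           echo "$input" ;;
  esac
}

parse_story_range "3-5" 8 | tr '\n' ' '
```

The case branches are ordered so that `"all"` and hyphenated ranges are matched before the digits-only check, which then rejects any remaining non-numeric input.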

### 6. Confirm and Save

```
**Changes to save:**
[List all modifications made]

[S]ave - Write changes to state document
[D]iscard - Exit without saving
[E]dit more - Return to edit menu
```

**Wait.**

| Choice | Action |
|--------|--------|
| **S** | Update `lastUpdated`, log "Configuration edited", write file → step 7 |
| **D** | Display "Changes discarded." → end |
| **E** | Return to step 4 |

#### Menu Handling Logic:
- IF S: Save changes, then proceed to step 7
- IF D: Discard changes and end
- IF E: Return to step 4
- IF any other input: help the user respond, then redisplay this menu

#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting the menu
- Keep prompts concise and progressive (one decision at a time)

### 7. Post-Edit Options

```
**Changes saved.**

[R]esume - Continue orchestration from current position
[V]alidate - Run validation check on state
[X] Exit - Return to main menu
```

**Wait.**

| Choice | Action |
|--------|--------|
| **R** | Route to the appropriate step based on `currentStep` (preflight/execute/wrapup) |
| **V** | Load `{validateStep}` |
| **X** | Display "Edit complete." and end |

#### Menu Handling Logic:
- IF R: Route based on `currentStep`
- IF V: Load `{validateStep}`
- IF X: End workflow
- IF any other input: help the user respond, then redisplay this menu

#### EXECUTION RULES:
- ALWAYS halt and wait for user input after presenting the menu
- Keep prompts concise and progressive (one decision at a time)

---

## Then
→ End workflow or route based on choice

@@ -0,0 +1,165 @@

---
nextStep: './step-v-02-report.md'
outputFolder: '{output_folder}/story-automator'
rules: '../data/orchestrator-rules.md'
stateFilePattern: '{outputFolder}/orchestration-*.md'
outputFile: '{outputFolder}/orchestration-{epic_id}-{timestamp}.md'
validateState: '../bin/story-automator'
listSessions: '../bin/story-automator'
deriveProjectSlug: '../bin/story-automator'
tmuxCommands: '../data/tmux-commands.md'
---

# Validation Step 1: Check State Integrity

**Goal:** Validate an orchestration state document for structural integrity and session health.

## MANDATORY EXECUTION RULES

- 🛑 **DO NOT BE LAZY** - CHECK EVERY FIELD AND SESSION
- 📖 Validate ALL required fields, not just a sample
- 🚫 DO NOT skip any validation checks
- ✅ Report ALL issues found, not just the first one

---

## Do

### 1. Load Rules
Load `{rules}` once for context on the expected state structure.

### 2. Request State Document
```
**Which orchestration would you like to validate?**

Found state documents in `{outputFolder}`:
[List all orchestration-*.md files with: name, status, last updated]

Pattern: `{stateFilePattern}`

Enter filename or number to select:
```

**Wait.**

### 3. Load and Parse State
Load the selected state document (resolved as `{outputFile}` for this run). Extract the frontmatter:
- `epic`, `epicName`, `storyRange`
- `status`, `currentStory`, `currentStep`
- `stepsCompleted`, `lastUpdated`
- `projectContext`, `aiCommand`, `overrides`
- `activeSessions`, `completedSessions`

### 3a. Helper CLI Contract Check (Required)

Before running validation commands, verify the helper interfaces in parallel:
```bash
tmp_help_validate=$(mktemp)
tmp_help_sessions=$(mktemp)
tmp_help_slug=$(mktemp)

("{validateState}" validate-state --help >"$tmp_help_validate" 2>&1) &
pid_validate=$!
("{listSessions}" list-sessions --help >"$tmp_help_sessions" 2>&1) &
pid_sessions=$!
("{deriveProjectSlug}" derive-project-slug --help >"$tmp_help_slug" 2>&1) &
pid_slug=$!

wait "$pid_validate"; status_validate=$?
wait "$pid_sessions"; status_sessions=$?
wait "$pid_slug"; status_slug=$?

if [ "$status_validate" -ne 0 ] || [ "$status_sessions" -ne 0 ] || [ "$status_slug" -ne 0 ]; then
  rm -f "$tmp_help_validate" "$tmp_help_sessions" "$tmp_help_slug"
  echo "validation helper CLI contract changed"
  exit 1
fi

rm -f "$tmp_help_validate" "$tmp_help_sessions" "$tmp_help_slug"
```

If any check fails: **STOP and report "validation helper CLI contract changed"**.

### 4. Run Structure + Session Baseline in Parallel

Run structure validation and the session inventory concurrently, then aggregate the results.

```bash
tmp_validation=$(mktemp)
tmp_sessions=$(mktemp)

("{validateState}" validate-state --state "{state_path}" > "$tmp_validation") &
validation_pid=$!

project_slug=$("{deriveProjectSlug}" derive-project-slug --project-root "{project-root}" | jq -r '.slug')
("{listSessions}" list-sessions --slug "$project_slug" > "$tmp_sessions") &
sessions_pid=$!

wait "$validation_pid"
wait "$sessions_pid"

validation=$(cat "$tmp_validation")
sessions=$(cat "$tmp_sessions")
rm -f "$tmp_validation" "$tmp_sessions"
```
### 5. Validate Structure + Session Consistency (Single Diff Pass)
|
||||
|
||||
**Required Fields Check:**
|
||||
|
||||
| Field | Present | Valid |
|
||||
|-------|---------|-------|
|
||||
| epic | ✅/❌ | non-empty string |
|
||||
| epicName | ✅/❌ | non-empty string |
|
||||
| storyRange | ✅/❌ | array |
|
||||
| status | ✅/❌ | valid enum |
|
||||
| lastUpdated | ✅/❌ | ISO date |
|
||||
| aiCommand | ✅/❌ | non-empty string |
|
||||
|
||||
**Valid status values:** INITIALIZING, READY, IN_PROGRESS, PAUSED, COMPLETE, ABORTED
|
||||
|
||||
**Record issues:**
|
||||
- Missing required fields
|
||||
- Invalid field values
|
||||
- Malformed YAML
|
||||
|
||||
Single-pass structure issue extraction (compact output):
|
||||
```bash
|
||||
field_issues=$(echo "$validation" | jq -r '.issues[]? | select(.type=="missing_field" or .type=="invalid_value" or .type=="yaml_error") | "\(.type): \(.field // .message)"')
|
||||
```
|
||||
|
||||
Using `{tmuxCommands}` semantics and `sessions` output, compare state vs live sessions in one pass:
|
||||
```bash
|
||||
state_sessions=$(echo "$validation" | jq -r '.activeSessions[]?.sessionId // empty' | sort -u)
|
||||
live_sessions=$(echo "$sessions" | jq -r '.sessions[]?.name // empty' | sort -u)
|
||||
|
||||
orphaned_refs=$(comm -23 <(echo "$state_sessions") <(echo "$live_sessions"))
|
||||
untracked_live=$(comm -13 <(echo "$state_sessions") <(echo "$live_sessions"))
|
||||
```
|
||||
|
||||
**Session consistency checks:**
|
||||
|
||||
| Check | Result |
|
||||
|-------|--------|
|
||||
| Active sessions in state but not in T-Mux | Orphaned references |
|
||||
| T-Mux sessions not in state | Untracked sessions |
|
||||
| Status=IN_PROGRESS but no active sessions | Stale state |
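
The third check in the table (stale state) is not covered by the `comm` diff above. A minimal sketch, assuming `status` has been read from the frontmatter and `live_sessions` holds the newline-separated session names from the inventory (both assignments below are illustrative stand-ins):

```shell
# Hypothetical stale-state check: IN_PROGRESS with zero live sessions.
status="IN_PROGRESS"   # illustrative value; read from the state document frontmatter
live_sessions=""       # illustrative value; newline-separated names from list-sessions

live_count=$(printf '%s' "$live_sessions" | grep -c . || true)
if [ "$status" = "IN_PROGRESS" ] && [ "$live_count" -eq 0 ]; then
  stale_state=true
  echo "stale state: status=IN_PROGRESS but no active sessions"
else
  stale_state=false
fi
```

Counting non-empty lines rather than testing the string directly keeps the check correct when the inventory contains only whitespace.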

### 6. Carry Forward Validation Context

Carry forward to `{nextStep}`:
- `state_path`
- `validation`
- `sessions`
- `orphaned_refs`
- `untracked_live`
- Any structure/session issues identified

### 7. Auto-Proceed

Display: "**Structure and session baseline complete. Proceeding to progress validation and final report...**"

---

## Then
→ Load and execute `{nextStep}`