Updates after quality-assessment

pbean 2026-04-29 17:36:43 -07:00
parent 1e7769bf89
commit dd1efb3c61
21 changed files with 1730 additions and 170 deletions


@@ -1,6 +1,6 @@
---
name: bmad-create-epics-and-stories
description: 'Create, edit, and validate the v7 epic-and-story tree for an initiative. Use when the user says "create the epics and stories", "add an epic", "split an epic", "merge epics", "rename an epic", "refine a story", "re-derive deps", or "re-validate the initiative".'
---
# Create Epics and Stories (v7)
@@ -11,15 +11,26 @@ This skill produces and maintains the **v7 epic-first folder tree** for an initi
**Acts as:** a product strategist and technical specifications writer collaborating with the user as a peer. The user owns product vision and priorities; this skill brings requirements decomposition, sizing judgment, and the v7 schema. Conversational throughout — soft gates ("ready to move on?") rather than rigid menus.
**One skill, three modes × three interaction styles:**
- **Create** — no `epics/` tree yet. Walks intent → discovery → epic design → per-epic authoring → validate → finalize.
- **Edit** — the tree exists. Routes by user phrasing or flag to add-epic, split-epic, merge-epics, rename-epic, refine-story, re-derive-deps, or re-validate. Never re-walks intent or discovery.
- **Migrate** — a v6 monolithic `epics.md` (or sharded directory) exists but no v7 tree. Offers leave-alone, run-canonical-helper, or walk-through-manually.
Interaction style is orthogonal:
- **Guided** (default) — conversational dialog with soft gates and per-epic checkpoints. Right for first-timers and complex initiatives.
- **YOLO** (`--yolo`) — same flow, but discovery summary, the soft gate dialog at each stage, and the per-epic checkpoint are skipped. The skill proposes the full epic list in one shot, authors every epic end-to-end, then surfaces a single batched recap before validation. Right for experts on their third initiative this quarter.
- **From-spec** (`--from-spec <path>`) — Stages 1–3 skipped entirely. A structured spec drives Stages 4 and 5 deterministically. Right for pipelines and pre-drafted plans.
**Headless surfaces:**
- `--re-validate` (alias `--headless` / `-H`) runs strict validation only and emits JSON. Pair with `--coverage-strict` to fail CI on uncovered requirements.
- `--from-spec <path>` runs end-to-end authoring + validation deterministically and emits JSON. Implicitly headless; pass `--coverage-strict` to fail on uncovered requirements.
All other modes are interactive (Guided and YOLO).
**Owns:** front-matter schemas (`resources/`), bootstrap and validation scripts (`scripts/`), the inventory cache at `{initiative_store}/.bmad-cache/inventory.json`, and the only writers of the epic tree. **Does not own:** `governance.md` or `initiative-context.md` authoring, `initiative_store` config plumbing, downstream status transitions beyond `draft`.
## Conventions
@@ -59,7 +70,7 @@ Resolve and use throughout:
### Step 5: Greet the User
Greet `{user_name}` in `{communication_language}`. Skip the greeting in headless mode (including `--from-spec`) — no conversational output should precede the JSON.
### Step 6: Execute Append Steps
@@ -69,55 +80,72 @@ Activation is complete. Proceed to Mode Detection.
## Stage 0: Mode Detection
Detect the operating mode and interaction style before doing anything else. Filesystem state and CLI flags are the source of truth.
### 1. Headless / re-validate surface
If the user passed `--re-validate`, `--headless`, or `-H` (or said "re-validate" / "validate the initiative" with no edit intent), set `{mode}=headless`. Skip Stages 1–4, jump straight to `prompts/validate.md`, and emit JSON only.
### 2. From-spec surface
If the user passed `--from-spec <path>`, set `{mode}=from-spec` and `{spec_path}=<path>`. Set `{headless_mode}=true` by default (override only if the user is interactive). Skip Stages 1–3, route directly to `prompts/from-spec.md`.
### 3. YOLO interaction style
If the user passed `--yolo` (or said "yolo this", "go fast", "don't ask me", or similar in their opening message), set `{yolo}=true`. Otherwise `{yolo}=false`. YOLO is orthogonal to the mode — both create and migrate flows respect it. Edit-mode sub-flows ignore `{yolo}` because they are inherently interactive graph reasoning.
### 4. Mode by filesystem state
- If `{initiative_store}/epics/` does not exist OR exists but contains no epic folders → `{mode}=create`.
- If `{initiative_store}/epics/` contains v7 epic folders (any folder matching `NN-*` with an `epic.md` inside) → `{mode}=edit`.
- If `{initiative_store}/epics/` is absent BUT a v6 monolithic file exists at `{initiative_store}/epics.md` or `{planning_artifacts}/epics.md``{mode}=migrate`.
- If `{initiative_store}/epics/` is absent BUT a v6 monolithic file exists at `{initiative_store}/epics.md` or `{planning_artifacts}/epics.md`, OR a sharded v6 directory exists at the same locations`{mode}=migrate`.
If both v7 folders and a v6 file exist, prefer `edit` and surface the v6 file in Stage 1 as a one-line note.
### 5. Edit sub-mode dispatch (only when `{mode}=edit`)
Detect from the user's opening message:
| User signal | Sub-mode |
| --------------------------------------------------- | ---------------- |
| "add an epic", "new epic for X" | `add-epic` |
| "split epic NN", "split the auth epic" | `split-epic` |
| "merge epics NN and MM" | `merge-epics` |
| "rename epic NN", "rename the auth epic" | `rename-epic` |
| "refine story X", "rewrite story 1.3", "fix story" | `refine-story` |
| "re-derive deps", "rebuild the dependency graph" | `re-derive-deps` |
| "re-validate", "check the tree" | `re-validate` |
| "fix coverage", "missing coverage" | `coverage-fix` |
| Anything else | route to `prompts/edit-mode.md` (it presents an enumerated menu) |
Set `{edit_submode}` to the matched value before routing.
### 6. Route
- `create``prompts/intent.md`
- `migrate``prompts/intent.md` (it offers the migrate three-options branch when `{mode}=migrate`)
- `edit``prompts/edit-mode.md`
- `headless``prompts/validate.md`
- `from-spec``prompts/from-spec.md`
Carry `{mode}`, `{yolo}`, `{spec_path}` (when set), and `{edit_submode}` (when set) into the routed prompt.
## Stages
| # | Stage | Purpose | Prompt |
| --- | ------------------ | ----------------------------------------------------------------------------- | ------------------------- |
| 0 | Mode Detection | Filesystem-driven create / edit / migrate / headless / from-spec dispatch | SKILL.md (above) |
| 1 | Intent | Capture initiative title and primary intent; confirm scope; offer migrate | `prompts/intent.md` |
| 2 | Discovery | Fan-out artifact scan; persist a requirements inventory at `.bmad-cache/` | `prompts/discovery.md` |
| 3 | Epic Design | Collaboratively shape the epic list and cross-epic dependency graph | `prompts/epic-design.md` |
| 4 | Per-Epic Authoring | Write `epic.md` and story files for each epic, in approved order | `prompts/epic-authoring.md` |
| 5 | Validation | Strict schema, deps, coverage, and sizing checks | `prompts/validate.md` |
| 6 | Finalize | Print tree, confirm initial statuses, hand off, clean up cache | `prompts/finalize.md` |
Edit-mode flows are dispatched from `prompts/edit-mode.md`, which re-enters the relevant subset of stages above without re-walking 1 and 2. From-spec flows skip Stages 1–3 entirely via `prompts/from-spec.md`.
## Conventions for Downstream Skills (stability commitment)
Future v7 versions of `bmad-create-story`, `bmad-dev-story`, `bmad-code-review`, `bmad-retrospective`, and `bmad-initiative-status` adopt the schemas in `resources/epic-frontmatter-schema.md` and `resources/story-frontmatter-schema.md` **verbatim**. Status transitions beyond `draft` are owned by those downstream skills — this skill only writes `draft`. The folder name `NN-kebab` is the canonical identifier; the `epic:` field exists for portability and the validator flags any drift between them.
The inventory cache at `{initiative_store}/.bmad-cache/inventory.json` is **internal** — its schema may change between minor versions. Downstream skills should not depend on it. The `from_spec.py` spec schema and the validator's `--inventory` JSON schema are stable across minor versions.


@@ -33,6 +33,7 @@ activation_steps_append = []
persistent_facts = [
"file:{project-root}/**/project-context.md",
"file:{skill-root}/resources/sizing-heuristics.md",
]
# Scalar: executed when Stage 6 (Finalize) completes -- after the tree is


@@ -4,35 +4,67 @@
# Stage 2: Discovery
**Goal:** Build a complete-enough requirements inventory and **persist it to disk** so it survives context compaction across the rest of the workflow. The persisted inventory is the source of truth that Stages 4 (per-story coverage), 5 (coverage gate), and 6 (cleanup) read back. v7 has no monolithic on-tree document for it — it lives in a sidecar cache.
## Pre-flight
Before launching the artifact-analyzer, tell the user (in 3–5 lines) what you're about to scan: the resolved `{planning_artifacts}` path, the resolved `{project_knowledge}` path, and any user-pointed paths from Stage 1. This lets a misconfigured path surface immediately rather than as an empty result. Skip the pre-flight when `{yolo}=true` or `{mode}=headless`.
## Subagent fan-out
Launch one `agents/artifact-analyzer.md` subagent. Pass it: the initiative intent from Stage 1, `{planning_artifacts}` and `{project_knowledge}` as scan paths, and any specific paths the user pointed to in Stage 1.
The subagent returns structured JSON — see its file for the contract.
## Graceful degradation
- **Subagents unavailable.** Read at most 2 documents in the main context (PRD first; architecture only if the PRD didn't cover the relevant ground). Issue both `Read` calls in the same message. For very large docs (>50 pages), read TOC and section headings first; full-read only sections the initiative intent makes relevant. Note which sections you skimmed.
- **No PRD.** If the initiative is tech-debt-heavy or task-heavy and no PRD exists, do not block. Ask the user for an explicit list of debt items, target areas, or research questions, and use that list as the inventory in place of FRs. Format with synthetic codes (`D1`, `D2`, ... or `R1`, `R2`, ... for research) so Stage 4's coverage mapping has something to reference.
- **No UX doc.** Tech-only initiatives may have none. Empty `ux_design_requirements` is fine.
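The synthetic-code fallback can be sketched as follows (an illustrative helper, not a shipped script):

```python
def synthetic_codes(items, prefix="D"):
    """Turn a user-supplied list of debt items (or research questions,
    prefix='R') into inventory entries with synthetic codes."""
    return [{"code": f"{prefix}{i}", "text": text}
            for i, text in enumerate(items, start=1)]
```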
## Synthesize and persist
When the subagent returns (or inline scanning completes):
1. **Merge volunteered details from Stage 1** into the inventory. FRs, debt items, and design ideas the user mentioned in conversation are first-class.
2. **Identify gaps** the inventory doesn't cover that you'd expect for this initiative type (auth flows in a product without auth requirements? migration steps without data-migration requirements?). Note these for the soft gate; do not invent codes.
3. **Persist the inventory** to `{initiative_store}/.bmad-cache/inventory.json` (create the parent directory if needed). Use the schema below verbatim. This file is the canonical source for Stages 4, 5, and 6.
### inventory.json schema
```json
{
"version": 1,
"title": "<initiative title from Stage 1>",
"intent": "<initiative intent from Stage 1>",
"story_type_mix": "feature-heavy|task-heavy|spike-heavy|mixed",
"starter_template_note": "<one-liner or null>",
"requirements": {
"functional": [{"code": "FR1", "text": "..."}],
"non_functional": [{"code": "NFR1", "text": "..."}],
"ux_design": [{"code": "UX-DR1", "text": "..."}],
"debt": [{"code": "D1", "text": "..."}],
"research": [{"code": "R1", "text": "..."}]
},
"additional_requirements": ["<bullet architecture-derived requirement with no code>"],
"governance_constraints": ["<bullet>"],
"documents_found": [{"path": "...", "kind": "prd|architecture|ux|...", "relevance": "..."}],
"noted_gaps": ["<gap you flagged but did not invent a code for>"]
}
```
Lists may be empty. Each requirement entry must have a unique `code`.
## Present a brief summary
Tell the user in 4–8 lines: counts (FRs, NFRs, UX-DRs, debt items), the starter-template note if any, governance constraints if any, and any gaps. Do not dump the full inventory — they have the source documents. Mention the cache path so they know where the inventory lives.
In `{yolo}=true` collapse to a single line: "Inventory: N FRs, M NFRs, K UX-DRs (cached at `.bmad-cache/inventory.json`)."
In `{mode}=headless` skip the summary entirely.
Soft gate (interactive only): "Anything missing or wrong here, or shall we move on to designing the epic list?"
## Stage Complete
When the user confirms (or `{yolo}=true` auto-confirms), route to `prompts/epic-design.md`. The inventory remains on disk; later stages re-read it rather than relying on working memory.


@@ -4,24 +4,42 @@
# Edit-Mode Dispatch
You arrive here when Stage 0 detected `{mode}=edit` (the v7 tree at `{initiative_store}/epics/` already has content). Stage 0 also classified `{edit_submode}` from the user's opening message; if the user's intent maps to one of the sub-modes below, route immediately. Otherwise present the menu.
**Principle:** never re-walk Stages 1 (intent) and 2 (discovery). Never re-prompt for things visible in existing files. Use the validator's structured summary as your view of the tree instead of reading every file.
## Get the tree summary up front
Before any sub-mode, call:
```
python3 scripts/validate_initiative.py --initiative-store {initiative_store} --summary-only
```
The JSON's `summary.epics[]` already contains, per epic: folder, NN, title, status, depends_on, story_count, and per-story metadata (basename, title, type, status, depends_on). Use this — do not read every `epic.md` and every story file.
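Two small consumers of that summary, assuming the fields listed above (the lowercase `nn` key and both helper names are assumptions for illustration):

```python
import json

def epic_index(summary_json: str) -> dict:
    """Map epic folder -> its summary record."""
    data = json.loads(summary_json)
    return {e["folder"]: e for e in data["summary"]["epics"]}

def next_epic_nn(summary_json: str) -> int:
    """Next available epic number, e.g. for the add-epic flow."""
    data = json.loads(summary_json)
    return max((e["nn"] for e in data["summary"]["epics"]), default=0) + 1
```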
## Ambiguous-intent menu
If the user's opening message did not match any sub-mode, ask:
> Which would you like — add an epic, split an epic, merge two epics, rename an epic, refine a story, re-derive deps, or just re-validate?
One soft-prompt sentence per option is enough; do not lecture.
## add-epic
The user wants a new epic.
1. **Mini Stage 1.** Ask only about the new epic: title, intent, expected story-type theme. Skip everything else.
2. **Mini Stage 3.** Using the summary fetched above (titles + depends_on of every existing epic), discuss where the new epic fits — its NN (next available), its `depends_on`, and how it relates to the existing graph. Validate no cycle.
3. **Mini Stage 4.** Route to `prompts/epic-authoring.md` for the new epic only. Step 1 (`init_epic.py`) creates the folder; steps 2–6 author it normally. Other epics are not touched.
4. **Strict validation.** Route to `prompts/validate.md` strict.
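The "validate no cycle" check in step 2 is a standard depth-first search over the cross-epic `depends_on` map. A sketch (illustrative helper, not a shipped script):

```python
def has_cycle(depends_on):
    """True if the epic dependency graph contains a cycle.
    depends_on maps an epic folder to the folders it depends on."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}

    def visit(node):
        color[node] = GRAY  # on the current DFS path
        for dep in depends_on.get(node, []):
            state = color.get(dep, WHITE)
            if state == GRAY or (state == WHITE and visit(dep)):
                return True
        color[node] = BLACK  # fully explored, no cycle through here
        return False

    return any(color.get(n, WHITE) == WHITE and visit(n) for n in depends_on)
```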
## split-epic
The user wants to split one existing epic into two (or more).
1. **Mini Stage 3 — design the split.** From the summary, you already have the target epic's story list (basename, title, type, depends_on). Use it as the working view of the split. Read individual story bodies only when the user wants to discuss a specific story. Discuss with the user:
- Where the seam falls (which stories belong in each post-split epic).
- The new epic's NN (next available), title, intent, and `depends_on` — typically the new sibling depends on the original where the split is downstream.
- Whether any existing stories should be split themselves (rare; usually the seam falls cleanly between stories).
@@ -43,26 +61,52 @@ The user wants to merge two epics into one.
- Delete the now-empty `gone-epic` folder.
3. **Strict validation.** Route to `prompts/validate.md` strict.
## rename-epic
The user wants to rename or renumber an existing epic without changing its contents.
1. **Mini Stage 3 — confirm.** Identify the target epic from the summary. Confirm the new title (and optionally the new NN) with the user. If the new NN collides with another epic, suggest renumbering the colliding epic out of the way first.
2. **Mini Stage 4 — execute.**
```
python3 scripts/rename_epic.py --initiative-store {initiative_store} \
--epic <current-folder> [--to-title "<new title>"] [--to-nn <int>]
```
The script renames the folder, updates the renamed `epic.md`'s `title:` and `epic:` fields, rewrites every story's `epic:` field, and propagates cross-epic depends_on and other epics' NN-based deps. No body re-authoring is required.
3. **Strict validation.** Route to `prompts/validate.md` strict.
## refine-story
The user wants to fix one specific story.
1. Read the story file (and only that one). Use the summary for surrounding context.
2. **Targeted edit** in `prompts/epic-authoring.md` step 5 only — fill or rewrite ACs, technical notes, coverage. If the story title changed, run `scripts/rename_story.py --to-title "<new title>"` first. If the NN changed, also pass `--to-nn`.
3. **Validation.** Route to `prompts/validate.md` strict.
## re-derive-deps
The user wants the dependency graph rebuilt — typically because epics or stories were added/removed by hand and the depends_on lists are stale.
1. **Cross-epic** — Mini Stage 3, walking the summary's epic list and discussing whether each `depends_on` reflects the actual sequencing. Edit the `depends_on:` line in each affected `epic.md` directly.
2. **Within-epic** — Mini Stage 4, walking each epic's stories from the summary and confirming each `depends_on` reflects what it actually relies on. Edit the `depends_on:` line in each story file directly.
3. **Strict validation.** Route to `prompts/validate.md` strict.
## re-validate
Just run validation. Route directly to `prompts/validate.md` strict. (When the user invoked the skill with `--re-validate` / `--headless` / `-H`, Stage 0 already routed straight to `prompts/validate.md` and never touched this file.)
## coverage-fix
`prompts/validate.md` routes here when `coverage_missing` reports an inventory code that no story body references. Treat it as a narrow refine-story or new-story flow:
1. From the missing-code list, pick the right epic to host coverage. Ask the user when not obvious.
2. Either extend an existing story's `## Coverage` section (refine-story flow above), or add a new story (epic-authoring step 4 + step 5).
3. Re-validate.
## Boundaries (intentionally not supported here)
- **Destructive delete of an epic.** Removing an epic and all its stories irrecoverably is not a sub-mode. The user should `rm -rf` the folder themselves and then run `re-validate` to surface any cross-epic dep refs that need cleanup. Two reasons: a guarded skill flow gives the appearance of safety it doesn't actually provide, and surfacing stale dependent refs is exactly what the validator already does post-deletion.
- **Renaming the initiative itself.** Out of scope here — the initiative is identified by `{initiative_store}` configuration upstream of this skill.
## Stage Complete
After any flow above, the routed `prompts/validate.md` run becomes the terminal step. Stage 6 (Finalize) is **not re-run** in edit mode — the user already has the tree; there's no fresh hand-off to make. After validation passes, summarize what changed in 1–3 lines and exit.


@@ -4,9 +4,19 @@
# Stage 4: Per-Epic Authoring
**Goal:** Write `epic.md` and the story files for every approved epic, in order. This is the **only stage that writes files**. Every write goes through `scripts/init_epic.py` or `scripts/init_story.py` so paths and front matter are derived consistently. The Stage 2 inventory at `{initiative_store}/.bmad-cache/inventory.json` is the source of truth for which FR / NFR / UX-DR / debt-item codes you allocate to which AC.
`resources/sizing-heuristics.md` should already be in your context as a persistent fact (loaded once via the customize.toml `persistent_facts` mechanism). Before authoring the first epic, load `resources/examples/epic-feature-example.md` and `resources/examples/epic-techdebt-example.md` to anchor body density and Coverage section format.
## Inventory recovery
Re-read `{initiative_store}/.bmad-cache/inventory.json` at the start of each per-epic loop iteration. If the file is missing (manual deletion, a fresh session resuming mid-flow, or compaction since Stage 2):
1. Tell the user the inventory cache is missing and you're rebuilding it.
2. Re-launch `agents/artifact-analyzer.md` with the same inputs Stage 2 used (initiative intent, scan paths). Merge any user-volunteered details from the conversation.
3. Write the rebuilt inventory back to the same path.
Never proceed with an empty inventory — the Stage 5 coverage gate depends on it.
## Per-epic loop
@@ -14,19 +24,12 @@ For each approved epic, in order:
### 1. Bootstrap the epic folder
Run:
```
python3 scripts/init_epic.py --initiative-store {initiative_store} \
--epic-nn <NN> --title "<title>" --depends-on <comma-NNs>
```
Take its JSON output (`epic`, `epic_nn`, `path`) — `epic` is the canonical folder name you pass to every subsequent `init_story.py` call. **Never compose the folder name yourself in prose.**
### 2. Fill the epic body conversationally
@@ -51,43 +54,39 @@ Confirm the story list with the user before any write. The list is: ordered NN,
### 4. Bootstrap each story file
For each story in the approved list:
```
python3 scripts/init_story.py --initiative-store {initiative_store} \
--epic <folder-from-step-1> --story-nn <NN> --title "<title>" \
--type <feature|bug|task|spike> --depends-on <comma-refs>
```
Within-epic refs are bare basenames (`01-define-schema`); cross-epic refs use the form `<epic-folder>/<basename>` (`02-auth-migration/04-session-management`).
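Resolving the two ref forms can be sketched with a hypothetical helper:

```python
def parse_dep_ref(ref: str, current_epic: str):
    """Resolve a story depends_on entry to (epic_folder, story_basename)."""
    if "/" in ref:
        epic, basename = ref.split("/", 1)  # cross-epic form
        return epic, basename
    return current_epic, ref  # bare basename: within-epic
```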
### 5. Fill the story body conversationally
Read the file the script just wrote. Sections, in order:
- **User-story stanza**present for `feature` (required), `bug`/`spike` (optional), absent for `task`. Fill or remove as appropriate.
- **Acceptance Criteria** — Given/When/Then form. Each AC stands alone, specific and testable. Cover happy path, key edge cases, at least one failure mode where applicable. Aim for ≤6 ACs; if you need more, the story may be over-sized — pause and consider splitting.
- **User-story stanza**required for `feature`, optional for `bug`/`spike`, absent for `task`. Fill or remove as appropriate.
- **Acceptance Criteria** — Given/When/Then. Each AC stands alone, specific, testable. Cover happy path, key edge cases, at least one failure mode where applicable. Aim for ≤6 ACs; if you need more, the story may be over-sized — pause and consider splitting.
- **Technical Notes** — implementation hints, file paths, API contracts. Not a full design.
- **Coverage** — one line per AC mapping to the FR / NFR / UX-DR / debt-item codes from `inventory.json`. The format `AC1: FR1, NFR3.2` is what `scripts/extract_coverage.py` and the validator both parse.
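Assuming the `AC1: FR1, NFR3.2` line format stated above, a parsing sketch (an illustration of the format, not the actual `extract_coverage.py`):

```python
import re

def parse_coverage_line(line: str):
    """Return (ac_id, [codes]) for a Coverage line, or None for non-matches."""
    m = re.match(r"^\s*-?\s*(AC\d+)\s*:\s*(.+)$", line)
    if not m:
        return None
    codes = [c.strip() for c in m.group(2).split(",") if c.strip()]
    return m.group(1), codes
```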
### 6. Per-epic non-strict validation
After all stories for the current epic are drafted, run:
```
python3 scripts/validate_initiative.py --initiative-store {initiative_store} --lax --epic <folder>
```
`--lax` skips sizing warnings (still mid-flow). Schema and dep checks always run. Fix any errors before moving on.
### 7. User checkpoint
Before starting the next epic, confirm with the user that this epic is complete. The next epic does not begin until the current is approved.
In `{yolo}=true`, **skip the per-epic checkpoint** entirely — author the full epic list end-to-end, then surface a single batched recap before routing to validation.
## After all epics are authored
Route to `prompts/validate.md` for full-tree strict validation.
@@ -97,12 +96,14 @@ Route to `prompts/validate.md` for full-tree strict validation.
When this stage is entered from `prompts/edit-mode.md`:
- **add-epic:** run steps 1–6 for the single new epic, then route to `prompts/validate.md` strict.
- **split-epic / merge-epics:** for each affected epic, run step 1 (if a new folder is needed), step 2 (re-author the `epic.md`), and use `scripts/move_story.py` for any story file that changes its epic. The skill never copy-pastes a story across folders — always move.
- **refine-story:** narrow to step 5 for the single story file. If the story title changed, use `scripts/rename_story.py` first to rename and update sibling refs. Skip steps 1–4.
- **re-derive-deps:** within-epic dep updates only here; the cross-epic dep updates were already settled in Stage 3. Walk story files in each affected epic and edit the `depends_on` line directly.
- **split-epic / merge-epics:** for each affected epic, run step 1 (if a new folder is needed), step 2 (re-author the `epic.md`), and use `scripts/move_story.py` for any story file that changes its epic. Never copy-paste a story across folders — always move.
- **rename-epic:** invoke `scripts/rename_epic.py --epic <folder> [--to-title "<text>"] [--to-nn <int>]`. The script renames the folder, updates the renamed `epic.md`'s `title:` and `epic:` fields, rewrites every story's `epic:` field, and propagates cross-epic depends_on references and other epics' NN-based deps. After it returns, route directly to `prompts/validate.md` strict — no body re-authoring is required.
- **refine-story:** narrow to step 5 for the single story file. If the title changed, run `scripts/rename_story.py --to-title "<new title>"` first. If the NN changed, also pass `--to-nn`.
- **re-derive-deps:** within-epic dep updates only here; cross-epic was settled in Stage 3. Walk story files in each affected epic and edit the `depends_on` line directly.
- **coverage-fix:** Stage 5 routes here when a requirement code is missing from coverage. Identify the right story (existing or new), then either run step 5 against the existing story to extend its Coverage section, or run steps 4–5 to add a new story under the right epic. After the fix, route to `prompts/validate.md` strict.
After any edit-mode flow finishes, route to `prompts/validate.md` strict.
## Stage Complete
Stage 4 ends when every approved epic has its `epic.md` and all its story files written, the per-epic non-strict validation passes for each, and the user has confirmed completion of the last epic.
Stage 4 ends when every approved epic has its `epic.md` and all its story files written, the per-epic non-strict validation passes for each, and the user has confirmed completion (or `{yolo}=true` auto-confirmed after the batch recap).


@@ -4,9 +4,9 @@
# Stage 3: Epic Design
**Goal:** Produce an approved epic list — for each epic an NN, a kebab title, an intent statement, a `depends_on` list (cross-epic), and a default story-type theme. Validate the cross-epic graph for cycles before leaving the stage. No files are written here; the list lives in working memory until Stage 4 calls `init_epic.py`.
**Goal:** Produce an approved epic list — for each epic an NN, a kebab title, an intent statement, a `depends_on` list (cross-epic), and a default story-type theme. Validate the cross-epic graph for cycles before leaving the stage. No files are written here; the list lives in working memory until Stage 4 calls `init_epic.py`. Re-read `{initiative_store}/.bmad-cache/inventory.json` for the requirements inventory you'll allocate against.
Load `resources/sizing-heuristics.md` as a fact for the rest of the workflow — it shapes how stories will be sized in Stage 4 and informs how big each epic should be.
`resources/sizing-heuristics.md` should already be loaded as a persistent fact via the workflow's `customize.toml`. If you don't see it in your context, load it now (and only now — Stage 4 reads the same file via the same persistent-facts mechanism).
## Principles to apply (carry into the conversation, do not lecture)
@@ -15,7 +15,7 @@ Load `resources/sizing-heuristics.md` as a fact for the rest of the workflow —
- **Dependency-free within an epic.** Stories within an epic must not depend on later stories in the same epic. (The validator enforces this in Stage 5 via `depends_on` resolution.)
- **File-churn check.** If multiple proposed epics repeatedly modify the same core files, ask whether they should consolidate into one epic with ordered stories. Distinguish meaningful overlap (same component end-to-end) from incidental sharing. Consolidate when the split provides no risk-mitigation or feedback-loop value.
- **Implementation efficiency over taxonomy.** When the outcome is certain and direction changes between epics are unlikely, prefer fewer larger epics. Split into more epics when there's a genuine risk boundary or where early feedback could change direction.
- **Starter template (if Stage 2 flagged one).** Epic 1's first story must be "set up the project from the starter template." Plan for it now.
- **Starter template (if Stage 2 flagged one in the inventory).** Epic 1's first story must be "set up the project from the starter template." Plan for it now.
## The conversation
@@ -29,15 +29,19 @@ Walk through these collaboratively, not as a script:
## Cycle check before exit
Before leaving the stage, mentally compute the cross-epic dependency graph. If you find any cycle (Epic A depends on B which depends on A, directly or transitively), surface it and have the user resolve before proceeding. Stage 5 will catch cycles too, but catching them now avoids re-walking Stage 4.
Mentally compute the cross-epic dependency graph. If you find any cycle (Epic A depends on B which depends on A, directly or transitively), surface it and have the user resolve before proceeding. **Stage 5's validator is the deterministic source of truth for cycles** — this check is best-effort and exists only to avoid re-walking Stage 4 if an obvious loop slipped in.
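The mental check above is an ordinary graph-cycle search. A minimal sketch, assuming the working-memory list is representable as a map of epic NN to the NNs it depends on (the shape is an illustration, not a schema this skill defines):

```python
# Best-effort cycle check over the cross-epic depends_on graph.
# `deps` maps each epic NN to the NNs it depends on.

def find_cycle(deps):
    """Return one cycle as a list of nodes (first node repeated at the end), or None."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in deps}
    stack = []

    def visit(n):
        color[n] = GRAY
        stack.append(n)
        for m in deps.get(n, []):
            if color.get(m, WHITE) == GRAY:
                # Back edge: everything from m to the top of the stack is a cycle.
                return stack[stack.index(m):] + [m]
            if color.get(m, WHITE) == WHITE and m in deps:
                found = visit(m)
                if found:
                    return found
        stack.pop()
        color[n] = BLACK
        return None

    for n in deps:
        if color[n] == WHITE:
            found = visit(n)
            if found:
                return found
    return None

deps = {"01": [], "02": ["01"], "03": ["02", "01"]}
print(find_cycle(deps))  # prints None

deps["01"] = ["03"]  # introduce 01 -> 03 -> 02 -> 01
print(find_cycle(deps))  # prints ['01', '03', '02', '01']
```

Stage 5's validator remains the deterministic check; this only mirrors what it will compute.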
## Optional deeper review
If the user wants to pressure-test the epic shape, they may invoke `bmad-advanced-elicitation` (deeper critique methods) or `bmad-party-mode` (multi-agent perspectives) explicitly. **Do not present these as a menu** — only invoke when the user asks.
## YOLO mode
When `{yolo}=true`, propose the entire epic list in one message — title, intent, `depends_on`, theme, and FR/UX-DR allocations for every epic — and ask the user once whether to lock it in or revise. Skip the per-step dialog; rely on the cycle check and Stage 5 to catch problems.
## Soft gate
"Does this epic list capture the initiative? Anything missing, anything overlapping that should be consolidated?" When the user is satisfied, the list is approved and Stage 3 is complete.
"Does this epic list capture the initiative? Anything missing, anything overlapping that should be consolidated?" When the user is satisfied, the list is approved and Stage 3 is complete. Skip in `{yolo}=true` after the one-shot proposal is approved.
## Edit-mode flows
@@ -46,6 +50,7 @@ When this stage is entered from `prompts/edit-mode.md`:
- **add-epic:** ask only about the new epic. Existing epic NNs are fixed; the new one gets the next-available NN. Capture title, intent, `depends_on`, theme. Validate the new edges don't introduce a cycle.
- **split-epic:** discuss how to split the target epic. Define the new epic NNs, titles, intents, and `depends_on` edges (typically the new sibling depends on the original where the split is downstream). Decide which existing stories move (Stage 4 will use `move_story.py`) and which stay.
- **merge-epics:** decide which is the surviving epic. Define how the merged depends_on collapses. Plan the story renumbering (Stage 4 will use `move_story.py` for the moves, then `rename_story.py` for any renumber).
- **rename-epic:** discuss the new title (and optionally a new NN). Stage 4 invokes `scripts/rename_epic.py` to perform the rename atomically; no other epics are touched.
- **re-derive-deps:** with the existing epic list, walk the cross-epic graph from scratch and update `depends_on` lists where the user agrees. (Within-epic dep updates happen in Stage 4.)
After the relevant edit-mode flow finishes here, route to `prompts/epic-authoring.md` with the focused scope.


@@ -4,27 +4,47 @@
# Stage 6: Finalize
**Goal:** Hand off cleanly. Show the user what was produced, confirm initial statuses, point them at the next workflow, and run any user-defined post-completion hook.
**Goal:** Hand off cleanly. Show the user what was produced, confirm initial statuses, point them at the most useful next workflow, and run any user-defined post-completion hook.
## Step 1: Print the produced tree
Walk `{initiative_store}/epics/` and present a concise tree — epic folders in order, story files under each. For each line, include the file's `status` from front matter. Something like:
Use the validator's `--tree` mode rather than re-reading every file:
```
python3 scripts/validate_initiative.py --initiative-store {initiative_store} --tree
```
Print the output verbatim. It already includes types and statuses for every node:
```
{initiative_store}/epics/
├── 01-user-authentication/ (epic, draft)
│ ├── 01-define-user-and-session-models.md (task, draft)
│ ├── 02-register-with-email.md (feature, draft)
│ ├── 03-sign-in-with-email.md (feature, draft)
│ └── 04-password-reset-via-email.md (feature, draft)
│ └── 03-sign-in-with-email.md (feature, draft)
└── 02-billing-stripe/ (epic, draft)
├── 01-customer-and-subscription-models.md (task, draft)
└── 02-checkout-session.md (feature, draft)
```
Numbers, types, and statuses come from each file's front matter — re-read if you don't already have them in working memory.
## Step 2: Print an initiative stats block
## Step 2: Confirm initial statuses
Run summary mode and emit a compact stats line so the user has a satisfying signal of completeness:
```
python3 scripts/validate_initiative.py --initiative-store {initiative_store} --summary-only
```
From the JSON, surface a 3-line block such as:
```
2 epics, 5 stories: 3 features, 2 tasks. Coverage: 100% (8/8 inventory codes mentioned).
Median story body: ~600 chars. No oversized stories.
```
Pull the coverage number from `summary.mentioned_requirements` vs the inventory at `{initiative_store}/.bmad-cache/inventory.json` (if present); skip the coverage line when no inventory exists.
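The arithmetic behind that block can be sketched as follows. The field shapes are assumptions beyond what this skill documents (`summary.epics[]` with per-story metadata including `body_len`, and `summary.mentioned_requirements`); treat it as illustration, not the validator's contract:

```python
# Assemble the Stage 6 stats block from the validator's summary JSON.
import statistics

def stats_block(summary, inventory_codes=None):
    stories = [s for e in summary["epics"] for s in e["stories"]]
    by_type = {}
    for s in stories:
        by_type[s["type"]] = by_type.get(s["type"], 0) + 1
    parts = [f"{n} {t}s" for t, n in sorted(by_type.items())]
    lines = [f"{len(summary['epics'])} epics, {len(stories)} stories: "
             + ", ".join(parts) + "."]
    if inventory_codes:  # skip the coverage line when no inventory exists
        hit = set(summary["mentioned_requirements"]) & set(inventory_codes)
        pct = 100 * len(hit) // len(inventory_codes)
        lines.append(f"Coverage: {pct}% ({len(hit)}/{len(inventory_codes)} "
                     "inventory codes mentioned).")
    median = statistics.median(s["body_len"] for s in stories)
    lines.append(f"Median story body: ~{round(median, -2):.0f} chars.")
    return "\n".join(lines)
```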
## Step 3: Confirm initial statuses
Every story and epic starts at `draft`. Promotion is owned by downstream skills (`bmad-dev-story` etc.) — this skill never auto-promotes. Two normal next steps from here:
@@ -33,19 +53,24 @@ Every story and epic starts at `draft`. Promotion is owned by downstream skills
Ask: "Want to leave these all as draft, or promote a small first batch to ready for immediate dev handoff?"
## Step 3: Point forward
## Step 4: Named hand-offs
Tell the user what they have and what comes next:
Tell the user what they have and what's most likely next, naming the specific skills:
- **Per-story dev handoff:** `bmad-dev-story` reads any story by path and implements it.
- **Epic-context cache:** `bmad-quick-dev`, when v7'd, will read the `epic.md` Shared Context block instead of re-deriving per story.
- **Status rollup:** the future `bmad-initiative-status` reads `status:` from every file to summarize the initiative.
- **Implement a story.** `bmad-dev-story` reads any story file by path and implements it end-to-end.
- **Plan a sprint.** `bmad-sprint-planning` reads the tree and proposes a sprint slice based on dependencies and statuses.
- **Status rollup.** The future `bmad-initiative-status` reads `status:` from every file to summarize the initiative.
- **Quick fixes.** `bmad-quick-dev` (when v7'd) reads the `epic.md` Shared Context block instead of re-deriving per story.
Then invoke `bmad-help` so the user sees the broader BMad surface available to them.
If you're unsure which the user needs, point them at `bmad-help` to surface the broader set of BMad skills.
## Step 4: Run on_complete
## Step 5: Clean up the cache
Run:
Delete `{initiative_store}/.bmad-cache/inventory.json`. The cache exists to bridge working memory across stages; once Stage 5 has accepted coverage and Stage 6 is finalizing, it has served its purpose. Keep the parent `.bmad-cache/` directory for future runs.
If a fresh edit-mode session needs the inventory back, Stage 4's recovery path will rebuild it.
## Step 6: Run on_complete
```
python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete


@@ -0,0 +1,52 @@
**Language:** Use `{communication_language}` for all output.
**Output Language:** Use `{document_output_language}` for documents.
**Paths:** Bare paths (e.g. `scripts/from_spec.py`) resolve from the skill root.
# From-Spec Headless Authoring
You arrive here when the user invoked the skill with `--from-spec <path>`. Stages 1–3 are skipped — the spec carries the initiative title, intent, optional inventory, and the full epic-and-story breakdown. This stage drives Stage 4 (authoring) and Stage 5 (validation) deterministically and emits a single JSON envelope. Suitable for pipelines (PRD → spec → tree) and for senior users who pre-drafted in a scratch buffer.
## Spec schema
The spec is JSON. See `scripts/from_spec.py`'s docstring for the canonical schema; in summary:
- `title`, `intent` — informational metadata.
- `inventory.requirements.{functional|non_functional|ux_design}[]` — optional; if present, written to `{initiative_store}/.bmad-cache/inventory.json` and consumed by Stage 5's coverage gate.
- `epics[]` — required. Each epic has `nn`, `title`, optional `intent`, `depends_on`, `shared_context`, `story_sequence`, `references`, and `stories[]`.
- `epics[].stories[]` — each story has `nn`, `title`, `type` (one of `feature|task|bug|spike`), optional `depends_on`, `user_story`, `acceptance_criteria`, `technical_notes`, `coverage` (AC→codes map).
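A minimal spec under that summary might look like the following. Every title, code, and inner value shape here is illustrative — `scripts/from_spec.py`'s docstring remains the canonical schema:

```json
{
  "title": "billing-stripe-v2",
  "intent": "Move billing to Stripe Checkout.",
  "inventory": {
    "requirements": {
      "functional": ["FR1: Customers can start a checkout session"],
      "non_functional": [],
      "ux_design": []
    }
  },
  "epics": [
    {
      "nn": 1,
      "title": "billing-stripe",
      "depends_on": [],
      "stories": [
        {
          "nn": 1,
          "title": "checkout-session",
          "type": "feature",
          "acceptance_criteria": ["Given a signed-in customer, When ..., Then ..."],
          "coverage": {"AC1": ["FR1"]}
        }
      ]
    }
  ]
}
```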
## Run
```
python3 scripts/from_spec.py --initiative-store {initiative_store} --spec <path-to-spec.json>
```
Add `--coverage-strict` when the user wants a coverage gap to fail validation (CI-style strict run).
The script:
1. Validates the spec (returns 1 with a `details` array if anything is malformed).
2. Calls `init_epic.py` for every epic, then `init_story.py` for every story — the same path Stage 4 uses for interactive authoring.
3. Patches each generated file with the optional body fields from the spec (Goal, Shared Context, Story Sequence, References, User Story, Acceptance Criteria, Technical Notes, Coverage).
4. If `inventory` was in the spec, writes it to the inventory cache.
5. Runs `validate_initiative.py --inventory ...` and includes the result in the envelope.
6. Exits with the validator's exit code: 0 on success, 1 on errors.
## Headless mode
When `{mode}=headless` (the default for `--from-spec`):
1. Print the script's JSON envelope verbatim. Do not greet, converse, or summarize.
2. Exit code mirrors the script's exit code.
## Interactive mode
When the user invokes `--from-spec` interactively (without `--headless`):
1. Run the script.
2. Print a short summary: "Created N epics and M stories. Validation: <pass|errors>." Show the validator's `findings` if any.
3. Offer the standard hand-offs from `prompts/finalize.md` (skip the tree print — `from_spec.py` already produced it via the validator).
## Stage Complete
This prompt is single-shot. After the script returns, the workflow is done — there is no Stage 6 conversational hand-off in headless mode, and in interactive mode the summary above is the terminal step. Edit-mode flows (add-epic, split-epic, etc.) remain available for follow-up edits and are still interactive.


@@ -6,7 +6,16 @@
**Goal:** Know what the initiative is about — well enough to make discovery (Stage 2) targeted, and well enough to suppress questions whose answers are already on disk.
This stage runs in **create** and **migrate** modes only. Edit-mode skips Stage 1 entirely (see `prompts/edit-mode.md`).
This stage runs in **create** and **migrate** modes only. Edit-mode skips Stage 1 entirely (see `prompts/edit-mode.md`); from-spec mode skips Stage 1 entirely (see `prompts/from-spec.md`); headless mode never reaches Stage 1.
## Wrong-skill check (do this first)
If the user's opening message describes a single deliverable rather than an initiative-shaped piece of work, ask once whether this skill is the right fit before continuing:
- "I need to add a single story / fix one bug / write up a small change" → suggest `bmad-create-story`.
- "I haven't decided what we're building yet / I need help framing the product" → suggest `bmad-create-prd`.
A single soft prompt is enough — accept the user's answer and proceed.
## Create mode
@@ -14,11 +23,15 @@ You need three things to leave this stage:
1. **Initiative title** — a short kebab-friendly handle (e.g. "billing-stripe-v2"). The user may give a long sentence; ask for a tighter handle once you can summarize.
2. **Primary intent** — one or two sentences. What this initiative is for and roughly what done looks like. This is the relevance filter for Stage 2's artifact scan and Stage 3's epic shaping.
3. **Expected story-type mix** — feature-heavy, task/bug-heavy (tech-debt), spike-heavy (research), or mixed. Tells you how to handle a missing PRD gracefully in Stage 2.
3. **Expected story-type mix** — pick one. This tells Stage 2 how to handle a missing PRD and shapes Stage 3's sizing intuition. When asking, give a one-liner per option:
- **feature-heavy** — most stories deliver new user-visible capability (PRD-driven build).
- **task-heavy / tech-debt** — most stories are refactors, infra, or cleanup with no user-story stanza.
- **spike-heavy / research** — most stories are investigations whose output is a written finding, not shipped code.
- **mixed** — meaningful blend.
**Suppress questions where the answer is already on disk.** If `governance.md` or `initiative-context.md` were loaded into facts, read them first. If they cover scope, owner, deadlines, or constraints, do not re-ask. Note absent files with a one-line pointer ("there's no `initiative-context.md` here — fine for solo work; mention it if you'd like to add one") and move on.
**Capture-don't-interrupt.** If the user volunteers technical details, FRs, or epic ideas during this stage, capture them silently into your working memory. Do not redirect — they will be useful in Stages 2–4.
**Capture-don't-interrupt with acknowledgement.** If the user volunteers technical details, FRs, or epic ideas during this stage, capture them silently into your working memory. Do not redirect — they will be useful in Stages 2–4. Once per stage, surface a one-liner like "noted on the rate-limit constraint, I'll bring it back in epic design" so the user knows the tangent landed; do not list every captured detail.
When all three items are settled, route to `prompts/discovery.md` with `{mode}=create` and the initiative title and intent in your working memory.
@@ -39,6 +52,8 @@ After the user picks:
This stage is conversational. Confirm with a soft prompt rather than a menu — "Anything else to add about the initiative, or should we move on to scanning the project?" Users almost always remember one more thing when given a graceful exit ramp.
In `{yolo}=true` skip the soft gate entirely once the three items are settled.
## Stage Complete
Stage 1 ends when the chosen mode's exit conditions above are met. Carry the initiative title, primary intent, story-type mix, and any volunteered details into the next stage in working memory — none of this is written to disk yet.
Stage 1 ends when the chosen mode's exit conditions above are met. Carry the initiative title, primary intent, story-type mix, and any volunteered details into the next stage in working memory — none of this is written to disk yet (Stage 2 writes the inventory to `.bmad-cache/inventory.json`).


@@ -8,62 +8,64 @@ You arrive here when Stage 0 detected `{mode}=migrate` and the user picked optio
## Scope
**Supported:** the canonical v6 monolithic shape — a single `epics.md` produced by the v6 `bmad-create-epics-and-stories` skill, following its `templates/epics-template.md` structure. Front matter at top, `## Requirements Inventory`, `## Epic List`, then per-epic `## Epic N: <title>` sections, each with `### Story N.M: <title>` blocks containing user-story stanzas and Acceptance Criteria.
**Supported:** the canonical v6 monolithic shape — a single `epics.md` produced by the v6 `bmad-create-epics-and-stories` skill, following its `templates/epics-template.md` structure. Front matter at top, `## Requirements Inventory`, `## Epic List`, then per-epic `## Epic N: <title>` sections, each with `### Story N.M: <title>` blocks.
**Not supported:** sharded v6 (`epics/index.md` + multiple files), or hand-edited / heavily restructured v6 docs. If the input doesn't match the canonical shape, the parsing pass below will report what it could and couldn't extract — show that to the user and offer option 3 from `prompts/intent.md` ("walk it through manually using the v6 file as input to a normal create flow").
**Sharded v6** is also supported via a flatten step (see Sharded Input below).
**Not supported:** hand-edited / heavily restructured v6 docs that don't follow the canonical headings. The parser surfaces what it could and couldn't extract — show that to the user and offer option 3 from `prompts/intent.md` ("walk it through manually using the v6 file as input to a normal create flow").
## Process
### 1. Locate and read the v6 file
### 1. Locate the v6 input
The v6 file is at `{initiative_store}/epics.md` or `{planning_artifacts}/epics.md`. Read it fully.
The v6 input is at `{initiative_store}/epics.md` or `{planning_artifacts}/epics.md`, or — for sharded v6 — a directory containing `index.md` and per-epic files at the same locations.
**Sharded check.** If the path is a directory containing `index.md`, stop here and tell the user:
### 2. Run the parser
> "Your v6 file looks sharded — it's a directory with `index.md` and per-epic files. The migration helper handles only the canonical monolithic shape. To proceed: either flatten it into a single `epics.md` first, or pick option 3 from the previous prompt to walk through manually with the v6 content as context."
```
python3 scripts/parse_v6_epics.py --input <path-or-dir>
```
Then route back to `prompts/intent.md` so the user can pick again.
The script emits structured JSON: `title`, `requirements` (FRs/NFRs/UX-DRs), `epics[]` (each with stories[], acceptance_criteria[], coverage_codes[], inferred type), `warnings[]`, and an `is_sharded` flag.
### 2. Parse the canonical shape
#### Sharded input
Walk the document and extract:
If the parser reports `is_sharded: true`, tell the user:
- **Initiative title.** From the top heading or front-matter `title:`. If neither, ask the user once.
- **Epics.** Each `## Epic N: <title>` (or `### Epic N: <title>`) becomes an epic. The N order in the file becomes the v7 NN order.
- **Per-epic intent / goal.** The first paragraph(s) under each epic heading, before the first story.
- **Stories.** Each `### Story N.M: <title>` (or `#### Story N.M: <title>`) inside an epic becomes a story. The M order becomes the v7 story NN.
- **User-story stanza.** The "As a / I want / So that" block under each story heading. If absent, treat as `type: task`. If present and the title looks user-facing, treat as `type: feature`. If the title contains "bug" / "fix", treat as `type: bug`. If "spike" / "investigate" / "research", treat as `type: spike`. When in doubt, ask the user.
- **Acceptance Criteria.** The Given/When/Then block under `**Acceptance Criteria:**`. Preserve verbatim — re-formatting can introduce errors.
- **Coverage.** Look for `FR1`, `NFR2`, `UX-DR3` references in the AC text. If absent (the v6 template often left coverage in a separate FR Coverage Map), parse the FR Coverage Map and reverse-map FRs back to stories.
> "Your v6 input looks sharded — a directory of files rather than a single monolithic doc. Two options: I can flatten it into a single `epics.md` first (concatenate `index.md` + every per-epic file in order, write to a temp path), or you can pick option 3 from the previous prompt to walk through manually with the v6 content as context."
If a section is malformed or missing, log the issue and continue. Do not block the whole migration on one bad story.
If the user picks flatten: read `index.md` (if present) plus each per-epic file under the directory in NN order; concatenate with a blank line between sections; write to `{initiative_store}/.bmad-cache/v6-flattened.md`; then re-invoke `parse_v6_epics.py` against the flattened file. Otherwise, route back to `prompts/intent.md`.
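The flatten step itself is simple concatenation. A sketch, assuming the sharded directory holds `index.md` plus per-epic markdown files whose names sort in NN order (the filenames are assumptions):

```python
# Flatten a sharded v6 directory: index.md first, then per-epic files
# in sorted (NN) order, one blank line between sections.
from pathlib import Path

def flatten_v6(shard_dir, out_path):
    shard_dir = Path(shard_dir)
    parts = []
    index = shard_dir / "index.md"
    if index.exists():
        parts.append(index.read_text())
    for f in sorted(shard_dir.glob("*.md")):
        if f.name != "index.md":
            parts.append(f.read_text())
    Path(out_path).write_text(
        "\n\n".join(p.rstrip("\n") for p in parts) + "\n")
```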
### 3. Confirm the parse with the user
Show a concise summary: "I parsed N epics and M stories from the v6 file. Epic 1 is '<title>' with K stories; Epic 2 is '<title>' with L stories; ...". Flag anything ambiguous (story type guesses, stories with no ACs, FRs that didn't map cleanly).
Show a concise summary using the parser's output: "Parsed N epics and M stories. Epic 1 is '<title>' with K stories; Epic 2 is '<title>' with L stories; ...". List `warnings[]` so the user can confirm the ambiguous calls (story type guesses, stories with no ACs, codes that didn't map cleanly).
Ask: "Look right? Want to adjust anything before I create the v7 tree?"
### 4. Generate the v7 tree
### 4. Persist the inventory
Write the parsed `requirements` block to `{initiative_store}/.bmad-cache/inventory.json` using the schema documented in `prompts/discovery.md`. This is what Stage 5's coverage gate will read.
### 5. Generate the v7 tree
For each parsed epic, in order:
1. `python3 scripts/init_epic.py --initiative-store {initiative_store} --epic-nn <NN> --title "<title>" [--depends-on <NNs>]`
2. Edit the new `epic.md` body — fill Goal from the parsed intent, fill Shared Context with anything the v6 epic mentioned about architecture or constraints (often sparse; that's fine), fill Story Sequence from any inter-story notes.
2. Edit the new `epic.md` body — fill Goal from the parsed `intent`, fill Shared Context with anything the v6 epic mentioned about architecture or constraints (often sparse; that's fine), fill Story Sequence from any inter-story notes.
3. For each parsed story in this epic:
- `python3 scripts/init_story.py --initiative-store {initiative_store} --epic <folder> --story-nn <NN> --title "<title>" --type <feature|bug|task|spike> [--depends-on <refs>]`
- Edit the new story file — paste the parsed user-story stanza (or remove it for `task`), paste the parsed ACs into the Acceptance Criteria section verbatim, leave Technical Notes empty (the v6 file usually doesn't have them at this granularity), and fill Coverage from the parsed AC-to-FR map.
- `python3 scripts/init_story.py --initiative-store {initiative_store} --epic <folder> --story-nn <NN> --title "<title>" --type <type> [--depends-on <refs>]`
- Paste the parsed user-story stanza (or remove it for `task`), paste the parsed ACs into the Acceptance Criteria section verbatim, leave Technical Notes empty (the v6 file usually doesn't have them at this granularity), and fill Coverage from the parsed `coverage_codes`.
The v6 file does not encode `depends_on` explicitly. Best-effort: assume cross-epic deps follow numeric order (Epic 2 depends on Epic 1, etc.) and confirm with the user before committing. Within-epic, leave `depends_on: []` and let Stage 3 / 5 surface anything the user wants to add.
The v6 file does not encode `depends_on` explicitly. Best-effort: assume cross-epic deps follow numeric order (Epic 2 depends on Epic 1, etc.) and confirm with the user before committing. Within-epic, leave `depends_on: []` and let Stage 5 surface anything to add.
### 5. Validate strict
### 6. Validate strict
After all epics are generated, route to `prompts/validate.md` strict. Surface any failures (typically: a story whose v6 ACs reference an FR code the script's regex didn't pick up, or an inferred dep that doesn't resolve). Loop back into `prompts/epic-authoring.md` for narrow fixes.
Route to `prompts/validate.md` strict (with `--inventory` since you just persisted one). Surface failures and loop into `prompts/epic-authoring.md` for narrow fixes.
### 6. Confirm next steps
### 7. Confirm next steps
Once validation passes, tell the user the v7 tree is ready and the v6 `epics.md` is still on disk untouched. They can delete it whenever they're confident. Then route to `prompts/finalize.md`.
## Stage Complete
This helper is single-shot. After the validation loop closes and the user accepts the migrated tree, the migrate flow is done — Stage 6 finalizes as if it had been a fresh create.
This helper is single-shot. After the validation loop closes, the migrate flow is done — Stage 6 finalizes as if it had been a fresh create.


@@ -4,57 +4,72 @@
# Stage 5: Validation
**Goal:** Confirm the v7 epic-and-story tree is sound — schema, deps, numbering, cycles — and that every initiative-level requirement is covered by at least one story's AC mapping. This stage is also the **headless surface** for CI: when invoked with `--re-validate` (or `--headless` / `-H`), it runs once and exits with JSON only.
**Goal:** Confirm the v7 epic-and-story tree is sound — schema, deps, numbering, cycles, and coverage. This stage is also the **headless surface** for CI: when invoked with `--re-validate` (or `--headless` / `-H`), it runs once and exits with JSON only.
## Strict validation
Run:
The validator handles coverage deterministically when given the inventory. Default invocation:
```
python3 scripts/validate_initiative.py --initiative-store {initiative_store}
python3 scripts/validate_initiative.py --initiative-store {initiative_store} \
--inventory {initiative_store}/.bmad-cache/inventory.json
```
Strict mode is the default. Take the JSON output. The `findings` list contains every error and warning; the `summary` block has the epic list, story counts by status, error/warning counts, and `mentioned_requirements` (the deduplicated set of FR / NFR / UX-DR codes the script extracted from story bodies via regex).
Without `--inventory`, the validator only checks schema/deps/cycles/numbering and emits the regex-extracted `mentioned_requirements` set; coverage findings will not be generated.
**The script does not check coverage.** The Stage 2 inventory lives in your working memory, not on disk — only you can compare. See "Coverage check" below.
The JSON contract:
- `findings[]` — every error and warning. New code: `coverage-missing` for inventory codes that don't appear textually in any story body. Default level is `warning`; pass `--coverage-strict` to escalate to `error`.
- `summary.epics[]` — full per-epic summary including story-level metadata (basename, title, type, status, depends_on, body_len). Use this instead of re-reading every file.
- `summary.mentioned_requirements` — deduplicated set of codes the regex found.
- `summary.coverage_missing` — codes from the inventory not found in any story body (only populated when `--inventory` was passed).
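Putting those documented keys together, an envelope might look like this — only the key names listed above are from this skill; the nesting and example values are illustrative:

```json
{
  "findings": [
    {
      "level": "warning",
      "code": "coverage-missing",
      "message": "NFR2 not mentioned in any story body"
    }
  ],
  "summary": {
    "epics": [
      {
        "folder": "01-user-authentication",
        "stories": [
          {
            "basename": "02-register-with-email.md",
            "title": "register-with-email",
            "type": "feature",
            "status": "draft",
            "depends_on": [],
            "body_len": 742
          }
        ]
      }
    ],
    "mentioned_requirements": ["FR1", "FR2"],
    "coverage_missing": ["NFR2"]
  }
}
```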
## Headless mode
If `{mode}=headless`:
1. Run the validator strict, passing `--inventory {initiative_store}/.bmad-cache/inventory.json` if that file exists. When the inventory is present and the user wants CI to fail on coverage gaps, also pass `--coverage-strict`.
2. Print the JSON output to stdout, unmodified.
3. Exit. Do not greet, converse, or invoke the coverage auditor.
4. Exit code mirrors the validator: 0 if no errors, 1 if any error. Warnings do not change the exit code.
## Interactive mode
### 1. Surface failures conversationally
For each error in `findings`, explain it in one sentence and offer to fix. Group by file when several errors land on the same path. Common patterns and the right next step:
- **Schema errors** (`*-extra-keys`, `*-missing-keys`, `*-bad-status`, `*-bad-type`) → loop back to `prompts/epic-authoring.md` for that one file, edit the front matter, re-validate.
- **`epic-nn-mismatch` / `story-epic-mismatch`** → likely a hand-edit of the front matter; the folder name is canonical, so update the front matter to match.
- **`story-dep-unresolved`** → either the dep was a typo (fix the depends_on entry) or the target was renamed (`scripts/rename_story.py`) or moved (`scripts/move_story.py`) without updating refs. Use the move/rename scripts going forward — they update refs atomically.
- **`epic-dep-cycle`** → the cross-epic graph has a loop. Loop back to `prompts/epic-design.md` (re-derive-deps flow) to fix it.
- **`story-numbering-gaps`** → use `scripts/rename_story.py --to-nn` to fill the gap or renumber the survivors.
If the failure-pattern set ever grows past ~10 entries, extract this list to `resources/validation-error-codes.md` to keep this prompt tight.
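Grouping the findings for conversational surfacing is a one-liner over the same JSON (a minimal sketch; the sample findings are invented, the `path` field follows the contract above):

```python
from collections import defaultdict

findings = [
    {"code": "story-missing-keys", "level": "error", "path": "epics/01-auth/01-login.md"},
    {"code": "story-bad-status", "level": "error", "path": "epics/01-auth/01-login.md"},
    {"code": "epic-dep-cycle", "level": "error", "path": "epics/02-billing/epic.md"},
]

# One conversational turn per file, not per finding.
by_path: dict[str, list[str]] = defaultdict(list)
for f in findings:
    if f["level"] == "error":
        by_path[f["path"]].append(f["code"])

for path, codes in by_path.items():
    print(f"{path}: {', '.join(codes)}")
```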
### 2. Coverage check
When `--inventory` was passed, the validator already produced `summary.coverage_missing` deterministically — surface those codes conversationally and route into the **coverage-fix** edit-mode entry point in `prompts/epic-authoring.md`.
The validator cannot distinguish "code not mentioned anywhere" from "code mentioned in prose without the literal token" (a story's Coverage line that says "password policy" instead of `NFR3.2`). When you suspect the latter, fan out `agents/coverage-auditor.md` with the inventory and the tree path; the auditor returns exact + fuzzy matches and a list of uncovered codes.
For each uncovered requirement: ask whether it should be added to an existing story's AC mapping or whether a new story is needed. Loop back to `prompts/epic-authoring.md` for the targeted edit.
#### Speeding up the auditor with a deterministic pre-pass
When you do invoke the coverage auditor, run `scripts/extract_coverage.py` first and pass the JSON output into the auditor's prompt. The script parses every story's `## Coverage` section into a compact AC→codes map, freeing the auditor to spend tokens on fuzzy semantic matching, not section-locating:
```
python3 scripts/extract_coverage.py --initiative-store {initiative_store}
```
If subagents are unavailable, do the fuzzy pass inline against the same JSON.
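An inline fuzzy pass can be as simple as token matching between each inventory entry's text and the story's prose coverage lines (a sketch under invented sample data; `difflib` matching is an assumption — the auditor's actual semantics are richer):

```python
import difflib

# Hypothetical inventory entries and one story's prose coverage lines.
inventory = {
    "NFR3.2": "password policy enforced on registration",
    "FR1": "users can log in with email",
}
prose_lines = ["AC2 covers the password policy requirements"]

covered = set()
for code, text in inventory.items():
    tokens = text.lower().split()
    for line in prose_lines:
        line_tokens = line.lower().split()
        # Count requirement words that appear (near-)verbatim in the prose.
        hits = sum(1 for t in tokens
                   if difflib.get_close_matches(t, line_tokens, n=1, cutoff=0.9))
        if hits >= 2:  # at least two requirement words present
            covered.add(code)

print(sorted(covered))
```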
### 3. Sizing warnings
The validator emits warnings (not errors) for stories whose body is more than 3× the epic mean. These are advisory — surface them as "this story may not fit one session" and let the user decide whether to split. If many warnings fire on real-world stories, the threshold may need tuning rather than the stories — note it for a follow-up.
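The heuristic can be reproduced from `body_len` alone (a sketch; the 3× threshold follows the text above, the story names and lengths are invented, and whether the validator includes the story itself in the mean is an assumption):

```python
# Hypothetical body lengths for one epic's stories.
body_lens = {
    "01-login": 1000, "02-register": 1000, "03-reset": 1000,
    "04-mfa": 1000, "05-logout": 1000, "06-sso-everything": 6000,
}

mean = sum(body_lens.values()) / len(body_lens)
# Flag stories more than 3x the epic mean (story included in the mean).
oversized = [name for name, n in body_lens.items() if n > 3 * mean]
print(oversized)
```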
### 4. Re-validate after fixes
Loop back to step 1 after any fix. Stage 5 ends when strict validation has zero errors (and zero `coverage-missing` errors when `--coverage-strict` is in effect) and every inventory item is either covered or explicitly de-scoped by the user.
## Stage Complete

#!/usr/bin/env python3
# /// script
# requires-python = ">=3.10"
# ///
"""Extract every story's `## Coverage` section as a structured AC -> codes map.
Walks every story file under {initiative_store}/epics/<epic-folder>/*.md
(skipping each `epic.md`), locates the `## Coverage` section, and parses each
AC line into the requirement codes it claims. Output is intended to feed
agents/coverage-auditor.md so it spends tokens on fuzzy semantic matching, not
on locating sections in story bodies.
Coverage line shape this script accepts (loosely):
- AC1 -> FR1, NFR3.2
- **AC1**: FR1, NFR3.2
- AC1: FR1; UX-DR2 (password policy)
Codes are matched by REQUIREMENT_CODE_RE: `FR\\d+(\\.\\d+)?`, `NFR\\d+(\\.\\d+)?`,
`UX-DR\\d+(\\.\\d+)?`, plus `D\\d+` (debt) and `R\\d+` (research) when present.
Output (stdout, JSON):
{
"stories": [
{
"epic": "01-auth", "basename": "02-register-with-email",
"path": "<abs>", "has_coverage_section": true,
"ac_to_codes": {"AC1": ["FR1"], "AC2": ["NFR3.2"]},
"all_codes": ["FR1", "NFR3.2"]
}
],
"stories_without_coverage_section": ["<epic>/<basename>"],
"all_codes": ["FR1", "NFR3.2", "UX-DR2"]
}
Exit codes: 0 ok, 1 user error.
"""
from __future__ import annotations
import argparse
import json
import re
import sys
from pathlib import Path
REQUIREMENT_CODE_RE = re.compile(r"\b(?:UX-DR|NFR|FR|D|R)\d+(?:\.\d+)?\b")
COVERAGE_HEADING_RE = re.compile(r"^##\s+Coverage\s*$", re.MULTILINE)
NEXT_HEADING_RE = re.compile(r"^##\s+", re.MULTILINE)
AC_LINE_RE = re.compile(r"\bAC\d+\b", re.IGNORECASE)
def extract_coverage_section(text: str) -> str | None:
m = COVERAGE_HEADING_RE.search(text)
if not m:
return None
start = m.end()
nxt = NEXT_HEADING_RE.search(text, start)
end = nxt.start() if nxt else len(text)
return text[start:end]
def parse_coverage_section(section: str) -> dict[str, list[str]]:
"""Return {AC1: [codes...], AC2: [...]} for every AC referenced.
Loose: any line that mentions one or more AC labels (AC1, AC2, ...) and
one or more requirement codes contributes a mapping.
"""
out: dict[str, list[str]] = {}
for line in section.splitlines():
ac_labels = [m.group(0).upper() for m in AC_LINE_RE.finditer(line)]
if not ac_labels:
continue
codes = REQUIREMENT_CODE_RE.findall(line)
if not codes:
continue
for ac in ac_labels:
out.setdefault(ac, [])
for c in codes:
if c not in out[ac]:
out[ac].append(c)
return out
def walk(initiative_store: Path) -> dict:
epics_dir = initiative_store / "epics"
if not epics_dir.is_dir():
print(f"missing {epics_dir}", file=sys.stderr)
sys.exit(1)
stories: list[dict] = []
no_section: list[str] = []
all_codes: set[str] = set()
for ed in sorted(epics_dir.iterdir()):
if not ed.is_dir() or not re.match(r"^\d+-", ed.name):
continue
for sf in sorted(ed.glob("*.md")):
if sf.name == "epic.md":
continue
text = sf.read_text(encoding="utf-8")
section = extract_coverage_section(text)
if section is None:
no_section.append(f"{ed.name}/{sf.stem}")
stories.append({
"epic": ed.name,
"basename": sf.stem,
"path": str(sf),
"has_coverage_section": False,
"ac_to_codes": {},
"all_codes": [],
})
continue
ac_map = parse_coverage_section(section)
codes = sorted({c for codes in ac_map.values() for c in codes})
all_codes.update(codes)
stories.append({
"epic": ed.name,
"basename": sf.stem,
"path": str(sf),
"has_coverage_section": True,
"ac_to_codes": ac_map,
"all_codes": codes,
})
return {
"stories": stories,
"stories_without_coverage_section": no_section,
"all_codes": sorted(all_codes),
}
def main() -> int:
ap = argparse.ArgumentParser(description=__doc__)
ap.add_argument("--initiative-store", required=True, type=Path)
args = ap.parse_args()
print(json.dumps(walk(args.initiative_store)))
return 0
if __name__ == "__main__":
sys.exit(main())

#!/usr/bin/env python3
# /// script
# requires-python = ">=3.10"
# ///
"""Generate a complete v7 epic-and-story tree from a structured spec, then validate.
Spec schema (JSON or YAML-equivalent JSON):
{
"title": "...", # initiative name (informational)
"intent": "...", # one-line intent (informational)
"inventory": { # optional; written to .bmad-cache/inventory.json
"requirements": {
"functional": [{"code": "FR1", "text": "..."}],
"non_functional": [{"code": "NFR1", "text": "..."}],
"ux_design": [{"code": "UX-DR1","text": "..."}]
}
},
"epics": [
{
"nn": 1, "title": "...", "intent": "...",
"depends_on": [], # epic NNs as ints or strings
"shared_context": "...", # optional; replaces the placeholder
"story_sequence": "...", # optional; replaces the placeholder
"references": "...", # optional; replaces the placeholder
"stories": [
{
"nn": 1, "title": "...", "type": "feature|task|bug|spike",
"depends_on": [], # within-epic basenames or <epic>/<basename>
"user_story": "...", # optional; absent for type=task
"acceptance_criteria": ["..."], # rendered as bullet list under ## Acceptance Criteria
"technical_notes": "...", # optional
"coverage": {"AC1": ["FR1"]} # AC -> codes
}
]
}
]
}
The script invokes init_epic.py / init_story.py for every file and then patches
each file's body with the optional fields the spec provides. Finally, it runs
validate_initiative.py strict (with --inventory if a spec inventory was present)
and emits a JSON envelope summarizing the run.
Output (stdout, JSON):
{
"initiative_store": "<abs>",
"epics_created": ["01-foo", "02-bar"],
"stories_created": ["01-foo/01-baz", ...],
"inventory_path": "<abs>" | null,
"validation": {"findings": [...], "summary": {...}},
"exit_code": 0
}
Exit code mirrors the validation: 0 if no errors, 1 otherwise. Spec errors exit 1.
"""
from __future__ import annotations
import argparse
import json
import re
import subprocess
import sys
from pathlib import Path
SCRIPTS = Path(__file__).resolve().parent
INIT_EPIC = SCRIPTS / "init_epic.py"
INIT_STORY = SCRIPTS / "init_story.py"
VALIDATE = SCRIPTS / "validate_initiative.py"
def _load_spec(path: Path) -> dict:
if not path.is_file():
raise SystemExit(f"spec not found: {path}")
text = path.read_text(encoding="utf-8")
try:
return json.loads(text)
except json.JSONDecodeError as exc:
raise SystemExit(f"could not parse spec as JSON: {exc}")
def _run(cmd: list[str]) -> tuple[int, str, str]:
p = subprocess.run(cmd, capture_output=True, text=True, check=False)
return p.returncode, p.stdout, p.stderr
def _replace_section(body: str, heading: str, replacement: str) -> str:
pattern = re.compile(rf"^(##\s+{re.escape(heading)}\s*\n)(.*?)(?=^##\s+|\Z)", re.MULTILINE | re.DOTALL)
if not pattern.search(body):
return body
return pattern.sub(lambda m: m.group(1) + replacement.rstrip() + "\n\n", body)
def _replace_user_story(body: str, user_story: str | None) -> str:
if user_story is None:
return re.sub(r"\n*<!-- USER_STORY_START -->.*?<!-- USER_STORY_END -->\n*", "\n\n", body, flags=re.DOTALL)
block = f"\n<!-- USER_STORY_START -->\n{user_story.strip()}\n<!-- USER_STORY_END -->\n"
return re.sub(r"\n*<!-- USER_STORY_START -->.*?<!-- USER_STORY_END -->\n*", block, body, flags=re.DOTALL)
def _render_acs(acs: list[str]) -> str:
return "\n".join(f"- {ac.strip()}" for ac in acs if ac.strip())
def _render_coverage(coverage: dict[str, list[str]] | None) -> str:
    if not coverage:
        return ""
    # Sort AC labels numerically so AC10 renders after AC9, not after AC1.
    def _ac_key(ac: str) -> tuple[int, str]:
        m = re.search(r"\d+", ac)
        return (int(m.group(0)) if m else 0, ac)
    return "\n".join(f"- {ac.strip()}: {', '.join(coverage[ac])}" for ac in sorted(coverage, key=_ac_key))
def _patch_epic_md(epic_md: Path, epic_spec: dict) -> None:
body = epic_md.read_text(encoding="utf-8")
if "shared_context" in epic_spec and epic_spec["shared_context"]:
body = _replace_section(body, "Shared Context", epic_spec["shared_context"])
if "intent" in epic_spec and epic_spec["intent"]:
body = _replace_section(body, "Goal", epic_spec["intent"])
if "story_sequence" in epic_spec and epic_spec["story_sequence"]:
body = _replace_section(body, "Story Sequence", epic_spec["story_sequence"])
if "references" in epic_spec and epic_spec["references"]:
body = _replace_section(body, "References", epic_spec["references"])
epic_md.write_text(body, encoding="utf-8")
def _patch_story_file(story_path: Path, story_spec: dict) -> None:
body = story_path.read_text(encoding="utf-8")
body = _replace_user_story(body, story_spec.get("user_story"))
if story_spec.get("acceptance_criteria"):
body = _replace_section(body, "Acceptance Criteria", _render_acs(story_spec["acceptance_criteria"]))
if story_spec.get("technical_notes"):
body = _replace_section(body, "Technical Notes", story_spec["technical_notes"])
cov = _render_coverage(story_spec.get("coverage"))
if cov:
body = _replace_section(body, "Coverage", cov)
story_path.write_text(body, encoding="utf-8")
def _validate_spec(spec: dict) -> list[str]:
errs: list[str] = []
if not isinstance(spec.get("epics"), list) or not spec["epics"]:
errs.append("spec must contain a non-empty `epics` list")
return errs
for i, epic in enumerate(spec["epics"]):
for k in ("nn", "title"):
if k not in epic:
errs.append(f"epic[{i}] missing `{k}`")
for j, story in enumerate(epic.get("stories", []) or []):
for k in ("nn", "title", "type"):
if k not in story:
errs.append(f"epic[{i}].stories[{j}] missing `{k}`")
if story.get("type") not in {"feature", "task", "bug", "spike"}:
errs.append(f"epic[{i}].stories[{j}].type invalid: {story.get('type')!r}")
return errs
def main() -> int:
ap = argparse.ArgumentParser(description=__doc__)
ap.add_argument("--initiative-store", required=True, type=Path)
ap.add_argument("--spec", required=True, type=Path, help="Path to JSON spec")
ap.add_argument("--coverage-strict", action="store_true", help="Pass --coverage-strict to validation when an inventory is present")
args = ap.parse_args()
spec = _load_spec(args.spec)
errs = _validate_spec(spec)
if errs:
print(json.dumps({"error": "invalid spec", "details": errs}))
return 1
epics_created: list[str] = []
stories_created: list[str] = []
for epic_spec in spec["epics"]:
deps = epic_spec.get("depends_on", []) or []
deps_str = ",".join(str(d) for d in deps)
rc, out, err = _run([
sys.executable, str(INIT_EPIC),
"--initiative-store", str(args.initiative_store),
"--epic-nn", str(epic_spec["nn"]),
"--title", str(epic_spec["title"]),
"--depends-on", deps_str,
])
if rc != 0:
print(json.dumps({"error": "init_epic.py failed", "details": err.strip(), "epic": epic_spec.get("title")}))
return 1
epic = json.loads(out)
epics_created.append(epic["epic"])
epic_md = Path(epic["path"])
_patch_epic_md(epic_md, epic_spec)
for story_spec in epic_spec.get("stories", []) or []:
sdeps = story_spec.get("depends_on", []) or []
sdeps_str = ",".join(str(d) for d in sdeps)
rc, out, err = _run([
sys.executable, str(INIT_STORY),
"--initiative-store", str(args.initiative_store),
"--epic", epic["epic"],
"--story-nn", str(story_spec["nn"]),
"--title", str(story_spec["title"]),
"--type", str(story_spec["type"]),
"--depends-on", sdeps_str,
])
if rc != 0:
print(json.dumps({"error": "init_story.py failed", "details": err.strip(), "epic": epic["epic"], "story": story_spec.get("title")}))
return 1
story = json.loads(out)
stories_created.append(f"{epic['epic']}/{story['story']}")
_patch_story_file(Path(story["path"]), story_spec)
inventory_path: Path | None = None
if "inventory" in spec and spec["inventory"]:
cache_dir = args.initiative_store / ".bmad-cache"
cache_dir.mkdir(parents=True, exist_ok=True)
inventory_path = cache_dir / "inventory.json"
inventory = dict(spec["inventory"])
inventory.setdefault("title", spec.get("title"))
inventory.setdefault("intent", spec.get("intent"))
inventory["source"] = "from-spec"
inventory_path.write_text(json.dumps(inventory, indent=2), encoding="utf-8")
validate_cmd = [sys.executable, str(VALIDATE), "--initiative-store", str(args.initiative_store)]
if inventory_path is not None:
validate_cmd.extend(["--inventory", str(inventory_path)])
if args.coverage_strict:
validate_cmd.append("--coverage-strict")
rc, out, err = _run(validate_cmd)
try:
validation = json.loads(out)
except json.JSONDecodeError:
validation = {"error": "could not parse validator output", "stdout": out, "stderr": err}
envelope = {
"initiative_store": str(args.initiative_store),
"epics_created": epics_created,
"stories_created": stories_created,
"inventory_path": str(inventory_path) if inventory_path else None,
"validation": validation,
"exit_code": rc,
}
print(json.dumps(envelope))
return rc
if __name__ == "__main__":
sys.exit(main())

#!/usr/bin/env python3
# /// script
# requires-python = ">=3.10"
# ///
"""Parse a canonical v6 monolithic epics.md into a structured spec.
The v6 canonical shape produced by the v6 `bmad-create-epics-and-stories` skill:
- Front matter at top.
- `## Requirements Inventory` section listing FRs / NFRs / UX-DRs.
- `## Epic List` overview.
- `## Epic N: <title>` sections, each containing `### Story N.M: <title>`
blocks with user-story stanzas, `**Acceptance Criteria:**` and an
optional `## FR Coverage Map` reverse-lookup table.
This script parses what it can deterministically. It does not invent data: any
ambiguous field is left empty for the LLM to confirm.
Output (stdout, JSON):
{
"title": "<initiative title or null>",
"requirements": {
"functional": [{"code": "FR1", "text": "..."}],
"non_functional":[{"code": "NFR1", "text": "..."}],
"ux_design": [{"code": "UX-DR1","text": "..."}]
},
"epics": [
{
"nn": 1, "title": "...", "intent": "...",
"stories": [
{"nn": 1, "title": "...", "type": "feature|task|bug|spike",
"user_story": "..." | null,
"acceptance_criteria": ["..."],
"coverage_codes": ["FR1"]}
]
}
],
"warnings": ["per-section parse note that the LLM should confirm"],
"is_sharded": false
}
Exit codes: 0 ok, 1 user error.
"""
from __future__ import annotations
import argparse
import json
import re
import sys
from pathlib import Path
EPIC_HEADING_RE = re.compile(r"^#{2,3}\s+Epic\s+(\d+)\s*:\s*(.+?)\s*$", re.MULTILINE)
STORY_HEADING_RE = re.compile(r"^#{3,4}\s+Story\s+(\d+)\.(\d+)\s*:\s*(.+?)\s*$", re.MULTILINE)
NEXT_SECTION_RE = re.compile(r"^#{1,4}\s+", re.MULTILINE)
TITLE_FRONT_RE = re.compile(r"^title:\s*(.+)$", re.MULTILINE)
H1_RE = re.compile(r"^#\s+(.+)$", re.MULTILINE)
USER_STORY_RE = re.compile(
r"\*{0,2}As a\*{0,2}[^\n]*\n+\*{0,2}I want\*{0,2}[^\n]*\n+\*{0,2}So that\*{0,2}[^\n]*",
re.IGNORECASE,
)
AC_RE = re.compile(r"\*\*Acceptance Criteria\*\*:?\s*\n+", re.IGNORECASE)
GIVEN_WHEN_THEN_RE = re.compile(r"^[-*]\s+(?:Given|When|Then|And|But)\b.*$", re.MULTILINE | re.IGNORECASE)
REQUIREMENT_CODE_RE = re.compile(r"\b(?:UX-DR|NFR|FR)\d+(?:\.\d+)?\b")
REQUIREMENT_LINE_RE = re.compile(r"^[-*]\s+(?:\*\*)?(FR\d+(?:\.\d+)?|NFR\d+(?:\.\d+)?|UX-DR\d+(?:\.\d+)?)(?:\*\*)?:?\s*(.*)$", re.MULTILINE)
def _strip_frontmatter(text: str) -> str:
if text.startswith("---\n"):
end = text.find("\n---", 4)
if end != -1:
return text[end + 4:]
return text
def _front_title(text: str) -> str | None:
    if text.startswith("---\n"):
        end = text.find("\n---", 4)
        if end != -1:
            m = TITLE_FRONT_RE.search(text[:end])
            if m:
                return m.group(1).strip().strip('"').strip("'")
    h1 = H1_RE.search(_strip_frontmatter(text))
    return h1.group(1).strip() if h1 else None
def _section_text(text: str, start: int) -> tuple[str, int]:
nxt = NEXT_SECTION_RE.search(text, start)
end = nxt.start() if nxt else len(text)
return text[start:end], end
def _classify_story_type(title: str, has_user_story: bool) -> str:
t = title.lower()
if any(w in t for w in ("bug", "fix")):
return "bug"
if any(w in t for w in ("spike", "investigate", "research")):
return "spike"
if has_user_story:
return "feature"
return "task"
def _extract_acs(story_body: str) -> list[str]:
m = AC_RE.search(story_body)
if not m:
return []
after = story_body[m.end():]
nxt = NEXT_SECTION_RE.search(after)
block = after[: nxt.start() if nxt else len(after)]
acs: list[str] = []
cur: list[str] = []
for line in block.splitlines():
if GIVEN_WHEN_THEN_RE.match(line):
cur.append(line.lstrip("-* ").strip())
elif cur and not line.strip():
acs.append(" ".join(cur).strip())
cur = []
elif line.strip().startswith("AC") and cur:
acs.append(" ".join(cur).strip())
cur = []
if cur:
acs.append(" ".join(cur).strip())
return [a for a in acs if a]
def parse(text: str) -> dict:
warnings: list[str] = []
title = _front_title(text)
body = _strip_frontmatter(text)
reqs: dict[str, list[dict]] = {"functional": [], "non_functional": [], "ux_design": []}
inv_match = re.search(r"^##\s+Requirements\s+Inventory\s*$", body, re.MULTILINE | re.IGNORECASE)
if inv_match:
section, _ = _section_text(body, inv_match.end())
for m in REQUIREMENT_LINE_RE.finditer(section):
code, txt = m.group(1).strip(), m.group(2).strip()
if code.startswith("UX-DR"):
reqs["ux_design"].append({"code": code, "text": txt})
elif code.startswith("NFR"):
reqs["non_functional"].append({"code": code, "text": txt})
else:
reqs["functional"].append({"code": code, "text": txt})
else:
warnings.append("no `## Requirements Inventory` section found")
epics: list[dict] = []
epic_matches = list(EPIC_HEADING_RE.finditer(body))
for i, em in enumerate(epic_matches):
epic_nn = int(em.group(1))
epic_title = em.group(2).strip()
epic_start = em.end()
epic_end = epic_matches[i + 1].start() if i + 1 < len(epic_matches) else len(body)
epic_body = body[epic_start:epic_end]
first_story = STORY_HEADING_RE.search(epic_body)
intent = epic_body[: first_story.start() if first_story else len(epic_body)].strip()
intent = re.sub(r"^#+.*$", "", intent, flags=re.MULTILINE).strip()
stories: list[dict] = []
story_matches = list(STORY_HEADING_RE.finditer(epic_body))
for j, sm in enumerate(story_matches):
story_nn = int(sm.group(2))
story_title = sm.group(3).strip()
s_start = sm.end()
s_end = story_matches[j + 1].start() if j + 1 < len(story_matches) else len(epic_body)
s_body = epic_body[s_start:s_end]
us_match = USER_STORY_RE.search(s_body)
user_story = us_match.group(0).strip() if us_match else None
story_type = _classify_story_type(story_title, has_user_story=user_story is not None)
acs = _extract_acs(s_body)
codes = sorted(set(REQUIREMENT_CODE_RE.findall(s_body)))
if not acs:
warnings.append(f"epic {epic_nn} story {story_nn}: no acceptance criteria parsed")
stories.append({
"nn": story_nn,
"title": story_title,
"type": story_type,
"user_story": user_story,
"acceptance_criteria": acs,
"coverage_codes": codes,
})
if not stories:
warnings.append(f"epic {epic_nn} ({epic_title}): no stories parsed")
epics.append({
"nn": epic_nn,
"title": epic_title,
"intent": intent,
"stories": stories,
})
if not epics:
warnings.append("no `## Epic N:` headings found; file may not be canonical v6")
return {
"title": title,
"requirements": reqs,
"epics": epics,
"warnings": warnings,
"is_sharded": False,
}
def main() -> int:
ap = argparse.ArgumentParser(description=__doc__)
ap.add_argument("--input", required=True, type=Path, help="Path to v6 epics.md (or directory for sharded)")
args = ap.parse_args()
if args.input.is_dir():
out = {
"title": None,
"requirements": {"functional": [], "non_functional": [], "ux_design": []},
"epics": [],
"warnings": [f"input {args.input} is a directory; sharded v6 input — flatten first or use --convert"],
"is_sharded": True,
}
print(json.dumps(out))
return 0
if not args.input.is_file():
print(f"input not found: {args.input}", file=sys.stderr)
return 1
text = args.input.read_text(encoding="utf-8")
print(json.dumps(parse(text)))
return 0
if __name__ == "__main__":
sys.exit(main())

#!/usr/bin/env python3
# /// script
# requires-python = ">=3.10"
# ///
"""Rename or renumber an epic folder safely; rewrites every reference across the tree.
Inputs:
--epic <NN-kebab> existing epic folder to rename
--to-title <text> new title -> derives a new kebab slug (max 40 chars)
--to-nn <int> new numeric prefix; if omitted, preserves the existing NN
When the epic moves from `src-folder` (NN-kebab) to `dst-folder`:
- The folder is renamed.
- epic.md `title:` is updated when --to-title was supplied.
- epic.md `epic:` field (the NN scalar) is updated when --to-nn was supplied.
- Every story file under the renamed folder gets `epic: <dst-folder>` rewritten.
- Every cross-epic depends_on across the whole tree referencing `<src-folder>/...`
is rewritten to `<dst-folder>/...`.
- Every other epic.md whose depends_on listed the old NN gets its NN updated when --to-nn.
This script does not handle NN collisions: if --to-nn is the NN of another epic,
the rename fails. Use it after renumbering the colliding epic out of the way.
Output (stdout, JSON): {"old": "<folder>", "new": "<folder>", "refs_updated": N, "path": "<abs>"}
Exit codes: 0 ok, 1 user error, 2 internal error.
"""
from __future__ import annotations
import argparse
import json
import re
import sys
from pathlib import Path
def slugify(title: str, max_len: int = 40) -> str:
s = title.lower().strip()
s = re.sub(r"[^a-z0-9]+", "-", s)
return s.strip("-")[:max_len].rstrip("-")
def yaml_quote(s: str) -> str:
return '"' + s.replace("\\", "\\\\").replace('"', '\\"') + '"'
def main() -> int:
ap = argparse.ArgumentParser(description=__doc__)
ap.add_argument("--initiative-store", required=True, type=Path)
ap.add_argument("--epic", dest="src", required=True, help="Existing folder name (e.g. 02-billing-stripe)")
ap.add_argument("--to-title", help="New title; derives a new kebab slug")
ap.add_argument("--to-nn", type=int, help="New numeric prefix; preserves existing NN if omitted")
args = ap.parse_args()
epics_dir = args.initiative_store / "epics"
src_dir = epics_dir / args.src
if not src_dir.is_dir():
print(f"epic folder not found: {src_dir}", file=sys.stderr)
return 1
m = re.match(r"^(\d+)-(.+)$", args.src)
if not m:
print(f"epic folder does not start with NN-: {args.src}", file=sys.stderr)
return 1
src_nn, src_kebab = m.group(1), m.group(2)
new_nn = f"{args.to_nn:02d}" if args.to_nn is not None else src_nn.zfill(2)
new_kebab = slugify(args.to_title) if args.to_title else src_kebab
new_folder = f"{new_nn}-{new_kebab}"
if new_folder == args.src:
print("nothing to change", file=sys.stderr)
print(json.dumps({"old": args.src, "new": new_folder, "refs_updated": 0, "path": str(src_dir)}))
return 0
dst_dir = epics_dir / new_folder
if dst_dir.exists():
print(f"target already exists: {dst_dir}", file=sys.stderr)
return 1
if args.to_nn is not None:
for ed in epics_dir.iterdir():
if not ed.is_dir() or ed.name == args.src:
continue
other_m = re.match(r"^(\d+)-", ed.name)
if other_m and other_m.group(1).zfill(2) == new_nn:
print(f"NN {new_nn} is already used by {ed.name}; renumber it first", file=sys.stderr)
return 1
src_dir.rename(dst_dir)
epic_md = dst_dir / "epic.md"
if epic_md.is_file():
text = epic_md.read_text(encoding="utf-8")
if args.to_title:
text = re.sub(r"^title:.*$", f"title: {yaml_quote(args.to_title)}", text, count=1, flags=re.MULTILINE)
if args.to_nn is not None:
text = re.sub(r"^epic:.*$", f"epic: {yaml_quote(new_nn)}", text, count=1, flags=re.MULTILINE)
epic_md.write_text(text, encoding="utf-8")
refs_updated = 0
for sf in dst_dir.glob("*.md"):
if sf.name == "epic.md":
continue
t = sf.read_text(encoding="utf-8")
new = re.sub(r"^epic:.*$", f"epic: {yaml_quote(new_folder)}", t, count=1, flags=re.MULTILINE)
if new != t:
sf.write_text(new, encoding="utf-8")
refs_updated += 1
for ed in epics_dir.iterdir():
if not ed.is_dir():
continue
for sf in ed.glob("*.md"):
if sf.name == "epic.md":
t = sf.read_text(encoding="utf-8")
new_lines: list[str] = []
changed = False
for line in t.split("\n"):
if line.startswith("depends_on:") and args.to_nn is not None:
pattern = rf'(["\s,\[]){re.escape(src_nn.zfill(2))}(["\s,\]])'
new_line = re.sub(pattern, lambda mm: mm.group(1) + new_nn + mm.group(2), line)
if new_line != line:
line = new_line
changed = True
new_lines.append(line)
if changed:
sf.write_text("\n".join(new_lines), encoding="utf-8")
refs_updated += 1
continue
t = sf.read_text(encoding="utf-8")
new_lines = []
changed = False
for line in t.split("\n"):
if line.startswith("depends_on:") and f"{args.src}/" in line:
line = line.replace(f"{args.src}/", f"{new_folder}/")
changed = True
new_lines.append(line)
if changed:
sf.write_text("\n".join(new_lines), encoding="utf-8")
refs_updated += 1
print(f"renamed {args.src} -> {new_folder}", file=sys.stderr)
print(json.dumps({"old": args.src, "new": new_folder, "refs_updated": refs_updated, "path": str(dst_dir)}))
return 0
if __name__ == "__main__":
sys.exit(main())

#!/usr/bin/env python3
"""Tests for scripts/extract_coverage.py — AC->codes parsing."""
import json
import subprocess
import sys
import tempfile
import unittest
from pathlib import Path
SCRIPTS = Path(__file__).resolve().parent.parent
INIT_EPIC = SCRIPTS / "init_epic.py"
INIT_STORY = SCRIPTS / "init_story.py"
EXTRACT = SCRIPTS / "extract_coverage.py"
def _run(script: Path, *args: str) -> subprocess.CompletedProcess[str]:
return subprocess.run([sys.executable, str(script), *args], capture_output=True, text=True, check=False)
def _set_coverage(path: Path, coverage_md: str) -> None:
text = path.read_text(encoding="utf-8")
if "## Coverage" in text:
head, _, _ = text.partition("## Coverage")
path.write_text(head + "## Coverage\n\n" + coverage_md.rstrip() + "\n", encoding="utf-8")
else:
path.write_text(text.rstrip() + "\n\n## Coverage\n\n" + coverage_md.rstrip() + "\n", encoding="utf-8")
class TestExtractCoverage(unittest.TestCase):
def test_parses_ac_to_codes_map(self) -> None:
with tempfile.TemporaryDirectory() as tmp:
store = Path(tmp)
_run(INIT_EPIC, "--initiative-store", str(store), "--epic-nn", "1", "--title", "Auth")
_run(INIT_STORY, "--initiative-store", str(store), "--epic", "01-auth", "--story-nn", "1", "--title", "Login", "--type", "feature")
sf = store / "epics" / "01-auth" / "01-login.md"
_set_coverage(sf, "- AC1: FR1, NFR3.2\n- AC2: UX-DR2\n- AC3: D1\n")
r = _run(EXTRACT, "--initiative-store", str(store))
self.assertEqual(r.returncode, 0, r.stderr)
data = json.loads(r.stdout)
self.assertEqual(len(data["stories"]), 1)
story = data["stories"][0]
self.assertTrue(story["has_coverage_section"])
self.assertEqual(story["ac_to_codes"]["AC1"], ["FR1", "NFR3.2"])
self.assertEqual(story["ac_to_codes"]["AC2"], ["UX-DR2"])
self.assertEqual(story["ac_to_codes"]["AC3"], ["D1"])
self.assertIn("FR1", data["all_codes"])
self.assertIn("UX-DR2", data["all_codes"])
def test_flags_stories_without_coverage_section(self) -> None:
with tempfile.TemporaryDirectory() as tmp:
store = Path(tmp)
_run(INIT_EPIC, "--initiative-store", str(store), "--epic-nn", "1", "--title", "Auth")
_run(INIT_STORY, "--initiative-store", str(store), "--epic", "01-auth", "--story-nn", "1", "--title", "Login", "--type", "feature")
sf = store / "epics" / "01-auth" / "01-login.md"
text = sf.read_text(encoding="utf-8")
sf.write_text(text.replace("## Coverage", "## CoverageRemoved"), encoding="utf-8")
r = _run(EXTRACT, "--initiative-store", str(store))
self.assertEqual(r.returncode, 0, r.stderr)
data = json.loads(r.stdout)
self.assertIn("01-auth/01-login", data["stories_without_coverage_section"])
def test_skips_epic_md_files(self) -> None:
with tempfile.TemporaryDirectory() as tmp:
store = Path(tmp)
_run(INIT_EPIC, "--initiative-store", str(store), "--epic-nn", "1", "--title", "Auth")
r = _run(EXTRACT, "--initiative-store", str(store))
data = json.loads(r.stdout)
self.assertEqual(data["stories"], [])
if __name__ == "__main__":
unittest.main()
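For reference, the `## Coverage` bullets these tests write follow a simple `- ACn: CODE, CODE` shape. A minimal standalone sketch of a parser for that shape (hypothetical, for illustration only; `extract_coverage.py`'s actual implementation may differ):

```python
import re

# Matches one coverage bullet, e.g. "- AC1: FR1, NFR3.2".
# "parse_coverage_lines" is an illustrative name, not the script's API.
COVERAGE_LINE = re.compile(r"^-\s*(AC\d+):\s*(.+)$")

def parse_coverage_lines(block: str) -> dict[str, list[str]]:
    """Map each ACn label to its comma-separated requirement codes."""
    out: dict[str, list[str]] = {}
    for line in block.splitlines():
        m = COVERAGE_LINE.match(line.strip())
        if m:
            out[m.group(1)] = [c.strip() for c in m.group(2).split(",")]
    return out
```

Feeding it the fixture from `test_parses_ac_to_codes_map` yields the same `ac_to_codes` map the test asserts on.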

View File

@ -0,0 +1,105 @@
#!/usr/bin/env python3
"""Tests for scripts/from_spec.py — spec validation, deterministic generation."""
import json
import subprocess
import sys
import tempfile
import unittest
from pathlib import Path
SCRIPTS = Path(__file__).resolve().parent.parent
FROM_SPEC = SCRIPTS / "from_spec.py"
def _run(*args: str) -> subprocess.CompletedProcess[str]:
return subprocess.run([sys.executable, str(FROM_SPEC), *args], capture_output=True, text=True, check=False)
BASE_SPEC = {
"title": "Demo",
"intent": "Stand up the v7 tree from a spec.",
"inventory": {
"requirements": {
"functional": [{"code": "FR1", "text": "Users can log in"}],
"non_functional": [{"code": "NFR1", "text": "p99 latency under 200ms"}],
}
},
"epics": [
{
"nn": 1, "title": "Auth", "intent": "User-facing login.",
"shared_context": "Sessions are JWT.",
"stories": [
{"nn": 1, "title": "Login form", "type": "feature",
"user_story": "As a user\nI want to log in\nSo that I can access the app",
"acceptance_criteria": ["Given valid creds When I submit Then I am signed in"],
"coverage": {"AC1": ["FR1"]}},
]
},
{
"nn": 2, "title": "Perf", "intent": "Latency tightening.",
"depends_on": [1],
"stories": [
{"nn": 1, "title": "Profile p99", "type": "task",
"acceptance_criteria": ["Given prod traffic shape When measured Then p99 < 200ms"],
"coverage": {"AC1": ["NFR1"]}},
]
}
]
}
class TestFromSpec(unittest.TestCase):
def test_generates_tree_and_passes_validation(self) -> None:
with tempfile.TemporaryDirectory() as tmp:
store = Path(tmp) / "store"
spec = Path(tmp) / "spec.json"
spec.write_text(json.dumps(BASE_SPEC), encoding="utf-8")
r = _run("--initiative-store", str(store), "--spec", str(spec))
self.assertEqual(r.returncode, 0, r.stdout + r.stderr)
data = json.loads(r.stdout)
self.assertEqual(data["epics_created"], ["01-auth", "02-perf"])
self.assertIn("01-auth/01-login-form", data["stories_created"])
self.assertEqual(data["validation"]["summary"]["errors"], 0)
self.assertEqual(data["validation"]["summary"]["coverage_missing"], [])
self.assertTrue(Path(data["inventory_path"]).is_file())
self.assertIn("Sessions are JWT.", (store / "epics/01-auth/epic.md").read_text(encoding="utf-8"))
def test_invalid_spec_fails_with_details(self) -> None:
with tempfile.TemporaryDirectory() as tmp:
store = Path(tmp) / "store"
spec = Path(tmp) / "spec.json"
spec.write_text(json.dumps({"title": "x"}), encoding="utf-8")
r = _run("--initiative-store", str(store), "--spec", str(spec))
self.assertEqual(r.returncode, 1)
data = json.loads(r.stdout)
self.assertEqual(data["error"], "invalid spec")
self.assertTrue(any("epics" in msg for msg in data["details"]))
def test_invalid_story_type_rejected(self) -> None:
with tempfile.TemporaryDirectory() as tmp:
store = Path(tmp) / "store"
spec_data = json.loads(json.dumps(BASE_SPEC))
spec_data["epics"][0]["stories"][0]["type"] = "bogus"
spec = Path(tmp) / "spec.json"
spec.write_text(json.dumps(spec_data), encoding="utf-8")
r = _run("--initiative-store", str(store), "--spec", str(spec))
self.assertEqual(r.returncode, 1)
data = json.loads(r.stdout)
self.assertTrue(any("type invalid" in msg for msg in data["details"]))
def test_coverage_strict_fails_when_missing(self) -> None:
with tempfile.TemporaryDirectory() as tmp:
store = Path(tmp) / "store"
spec_data = json.loads(json.dumps(BASE_SPEC))
spec_data["inventory"]["requirements"]["functional"].append({"code": "FR99", "text": "Uncovered"})
spec = Path(tmp) / "spec.json"
spec.write_text(json.dumps(spec_data), encoding="utf-8")
r = _run("--initiative-store", str(store), "--spec", str(spec), "--coverage-strict")
self.assertEqual(r.returncode, 1)
data = json.loads(r.stdout)
self.assertIn("FR99", data["validation"]["summary"]["coverage_missing"])
if __name__ == "__main__":
unittest.main()
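The two failure tests above pin down the error messages ("epics" required, "type invalid") without showing the validation itself. A simplified sketch of what such a spec check could look like, assuming the real `from_spec.py` validates more fields (titles, `nn` uniqueness, coverage keys) and that the allowed story types are `feature`/`task`/`bug`/`spike`:

```python
ALLOWED_TYPES = {"feature", "task", "bug", "spike"}  # assumed set

def check_spec(spec: dict) -> list[str]:
    """Collect human-readable problems; an empty list means the spec looks sane."""
    problems: list[str] = []
    if not spec.get("epics"):
        problems.append("epics: required, non-empty list")
    for ei, epic in enumerate(spec.get("epics") or [], start=1):
        for si, story in enumerate(epic.get("stories") or [], start=1):
            if story.get("type") not in ALLOWED_TYPES:
                problems.append(f"epics[{ei}].stories[{si}]: type invalid")
    return problems
```

Both `test_invalid_spec_fails_with_details` and `test_invalid_story_type_rejected` would find their expected substrings in this output.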

View File

@ -0,0 +1,128 @@
#!/usr/bin/env python3
"""Tests for scripts/parse_v6_epics.py — canonical v6, sharded detection."""
import json
import subprocess
import sys
import tempfile
import unittest
from pathlib import Path
SCRIPTS = Path(__file__).resolve().parent.parent
PARSE = SCRIPTS / "parse_v6_epics.py"
def _run(*args: str) -> subprocess.CompletedProcess[str]:
return subprocess.run([sys.executable, str(PARSE), *args], capture_output=True, text=True, check=False)
CANONICAL_V6 = """\
---
title: Billing Stripe v2
---
# Billing Stripe v2
## Requirements Inventory
- FR1: Users can subscribe to a plan
- FR2: Users can cancel a subscription
- NFR1: Checkout completes within 5 seconds
- UX-DR1: Use the StatusMessage component for confirmation
## Epic List
1. Subscription management
2. Checkout
## Epic 1: Subscription management
The user can manage their subscription.
### Story 1.1: Subscribe to a plan
As a user
I want to subscribe to a plan
So that I unlock paid features
**Acceptance Criteria:**
- Given I am logged in
- When I select a plan and submit
- Then I am subscribed and see FR1 confirmation
### Story 1.2: Cancel a subscription
As a user
I want to cancel my subscription
So that I can stop being billed
**Acceptance Criteria:**
- Given I am subscribed
- When I cancel
- Then my subscription ends at period close (FR2)
## Epic 2: Checkout
The checkout flow.
### Story 2.1: Investigate Stripe webhook ordering
**Acceptance Criteria:**
- Given Stripe webhooks
- When events arrive out of order
- Then we still reconcile correctly (NFR1)
"""
class TestParseV6(unittest.TestCase):
def test_parses_canonical_v6(self) -> None:
with tempfile.TemporaryDirectory() as tmp:
f = Path(tmp) / "epics.md"
f.write_text(CANONICAL_V6, encoding="utf-8")
r = _run("--input", str(f))
self.assertEqual(r.returncode, 0, r.stderr)
data = json.loads(r.stdout)
self.assertEqual(data["title"], "Billing Stripe v2")
self.assertFalse(data["is_sharded"])
self.assertEqual(len(data["epics"]), 2)
self.assertEqual(len(data["epics"][0]["stories"]), 2)
self.assertEqual(data["epics"][0]["stories"][0]["type"], "feature")
self.assertIn("FR1", data["epics"][0]["stories"][0]["coverage_codes"])
self.assertEqual(data["epics"][1]["stories"][0]["type"], "spike")
            codes = {req["code"] for req in data["requirements"]["functional"]}
            self.assertSetEqual(codes, {"FR1", "FR2"})
            ux_codes = {req["code"] for req in data["requirements"]["ux_design"]}
            self.assertSetEqual(ux_codes, {"UX-DR1"})
def test_classifies_bug_titles(self) -> None:
body = """\
## Epic 1: Hotfixes
### Story 1.1: Fix duplicate-charge bug
**Acceptance Criteria:**
- Given a duplicate charge
- When detected
- Then refund automatically
"""
with tempfile.TemporaryDirectory() as tmp:
f = Path(tmp) / "epics.md"
f.write_text(body, encoding="utf-8")
r = _run("--input", str(f))
data = json.loads(r.stdout)
self.assertEqual(data["epics"][0]["stories"][0]["type"], "bug")
def test_directory_input_reports_sharded(self) -> None:
with tempfile.TemporaryDirectory() as tmp:
d = Path(tmp) / "shards"
d.mkdir()
(d / "index.md").write_text("# index", encoding="utf-8")
r = _run("--input", str(d))
self.assertEqual(r.returncode, 0, r.stderr)
data = json.loads(r.stdout)
self.assertTrue(data["is_sharded"])
self.assertTrue(any("sharded" in w for w in data["warnings"]))
if __name__ == "__main__":
unittest.main()
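The parser tests expect "Investigate …" stories to classify as `spike`, "Fix … bug" titles as `bug`, and full "As a user" narratives as `feature`. A sketch of a title/body heuristic consistent with those expectations (assumed; `parse_v6_epics.py`'s actual rules may be broader):

```python
def classify_story_type(title: str, body: str = "") -> str:
    """Guess a v7 story type from a v6 story title and body."""
    t = title.lower()
    if "bug" in t or t.startswith("fix "):
        return "bug"
    if t.startswith(("investigate", "research", "explore")):
        return "spike"
    # Stories carrying a full "As a user..." narrative default to feature;
    # everything else is a plain task.
    return "feature" if "as a " in body.lower() else "task"
```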

View File

@ -0,0 +1,79 @@
#!/usr/bin/env python3
"""Tests for scripts/rename_epic.py — retitle, renumber, ref propagation."""
import json
import subprocess
import sys
import tempfile
import unittest
from pathlib import Path
SCRIPTS = Path(__file__).resolve().parent.parent
INIT_EPIC = SCRIPTS / "init_epic.py"
INIT_STORY = SCRIPTS / "init_story.py"
RENAME_EPIC = SCRIPTS / "rename_epic.py"
VALIDATE = SCRIPTS / "validate_initiative.py"
def _run(script: Path, *args: str) -> subprocess.CompletedProcess[str]:
return subprocess.run([sys.executable, str(script), *args], capture_output=True, text=True, check=False)
class TestRenameEpic(unittest.TestCase):
def _bootstrap(self, store: Path) -> None:
_run(INIT_EPIC, "--initiative-store", str(store), "--epic-nn", "1", "--title", "Auth")
_run(INIT_EPIC, "--initiative-store", str(store), "--epic-nn", "2", "--title", "Migration", "--depends-on", "1")
_run(INIT_STORY, "--initiative-store", str(store), "--epic", "01-auth", "--story-nn", "1", "--title", "Schema", "--type", "task")
_run(INIT_STORY, "--initiative-store", str(store), "--epic", "02-migration", "--story-nn", "1", "--title", "Mailer", "--type", "task", "--depends-on", "01-auth/01-schema")
def test_retitle_propagates_to_story_epic_field_and_cross_epic_deps(self) -> None:
with tempfile.TemporaryDirectory() as tmp:
store = Path(tmp)
self._bootstrap(store)
r = _run(RENAME_EPIC, "--initiative-store", str(store), "--epic", "01-auth", "--to-title", "User Authentication")
self.assertEqual(r.returncode, 0, r.stderr)
data = json.loads(r.stdout)
self.assertEqual(data["new"], "01-user-authentication")
self.assertTrue((store / "epics" / "01-user-authentication" / "epic.md").is_file())
schema = (store / "epics" / "01-user-authentication" / "01-schema.md").read_text(encoding="utf-8")
self.assertIn('epic: "01-user-authentication"', schema)
mailer = (store / "epics" / "02-migration" / "01-mailer.md").read_text(encoding="utf-8")
self.assertIn('"01-user-authentication/01-schema"', mailer)
v = _run(VALIDATE, "--initiative-store", str(store))
self.assertEqual(v.returncode, 0, v.stdout + v.stderr)
def test_renumber_updates_other_epic_depends_on(self) -> None:
with tempfile.TemporaryDirectory() as tmp:
store = Path(tmp)
self._bootstrap(store)
r = _run(RENAME_EPIC, "--initiative-store", str(store), "--epic", "01-auth", "--to-nn", "5")
self.assertEqual(r.returncode, 0, r.stderr)
self.assertTrue((store / "epics" / "05-auth").is_dir())
mig_epic = (store / "epics" / "02-migration" / "epic.md").read_text(encoding="utf-8")
self.assertIn('"05"', mig_epic)
self.assertNotIn('"01"', mig_epic.split("---", 2)[1])
auth_epic = (store / "epics" / "05-auth" / "epic.md").read_text(encoding="utf-8")
self.assertIn('epic: "05"', auth_epic)
v = _run(VALIDATE, "--initiative-store", str(store))
self.assertEqual(v.returncode, 0, v.stdout + v.stderr)
def test_collision_with_existing_nn_fails(self) -> None:
with tempfile.TemporaryDirectory() as tmp:
store = Path(tmp)
self._bootstrap(store)
r = _run(RENAME_EPIC, "--initiative-store", str(store), "--epic", "01-auth", "--to-nn", "2")
self.assertEqual(r.returncode, 1)
self.assertIn("already used", r.stderr)
def test_no_op_when_target_equals_source(self) -> None:
with tempfile.TemporaryDirectory() as tmp:
store = Path(tmp)
self._bootstrap(store)
r = _run(RENAME_EPIC, "--initiative-store", str(store), "--epic", "01-auth")
self.assertEqual(r.returncode, 0)
data = json.loads(r.stdout)
self.assertEqual(data["refs_updated"], 0)
if __name__ == "__main__":
unittest.main()
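These rename tests rely on a `NN-kebab` folder-naming convention ("User Authentication" with nn 1 becomes `01-user-authentication`). A minimal sketch of that slugging, inferred from the assertions rather than read from the scripts:

```python
import re

def epic_folder(nn: int, title: str) -> str:
    """Build the NN-kebab folder name for an epic (illustrative helper)."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return f"{nn:02d}-{slug}"
```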

View File

@ -93,6 +93,67 @@ class TestValidateInitiative(unittest.TestCase):
codes = {f["code"] for f in json.loads(r.stdout)["findings"]}
self.assertIn("story-numbering-gaps", codes)
def test_inventory_coverage_warning_then_strict_error(self) -> None:
with tempfile.TemporaryDirectory() as tmp:
store = Path(tmp)
_build_clean_tree(store)
inv = store / "inventory.json"
inv.write_text(json.dumps({
"requirements": {
"functional": [{"code": "FR1", "text": "Users can register"}, {"code": "FR99", "text": "Uncovered"}],
}
}), encoding="utf-8")
schema = store / "epics" / "01-auth" / "01-schema.md"
schema.write_text(schema.read_text(encoding="utf-8") + "\n## Coverage\n- AC1: FR1\n", encoding="utf-8")
r = _run(VALIDATE, "--initiative-store", str(store), "--inventory", str(inv))
self.assertEqual(r.returncode, 0, r.stdout + r.stderr)
data = json.loads(r.stdout)
warnings = [f for f in data["findings"] if f["code"] == "coverage-missing"]
self.assertEqual(len(warnings), 1)
self.assertEqual(warnings[0]["level"], "warning")
self.assertEqual(data["summary"]["coverage_missing"], ["FR99"])
r = _run(VALIDATE, "--initiative-store", str(store), "--inventory", str(inv), "--coverage-strict")
self.assertEqual(r.returncode, 1)
data = json.loads(r.stdout)
errors = [f for f in data["findings"] if f["code"] == "coverage-missing"]
self.assertEqual(errors[0]["level"], "error")
def test_summary_only_emits_per_story_metadata(self) -> None:
with tempfile.TemporaryDirectory() as tmp:
store = Path(tmp)
_build_clean_tree(store)
r = _run(VALIDATE, "--initiative-store", str(store), "--summary-only")
self.assertEqual(r.returncode, 0)
data = json.loads(r.stdout)
self.assertIn("summary", data)
epics = data["summary"]["epics"]
self.assertEqual(len(epics), 2)
self.assertEqual(epics[0]["story_count"], 2)
story_titles = {s["title"] for s in epics[0]["stories"]}
self.assertEqual(story_titles, {"Schema", "Register"})
self.assertIn("type", epics[0]["stories"][0])
def test_tree_emits_plain_text(self) -> None:
with tempfile.TemporaryDirectory() as tmp:
store = Path(tmp)
_build_clean_tree(store)
r = _run(VALIDATE, "--initiative-store", str(store), "--tree")
self.assertEqual(r.returncode, 0)
self.assertIn("01-auth/", r.stdout)
self.assertIn("01-schema.md", r.stdout)
self.assertIn("(task, draft)", r.stdout)
self.assertIn("└──", r.stdout)
def test_inventory_missing_file_fails_cleanly(self) -> None:
with tempfile.TemporaryDirectory() as tmp:
store = Path(tmp)
_build_clean_tree(store)
r = _run(VALIDATE, "--initiative-store", str(store), "--inventory", str(store / "nope.json"))
self.assertEqual(r.returncode, 1)
self.assertIn("not found", r.stderr)
def test_lax_skips_sizing_warnings(self) -> None:
# Sizing warnings fire when one body exceeds 3x the epic mean. With 5 normal
# stories and one massively-padded outlier, the mean stays low enough for

View File

@ -12,18 +12,24 @@ Checks (strict mode):
5. Cross-epic depends_on graph is acyclic.
6. Within-epic story numbering is sequential starting at 01.
7. Sizing sanity (warnings only): a story body >3x the epic mean is flagged.
Coverage of FR/NFR/UX-DR codes is NOT enforced here: the inventory lives in the LLM's
working memory, not on disk. The summary's `mentioned_requirements` field exposes every
code mentioned in any story body so the calling prompt can cross-check against its
inventory (see `prompts/validate.md`).
8. Coverage (only when --inventory FILE is provided): every requirement code listed
in the inventory must appear textually in at least one story body.
Output (stdout, JSON): {"findings": [...], "summary": {...}}
--summary-only emits a structured tree block instead, used by edit-mode and finalize.
--tree emits a plain-text tree to stdout, used by Stage 6.
Exit codes: 0 if no errors (warnings ok), 1 if any error finding, 2 on internal error.
Flags:
--lax skip sizing warnings; never relaxes schema or dep checks
--epic NN-kebab limit walks to a single epic folder (still resolves cross-epic refs against the whole tree)
--epic NN-kebab limit walks to a single epic folder (still resolves cross-epic refs)
--inventory FILE path to inventory.json (or .bmad-cache/inventory.json); when present,
missing requirement codes are reported. Default level is warning;
pair with --coverage-strict to escalate to error.
--coverage-strict upgrade coverage-missing findings from warning to error
--summary-only emit tree-shaped summary JSON only (no schema findings); intended for
prompts that need to see what's there without re-reading every file
--tree emit a plain-text tree (epic folders, story files, statuses) and exit 0
"""
from __future__ import annotations
@ -169,7 +175,35 @@ def _find_cycles(graph: dict[str, list[str]]) -> list[list[str]]:
return cycles
def validate(initiative_store: Path, lax: bool, only_epic: str | None) -> tuple[list[dict], dict]:
def _inventory_codes(inventory: dict) -> list[tuple[str, str]]:
"""Return a flat (code, text) list across every category in an inventory dict.
Accepts either a `requirements` map keyed by category, or a flat list of
{code, text} entries under `codes`. Tolerates missing fields.
"""
out: list[tuple[str, str]] = []
reqs = inventory.get("requirements") or {}
if isinstance(reqs, dict):
for entries in reqs.values():
if not isinstance(entries, list):
continue
for e in entries:
if isinstance(e, dict) and "code" in e:
out.append((str(e["code"]), str(e.get("text", ""))))
for legacy_key in ("codes", "additional_codes"):
for e in inventory.get(legacy_key, []) or []:
if isinstance(e, dict) and "code" in e:
out.append((str(e["code"]), str(e.get("text", ""))))
return out
def validate(
initiative_store: Path,
lax: bool,
only_epic: str | None,
inventory_codes: list[tuple[str, str]] | None = None,
coverage_strict: bool = False,
) -> tuple[list[dict], dict]:
findings: list[dict] = []
epics_dir = initiative_store / "epics"
if not epics_dir.is_dir():
@ -223,7 +257,14 @@ def validate(initiative_store: Path, lax: bool, only_epic: str | None) -> tuple[
findings.append({"level": "error", "code": "epic-deps-not-list", "message": "depends_on must be a list", "path": str(epic_md)})
deps = []
epic_meta[ed.name] = {"nn": nn, "depends_on": [str(d) for d in deps], "path": ed, "in_walk": in_walk}
epic_meta[ed.name] = {
"nn": nn,
"title": str(fm.get("title", "")),
"status": fm.get("status"),
"depends_on": [str(d) for d in deps],
"path": ed,
"in_walk": in_walk,
}
story_files = sorted(p for p in ed.iterdir() if p.is_file() and p.suffix == ".md" and p.name != "epic.md" and re.match(r"^\d+-", p.name))
seen_nns: list[int] = []
@ -264,8 +305,11 @@ def validate(initiative_store: Path, lax: bool, only_epic: str | None) -> tuple[
story_index[f"{ed.name}/{sf.stem}"] = {
"depends_on": [str(d) for d in sdeps],
"path": sf,
"basename": sf.stem,
"epic": ed.name,
"nn": snn,
"title": str(sfm.get("title", "")),
"type": sfm.get("type"),
"status": sfm.get("status"),
"body_len": len(stext),
"in_walk": in_walk,
@ -322,28 +366,117 @@ def validate(initiative_store: Path, lax: bool, only_epic: str | None) -> tuple[
"path": str(smeta["path"]),
})
summary = {
"epics": [
{"folder": name, "nn": meta["nn"], "depends_on": meta["depends_on"]}
for name, meta in epic_meta.items() if meta["in_walk"]
coverage_missing: list[str] = []
if inventory_codes is not None:
level = "error" if coverage_strict else "warning"
for code, text in inventory_codes:
if code in mentioned_codes:
continue
coverage_missing.append(code)
findings.append({
"level": level,
"code": "coverage-missing",
"message": f"requirement {code!r} ({text[:60]}...) not referenced by any story body" if text else f"requirement {code!r} not referenced by any story body",
"path": str(initiative_store / "epics"),
})
epics_summary: list[dict] = []
for name, meta in epic_meta.items():
if not meta["in_walk"]:
continue
own_stories = [s for s in story_index.values() if s["epic"] == name and s["in_walk"]]
epics_summary.append({
"folder": name,
"nn": meta["nn"],
"title": meta["title"],
"status": meta["status"],
"depends_on": meta["depends_on"],
"story_count": len(own_stories),
"story_status_counts": dict(Counter(s["status"] for s in own_stories)),
"stories": [
{
"basename": s["basename"],
"nn": f"{s['nn']:02d}",
"title": s["title"],
"type": s["type"],
"status": s["status"],
"depends_on": s["depends_on"],
"body_len": s["body_len"],
}
for s in sorted(own_stories, key=lambda s: s["nn"])
],
})
summary = {
"epics": epics_summary,
"story_count": sum(1 for s in story_index.values() if s["in_walk"]),
"story_status_counts": dict(Counter(s["status"] for s in story_index.values() if s["in_walk"])),
"story_type_counts": dict(Counter(s["type"] for s in story_index.values() if s["in_walk"])),
"errors": sum(1 for f in findings if f["level"] == "error"),
"warnings": sum(1 for f in findings if f["level"] == "warning"),
"mentioned_requirements": sorted(mentioned_codes),
"coverage_missing": sorted(coverage_missing),
}
return findings, summary
def render_tree(initiative_store: Path, summary: dict) -> str:
"""Plain-text tree for direct printing in Stage 6 / edit-mode summary."""
lines = [f"{initiative_store}/epics/"]
epics = summary.get("epics", [])
for ei, epic in enumerate(epics):
is_last_epic = ei == len(epics) - 1
epic_branch = "└── " if is_last_epic else "├── "
        lines.append(f"{epic_branch}{epic['folder']}/ (epic, {epic.get('status') or '?'})")
epic_indent = " " if is_last_epic else ""
stories = epic.get("stories", [])
for si, story in enumerate(stories):
is_last_story = si == len(stories) - 1
story_branch = "└── " if is_last_story else "├── "
            lines.append(
                f"{epic_indent}{story_branch}{story['basename']}.md "
                f"({story.get('type') or '?'}, {story.get('status') or '?'})"
            )
return "\n".join(lines)
def main() -> int:
ap = argparse.ArgumentParser(description=__doc__)
ap.add_argument("--initiative-store", required=True, type=Path)
ap.add_argument("--lax", action="store_true", help="Skip sizing warnings; never relaxes schema/dep checks")
ap.add_argument("--epic", help="Limit reporting to a single epic folder name")
ap.add_argument("--inventory", type=Path, help="inventory.json with requirement codes; enables coverage check")
ap.add_argument("--coverage-strict", action="store_true", help="Escalate coverage-missing findings from warning to error")
ap.add_argument("--summary-only", action="store_true", help="Emit summary block with full epic/story tree (no findings)")
ap.add_argument("--tree", action="store_true", help="Emit a plain-text tree to stdout and exit")
args = ap.parse_args()
findings, summary = validate(args.initiative_store, args.lax, args.epic)
inventory_codes: list[tuple[str, str]] | None = None
if args.inventory is not None:
if not args.inventory.is_file():
print(f"inventory file not found: {args.inventory}", file=sys.stderr)
return 1
try:
inventory = json.loads(args.inventory.read_text(encoding="utf-8"))
except json.JSONDecodeError as exc:
print(f"could not parse {args.inventory}: {exc}", file=sys.stderr)
return 1
inventory_codes = _inventory_codes(inventory)
findings, summary = validate(
args.initiative_store,
args.lax,
args.epic,
inventory_codes=inventory_codes,
coverage_strict=args.coverage_strict,
)
if args.tree:
print(render_tree(args.initiative_store, summary))
return 0
if args.summary_only:
print(json.dumps({"summary": summary}))
return 0
print(json.dumps({"findings": findings, "summary": summary}))
return 1 if any(f["level"] == "error" for f in findings) else 0