first version of new CES skill
parent e6cdc93b79
commit 1e7769bf89
@ -75,6 +75,7 @@ _bmad/custom/*.user.toml
.roo
.trae
.windsurf

.trunk/

# Astro / Documentation Build
@ -1,65 +1,40 @@
---
name: bmad-create-epics-and-stories
description: 'Create, edit, and validate the v7 epic-and-story tree for an initiative. Use when the user says "create the epics and stories", "add an epic", "split an epic", "refine a story", or "re-validate the initiative".'
---

# Create Epics and Stories (v7)

## Overview

This skill produces and maintains the **v7 epic-first folder tree** for an initiative — `{initiative_store}/epics/NN-kebab/epic.md` plus one file per story under each epic folder, every file carrying locked YAML front matter that doubles as the kanban tracking system. Downstream skills (`bmad-dev-story`, `bmad-code-review`, `bmad-retrospective`, future `bmad-initiative-status`) read state directly from these files.

**Acts as:** a product strategist and technical specifications writer collaborating with the user as a peer. The user owns product vision and priorities; this skill brings requirements decomposition, sizing judgment, and the v7 schema. Conversational throughout — soft gates ("ready to move on?") rather than rigid menus.

**One skill, three modes:**

- **Create** — no `epics/` tree yet. Walks intent → discovery → epic design → per-epic authoring → validate → finalize.
- **Edit** — the tree exists. Routes by user phrasing or flag to add-epic, split-epic, refine-story, re-derive-deps, or re-validate. Never re-walks intent or discovery.
- **Migrate** — a v6 monolithic `epics.md` exists but no v7 tree. Offers leave-alone, run-canonical-helper, or walk-through-manually.

**Headless surface:** `--re-validate` (alias `--headless` / `-H`) runs strict validation only and emits JSON. This is the CI invocation. All other modes are interactive.

**Owns:** front-matter schemas (`resources/`), bootstrap and validation scripts (`scripts/`), and the only writers of the epic tree. **Does not own:** `governance.md` or `initiative-context.md` authoring, `initiative_store` config plumbing, downstream status transitions beyond `draft`.

## Conventions

- Bare paths (e.g. `prompts/intent.md`) resolve from the skill root.
- `{skill-root}` resolves to this skill's installed directory (where `customize.toml` lives).
- `{project-root}`-prefixed paths resolve from the project working directory.
- `{skill-name}` resolves to the skill directory's basename.

## On Activation

### Step 1: Resolve the Workflow Block

Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow`

If the script fails, resolve the `workflow` block yourself by reading these three files in base → team → user order and applying structural merge rules: `{skill-root}/customize.toml`, `{project-root}/_bmad/custom/{skill-name}.toml`, `{project-root}/_bmad/custom/{skill-name}.user.toml`. Any missing file is skipped. Scalars override, tables deep-merge, arrays of tables keyed by `code`/`id` replace matching entries and append new ones, all other arrays append.
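The merge rules above can be sketched in Python. This is a minimal illustration, not the resolver's actual implementation — it assumes plain dicts and lists as produced by a TOML parser, and the layer contents are made up:

```python
def merge(base, override):
    """Deep-merge one customization layer over another per the stated rules."""
    if isinstance(base, dict) and isinstance(override, dict):
        merged = dict(base)
        for key, value in override.items():
            merged[key] = merge(base[key], value) if key in base else value
        return merged
    if isinstance(base, list) and isinstance(override, list):
        def entry_key(item):
            return item.get("code", item.get("id")) if isinstance(item, dict) else None
        if any(entry_key(i) is not None for i in base + override):
            # Array of tables keyed by code/id: replace matching entries, append new.
            merged = list(base)
            for item in override:
                idx = next((n for n, e in enumerate(merged)
                            if entry_key(e) == entry_key(item)), None)
                if idx is None:
                    merged.append(item)
                else:
                    merged[idx] = item
            return merged
        return base + override  # all other arrays append
    return override  # scalars: override wins

# Illustrative layers in base → team order (contents are hypothetical).
layers = [
    {"workflow": {"persistent_facts": ["base fact"], "on_complete": ""}},
    {"workflow": {"persistent_facts": ["team fact"], "on_complete": "x"}},
]
resolved = layers[0]
for layer in layers[1:]:
    resolved = merge(resolved, layer)
```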
### Step 2: Execute Prepend Steps
@ -67,27 +42,82 @@ Execute each entry in `{workflow.activation_steps_prepend}` in order before proc
### Step 3: Load Persistent Facts

Treat every entry in `{workflow.persistent_facts}` as foundational context for the whole run. Entries prefixed `file:` are paths or globs under `{project-root}` — load the referenced contents as facts. All other entries are facts verbatim.
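Fact loading can be sketched like this — a rough illustration assuming the entries come straight from `workflow.persistent_facts` and `project_root` is already resolved:

```python
import glob

def load_facts(entries, project_root):
    """Expand `file:` entries into file contents; pass literals through verbatim."""
    facts = []
    for entry in entries:
        if entry.startswith("file:"):
            pattern = entry[len("file:"):].replace("{project-root}", project_root)
            for path in sorted(glob.glob(pattern, recursive=True)):
                with open(path, encoding="utf-8") as fh:
                    facts.append(fh.read())
        else:
            facts.append(entry)  # literal fact, kept verbatim
    return facts
```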
### Step 4: Load Config

Load config from `{project-root}/_bmad/config.yaml` and `{project-root}/_bmad/config.user.yaml` (root and `bmm` section). If config is missing, let the user know `bmad-bmm-setup` can configure the module at any time, then continue with sensible defaults.

Resolve and use throughout:

- `{user_name}` — for greeting
- `{communication_language}` — for all conversation
- `{document_output_language}` — for the body content of every produced `epic.md` and story file
- `{initiative_store}` — root for the epic tree. **Resolution chain:** if `initiative_store` is set in `bmm` config, use it. Else fall back to `{planning_artifacts}` (Appendix D back-compat). Else fall back to `{output_folder}`. Pass the resolved path explicitly to every script as `--initiative-store`.
- `{planning_artifacts}` — scanned by `agents/artifact-analyzer.md` for PRD / architecture / UX / governance inputs.
- `{project_knowledge}` — scanned by `agents/artifact-analyzer.md` for project context.
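The `{initiative_store}` resolution chain reduces to a first-match lookup. A minimal sketch, assuming the `bmm` section has been parsed into a dict (key names mirror the text above; the sample config is hypothetical):

```python
def resolve_initiative_store(bmm_config):
    """Return the first configured store location in priority order."""
    for key in ("initiative_store", "planning_artifacts", "output_folder"):
        value = bmm_config.get(key)
        if value:
            return value
    raise ValueError("no initiative store configured")

cfg = {"planning_artifacts": "docs/planning", "output_folder": "out"}
store = resolve_initiative_store(cfg)  # falls back past the unset initiative_store
```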
### Step 5: Greet the User

Greet `{user_name}` in `{communication_language}`. Skip the greeting in headless mode — no conversational output should precede the JSON.

### Step 6: Execute Append Steps

Execute each entry in `{workflow.activation_steps_append}` in order.

Activation is complete. Proceed to Mode Detection.
## Stage 0: Mode Detection

Detect the operating mode before doing anything else. Filesystem state is the source of truth.

**1. Headless / re-validate surface.** If the user passed `--re-validate`, `--headless`, or `-H` (or said "re-validate" / "validate the initiative" with no edit intent), set `{mode}=headless`. Skip Stages 1–4, jump straight to `prompts/validate.md`, and emit JSON only.

**2. Mode by filesystem state:**

- If `{initiative_store}/epics/` does not exist OR exists but contains no epic folders → `{mode}=create`.
- If `{initiative_store}/epics/` contains v7 epic folders (any folder matching `NN-*` with an `epic.md` inside) → `{mode}=edit`.
- If `{initiative_store}/epics/` is absent BUT a v6 monolithic file exists at `{initiative_store}/epics.md` or `{planning_artifacts}/epics.md` → `{mode}=migrate`.

If both v7 folders and a v6 file exist, prefer `edit` and surface the v6 file in Stage 1 as a one-line note ("there's still a legacy `epics.md` here — leave it alone or delete it after you've confirmed the v7 tree").
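The filesystem probe above can be sketched as follows — a simplified illustration, assuming `store` and `planning_artifacts` are the resolved paths and that a leading two-digit folder name approximates the `NN-*` pattern:

```python
from pathlib import Path

def detect_mode(store, planning_artifacts):
    """Classify create / edit / migrate from filesystem state (prefer edit)."""
    epics = Path(store) / "epics"
    has_v7 = epics.is_dir() and any(
        p.is_dir() and p.name[:2].isdigit() and (p / "epic.md").is_file()
        for p in epics.iterdir()
    )
    has_v6 = (Path(store) / "epics.md").is_file() or \
             (Path(planning_artifacts) / "epics.md").is_file()
    if has_v7:
        return "edit"  # wins even when a legacy epics.md lingers
    if not epics.is_dir() and has_v6:
        return "migrate"
    return "create"
```

The headless surface is checked before this probe runs, so `headless` never reaches the filesystem branch.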
**3. Edit sub-mode dispatch (only when `{mode}=edit`).** Detect from the user's opening message:
|
||||||
|
|
||||||
|
| User signal | Sub-mode |
|
||||||
|
|---|---|
|
||||||
|
| "add an epic", "new epic for X" | `add-epic` |
|
||||||
|
| "split epic NN", "split the auth epic" | `split-epic` |
|
||||||
|
| "merge epics NN and MM" | `merge-epics` |
|
||||||
|
| "refine story X", "rewrite story 1.3", "fix story foo" | `refine-story` |
|
||||||
|
| "re-derive deps", "rebuild the dependency graph" | `re-derive-deps` |
|
||||||
|
| "re-validate", "check the tree" | `re-validate` |
|
||||||
|
| Anything else | ask which of the above they intend |
|
||||||
|
|
||||||
|
Set `{edit_submode}` to the matched value before routing.
|
||||||
|
|
||||||
|
**4. Route:**
|
||||||
|
|
||||||
|
- `create` → `prompts/intent.md`
|
||||||
|
- `migrate` → `prompts/intent.md` (it offers the migrate three-options branch when `{mode}=migrate`)
|
||||||
|
- `edit` → `prompts/edit-mode.md`
|
||||||
|
- `headless` → `prompts/validate.md`
|
||||||
|
|
||||||
|
Carry `{mode}` (and `{edit_submode}` when set) into the routed prompt.
|
||||||
|
|
||||||
|
## Stages

| # | Stage | Purpose | Prompt |
|---|-------|---------|--------|
| 0 | Mode Detection | Filesystem-driven create / edit / migrate / headless dispatch | SKILL.md (above) |
| 1 | Intent | Capture initiative title and primary intent; confirm scope of edit; offer migrate options | `prompts/intent.md` |
| 2 | Discovery | Fan-out artifact scan; build a working-memory requirements inventory | `prompts/discovery.md` |
| 3 | Epic Design | Collaboratively shape the epic list and cross-epic dependency graph | `prompts/epic-design.md` |
| 4 | Per-Epic Authoring | Write `epic.md` and story files for each epic, in approved order | `prompts/epic-authoring.md` |
| 5 | Validation | Strict schema, deps, coverage, and sizing checks | `prompts/validate.md` |
| 6 | Finalize | Print tree, confirm initial statuses, hand off | `prompts/finalize.md` |

Edit-mode flows are dispatched from `prompts/edit-mode.md`, which re-enters the relevant subset of stages above without re-walking 1 and 2.
## Conventions for Downstream Skills (stability commitment)

Future v7 versions of `bmad-create-story`, `bmad-dev-story`, `bmad-code-review`, `bmad-retrospective`, and `bmad-initiative-status` adopt the schemas in `resources/epic-frontmatter-schema.md` and `resources/story-frontmatter-schema.md` **verbatim**. Status transitions beyond `draft` are owned by those downstream skills — this skill only writes `draft`. The folder name `NN-kebab` is the canonical identifier; the `epic:` field exists for portability and the validator flags any drift between them.
@ -0,0 +1,79 @@
# Artifact Analyzer

You are a research analyst for an initiative-planning workflow. Your job is to scan the project's planning artifacts and project knowledge for the inputs an epic-and-story author needs, and return a structured synthesis.

## Input

You will receive:

- **Initiative intent:** A short summary of what this initiative is about.
- **Scan paths:** The values of `{planning_artifacts}` and `{project_knowledge}` from the calling skill.
- **User-provided paths:** Any specific files the user pointed to.

## Sources to look for

Scan the provided directories. The shapes that matter:

- **PRD** — `*prd*.md` (whole) or `*prd*/index.md` (sharded). For sharded docs, read `index.md` first to understand the structure, then the relevant sections only.
- **Architecture** — `*architecture*.md` (whole) or `*architecture*/index.md` (sharded). Same sharded handling.
- **UX design** — `*ux*.md` (whole) or `*ux*/index.md` (sharded). Optional but first-class when present.
- **Governance** — `governance.md` at any depth under the planning artifacts. Captures org-level constraints (compliance, sign-off rules, mandatory sections). Optional.
- **Initiative context** — `initiative-context.md` at any depth. Captures the "why this initiative exists" framing — debt items, OKRs, customer asks. Optional.
- **Project context** — `*project-context*.md` under `{project_knowledge}`. Tech-stack and conventions.

Read documents in parallel — issue all `Read` calls in a single message rather than sequentially. For very large documents (>50 pages estimated), read the table of contents and scan section headings first, then read only sections relevant to the initiative intent. Note which sections were skimmed vs read fully.
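The filename shapes above amount to a pattern-to-kind table. An illustrative sketch (not part of the agent's contract; the `kind` labels match the output schema, the matching logic is simplified):

```python
from pathlib import Path

# Pattern-to-kind table mirroring the bullets above.
PATTERNS = [
    ("prd", "*prd*"),
    ("architecture", "*architecture*"),
    ("ux", "*ux*"),
    ("governance", "governance.md"),
    ("initiative-context", "initiative-context.md"),
]

def classify(scan_root):
    """Collect whole markdown docs and sharded dirs (those with an index.md)."""
    found = []
    for kind, pattern in PATTERNS:
        for path in Path(scan_root).rglob(pattern):
            if path.suffix == ".md" or (path / "index.md").is_file():
                found.append({"kind": kind, "path": str(path)})
    return found
```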
## What to extract

For each source you find, extract everything that materially shapes epic and story decisions:

- **Functional requirements (FRs).** Numbered items in the PRD: "FR1:", "Functional Requirement 1:", or similar. Include user actions, system behaviors, business rules. Format as `FR1`, `FR2`, ... — preserve the original numbering if the source uses one, otherwise number sequentially.
- **Non-functional requirements (NFRs).** Performance, security, usability, reliability, compliance constraints. Format as `NFR1`, `NFR2`, ...
- **Additional requirements from architecture.** Infrastructure, deployment, integration, data migration, monitoring, API versioning, security implementation. **Specifically flag a starter template** if the architecture mentions one — the calling skill needs to make Epic 1 Story 1 a "set up from starter template" task.
- **UX design requirements (UX-DRs).** Treat the UX spec as first-class. Extract design tokens, reusable component proposals, accessibility requirements, responsive breakpoints, interaction patterns, browser/device targets. **Be specific** — if the spec identifies six reusable components, list all six, not "create reusable components."
- **Governance constraints.** Mandatory sections, sign-off requirements, compliance gates, scope guardrails.
- **Initiative context.** What problem this initiative solves, the debt items or business goals driving it, any deadlines or release tie-ins.
- **Project context.** Stack, conventions, codebase shape that constrains story sizing.

Ignore documents that aren't relevant. Don't waste tokens on unrelated content.

## Graceful degradation

- A tech-debt-only initiative typically has no PRD. If no PRD is found, return empty `functional_requirements` and `non_functional_requirements` lists — the calling skill accepts an explicit list of debt items / target areas instead.
- Sharded UX or architecture docs that lack an `index.md` — read the largest top-level files in the directory.
- If `governance.md` and `initiative-context.md` are both absent, return empty fields. The calling skill will note this in conversation but won't block.

## Output

Return ONLY the following JSON object. No preamble, no commentary.

```json
{
  "documents_found": [
    {"path": "<file path>", "kind": "prd|architecture|ux|governance|initiative-context|project-context|other", "relevance": "one-line summary"}
  ],
  "functional_requirements": [
    {"code": "FR1", "text": "<requirement statement>"}
  ],
  "non_functional_requirements": [
    {"code": "NFR1", "text": "<requirement statement>"}
  ],
  "additional_requirements": [
    "<bullet — technical/architecture-derived requirement>"
  ],
  "ux_design_requirements": [
    {"code": "UX-DR1", "text": "<actionable design requirement, specific enough for one story>"}
  ],
  "starter_template_note": "<one-line description of the starter template, or null>",
  "governance_constraints": [
    "<bullet — constraint or sign-off requirement>"
  ],
  "initiative_context_summary": "<2-4 sentences summarizing why this initiative exists, or null>",
  "project_context_summary": "<2-4 sentences summarizing stack/conventions/codebase shape, or null>",
  "skimmed_sections": [
    {"path": "<file>", "sections_skimmed": ["<section name>"]}
  ]
}
```

Lists may be empty. `starter_template_note`, `initiative_context_summary`, and `project_context_summary` are `null` when the source isn't present.
@ -0,0 +1,44 @@
# Coverage Auditor

You are a coverage auditor for an epic-and-story tree. Your job is to determine which initiative-level requirements (FRs, NFRs, UX-DRs) are referenced by at least one story's acceptance-criteria coverage mapping, and which are not.

## Input

You will receive:

- **Requirements inventory** — a list of requirement codes with their text, e.g. `FR1: Users can register with email`, `NFR3.2: Password policy enforces ...`, `UX-DR2: Implement the StatusMessage component ...`.
- **Tree path** — `{initiative_store}/epics/`. Read every `*.md` file under each epic folder (skip `epic.md`).
- **Mentioned-codes hint** — the list `validate_initiative.py` already extracted via regex (the codes that appear textually in any story body). Use this as a starting point: if a code is in this hint, do a quick read to confirm it's in a Coverage section (not a passing mention), then mark covered. If a code is NOT in the hint, do not assume it's missing — fuzzy matches still need a check.

## Process

1. **Read every story file in parallel.** Issue all `Read` calls in a single message.
2. **Locate each story's `## Coverage` section** (skip stories that lack one — flag them as "no-coverage-section").
3. **For each requirement in the inventory:**
   - **Exact match:** the code appears in any story's Coverage section → covered.
   - **Fuzzy match:** the code does not appear textually, but a story's Coverage line for some AC describes the same capability or constraint in prose ("AC1 → password-policy enforcement" matching `NFR3.2: Password policy enforces ...`) → covered, with a note.
   - **No match:** uncovered.
4. Be conservative on fuzzy matches. If you're not >70% confident the prose describes the same requirement, mark it uncovered. False negatives cost a question; false positives ship a gap.
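The exact-match half of step 3 is mechanical. A minimal sketch, assuming codes look like `FR1` / `NFR3.2` / `UX-DR2` and the Coverage section is a second-level heading (the sample story text is hypothetical):

```python
import re

# Requirement-code shape assumed by this sketch.
CODE_RE = re.compile(r"\b(?:FR|NFR|UX-DR)\d+(?:\.\d+)*\b")

def coverage_codes(story_text):
    """Return requirement codes mentioned inside the '## Coverage' section only."""
    in_coverage = False
    found = set()
    for line in story_text.splitlines():
        if line.startswith("## "):
            in_coverage = line.strip().lower() == "## coverage"
            continue
        if in_coverage:
            found.update(CODE_RE.findall(line))
    return found

story = (
    "# Story\n\n## Coverage\n\n- AC1 → FR1\n- AC2 → NFR3.2\n\n"
    "## Notes\nFR9 mentioned in passing.\n"
)
codes = coverage_codes(story)  # FR9 is ignored: it sits outside Coverage
```

Fuzzy matching stays a judgment call for the auditor; only the textual pass is automatable like this.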
## Output

Return ONLY the following JSON object. No preamble.

```json
{
  "covered": [
    {"code": "FR1", "stories": ["01-billing-stripe/02-register-with-email"], "match": "exact"}
  ],
  "uncovered": [
    {"code": "FR7", "text": "<requirement text>", "reason": "no story Coverage section references this code or its capability"}
  ],
  "fuzzy_matches": [
    {"code": "NFR3.2", "story": "01-billing-stripe/02-register-with-email", "ac": "AC3", "note": "'password policy' in coverage line is read as a fuzzy match for NFR3.2"}
  ],
  "stories_without_coverage_section": [
    "<epic>/<basename>"
  ]
}
```

The calling skill (Stage 5) decides what to do with each list — typically: surface uncovered conversationally and offer to drop into Stage 4 to add a story or extend an AC; mention fuzzy matches for confirmation; flag missing coverage sections as authoring gaps to fix.
@ -1,7 +1,8 @@
# DO NOT EDIT -- overwritten on every update.
#
# Workflow customization surface for bmad-create-epics-and-stories.
# Team overrides: {project-root}/_bmad/custom/bmad-create-epics-and-stories.toml
# Personal overrides: {project-root}/_bmad/custom/bmad-create-epics-and-stories.user.toml

[workflow]
@ -14,19 +15,19 @@
activation_steps_prepend = []

# Steps to run after greet but before Mode Detection.
# Overrides append. Use for context-heavy setup that should happen
# once the user has been acknowledged.

activation_steps_append = []

# Persistent facts the workflow keeps in mind for the whole run
# (standards, sizing thresholds, governance constraints).
# Distinct from the runtime memory sidecar -- these are static context
# loaded on activation. Overrides append.
#
# Each entry is either:
# - a literal sentence, e.g. "Stories must reference at least one FR or UX-DR."
# - a file reference prefixed with `file:`, e.g. "file:{project-root}/docs/standards.md"
#   (glob patterns are supported; the file's contents are loaded and treated as facts).
@ -34,8 +35,8 @@ persistent_facts = [
    "file:{project-root}/**/project-context.md",
]

# Scalar: executed when Stage 6 (Finalize) completes -- after the tree is
# printed, statuses confirmed, and bmad-help has been invoked. Override
# wins. Leave empty for no custom post-completion behavior.

on_complete = ""
@ -0,0 +1,38 @@
**Language:** Use `{communication_language}` for all output.
**Output Language:** Use `{document_output_language}` for documents.
**Paths:** Bare paths (e.g. `agents/artifact-analyzer.md`) resolve from the skill root.

# Stage 2: Discovery

**Goal:** Build a complete-enough requirements inventory in working memory — every FR, NFR, additional architecture-derived requirement, and UX-DR that the epic-and-story tree will need to cover. **Do not write this to a file.** v7 has no monolithic place for it; the inventory gets distributed into per-epic `epic.md` files in Stage 4.

## Subagent fan-out

Launch one `agents/artifact-analyzer.md` subagent. Pass it: the initiative intent from Stage 1, `{planning_artifacts}` and `{project_knowledge}` as scan paths, and any specific paths the user pointed to in Stage 1.

The subagent returns structured JSON — see its file for the contract. Hold the returned object in working memory; you will reference its fields directly in Stage 3 (for epic shaping) and Stage 4 (for AC-to-requirement coverage mapping).

## Graceful degradation

- **Subagents unavailable.** Read the most relevant 1–2 documents in the main context (PRD if present, then architecture). For very large docs, read TOC and section headings first; full-read only sections the initiative intent makes relevant. Note which sections you skimmed.
- **No PRD.** If the initiative is tech-debt-heavy or task-heavy and no PRD exists, do not block. Ask the user for an explicit list of debt items, target areas, or research questions, and use that list as the inventory in place of FRs. Format the list with synthetic codes (`D1`, `D2`, ... or `R1`, `R2`, ... for research questions) so Stage 4's coverage mapping has something to reference.
- **No UX doc.** Tech-only initiatives may have none. Empty `ux_design_requirements` is fine.
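Assigning synthetic codes to a user-supplied list is trivial but worth pinning down, since Stage 4's coverage mapping depends on the codes being stable. A sketch (the debt items are made up):

```python
def assign_codes(items, prefix="D"):
    """Number a plain list of debt items (or R-prefixed research questions)."""
    return [{"code": f"{prefix}{n}", "text": text}
            for n, text in enumerate(items, start=1)]

debt = ["Remove legacy auth shim", "Upgrade ORM to v3"]
inventory = assign_codes(debt)  # D1, D2, ... in the order the user gave them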
## Synthesis

When the subagent returns (or inline scanning completes):

1. **Merge with what the user told you in Stage 1.** Volunteered FRs or design ideas are part of the inventory.
2. **Hold the merged inventory in working memory** as a single conceptual list with three sections: requirements (FRs + debt items), constraints (NFRs + governance), and UX-DRs.
3. **Note the starter-template flag**, if present — Stage 4's first epic will need a setup story.
4. **Identify gaps.** Anything the inventory doesn't cover that you'd expect for an initiative of this type? (Auth flows in a product without auth requirements? Migration steps without data-migration requirements?)

## Present a brief summary

Tell the user in 4–8 lines: how many FRs, NFRs, UX-DRs were extracted, the starter-template note if any, governance constraints if any, and any gaps you noticed. Do not dump the full inventory — they have the source documents.

Ask: "Anything missing or wrong here, or shall we move on to designing the epic list?" Soft gate.

## Stage Complete

When the user confirms (or stays silent after the soft prompt), route to `prompts/epic-design.md`. The inventory remains in working memory throughout the rest of the workflow.
@ -0,0 +1,68 @@
**Language:** Use `{communication_language}` for all output.
**Output Language:** Use `{document_output_language}` for documents.
**Paths:** Bare paths (e.g. `scripts/move_story.py`) resolve from the skill root.

# Edit-Mode Dispatch

You arrive here when Stage 0 detected `{mode}=edit` (the v7 tree at `{initiative_store}/epics/` already has content). Stage 0 also classified `{edit_submode}` from the user's opening message; if it's still ambiguous, ask which of the flows below they want.

**Principle:** never re-walk Stages 1 (intent) and 2 (discovery). Never re-prompt for things visible in existing files. Read the relevant files and ask only what's actually needed.

## add-epic

The user wants a new epic.

1. **Mini Stage 1.** Ask only about the new epic: title, intent, expected story-type theme. Skip everything else.
2. **Mini Stage 3.** With the existing epic list visible (read every `epic.md` for title and `depends_on`), discuss where the new epic fits — its NN (next available), its `depends_on`, and how it relates to the existing graph. Validate no cycle.
3. **Mini Stage 4.** Route to `prompts/epic-authoring.md` for the new epic only. Step 1 (`init_epic.py`) creates the folder; steps 2–6 author it normally. Other epics are not touched.
4. **Strict validation.** Route to `prompts/validate.md` strict. Tree-wide check ensures the addition didn't break anything.
## split-epic
|
||||||
|
|
||||||
|
The user wants to split one existing epic into two (or more).
|
||||||
|
|
||||||
|
1. **Mini Stage 3 — design the split.** Read the target epic's `epic.md` and every story file under it. Discuss with the user:
|
||||||
|
- Where the seam falls (which stories belong in each post-split epic).
|
||||||
|
- The new epic's NN (next available), title, intent, and `depends_on` — typically the new sibling depends on the original where the split is downstream.
|
||||||
|
- Whether any existing stories should be split themselves (rare; usually the seam falls cleanly between stories).
|
||||||
|
2. **Mini Stage 4 — execute the split.**
|
||||||
|
- Run `init_epic.py` for the new epic folder (the one receiving migrated stories).
|
||||||
|
- For each story moving to the new epic, run `scripts/move_story.py --from <old-epic>/<basename> --to-epic <new-epic> [--new-nn N]`. The script rewrites the moved story's `epic:` field, rewrites within-epic depends_on entries that pointed at the moved story (turning bare basenames into cross-epic refs), and updates cross-epic refs across the whole tree. Use a fresh NN sequence in the destination starting at 01 — pass `--new-nn` to renumber.
|
||||||
|
- Re-author each affected `epic.md` body so its Goal, Shared Context, and Story Sequence reflect the post-split shape.
|
||||||
|
- Renumber any gaps left behind in the source epic with `scripts/rename_story.py --to-nn`.
|
||||||
|
3. **Strict validation.** Route to `prompts/validate.md` strict.
|
||||||
|
|
||||||
|
## merge-epics
|
||||||
|
|
||||||
|
The user wants to merge two epics into one.
|
||||||
|
|
||||||
|
1. **Mini Stage 3 — design the merge.** Decide which epic survives. Discuss how the surviving `depends_on` collapses (union of both, minus any that becomes self-referential).
|
||||||
|
2. **Mini Stage 4 — execute.**
|
||||||
|
- For each story in the disappearing epic, run `scripts/move_story.py --from <gone-epic>/<basename> --to-epic <surviving-epic> --new-nn <next-N>` to land it after the existing surviving-epic stories.
|
||||||
|
- Update the surviving `epic.md` body — consolidate Goal, Shared Context, Story Sequence.
|
||||||
|
- Delete the now-empty `gone-epic` folder.
|
||||||
|
3. **Strict validation.** Route to `prompts/validate.md` strict.
|
||||||
|
|
||||||
|
## refine-story
|
||||||
|
|
||||||
|
The user wants to fix one specific story.
|
||||||
|
|
||||||
|
1. Read the story file and its enclosing `epic.md` for context.
|
||||||
|
2. **Targeted edit** in `prompts/epic-authoring.md` step 5 only — fill or rewrite ACs, technical notes, coverage. If the story title changed, run `scripts/rename_story.py --to-title "<new title>"` first; it renames the file, updates the `title:` front-matter, and rewrites every depends_on reference across the tree. If the NN changed, also pass `--to-nn`.
|
||||||
|
3. **Narrow validation.** Route to `prompts/validate.md` strict (the script is fast even on whole trees; no need to scope unless the tree is huge).
|
||||||
|
|
||||||
|
## re-derive-deps
|
||||||
|
|
||||||
|
The user wants the dependency graph rebuilt — typically because epics or stories were added/removed by hand and the depends_on lists are stale.
|
||||||
|
|
||||||
|
1. **Cross-epic** — Mini Stage 3, walking each `epic.md` and discussing whether its current `depends_on` reflects the actual sequencing. Edit the `depends_on:` line in each affected `epic.md` directly.
|
||||||
|
2. **Within-epic** — Mini Stage 4, walking each epic's stories and confirming each story's `depends_on` reflects what it actually relies on. Edit the `depends_on:` line in each story file directly.
|
||||||
|
3. **Strict validation.** Route to `prompts/validate.md` strict — the cycle check and dep resolution are the point.
|
||||||
|
|
||||||
|
## re-validate
|
||||||
|
|
||||||
|
Just run validation. Route directly to `prompts/validate.md` strict. (When the user invoked the skill with `--re-validate` / `--headless` / `-H`, Stage 0 already routed straight to `prompts/validate.md` and never touched this file.)
|
||||||
|
|
||||||
|
## Stage Complete
|
||||||
|
|
||||||
|
After any flow above, the routed `prompts/validate.md` run becomes the terminal step. Stage 6 (Finalize) is **not re-run** in edit mode — the user already has the tree; there's no fresh hand-off to make. After validation passes, summarize what changed in 1–3 lines and exit.
|
||||||
|
|
@ -0,0 +1,108 @@

**Language:** Use `{communication_language}` for all output.

**Output Language:** Use `{document_output_language}` for documents.

**Paths:** Bare paths (e.g. `scripts/init_epic.py`) resolve from the skill root.

# Stage 4: Per-Epic Authoring

**Goal:** Write `epic.md` and the story files for every approved epic, in order, conversationally. This is the **only stage that writes files**. Every write goes through `scripts/init_epic.py` or `scripts/init_story.py` so paths and front matter are derived consistently.

Load `resources/sizing-heuristics.md` as a fact (if not already loaded in Stage 3). Optionally load `resources/examples/epic-feature-example.md` and `resources/examples/epic-techdebt-example.md` as shape primers if you want a concrete reference for body density and the Coverage section format.

## Per-epic loop

For each approved epic, in order:

### 1. Bootstrap the epic folder

Run:

```
python3 scripts/init_epic.py \
  --initiative-store {initiative_store} \
  --epic-nn <NN> \
  --title "<title>" \
  --depends-on <comma-separated NNs from Stage 3>
```

The script creates `{initiative_store}/epics/NN-kebab/epic.md` with locked front matter and a body skeleton. Take its JSON output (`epic`, `epic_nn`, `path`) — the `epic` field is the canonical folder name you pass to every subsequent `init_story.py` call.
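Reading the hand-off back mechanically is trivial; a minimal sketch, assuming the script prints a single JSON object to stdout (the example output values below are hypothetical, not real script output):

```python
import json

def canonical_epic_folder(script_stdout: str) -> str:
    """Read the canonical folder name back from init_epic.py's JSON output."""
    out = json.loads(script_stdout)
    # The `epic` field is the folder name passed to every init_story.py call.
    return out["epic"]

# Hypothetical output shape, for illustration only:
stdout = '{"epic": "01-user-authentication", "epic_nn": "01", "path": "epics/01-user-authentication/epic.md"}'
print(canonical_epic_folder(stdout))  # → 01-user-authentication
```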

**Never compose the folder name yourself in prose.** Always read it back from the script's JSON output.

### 2. Fill the epic body conversationally

Read the file the script just wrote. The skeleton has four sections — fill them by working with the user:

- **Goal.** One paragraph. The user value (or measurable engineering improvement) this epic delivers as a whole.
- **Shared Context.** Architectural decisions, constraints, and integration points that apply to every story. This is the cache `bmad-dev-story` and downstream skills read instead of re-deriving context per story. Keep it tight; if a fact would change story-to-story, it doesn't go here.
- **Story Sequence.** Brief notes on inter-story flow within this epic — what unlocks what.
- **References.** Links into the PRD / architecture / UX sections most load-bearing for this epic. Anchored paths where possible.

Edit the file in place when the user is satisfied with each section's content.

### 3. Decompose into stories

Discuss the story breakdown with the user. Apply the sizing heuristics:

- **One AI session, verifiable end-state, bounded blast radius.** If you can't picture a single dev agent finishing the story without context exhaustion, split it.
- **Vertical slices preferred.** Horizontal slices are acceptable when the epic's Shared Context explicitly justifies the seam.
- **Story types.** `feature` for user-visible capability; `task` for setup or refactor (no user-story stanza by default); `bug` for a defect (optional user-story); `spike` for research (optional user-story).

Confirm the story list with the user before any write. The list is: ordered NN, story type, title, and within-epic or cross-epic `depends_on`.

### 4. Bootstrap each story file

For each story in the approved list:

```
python3 scripts/init_story.py \
  --initiative-store {initiative_store} \
  --epic <epic-folder-from-step-1> \
  --story-nn <NN> \
  --title "<title>" \
  --type <feature|bug|task|spike> \
  --depends-on <comma-separated refs>
```

`--depends-on` entries are bare basenames for within-epic refs (e.g. `01-define-schema`) or `<epic-folder>/<basename>` for cross-epic refs (e.g. `02-auth-migration/04-session-management`).
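The two ref shapes normalize to the same thing; a minimal sketch of the resolution rule (function name is illustrative, not part of the skill's scripts):

```python
def resolve_dep(ref: str, current_epic: str) -> tuple[str, str]:
    """Normalize a depends_on entry to (epic_folder, story_basename).

    Bare basenames are within-epic refs; `<epic>/<basename>` is cross-epic.
    """
    if "/" in ref:
        epic, basename = ref.split("/", 1)
        return epic, basename
    return current_epic, ref

print(resolve_dep("01-define-schema", "03-reporting"))
# → ('03-reporting', '01-define-schema')
print(resolve_dep("02-auth-migration/04-session-management", "03-reporting"))
# → ('02-auth-migration', '04-session-management')
```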

### 5. Fill the story body conversationally

Read the file the script just wrote. The skeleton matches `resources/story-md-template.md`:

- **User-story stanza** — present for `feature` (required), optional for `bug`/`spike`, absent for `task`. Fill or remove as appropriate.
- **Acceptance Criteria** — Given/When/Then form. Each AC stands alone, specific and testable. Cover the happy path, key edge cases, and at least one failure mode where applicable. Aim for ≤6 ACs; if you need more, the story may be over-sized — pause and consider splitting.
- **Technical Notes** — implementation hints, file paths, API contracts. Not a full design.
- **Coverage** — one line per AC mapping to the FR / NFR / UX-DR / debt-item codes from the Stage 2 inventory. This is what Stage 5 reads to verify nothing was dropped.
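Stage 5's validator is described elsewhere in this skill as pulling these codes out of story bodies via regex. A minimal sketch of that kind of extraction (the exact pattern is an assumption, not the script's actual regex):

```python
import re

REQ_CODE = re.compile(r"\b(?:FR|NFR|UX-DR)\d+\b")

def mentioned_requirements(body: str) -> set[str]:
    """Collect the deduplicated FR / NFR / UX-DR codes a story body mentions."""
    return set(REQ_CODE.findall(body))

print(sorted(mentioned_requirements("AC1 covers FR3 and NFR2; AC2 covers FR3 and UX-DR1.")))
# → ['FR3', 'NFR2', 'UX-DR1']
```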

### 6. Per-epic non-strict validation

After all stories for the current epic are drafted, run:

```
python3 scripts/validate_initiative.py --initiative-store {initiative_store} --lax --epic <epic-folder>
```

`--lax` skips sizing warnings (you're still mid-flow). Schema and dep checks always run. If errors come back, fix them before moving to the next epic. Common issues at this stage: a within-epic dep typo, or an `epic:` field that no longer matches its folder (the script catches both).

### 7. User checkpoint

Before starting the next epic, confirm with the user that this epic is complete. The next epic does not begin until the current one is approved.

## After all epics are authored

Route to `prompts/validate.md` for full-tree strict validation.

## Edit-mode entry points

When this stage is entered from `prompts/edit-mode.md`:

- **add-epic:** run steps 1–6 for the single new epic, then route to `prompts/validate.md` strict.
- **split-epic / merge-epics:** for each affected epic, run step 1 (if a new folder is needed), step 2 (re-author the `epic.md`), and use `scripts/move_story.py` for any story file that changes its epic. The skill never copy-pastes a story across folders — always move.
- **refine-story:** narrow to step 5 for the single story file. If the story title changed, use `scripts/rename_story.py` first to rename and update sibling refs. Skip steps 1–4.
- **re-derive-deps:** within-epic dep updates only here; the cross-epic dep updates were already settled in Stage 3. Walk the story files in each affected epic and edit the `depends_on` line directly.

After any edit-mode flow finishes, route to `prompts/validate.md` strict.

## Stage Complete

Stage 4 ends when every approved epic has its `epic.md` and all its story files written, the per-epic non-strict validation passes for each, and the user has confirmed completion of the last epic.

@ -0,0 +1,55 @@

**Language:** Use `{communication_language}` for all output.

**Output Language:** Use `{document_output_language}` for documents.

**Paths:** Bare paths (e.g. `resources/sizing-heuristics.md`) resolve from the skill root.

# Stage 3: Epic Design

**Goal:** Produce an approved epic list — for each epic an NN, a kebab title, an intent statement, a `depends_on` list (cross-epic), and a default story-type theme. Validate the cross-epic graph for cycles before leaving the stage. No files are written here; the list lives in working memory until Stage 4 calls `init_epic.py`.

Load `resources/sizing-heuristics.md` as a fact for the rest of the workflow — it shapes how stories will be sized in Stage 4 and informs how big each epic should be.

## Principles to apply (carry into the conversation, do not lecture)

- **User value first.** Each epic must enable users to accomplish something meaningful, or — for tech-debt epics — leave a measurably better engineering state. Epics organized by technical layers ("database setup," "API endpoints," "frontend components") are wrong; reshape them.
- **Standalone within the dependency graph.** Each epic delivers complete functionality for its domain. Epic 2 must not require Epic 3 to function. Epic 3 may build on 1 and 2 but must stand alone.
- **Dependency-free within an epic.** Stories within an epic must not depend on later stories in the same epic. (The validator enforces this in Stage 5 via `depends_on` resolution.)
- **File-churn check.** If multiple proposed epics repeatedly modify the same core files, ask whether they should consolidate into one epic with ordered stories. Distinguish meaningful overlap (same component end-to-end) from incidental sharing. Consolidate when the split provides no risk-mitigation or feedback-loop value.
- **Implementation efficiency over taxonomy.** When the outcome is certain and direction changes between epics are unlikely, prefer fewer, larger epics. Split into more epics when there's a genuine risk boundary or where early feedback could change direction.
- **Starter template (if Stage 2 flagged one).** Epic 1's first story must be "set up the project from the starter template." Plan for it now.

## The conversation

Walk through these collaboratively, not as a script:

1. **Identify user-value themes.** From the inventory: where are the natural groupings? Which FRs deliver cohesive user outcomes together? Which UX-DRs cluster around the same component or flow?
2. **Propose an epic structure.** For each candidate epic, share the title, the user outcome, the FR / UX-DR coverage, and any technical or UX considerations. Do this in dialog, not as a finished list.
3. **Pressure-test for file churn.** As you finalize, mentally trace which files each epic touches. Flag overlap. Ask the user whether to consolidate.
4. **Sequence and depends_on.** Establish the cross-epic graph. Capture each epic's `depends_on` as a list of prior epic NNs.
5. **Coverage check.** Walk every FR / NFR / UX-DR / debt item from the inventory and confirm it's allocated to an epic. NFRs and UX-DRs may cross-cut; pick the epic where they most naturally land, or note them as cross-cutting.

## Cycle check before exit

Before leaving the stage, mentally compute the cross-epic dependency graph. If you find any cycle (Epic A depends on B, which depends on A, directly or transitively), surface it and have the user resolve it before proceeding. Stage 5 will catch cycles too, but catching them now avoids re-walking Stage 4.

## Optional deeper review

If the user wants to pressure-test the epic shape, they may invoke `bmad-advanced-elicitation` (deeper critique methods) or `bmad-party-mode` (multi-agent perspectives) explicitly. **Do not present these as a menu** — only invoke them when the user asks.

## Soft gate

"Does this epic list capture the initiative? Anything missing, anything overlapping that should be consolidated?" When the user is satisfied, the list is approved and Stage 3 is complete.

## Edit-mode flows

When this stage is entered from `prompts/edit-mode.md`:

- **add-epic:** ask only about the new epic. Existing epic NNs are fixed; the new one gets the next-available NN. Capture title, intent, `depends_on`, and theme. Validate that the new edges don't introduce a cycle.
- **split-epic:** discuss how to split the target epic. Define the new epic NNs, titles, intents, and `depends_on` edges (typically the new sibling depends on the original when the split is downstream). Decide which existing stories move (Stage 4 will use `move_story.py`) and which stay.
- **merge-epics:** decide which epic survives. Define how the merged `depends_on` collapses. Plan the story renumbering (Stage 4 will use `move_story.py` for the moves, then `rename_story.py` for any renumber).
- **re-derive-deps:** with the existing epic list, walk the cross-epic graph from scratch and update `depends_on` lists where the user agrees. (Within-epic dep updates happen in Stage 4.)

After the relevant edit-mode flow finishes here, route to `prompts/epic-authoring.md` with the focused scope.

## Stage Complete

When the epic list is approved (and the cycle check passes), route to `prompts/epic-authoring.md`. Carry the approved list — for each epic: NN, kebab title, intent, `depends_on`, and story-type theme — into Stage 4.

@ -0,0 +1,58 @@

**Language:** Use `{communication_language}` for all output.

**Output Language:** Use `{document_output_language}` for documents.

**Paths:** Bare paths (e.g. `scripts/validate_initiative.py`) resolve from the skill root.

# Stage 6: Finalize

**Goal:** Hand off cleanly. Show the user what was produced, confirm initial statuses, point them at the next workflow, and run any user-defined post-completion hook.

## Step 1: Print the produced tree

Walk `{initiative_store}/epics/` and present a concise tree — epic folders in order, story files under each. For each line, include the file's `status` from front matter. Something like:

```
{initiative_store}/epics/
├── 01-user-authentication/ (epic, draft)
│   ├── 01-define-user-and-session-models.md (task, draft)
│   ├── 02-register-with-email.md (feature, draft)
│   ├── 03-sign-in-with-email.md (feature, draft)
│   └── 04-password-reset-via-email.md (feature, draft)
└── 02-billing-stripe/ (epic, draft)
    ├── 01-customer-and-subscription-models.md (task, draft)
    └── 02-checkout-session.md (feature, draft)
```

Numbers, types, and statuses come from each file's front matter — re-read the files if you don't already have them in working memory.
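Pulling `type` and `status` back out of a file is a flat key-scan over the front-matter block; a minimal sketch, assuming simple one-line `key: value` pairs between `---` markers (the helper name is illustrative):

```python
def front_matter_fields(text: str, keys: tuple[str, ...] = ("type", "status")) -> dict[str, str]:
    """Pull simple `key: value` fields from a file's YAML front-matter block."""
    fields: dict[str, str] = {}
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return fields
    for line in lines[1:]:
        if line.strip() == "---":  # closing marker ends the front matter
            break
        key, sep, value = line.partition(":")
        if sep and key.strip() in keys:
            fields[key.strip()] = value.strip()
    return fields

story = "---\ntitle: register-with-email\ntype: feature\nstatus: draft\n---\n\n## Acceptance Criteria\n"
print(front_matter_fields(story))  # → {'type': 'feature', 'status': 'draft'}
```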

## Step 2: Confirm initial statuses

Every story and epic starts at `draft`. Promotion is owned by downstream skills (`bmad-dev-story` etc.) — this skill never auto-promotes. Two normal next steps from here:

- **Leave everything `draft`.** The user iterates further before any dev work begins. Most common.
- **Promote a small first batch to `ready`.** If the user wants immediate dev-story handoff for the first epic's first story (or two), edit those files' `status: draft` → `status: ready` directly. Do not touch the others.

Ask: "Want to leave these all as draft, or promote a small first batch to ready for immediate dev handoff?"

## Step 3: Point forward

Tell the user what they have and what comes next:

- **Per-story dev handoff:** `bmad-dev-story` reads any story by path and implements it.
- **Epic-context cache:** `bmad-quick-dev`, once updated for v7, will read the `epic.md` Shared Context block instead of re-deriving it per story.
- **Status rollup:** the future `bmad-initiative-status` reads `status:` from every file to summarize the initiative.

Then invoke `bmad-help` so the user sees the broader BMad surface available to them.

## Step 4: Run on_complete

Run:

```
python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete
```

If the resolved `workflow.on_complete` value is non-empty, follow it as the final terminal instruction before exiting.

## Stage Complete

The workflow is done. If the user asks for further changes, route back through `prompts/edit-mode.md` (Stage 0's mode detection now sees `edit` because the v7 tree exists). Otherwise exit.

@ -0,0 +1,44 @@

**Language:** Use `{communication_language}` for all output.

**Output Language:** Use `{document_output_language}` for documents.

**Paths:** Bare paths (e.g. `prompts/discovery.md`) resolve from the skill root.

# Stage 1: Intent

**Goal:** Know what the initiative is about — well enough to make discovery (Stage 2) targeted, and well enough to suppress questions whose answers are already on disk.

This stage runs in **create** and **migrate** modes only. Edit mode skips Stage 1 entirely (see `prompts/edit-mode.md`).

## Create mode

You need three things to leave this stage:

1. **Initiative title** — a short kebab-friendly handle (e.g. "billing-stripe-v2"). The user may give a long sentence; ask for a tighter handle once you can summarize.
2. **Primary intent** — one or two sentences. What this initiative is for and roughly what done looks like. This is the relevance filter for Stage 2's artifact scan and Stage 3's epic shaping.
3. **Expected story-type mix** — feature-heavy, task/bug-heavy (tech debt), spike-heavy (research), or mixed. This tells you how to handle a missing PRD gracefully in Stage 2.
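The kebab handle in item 1 can be derived mechanically from whatever the user says; a minimal sketch (a suggestion to offer, not a substitute for the user's own choice of handle):

```python
import re

def kebab_handle(text: str) -> str:
    """Reduce a free-form title to a kebab-friendly handle."""
    text = re.sub(r"[^a-z0-9]+", "-", text.lower())  # collapse non-alphanumerics
    return text.strip("-")

print(kebab_handle("Billing: Stripe v2"))  # → billing-stripe-v2
```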

**Suppress questions where the answer is already on disk.** If `governance.md` or `initiative-context.md` were loaded into facts, read them first. If they cover scope, owner, deadlines, or constraints, do not re-ask. Note absent files with a one-line pointer ("there's no `initiative-context.md` here — fine for solo work; mention it if you'd like to add one") and move on.

**Capture, don't interrupt.** If the user volunteers technical details, FRs, or epic ideas during this stage, capture them silently into your working memory. Do not redirect — they will be useful in Stages 2–4.

When all three items are settled, route to `prompts/discovery.md` with `{mode}=create` and the initiative title and intent in your working memory.

## Migrate mode

A v6 monolithic file exists at `{initiative_store}/epics.md` or `{planning_artifacts}/epics.md`. Surface it in one short sentence — the path and roughly what's in it (epic count from a quick scan) — then offer three options:

1. **Leave it alone and start fresh.** Treat the v6 file as inert; walk a normal create flow. Useful when the v6 plan is stale or the user wants to redesign.
2. **Run the canonical-v6 migration helper** at `prompts/migrate-v6.md`. Best when the v6 file is recent and the user wants to bootstrap the v7 tree from it without redoing the design work.
3. **Walk through manually, using the v6 file as input** to a normal create flow. Useful when the v6 file is messy or partially edited — you read it as context, but the create flow is the system of record.

After the user picks:

- **Option 1 or 3** → set `{mode}=create` and route to `prompts/discovery.md`. Add the v6 file path to the user-provided paths so the artifact analyzer reads it.
- **Option 2** → route to `prompts/migrate-v6.md`.

## Soft gate

This stage is conversational. Confirm with a soft prompt rather than a menu — "Anything else to add about the initiative, or should we move on to scanning the project?" Users almost always remember one more thing when given a graceful exit ramp.

## Stage Complete

Stage 1 ends when the chosen mode's exit conditions above are met. Carry the initiative title, primary intent, story-type mix, and any volunteered details into the next stage in working memory — none of this is written to disk yet.

@ -0,0 +1,69 @@

**Language:** Use `{communication_language}` for all output.

**Output Language:** Use `{document_output_language}` for documents.

**Paths:** Bare paths (e.g. `scripts/init_epic.py`) resolve from the skill root.

# Migrate-v6 Helper

You arrive here when Stage 0 detected `{mode}=migrate` and the user picked option 2 in `prompts/intent.md` ("run the canonical-v6 migration helper"). The helper bootstraps a v7 tree from a canonical v6 monolithic `epics.md`. It is intentionally narrow.

## Scope

**Supported:** the canonical v6 monolithic shape — a single `epics.md` produced by the v6 `bmad-create-epics-and-stories` skill, following its `templates/epics-template.md` structure. Front matter at the top, `## Requirements Inventory`, `## Epic List`, then per-epic `## Epic N: <title>` sections, each with `### Story N.M: <title>` blocks containing user-story stanzas and Acceptance Criteria.

**Not supported:** sharded v6 (`epics/index.md` + multiple files), or hand-edited / heavily restructured v6 docs. If the input doesn't match the canonical shape, the parsing pass below will report what it could and couldn't extract — show that to the user and offer option 3 from `prompts/intent.md` ("walk through manually, using the v6 file as input to a normal create flow").

## Process

### 1. Locate and read the v6 file

The v6 file is at `{initiative_store}/epics.md` or `{planning_artifacts}/epics.md`. Read it fully.

**Sharded check.** If the path is a directory containing `index.md`, stop here and tell the user:

> "Your v6 file looks sharded — it's a directory with `index.md` and per-epic files. The migration helper handles only the canonical monolithic shape. To proceed: either flatten it into a single `epics.md` first, or pick option 3 from the previous prompt to walk through manually with the v6 content as context."

Then route back to `prompts/intent.md` so the user can pick again.

### 2. Parse the canonical shape

Walk the document and extract:

- **Initiative title.** From the top heading or front-matter `title:`. If neither exists, ask the user once.
- **Epics.** Each `## Epic N: <title>` (or `### Epic N: <title>`) becomes an epic. The N order in the file becomes the v7 NN order.
- **Per-epic intent / goal.** The first paragraph(s) under each epic heading, before the first story.
- **Stories.** Each `### Story N.M: <title>` (or `#### Story N.M: <title>`) inside an epic becomes a story. The M order becomes the v7 story NN.
- **User-story stanza.** The "As a / I want / So that" block under each story heading. If absent, treat as `type: task`. If present and the title looks user-facing, treat as `type: feature`. If the title contains "bug" / "fix", treat as `type: bug`. If "spike" / "investigate" / "research", treat as `type: spike`. When in doubt, ask the user.
- **Acceptance Criteria.** The Given/When/Then block under `**Acceptance Criteria:**`. Preserve it verbatim — re-formatting can introduce errors.
- **Coverage.** Look for `FR1`, `NFR2`, `UX-DR3` references in the AC text. If absent (the v6 template often left coverage in a separate FR Coverage Map), parse the FR Coverage Map and reverse-map FRs back to stories.
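The story-type heuristic from the user-story-stanza bullet can be sketched as a small precedence rule (function name and the exact keyword lists are illustrative assumptions; ambiguous titles still go back to the user):

```python
def infer_story_type(title: str, has_user_story_stanza: bool) -> str:
    """Best-effort v6 → v7 story-type guess from title keywords and stanza presence."""
    t = title.lower()
    if any(word in t for word in ("spike", "investigate", "research")):
        return "spike"
    if "bug" in t or "fix" in t:
        return "bug"
    if not has_user_story_stanza:
        return "task"
    return "feature"

print(infer_story_type("Fix login redirect loop", True))  # → bug
print(infer_story_type("Set up CI pipeline", False))      # → task
print(infer_story_type("Register with email", True))      # → feature
```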

If a section is malformed or missing, log the issue and continue. Do not block the whole migration on one bad story.

### 3. Confirm the parse with the user

Show a concise summary: "I parsed N epics and M stories from the v6 file. Epic 1 is '<title>' with K stories; Epic 2 is '<title>' with L stories; ...". Flag anything ambiguous (story-type guesses, stories with no ACs, FRs that didn't map cleanly).

Ask: "Look right? Want to adjust anything before I create the v7 tree?"

### 4. Generate the v7 tree

For each parsed epic, in order:

1. `python3 scripts/init_epic.py --initiative-store {initiative_store} --epic-nn <NN> --title "<title>" [--depends-on <NNs>]`
2. Edit the new `epic.md` body — fill Goal from the parsed intent, fill Shared Context with anything the v6 epic mentioned about architecture or constraints (often sparse; that's fine), and fill Story Sequence from any inter-story notes.
3. For each parsed story in this epic:
   - `python3 scripts/init_story.py --initiative-store {initiative_store} --epic <folder> --story-nn <NN> --title "<title>" --type <feature|bug|task|spike> [--depends-on <refs>]`
   - Edit the new story file — paste the parsed user-story stanza (or remove it for `task`), paste the parsed ACs into the Acceptance Criteria section verbatim, leave Technical Notes empty (the v6 file usually doesn't have them at this granularity), and fill Coverage from the parsed AC-to-FR map.

The v6 file does not encode `depends_on` explicitly. Best effort: assume cross-epic deps follow numeric order (Epic 2 depends on Epic 1, etc.) and confirm with the user before committing. Within-epic, leave `depends_on: []` and let Stage 3 / 5 surface anything the user wants to add.
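The numeric-order assumption reduces to a one-liner worth being explicit about; a minimal sketch (the helper name and dict shape are illustrative):

```python
def numeric_order_deps(epic_nns: list[str]) -> dict[str, list[str]]:
    """Best-effort cross-epic deps: each epic depends on the one before it."""
    ordered = sorted(epic_nns)
    return {nn: ([ordered[i - 1]] if i else []) for i, nn in enumerate(ordered)}

print(numeric_order_deps(["01", "02", "03"]))
# → {'01': [], '02': ['01'], '03': ['02']}
```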

### 5. Validate strict

After all epics are generated, route to `prompts/validate.md` strict. Surface any failures (typically: a story whose v6 ACs reference an FR code the script's regex didn't pick up, or an inferred dep that doesn't resolve). Loop back into `prompts/epic-authoring.md` for narrow fixes.

### 6. Confirm next steps

Once validation passes, tell the user the v7 tree is ready and the v6 `epics.md` is still on disk, untouched. They can delete it whenever they're confident. Then route to `prompts/finalize.md`.

## Stage Complete

This helper is single-shot. After the validation loop closes and the user accepts the migrated tree, the migrate flow is done — Stage 6 finalizes as if it had been a fresh create.

@ -0,0 +1,61 @@

**Language:** Use `{communication_language}` for all output.

**Output Language:** Use `{document_output_language}` for documents.

**Paths:** Bare paths (e.g. `scripts/validate_initiative.py`) resolve from the skill root.

# Stage 5: Validation

**Goal:** Confirm the v7 epic-and-story tree is sound — schema, deps, numbering, cycles — and that every initiative-level requirement is covered by at least one story's AC mapping. This stage is also the **headless surface** for CI: when invoked with `--re-validate` (or `--headless` / `-H`), it runs once and exits with JSON only.

## Strict validation

Run:

```
python3 scripts/validate_initiative.py --initiative-store {initiative_store}
```

Strict mode is the default. Take the JSON output. The `findings` list contains every error and warning; the `summary` block has the epic list, story counts by status, error/warning counts, and `mentioned_requirements` (the deduplicated set of FR / NFR / UX-DR codes the script extracted from story bodies via regex).

**The script does not check coverage.** The Stage 2 inventory lives in your working memory, not on disk — only you can compare. See "Coverage check" below.
|
||||||
|
|
||||||
|
## Headless mode
|
||||||
|
|
||||||
|
If `{mode}=headless`:
|
||||||
|
|
||||||
|
1. Run the validator strict.
|
||||||
|
2. Print the JSON output to stdout, unmodified.
|
||||||
|
3. Exit. Do not greet, do not converse, do not invoke the coverage auditor (CI doesn't have the inventory in memory).
|
||||||
|
4. Exit code mirrors the validator: 0 if no errors, 1 if errors. Warnings do not change the exit code.
|
||||||
|
|
||||||
|
## Interactive mode
|
||||||
|
|
||||||
|
### 1. Surface failures conversationally
|
||||||
|
|
||||||
|
For each error in `findings`, explain it in one sentence and offer to fix. Group by file when there are several errors on the same path. Common patterns and the right next step:
|
||||||
|
|
||||||
|
- **Schema errors** (`*-extra-keys`, `*-missing-keys`, `*-bad-status`, `*-bad-type`) → loop back to `prompts/epic-authoring.md` for that one file, edit the front matter, re-validate.
|
||||||
|
- **`epic-nn-mismatch` / `story-epic-mismatch`** → likely a hand-edit of the front matter; the folder name is canonical, so update the front matter to match.
|
||||||
|
- **`story-dep-unresolved`** → either the dep was a typo (fix the depends_on entry) or the target was renamed (`scripts/rename_story.py`) or moved (`scripts/move_story.py`) without updating refs. Use the move/rename scripts for renames going forward — they update refs atomically.
|
||||||
|
- **`epic-dep-cycle`** → the cross-epic graph has a loop. Loop back to `prompts/epic-design.md` (re-derive-deps flow) to fix it.
|
||||||
|
- **`story-numbering-gaps`** → use `scripts/rename_story.py --to-nn` to fill the gap or renumber the survivors.
|
||||||
|
|
||||||
|
### 2. Coverage check
|
||||||
|
|
||||||
|
The validator's `summary.mentioned_requirements` is the set of codes that appear textually anywhere in any story body. Compare it to the Stage 2 inventory:
|
||||||
|
|
||||||
|
- **Codes in inventory but not in `mentioned_requirements`** → likely uncovered. Confirm by spot-reading the relevant epic's stories.
|
||||||
|
- **If the prose is ambiguous** (a story's Coverage line uses prose like "password policy" instead of `NFR3.2`) → fan out `agents/coverage-auditor.md` with the inventory and the tree path. The auditor returns exact + fuzzy matches and a list of uncovered codes.
|
||||||
|
|
||||||
|
For each uncovered requirement: surface it conversationally, ask whether it should be added to an existing story's AC mapping or whether a new story is needed. Loop back to `prompts/epic-authoring.md` for the targeted edit.
|
||||||
|
|
||||||
|
### 3. Sizing warnings
|
||||||
|
|
||||||
|
The validator emits warnings (not errors) for stories whose body is more than 3× the epic mean. These are advisory — surface them as "this story may not fit one session" and let the user decide whether to split. If many warnings fire on real-world stories, the threshold may need tuning rather than the stories — note it for a follow-up.
|
||||||
|
|
||||||
|
### 4. Re-validate after fixes
|
||||||
|
|
||||||
|
Loop back to step 1 after any fix. Stage 5 ends when strict validation has zero errors and every inventory item is either covered or explicitly de-scoped by the user.
|
||||||
|
|
||||||
|
## Stage Complete
|
||||||
|
|
||||||
|
When the validator returns zero errors and coverage is settled, route to `prompts/finalize.md`. In headless mode, exit after step 1 of "Headless mode" above — there is no Stage 6 in headless.
|
||||||
|
|
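For a CI wrapper around the headless run, the gating logic is just "errors fail, warnings pass". A minimal sketch, assuming each `findings` entry carries a `severity` field of `"error"` or `"warning"` (the exact field names are not specified above and are an assumption here):

```python
import json


def ci_gate(validator_stdout: str) -> int:
    """Mirror the headless contract: only errors fail the gate.

    Assumes the validator JSON has a top-level "findings" list whose
    entries carry a "severity" of "error" or "warning" (hypothetical
    field name -- check the real script's output shape).
    """
    report = json.loads(validator_stdout)
    errors = [f for f in report.get("findings", []) if f.get("severity") == "error"]
    # Warnings never change the exit code; only errors do.
    return 1 if errors else 0


sample = json.dumps({
    "findings": [
        {"severity": "warning", "code": "story-sizing", "path": "epics/01-x/03-y.md"},
    ],
    "summary": {"errors": 0, "warnings": 1},
})
print(ci_gate(sample))  # → 0 (warnings alone do not fail)
```

In a pipeline this would wrap `python3 scripts/validate_initiative.py`, pass its stdout through unmodified, and exit with the returned code.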
@@ -0,0 +1,30 @@
# Epic Front-Matter Schema (canonical, locked for v7)

Every epic folder contains exactly one `epic.md` carrying this front matter. The five top-level keys below are the **only** allowed top-level keys; anything else fails strict validation.

```yaml
---
title: "Auth Migration"  # REQUIRED — string, double-quoted
epic: "02"               # REQUIRED — the NN portion of the folder name, zero-padded, quoted as a string
status: draft            # REQUIRED — same enum as stories: draft | ready | in-progress | review | done | blocked
depends_on: ["01"]       # REQUIRED — list of epic NNs (zero-padded strings); may be empty
metadata:                # OPTIONAL — free-form table; BMad ignores its contents
  initiative: billing-stripe-v2
---
```

## Field rules

- **title** — Always double-quoted by `init_epic.py`.
- **epic** — Always quoted as a string to preserve the leading zero (`"02"`, not `2`). This is the NN, not the full folder name. The validator cross-checks it against the folder name's NN prefix.
- **status** — `init_epic.py` writes `draft`. A future `bmad-initiative-status` may derive a rollup from story statuses; this skill never updates it after creation.
- **depends_on** — Inline list of epic NN strings, e.g. `["01", "03"]`. Cross-epic syntax is **not** allowed here — epics depend on whole epics, not individual stories.
- **metadata** — Free-form. Common uses: initiative name, owning team, target release.

## What is NOT allowed at the top level

`type` (epics may mix story types), `description`, `goal`, `owner`, `created`, `updated`. Goal and shared context belong in the body, not the front matter.

## What an epic is for

`epic.md` is the **shared context cache** for all stories in the epic — architectural decisions, constraints, references, the inter-story flow. This file replaces the v6 `epic-N-context.md` cache that `bmad-quick-dev` used to compile. See `resources/epic-md-template.md` for the body shape.
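The five-key lock and the NN cross-check are mechanical enough to sketch. A minimal check over one parsed `epic.md` front matter, under the schema above (this is an illustration, not the real validator):

```python
REQUIRED = {"title", "epic", "status", "depends_on"}
ALLOWED = REQUIRED | {"metadata"}
STATUSES = {"draft", "ready", "in-progress", "review", "done", "blocked"}


def check_epic_front_matter(fm: dict, folder: str) -> list[str]:
    """Return schema findings for one epic's parsed front matter."""
    problems = []
    if extra := set(fm) - ALLOWED:
        problems.append(f"extra keys: {sorted(extra)}")
    if missing := REQUIRED - set(fm):
        problems.append(f"missing keys: {sorted(missing)}")
    if fm.get("status") not in STATUSES:
        problems.append(f"bad status: {fm.get('status')!r}")
    # The folder name is canonical: its NN prefix must match the epic field.
    if fm.get("epic") != folder.split("-", 1)[0]:
        problems.append("epic-nn-mismatch")
    return problems


fm = {"title": "Auth Migration", "epic": "02", "status": "draft", "depends_on": ["01"]}
print(check_epic_front_matter(fm, "02-auth-migration"))  # → []
```

Feeding it a front matter with an extra `owner:` key, or a folder whose NN drifted from the `epic` field, produces the corresponding finding.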
@@ -0,0 +1,17 @@
# {{title}}

## Goal

<One paragraph. The user-value or technical outcome this epic delivers as a whole. If a user accomplishes nothing observable from this epic, it is probably mis-shaped — see the user-value-first principle in `prompts/epic-design.md`.>

## Shared Context

<Architectural decisions, constraints, and integration points that apply to every story in this epic. This is the cache `bmad-dev-story` and downstream skills read instead of re-deriving context per story. Keep tight — if it would change story-to-story, it doesn't go here.>

## Story Sequence

<Brief notes on the inter-story flow within this epic — what depends on what, what should be done first to unlock parallel work later. The authoritative dependency graph lives in `depends_on` front matter on each story; this section is human-readable orientation.>

## References

<Links into PRD / architecture / UX sections that are most load-bearing for this epic. Use anchored paths (`{planning_artifacts}/prd.md#auth-flow`) where possible. Do not duplicate the source content — link.>
@@ -0,0 +1,157 @@
# Reference example — feature epic

This file shows a complete, canonical feature epic and three of its stories. Use it as a shape primer in Stage 4 — match the section order, body density, and depends_on usage. Do not copy the content.

## File: `01-user-authentication/epic.md`

```markdown
---
title: "User Authentication"
epic: "01"
status: draft
depends_on: []
metadata:
  initiative: account-foundations-q2
---

# User Authentication

## Goal

End-users can register, sign in, and recover access to their account using
email + password. Establishes the session model that every other epic relies on.

## Shared Context

- Sessions are JWT-based, signed with the rotating key in `auth/keys/`. Token
  TTL: 30 minutes; refresh token TTL: 30 days.
- Password storage uses argon2id with the parameters from
  `{planning_artifacts}/architecture.md#password-hashing`.
- All endpoints under this epic live in `apps/api/src/routes/auth/`.
- Tests use the `authFixtures` helper at `apps/api/test/fixtures/auth.ts`.

## Story Sequence

01-define-user-and-session-models seeds the schema; 02-register-with-email and
03-sign-in-with-email both depend on it but are independent of each other.
04-password-reset-via-email depends on 02 (needs an existing account) and on
the mailer wired in `02-auth-migration` (cross-epic).

## References

- `{planning_artifacts}/prd.md#fr-1-1-through-fr-1-7`
- `{planning_artifacts}/architecture.md#auth`
- `{planning_artifacts}/ux/auth-flows.md`
```

## File: `01-user-authentication/01-define-user-and-session-models.md`

```markdown
---
title: "Define User and Session Models"
type: task
status: draft
epic: 01-user-authentication
depends_on: []
---

# Define User and Session Models

## Acceptance Criteria

- **AC1** — Given a fresh database, When the migration runs,
  Then the `users` and `sessions` tables exist with the columns specified in
  `architecture.md#auth`.
- **AC2** — Given the migration has run, When the seed script executes,
  Then a single admin user exists with the credentials from `.env.test`.

## Technical Notes

- Migration file: `apps/api/migrations/2026_05_01_auth.sql`.
- Use the existing `argon2id` helper at `apps/api/src/auth/hash.ts`.

## Coverage

- AC1 → FR1.1, FR1.2
- AC2 → FR1.3
```

## File: `01-user-authentication/02-register-with-email.md`

```markdown
---
title: "Register with Email"
type: feature
status: draft
epic: 01-user-authentication
depends_on: ["01-define-user-and-session-models"]
---

# Register with Email

As a new visitor,
I want to create an account with my email and a password,
So that I can sign in and use the product.

## Acceptance Criteria

- **AC1** — Given a valid email and a password meeting the policy,
  When I POST `/auth/register`,
  Then the response is 201 with my user id and a session token.
- **AC2** — Given an email that is already registered,
  When I POST `/auth/register`,
  Then the response is 409 with the `email_taken` error code.
- **AC3** — Given a password failing the policy,
  When I POST `/auth/register`,
  Then the response is 422 with the failing rules listed.

## Technical Notes

- Reuse `validatePasswordPolicy()` from `apps/api/src/auth/policy.ts`.
- Emit a `user.registered` event for downstream onboarding hooks.

## Coverage

- AC1 → FR1.4
- AC2 → FR1.5
- AC3 → FR1.6, NFR3.2 (password policy)
```

## File: `01-user-authentication/04-password-reset-via-email.md`

```markdown
---
title: "Password Reset via Email"
type: feature
status: draft
epic: 01-user-authentication
depends_on: ["02-register-with-email", "02-auth-migration/03-mailer-wired"]
---

# Password Reset via Email

As a registered user,
I want to reset my password via an emailed link,
So that I can recover access if I forget my credentials.

## Acceptance Criteria

- **AC1** — Given a registered email, When I POST `/auth/password-reset`,
  Then a reset email is dispatched and the response is 202.
- **AC2** — Given a valid reset token, When I POST `/auth/password-reset/confirm`
  with a new password, Then my password is updated and the token is consumed.
- **AC3** — Given a reset token older than 1 hour,
  When I POST `/auth/password-reset/confirm`,
  Then the response is 410 with the `token_expired` error.

## Technical Notes

- Reset tokens: 32-byte random, 1-hour TTL, single-use. Store hashed.
- The mailer dependency comes from epic 02; depends_on encodes that.

## Coverage

- AC1 → FR1.7
- AC2 → FR1.8
- AC3 → NFR4.1 (token lifecycle)
```
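The Coverage sections in these stories all share one line shape, `ACn → codes`, which is what makes the Stage 5 coverage comparison mechanical. A parser sketch under that assumption (the arrow and comma separators are taken from the examples; the regex itself is mine):

```python
import re


def parse_coverage(lines: list[str]) -> dict[str, list[str]]:
    """Map AC ids to requirement codes from '- AC1 → FR1.4, NFR3.2 (...)' lines."""
    mapping = {}
    for line in lines:
        m = re.match(r"-\s*(AC\d+)\s*→\s*(.+)", line.strip())
        if not m:
            continue
        # Drop parenthetical glosses like "(password policy)", then split on commas.
        codes = re.sub(r"\([^)]*\)", "", m.group(2))
        mapping[m.group(1)] = [c.strip() for c in codes.split(",") if c.strip()]
    return mapping


cov = parse_coverage([
    "- AC1 → FR1.4",
    "- AC3 → FR1.6, NFR3.2 (password policy)",
])
print(cov)  # → {'AC1': ['FR1.4'], 'AC3': ['FR1.6', 'NFR3.2']}
```

Lines that use prose instead of codes fall through to the fuzzy-matching path (the coverage auditor) described in Stage 5.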
@@ -0,0 +1,115 @@
# Reference example — tech-debt epic

This file shows a tech-debt epic and two of its stories, with mixed `task` and `bug` story types and **no PRD** behind it. Use it as a shape primer when an initiative has no functional requirements — only a list of debt items or target areas. Do not copy the content.

## File: `04-billing-cleanup/epic.md`

```markdown
---
title: "Billing Cleanup"
epic: "04"
status: draft
depends_on: ["01"]
metadata:
  initiative: tech-debt-q3
---

# Billing Cleanup

## Goal

Reduce ongoing maintenance cost in the billing module by removing two dead
code paths, fixing the long-standing rounding bug in invoice totals, and
extracting the price calculator into its own service for testability.

## Shared Context

- All work is contained in `apps/api/src/billing/` and its test directory.
- The price calculator extraction is the most invasive change; do it last so
  earlier stories don't have to be re-tested against the new boundary.
- No customer-visible behaviour changes; existing snapshot tests must pass
  unchanged after every story (except the rounding-bug story, which updates
  one snapshot deliberately).

## Story Sequence

01 and 02 remove the dead paths; they are independent and can run in parallel.
03 fixes the rounding bug and updates the affected snapshot.
04 extracts the calculator and depends on all three earlier stories so the new
boundary is drawn around the cleaned-up code.

## References

- `{planning_artifacts}/initiative-context.md` (debt rationale)
- `{planning_artifacts}/architecture.md#billing` (target end-state)
```

## File: `04-billing-cleanup/01-remove-legacy-coupon-path.md`

```markdown
---
title: "Remove Legacy Coupon Code Path"
type: task
status: draft
epic: 04-billing-cleanup
depends_on: []
---

# Remove Legacy Coupon Code Path

## Acceptance Criteria

- **AC1** — Given the codebase, When `applyLegacyCoupon` is grepped for,
  Then there are zero references and the function file is deleted.
- **AC2** — Given the test suite, When it runs after the deletion,
  Then all tests pass with no skipped or pending tests.

## Technical Notes

- File to delete: `apps/api/src/billing/legacy-coupons.ts`.
- Two callers in `checkout.ts` already short-circuit; remove their dead branches.

## Coverage

- AC1 → debt item D1 (legacy coupon path, see initiative-context.md)
- AC2 → no regressions
```

## File: `04-billing-cleanup/03-fix-invoice-rounding-bug.md`

```markdown
---
title: "Fix Invoice Total Rounding Bug"
type: bug
status: draft
epic: 04-billing-cleanup
depends_on: []
---

# Fix Invoice Total Rounding Bug

As a finance operator,
I want invoice totals to match the sum of their line items to the cent,
So that monthly reconciliations stop flagging false discrepancies.

## Acceptance Criteria

- **AC1** — Given an invoice with three line items totalling $100.005,
  When the invoice is finalized,
  Then the stored total is $100.01 (banker's rounding), matching the line
  items summed in the same order.
- **AC2** — Given the existing snapshot for the rounding regression case,
  When the test runs,
  Then it produces the corrected total and the snapshot is updated.

## Technical Notes

- Switch `Math.round` for the `roundHalfEven` helper at
  `apps/api/src/util/decimal.ts` (already used elsewhere).
- Update one snapshot: `apps/api/test/__snapshots__/invoice-totals.snap`.

## Coverage

- AC1 → bug B1 (rounding discrepancy in monthly reconciliations)
- AC2 → no other regressions
```
@@ -0,0 +1,42 @@
# Story Sizing Heuristics

Stories target **one AI session, verifiable end-state, bounded blast radius**. Use these heuristics during Stage 4 decomposition. Loaded as a persistent fact for the duration of authoring.

## The three tests

1. **One AI session.** A single dev agent, equipped with the epic's Shared Context, can complete the story end-to-end (code + tests + verification) in roughly one focused work session — typically 1–4 hours of effort, perhaps a few thousand lines of touched code at the high end. If you can't picture the agent finishing without context exhaustion or a hand-off, the story is too big.

2. **Verifiable end-state.** "Done" can be unambiguously checked — a test passes, an endpoint returns the expected shape, a UI flow completes. If "done" is "we wrote some code toward X," the story is mis-shaped.

3. **Bounded blast radius.** The story modifies a tractable surface area — typically a handful of files, one component or one cross-cutting concern. A story that touches every layer of the app and every test file is a feature, not a story.

## Right-sized examples

- "Add the `/users/:id/avatar` endpoint with multipart upload, S3 storage, and integration tests." — One endpoint, one storage path, well-bounded.
- "Migrate the `Order.totals` calculation from the controller into a domain service, with parity tests." — One refactor, one verification.
- "Wire the existing rate-limiter middleware onto the public auth routes and add coverage for 429 responses." — One integration, clear AC.

## Too-large signals

- The user story uses "and" three times.
- More than ~6 acceptance criteria.
- The story would touch 4+ unrelated layers (UI + API + storage + worker + ops).
- "Build the X system."

If you see these, **split**: each AC is often a candidate sub-story; or, factor along the seams of the epic's Shared Context.

## Too-small signals

- An AC list of one item that is itself a single function call.
- "Rename a variable" or "delete a comment" as a standalone story.
- A story body shorter than the front matter.

If you see these, **fold** the work into a sibling story or into the epic's Shared Context as a note.

## Vertical over horizontal

Vertical slices (a user-visible capability, end-to-end through the layers it needs) are preferred over horizontal slices (one layer at a time across many capabilities). Horizontal is acceptable when the slice has **explicit justification** in the epic's Shared Context — typically a deliberate seam (e.g. "Phase 1 establishes the schema; Phase 2 wires the UI in a single batch"). Justify horizontals in the epic.md, not in each story.

## Stage 5 sizing check is advisory

The validator's sizing check emits **warnings**, not failures. A warning means "this story's body is far longer than the epic's average — consider splitting." It does not block completion. If many warnings fire on real-world stories, tune the body-length thresholds in `scripts/validate_initiative.py` rather than weakening the schema or dependency checks.
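The advisory check reduces to one comparison per story: body length against a multiple of the epic mean. A sketch of that rule (the 3× factor matches Stage 5; whether "length" means lines or characters is an assumption — the real validator defines the unit):

```python
def sizing_warnings(body_lengths: dict[str, int], factor: float = 3.0) -> list[str]:
    """Flag stories whose body exceeds `factor` times the epic mean.

    body_lengths maps story basename -> body length (unit assumed,
    e.g. line count).
    """
    if not body_lengths:
        return []
    mean = sum(body_lengths.values()) / len(body_lengths)
    # Warnings only: callers surface these, they never fail validation.
    return [name for name, n in body_lengths.items() if n > factor * mean]


print(sizing_warnings({"01-a": 20, "02-b": 25, "03-c": 30, "04-huge": 400}))
# → ['04-huge']
```

Note the outlier inflates the mean it is compared against, so in small epics a single huge story can slip under the threshold — one reason the check stays advisory.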
@@ -0,0 +1,36 @@
# Story Front-Matter Schema (canonical, locked for v7)

Every file inside an epic folder **except** `epic.md` is a story. The six top-level keys below are the **only** allowed top-level keys; anything else fails strict validation.

```yaml
---
title: "Payment API Integration"  # REQUIRED — string, double-quoted to keep colons safe
type: feature                     # REQUIRED — one of: feature | bug | task | spike
status: draft                     # REQUIRED — one of: draft | ready | in-progress | review | done | blocked
epic: 01-billing-stripe           # REQUIRED — must exactly match the enclosing epic folder name
depends_on: []                    # REQUIRED — list (may be empty)
metadata:                         # OPTIONAL — free-form table; BMad ignores its contents
  jira_key: BILL-234
  priority: high
  story_points: 3
---
```

## Field rules

- **title** — Always emitted with double quotes by `init_story.py`. Inner double quotes escaped with `\`.
- **type** — Drives body-skeleton generation: `task` omits the As-a/I-want/So-that stanza by default; `bug` and `spike` make it optional; `feature` requires it.
- **status** — `init_story.py` always writes `draft`. Promotion to any other value is owned by downstream skills (`bmad-dev-story` etc.). This skill never auto-promotes.
- **epic** — The enclosing folder name (e.g. `01-billing-stripe`), not just the NN. The folder name wins on conflict; the validator flags drift.
- **depends_on** — Inline YAML list. Two reference forms:
  - **Within-epic:** the sibling story's basename without `.md` — e.g. `04-define-schema`.
  - **Cross-epic:** `<epic-folder>/<story-basename>` — e.g. `02-auth-migration/04-session-management`.
- **metadata** — Anything goes. BMad does not read or rewrite it. Useful for org-specific tracker keys, points, owners.

## What is NOT allowed at the top level

`description`, `assignee`, `points`, `priority`, `created`, `updated`, `tags`, `labels`, `body`, `acceptance_criteria`, `notes`, anything else. Put org-specific fields under `metadata:` instead.

## Why this is locked

Downstream skills (`bmad-dev-story`, `bmad-code-review`, `bmad-retrospective`, future `bmad-initiative-status`) read state directly from these files. A drifting schema would silently break them. The set is small on purpose so a human editor can hold the whole shape in their head.
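The two `depends_on` reference forms resolve with a single split, which is how downstream tooling can normalise every dep to an `(epic_folder, story_basename)` pair. A sketch of that resolution:

```python
def resolve_dep(ref: str, current_epic: str) -> tuple[str, str]:
    """Resolve a depends_on entry to (epic_folder, story_basename).

    Within-epic refs are bare basenames; cross-epic refs are
    '<epic-folder>/<story-basename>' -- exactly the two forms the
    schema allows.
    """
    if "/" in ref:
        epic, story = ref.split("/", 1)
        return epic, story
    # Bare basename: the dep lives in the same epic folder as the story.
    return current_epic, ref


print(resolve_dep("04-define-schema", "01-billing-stripe"))
print(resolve_dep("02-auth-migration/04-session-management", "01-billing-stripe"))
```

With every ref in this normal form, checks like `story-dep-unresolved` become a lookup of `epics/<epic>/<story>.md` on disk.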
@@ -0,0 +1,33 @@
# {{title}}

<!-- USER_STORY_START -->
As a {{user_type}},
I want {{capability}},
So that {{value_benefit}}.
<!-- USER_STORY_END -->

## Acceptance Criteria

<Each AC stands alone. Given/When/Then form. Specific and testable. Cover the happy path, key edge cases, and at least one failure mode where applicable.>

- **AC1** — Given <precondition>, When <action>, Then <expected outcome>.
- **AC2** — ...

## Technical Notes

<Implementation hints — file paths, API contracts, gotchas, references into the epic's Shared Context. Not a full design; just what saves the implementer a lookup.>

## Coverage

<AC-to-requirement mapping. One line per AC. Use the FR / NFR / UX-DR codes from the requirements inventory.>

- AC1 → <list of FR / NFR / UX-DR codes>
- AC2 → <list of FR / NFR / UX-DR codes>

---

**Stripping rules** (applied by `init_story.py` based on `--type`):

- `type: task` — the `<!-- USER_STORY_START --> ... <!-- USER_STORY_END -->` block (and its surrounding blank line) is removed.
- `type: bug` / `type: spike` — the block is left in. Remove or fill it as fits the story.
- `type: feature` — the block is left in. The user-story stanza is required.
@@ -0,0 +1,79 @@
#!/usr/bin/env python3
# /// script
# requires-python = ">=3.10"
# ///
"""Bootstrap an epic folder and its epic.md with locked v7 front matter.

Output (stdout, JSON): {"epic": "<folder>", "epic_nn": "NN", "path": "<abs path>"}
Errors and progress on stderr. Exit codes: 0 ok, 1 user error, 2 internal error.
"""

from __future__ import annotations

import argparse
import json
import re
import sys
from pathlib import Path

SKILL_ROOT = Path(__file__).resolve().parent.parent
TEMPLATE_PATH = SKILL_ROOT / "resources" / "epic-md-template.md"


def slugify(title: str, max_len: int = 40) -> str:
    s = title.lower().strip()
    s = re.sub(r"[^a-z0-9]+", "-", s)
    return s.strip("-")[:max_len].rstrip("-")


def yaml_quote(s: str) -> str:
    return '"' + s.replace("\\", "\\\\").replace('"', '\\"') + '"'


def main() -> int:
    ap = argparse.ArgumentParser(description=__doc__)
    ap.add_argument("--initiative-store", required=True, type=Path)
    ap.add_argument("--epic-nn", required=True, type=int, help="Ordinal in the epic list (1-based)")
    ap.add_argument("--title", required=True)
    ap.add_argument(
        "--depends-on",
        default="",
        help="Comma-separated epic NNs the new epic depends on (e.g. '01,03')",
    )
    args = ap.parse_args()

    nn = f"{args.epic_nn:02d}"
    folder = f"{nn}-{slugify(args.title)}"
    epic_dir = args.initiative_store / "epics" / folder
    if epic_dir.exists():
        print(f"epic folder already exists: {epic_dir}", file=sys.stderr)
        return 1

    if not TEMPLATE_PATH.is_file():
        print(f"template missing: {TEMPLATE_PATH}", file=sys.stderr)
        return 2

    epic_dir.mkdir(parents=True)

    deps = [d.strip().zfill(2) for d in args.depends_on.split(",") if d.strip()]
    deps_yaml = "[" + ", ".join(yaml_quote(d) for d in deps) + "]"

    body = TEMPLATE_PATH.read_text(encoding="utf-8").replace("{{title}}", args.title)
    front = (
        "---\n"
        f"title: {yaml_quote(args.title)}\n"
        f"epic: {yaml_quote(nn)}\n"
        "status: draft\n"
        f"depends_on: {deps_yaml}\n"
        "---\n\n"
    )
    epic_md = epic_dir / "epic.md"
    epic_md.write_text(front + body, encoding="utf-8")

    print(f"created {epic_md.relative_to(args.initiative_store)}", file=sys.stderr)
    print(json.dumps({"epic": folder, "epic_nn": nn, "path": str(epic_md)}))
    return 0


if __name__ == "__main__":
    sys.exit(main())
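The folder name the script derives is fully deterministic: zero-padded NN plus the slugified title. Reproducing `slugify` from the script above shows what to expect for edge-case titles:

```python
import re


def slugify(title: str, max_len: int = 40) -> str:
    # Same logic as in init_epic.py: lowercase, collapse runs of
    # non-alphanumerics into hyphens, trim, cap the length without
    # leaving a trailing hyphen.
    s = re.sub(r"[^a-z0-9]+", "-", title.lower().strip())
    return s.strip("-")[:max_len].rstrip("-")


print(f"02-{slugify('Auth Migration')}")           # → 02-auth-migration
print(f"03-{slugify('  Billing: Stripe v2!  ')}")  # → 03-billing-stripe-v2
```

Punctuation, inner whitespace, and case all wash out, so distinct titles can collide on the same slug; the `epic_dir.exists()` guard is what surfaces that as a user error.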
@@ -0,0 +1,105 @@
|
||||||
|
#!/usr/bin/env python3
|
||||||
|
# /// script
|
||||||
|
# requires-python = ">=3.10"
|
||||||
|
# ///
|
||||||
|
"""Bootstrap a story file with locked v7 front matter inside an existing epic folder.
|
||||||
|
|
||||||
|
Output (stdout, JSON): {"story": "<basename>", "story_nn": "NN", "epic": "<folder>", "path": "<abs>"}
|
||||||
|
Errors and progress on stderr. Exit codes: 0 ok, 1 user error, 2 internal error.
|
||||||
|
|
||||||
|
The body skeleton's <!-- USER_STORY_START --> ... <!-- USER_STORY_END --> block is stripped
|
||||||
|
when --type=task. For type=feature, the stanza is required and remains in the skeleton.
|
||||||
|
For bug/spike it remains and is optional to fill.
"""

from __future__ import annotations

import argparse
import json
import re
import sys
from pathlib import Path

SKILL_ROOT = Path(__file__).resolve().parent.parent
TEMPLATE_PATH = SKILL_ROOT / "resources" / "story-md-template.md"
TYPES = ("feature", "bug", "task", "spike")


def slugify(title: str, max_len: int = 40) -> str:
    s = title.lower().strip()
    s = re.sub(r"[^a-z0-9]+", "-", s)
    return s.strip("-")[:max_len].rstrip("-")


def yaml_quote(s: str) -> str:
    return '"' + s.replace("\\", "\\\\").replace('"', '\\"') + '"'


def render_body(template: str, title: str, story_type: str) -> str:
    body = template.replace("{{title}}", title)
    if story_type == "task":
        body = re.sub(
            r"\n*<!-- USER_STORY_START -->.*?<!-- USER_STORY_END -->\n*",
            "\n\n",
            body,
            flags=re.DOTALL,
        )
    # The resource file ends with a "Stripping rules" block that documents the
    # init script's behavior — that's reference material for the LLM, not story
    # content. Strip it so the rendered file is clean.
    body = re.sub(r"\n+---\n+\*\*Stripping rules\*\*.*$", "\n", body, flags=re.DOTALL)
    return body


def main() -> int:
    ap = argparse.ArgumentParser(description=__doc__)
    ap.add_argument("--initiative-store", required=True, type=Path)
    ap.add_argument("--epic", required=True, help="Enclosing epic folder name (e.g. 01-billing-stripe)")
    ap.add_argument("--story-nn", required=True, type=int, help="Ordinal within the epic (1-based)")
    ap.add_argument("--title", required=True)
    ap.add_argument("--type", required=True, choices=TYPES)
    ap.add_argument(
        "--depends-on",
        default="",
        help="Comma-separated refs (within-epic basenames or <epic-folder>/<basename> cross-epic)",
    )
    args = ap.parse_args()

    epic_dir = args.initiative_store / "epics" / args.epic
    if not epic_dir.is_dir():
        print(f"epic folder does not exist: {epic_dir}", file=sys.stderr)
        return 1

    nn = f"{args.story_nn:02d}"
    basename = f"{nn}-{slugify(args.title)}"
    story_path = epic_dir / f"{basename}.md"
    if story_path.exists():
        print(f"story file already exists: {story_path}", file=sys.stderr)
        return 1

    if not TEMPLATE_PATH.is_file():
        print(f"template missing: {TEMPLATE_PATH}", file=sys.stderr)
        return 2

    deps = [d.strip() for d in args.depends_on.split(",") if d.strip()]
    deps_yaml = "[" + ", ".join(yaml_quote(d) for d in deps) + "]"

    body = render_body(TEMPLATE_PATH.read_text(encoding="utf-8"), args.title, args.type)
    front = (
        "---\n"
        f"title: {yaml_quote(args.title)}\n"
        f"type: {args.type}\n"
        "status: draft\n"
        f"epic: {yaml_quote(args.epic)}\n"
        f"depends_on: {deps_yaml}\n"
        "---\n\n"
    )
    story_path.write_text(front + body, encoding="utf-8")

    print(f"created {story_path.relative_to(args.initiative_store)}", file=sys.stderr)
    print(json.dumps({"story": basename, "story_nn": nn, "epic": args.epic, "path": str(story_path)}))
    return 0


if __name__ == "__main__":
    sys.exit(main())

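The basename derivation above (a two-digit ordinal plus a kebab slug) can be exercised without touching the filesystem. This is a standalone restatement of the script's `slugify` logic for illustration; `story_basename` is a hypothetical helper name, not part of the script:

```python
import re


def slugify(title: str, max_len: int = 40) -> str:
    # Lowercase, collapse runs of non-alphanumerics to single hyphens,
    # then trim edge hyphens and clamp to max_len.
    s = re.sub(r"[^a-z0-9]+", "-", title.lower().strip())
    return s.strip("-")[:max_len].rstrip("-")


def story_basename(story_nn: int, title: str) -> str:
    # Mirrors init_story.py: "<NN>-<slug>", NN zero-padded to two digits.
    return f"{story_nn:02d}-{slugify(title)}"


print(story_basename(2, "Billing & Stripe v2!"))  # 02-billing-stripe-v2
```

The same derivation is what the test suite checks end to end with titles like "Billing & Stripe v2!".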
@@ -0,0 +1,129 @@

#!/usr/bin/env python3
# /// script
# requires-python = ">=3.10"
# ///
"""Move a story between epic folders; rewrites the file's epic field and depends_on refs.

When the story moves from `src-epic` to `dst-epic`:
- The file's `epic:` front-matter field is rewritten to dst-epic.
- In every story across the tree, depends_on entries `src-epic/<basename>` become
  `dst-epic/<new-basename>`.
- Within the OLD source epic, sibling stories whose depends_on used the bare
  `<basename>` (within-epic form) are rewritten to the cross-epic form
  `dst-epic/<new-basename>` so they keep resolving.

Output (stdout, JSON): {"old": "<src-epic>/<basename>", "new": "<dst-epic>/<basename>", "refs_updated": N, "path": "<abs>"}
Exit codes: 0 ok, 1 user error, 2 internal error.
"""

from __future__ import annotations

import argparse
import json
import re
import sys
from pathlib import Path


def yaml_quote(s: str) -> str:
    return '"' + s.replace("\\", "\\\\").replace('"', '\\"') + '"'


def main() -> int:
    ap = argparse.ArgumentParser(description=__doc__)
    ap.add_argument("--initiative-store", required=True, type=Path)
    ap.add_argument("--from", dest="src", required=True, help="Source as <epic-folder>/<basename> (no .md)")
    ap.add_argument("--to-epic", required=True, help="Destination epic folder name")
    ap.add_argument("--new-nn", type=int, help="Renumber on move; if omitted, preserve the existing NN")
    args = ap.parse_args()

    if "/" not in args.src:
        print("--from must be <epic-folder>/<basename>", file=sys.stderr)
        return 1
    src_epic, src_basename = args.src.split("/", 1)

    if src_epic == args.to_epic:
        print("--to-epic equals source epic; use rename_story.py to renumber within an epic", file=sys.stderr)
        return 1

    src_path = args.initiative_store / "epics" / src_epic / f"{src_basename}.md"
    if not src_path.is_file():
        print(f"source story not found: {src_path}", file=sys.stderr)
        return 1
    dst_epic_dir = args.initiative_store / "epics" / args.to_epic
    if not dst_epic_dir.is_dir():
        print(f"destination epic folder does not exist: {dst_epic_dir}", file=sys.stderr)
        return 1

    m = re.match(r"^(\d+)-(.+)$", src_basename)
    if not m:
        print(f"source basename does not start with NN-: {src_basename}", file=sys.stderr)
        return 1
    new_nn = f"{args.new_nn:02d}" if args.new_nn is not None else m.group(1).zfill(2)
    new_basename = f"{new_nn}-{m.group(2)}"

    dst_path = dst_epic_dir / f"{new_basename}.md"
    if dst_path.exists():
        print(f"destination already exists: {dst_path}", file=sys.stderr)
        return 1

    text = src_path.read_text(encoding="utf-8")
    text = re.sub(r"^epic:.*$", f"epic: {yaml_quote(args.to_epic)}", text, count=1, flags=re.MULTILINE)
    # The moved story's own depends_on may carry bare basenames that referenced
    # within-epic siblings in src_epic; those refs now need cross-epic form.
    new_text_lines: list[str] = []
    self_refs_rewritten = False
    for line in text.split("\n"):
        if line.startswith("depends_on:"):
            def _to_cross(match: re.Match[str]) -> str:
                ref = match.group(2)
                if "/" in ref:
                    return match.group(0)
                return match.group(1) + f"{src_epic}/{ref}" + match.group(3)
            new_line = re.sub(r'(["\s,\[])([^"\s,\[\]]+)(["\s,\]])', _to_cross, line)
            if new_line != line:
                self_refs_rewritten = True
                line = new_line
        new_text_lines.append(line)
    text = "\n".join(new_text_lines)
    dst_path.write_text(text, encoding="utf-8")
    src_path.unlink()

    old_cross = f"{src_epic}/{src_basename}"
    new_cross = f"{args.to_epic}/{new_basename}"
    refs_updated = 0
    epics_dir = args.initiative_store / "epics"

    for ef in epics_dir.iterdir():
        if not ef.is_dir():
            continue
        for sf in ef.glob("*.md"):
            if sf == dst_path:
                continue
            t = sf.read_text(encoding="utf-8")
            new_lines: list[str] = []
            changed = False
            for line in t.split("\n"):
                if line.startswith("depends_on:"):
                    if old_cross in line:
                        line = line.replace(old_cross, new_cross)
                        changed = True
                    if ef.name == src_epic:
                        # Within-epic siblings used the bare basename; rewrite to cross-epic form.
                        pattern = rf'(["\s,\[]){re.escape(src_basename)}(?=["\s,\]])'
                        new_line = re.sub(pattern, lambda mm: mm.group(1) + new_cross, line)
                        if new_line != line:
                            line = new_line
                            changed = True
                new_lines.append(line)
            if changed:
                sf.write_text("\n".join(new_lines), encoding="utf-8")
                refs_updated += 1

    print(f"moved {old_cross} -> {new_cross}", file=sys.stderr)
    print(json.dumps({"old": old_cross, "new": new_cross, "refs_updated": refs_updated, "path": str(dst_path)}))
    return 0


if __name__ == "__main__":
    sys.exit(main())

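The self-dep rewrite is the subtle part of the move: on the moved file's `depends_on:` line, bare within-epic refs gain a `src-epic/` prefix while refs that already contain `/` are left alone. A standalone sketch of just that substitution (the epic and story names are made up for illustration):

```python
import re


def to_cross_epic(dep_line: str, src_epic: str) -> str:
    # Rewrites bare within-epic refs on a depends_on line to the
    # cross-epic "<src-epic>/<basename>" form. Refs that already
    # contain "/" are returned unchanged.
    def _to_cross(m: re.Match) -> str:
        ref = m.group(2)
        if "/" in ref:
            return m.group(0)
        return m.group(1) + f"{src_epic}/{ref}" + m.group(3)

    # Each ref is delimited by a quote, whitespace, comma, or bracket on
    # both sides, so "depends_on:" itself can never match.
    return re.sub(r'(["\s,\[])([^"\s,\[\]]+)(["\s,\]])', _to_cross, dep_line)


line = 'depends_on: ["01-schema", "02-mig/01-mailer"]'
print(to_cross_epic(line, "01-auth"))
# depends_on: ["01-auth/01-schema", "02-mig/01-mailer"]
```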
@@ -0,0 +1,120 @@

#!/usr/bin/env python3
# /// script
# requires-python = ">=3.10"
# ///
"""Rename or renumber a story safely; updates the file and rewrites depends_on refs.

The new basename is derived as `<NN>-<slug(--to-title)>` where:
- NN comes from --to-nn if provided, else the existing NN.
- slug(...) replaces non-alphanumerics with hyphens, max 40 chars.
- If --to-title is omitted, the kebab portion is preserved (NN-only renumber).

Updates depends_on references in every story file across the whole tree:
- Within the source epic, bare `<old-basename>` becomes `<new-basename>`.
- Across epics, `<src-epic>/<old-basename>` becomes `<src-epic>/<new-basename>`.

Output (stdout, JSON): {"old": "<basename>", "new": "<basename>", "refs_updated": N, "path": "<abs>"}
Exit codes: 0 ok, 1 user error, 2 internal error.
"""

from __future__ import annotations

import argparse
import json
import re
import sys
from pathlib import Path


def slugify(title: str, max_len: int = 40) -> str:
    s = title.lower().strip()
    s = re.sub(r"[^a-z0-9]+", "-", s)
    return s.strip("-")[:max_len].rstrip("-")


def yaml_quote(s: str) -> str:
    return '"' + s.replace("\\", "\\\\").replace('"', '\\"') + '"'


def update_deps(path: Path, src_epic: str, src_basename: str, new_basename: str, *, file_in_src_epic: bool) -> bool:
    text = path.read_text(encoding="utf-8")
    new_lines: list[str] = []
    changed = False
    for line in text.split("\n"):
        if line.startswith("depends_on:"):
            old_cross = f"{src_epic}/{src_basename}"
            new_cross = f"{src_epic}/{new_basename}"
            if old_cross in line:
                line = line.replace(old_cross, new_cross)
                changed = True
            if file_in_src_epic:
                pattern = rf'(["\s,\[]){re.escape(src_basename)}(?=["\s,\]])'
                new_line = re.sub(pattern, lambda m: m.group(1) + new_basename, line)
                if new_line != line:
                    line = new_line
                    changed = True
        new_lines.append(line)
    if changed:
        path.write_text("\n".join(new_lines), encoding="utf-8")
    return changed


def main() -> int:
    ap = argparse.ArgumentParser(description=__doc__)
    ap.add_argument("--initiative-store", required=True, type=Path)
    ap.add_argument("--epic", required=True, help="Enclosing epic folder name (e.g. 01-billing-stripe)")
    ap.add_argument("--from", dest="src", required=True, help="Old story basename (without .md)")
    ap.add_argument("--to-title", help="New title — derives a new kebab-slug; if omitted, preserves the kebab")
    ap.add_argument("--to-nn", type=int, help="New numeric prefix; if omitted, preserves the existing NN")
    args = ap.parse_args()

    epic_dir = args.initiative_store / "epics" / args.epic
    src_path = epic_dir / f"{args.src}.md"
    if not src_path.is_file():
        print(f"source story not found: {src_path}", file=sys.stderr)
        return 1

    m = re.match(r"^(\d+)-(.+)$", args.src)
    if not m:
        print(f"source basename does not start with NN-: {args.src}", file=sys.stderr)
        return 1
    src_nn, src_kebab = m.group(1), m.group(2)

    new_nn = f"{args.to_nn:02d}" if args.to_nn is not None else src_nn.zfill(2)
    new_kebab = slugify(args.to_title) if args.to_title else src_kebab
    new_basename = f"{new_nn}-{new_kebab}"
    if new_basename == args.src:
        print("nothing to change", file=sys.stderr)
        print(json.dumps({"old": args.src, "new": new_basename, "refs_updated": 0, "path": str(src_path)}))
        return 0

    dst_path = epic_dir / f"{new_basename}.md"
    if dst_path.exists():
        print(f"target already exists: {dst_path}", file=sys.stderr)
        return 1

    src_path.rename(dst_path)

    if args.to_title:
        text = dst_path.read_text(encoding="utf-8")
        text = re.sub(r"^title:.*$", f"title: {yaml_quote(args.to_title)}", text, count=1, flags=re.MULTILINE)
        dst_path.write_text(text, encoding="utf-8")

    refs_updated = 0
    epics_dir = args.initiative_store / "epics"
    for ef in epics_dir.iterdir():
        if not ef.is_dir():
            continue
        for sf in ef.glob("*.md"):
            if sf == dst_path:
                continue
            if update_deps(sf, args.epic, args.src, new_basename, file_in_src_epic=(ef.name == args.epic)):
                refs_updated += 1

    print(f"renamed {args.src} -> {new_basename}", file=sys.stderr)
    print(json.dumps({"old": args.src, "new": new_basename, "refs_updated": refs_updated, "path": str(dst_path)}))
    return 0


if __name__ == "__main__":
    sys.exit(main())

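The bare-ref rewrite in `update_deps` relies on delimiter guards so a rename can never clobber a longer basename that merely shares a prefix. A minimal reproduction of that pattern, with illustrative names:

```python
import re


def rewrite_bare_ref(dep_line: str, old: str, new: str) -> str:
    # Replaces a bare within-epic ref only when it is delimited by a
    # quote, whitespace, comma, or bracket on both sides. The lookahead
    # keeps the trailing delimiter unconsumed, and the guards mean that
    # renaming "01-schema" leaves "01-schema-v2" untouched.
    pattern = rf'(["\s,\[]){re.escape(old)}(?=["\s,\]])'
    return re.sub(pattern, lambda m: m.group(1) + new, dep_line)


line = 'depends_on: ["01-schema", "01-schema-v2"]'
print(rewrite_bare_ref(line, "01-schema", "01-user-schema"))
# depends_on: ["01-user-schema", "01-schema-v2"]
```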
@@ -0,0 +1,65 @@

#!/usr/bin/env python3
"""Tests for scripts/init_epic.py — happy path, depends_on, conflict."""

import json
import subprocess
import sys
import tempfile
import unittest
from pathlib import Path

SCRIPT = Path(__file__).resolve().parent.parent / "init_epic.py"


def run(*args: str, store: Path) -> subprocess.CompletedProcess[str]:
    return subprocess.run(
        [sys.executable, str(SCRIPT), "--initiative-store", str(store), *args],
        capture_output=True,
        text=True,
        check=False,
    )


class TestInitEpic(unittest.TestCase):
    def test_creates_folder_and_epic_md(self) -> None:
        with tempfile.TemporaryDirectory() as tmp:
            store = Path(tmp)
            r = run("--epic-nn", "1", "--title", "User Authentication", store=store)
            self.assertEqual(r.returncode, 0, r.stderr)
            data = json.loads(r.stdout)
            self.assertEqual(data["epic"], "01-user-authentication")
            self.assertEqual(data["epic_nn"], "01")
            self.assertTrue(Path(data["path"]).is_file())
            content = Path(data["path"]).read_text(encoding="utf-8")
            self.assertIn('title: "User Authentication"', content)
            self.assertIn('epic: "01"', content)
            self.assertIn("status: draft", content)
            self.assertIn("depends_on: []", content)

    def test_emits_depends_on(self) -> None:
        with tempfile.TemporaryDirectory() as tmp:
            store = Path(tmp)
            run("--epic-nn", "1", "--title", "A", store=store)
            r = run("--epic-nn", "2", "--title", "B", "--depends-on", "1", store=store)
            self.assertEqual(r.returncode, 0, r.stderr)
            content = Path(json.loads(r.stdout)["path"]).read_text(encoding="utf-8")
            self.assertIn('depends_on: ["01"]', content)

    def test_collision_fails(self) -> None:
        with tempfile.TemporaryDirectory() as tmp:
            store = Path(tmp)
            run("--epic-nn", "1", "--title", "Auth", store=store)
            r = run("--epic-nn", "1", "--title", "Auth", store=store)
            self.assertEqual(r.returncode, 1)
            self.assertIn("already exists", r.stderr)

    def test_slug_special_chars(self) -> None:
        with tempfile.TemporaryDirectory() as tmp:
            store = Path(tmp)
            r = run("--epic-nn", "1", "--title", "Billing & Stripe v2!", store=store)
            self.assertEqual(r.returncode, 0, r.stderr)
            self.assertEqual(json.loads(r.stdout)["epic"], "01-billing-stripe-v2")


if __name__ == "__main__":
    unittest.main()

@@ -0,0 +1,81 @@

#!/usr/bin/env python3
"""Tests for scripts/init_story.py — type-driven body, depends_on, missing epic."""

import json
import subprocess
import sys
import tempfile
import unittest
from pathlib import Path

SCRIPTS = Path(__file__).resolve().parent.parent
INIT_EPIC = SCRIPTS / "init_epic.py"
INIT_STORY = SCRIPTS / "init_story.py"


def _run(script: Path, *args: str) -> subprocess.CompletedProcess[str]:
    return subprocess.run([sys.executable, str(script), *args], capture_output=True, text=True, check=False)


def _bootstrap_epic(store: Path, nn: int = 1, title: str = "Auth") -> str:
    r = _run(INIT_EPIC, "--initiative-store", str(store), "--epic-nn", str(nn), "--title", title)
    assert r.returncode == 0, r.stderr
    return json.loads(r.stdout)["epic"]


class TestInitStory(unittest.TestCase):
    def test_feature_keeps_user_story_block(self) -> None:
        with tempfile.TemporaryDirectory() as tmp:
            store = Path(tmp)
            epic = _bootstrap_epic(store)
            r = _run(
                INIT_STORY, "--initiative-store", str(store), "--epic", epic,
                "--story-nn", "1", "--title", "Register", "--type", "feature",
            )
            self.assertEqual(r.returncode, 0, r.stderr)
            content = Path(json.loads(r.stdout)["path"]).read_text(encoding="utf-8")
            self.assertIn("As a {{user_type}}", content)
            self.assertIn("type: feature", content)
            self.assertIn(f'epic: "{epic}"', content)
            self.assertIn("status: draft", content)
            # The user-story marker comments stay so the LLM can locate the block.
            self.assertIn("USER_STORY_START", content)
            self.assertNotIn("Stripping rules", content)

    def test_task_strips_user_story_block(self) -> None:
        with tempfile.TemporaryDirectory() as tmp:
            store = Path(tmp)
            epic = _bootstrap_epic(store)
            r = _run(
                INIT_STORY, "--initiative-store", str(store), "--epic", epic,
                "--story-nn", "1", "--title", "Schema", "--type", "task",
            )
            self.assertEqual(r.returncode, 0, r.stderr)
            content = Path(json.loads(r.stdout)["path"]).read_text(encoding="utf-8")
            self.assertNotIn("As a {{user_type}}", content)
            self.assertIn("type: task", content)

    def test_depends_on_inline_list(self) -> None:
        with tempfile.TemporaryDirectory() as tmp:
            store = Path(tmp)
            epic = _bootstrap_epic(store)
            r = _run(
                INIT_STORY, "--initiative-store", str(store), "--epic", epic,
                "--story-nn", "1", "--title", "Schema", "--type", "task",
                "--depends-on", "01-foo,02-bar/03-baz",
            )
            content = Path(json.loads(r.stdout)["path"]).read_text(encoding="utf-8")
            self.assertIn('depends_on: ["01-foo", "02-bar/03-baz"]', content)

    def test_missing_epic_folder_fails(self) -> None:
        with tempfile.TemporaryDirectory() as tmp:
            r = _run(
                INIT_STORY, "--initiative-store", tmp, "--epic", "99-nope",
                "--story-nn", "1", "--title", "X", "--type", "task",
            )
            self.assertEqual(r.returncode, 1)
            self.assertIn("does not exist", r.stderr)


if __name__ == "__main__":
    unittest.main()

@@ -0,0 +1,81 @@

#!/usr/bin/env python3
"""Tests for scripts/move_story.py — cross-epic move, self-dep rewrite, sibling rewrite."""

import json
import subprocess
import sys
import tempfile
import unittest
from pathlib import Path

SCRIPTS = Path(__file__).resolve().parent.parent
INIT_EPIC = SCRIPTS / "init_epic.py"
INIT_STORY = SCRIPTS / "init_story.py"
MOVE = SCRIPTS / "move_story.py"
VALIDATE = SCRIPTS / "validate_initiative.py"


def _run(script: Path, *args: str) -> subprocess.CompletedProcess[str]:
    return subprocess.run([sys.executable, str(script), *args], capture_output=True, text=True, check=False)


class TestMoveStory(unittest.TestCase):
    def _bootstrap(self, store: Path) -> None:
        _run(INIT_EPIC, "--initiative-store", str(store), "--epic-nn", "1", "--title", "Auth")
        _run(INIT_EPIC, "--initiative-store", str(store), "--epic-nn", "2", "--title", "Mig", "--depends-on", "1")
        _run(INIT_STORY, "--initiative-store", str(store), "--epic", "01-auth", "--story-nn", "1", "--title", "Schema", "--type", "task")
        _run(INIT_STORY, "--initiative-store", str(store), "--epic", "01-auth", "--story-nn", "2", "--title", "Register", "--type", "feature", "--depends-on", "01-schema")

    def test_move_rewrites_epic_and_within_dep(self) -> None:
        with tempfile.TemporaryDirectory() as tmp:
            store = Path(tmp)
            self._bootstrap(store)
            r = _run(MOVE, "--initiative-store", str(store), "--from", "01-auth/02-register", "--to-epic", "02-mig", "--new-nn", "1")
            self.assertEqual(r.returncode, 0, r.stderr)
            data = json.loads(r.stdout)
            self.assertEqual(data["new"], "02-mig/01-register")
            moved = (store / "epics" / "02-mig" / "01-register.md").read_text(encoding="utf-8")
            self.assertIn('epic: "02-mig"', moved)
            self.assertIn('"01-auth/01-schema"', moved)
            self.assertNotIn('depends_on: ["01-schema"]', moved)
            self.assertFalse((store / "epics" / "01-auth" / "02-register.md").exists())

    def test_move_keeps_validator_clean(self) -> None:
        with tempfile.TemporaryDirectory() as tmp:
            store = Path(tmp)
            self._bootstrap(store)
            _run(INIT_STORY, "--initiative-store", str(store), "--epic", "02-mig", "--story-nn", "1", "--title", "Mailer", "--type", "task", "--depends-on", "01-auth/01-schema")
            _run(MOVE, "--initiative-store", str(store), "--from", "01-auth/02-register", "--to-epic", "02-mig", "--new-nn", "2")
            r = _run(VALIDATE, "--initiative-store", str(store))
            self.assertEqual(r.returncode, 0, r.stdout + r.stderr)

    def test_sibling_dep_rewritten_to_cross_epic(self) -> None:
        with tempfile.TemporaryDirectory() as tmp:
            store = Path(tmp)
            self._bootstrap(store)
            # Add a third sibling that depends on the soon-to-move story via bare ref.
            _run(INIT_STORY, "--initiative-store", str(store), "--epic", "01-auth", "--story-nn", "3", "--title", "Login", "--type", "feature", "--depends-on", "02-register")
            _run(MOVE, "--initiative-store", str(store), "--from", "01-auth/02-register", "--to-epic", "02-mig", "--new-nn", "1")
            login = (store / "epics" / "01-auth" / "03-login.md").read_text(encoding="utf-8")
            self.assertIn('"02-mig/01-register"', login)
            self.assertNotIn('depends_on: ["02-register"]', login)

    def test_same_epic_rejected(self) -> None:
        with tempfile.TemporaryDirectory() as tmp:
            store = Path(tmp)
            self._bootstrap(store)
            r = _run(MOVE, "--initiative-store", str(store), "--from", "01-auth/02-register", "--to-epic", "01-auth")
            self.assertEqual(r.returncode, 1)
            self.assertIn("rename_story.py", r.stderr)

    def test_missing_destination_fails(self) -> None:
        with tempfile.TemporaryDirectory() as tmp:
            store = Path(tmp)
            self._bootstrap(store)
            r = _run(MOVE, "--initiative-store", str(store), "--from", "01-auth/02-register", "--to-epic", "99-nope")
            self.assertEqual(r.returncode, 1)
            self.assertIn("does not exist", r.stderr)


if __name__ == "__main__":
    unittest.main()

@@ -0,0 +1,83 @@

#!/usr/bin/env python3
"""Tests for scripts/rename_story.py — renumber, retitle, ref propagation."""

import json
import subprocess
import sys
import tempfile
import unittest
from pathlib import Path

SCRIPTS = Path(__file__).resolve().parent.parent
INIT_EPIC = SCRIPTS / "init_epic.py"
INIT_STORY = SCRIPTS / "init_story.py"
RENAME = SCRIPTS / "rename_story.py"


def _run(script: Path, *args: str) -> subprocess.CompletedProcess[str]:
    return subprocess.run([sys.executable, str(script), *args], capture_output=True, text=True, check=False)


class TestRenameStory(unittest.TestCase):
    def _bootstrap(self, store: Path) -> None:
        _run(INIT_EPIC, "--initiative-store", str(store), "--epic-nn", "1", "--title", "Auth")
        _run(INIT_EPIC, "--initiative-store", str(store), "--epic-nn", "2", "--title", "Mig", "--depends-on", "1")
        _run(INIT_STORY, "--initiative-store", str(store), "--epic", "01-auth", "--story-nn", "1", "--title", "Schema", "--type", "task")
        _run(INIT_STORY, "--initiative-store", str(store), "--epic", "01-auth", "--story-nn", "2", "--title", "Register", "--type", "feature", "--depends-on", "01-schema")
        _run(INIT_STORY, "--initiative-store", str(store), "--epic", "02-mig", "--story-nn", "1", "--title", "Mailer", "--type", "task", "--depends-on", "01-auth/01-schema")

    def test_renumber_only(self) -> None:
        with tempfile.TemporaryDirectory() as tmp:
            store = Path(tmp)
            self._bootstrap(store)
            r = _run(RENAME, "--initiative-store", str(store), "--epic", "01-auth", "--from", "02-register", "--to-nn", "3")
            self.assertEqual(r.returncode, 0, r.stderr)
            data = json.loads(r.stdout)
            self.assertEqual(data["new"], "03-register")
            self.assertTrue((store / "epics" / "01-auth" / "03-register.md").is_file())
            self.assertFalse((store / "epics" / "01-auth" / "02-register.md").exists())

    def test_retitle_propagates_within_epic_refs(self) -> None:
        with tempfile.TemporaryDirectory() as tmp:
            store = Path(tmp)
            self._bootstrap(store)
            r = _run(
                RENAME, "--initiative-store", str(store), "--epic", "01-auth",
                "--from", "01-schema", "--to-title", "User and Session Schema",
            )
            self.assertEqual(r.returncode, 0, r.stderr)
            data = json.loads(r.stdout)
            self.assertEqual(data["new"], "01-user-and-session-schema")
            self.assertGreaterEqual(data["refs_updated"], 1)
            register = (store / "epics" / "01-auth" / "02-register.md").read_text(encoding="utf-8")
            self.assertIn('"01-user-and-session-schema"', register)
            self.assertNotIn('"01-schema"', register)

    def test_retitle_propagates_cross_epic_refs(self) -> None:
        with tempfile.TemporaryDirectory() as tmp:
            store = Path(tmp)
            self._bootstrap(store)
            r = _run(
                RENAME, "--initiative-store", str(store), "--epic", "01-auth",
                "--from", "01-schema", "--to-title", "User Schema", "--to-nn", "5",
            )
            self.assertEqual(r.returncode, 0, r.stderr)
            mailer = (store / "epics" / "02-mig" / "01-mailer.md").read_text(encoding="utf-8")
            self.assertIn('"01-auth/05-user-schema"', mailer)
            self.assertNotIn("01-auth/01-schema", mailer)

    def test_target_collision_fails(self) -> None:
        with tempfile.TemporaryDirectory() as tmp:
            store = Path(tmp)
            self._bootstrap(store)
            # Rename 01-schema -> 02-register would collide with the existing 02-register.md
            r = _run(
                RENAME, "--initiative-store", str(store), "--epic", "01-auth",
                "--from", "01-schema", "--to-title", "Register", "--to-nn", "2",
            )
            self.assertEqual(r.returncode, 1)
            self.assertIn("already exists", r.stderr)


if __name__ == "__main__":
    unittest.main()

@@ -0,0 +1,117 @@

#!/usr/bin/env python3
"""Tests for scripts/validate_initiative.py — happy path, schema/dep/cycle errors."""

import json
import subprocess
import sys
import tempfile
import unittest
from pathlib import Path

SCRIPTS = Path(__file__).resolve().parent.parent
INIT_EPIC = SCRIPTS / "init_epic.py"
INIT_STORY = SCRIPTS / "init_story.py"
VALIDATE = SCRIPTS / "validate_initiative.py"


def _run(script: Path, *args: str) -> subprocess.CompletedProcess[str]:
    return subprocess.run([sys.executable, str(script), *args], capture_output=True, text=True, check=False)


def _build_clean_tree(store: Path) -> tuple[str, str]:
    _run(INIT_EPIC, "--initiative-store", str(store), "--epic-nn", "1", "--title", "Auth")
    _run(INIT_EPIC, "--initiative-store", str(store), "--epic-nn", "2", "--title", "Migration", "--depends-on", "1")
    _run(INIT_STORY, "--initiative-store", str(store), "--epic", "01-auth", "--story-nn", "1", "--title", "Schema", "--type", "task")
    _run(INIT_STORY, "--initiative-store", str(store), "--epic", "01-auth", "--story-nn", "2", "--title", "Register", "--type", "feature", "--depends-on", "01-schema")
    _run(INIT_STORY, "--initiative-store", str(store), "--epic", "02-migration", "--story-nn", "1", "--title", "Mailer", "--type", "task", "--depends-on", "01-auth/01-schema")
    return "01-auth", "02-migration"


class TestValidateInitiative(unittest.TestCase):
    def test_clean_tree_passes(self) -> None:
        with tempfile.TemporaryDirectory() as tmp:
            store = Path(tmp)
            _build_clean_tree(store)
            r = _run(VALIDATE, "--initiative-store", str(store))
            self.assertEqual(r.returncode, 0, r.stderr + r.stdout)
            data = json.loads(r.stdout)
            self.assertEqual(data["findings"], [])
            self.assertEqual(data["summary"]["story_count"], 3)
            self.assertEqual(data["summary"]["story_status_counts"], {"draft": 3})

    def test_bad_status_enum(self) -> None:
        with tempfile.TemporaryDirectory() as tmp:
            store = Path(tmp)
            _build_clean_tree(store)
            sp = store / "epics" / "01-auth" / "01-schema.md"
            sp.write_text(sp.read_text(encoding="utf-8").replace("status: draft", "status: bogus", 1), encoding="utf-8")
            r = _run(VALIDATE, "--initiative-store", str(store))
            self.assertEqual(r.returncode, 1)
            codes = {f["code"] for f in json.loads(r.stdout)["findings"]}
            self.assertIn("story-bad-status", codes)

    def test_dangling_dep(self) -> None:
        with tempfile.TemporaryDirectory() as tmp:
            store = Path(tmp)
            _build_clean_tree(store)
            sp = store / "epics" / "02-migration" / "01-mailer.md"
            sp.write_text(sp.read_text(encoding="utf-8").replace('"01-auth/01-schema"', '"01-auth/99-nope"'), encoding="utf-8")
            r = _run(VALIDATE, "--initiative-store", str(store))
            self.assertEqual(r.returncode, 1)
            codes = {f["code"] for f in json.loads(r.stdout)["findings"]}
            self.assertIn("story-dep-unresolved", codes)

    def test_epic_dep_cycle(self) -> None:
        with tempfile.TemporaryDirectory() as tmp:
            store = Path(tmp)
            _build_clean_tree(store)
            ep = store / "epics" / "01-auth" / "epic.md"
            ep.write_text(ep.read_text(encoding="utf-8").replace("depends_on: []", 'depends_on: ["02"]', 1), encoding="utf-8")
            r = _run(VALIDATE, "--initiative-store", str(store))
            self.assertEqual(r.returncode, 1)
            codes = {f["code"] for f in json.loads(r.stdout)["findings"]}
            self.assertIn("epic-dep-cycle", codes)

    def test_extra_top_level_key(self) -> None:
        with tempfile.TemporaryDirectory() as tmp:
            store = Path(tmp)
            _build_clean_tree(store)
            sp = store / "epics" / "01-auth" / "01-schema.md"
            sp.write_text(sp.read_text(encoding="utf-8").replace("status: draft", "status: draft\nbogus: 1", 1), encoding="utf-8")
            r = _run(VALIDATE, "--initiative-store", str(store))
            self.assertEqual(r.returncode, 1)
            codes = {f["code"] for f in json.loads(r.stdout)["findings"]}
            self.assertIn("story-extra-keys", codes)

    def test_numbering_gap(self) -> None:
        with tempfile.TemporaryDirectory() as tmp:
            store = Path(tmp)
            _build_clean_tree(store)
            (store / "epics" / "01-auth" / "01-schema.md").unlink()
            r = _run(VALIDATE, "--initiative-store", str(store))
            self.assertEqual(r.returncode, 1)
            codes = {f["code"] for f in json.loads(r.stdout)["findings"]}
            self.assertIn("story-numbering-gaps", codes)

    def test_lax_skips_sizing_warnings(self) -> None:
        # Sizing warnings fire when one body exceeds 3x the epic mean. With 5 normal
|
||||||
|
# stories and one massively-padded outlier, the mean stays low enough for
|
||||||
|
# the outlier to clear the 3x threshold.
|
||||||
|
with tempfile.TemporaryDirectory() as tmp:
|
||||||
|
store = Path(tmp)
|
||||||
|
_run(INIT_EPIC, "--initiative-store", str(store), "--epic-nn", "1", "--title", "E1")
|
||||||
|
for nn, title in ((1, "Tiny A"), (2, "Tiny B"), (3, "Tiny C"), (4, "Tiny D"), (5, "Big One")):
|
||||||
|
_run(INIT_STORY, "--initiative-store", str(store), "--epic", "01-e1", "--story-nn", str(nn), "--title", title, "--type", "task")
|
||||||
|
big = store / "epics" / "01-e1" / "05-big-one.md"
|
||||||
|
big.write_text(big.read_text(encoding="utf-8") + ("filler " * 50000), encoding="utf-8")
|
||||||
|
r_strict = _run(VALIDATE, "--initiative-store", str(store))
|
||||||
|
self.assertEqual(r_strict.returncode, 0)
|
||||||
|
warns_strict = [f for f in json.loads(r_strict.stdout)["findings"] if f["level"] == "warning"]
|
||||||
|
self.assertTrue(any(f["code"] == "story-oversized" for f in warns_strict), warns_strict)
|
||||||
|
r_lax = _run(VALIDATE, "--initiative-store", str(store), "--lax")
|
||||||
|
warns_lax = [f for f in json.loads(r_lax.stdout)["findings"] if f["level"] == "warning"]
|
||||||
|
self.assertEqual(warns_lax, [])
|
||||||
|
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
unittest.main()
|
||||||
|
|
@@ -0,0 +1,352 @@
#!/usr/bin/env python3
# /// script
# requires-python = ">=3.10"
# ///
"""Validate the v7 epic-and-story tree under an initiative store.

Checks (strict mode):
1. Each file's front matter has only the allowed top-level keys, all required keys present.
2. Enum values valid (type, status).
3. Story `epic:` field equals the enclosing folder name; epic `epic:` field equals the folder NN.
4. depends_on entries resolve (within-epic basenames or <epic-folder>/<basename> cross-epic).
5. Cross-epic depends_on graph is acyclic.
6. Within-epic story numbering is sequential starting at 01.
7. Sizing sanity (warnings only): a story body >3x the epic mean is flagged.

Coverage of FR/NFR/UX-DR codes is NOT enforced here — the inventory lives in the LLM's
working memory, not on disk. The summary's `mentioned_requirements` field exposes every
code mentioned in any story body so the calling prompt can cross-check against its
inventory (see `prompts/validate.md`).

Output (stdout, JSON): {"findings": [...], "summary": {...}}
Exit codes: 0 if no errors (warnings ok), 1 if any error finding, 2 on internal error.

Flags:
  --lax            skip sizing warnings; never relaxes schema or dep checks
  --epic NN-kebab  limit walks to a single epic folder (still resolves cross-epic refs against the whole tree)
"""

from __future__ import annotations

import argparse
import json
import re
import sys
from collections import Counter, defaultdict
from pathlib import Path

STORY_TYPES = {"feature", "bug", "task", "spike"}
STATUSES = {"draft", "ready", "in-progress", "review", "done", "blocked"}
STORY_KEYS = {"title", "type", "status", "epic", "depends_on", "metadata"}
STORY_REQUIRED = {"title", "type", "status", "epic", "depends_on"}
EPIC_KEYS = {"title", "epic", "status", "depends_on", "metadata"}
EPIC_REQUIRED = {"title", "epic", "status", "depends_on"}

REQUIREMENT_CODE_RE = re.compile(r"\b(?:UX-DR|NFR|FR)\d+(?:\.\d+)?\b")


def parse_frontmatter(text: str) -> tuple[dict | None, str | None]:
    if not text.startswith("---\n"):
        return None, "missing front matter (expected leading '---')"
    end = text.find("\n---", 4)
    if end == -1:
        return None, "front matter not closed (expected closing '---')"
    return _parse_block(text[4:end])


def _parse_block(block: str) -> tuple[dict | None, str | None]:
    out: dict = {}
    lines = block.split("\n")
    i = 0
    while i < len(lines):
        line = lines[i]
        if not line.strip() or line.lstrip().startswith("#"):
            i += 1
            continue
        if line.startswith(" "):
            return None, f"unexpected indented line: {line!r}"
        if ":" not in line:
            return None, f"line missing colon: {line!r}"
        key, _, val = line.partition(":")
        key = key.strip()
        val = val.strip()
        if val == "":
            children = []
            j = i + 1
            while j < len(lines) and (lines[j].startswith(" ") or not lines[j].strip()):
                children.append(lines[j])
                j += 1
            out[key] = _parse_indented(children)
            i = j
            continue
        out[key] = _parse_scalar_or_list(val)
        i += 1
    return out, None


def _parse_scalar_or_list(val: str):
    if val.startswith("[") and val.endswith("]"):
        inner = val[1:-1].strip()
        if not inner:
            return []
        return [_unquote(p.strip()) for p in _split_top_level(inner, ",")]
    return _unquote(val)


def _split_top_level(s: str, sep: str) -> list[str]:
    out, cur, depth, in_q = [], [], 0, None
    i = 0
    while i < len(s):
        c = s[i]
        if in_q:
            cur.append(c)
            if c == "\\" and i + 1 < len(s):
                cur.append(s[i + 1])
                i += 2
                continue
            if c == in_q:
                in_q = None
        elif c in '"\'':
            in_q = c
            cur.append(c)
        elif c in "[{":
            depth += 1
            cur.append(c)
        elif c in "]}":
            depth -= 1
            cur.append(c)
        elif c == sep and depth == 0:
            out.append("".join(cur))
            cur = []
        else:
            cur.append(c)
        i += 1
    if cur:
        out.append("".join(cur))
    return out


def _unquote(val: str) -> str:
    val = val.strip()
    if len(val) >= 2 and val[0] == val[-1] and val[0] in "\"'":
        inner = val[1:-1]
        if val[0] == '"':
            return inner.replace('\\"', '"').replace("\\\\", "\\")
        return inner
    return val


def _parse_indented(lines: list[str]) -> dict:
    out: dict = {}
    for line in lines:
        s = line.strip()
        if not s or s.startswith("#") or ":" not in s:
            continue
        key, _, val = s.partition(":")
        out[key.strip()] = _unquote(val.strip())
    return out
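These helpers accept only a deliberately tiny YAML subset: flat `key: value` pairs, inline `[...]` lists, and one level of indented children under a bare key. A story file that parses cleanly might look like this (a sketch; the field values are illustrative, only the key set and enums are fixed by the schema constants above):

```markdown
---
title: "User registration schema"
type: feature
status: draft
epic: 01-auth
depends_on: ["01-schema"]
metadata:
  owner: "alice"
---

## Acceptance criteria
...
```

Anything fancier (block lists under a key, multi-line scalars) is either rejected with a parse error or silently dropped, so generated files should stick to this shape.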
def _find_cycles(graph: dict[str, list[str]]) -> list[list[str]]:
    cycles: list[list[str]] = []
    state: dict[str, int] = {}
    stack: list[str] = []

    def dfs(node: str) -> None:
        state[node] = 1
        stack.append(node)
        for nxt in graph.get(node, []):
            if state.get(nxt) == 1:
                cycles.append(stack[stack.index(nxt):] + [nxt])
            elif state.get(nxt, 0) == 0:
                dfs(nxt)
        stack.pop()
        state[node] = 2

    for n in graph:
        if state.get(n, 0) == 0:
            dfs(n)
    return cycles
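The cycle walk above is a standard three-color DFS (unvisited, on-stack, done). As a self-contained restatement of the same logic, the smallest failing epic graph reports the path back to its starting node:

```python
def find_cycles(graph):
    # Three-color DFS: missing = unvisited, 1 = on the current stack, 2 = done.
    cycles, state, stack = [], {}, []

    def dfs(node):
        state[node] = 1
        stack.append(node)
        for nxt in graph.get(node, []):
            if state.get(nxt) == 1:
                # Hit a node already on the stack: the slice from its first
                # occurrence to here, closed by the node itself, is a cycle.
                cycles.append(stack[stack.index(nxt):] + [nxt])
            elif state.get(nxt, 0) == 0:
                dfs(nxt)
        stack.pop()
        state[node] = 2

    for n in graph:
        if state.get(n, 0) == 0:
            dfs(n)
    return cycles

# Two epics that depend on each other form the smallest cycle.
print(find_cycles({"01": ["02"], "02": ["01"]}))  # → [["01", "02", "01"]]
```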
def validate(initiative_store: Path, lax: bool, only_epic: str | None) -> tuple[list[dict], dict]:
    findings: list[dict] = []
    epics_dir = initiative_store / "epics"
    if not epics_dir.is_dir():
        findings.append({"level": "error", "code": "no-epics-dir", "message": f"missing {epics_dir}", "path": str(epics_dir)})
        return findings, {}

    all_epic_folders = sorted(p for p in epics_dir.iterdir() if p.is_dir() and re.match(r"^\d+-", p.name))
    walk_folders = [p for p in all_epic_folders if (only_epic is None or p.name == only_epic)]

    epic_meta: dict[str, dict] = {}
    story_index: dict[str, dict] = {}
    mentioned_codes: set[str] = set()

    # walk every epic in the tree (so cross-epic refs always resolve), but only
    # report non-resolution findings when the offending file is in walk_folders.
    epic_folders_for_meta = all_epic_folders
    walk_set = {p.name for p in walk_folders}

    for ed in epic_folders_for_meta:
        in_walk = ed.name in walk_set
        nn = ed.name.split("-", 1)[0].zfill(2)
        epic_md = ed / "epic.md"
        if not epic_md.is_file():
            if in_walk:
                findings.append({"level": "error", "code": "missing-epic-md", "message": f"no epic.md in {ed.name}", "path": str(ed)})
            continue
        text = epic_md.read_text(encoding="utf-8")
        fm, err = parse_frontmatter(text)
        if err:
            if in_walk:
                findings.append({"level": "error", "code": "epic-frontmatter-parse", "message": err, "path": str(epic_md)})
            continue

        if in_walk:
            present = set(fm.keys())
            forbidden = present - EPIC_KEYS
            if forbidden:
                findings.append({"level": "error", "code": "epic-extra-keys", "message": f"forbidden top-level keys: {sorted(forbidden)}", "path": str(epic_md)})
            missing = EPIC_REQUIRED - present
            if missing:
                findings.append({"level": "error", "code": "epic-missing-keys", "message": f"missing required keys: {sorted(missing)}", "path": str(epic_md)})
            if fm.get("status") not in STATUSES:
                findings.append({"level": "error", "code": "epic-bad-status", "message": f"status={fm.get('status')!r} not in {sorted(STATUSES)}", "path": str(epic_md)})
            ef = str(fm.get("epic", "")).strip()
            if ef and ef != nn:
                findings.append({"level": "error", "code": "epic-nn-mismatch", "message": f"epic field {ef!r} does not match folder NN {nn!r}", "path": str(epic_md)})

        deps = fm.get("depends_on", [])
        if not isinstance(deps, list):
            if in_walk:
                findings.append({"level": "error", "code": "epic-deps-not-list", "message": "depends_on must be a list", "path": str(epic_md)})
            deps = []

        epic_meta[ed.name] = {"nn": nn, "depends_on": [str(d) for d in deps], "path": ed, "in_walk": in_walk}

        story_files = sorted(p for p in ed.iterdir() if p.is_file() and p.suffix == ".md" and p.name != "epic.md" and re.match(r"^\d+-", p.name))
        seen_nns: list[int] = []
        for sf in story_files:
            nn_m = re.match(r"^(\d+)-", sf.name)
            if not nn_m:
                if in_walk:
                    findings.append({"level": "error", "code": "story-bad-prefix", "message": "expected NN-kebab.md", "path": str(sf)})
                continue
            snn = int(nn_m.group(1))
            seen_nns.append(snn)
            stext = sf.read_text(encoding="utf-8")
            sfm, serr = parse_frontmatter(stext)
            if serr:
                if in_walk:
                    findings.append({"level": "error", "code": "story-frontmatter-parse", "message": serr, "path": str(sf)})
                continue
            if in_walk:
                present = set(sfm.keys())
                forbidden = present - STORY_KEYS
                if forbidden:
                    findings.append({"level": "error", "code": "story-extra-keys", "message": f"forbidden top-level keys: {sorted(forbidden)}", "path": str(sf)})
                missing = STORY_REQUIRED - present
                if missing:
                    findings.append({"level": "error", "code": "story-missing-keys", "message": f"missing required keys: {sorted(missing)}", "path": str(sf)})
                if sfm.get("type") not in STORY_TYPES:
                    findings.append({"level": "error", "code": "story-bad-type", "message": f"type={sfm.get('type')!r} not in {sorted(STORY_TYPES)}", "path": str(sf)})
                if sfm.get("status") not in STATUSES:
                    findings.append({"level": "error", "code": "story-bad-status", "message": f"status={sfm.get('status')!r} not in {sorted(STATUSES)}", "path": str(sf)})
                if sfm.get("epic") != ed.name:
                    findings.append({"level": "error", "code": "story-epic-mismatch", "message": f"epic field {sfm.get('epic')!r} does not match folder {ed.name!r}", "path": str(sf)})
            sdeps = sfm.get("depends_on", [])
            if not isinstance(sdeps, list):
                if in_walk:
                    findings.append({"level": "error", "code": "story-deps-not-list", "message": "depends_on must be a list", "path": str(sf)})
                sdeps = []
            mentioned_codes.update(REQUIREMENT_CODE_RE.findall(stext))
            story_index[f"{ed.name}/{sf.stem}"] = {
                "depends_on": [str(d) for d in sdeps],
                "path": sf,
                "epic": ed.name,
                "nn": snn,
                "status": sfm.get("status"),
                "body_len": len(stext),
                "in_walk": in_walk,
            }

        if in_walk and seen_nns:
            expected = list(range(1, len(seen_nns) + 1))
            if sorted(seen_nns) != expected:
                findings.append({"level": "error", "code": "story-numbering-gaps", "message": f"story NNs {sorted(seen_nns)} expected {expected}", "path": str(ed)})

    # depends_on resolution
    epic_nns = {meta["nn"]: name for name, meta in epic_meta.items()}
    for name, meta in epic_meta.items():
        if not meta["in_walk"]:
            continue
        for d in meta["depends_on"]:
            d2 = d.zfill(2)
            if d2 not in epic_nns:
                findings.append({"level": "error", "code": "epic-dep-unresolved", "message": f"epic {name} depends on NN {d!r} which has no folder", "path": str(meta["path"])})

    for skey, smeta in story_index.items():
        if not smeta["in_walk"]:
            continue
        for d in smeta["depends_on"]:
            if "/" in d:
                if f"{d.split('/', 1)[0]}/{d.split('/', 1)[1]}" not in story_index:
                    findings.append({"level": "error", "code": "story-dep-unresolved", "message": f"cross-epic dep {d!r} references missing story", "path": str(smeta["path"])})
            else:
                if f"{smeta['epic']}/{d}" not in story_index:
                    findings.append({"level": "error", "code": "story-dep-unresolved", "message": f"within-epic dep {d!r} not found in {smeta['epic']}", "path": str(smeta["path"])})

    # epic dep cycles (compute on whole tree; report once)
    if walk_set:
        cycle_graph = {meta["nn"]: [d.zfill(2) for d in meta["depends_on"]] for meta in epic_meta.values()}
        for cyc in _find_cycles(cycle_graph):
            findings.append({"level": "error", "code": "epic-dep-cycle", "message": "cycle in epic depends_on: " + " -> ".join(cyc), "path": str(epics_dir)})

    # sizing warnings
    if not lax:
        by_epic: dict[str, list] = defaultdict(list)
        for skey, smeta in story_index.items():
            if smeta["in_walk"]:
                by_epic[smeta["epic"]].append(smeta)
        for epic_name, items in by_epic.items():
            if len(items) < 3:
                continue
            mean = sum(s["body_len"] for s in items) / len(items)
            for smeta in items:
                if mean > 0 and smeta["body_len"] > mean * 3:
                    findings.append({
                        "level": "warning",
                        "code": "story-oversized",
                        "message": f"body {smeta['body_len']} chars is >3x epic mean ({mean:.0f}); consider splitting",
                        "path": str(smeta["path"]),
                    })

    summary = {
        "epics": [
            {"folder": name, "nn": meta["nn"], "depends_on": meta["depends_on"]}
            for name, meta in epic_meta.items() if meta["in_walk"]
        ],
        "story_count": sum(1 for s in story_index.values() if s["in_walk"]),
        "story_status_counts": dict(Counter(s["status"] for s in story_index.values() if s["in_walk"])),
        "errors": sum(1 for f in findings if f["level"] == "error"),
        "warnings": sum(1 for f in findings if f["level"] == "warning"),
        "mentioned_requirements": sorted(mentioned_codes),
    }
    return findings, summary


def main() -> int:
    ap = argparse.ArgumentParser(description=__doc__)
    ap.add_argument("--initiative-store", required=True, type=Path)
    ap.add_argument("--lax", action="store_true", help="Skip sizing warnings; never relaxes schema/dep checks")
    ap.add_argument("--epic", help="Limit reporting to a single epic folder name")
    args = ap.parse_args()

    findings, summary = validate(args.initiative_store, args.lax, args.epic)
    print(json.dumps({"findings": findings, "summary": summary}))
    return 1 if any(f["level"] == "error" for f in findings) else 0


if __name__ == "__main__":
    sys.exit(main())
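A caller consuming the validator's JSON can reproduce the exit-code rule without re-running the script: only error-level findings fail a run, warnings pass through. A sketch against a hand-built report (the finding values are illustrative):

```python
import json

# A hand-built report in the validator's output shape (illustrative values).
report = json.loads("""
{"findings": [
   {"level": "warning", "code": "story-oversized",
    "message": "body 9000 chars is >3x epic mean (2500); consider splitting",
    "path": "epics/01-auth/02-register.md"}],
 "summary": {"story_count": 3, "errors": 0, "warnings": 1}}
""")

# Mirror main(): warnings alone never flip the exit code.
exit_code = 1 if any(f["level"] == "error" for f in report["findings"]) else 0
print(exit_code)  # → 0
```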
@@ -1,255 +0,0 @@
# Step 1: Validate Prerequisites and Extract Requirements

## STEP GOAL:

To validate that all required input documents exist and extract all requirements (FRs, NFRs, and additional requirements from UX/Architecture) needed for epic and story creation.

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading the next step with 'C', ensure the entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS speak output in your agent communication style, using the configured `{communication_language}`

### Role Reinforcement:

- ✅ You are a product strategist and technical specifications writer
- ✅ If you have already been given communication or persona patterns, continue to use them while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring requirements extraction expertise
- ✅ The user brings their product vision and context

### Step-Specific Rules:

- 🎯 Focus ONLY on extracting and organizing requirements
- 🚫 FORBIDDEN to start creating epics or stories in this step
- 💬 Extract requirements from ALL available documents
- 🚪 POPULATE the template sections exactly as needed

## EXECUTION PROTOCOLS:

- 🎯 Extract requirements systematically from all documents
- 💾 Populate {planning_artifacts}/epics.md with extracted requirements
- 📖 Update frontmatter with extraction progress
- 🚫 FORBIDDEN to load the next step until requirements are extracted and the user selects 'C'

## REQUIREMENTS EXTRACTION PROCESS:

### 1. Welcome and Overview

Welcome {user_name} to comprehensive epic and story creation!

**CRITICAL PREREQUISITE VALIDATION:**

Verify that the required documents exist and are complete:

1. **PRD.md** - Contains requirements (FRs and NFRs) and product scope
2. **Architecture.md** - Contains technical decisions, API contracts, data models
3. **UX Design.md** (if UI exists) - Contains interaction patterns, mockups, user flows

### 2. Document Discovery and Validation

Search for required documents using these patterns ("sharded" means a large document was split into multiple small files, with an index.md, inside a folder). If the whole document is found, use it instead of the sharded version:

**PRD Document Search Priority:**

1. `{planning_artifacts}/*prd*.md` (whole document)
2. `{planning_artifacts}/*prd*/index.md` (sharded version)

**Architecture Document Search Priority:**

1. `{planning_artifacts}/*architecture*.md` (whole document)
2. `{planning_artifacts}/*architecture*/index.md` (sharded version)

**UX Design Document Search (Optional):**

1. `{planning_artifacts}/*ux*.md` (whole document)
2. `{planning_artifacts}/*ux*/index.md` (sharded version)

Before proceeding, ask the user if there are any other documents to include for analysis, and whether anything found should be excluded. Wait for user confirmation. Once confirmed, create {planning_artifacts}/epics.md from ../templates/epics-template.md and, in the front matter, list the files in the `inputDocuments: []` array.
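For instance, the front matter of a freshly initialized epics.md might record (a sketch; the filenames are illustrative):

```markdown
---
inputDocuments: ["prd.md", "architecture.md", "ux-design.md"]
---
```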
### 3. Extract Functional Requirements (FRs)

From the PRD document (full or sharded), read the entire document and extract ALL functional requirements:

**Extraction Method:**

- Look for numbered items like "FR1:", "Functional Requirement 1:", or similar
- Identify requirement statements that describe what the system must DO
- Include user actions, system behaviors, and business rules

**Format the FR list as:**

```
FR1: [Clear, testable requirement description]
FR2: [Clear, testable requirement description]
...
```

### 4. Extract Non-Functional Requirements (NFRs)

From the PRD document, extract ALL non-functional requirements:

**Extraction Method:**

- Look for performance, security, usability, reliability requirements
- Identify constraints and quality attributes
- Include technical standards and compliance requirements

**Format the NFR list as:**

```
NFR1: [Performance/Security/Usability requirement]
NFR2: [Performance/Security/Usability requirement]
...
```

### 5. Extract Additional Requirements from Architecture

Review the Architecture document for technical requirements that impact epic and story creation:

**Look for:**

- **Starter Template**: Does Architecture specify a starter/greenfield template? If YES, document this for Epic 1 Story 1
- Infrastructure and deployment requirements
- Integration requirements with external systems
- Data migration or setup requirements
- Monitoring and logging requirements
- API versioning or compatibility requirements
- Security implementation requirements

**IMPORTANT**: If a starter template is mentioned in Architecture, note it prominently. This will impact Epic 1 Story 1.

**Format Additional Requirements as:**

```
- [Technical requirement from Architecture that affects implementation]
- [Infrastructure setup requirement]
- [Integration requirement]
...
```

### 6. Extract UX Design Requirements (if UX document exists)

**IMPORTANT**: The UX Design Specification is a first-class input document, not supplementary material. Requirements from the UX spec must be extracted with the same rigor as PRD functional requirements.

Read the FULL UX Design document and extract ALL actionable work items:

**Look for:**

- **Design token work**: Color systems, spacing scales, typography tokens that need implementation or consolidation
- **Component proposals**: Reusable UI components identified in the UX spec (e.g., ConfirmActions, StatusMessage, EmptyState, FocusIndicator)
- **Visual standardization**: Semantic CSS classes, consistent color palette usage, design pattern consolidation
- **Accessibility requirements**: Contrast audit fixes, ARIA patterns, keyboard navigation, screen reader support
- **Responsive design requirements**: Breakpoints, layout adaptations, mobile-specific interactions
- **Interaction patterns**: Animations, transitions, loading states, error handling UX
- **Browser/device compatibility**: Target platforms, progressive enhancement requirements

**Format UX Design Requirements as a SEPARATE section (not merged into Additional Requirements):**

```
UX-DR1: [Actionable UX design requirement with clear implementation scope]
UX-DR2: [Actionable UX design requirement with clear implementation scope]
...
```

**🚨 CRITICAL**: Do NOT reduce UX requirements to vague summaries. Each UX-DR must be specific enough to generate a story with testable acceptance criteria. If the UX spec identifies 6 reusable components, list all 6 — not "create reusable components."
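A sufficiently specific entry might read, for example (illustrative, not from any real spec):

```
UX-DR3: Implement the StatusMessage component with success/warning/error variants and ARIA live-region announcements for screen readers
```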
### 7. Load and Initialize Template

Load ../templates/epics-template.md and initialize {planning_artifacts}/epics.md:

1. Copy the entire template to {planning_artifacts}/epics.md
2. Replace {{project_name}} with the actual project name
3. Replace placeholder sections with the extracted requirements:
   - {{fr_list}} → extracted FRs
   - {{nfr_list}} → extracted NFRs
   - {{additional_requirements}} → extracted additional requirements (from Architecture)
   - {{ux_design_requirements}} → extracted UX Design Requirements (if a UX document exists)
4. Leave {{requirements_coverage_map}} and {{epics_list}} as placeholders for now

### 8. Present Extracted Requirements

Display to the user:

**Functional Requirements Extracted:**

- Show the count of FRs found
- Display the first few FRs as examples
- Ask if any FRs are missing or incorrectly captured

**Non-Functional Requirements Extracted:**

- Show the count of NFRs found
- Display key NFRs
- Ask if any constraints were missed

**Additional Requirements (Architecture):**

- Summarize technical requirements from Architecture
- Verify completeness

**UX Design Requirements (if applicable):**

- Show the count of UX-DRs found
- Display key UX Design requirements (design tokens, components, accessibility)
- Verify each UX-DR is specific enough for story creation

### 9. Get User Confirmation

Ask: "Do these extracted requirements accurately represent what needs to be built? Any additions or corrections?"

Update the requirements based on user feedback until confirmation is received.

## CONTENT TO SAVE TO DOCUMENT:

After extraction and confirmation, update {planning_artifacts}/epics.md with:

- The complete FR list in the {{fr_list}} section
- The complete NFR list in the {{nfr_list}} section
- All additional requirements in the {{additional_requirements}} section
- UX Design requirements in the {{ux_design_requirements}} section (if a UX document exists)

### 10. Present MENU OPTIONS

Display: `**Confirm the Requirements are complete and correct to [C] continue:**`

#### EXECUTION RULES:

- ALWAYS halt and wait for user input after presenting the menu
- ONLY proceed to the next step when the user selects 'C'
- The user can chat or ask questions - always respond, then redisplay the menu options

#### Menu Handling Logic:

- IF C: Save everything to {planning_artifacts}/epics.md, update the frontmatter, then read fully and follow: ./step-02-design-epics.md
- IF any other comments or queries: help the user, respond, then [Redisplay Menu Options](#10-present-menu-options)

## CRITICAL STEP COMPLETION NOTE

ONLY WHEN 'C' is selected, all requirements are saved to the document, and the frontmatter is updated will you read fully and follow: ./step-02-design-epics.md to begin the epic design step.

---

## 🚨 SYSTEM SUCCESS/FAILURE METRICS

### ✅ SUCCESS:

- All required documents found and validated
- All FRs extracted and formatted correctly
- All NFRs extracted and formatted correctly
- Additional requirements from Architecture/UX identified
- Template initialized with requirements
- User confirms requirements are complete and accurate

### ❌ SYSTEM FAILURE:

- Missing required documents
- Incomplete requirements extraction
- Template not properly initialized
- Not saving requirements to the output file

**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.
# Step 2: Design Epic List

## STEP GOAL:

To design and get approval for the epics_list that will organize all requirements into user-value-focused epics.

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading the next step with 'C', ensure the entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ ALWAYS speak output in your agent communication style, using the configured `{communication_language}`

### Role Reinforcement:

- ✅ You are a product strategist and technical specifications writer
- ✅ If you have already been given communication or persona patterns, continue to use them while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring product strategy and epic design expertise
- ✅ The user brings their product vision and priorities

### Step-Specific Rules:

- 🎯 Focus ONLY on creating the epics_list
- 🚫 FORBIDDEN to create individual stories in this step
- 💬 Organize epics around user value, not technical layers
- 🚪 GET explicit approval for the epics_list
- 🔗 **CRITICAL: Each epic must stand alone and enable future epics without requiring them in order to function**

## EXECUTION PROTOCOLS:

- 🎯 Design epics collaboratively based on the extracted requirements
- 💾 Update {{epics_list}} in {planning_artifacts}/epics.md
- 📖 Document the FR coverage mapping
- 🚫 FORBIDDEN to load the next step until the user approves the epics_list

## EPIC DESIGN PROCESS:

### 1. Review Extracted Requirements

Load {planning_artifacts}/epics.md and review:

- **Functional Requirements:** Count and review the FRs from Step 1
- **Non-Functional Requirements:** Review the NFRs that need to be addressed
- **Additional Requirements:** Review technical and UX requirements

### 2. Explain Epic Design Principles

**EPIC DESIGN PRINCIPLES:**

1. **User-Value First**: Each epic must enable users to accomplish something meaningful
2. **Requirements Grouping**: Group related FRs that deliver cohesive user outcomes
3. **Incremental Delivery**: Each epic should deliver value independently
4. **Logical Flow**: Natural progression from the user's perspective
5. **Dependency-Free Within Epic**: Stories within an epic must NOT depend on future stories
6. **Implementation Efficiency**: Consider consolidating epics that all modify the same core files into fewer epics

**⚠️ CRITICAL PRINCIPLE:**
Organize by USER VALUE, not technical layers:

**✅ CORRECT Epic Examples (Standalone & Enable Future Epics):**

- Epic 1: User Authentication & Profiles (users can register, log in, manage profiles) - **Standalone: complete auth system**
- Epic 2: Content Creation (users can create, edit, publish content) - **Standalone: uses auth, creates content**
- Epic 3: Social Interaction (users can follow, comment, like content) - **Standalone: uses auth + content**
- Epic 4: Search & Discovery (users can find content and other users) - **Standalone: uses all previous epics**

**❌ WRONG Epic Examples (Technical Layers or Dependencies):**

- Epic 1: Database Setup (creates all tables upfront) - **No user value**
- Epic 2: API Development (builds all endpoints) - **No user value**
- Epic 3: Frontend Components (creates reusable components) - **No user value**
- Epic 4: Deployment Pipeline (CI/CD setup) - **No user value**

**❌ WRONG Epic Examples (File Churn on the Same Component):**

- Epic 1: File Upload (modifies model, controller, web form, web API)
- Epic 2: File Status (modifies model, controller, web form, web API)
- Epic 3: File Access Permissions (modifies model, controller, web form, web API)
- All three epics touch the same files, so consolidate them into one epic with ordered stories

**✅ CORRECT Alternative:**

- Epic 1: File Management Enhancement (upload, status, permissions as stories within one epic)
- Rationale: Single component, fully pre-designed, no feedback loop between epics
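The file-churn consolidation signal can be approximated programmatically once each epic's touched files are known. A sketch; the function name and epic data are illustrative, not part of the workflow:

```python
from itertools import combinations

def shared_files(epic_files: dict) -> dict:
    """Return files touched by more than one epic, keyed by epic pair.

    A large overlap between two epics is a churn signal suggesting
    they be consolidated into one epic with ordered stories.
    """
    overlaps = {}
    for (a, files_a), (b, files_b) in combinations(epic_files.items(), 2):
        common = files_a & files_b
        if common:
            overlaps[(a, b)] = common
    return overlaps

epics = {
    "File Upload": {"model.py", "controller.py", "web_form.py"},
    "File Status": {"model.py", "controller.py"},
    "Search": {"search.py"},
}
# shared_files(epics) flags ("File Upload", "File Status")
# as sharing model.py and controller.py
```

Incidental sharing (one common utility file) still shows up here, so the output is a prompt for the consolidation conversation, not an automatic verdict.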
**🔗 DEPENDENCY RULES:**

- Each epic must deliver COMPLETE functionality for its domain
- Epic 2 must not require Epic 3 to function
- Epic 3 can build upon Epics 1 & 2 but must stand alone

### 3. Design Epic Structure Collaboratively

**Step A: Assess Context and Identify Themes**

First, assess how much of the solution design is already validated (Architecture, UX, Test Design). When the outcome is certain and direction changes between epics are unlikely, prefer fewer but larger epics. Split into multiple epics when there is a genuine risk boundary or when early feedback could change the direction of the epics that follow.

Then, identify user value themes:

- Look for natural groupings in the FRs
- Identify user journeys or workflows
- Consider user types and their goals

**Step B: Propose Epic Structure**

For each proposed epic (considering whether epics share the same core files):

1. **Epic Title**: User-centric, value-focused
2. **User Outcome**: What users can accomplish after this epic
3. **FR Coverage**: Which FR numbers this epic addresses
4. **Implementation Notes**: Any technical or UX considerations

**Step C: Review for File Overlap**

Assess whether multiple proposed epics repeatedly target the same core files. If the overlap is significant:

- Distinguish meaningful overlap (same component end-to-end) from incidental sharing
- Ask whether to consolidate into one epic with ordered stories
- If confirmed, merge the epics' FRs into a single epic while preserving the dependency flow: each story must still fit within a single dev agent's context

**Step D: Create the epics_list**

Format the epics_list as:

```
## Epic List

### Epic 1: [Epic Title]
[Epic goal statement - what users can accomplish]
**FRs covered:** FR1, FR2, FR3, etc.

### Epic 2: [Epic Title]
[Epic goal statement - what users can accomplish]
**FRs covered:** FR4, FR5, FR6, etc.

[Continue for all epics]
```

### 4. Present Epic List for Review

Display the complete epics_list to the user with:

- Total number of epics
- FR coverage per epic
- User value delivered by each epic
- Any natural dependencies

### 5. Create Requirements Coverage Map

Create {{requirements_coverage_map}} showing how each FR maps to an epic:

```
### FR Coverage Map

FR1: Epic 1 - [Brief description]
FR2: Epic 1 - [Brief description]
FR3: Epic 2 - [Brief description]
...
```

This ensures no FRs are missed.
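The "no FRs missed" guarantee can be spot-checked mechanically. A minimal sketch, assuming FRs appear as `FRn` identifiers in both the requirements inventory and the coverage map; the function name and sample data are illustrative:

```python
import re

def missing_frs(fr_list: str, coverage_map: str) -> list:
    """Return FR identifiers present in the requirements inventory
    but absent from the FR coverage map."""
    inventory = set(re.findall(r"\bFR\d+\b", fr_list))
    covered = set(re.findall(r"\bFR\d+\b", coverage_map))
    # Sort numerically so FR10 follows FR9, not FR1.
    return sorted(inventory - covered, key=lambda fr: int(fr[2:]))

fr_list = "FR1: register\nFR2: reset password\nFR3: publish post"
coverage_map = "FR1: Epic 1 - auth\nFR3: Epic 2 - content"
# missing_frs(fr_list, coverage_map) -> ["FR2"], i.e. FR2 has no epic assigned
```

An empty result means every inventoried FR is mapped; anything else should be resolved with the user before approval.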
### 6. Collaborative Refinement

Ask the user:

- "Does this epic structure align with your product vision?"
- "Are all user outcomes properly captured?"
- "Should we adjust any epic groupings?"
- "Are there natural dependencies we've missed?"

### 7. Get Final Approval

**CRITICAL:** You must get explicit user approval:
"Do you approve this epic structure for proceeding to story creation?"

If the user wants changes:

- Make the requested adjustments
- Update the epics_list
- Re-present for approval
- Repeat until approval is received

## CONTENT TO UPDATE IN DOCUMENT:

After approval, update {planning_artifacts}/epics.md:

1. Replace the {{epics_list}} placeholder with the approved epic list
2. Replace {{requirements_coverage_map}} with the coverage map
3. Ensure all FRs are mapped to epics

### 8. Present MENU OPTIONS

Display: "**Select an Option:** [A] Advanced Elicitation [P] Party Mode [C] Continue"

#### Menu Handling Logic:

- IF A: Invoke the `bmad-advanced-elicitation` skill
- IF P: Invoke the `bmad-party-mode` skill
- IF C: Save the approved epics_list to {planning_artifacts}/epics.md, update the frontmatter, then read fully and follow: ./step-03-create-stories.md
- IF any other comment or query: help the user respond, then [Redisplay Menu Options](#8-present-menu-options)

#### EXECUTION RULES:

- ALWAYS halt and wait for user input after presenting the menu
- ONLY proceed to the next step when the user selects 'C'
- After any other menu item completes, redisplay the menu
- The user can chat or ask questions: always respond and, when the conversation ends, redisplay the menu options

## CRITICAL STEP COMPLETION NOTE

ONLY WHEN C is selected and the approved epics_list is saved to the document will you read fully and follow: ./step-03-create-stories.md to begin the story creation step.

---

## 🚨 SYSTEM SUCCESS/FAILURE METRICS

### ✅ SUCCESS:

- Epics designed around user value
- All FRs mapped to specific epics
- epics_list created and formatted correctly
- Requirements coverage map completed
- User gives explicit approval for the epic structure
- Document updated with the approved epics

### ❌ SYSTEM FAILURE:

- Epics organized by technical layers
- Missing FRs in the coverage map
- No user approval obtained
- epics_list not saved to the document

**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.
# Step 3: Generate Epics and Stories

## STEP GOAL:

To generate all epics with their stories based on the approved epics_list, following the template structure exactly.

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: Process epics sequentially
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ ALWAYS speak output in your agent communication style, using the configured `{communication_language}`

### Role Reinforcement:

- ✅ You are a product strategist and technical specifications writer
- ✅ If you have already been given communication or persona patterns, continue to use them while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring story creation and acceptance criteria expertise
- ✅ The user brings their implementation priorities and constraints

### Step-Specific Rules:

- 🎯 Generate stories for each epic following the template exactly
- 🚫 FORBIDDEN to deviate from the template structure
- 💬 Each story must have clear acceptance criteria
- 🚪 ENSURE each story is completable by a single dev agent
- 🔗 **CRITICAL: Stories MUST NOT depend on future stories within the same epic**

## EXECUTION PROTOCOLS:

- 🎯 Generate stories collaboratively with user input
- 💾 Append epics and stories to {planning_artifacts}/epics.md following the template
- 📖 Process epics one at a time, in sequence
- 🚫 FORBIDDEN to skip any epic or rush through stories

## STORY GENERATION PROCESS:

### 1. Load Approved Epic Structure

Load {planning_artifacts}/epics.md and review:

- Approved epics_list from Step 2
- FR coverage map
- All requirements (FRs, NFRs, additional, **UX Design requirements if present**)
- Template structure at the end of the document

**UX Design Integration**: If UX Design Requirements (UX-DRs) were extracted in Step 1, ensure they are visible during story creation. UX-DRs must be covered by stories, either within existing epics (e.g., accessibility fixes for a feature epic) or in a dedicated "Design System / UX Polish" epic.

### 2. Explain Story Creation Approach

**STORY CREATION GUIDELINES:**

For each epic, create stories that:

- Follow the exact template structure
- Are sized for completion by a single dev agent
- Have clear user value
- Include specific acceptance criteria
- Reference the requirements being fulfilled

**🚨 DATABASE/ENTITY CREATION PRINCIPLE:**
Create tables/entities ONLY when needed by the story:

- ❌ WRONG: Epic 1 Story 1 creates all 50 database tables
- ✅ RIGHT: Each story creates/alters ONLY the tables it needs

**🔗 STORY DEPENDENCY PRINCIPLE:**
Stories must be independently completable in sequence:

- ❌ WRONG: Story 1.2 requires Story 1.3 to be completed first
- ✅ RIGHT: Each story can be completed based only on previous stories
- ❌ WRONG: "Wait for Story 1.4 to be implemented before this works"
- ✅ RIGHT: "This story works independently and enables future stories"

**STORY FORMAT (from template):**

```
### Story {N}.{M}: {story_title}

As a {user_type},
I want {capability},
So that {value_benefit}.

**Acceptance Criteria:**

**Given** {precondition}
**When** {action}
**Then** {expected_outcome}
**And** {additional_criteria}
```

**✅ GOOD STORY EXAMPLES:**

_Epic 1: User Authentication_

- Story 1.1: User Registration with Email
- Story 1.2: User Login with Password
- Story 1.3: Password Reset via Email

_Epic 2: Content Creation_

- Story 2.1: Create New Blog Post
- Story 2.2: Edit Existing Blog Post
- Story 2.3: Publish Blog Post

**❌ BAD STORY EXAMPLES:**

- Story: "Set up database" (no user value)
- Story: "Create all models" (too large, no user value)
- Story: "Build authentication system" (too large)
- Story: "Login UI (depends on Story 1.3 API endpoint)" (future dependency!)
- Story: "Edit post (requires Story 1.4 to be implemented first)" (wrong order!)
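The forward-dependency rule can be linted mechanically once story text exists. A sketch, assuming stories reference each other as `Story N.M` in their text; the function name and sample stories are illustrative:

```python
import re

def forward_dependencies(stories: dict) -> list:
    """Given {'1.2': story_text, ...}, return (story, referenced_story)
    pairs where a story's text references a LATER story."""
    def key(story_id):
        return tuple(int(part) for part in story_id.split("."))

    violations = []
    for story_id, text in stories.items():
        for ref in re.findall(r"\bStory (\d+\.\d+)\b", text):
            if key(ref) > key(story_id):
                violations.append((story_id, ref))
    return violations

stories = {
    "1.1": "User registration with email.",
    "1.2": "Login UI (depends on Story 1.3 API endpoint).",
    "1.3": "Auth API endpoint.",
}
# forward_dependencies(stories) -> [("1.2", "1.3")]
```

A non-empty result means at least one story leans on a future story and must be reordered or rewritten before development starts.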
### 3. Process Epics Sequentially

For each epic in the approved epics_list:

#### A. Epic Overview

Display:

- Epic number and title
- Epic goal statement
- FRs covered by this epic
- Any relevant NFRs or additional requirements
- Any UX Design Requirements (UX-DRs) relevant to this epic

#### B. Story Breakdown

Work with the user to break the epic down into stories:

- Identify distinct user capabilities
- Ensure logical flow within the epic
- Size stories appropriately

#### C. Generate Each Story

For each story in the epic:

1. **Story Title**: Clear, action-oriented
2. **User Story**: Complete the As a / I want / So that format
3. **Acceptance Criteria**: Write specific, testable criteria

**AC Writing Guidelines:**

- Use the Given/When/Then format
- Each AC should be independently testable
- Include edge cases and error conditions
- Reference specific requirements when applicable
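These guidelines lend themselves to a quick structural lint. A minimal sketch, assuming ACs use the bolded Gherkin keywords shown in the story format; the function name is illustrative:

```python
def ac_is_testable_shape(ac_block: str) -> bool:
    """True when an acceptance-criteria block carries the full
    Given/When/Then structure (the 'And' clause stays optional)."""
    return all(keyword in ac_block
               for keyword in ("**Given**", "**When**", "**Then**"))

good = ("**Given** a registered user\n"
        "**When** they submit valid credentials\n"
        "**Then** they are logged in")
vague = "User can log in."
# ac_is_testable_shape(good) -> True; ac_is_testable_shape(vague) -> False
```

The check only catches missing structure; whether each clause is genuinely testable still requires the collaborative review below.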
#### D. Collaborative Review

After writing each story, present it to the user and ask:

- "Does this story capture the requirement correctly?"
- "Is the scope appropriate for a single dev session?"
- "Are the acceptance criteria complete and testable?"

#### E. Append to Document

When a story is approved:

- Append it to {planning_artifacts}/epics.md following the template structure
- Use correct numbering (Epic N, Story M)
- Maintain proper markdown formatting

### 4. Epic Completion

After all stories for an epic are complete:

- Display an epic summary
- Show the count of stories created
- Verify all FRs for the epic are covered
- Get user confirmation to proceed to the next epic

### 5. Repeat for All Epics

Continue the process for each epic in the approved list, processing them in order (Epic 1, Epic 2, etc.).

### 6. Final Document Completion

After all epics and stories are generated:

- Verify the document follows the template structure exactly
- Ensure all placeholders are replaced
- Confirm all FRs are covered
- **Confirm all UX Design Requirements (UX-DRs) are covered by at least one story** (if a UX document was an input)
- Check formatting consistency

## TEMPLATE STRUCTURE COMPLIANCE:

The final {planning_artifacts}/epics.md must follow this structure exactly:

1. **Overview** section with the project name
2. **Requirements Inventory** with all three subsections populated
3. **FR Coverage Map** showing the requirement-to-epic mapping
4. **Epic List** with the approved epic structure
5. **Epic sections** for each epic (N = 1, 2, 3...)
   - Epic title and goal
   - All stories for that epic (M = 1, 2, 3...)
   - Story title and user story
   - Acceptance Criteria using the Given/When/Then format

### 7. Present FINAL MENU OPTIONS

After all epics and stories are complete:

Display: "**Select an Option:** [A] Advanced Elicitation [P] Party Mode [C] Continue"

#### Menu Handling Logic:

- IF A: Invoke the `bmad-advanced-elicitation` skill
- IF P: Invoke the `bmad-party-mode` skill
- IF C: Save the content to {planning_artifacts}/epics.md, update the frontmatter, then read fully and follow: ./step-04-final-validation.md
- IF any other comment or query: help the user respond, then [Redisplay Menu Options](#7-present-final-menu-options)

#### EXECUTION RULES:

- ALWAYS halt and wait for user input after presenting the menu
- ONLY proceed to the next step when the user selects 'C'
- After any other menu item completes, return to this menu
- The user can chat or ask questions: always respond, then redisplay the menu options

## CRITICAL STEP COMPLETION NOTE

ONLY WHEN the [C] Continue option is selected and all epics and stories are saved to the document following the template structure exactly will you read fully and follow: `./step-04-final-validation.md` to begin the final validation phase.

---

## 🚨 SYSTEM SUCCESS/FAILURE METRICS

### ✅ SUCCESS:

- All epics processed in sequence
- Stories created for each epic
- Template structure followed exactly
- All FRs covered by stories
- Stories appropriately sized
- Acceptance criteria are specific and testable
- Document is complete and ready for development

### ❌ SYSTEM FAILURE:

- Deviating from the template structure
- Missing epics or stories
- Stories too large or unclear
- Missing acceptance criteria
- Not following proper formatting

**Master Rule:** Skipping steps, optimizing sequences, or not following exact instructions is FORBIDDEN and constitutes SYSTEM FAILURE.
# Step 4: Final Validation

## STEP GOAL:

To validate complete coverage of all requirements and ensure the stories are ready for development.

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: Process the validation sequentially without skipping
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ ALWAYS speak output in your agent communication style, using the configured `{communication_language}`

### Role Reinforcement:

- ✅ You are a product strategist and technical specifications writer
- ✅ If you have already been given communication or persona patterns, continue to use them while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring validation expertise and quality assurance
- ✅ The user brings their implementation priorities and final review

### Step-Specific Rules:

- 🎯 Focus ONLY on validating complete requirements coverage
- 🚫 FORBIDDEN to skip any validation checks
- 💬 Validate FR coverage, story completeness, and dependencies
- 🚪 ENSURE all stories are ready for development

## EXECUTION PROTOCOLS:

- 🎯 Validate that every requirement has story coverage
- 💾 Check story dependencies and flow
- 📖 Verify architecture compliance
- 🚫 FORBIDDEN to approve incomplete coverage

## CONTEXT BOUNDARIES:

- Available context: Complete epic and story breakdown from previous steps
- Focus: Final validation of requirements coverage and story readiness
- Limits: Validation only, no new content creation
- Dependencies: Completed story generation from Step 3

## VALIDATION PROCESS:

### 1. FR Coverage Validation

Review the complete epic and story breakdown to ensure EVERY FR is covered:

**CRITICAL CHECK:**

- Go through each FR from the Requirements Inventory
- Verify it appears in at least one story
- Check that the acceptance criteria fully address the FR
- No FRs should be left uncovered

### 2. Architecture Implementation Validation

**Check for Starter Template Setup:**

- Does the Architecture document specify a starter template?
- If YES: Epic 1 Story 1 must be "Set up initial project from starter template"
- This includes cloning, installing dependencies, and initial configuration

**Database/Entity Creation Validation:**

- Are database tables/entities created ONLY when needed by stories?
- ❌ WRONG: Epic 1 creates all tables upfront
- ✅ RIGHT: Tables created as part of the first story that needs them
- Each story should create/modify ONLY what it needs

### 3. Story Quality Validation

**Each story must:**

- Be completable by a single dev agent
- Have clear acceptance criteria
- Reference the specific FRs it implements
- Include the necessary technical details
- **Not have forward dependencies** (it can only depend on PREVIOUS stories)
- Be implementable without waiting for future stories

### 4. Epic Structure Validation

**Check that:**

- Epics deliver user value, not technical milestones
- Dependencies flow naturally
- Foundation stories set up only what is needed
- There is no big upfront technical work
- **File Churn Check:** Do multiple epics repeatedly modify the same core files?
  - Assess whether the overlap pattern suggests unnecessary churn or is incidental
  - If the overlap is significant: validate that splitting provides genuine value (risk mitigation, feedback loops, context size limits)
  - If there is no justification for the split: recommend consolidation into fewer epics
  - ❌ WRONG: Multiple epics each modify the same core files with no feedback loop between them
  - ✅ RIGHT: Epics target distinct files/components, OR consolidation was explicitly considered and rejected with rationale
### 5. Dependency Validation (CRITICAL)

**Epic Independence Check:**

- Does each epic deliver COMPLETE functionality for its domain?
- Can Epic 2 function without Epic 3 being implemented?
- Can Epic 3 function standalone using Epic 1 & 2 outputs?
- ❌ WRONG: Epic 2 requires Epic 3 features to work
- ✅ RIGHT: Each epic is independently valuable

**Within-Epic Story Dependency Check:**
For each epic, review the stories in order:

- Can Story N.1 be completed without Stories N.2, N.3, etc.?
- Can Story N.2 be completed using only Story N.1's output?
- Can Story N.3 be completed using only Stories N.1 & N.2's outputs?
- ❌ WRONG: "This story depends on a future story"
- ❌ WRONG: A story references features not yet implemented
- ✅ RIGHT: Each story builds only on previous stories

### 6. Complete and Save

If all validations pass:

- Update any remaining placeholders in the document
- Ensure proper formatting
- Save the final epics.md

**Present Final Menu:**
**All validations complete!** [C] Complete Workflow

HALT and wait for user input before proceeding.

When C is selected, the workflow is complete and the epics.md is ready for development.

Epics and Stories complete. Invoke the `bmad-help` skill.

Upon completion of the task output: offer to answer any questions about the Epics and Stories.

## On Complete

Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete`

If the resolved `workflow.on_complete` is non-empty, follow it as the final terminal instruction before exiting.
---
stepsCompleted: []
inputDocuments: []
---

# {{project_name}} - Epic Breakdown

## Overview

This document provides the complete epic and story breakdown for {{project_name}}, decomposing the requirements from the PRD, the UX Design (if it exists), and the Architecture into implementable stories.

## Requirements Inventory

### Functional Requirements

{{fr_list}}

### Non-Functional Requirements

{{nfr_list}}

### Additional Requirements

{{additional_requirements}}

### UX Design Requirements

{{ux_design_requirements}}

### FR Coverage Map

{{requirements_coverage_map}}

## Epic List

{{epics_list}}

<!-- Repeat for each epic in epics_list (N = 1, 2, 3...) -->

## Epic {{N}}: {{epic_title_N}}

{{epic_goal_N}}

<!-- Repeat for each story (M = 1, 2, 3...) within epic N -->

### Story {{N}}.{{M}}: {{story_title_N_M}}

As a {{user_type}},
I want {{capability}},
So that {{value_benefit}}.

**Acceptance Criteria:**

<!-- for each AC on this story -->

**Given** {{precondition}}
**When** {{action}}
**Then** {{expected_outcome}}
**And** {{additional_criteria}}

<!-- End story repeat -->