Merge branch 'main' into feat/kimi-code-support

This commit is contained in:
Brian 2026-04-24 18:21:44 -05:00 committed by GitHub
commit 7b02233215
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
41 changed files with 4130 additions and 1562 deletions

View File

@ -1,122 +1,226 @@
---
title: 'How to Install BMad'
description: Install, update, and pin BMad for local development, teams, and CI
sidebar:
order: 1
---
Use `npx bmad-method install` to set up BMad in your project. One command handles first installs, upgrades, channel switching, and scripted CI runs. This page covers all of it.
## When to Use This
- Starting a new project with BMad
- Adding BMad to an existing codebase
- Updating an existing BMad installation
- Adding or removing modules on an existing install
- Switching a module to main-HEAD or pinning to a specific release
- Scripting installs for CI pipelines, Dockerfiles, or enterprise rollouts
:::note[Prerequisites]
- **Node.js** 20+ (the installer requires it)
- **Git** (for cloning external modules)
- **An AI tool** such as Claude Code or Cursor — or install without one using `--tools none`
:::
## First-time install (the fast path)
```bash
npx bmad-method install
```
The interactive flow asks you five things:
1. Installation directory (defaults to the current working directory)
2. Which modules to install (checkboxes for core, bmm, bmb, cis, gds, tea)
3. **"Ready to install (all stable)?"** — Yes accepts the latest released tag for every external module
4. Which AI tools/IDEs to integrate with (claude-code, cursor, and others)
5. Per-module config (name, language, output folder)
Accept the defaults and you land on the latest stable release of every module, configured for your chosen tool.
:::tip[Just want the newest prerelease?]
```bash
npx bmad-method@next install
```
Runs the prerelease installer, which ships a newer snapshot of core and bmm. More churn, fewer delays between development and release.
:::
## Picking a specific version
Two independent axes control what ends up on disk.
### Axis 1: external module channels
Every external module — bmb, cis, gds, tea, and any community module — installs on one of three channels:
| Channel | What gets installed | Who picks this |
| ------------------ | ---------------------------------------------------------------------------- | --------------------------------------- |
| `stable` (default) | Highest released semver tag. Prereleases like `v2.0.0-alpha.1` are excluded. | Most users |
| `next` | Main branch HEAD at install time | Contributors, early adopters |
| `pinned` | A specific tag you name | Enterprise installs, CI reproducibility |
Channels are per-module. You can run bmb on `next` while leaving cis on `stable` — the flags below let you mix freely.
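The `stable` row amounts to a semver sort with prereleases filtered out. A rough sketch of that resolution (a hypothetical helper, not installer code; it assumes plain `vX.Y.Z` tags and GNU `sort -V`, whereas the installer actually resolves tags through the GitHub API):

```shell
# Sketch only: pick the highest non-prerelease tag from a list on stdin.
latest_stable() {
  grep -v -- '-' | sort -V | tail -n 1   # drop prereleases like v2.0.0-alpha.1
}

printf 'v1.7.0\nv2.0.0-alpha.1\nv1.8.2\n' | latest_stable   # v1.8.2
```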
### Axis 2: installer binary version
The `bmad-method` npm package itself has two dist-tags:
| Command | What you get |
| ------------------------------------- | ----------------------------------------------------------------- |
| `npx bmad-method install` (`@latest`) | Latest stable installer release |
| `npx bmad-method@next install` | Latest prerelease installer, auto-published on every push to main |
**The installer binary determines your core and bmm versions.** Those two modules ship bundled inside the installer package rather than being cloned from separate repos.
### Why core and bmm don't have their own channel
They're stapled to the installer binary you ran:
- `npx bmad-method install` → latest stable core and bmm
- `npx bmad-method@next install` → prerelease core and bmm
- `node /path/to/local-checkout/tools/installer/bmad-cli.js install` → whatever your local checkout has
`--pin bmm=v6.3.0` and `--next=bmm` are ineffective against bundled modules, and the installer warns you when you try. A future release extracts bmm from the installer package; once that ships, bmm gets a proper channel selector like bmb has today.
## Updating an existing install
Running `npx bmad-method install` in a directory that already contains `_bmad/` gives you a menu:
| Choice | What it does |
| ------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Quick Update** | Re-runs the install with your existing settings. Refreshes files, applies patches and minor stable upgrades, refuses major upgrades. Fast, non-interactive. |
| **Modify Install** | Full interactive flow. Add or remove modules, reconfigure settings, optionally review and switch channels for existing modules. |
### Upgrade prompts
When Modify detects a newer stable tag for a module you've installed on `stable`, it classifies the diff and prompts accordingly:
| Upgrade type | Example | Default |
| ------------ | --------------- | ------- |
| Patch | v1.7.0 → v1.7.1 | Y |
| Minor | v1.7.0 → v1.8.0 | Y |
| Major | v1.7.0 → v2.0.0 | **N** |
Major defaults to N because breaking changes frequently surface as "instability" when they weren't expected. The prompt includes a GitHub release-notes URL so you can read what changed before accepting.
Under `--yes`, patch and minor upgrades apply automatically. Majors stay frozen — pass `--pin <code>=<new-tag>` to accept non-interactively.
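The patch/minor/major classification above is ordinary semver comparison. A hypothetical sketch, assuming plain `vX.Y.Z` tags (not the installer's actual code):

```shell
# Classify an upgrade by comparing major, then minor, version components.
upgrade_type() {
  local old="${1#v}" new="${2#v}"              # strip the leading "v"
  local old_major="${old%%.*}" new_major="${new%%.*}"
  local old_rest="${old#*.}"   new_rest="${new#*.}"
  local old_minor="${old_rest%%.*}" new_minor="${new_rest%%.*}"
  if   [ "$old_major" != "$new_major" ]; then echo major
  elif [ "$old_minor" != "$new_minor" ]; then echo minor
  else echo patch; fi
}

upgrade_type v1.7.0 v1.7.1   # patch
upgrade_type v1.7.0 v2.0.0   # major
```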
### Switching a module's channel
**Interactively:** choose Modify → answer **Yes** to "Review channel assignments?" → each external module offers Keep, Switch to stable, Switch to next, or Pin to a tag.
**Via flags:** the recipes in the next section cover the common cases.
## Headless CI installs
### Flag reference
| Flag | Purpose |
| ------------------------------------------------------------------------------------------ | ---------------------------------------------------------------------------------- |
| `--yes`, `-y` | Skip all prompts; accept flag values + defaults |
| `--directory <path>` | Install into this directory (default: current working dir) |
| `--modules <a,b,c>` | Exact module set. Core is auto-added. Not a delta — list everything you want kept. |
| `--tools <a,b>` or `--tools none` | IDE/tool selection. `none` skips tool config entirely. |
| `--action <type>` | `install`, `update`, or `quick-update`. Defaults based on existing install state. |
| `--custom-source <urls>` | Install custom modules from Git URLs or local paths |
| `--channel <stable\|next>` | Apply to all externals (aliased as `--all-stable` / `--all-next`) |
| `--all-stable` | Alias for `--channel=stable` |
| `--all-next` | Alias for `--channel=next` |
| `--next=<code>` | Put one module on next. Repeatable. |
| `--pin <code>=<tag>` | Pin one module to a specific tag. Repeatable. |
| `--user-name`, `--communication-language`, `--document-output-language`, `--output-folder` | Override per-user config defaults |
Precedence when flags overlap: `--pin` beats `--next=` beats `--channel` / `--all-*` beats the registry default (`stable`).
:::note[Example resolution]
`--all-next --pin cis=v0.2.0` puts bmb, gds, and tea on next while pinning cis to v0.2.0.
:::
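That precedence chain can be mimicked in a few lines of shell. A hypothetical sketch, not installer code:

```shell
# --pin beats --next= beats --channel/--all-* beats the registry default.
resolve_channel() {  # resolve_channel <module> <pinned-csv> <next-csv> <global>
  local mod="$1"
  case ",$2," in *",$mod,"*) echo pinned; return ;; esac
  case ",$3," in *",$mod,"*) echo next;   return ;; esac
  echo "${4:-stable}"
}

# --all-next --pin cis=v0.2.0 resolves as:
resolve_channel cis "cis" "" next   # pinned
resolve_channel bmb "cis" "" next   # next
```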
### Recipes
**Default install — latest stable for everything:**
```bash
npx bmad-method install --yes --modules bmm,bmb,cis --tools claude-code
```
**Enterprise pin — reproducible byte-for-byte:**
```bash
npx bmad-method install --yes \
--modules bmm,bmb,cis \
--pin bmb=v1.7.0 --pin cis=v0.2.0 \
--tools claude-code
```
**Bleeding edge — externals on main HEAD:**
```bash
npx bmad-method install --yes --modules bmm,bmb --all-next --tools claude-code
```
**Add a module to an existing install** (keep everything else):
```bash
npx bmad-method install --yes --action update \
--modules bmm,bmb,gds \
--tools none
```
**Mix channels — bmb on next, gds on stable:**
```bash
npx bmad-method install --yes --action update \
--modules bmm,bmb,cis,gds \
--next=bmb \
--tools none
```
:::caution[Rate limit on shared IPs]
Anonymous GitHub API calls are capped at 60/hour per IP. A single install hits the API once per external module to resolve the stable tag. Offices behind NAT, CI runner pools, and VPNs can collectively exhaust this.
Set `GITHUB_TOKEN=<personal access token>` in the environment to raise the limit to 5000/hour per account. Any public-repo-read PAT works; no scopes are required.
:::
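In CI, that usually means injecting the token into the install step's environment. A hypothetical GitHub Actions fragment (the step name and module choice are illustrative):

```yaml
- name: Install BMad
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}  # lifts the API cap to 5000/hour
  run: npx bmad-method install --yes --modules bmm --tools none
```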
## What got installed
After any install, `_bmad/_config/manifest.yaml` records exactly what's on disk:
```yaml
modules:
- name: bmb
version: v1.7.0 # the tag, or "main" for next
channel: stable # stable | next | pinned
sha: 86033fc9aeae2ca6d52c7cdb675c1f4bf17fc1c1
source: external
repoUrl: https://github.com/bmad-code-org/bmad-builder
```
The `sha` field is written for git-backed modules (external, community, and URL-based custom). Bundled modules (core, bmm) and local-path custom modules don't have one — their code travels with the installer binary or your filesystem, not a cloneable ref.
For cross-machine reproducibility, don't rely on rerunning the same `--modules` command. Stable-channel installs resolve to the highest released tag **at install time**, so a later rerun lands on whatever has been released since. Convert the recorded tags from `manifest.yaml` into explicit `--pin` flags on the target machine, e.g.:
```bash
npx bmad-method install --yes --modules bmb,cis \
--pin bmb=v1.7.0 --pin cis=v0.4.2 --tools none
```
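Generating those `--pin` flags from a recorded manifest can itself be scripted. A rough sketch (a hypothetical helper, not shipped with BMad) that assumes the `- name:` / `version:` layout shown in the manifest example and that every listed module is git-backed:

```shell
# Turn manifest module entries on stdin into --pin flags.
pins_from_manifest() {
  awk '$1 == "-" && $2 == "name:" { mod = $3 }
       $1 == "version:"          { printf "--pin %s=%s ", mod, $2 }'
}

# In practice: pins_from_manifest < _bmad/_config/manifest.yaml
printf -- '- name: bmb\n  version: v1.7.0\n' | pins_from_manifest   # --pin bmb=v1.7.0
```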
## Troubleshooting
### "Could not resolve stable tag" or "API rate limit exceeded"
You've hit GitHub's 60/hr anonymous limit. Set `GITHUB_TOKEN` and retry. If you already have a token set, it may be expired or rate-limited on its own budget — try a different token or wait for the hourly reset.
### "Tag 'vX.Y.Z' not found"
The tag you passed to `--pin` doesn't exist in the module's repo. Check the repo's releases page on GitHub for valid tags.
### A pinned install keeps upgrading
Pinned installs don't upgrade. Quick-update applies patches and minors on the stable channel only; it won't touch `pinned` or `next`. If a pinned install changed, open `_bmad/_config/manifest.yaml`: `channel: pinned` plus a fixed `version` and `sha` should hold across runs unless you explicitly override via flags.
### `--pin bmm=X` didn't do anything
bmm is a bundled module — `--pin` and `--next=` don't apply. Use `npx bmad-method@next install` for a prerelease core/bmm, or check out the bmad-bmm repo and run the installer locally to get unreleased changes.

View File

@ -1,196 +1,10 @@
---
title: Non-Interactive Installation
description: Headless / CI install docs have moved
sidebar:
order: 2
---
:::note[This page has moved]
Headless and CI install flags, channel selection, and pinning now live in the unified [How to Install BMad](./install-bmad.md) guide. Jump to the [Headless / CI installs](./install-bmad.md#headless-ci-installs) section for the flag reference and copy-paste recipes.
:::

package-lock.json generated
View File

@ -15,7 +15,6 @@
"chalk": "^4.1.2",
"commander": "^14.0.0",
"csv-parse": "^6.1.0",
"fs-extra": "^11.3.0",
"glob": "^11.0.3",
"ignore": "^7.0.5",
"js-yaml": "^4.1.0",
@ -25,8 +24,8 @@
"yaml": "^2.7.0"
},
"bin": {
"bmad": "tools/bmad-npx-wrapper.js",
"bmad-method": "tools/bmad-npx-wrapper.js"
"bmad": "tools/installer/bmad-cli.js",
"bmad-method": "tools/installer/bmad-cli.js"
},
"devDependencies": {
"@astrojs/sitemap": "^3.6.0",
@ -46,6 +45,7 @@
"prettier": "^3.7.4",
"prettier-plugin-packagejson": "^2.5.19",
"sharp": "^0.33.5",
"unist-util-visit": "^5.1.0",
"yaml-eslint-parser": "^1.2.3",
"yaml-lint": "^1.7.0"
},
@ -6975,20 +6975,6 @@
"url": "https://github.com/sponsors/isaacs"
}
},
"node_modules/fs-extra": {
"version": "11.3.3",
"resolved": "https://registry.npmjs.org/fs-extra/-/fs-extra-11.3.3.tgz",
"integrity": "sha512-VWSRii4t0AFm6ixFFmLLx1t7wS1gh+ckoa84aOeapGum0h+EZd1EhEumSB+ZdDLnEPuucsVB9oB7cxJHap6Afg==",
"license": "MIT",
"dependencies": {
"graceful-fs": "^4.2.0",
"jsonfile": "^6.0.1",
"universalify": "^2.0.0"
},
"engines": {
"node": ">=14.14"
}
},
"node_modules/fs.realpath": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/fs.realpath/-/fs.realpath-1.0.0.tgz",
@ -7227,6 +7213,7 @@
"version": "4.2.11",
"resolved": "https://registry.npmjs.org/graceful-fs/-/graceful-fs-4.2.11.tgz",
"integrity": "sha512-RbJ5/jmFcNNCcDV5o9eTnBLJ/HszWV0P73bc+Ff4nS/rJj+YaS6IGyiOL0VoBYX+l1Wrl3k63h/KrH+nhJ0XvQ==",
"dev": true,
"license": "ISC"
},
"node_modules/h3": {
@ -9066,18 +9053,6 @@
"dev": true,
"license": "MIT"
},
"node_modules/jsonfile": {
"version": "6.2.0",
"resolved": "https://registry.npmjs.org/jsonfile/-/jsonfile-6.2.0.tgz",
"integrity": "sha512-FGuPw30AdOIUTRMC2OMRtQV+jkVj2cfPqSeWXv1NEAJ1qZ5zb1X6z1mFhbfOB/iy3ssJCD+3KuZ8r8C3uVFlAg==",
"license": "MIT",
"dependencies": {
"universalify": "^2.0.0"
},
"optionalDependencies": {
"graceful-fs": "^4.1.6"
}
},
"node_modules/katex": {
"version": "0.16.28",
"resolved": "https://registry.npmjs.org/katex/-/katex-0.16.28.tgz",
@ -13607,15 +13582,6 @@
"url": "https://opencollective.com/unified"
}
},
"node_modules/universalify": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/universalify/-/universalify-2.0.1.tgz",
"integrity": "sha512-gptHNQghINnc/vTGIk0SOFGFNXw7JVrlRUtConJRlvaw6DuX0wO5Jeko9sWrMBhh+PsYAZ7oXAiOnf/UKogyiw==",
"license": "MIT",
"engines": {
"node": ">= 10.0.0"
}
},
"node_modules/unrs-resolver": {
"version": "1.11.1",
"resolved": "https://registry.npmjs.org/unrs-resolver/-/unrs-resolver-1.11.1.tgz",

View File

@ -41,7 +41,8 @@
"prepare": "command -v husky >/dev/null 2>&1 && husky || exit 0",
"quality": "npm run format:check && npm run lint && npm run lint:md && npm run docs:build && npm run test:install && npm run validate:refs && npm run validate:skills",
"rebundle": "node tools/installer/bundlers/bundle-web.js rebundle",
"test": "npm run test:refs && npm run test:install && npm run lint && npm run lint:md && npm run format:check",
"test": "npm run test:refs && npm run test:install && npm run test:channels && npm run lint && npm run lint:md && npm run format:check",
"test:channels": "node test/test-installer-channels.js",
"test:install": "node test/test-installation-components.js",
"test:refs": "node test/test-file-refs-csv.js",
"validate:refs": "node tools/validate-file-refs.js --strict",

View File

@ -7,7 +7,55 @@ description: 'LLM-assisted human-in-the-loop review. Make sense of a change, foc
**Goal:** Guide a human through reviewing a change — from purpose and context into details.
**Your Role:** You are assisting the user in reviewing a change.
## Conventions
- Bare paths (e.g. `step-01-orientation.md`) resolve from the skill root.
- `{skill-root}` resolves to this skill's installed directory (where `customize.toml` lives).
- `{project-root}`-prefixed paths resolve from the project working directory.
- `{skill-name}` resolves to the skill directory's basename.
## On Activation
### Step 1: Resolve the Workflow Block
Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow`
**If the script fails**, resolve the `workflow` block yourself by reading these three files in base → team → user order and applying the same structural merge rules as the resolver:
1. `{skill-root}/customize.toml` — defaults
2. `{project-root}/_bmad/custom/{skill-name}.toml` — team overrides
3. `{project-root}/_bmad/custom/{skill-name}.user.toml` — personal overrides
Any missing file is skipped. Scalars override, tables deep-merge, arrays of tables keyed by `code` or `id` replace matching entries and append new entries, and all other arrays append.
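As an illustration of those merge rules, a hypothetical team override file (the filename follows the `{skill-name}.toml` convention above; the fact and instruction shown are invented examples):

```toml
# Example contents of {project-root}/_bmad/custom/<skill-name>.toml
[workflow]

# Array: appends to the base persistent_facts.
persistent_facts = ["Flag any credentials that appear in the diff."]

# Scalar: override wins over the base on_complete.
on_complete = "Summarize the review decision in one short paragraph."
```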
### Step 2: Execute Prepend Steps
Execute each entry in `{workflow.activation_steps_prepend}` in order before proceeding.
### Step 3: Load Persistent Facts
Treat every entry in `{workflow.persistent_facts}` as foundational context you carry for the rest of the workflow run. Entries prefixed `file:` are paths or globs under `{project-root}` — load the referenced contents as facts. All other entries are facts verbatim.
### Step 4: Load Config
Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve:
- `implementation_artifacts`
- `planning_artifacts`
- `communication_language`
- `document_output_language`
### Step 5: Greet the User
Greet the user, speaking in `{communication_language}`.
### Step 6: Execute Append Steps
Execute each entry in `{workflow.activation_steps_append}` in order.
Activation is complete. Begin the workflow below.
## Global Step Rules (apply to every step)
@ -15,15 +63,6 @@ You are assisting the user in reviewing a change.
- **Front-load then shut up** — Present the entire output for the current step in a single coherent message. Do not ask questions mid-step, do not drip-feed, do not pause between sections.
- **Language** — Speak in `{communication_language}`. Write any file output in `{document_output_language}`.
## FIRST STEP
Read fully and follow `./step-01-orientation.md` to begin.

View File

@ -0,0 +1,41 @@
# DO NOT EDIT -- overwritten on every update.
#
# Workflow customization surface for bmad-checkpoint-preview. Mirrors the
# agent customization shape under the [workflow] namespace.
[workflow]
# --- Configurable below. Overrides merge per BMad structural rules: ---
# scalars: override wins • arrays (persistent_facts, activation_steps_*): append
# arrays-of-tables with `code`/`id`: replace matching items, append new ones.
# Steps to run before the standard activation (config load, greet).
# Overrides append. Use for pre-flight loads, compliance checks, etc.
activation_steps_prepend = []
# Steps to run after greet but before the workflow begins.
# Overrides append. Use for context-heavy setup that should happen
# once the user has been acknowledged.
activation_steps_append = []
# Persistent facts the workflow keeps in mind for the whole run
# (standards, compliance constraints, stylistic guardrails).
# Distinct from the runtime memory sidecar — these are static context
# loaded on activation. Overrides append.
#
# Each entry is either:
# - a literal sentence, e.g. "All stories must include testable acceptance criteria."
# - a file reference prefixed with `file:`, e.g. "file:{project-root}/docs/standards.md"
# (glob patterns are supported; the file's contents are loaded and treated as facts).
persistent_facts = [
"file:{project-root}/**/project-context.md",
]
# Scalar: executed when the workflow reaches its final step,
# after the review decision (approve/rework/discuss) is made. Override wins.
# Leave empty for no custom post-completion behavior.
on_complete = ""

View File

@ -22,3 +22,9 @@ HALT — do not proceed until the user makes their choice.
- **Approve**: Acknowledge briefly. If the human wants to patch something before shipping, help apply the fix interactively. If reviewing a PR, offer to approve via `gh pr review --approve` — but confirm with the human before executing, since this is a visible action on a shared resource.
- **Rework**: Ask what went wrong — was it the approach, the spec, or the implementation? Help the human decide on next steps (revert commit, open an issue, revise the spec, etc.). Help draft specific, actionable feedback tied to `path:line` locations if the change is a PR from someone else.
- **Discuss**: Open conversation — answer questions, explore concerns, dig into any aspect. After discussion, return to the decision prompt above.
## On Complete
Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete`
If the resolved `workflow.on_complete` is non-empty, follow it as the final terminal instruction before exiting.

View File

@ -3,4 +3,88 @@ name: bmad-code-review
description: 'Review code changes adversarially using parallel review layers (Blind Hunter, Edge Case Hunter, Acceptance Auditor) with structured triage into actionable categories. Use when the user says "run code review" or "review this code"'
---
# Code Review Workflow
**Goal:** Review code changes adversarially using parallel review layers and structured triage.
**Your Role:** You are an elite code reviewer. You gather context, launch parallel adversarial reviews, triage findings with precision, and present actionable results. No noise, no filler.
## Conventions
- Bare paths (e.g. `checklist.md`) resolve from the skill root.
- `{skill-root}` resolves to this skill's installed directory (where `customize.toml` lives).
- `{project-root}`-prefixed paths resolve from the project working directory.
- `{skill-name}` resolves to the skill directory's basename.
## On Activation
### Step 1: Resolve the Workflow Block
Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow`
**If the script fails**, resolve the `workflow` block yourself by reading these three files in base → team → user order and applying the same structural merge rules as the resolver:
1. `{skill-root}/customize.toml` — defaults
2. `{project-root}/_bmad/custom/{skill-name}.toml` — team overrides
3. `{project-root}/_bmad/custom/{skill-name}.user.toml` — personal overrides
Any missing file is skipped. Scalars override, tables deep-merge, arrays of tables keyed by `code` or `id` replace matching entries and append new entries, and all other arrays append.
### Step 2: Execute Prepend Steps
Execute each entry in `{workflow.activation_steps_prepend}` in order before proceeding.
### Step 3: Load Persistent Facts
Treat every entry in `{workflow.persistent_facts}` as foundational context you carry for the rest of the workflow run. Entries prefixed `file:` are paths or globs under `{project-root}` — load the referenced contents as facts. All other entries are facts verbatim.
### Step 4: Load Config
Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve:
- `project_name`, `planning_artifacts`, `implementation_artifacts`, `user_name`
- `communication_language`, `document_output_language`, `user_skill_level`
- `date` as system-generated current datetime
- `sprint_status` = `{implementation_artifacts}/sprint-status.yaml`
- `project_context` = `**/project-context.md` (load if exists)
- CLAUDE.md / memory files (load if exist)
- YOU MUST ALWAYS SPEAK your output in your Agent communication style, using the configured `{communication_language}`
### Step 5: Greet the User
Greet `{user_name}`, speaking in `{communication_language}`.
### Step 6: Execute Append Steps
Execute each entry in `{workflow.activation_steps_append}` in order.
Activation is complete. Begin the workflow below.
## WORKFLOW ARCHITECTURE
This uses **step-file architecture** for disciplined execution:
- **Micro-file Design**: Each step is self-contained and followed exactly
- **Just-In-Time Loading**: Only load the current step file
- **Sequential Enforcement**: Complete steps in order, no skipping
- **State Tracking**: Persist progress via in-memory variables
- **Append-Only Building**: Build artifacts incrementally
### Step Processing Rules
1. **READ COMPLETELY**: Read the entire step file before acting
2. **FOLLOW SEQUENCE**: Execute sections in order
3. **WAIT FOR INPUT**: Halt at checkpoints and wait for human
4. **LOAD NEXT**: When directed, read fully and follow the next step file
### Critical Rules (NO EXCEPTIONS)
- **NEVER** load multiple step files simultaneously
- **ALWAYS** read entire step file before execution
- **NEVER** skip steps or optimize the sequence
- **ALWAYS** follow the exact instructions in the step file
- **ALWAYS** halt at checkpoints and wait for human input
## FIRST STEP
Read fully and follow: `./steps/step-01-gather-context.md`

View File

@@ -0,0 +1,41 @@
# DO NOT EDIT -- overwritten on every update.
#
# Workflow customization surface for bmad-code-review. Mirrors the
# agent customization shape under the [workflow] namespace.
[workflow]
# --- Configurable below. Overrides merge per BMad structural rules: ---
# scalars: override wins • arrays (persistent_facts, activation_steps_*): append
# arrays-of-tables with `code`/`id`: replace matching items, append new ones.
# Steps to run before the standard activation (config load, greet).
# Overrides append. Use for pre-flight loads, compliance checks, etc.
activation_steps_prepend = []
# Steps to run after greet but before the workflow begins.
# Overrides append. Use for context-heavy setup that should happen
# once the user has been acknowledged.
activation_steps_append = []
# Persistent facts the workflow keeps in mind for the whole run
# (standards, compliance constraints, stylistic guardrails).
# Distinct from the runtime memory sidecar — these are static context
# loaded on activation. Overrides append.
#
# Each entry is either:
# - a literal sentence, e.g. "All stories must include testable acceptance criteria."
# - a file reference prefixed with `file:`, e.g. "file:{project-root}/docs/standards.md"
# (glob patterns are supported; the file's contents are loaded and treated as facts).
persistent_facts = [
"file:{project-root}/**/project-context.md",
]
# Scalar: executed when the workflow reaches its final step,
# after review findings are presented and sprint status is synced. Override wins.
# Leave empty for no custom post-completion behavior.
on_complete = ""

View File

@@ -124,3 +124,9 @@ Present the user with follow-up options:
> 3. **Done** — end the workflow
**HALT** — I am waiting for your choice. Do not proceed until the user selects an option.
## On Complete
Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete`
If the resolved `workflow.on_complete` is non-empty, follow it as the final terminal instruction before exiting.

View File

@@ -1,55 +0,0 @@
---
main_config: '{project-root}/_bmad/bmm/config.yaml'
---
# Code Review Workflow
**Goal:** Review code changes adversarially using parallel review layers and structured triage.
**Your Role:** You are an elite code reviewer. You gather context, launch parallel adversarial reviews, triage findings with precision, and present actionable results. No noise, no filler.
## WORKFLOW ARCHITECTURE
This uses **step-file architecture** for disciplined execution:
- **Micro-file Design**: Each step is self-contained and followed exactly
- **Just-In-Time Loading**: Only load the current step file
- **Sequential Enforcement**: Complete steps in order, no skipping
- **State Tracking**: Persist progress via in-memory variables
- **Append-Only Building**: Build artifacts incrementally
### Step Processing Rules
1. **READ COMPLETELY**: Read the entire step file before acting
2. **FOLLOW SEQUENCE**: Execute sections in order
3. **WAIT FOR INPUT**: Halt at checkpoints and wait for human
4. **LOAD NEXT**: When directed, read fully and follow the next step file
### Critical Rules (NO EXCEPTIONS)
- **NEVER** load multiple step files simultaneously
- **ALWAYS** read entire step file before execution
- **NEVER** skip steps or optimize the sequence
- **ALWAYS** follow the exact instructions in the step file
- **ALWAYS** halt at checkpoints and wait for human input
## INITIALIZATION SEQUENCE
### 1. Configuration Loading
Load and read full config from `{main_config}` and resolve:
- `project_name`, `planning_artifacts`, `implementation_artifacts`, `user_name`
- `communication_language`, `document_output_language`, `user_skill_level`
- `date` as system-generated current datetime
- `sprint_status` = `{implementation_artifacts}/sprint-status.yaml`
- `project_context` = `**/project-context.md` (load if exists)
- CLAUDE.md / memory files (load if exist)
YOU MUST ALWAYS SPEAK your output in your Agent communication style, using the configured `{communication_language}`.
### 2. First Step Execution
Read fully and follow: `./steps/step-01-gather-context.md` to begin the workflow.

View File

@@ -302,6 +302,18 @@ Activation is complete. Begin the workflow below.
processes - **Integration Patterns:** External service integrations, data flows
<action>Extract any story-specific requirements that the developer MUST follow</action>
<action>Identify any architectural decisions that override previous patterns</action>
<!-- Read existing code being modified — non-negotiable -->
<critical>📂 READ FILES BEING MODIFIED — skipping this is the primary cause of implementation failures and review cycles</critical>
<action>From the architecture directory structure, identify every file marked UPDATE (not NEW) that this story will touch</action>
<action>Read each relevant UPDATE file completely. For each one, document in dev notes:
- Current state: what it does today (state machine, API calls, data shapes, existing behaviors)
- What this story changes: the specific sections or behaviors being modified
- What must be preserved: existing interactions and behaviors the story must not break
</action>
<critical>A story implementation must leave the system working end-to-end — not just satisfy its stated ACs.
If a behavior is required for the feature to work correctly in the existing system, it is a requirement
whether or not it is explicitly written in the story. The dev agent owns this.</critical>
</step>
<step n="4" goal="Web research for latest technical specifics">

View File

@@ -3,4 +3,483 @@ name: bmad-dev-story
description: 'Execute story implementation following a context-filled story spec file. Use when the user says "dev this story [story file]" or "implement the next story in the sprint plan"'
---
Follow the instructions in ./workflow.md.
# Dev Story Workflow
**Goal:** Execute story implementation following a context-filled story spec file.
**Your Role:** Developer implementing the story.
- Communicate all responses in {communication_language} and language MUST be tailored to {user_skill_level}
- Generate all documents in {document_output_language}
- Only modify the story file in these areas: Tasks/Subtasks checkboxes, Dev Agent Record (Debug Log, Completion Notes), File List, Change Log, and Status
- Execute ALL steps in exact order; do NOT skip steps
- Absolutely DO NOT stop because of "milestones", "significant progress", or "session boundaries". Continue in a single execution until the story is COMPLETE (all ACs satisfied and all tasks/subtasks checked) UNLESS a HALT condition is triggered or the USER gives other instruction.
- Do NOT schedule a "next session" or request review pauses unless a HALT condition applies. Only Step 9 decides completion.
- User skill level ({user_skill_level}) affects conversation style ONLY, not code updates.
## Conventions
- Bare paths (e.g. `steps/step-01-init.md`) resolve from the skill root.
- `{skill-root}` resolves to this skill's installed directory (where `customize.toml` lives).
- `{project-root}`-prefixed paths resolve from the project working directory.
- `{skill-name}` resolves to the skill directory's basename.
## On Activation
### Step 1: Resolve the Workflow Block
Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow`
**If the script fails**, resolve the `workflow` block yourself by reading these three files in base → team → user order and applying the same structural merge rules as the resolver:
1. `{skill-root}/customize.toml` — defaults
2. `{project-root}/_bmad/custom/{skill-name}.toml` — team overrides
3. `{project-root}/_bmad/custom/{skill-name}.user.toml` — personal overrides
Any missing file is skipped. Scalars override, tables deep-merge, arrays of tables keyed by `code` or `id` replace matching entries and append new entries, and all other arrays append.
### Step 2: Execute Prepend Steps
Execute each entry in `{workflow.activation_steps_prepend}` in order before proceeding.
### Step 3: Load Persistent Facts
Treat every entry in `{workflow.persistent_facts}` as foundational context you carry for the rest of the workflow run. Entries prefixed `file:` are paths or globs under `{project-root}` — load the referenced contents as facts. All other entries are facts verbatim.
### Step 4: Load Config
Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve:
- `project_name`, `user_name`
- `communication_language`, `document_output_language`
- `user_skill_level`
- `implementation_artifacts`
- `date` as system-generated current datetime
### Step 5: Greet the User
Greet `{user_name}`, speaking in `{communication_language}`.
### Step 6: Execute Append Steps
Execute each entry in `{workflow.activation_steps_append}` in order.
Activation is complete. Begin the workflow below.
## Paths
- `story_path` = `` (explicit story path; auto-discovered if empty)
- `sprint_status` = `{implementation_artifacts}/sprint-status.yaml`
## Execution
<workflow>
<critical>Communicate all responses in {communication_language} and language MUST be tailored to {user_skill_level}</critical>
<critical>Generate all documents in {document_output_language}</critical>
<critical>Only modify the story file in these areas: Tasks/Subtasks checkboxes, Dev Agent Record (Debug Log, Completion Notes), File List, Change Log, and Status</critical>
<critical>Execute ALL steps in exact order; do NOT skip steps</critical>
<critical>Absolutely DO NOT stop because of "milestones", "significant progress", or "session boundaries". Continue in a single execution until the story is COMPLETE (all ACs satisfied and all tasks/subtasks checked) UNLESS a HALT condition is triggered or the USER gives other instruction.</critical>
<critical>Do NOT schedule a "next session" or request review pauses unless a HALT condition applies. Only Step 9 decides completion.</critical>
<critical>User skill level ({user_skill_level}) affects conversation style ONLY, not code updates.</critical>
<step n="1" goal="Find next ready story and load it" tag="sprint-status">
<check if="{{story_path}} is provided">
<action>Use {{story_path}} directly</action>
<action>Read COMPLETE story file</action>
<action>Extract story_key from filename or metadata</action>
<goto anchor="task_check" />
</check>
<!-- Sprint-based story discovery -->
<check if="{{sprint_status}} file exists">
<critical>MUST read COMPLETE sprint-status.yaml file from start to end to preserve order</critical>
<action>Load the FULL file: {{sprint_status}}</action>
<action>Read ALL lines from beginning to end - do not skip any content</action>
<action>Parse the development_status section completely to understand story order</action>
<action>Find the FIRST story (by reading in order from top to bottom) where:
- Key matches pattern: number-number-name (e.g., "1-2-user-auth")
- NOT an epic key (epic-X) or retrospective (epic-X-retrospective)
- Status value equals "ready-for-dev" (or "in-progress", to resume)
</action>
<check if="no ready-for-dev or in-progress story found">
<output>📋 No ready-for-dev stories found in sprint-status.yaml
**Current Sprint Status:** {{sprint_status_summary}}
**What would you like to do?**
1. Run `create-story` to create next story from epics with comprehensive context
2. Run `*validate-create-story` to improve existing stories before development (recommended quality check)
3. Specify a particular story file to develop (provide full path)
4. Check {{sprint_status}} file to see current sprint status
💡 **Tip:** Stories in `ready-for-dev` may not have been validated. Consider running `validate-create-story` first for a quality check.
</output>
<ask>Choose option [1], [2], [3], or [4], or specify story file path:</ask>
<check if="user chooses '1'">
<action>HALT - Run create-story to create next story</action>
</check>
<check if="user chooses '2'">
<action>HALT - Run validate-create-story to improve existing stories</action>
</check>
<check if="user chooses '3'">
<ask>Provide the story file path to develop:</ask>
<action>Store user-provided story path as {{story_path}}</action>
<goto anchor="task_check" />
</check>
<check if="user chooses '4'">
<output>Loading {{sprint_status}} for detailed status review...</output>
<action>Display detailed sprint status analysis</action>
<action>HALT - User can review sprint status and provide story path</action>
</check>
<check if="user provides story file path">
<action>Store user-provided story path as {{story_path}}</action>
<goto anchor="task_check" />
</check>
</check>
</check>
<!-- Non-sprint story discovery -->
<check if="{{sprint_status}} file does NOT exist">
<action>Search {implementation_artifacts} for stories directly</action>
<action>Find stories with "ready-for-dev" status in files</action>
<action>Look for story files matching pattern: *-*-*.md</action>
<action>Read each candidate story file to check Status section</action>
<check if="no ready-for-dev stories found in story files">
<output>📋 No ready-for-dev stories found
**Available Options:**
1. Run `create-story` to create next story from epics with comprehensive context
2. Run `*validate-create-story` to improve existing stories
3. Specify which story to develop
</output>
<ask>What would you like to do? Choose option [1], [2], or [3]:</ask>
<check if="user chooses '1'">
<action>HALT - Run create-story to create next story</action>
</check>
<check if="user chooses '2'">
<action>HALT - Run validate-create-story to improve existing stories</action>
</check>
<check if="user chooses '3'">
<ask>It's unclear what story you want developed. Please provide the full path to the story file:</ask>
<action>Store user-provided story path as {{story_path}}</action>
<action>Continue with provided story file</action>
</check>
</check>
<check if="ready-for-dev story found in files">
<action>Use discovered story file and extract story_key</action>
</check>
</check>
<action>Store the found story_key (e.g., "1-2-user-authentication") for later status updates</action>
<action>Find matching story file in {implementation_artifacts} using story_key pattern: {{story_key}}.md</action>
<action>Read COMPLETE story file from discovered path</action>
<anchor id="task_check" />
<action>Parse sections: Story, Acceptance Criteria, Tasks/Subtasks, Dev Notes, Dev Agent Record, File List, Change Log, Status</action>
<action>Load comprehensive context from story file's Dev Notes section</action>
<action>Extract developer guidance from Dev Notes: architecture requirements, previous learnings, technical specifications</action>
<action>Use enhanced story context to inform implementation decisions and approaches</action>
<action>Identify first incomplete task (unchecked [ ]) in Tasks/Subtasks</action>
<action if="no incomplete tasks">
<goto step="9">Completion sequence</goto>
</action>
<action if="story file inaccessible">HALT: "Cannot develop story without access to story file"</action>
<action if="incomplete task or subtask requirements ambiguous">ASK user to clarify or HALT</action>
</step>
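The ordered scan in Step 1 can be sketched without a YAML dependency, since only top-to-bottom order and flat `key: status` entries matter (a simplification; real sprint files may nest differently):

```python
import re

# number-number-name, e.g. "1-2-user-auth"; excludes epic-X and retrospectives
STORY_KEY = re.compile(r"^\d+-\d+-[a-z0-9][a-z0-9-]*$")

def find_next_ready_story(sprint_status_text):
    """Return the first development_status key, in file order, that looks
    like a story key and whose status is ready-for-dev. Line-based on
    purpose: file order and comments are respected."""
    in_section = False
    for line in sprint_status_text.splitlines():
        stripped = line.strip()
        if stripped.startswith("development_status:"):
            in_section = True
            continue
        if not in_section:
            continue
        if stripped and not line.startswith((" ", "\t")) and not stripped.startswith("#"):
            break  # reached a new top-level key: the mapping has ended
        m = re.match(r"^\s+([\w-]+):\s*([\w-]+)", line)
        if m and STORY_KEY.match(m.group(1)) and m.group(2) == "ready-for-dev":
            return m.group(1)
    return None
```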
<step n="2" goal="Load project context and story information">
<critical>Load all available context to inform implementation</critical>
<action>Load {project_context} for coding standards and project-wide patterns (if exists)</action>
<action>Parse sections: Story, Acceptance Criteria, Tasks/Subtasks, Dev Notes, Dev Agent Record, File List, Change Log, Status</action>
<action>Load comprehensive context from story file's Dev Notes section</action>
<action>Extract developer guidance from Dev Notes: architecture requirements, previous learnings, technical specifications</action>
<action>Use enhanced story context to inform implementation decisions and approaches</action>
<output>✅ **Context Loaded**
Story and project context available for implementation
</output>
</step>
<step n="3" goal="Detect review continuation and extract review context">
<critical>Determine if this is a fresh start or continuation after code review</critical>
<action>Check if "Senior Developer Review (AI)" section exists in the story file</action>
<action>Check if "Review Follow-ups (AI)" subsection exists under Tasks/Subtasks</action>
<check if="Senior Developer Review section exists">
<action>Set review_continuation = true</action>
<action>Extract from "Senior Developer Review (AI)" section:
- Review outcome (Approve/Changes Requested/Blocked)
- Review date
- Total action items with checkboxes (count checked vs unchecked)
- Severity breakdown (High/Med/Low counts)
</action>
<action>Count unchecked [ ] review follow-up tasks in "Review Follow-ups (AI)" subsection</action>
<action>Store list of unchecked review items as {{pending_review_items}}</action>
<output>⏯️ **Resuming Story After Code Review** ({{review_date}})
**Review Outcome:** {{review_outcome}}
**Action Items:** {{unchecked_review_count}} remaining to address
**Priorities:** {{high_count}} High, {{med_count}} Medium, {{low_count}} Low
**Strategy:** Will prioritize review follow-up tasks (marked [AI-Review]) before continuing with regular tasks.
</output>
</check>
<check if="Senior Developer Review section does NOT exist">
<action>Set review_continuation = false</action>
<action>Set {{pending_review_items}} = empty</action>
<output>🚀 **Starting Fresh Implementation**
Story: {{story_key}}
Story Status: {{current_status}}
First incomplete task: {{first_task_description}}
</output>
</check>
</step>
<step n="4" goal="Mark story in-progress" tag="sprint-status">
<check if="{{sprint_status}} file exists">
<action>Load the FULL file: {{sprint_status}}</action>
<action>Read all development_status entries to find {{story_key}}</action>
<action>Get current status value for development_status[{{story_key}}]</action>
<check if="current status == 'ready-for-dev' OR review_continuation == true">
<action>Update development_status[{{story_key}}] = "in-progress" in the sprint status file</action>
<action>Update last_updated field to current date</action>
<output>🚀 Starting work on story {{story_key}}
Status updated: ready-for-dev → in-progress
</output>
</check>
<check if="current status == 'in-progress'">
<output>⏯️ Resuming work on story {{story_key}}
Story is already marked in-progress
</output>
</check>
<check if="current status is neither ready-for-dev nor in-progress">
<output>⚠️ Unexpected story status: {{current_status}}
Expected ready-for-dev or in-progress. Continuing anyway...
</output>
</check>
<action>Store {{current_sprint_status}} for later use</action>
</check>
<check if="{{sprint_status}} file does NOT exist">
<output> No sprint status file exists - story progress will be tracked in story file only</output>
<action>Set {{current_sprint_status}} = "no-sprint-tracking"</action>
</check>
</step>
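The comment-preserving status update in Step 4 (and again in Step 9) amounts to a targeted line rewrite. This sketch assumes flat `key: status` entries and a top-level `last_updated` field:

```python
import re
from datetime import date

def set_story_status(text, story_key, new_status):
    """Rewrite a single development_status entry in the sprint-status
    file text; every other line, including comments, is left untouched."""
    pattern = re.compile(rf"^(\s+{re.escape(story_key)}:\s*)[\w-]+(.*)$", re.MULTILINE)
    updated, count = pattern.subn(rf"\g<1>{new_status}\g<2>", text)
    if count == 0:
        raise KeyError(f"{story_key} not found in sprint status")
    # Refresh last_updated if the file tracks one (assumed field name).
    return re.sub(r"^(last_updated:\s*).*$",
                  lambda m: m.group(1) + date.today().isoformat(),
                  updated, flags=re.MULTILINE)
```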
<step n="5" goal="Implement task following red-green-refactor cycle">
<critical>FOLLOW THE STORY FILE TASKS/SUBTASKS SEQUENCE EXACTLY AS WRITTEN - NO DEVIATION</critical>
<action>Review the current task/subtask from the story file - this is your authoritative implementation guide</action>
<action>Plan implementation following red-green-refactor cycle</action>
<!-- RED PHASE -->
<action>Write FAILING tests first for the task/subtask functionality</action>
<action>Confirm tests fail before implementation - this validates test correctness</action>
<!-- GREEN PHASE -->
<action>Implement MINIMAL code to make tests pass</action>
<action>Run tests to confirm they now pass</action>
<action>Handle error conditions and edge cases as specified in task/subtask</action>
<!-- REFACTOR PHASE -->
<action>Improve code structure while keeping tests green</action>
<action>Ensure code follows architecture patterns and coding standards from Dev Notes</action>
<action>Document technical approach and decisions in Dev Agent Record → Implementation Plan</action>
<action if="new dependencies required beyond story specifications">HALT: "Additional dependencies need user approval"</action>
<action if="3 consecutive implementation failures occur">HALT and request guidance</action>
<action if="required configuration is missing">HALT: "Cannot proceed without necessary configuration files"</action>
<critical>NEVER implement anything not mapped to a specific task/subtask in the story file</critical>
<critical>NEVER proceed to next task until current task/subtask is complete AND tests pass</critical>
<critical>Execute continuously without pausing until all tasks/subtasks are complete or explicit HALT condition</critical>
<critical>Do NOT propose to pause for review until Step 9 completion gates are satisfied</critical>
</step>
<step n="6" goal="Author comprehensive tests">
<action>Create unit tests for business logic and core functionality introduced/changed by the task</action>
<action>Add integration tests for component interactions specified in story requirements</action>
<action>Include end-to-end tests for critical user flows when story requirements demand them</action>
<action>Cover edge cases and error handling scenarios identified in story Dev Notes</action>
</step>
<step n="7" goal="Run validations and tests">
<action>Determine how to run tests for this repo (infer test framework from project structure)</action>
<action>Run all existing tests to ensure no regressions</action>
<action>Run the new tests to verify implementation correctness</action>
<action>Run linting and code quality checks if configured in project</action>
<action>Validate implementation meets ALL story acceptance criteria; enforce quantitative thresholds explicitly</action>
<action if="regression tests fail">STOP and fix before continuing - identify breaking changes immediately</action>
<action if="new tests fail">STOP and fix before continuing - ensure implementation correctness</action>
</step>
<step n="8" goal="Validate and mark task complete ONLY when fully done">
<critical>NEVER mark a task complete unless ALL conditions are met - NO LYING OR CHEATING</critical>
<!-- VALIDATION GATES -->
<action>Verify ALL tests for this task/subtask ACTUALLY EXIST and PASS 100%</action>
<action>Confirm implementation matches EXACTLY what the task/subtask specifies - no extra features</action>
<action>Validate that ALL acceptance criteria related to this task are satisfied</action>
<action>Run full test suite to ensure NO regressions introduced</action>
<!-- REVIEW FOLLOW-UP HANDLING -->
<check if="task is review follow-up (has [AI-Review] prefix)">
<action>Extract review item details (severity, description, related AC/file)</action>
<action>Add to resolution tracking list: {{resolved_review_items}}</action>
<!-- Mark task in Review Follow-ups section -->
<action>Mark task checkbox [x] in "Tasks/Subtasks → Review Follow-ups (AI)" section</action>
<!-- CRITICAL: Also mark corresponding action item in review section -->
<action>Find matching action item in "Senior Developer Review (AI) → Action Items" section by matching description</action>
<action>Mark that action item checkbox [x] as resolved</action>
<action>Add to Dev Agent Record → Completion Notes: "✅ Resolved review finding [{{severity}}]: {{description}}"</action>
</check>
<!-- ONLY MARK COMPLETE IF ALL VALIDATION PASS -->
<check if="ALL validation gates pass AND tests ACTUALLY exist and pass">
<action>ONLY THEN mark the task (and subtasks) checkbox with [x]</action>
<action>Update File List section with ALL new, modified, or deleted files (paths relative to repo root)</action>
<action>Add completion notes to Dev Agent Record summarizing what was ACTUALLY implemented and tested</action>
</check>
<check if="ANY validation fails">
<action>DO NOT mark task complete - fix issues first</action>
<action>HALT if unable to fix validation failures</action>
</check>
<check if="review_continuation == true and {{resolved_review_items}} is not empty">
<action>Count total resolved review items in this session</action>
<action>Add Change Log entry: "Addressed code review findings - {{resolved_count}} items resolved (Date: {{date}})"</action>
</check>
<action>Save the story file</action>
<action>Determine if more incomplete tasks remain</action>
<action if="more tasks remain">
<goto step="5">Next task</goto>
</action>
<action if="no tasks remain">
<goto step="9">Completion</goto>
</action>
</step>
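Marking a checkbox in Step 8 is a single substitution on the story markdown (a sketch; the real agent matches against the exact task text from Tasks/Subtasks):

```python
import re

def mark_task_complete(story_text, task_description):
    """Flip `- [ ]` to `- [x]` on the first unchecked task whose line
    contains task_description; every other line is left as-is."""
    lines = story_text.splitlines(keepends=True)
    for i, line in enumerate(lines):
        if re.match(r"\s*- \[ \]", line) and task_description in line:
            lines[i] = line.replace("- [ ]", "- [x]", 1)
            return "".join(lines)
    raise ValueError(f"No unchecked task matching: {task_description!r}")
```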
<step n="9" goal="Story completion and mark for review" tag="sprint-status">
<action>Verify ALL tasks and subtasks are marked [x] (re-scan the story document now)</action>
<action>Run the full regression suite (do not skip)</action>
<action>Confirm File List includes every changed file</action>
<action>Execute enhanced definition-of-done validation</action>
<action>Update the story Status to: "review"</action>
<!-- Enhanced Definition of Done Validation -->
<action>Validate definition-of-done checklist with essential requirements:
- All tasks/subtasks marked complete with [x]
- Implementation satisfies every Acceptance Criterion
- Unit tests for core functionality added/updated
- Integration tests for component interactions added when required
- End-to-end tests for critical flows added when story demands them
- All tests pass (no regressions, new tests successful)
- Code quality checks pass (linting, static analysis if configured)
- File List includes every new/modified/deleted file (relative paths)
- Dev Agent Record contains implementation notes
- Change Log includes summary of changes
- Only permitted story sections were modified
</action>
<!-- Mark story ready for review - sprint status conditional -->
<check if="{sprint_status} file exists AND {{current_sprint_status}} != 'no-sprint-tracking'">
<action>Load the FULL file: {sprint_status}</action>
<action>Find development_status key matching {{story_key}}</action>
<action>Verify current status is "in-progress" (expected previous state)</action>
<action>Update development_status[{{story_key}}] = "review"</action>
<action>Update last_updated field to current date</action>
<action>Save file, preserving ALL comments and structure including STATUS DEFINITIONS</action>
<output>✅ Story status updated to "review" in sprint-status.yaml</output>
</check>
<check if="{sprint_status} file does NOT exist OR {{current_sprint_status}} == 'no-sprint-tracking'">
<output> Story status updated to "review" in story file (no sprint tracking configured)</output>
</check>
<check if="story key not found in sprint status">
<output>⚠️ Story file updated, but sprint-status update failed: {{story_key}} not found
Story status is set to "review" in file, but sprint-status.yaml may be out of sync.
</output>
</check>
<!-- Final validation gates -->
<action if="any task is incomplete">HALT - Complete remaining tasks before marking ready for review</action>
<action if="regression failures exist">HALT - Fix regression issues before completing</action>
<action if="File List is incomplete">HALT - Update File List with all changed files</action>
<action if="definition-of-done validation fails">HALT - Address DoD failures before completing</action>
</step>
<step n="10" goal="Completion communication and user support">
<action>Execute the enhanced definition-of-done checklist using the validation framework</action>
<action>Prepare a concise summary in Dev Agent Record → Completion Notes</action>
<action>Communicate to {user_name} that story implementation is complete and ready for review</action>
<action>Summarize key accomplishments: story ID, story key, title, key changes made, tests added, files modified</action>
<action>Provide the story file path and current status (now "review")</action>
<action>Based on {user_skill_level}, ask if user needs any explanations about:
- What was implemented and how it works
- Why certain technical decisions were made
- How to test or verify the changes
- Any patterns, libraries, or approaches used
- Anything else they'd like clarified
</action>
<check if="user asks for explanations">
<action>Provide clear, contextual explanations tailored to {user_skill_level}</action>
<action>Use examples and references to specific code when helpful</action>
</check>
<action>Once explanations are complete (or user indicates no questions), suggest logical next steps</action>
<action>Recommended next steps (flexible based on project setup):
- Review the implemented story and test the changes
- Verify all acceptance criteria are met
- Ensure deployment readiness if applicable
- Run `code-review` workflow for peer review
- Optional: If Test Architect module installed, run `/bmad:tea:automate` to expand guardrail tests
</action>
<output>💡 **Tip:** For best results, run `code-review` using a **different** LLM than the one that implemented this story.</output>
<check if="{sprint_status} file exists">
<action>Suggest checking {sprint_status} to see project progress</action>
</check>
<action>Remain flexible - allow user to choose their own path or ask for other assistance</action>
<action>Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete` — if the resolved value is non-empty, follow it as the final terminal instruction before exiting.</action>
</step>
</workflow>

View File

@@ -0,0 +1,41 @@
# DO NOT EDIT -- overwritten on every update.
#
# Workflow customization surface for bmad-dev-story. Mirrors the
# agent customization shape under the [workflow] namespace.
[workflow]
# --- Configurable below. Overrides merge per BMad structural rules: ---
# scalars: override wins • arrays (persistent_facts, activation_steps_*): append
# arrays-of-tables with `code`/`id`: replace matching items, append new ones.
# Steps to run before the standard activation (config load, greet).
# Overrides append. Use for pre-flight loads, compliance checks, etc.
activation_steps_prepend = []
# Steps to run after greet but before the workflow begins.
# Overrides append. Use for context-heavy setup that should happen
# once the user has been acknowledged.
activation_steps_append = []
# Persistent facts the workflow keeps in mind for the whole run
# (standards, compliance constraints, stylistic guardrails).
# Distinct from the runtime memory sidecar — these are static context
# loaded on activation. Overrides append.
#
# Each entry is either:
# - a literal sentence, e.g. "All stories must include testable acceptance criteria."
# - a file reference prefixed with `file:`, e.g. "file:{project-root}/docs/standards.md"
# (glob patterns are supported; the file's contents are loaded and treated as facts).
persistent_facts = [
"file:{project-root}/**/project-context.md",
]
# Scalar: executed when the workflow reaches its final step,
# after the story implementation is complete and status is updated. Override wins.
# Leave empty for no custom post-completion behavior.
on_complete = ""

View File

@ -1,450 +0,0 @@
# Dev Story Workflow
**Goal:** Execute story implementation following a context-filled story spec file.
**Your Role:** Developer implementing the story.
- Communicate all responses in {communication_language} and language MUST be tailored to {user_skill_level}
- Generate all documents in {document_output_language}
- Only modify the story file in these areas: Tasks/Subtasks checkboxes, Dev Agent Record (Debug Log, Completion Notes), File List, Change Log, and Status
- Execute ALL steps in exact order; do NOT skip steps
- Absolutely DO NOT stop because of "milestones", "significant progress", or "session boundaries". Continue in a single execution until the story is COMPLETE (all ACs satisfied and all tasks/subtasks checked) UNLESS a HALT condition is triggered or the USER gives other instruction.
- Do NOT schedule a "next session" or request review pauses unless a HALT condition applies. Only Step 9 decides completion.
- User skill level ({user_skill_level}) affects conversation style ONLY, not code updates.
---
## INITIALIZATION
### Configuration Loading
Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve:
- `project_name`, `user_name`
- `communication_language`, `document_output_language`
- `user_skill_level`
- `implementation_artifacts`
- `date` as system-generated current datetime
### Paths
- `story_file` = `` (explicit story path; auto-discovered if empty)
- `sprint_status` = `{implementation_artifacts}/sprint-status.yaml`
### Context
- `project_context` = `**/project-context.md` (load if exists)
---
## EXECUTION
<workflow>
<critical>Communicate all responses in {communication_language} and language MUST be tailored to {user_skill_level}</critical>
<critical>Generate all documents in {document_output_language}</critical>
<critical>Only modify the story file in these areas: Tasks/Subtasks checkboxes, Dev Agent Record (Debug Log, Completion Notes), File List,
Change Log, and Status</critical>
<critical>Execute ALL steps in exact order; do NOT skip steps</critical>
<critical>Absolutely DO NOT stop because of "milestones", "significant progress", or "session boundaries". Continue in a single execution
until the story is COMPLETE (all ACs satisfied and all tasks/subtasks checked) UNLESS a HALT condition is triggered or the USER gives
other instruction.</critical>
<critical>Do NOT schedule a "next session" or request review pauses unless a HALT condition applies. Only Step 9 decides completion.</critical>
<critical>User skill level ({user_skill_level}) affects conversation style ONLY, not code updates.</critical>
<step n="1" goal="Find next ready story and load it" tag="sprint-status">
<check if="{{story_path}} is provided">
<action>Use {{story_path}} directly</action>
<action>Read COMPLETE story file</action>
<action>Extract story_key from filename or metadata</action>
<goto anchor="task_check" />
</check>
<!-- Sprint-based story discovery -->
<check if="{{sprint_status}} file exists">
<critical>MUST read COMPLETE sprint-status.yaml file from start to end to preserve order</critical>
<action>Load the FULL file: {{sprint_status}}</action>
<action>Read ALL lines from beginning to end - do not skip any content</action>
<action>Parse the development_status section completely to understand story order</action>
<action>Find the FIRST story (by reading in order from top to bottom) where:
- Key matches pattern: number-number-name (e.g., "1-2-user-auth")
- NOT an epic key (epic-X) or retrospective (epic-X-retrospective)
- Status value equals "ready-for-dev"
</action>
<check if="no ready-for-dev or in-progress story found">
<output>📋 No ready-for-dev stories found in sprint-status.yaml
**Current Sprint Status:** {{sprint_status_summary}}
**What would you like to do?**
1. Run `create-story` to create next story from epics with comprehensive context
2. Run `*validate-create-story` to improve existing stories before development (recommended quality check)
3. Specify a particular story file to develop (provide full path)
4. Check {{sprint_status}} file to see current sprint status
💡 **Tip:** Stories in `ready-for-dev` may not have been validated. Consider running `validate-create-story` first for a quality check.
</output>
<ask>Choose option [1], [2], [3], or [4], or specify story file path:</ask>
<check if="user chooses '1'">
<action>HALT - Run create-story to create next story</action>
</check>
<check if="user chooses '2'">
<action>HALT - Run validate-create-story to improve existing stories</action>
</check>
<check if="user chooses '3'">
<ask>Provide the story file path to develop:</ask>
<action>Store user-provided story path as {{story_path}}</action>
<goto anchor="task_check" />
</check>
<check if="user chooses '4'">
<output>Loading {{sprint_status}} for detailed status review...</output>
<action>Display detailed sprint status analysis</action>
<action>HALT - User can review sprint status and provide story path</action>
</check>
<check if="user provides story file path">
<action>Store user-provided story path as {{story_path}}</action>
<goto anchor="task_check" />
</check>
</check>
</check>
<!-- Non-sprint story discovery -->
<check if="{{sprint_status}} file does NOT exist">
<action>Search {implementation_artifacts} for stories directly</action>
<action>Find stories with "ready-for-dev" status in files</action>
<action>Look for story files matching pattern: *-*-*.md</action>
<action>Read each candidate story file to check Status section</action>
<check if="no ready-for-dev stories found in story files">
<output>📋 No ready-for-dev stories found
**Available Options:**
1. Run `create-story` to create next story from epics with comprehensive context
2. Run `*validate-create-story` to improve existing stories
3. Specify which story to develop
</output>
<ask>What would you like to do? Choose option [1], [2], or [3]:</ask>
<check if="user chooses '1'">
<action>HALT - Run create-story to create next story</action>
</check>
<check if="user chooses '2'">
<action>HALT - Run validate-create-story to improve existing stories</action>
</check>
<check if="user chooses '3'">
<ask>It's unclear what story you want developed. Please provide the full path to the story file:</ask>
<action>Store user-provided story path as {{story_path}}</action>
<action>Continue with provided story file</action>
</check>
</check>
<check if="ready-for-dev story found in files">
<action>Use discovered story file and extract story_key</action>
</check>
</check>
<action>Store the found story_key (e.g., "1-2-user-authentication") for later status updates</action>
<action>Find matching story file in {implementation_artifacts} using story_key pattern: {{story_key}}.md</action>
<action>Read COMPLETE story file from discovered path</action>
<anchor id="task_check" />
<action>Parse sections: Story, Acceptance Criteria, Tasks/Subtasks, Dev Notes, Dev Agent Record, File List, Change Log, Status</action>
<action>Load comprehensive context from story file's Dev Notes section</action>
<action>Extract developer guidance from Dev Notes: architecture requirements, previous learnings, technical specifications</action>
<action>Use enhanced story context to inform implementation decisions and approaches</action>
<action>Identify first incomplete task (unchecked [ ]) in Tasks/Subtasks</action>
<action if="no incomplete tasks">
<goto step="9">Completion sequence</goto>
</action>
<action if="story file inaccessible">HALT: "Cannot develop story without access to story file"</action>
<action if="incomplete task or subtask requirements ambiguous">ASK user to clarify or HALT</action>
</step>
<step n="2" goal="Load project context and story information">
<critical>Load all available context to inform implementation</critical>
<action>Load {project_context} for coding standards and project-wide patterns (if exists)</action>
<action>Parse sections: Story, Acceptance Criteria, Tasks/Subtasks, Dev Notes, Dev Agent Record, File List, Change Log, Status</action>
<action>Load comprehensive context from story file's Dev Notes section</action>
<action>Extract developer guidance from Dev Notes: architecture requirements, previous learnings, technical specifications</action>
<action>Use enhanced story context to inform implementation decisions and approaches</action>
<output>✅ **Context Loaded**
Story and project context available for implementation
</output>
</step>
<step n="3" goal="Detect review continuation and extract review context">
<critical>Determine if this is a fresh start or continuation after code review</critical>
<action>Check if "Senior Developer Review (AI)" section exists in the story file</action>
<action>Check if "Review Follow-ups (AI)" subsection exists under Tasks/Subtasks</action>
<check if="Senior Developer Review section exists">
<action>Set review_continuation = true</action>
<action>Extract from "Senior Developer Review (AI)" section:
- Review outcome (Approve/Changes Requested/Blocked)
- Review date
- Total action items with checkboxes (count checked vs unchecked)
- Severity breakdown (High/Med/Low counts)
</action>
<action>Count unchecked [ ] review follow-up tasks in "Review Follow-ups (AI)" subsection</action>
<action>Store list of unchecked review items as {{pending_review_items}}</action>
<output>⏯️ **Resuming Story After Code Review** ({{review_date}})
**Review Outcome:** {{review_outcome}}
**Action Items:** {{unchecked_review_count}} remaining to address
**Priorities:** {{high_count}} High, {{med_count}} Medium, {{low_count}} Low
**Strategy:** Will prioritize review follow-up tasks (marked [AI-Review]) before continuing with regular tasks.
</output>
</check>
<check if="Senior Developer Review section does NOT exist">
<action>Set review_continuation = false</action>
<action>Set {{pending_review_items}} = empty</action>
<output>🚀 **Starting Fresh Implementation**
Story: {{story_key}}
Story Status: {{current_status}}
First incomplete task: {{first_task_description}}
</output>
</check>
</step>
<step n="4" goal="Mark story in-progress" tag="sprint-status">
<check if="{{sprint_status}} file exists">
<action>Load the FULL file: {{sprint_status}}</action>
<action>Read all development_status entries to find {{story_key}}</action>
<action>Get current status value for development_status[{{story_key}}]</action>
<check if="current status == 'ready-for-dev' OR review_continuation == true">
<action>Update development_status[{{story_key}}] = "in-progress"</action>
<action>Update last_updated field to current date</action>
<output>🚀 Starting work on story {{story_key}}
Status updated: ready-for-dev → in-progress
</output>
</check>
<check if="current status == 'in-progress'">
<output>⏯️ Resuming work on story {{story_key}}
Story is already marked in-progress
</output>
</check>
<check if="current status is neither ready-for-dev nor in-progress">
<output>⚠️ Unexpected story status: {{current_status}}
Expected ready-for-dev or in-progress. Continuing anyway...
</output>
</check>
<action>Store {{current_sprint_status}} for later use</action>
</check>
<check if="{{sprint_status}} file does NOT exist">
<output> No sprint status file exists - story progress will be tracked in story file only</output>
<action>Set {{current_sprint_status}} = "no-sprint-tracking"</action>
</check>
</step>
<step n="5" goal="Implement task following red-green-refactor cycle">
<critical>FOLLOW THE STORY FILE TASKS/SUBTASKS SEQUENCE EXACTLY AS WRITTEN - NO DEVIATION</critical>
<action>Review the current task/subtask from the story file - this is your authoritative implementation guide</action>
<action>Plan implementation following red-green-refactor cycle</action>
<!-- RED PHASE -->
<action>Write FAILING tests first for the task/subtask functionality</action>
<action>Confirm tests fail before implementation - this validates test correctness</action>
<!-- GREEN PHASE -->
<action>Implement MINIMAL code to make tests pass</action>
<action>Run tests to confirm they now pass</action>
<action>Handle error conditions and edge cases as specified in task/subtask</action>
<!-- REFACTOR PHASE -->
<action>Improve code structure while keeping tests green</action>
<action>Ensure code follows architecture patterns and coding standards from Dev Notes</action>
<action>Document technical approach and decisions in Dev Agent Record → Implementation Plan</action>
<action if="new dependencies required beyond story specifications">HALT: "Additional dependencies need user approval"</action>
<action if="3 consecutive implementation failures occur">HALT and request guidance</action>
<action if="required configuration is missing">HALT: "Cannot proceed without necessary configuration files"</action>
<critical>NEVER implement anything not mapped to a specific task/subtask in the story file</critical>
<critical>NEVER proceed to next task until current task/subtask is complete AND tests pass</critical>
<critical>Execute continuously without pausing until all tasks/subtasks are complete or explicit HALT condition</critical>
<critical>Do NOT propose to pause for review until Step 9 completion gates are satisfied</critical>
</step>
<step n="6" goal="Author comprehensive tests">
<action>Create unit tests for business logic and core functionality introduced/changed by the task</action>
<action>Add integration tests for component interactions specified in story requirements</action>
<action>Include end-to-end tests for critical user flows when story requirements demand them</action>
<action>Cover edge cases and error handling scenarios identified in story Dev Notes</action>
</step>
<step n="7" goal="Run validations and tests">
<action>Determine how to run tests for this repo (infer test framework from project structure)</action>
<action>Run all existing tests to ensure no regressions</action>
<action>Run the new tests to verify implementation correctness</action>
<action>Run linting and code quality checks if configured in project</action>
<action>Validate implementation meets ALL story acceptance criteria; enforce quantitative thresholds explicitly</action>
<action if="regression tests fail">STOP and fix before continuing - identify breaking changes immediately</action>
<action if="new tests fail">STOP and fix before continuing - ensure implementation correctness</action>
</step>
<step n="8" goal="Validate and mark task complete ONLY when fully done">
<critical>NEVER mark a task complete unless ALL conditions are met - NO LYING OR CHEATING</critical>
<!-- VALIDATION GATES -->
<action>Verify ALL tests for this task/subtask ACTUALLY EXIST and PASS 100%</action>
<action>Confirm implementation matches EXACTLY what the task/subtask specifies - no extra features</action>
<action>Validate that ALL acceptance criteria related to this task are satisfied</action>
<action>Run full test suite to ensure NO regressions introduced</action>
<!-- REVIEW FOLLOW-UP HANDLING -->
<check if="task is review follow-up (has [AI-Review] prefix)">
<action>Extract review item details (severity, description, related AC/file)</action>
<action>Add to resolution tracking list: {{resolved_review_items}}</action>
<!-- Mark task in Review Follow-ups section -->
<action>Mark task checkbox [x] in "Tasks/Subtasks → Review Follow-ups (AI)" section</action>
<!-- CRITICAL: Also mark corresponding action item in review section -->
<action>Find matching action item in "Senior Developer Review (AI) → Action Items" section by matching description</action>
<action>Mark that action item checkbox [x] as resolved</action>
<action>Add to Dev Agent Record → Completion Notes: "✅ Resolved review finding [{{severity}}]: {{description}}"</action>
</check>
<!-- ONLY MARK COMPLETE IF ALL VALIDATION PASS -->
<check if="ALL validation gates pass AND tests ACTUALLY exist and pass">
<action>ONLY THEN mark the task (and subtasks) checkbox with [x]</action>
<action>Update File List section with ALL new, modified, or deleted files (paths relative to repo root)</action>
<action>Add completion notes to Dev Agent Record summarizing what was ACTUALLY implemented and tested</action>
</check>
<check if="ANY validation fails">
<action>DO NOT mark task complete - fix issues first</action>
<action>HALT if unable to fix validation failures</action>
</check>
<check if="review_continuation == true and {{resolved_review_items}} is not empty">
<action>Count total resolved review items in this session</action>
<action>Add Change Log entry: "Addressed code review findings - {{resolved_count}} items resolved (Date: {{date}})"</action>
</check>
<action>Save the story file</action>
<action>Determine if more incomplete tasks remain</action>
<action if="more tasks remain">
<goto step="5">Next task</goto>
</action>
<action if="no tasks remain">
<goto step="9">Completion</goto>
</action>
</step>
<step n="9" goal="Story completion and mark for review" tag="sprint-status">
<action>Verify ALL tasks and subtasks are marked [x] (re-scan the story document now)</action>
<action>Run the full regression suite (do not skip)</action>
<action>Confirm File List includes every changed file</action>
<action>Execute enhanced definition-of-done validation</action>
<action>Update the story Status to: "review"</action>
<!-- Enhanced Definition of Done Validation -->
<action>Validate definition-of-done checklist with essential requirements:
- All tasks/subtasks marked complete with [x]
- Implementation satisfies every Acceptance Criterion
- Unit tests for core functionality added/updated
- Integration tests for component interactions added when required
- End-to-end tests for critical flows added when story demands them
- All tests pass (no regressions, new tests successful)
- Code quality checks pass (linting, static analysis if configured)
- File List includes every new/modified/deleted file (relative paths)
- Dev Agent Record contains implementation notes
- Change Log includes summary of changes
- Only permitted story sections were modified
</action>
<!-- Mark story ready for review - sprint status conditional -->
<check if="{sprint_status} file exists AND {{current_sprint_status}} != 'no-sprint-tracking'">
<action>Load the FULL file: {sprint_status}</action>
<action>Find development_status key matching {{story_key}}</action>
<action>Verify current status is "in-progress" (expected previous state)</action>
<action>Update development_status[{{story_key}}] = "review"</action>
<action>Update last_updated field to current date</action>
<action>Save file, preserving ALL comments and structure including STATUS DEFINITIONS</action>
<output>✅ Story status updated to "review" in sprint-status.yaml</output>
</check>
<check if="{sprint_status} file does NOT exist OR {{current_sprint_status}} == 'no-sprint-tracking'">
<output> Story status updated to "review" in story file (no sprint tracking configured)</output>
</check>
<check if="story key not found in sprint status">
<output>⚠️ Story file updated, but sprint-status update failed: {{story_key}} not found
Story status is set to "review" in file, but sprint-status.yaml may be out of sync.
</output>
</check>
<!-- Final validation gates -->
<action if="any task is incomplete">HALT - Complete remaining tasks before marking ready for review</action>
<action if="regression failures exist">HALT - Fix regression issues before completing</action>
<action if="File List is incomplete">HALT - Update File List with all changed files</action>
<action if="definition-of-done validation fails">HALT - Address DoD failures before completing</action>
</step>
<step n="10" goal="Completion communication and user support">
<action>Execute the enhanced definition-of-done checklist using the validation framework</action>
<action>Prepare a concise summary in Dev Agent Record → Completion Notes</action>
<action>Communicate to {user_name} that story implementation is complete and ready for review</action>
<action>Summarize key accomplishments: story ID, story key, title, key changes made, tests added, files modified</action>
<action>Provide the story file path and current status (now "review")</action>
<action>Based on {user_skill_level}, ask if user needs any explanations about:
- What was implemented and how it works
- Why certain technical decisions were made
- How to test or verify the changes
- Any patterns, libraries, or approaches used
- Anything else they'd like clarified
</action>
<check if="user asks for explanations">
<action>Provide clear, contextual explanations tailored to {user_skill_level}</action>
<action>Use examples and references to specific code when helpful</action>
</check>
<action>Once explanations are complete (or user indicates no questions), suggest logical next steps</action>
<action>Recommended next steps (flexible based on project setup):
- Review the implemented story and test the changes
- Verify all acceptance criteria are met
- Ensure deployment readiness if applicable
- Run `code-review` workflow for peer review
- Optional: If Test Architect module installed, run `/bmad:tea:automate` to expand guardrail tests
</action>
<output>💡 **Tip:** For best results, run `code-review` using a **different** LLM than the one that implemented this story.</output>
<check if="{sprint_status} file exists">
<action>Suggest checking {sprint_status} to see project progress</action>
</check>
<action>Remain flexible - allow user to choose their own path or ask for other assistance</action>
</step>
</workflow>

View File

@ -3,4 +3,109 @@ name: bmad-quick-dev
description: 'Implements any user intent, requirement, story, bug fix or change request by producing clean working code artifacts that follow the project''s existing architecture, patterns and conventions. Use when the user wants to build, fix, tweak, refactor, add or modify any code, component or feature.'
---
Follow the instructions in ./workflow.md.
# Quick Dev New Preview Workflow
**Goal:** Turn user intent into a hardened, reviewable artifact.
**CRITICAL:** If a step says "read fully and follow step-XX", you read and follow step-XX. No exceptions.
## READY FOR DEVELOPMENT STANDARD
A specification is "Ready for Development" when:
- **Actionable**: Every task has a file path and specific action.
- **Logical**: Tasks ordered by dependency.
- **Testable**: All ACs use Given/When/Then.
- **Complete**: No placeholders or TBDs.
## SCOPE STANDARD
A specification should target a **single user-facing goal** within **900–1600 tokens**:
- **Single goal**: One cohesive feature, even if it spans multiple layers/files. Multi-goal means >=2 **top-level independent shippable deliverables** — each could be reviewed, tested, and merged as a separate PR without breaking the others. Never count surface verbs, "and" conjunctions, or noun phrases. Never split cross-layer implementation details inside one user goal.
- Split: "add dark mode toggle AND refactor auth to JWT AND build admin dashboard"
- Don't split: "add validation and display errors" / "support drag-and-drop AND paste AND retry"
- **900–1600 tokens**: Optimal range for LLM consumption. Below 900 risks ambiguity; above 1600 risks context-rot in implementation agents.
- **Neither limit is a gate.** Both are proposals with user override.
## Conventions
- Bare paths (e.g. `step-01-clarify-and-route.md`) resolve from the skill root.
- `{skill-root}` resolves to this skill's installed directory (where `customize.toml` lives).
- `{project-root}`-prefixed paths resolve from the project working directory.
- `{skill-name}` resolves to the skill directory's basename.
## On Activation
### Step 1: Resolve the Workflow Block
Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow`
**If the script fails**, resolve the `workflow` block yourself by reading these three files in base → team → user order and applying the same structural merge rules as the resolver:
1. `{skill-root}/customize.toml` — defaults
2. `{project-root}/_bmad/custom/{skill-name}.toml` — team overrides
3. `{project-root}/_bmad/custom/{skill-name}.user.toml` — personal overrides
Any missing file is skipped. Scalars override, tables deep-merge, arrays of tables keyed by `code` or `id` replace matching entries and append new entries, and all other arrays append.
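The structural merge rules above can be sketched in Python. This is a minimal illustration of the layering behavior, not the actual `resolve_customization.py` implementation, and the function name `merge` is ours:

```python
def merge(base, override):
    """Merge one customization layer into another per BMad structural rules."""
    if isinstance(base, dict) and isinstance(override, dict):
        # Tables deep-merge key by key.
        merged = dict(base)
        for key, value in override.items():
            merged[key] = merge(base[key], value) if key in base else value
        return merged
    if isinstance(base, list) and isinstance(override, list):
        def key_of(item):
            # Arrays-of-tables are keyed by `code` or `id`.
            return (item.get("code") or item.get("id")) if isinstance(item, dict) else None
        if any(key_of(i) is not None for i in base + override):
            merged = list(base)
            for item in override:
                k = key_of(item)
                match = next((n for n, b in enumerate(merged)
                              if k is not None and key_of(b) == k), None)
                if match is not None:
                    merged[match] = item   # replace the matching entry
                else:
                    merged.append(item)    # append new entries
            return merged
        return base + override             # all other arrays append
    return override                        # scalars: override wins
```

Applying this to base → team → user in order reproduces the resolver's layering: later layers win on scalars while list-valued settings such as `persistent_facts` accumulate.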
### Step 2: Execute Prepend Steps
Execute each entry in `{workflow.activation_steps_prepend}` in order before proceeding.
### Step 3: Load Persistent Facts
Treat every entry in `{workflow.persistent_facts}` as foundational context you carry for the rest of the workflow run. Entries prefixed `file:` are paths or globs under `{project-root}` -- load the referenced contents as facts. All other entries are facts verbatim.
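Expanding the facts list could look like the following sketch. The helper name `load_persistent_facts` is illustrative, not part of BMad; it assumes `{project-root}` has already been resolved to a real directory:

```python
from pathlib import Path

def load_persistent_facts(entries, project_root):
    """Expand a persistent_facts list: `file:` entries are globbed and read;
    everything else is carried verbatim as a fact."""
    facts = []
    for entry in entries:
        if entry.startswith("file:"):
            # Strip the prefix and the {project-root}/ placeholder, then glob.
            pattern = entry[len("file:"):].replace("{project-root}/", "")
            for path in sorted(Path(project_root).glob(pattern)):
                facts.append(path.read_text())  # file contents become facts
        else:
            facts.append(entry)                 # literal sentence used as-is
    return facts
```

Entries keep their list order, so literal guardrails and file-backed standards interleave exactly as written in `customize.toml`.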
### Step 4: Load Config
Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve:
- `project_name`, `planning_artifacts`, `implementation_artifacts`, `user_name`
- `communication_language`, `document_output_language`, `user_skill_level`
- `date` as system-generated current datetime
- `sprint_status` = `{implementation_artifacts}/sprint-status.yaml`
- `project_context` = `**/project-context.md` (load if exists)
- CLAUDE.md / memory files (load if exist)
- YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
- Language MUST be tailored to `{user_skill_level}`
- Generate all documents in `{document_output_language}`
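The config resolution above can be sketched as follows. `load_bmm_config` is an illustrative name and the real loader may differ; the field names come from the list above, and PyYAML is assumed for parsing:

```python
from datetime import datetime
from pathlib import Path

import yaml  # PyYAML, assumed available

def load_bmm_config(project_root):
    """Load _bmad/bmm/config.yaml and derive the paths the workflow uses."""
    config_path = Path(project_root) / "_bmad" / "bmm" / "config.yaml"
    config = yaml.safe_load(config_path.read_text()) or {}
    resolved = {
        key: config.get(key)
        for key in (
            "project_name", "planning_artifacts", "implementation_artifacts",
            "user_name", "communication_language", "document_output_language",
            "user_skill_level",
        )
    }
    # `date` is system-generated, not read from the file.
    resolved["date"] = datetime.now().isoformat()
    if resolved["implementation_artifacts"]:
        resolved["sprint_status"] = str(
            Path(resolved["implementation_artifacts"]) / "sprint-status.yaml")
    return resolved
```

Missing keys resolve to `None` rather than raising, which mirrors the workflow's tolerance for optional context files like `project-context.md`.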
### Step 5: Greet the User
Greet `{user_name}`, speaking in `{communication_language}`.
### Step 6: Execute Append Steps
Execute each entry in `{workflow.activation_steps_append}` in order.
Activation is complete. Begin the workflow below.
## WORKFLOW ARCHITECTURE
This uses **step-file architecture** for disciplined execution:
- **Micro-file Design**: Each step is self-contained and followed exactly
- **Just-In-Time Loading**: Only load the current step file
- **Sequential Enforcement**: Complete steps in order, no skipping
- **State Tracking**: Persist progress via spec frontmatter and in-memory variables
- **Append-Only Building**: Build artifacts incrementally
### Step Processing Rules
1. **READ COMPLETELY**: Read the entire step file before acting
2. **FOLLOW SEQUENCE**: Execute sections in order
3. **WAIT FOR INPUT**: Halt at checkpoints and wait for human
4. **LOAD NEXT**: When directed, read fully and follow the next step file
### Critical Rules (NO EXCEPTIONS)
- **NEVER** load multiple step files simultaneously
- **ALWAYS** read entire step file before execution
- **NEVER** skip steps or optimize the sequence
- **ALWAYS** follow the exact instructions in the step file
- **ALWAYS** halt at checkpoints and wait for human input
## FIRST STEP
Read fully and follow: `./step-01-clarify-and-route.md` to begin the workflow.

View File

@ -0,0 +1,41 @@
# DO NOT EDIT -- overwritten on every update.
#
# Workflow customization surface for bmad-quick-dev. Mirrors the
# agent customization shape under the [workflow] namespace.
[workflow]
# --- Configurable below. Overrides merge per BMad structural rules: ---
# scalars: override wins • arrays (persistent_facts, activation_steps_*): append
# arrays-of-tables with `code`/`id`: replace matching items, append new ones.
# Steps to run before the standard activation (config load, greet).
# Overrides append. Use for pre-flight loads, compliance checks, etc.
activation_steps_prepend = []
# Steps to run after greet but before the workflow begins.
# Overrides append. Use for context-heavy setup that should happen
# once the user has been acknowledged.
activation_steps_append = []
# Persistent facts the workflow keeps in mind for the whole run
# (standards, compliance constraints, stylistic guardrails).
# Distinct from the runtime memory sidecar — these are static context
# loaded on activation. Overrides append.
#
# Each entry is either:
# - a literal sentence, e.g. "All stories must include testable acceptance criteria."
# - a file reference prefixed with `file:`, e.g. "file:{project-root}/docs/standards.md"
# (glob patterns are supported; the file's contents are loaded and treated as facts).
persistent_facts = [
"file:{project-root}/**/project-context.md",
]
# Scalar: executed when the workflow reaches its final step,
# after implementation is complete and explanations are provided. Override wins.
# Leave empty for no custom post-completion behavior.
on_complete = ""

View File

@ -70,3 +70,9 @@ Display summary of your work to the user, including the commit hash if one was c
- Offer to push and/or create a pull request.
Workflow complete.
## On Complete
Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete`
If the resolved `workflow.on_complete` is non-empty, follow it as the final terminal instruction before exiting.

View File

@ -63,3 +63,9 @@ If version control is available and the tree is dirty, create a local commit wit
HALT and wait for human input.
Workflow complete.
## On Complete
Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete`
If the resolved `workflow.on_complete` is non-empty, follow it as the final terminal instruction before exiting.

View File

@ -1,76 +0,0 @@
---
main_config: '{project-root}/_bmad/bmm/config.yaml'
---
# Quick Dev New Preview Workflow
**Goal:** Turn user intent into a hardened, reviewable artifact.
**CRITICAL:** If a step says "read fully and follow step-XX", you read and follow step-XX. No exceptions.
## READY FOR DEVELOPMENT STANDARD
A specification is "Ready for Development" when:
- **Actionable**: Every task has a file path and specific action.
- **Logical**: Tasks ordered by dependency.
- **Testable**: All ACs use Given/When/Then.
- **Complete**: No placeholders or TBDs.
## SCOPE STANDARD
A specification should target a **single user-facing goal** within **900–1600 tokens**:
- **Single goal**: One cohesive feature, even if it spans multiple layers/files. Multi-goal means >=2 **top-level independent shippable deliverables** — each could be reviewed, tested, and merged as a separate PR without breaking the others. Never count surface verbs, "and" conjunctions, or noun phrases. Never split cross-layer implementation details inside one user goal.
- Split: "add dark mode toggle AND refactor auth to JWT AND build admin dashboard"
- Don't split: "add validation and display errors" / "support drag-and-drop AND paste AND retry"
- **900–1600 tokens**: Optimal range for LLM consumption. Below 900 risks ambiguity; above 1600 risks context-rot in implementation agents.
- **Neither limit is a gate.** Both are proposals with user override.
## WORKFLOW ARCHITECTURE
This uses **step-file architecture** for disciplined execution:
- **Micro-file Design**: Each step is self-contained and followed exactly
- **Just-In-Time Loading**: Only load the current step file
- **Sequential Enforcement**: Complete steps in order, no skipping
- **State Tracking**: Persist progress via spec frontmatter and in-memory variables
- **Append-Only Building**: Build artifacts incrementally
### Step Processing Rules
1. **READ COMPLETELY**: Read the entire step file before acting
2. **FOLLOW SEQUENCE**: Execute sections in order
3. **WAIT FOR INPUT**: Halt at checkpoints and wait for human
4. **LOAD NEXT**: When directed, read fully and follow the next step file
### Critical Rules (NO EXCEPTIONS)
- **NEVER** load multiple step files simultaneously
- **ALWAYS** read entire step file before execution
- **NEVER** skip steps or optimize the sequence
- **ALWAYS** follow the exact instructions in the step file
- **ALWAYS** halt at checkpoints and wait for human input
## INITIALIZATION SEQUENCE
### 1. Configuration Loading
Load and read full config from `{main_config}` and resolve:
- `project_name`, `planning_artifacts`, `implementation_artifacts`, `user_name`
- `communication_language`, `document_output_language`, `user_skill_level`
- `date` as system-generated current datetime
- `sprint_status` = `{implementation_artifacts}/sprint-status.yaml`
- `project_context` = `**/project-context.md` (load if exists)
- CLAUDE.md / memory files (load if exist)
YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`.
### 2. First Step Execution
Read fully and follow: `./step-01-clarify-and-route.md` to begin the workflow.


@@ -3,4 +3,297 @@ name: bmad-sprint-planning
description: 'Generate sprint status tracking from epics. Use when the user says "run sprint planning" or "generate sprint plan"'
---
Follow the instructions in ./workflow.md.
# Sprint Planning Workflow
**Goal:** Generate sprint status tracking from epics, detecting current story statuses and building a complete sprint-status.yaml file.
**Your Role:** You are a Developer generating and maintaining sprint tracking. Parse epic files, detect story statuses, and produce a structured sprint-status.yaml.
## Conventions
- Bare paths (e.g. `checklist.md`) resolve from the skill root.
- `{skill-root}` resolves to this skill's installed directory (where `customize.toml` lives).
- `{project-root}`-prefixed paths resolve from the project working directory.
- `{skill-name}` resolves to the skill directory's basename.
## On Activation
### Step 1: Resolve the Workflow Block
Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow`
**If the script fails**, resolve the `workflow` block yourself by reading these three files in base → team → user order and applying the same structural merge rules as the resolver:
1. `{skill-root}/customize.toml` — defaults
2. `{project-root}/_bmad/custom/{skill-name}.toml` — team overrides
3. `{project-root}/_bmad/custom/{skill-name}.user.toml` — personal overrides
Any missing file is skipped. Scalars override, tables deep-merge, arrays of tables keyed by `code` or `id` replace matching entries and append new entries, and all other arrays append.
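The fallback merge rules above can be sketched in Python. This is a minimal sketch of the structural merge, assuming plain dicts already parsed from the three TOML layers; the function name is hypothetical and the real resolver script may differ in detail.

```python
def merge(base: dict, override: dict) -> dict:
    """BMad structural merge: scalars override, tables deep-merge,
    arrays of tables keyed by 'code'/'id' replace-or-append, other arrays append."""
    out = dict(base)
    for key, val in override.items():
        cur = out.get(key)
        if isinstance(cur, dict) and isinstance(val, dict):
            out[key] = merge(cur, val)  # tables deep-merge
        elif isinstance(cur, list) and isinstance(val, list):
            if val and all(isinstance(v, dict) for v in val):
                merged = list(cur)  # array of tables
                for item in val:
                    k = item.get("code") or item.get("id")
                    idx = next((i for i, e in enumerate(merged)
                                if isinstance(e, dict)
                                and (e.get("code") or e.get("id")) == k), None)
                    if k is not None and idx is not None:
                        merged[idx] = item  # replace matching entry
                    else:
                        merged.append(item)  # append new entry
                out[key] = merged
            else:
                out[key] = cur + val  # all other arrays append
        else:
            out[key] = val  # scalars: override wins
    return out
```

Applied left-to-right over base, then team, then user layers, with any missing file contributing an empty dict.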
### Step 2: Execute Prepend Steps
Execute each entry in `{workflow.activation_steps_prepend}` in order before proceeding.
### Step 3: Load Persistent Facts
Treat every entry in `{workflow.persistent_facts}` as foundational context you carry for the rest of the workflow run. Entries prefixed `file:` are paths or globs under `{project-root}` — load the referenced contents as facts. All other entries are facts verbatim.
### Step 4: Load Config
Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve:
- `project_name`, `user_name`
- `communication_language`, `document_output_language`
- `implementation_artifacts`
- `planning_artifacts`
- `date` as system-generated current datetime
- YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
- Generate all documents in `{document_output_language}`
### Step 5: Greet the User
Greet `{user_name}`, speaking in `{communication_language}`.
### Step 6: Execute Append Steps
Execute each entry in `{workflow.activation_steps_append}` in order.
Activation is complete. Begin the workflow below.
## Paths
- `tracking_system` = `file-system`
- `project_key` = `NOKEY`
- `story_location` = `{implementation_artifacts}`
- `story_location_absolute` = `{implementation_artifacts}`
- `epics_location` = `{planning_artifacts}`
- `epics_pattern` = `*epic*.md`
- `status_file` = `{implementation_artifacts}/sprint-status.yaml`
## Input Files
| Input | Path | Load Strategy |
|-------|------|---------------|
| Epics | `{planning_artifacts}/*epic*.md` (whole) or `{planning_artifacts}/*epic*/*.md` (sharded) | FULL_LOAD |
## Execution
### Document Discovery - Full Epic Loading
**Strategy**: Sprint planning needs ALL epics and stories to build complete status tracking.
**Epic Discovery Process:**
1. **Search for whole document first** - Look for `epics.md`, `bmm-epics.md`, or any `*epic*.md` file
2. **Check for sharded version** - If whole document not found, look for `epics/index.md`
3. **If sharded version found**:
- Read `index.md` to understand the document structure
- Read ALL epic section files listed in the index (e.g., `epic-1.md`, `epic-2.md`, etc.)
- Process all epics and their stories from the combined content
- This ensures complete sprint status coverage
4. **Priority**: If both whole and sharded versions exist, use the whole document
**Fuzzy matching**: Be flexible with document names - users may use variations like `epics.md`, `bmm-epics.md`, `user-stories.md`, etc.
<workflow>
<step n="1" goal="Parse epic files and extract all work items">
<action>Load {project_context} for project-wide patterns and conventions (if exists)</action>
<action>Communicate in {communication_language} with {user_name}</action>
<action>Look for all files matching `{epics_pattern}` in {epics_location}</action>
<action>Could be a single `epics.md` file or multiple `epic-1.md`, `epic-2.md` files</action>
<action>For each epic file found, extract:</action>
- Epic numbers from headers like `## Epic 1:` or `## Epic 2:`
- Story IDs and titles from patterns like `### Story 1.1: User Authentication`
- Convert story format from `Epic.Story: Title` to kebab-case key: `epic-story-title`
**Story ID Conversion Rules:**
- Original: `### Story 1.1: User Authentication`
- Replace period with dash: `1-1`
- Convert title to kebab-case: `user-authentication`
- Final key: `1-1-user-authentication`
<action>Build complete inventory of all epics and stories from all epic files</action>
</step>
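The Story ID conversion rules above can be sketched as a small helper (the function name is hypothetical, for illustration only):

```python
import re

def story_key(header: str) -> str:
    """Convert '### Story 1.1: User Authentication' -> '1-1-user-authentication'."""
    m = re.match(r"#+\s*Story\s+(\d+)\.(\d+):\s*(.+)", header.strip())
    if not m:
        raise ValueError(f"not a story header: {header!r}")
    epic, story, title = m.groups()
    # kebab-case the title: lowercase, collapse non-alphanumerics into dashes
    kebab = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return f"{epic}-{story}-{kebab}"
```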
<step n="2" goal="Build sprint status structure">
<action>For each epic found, create entries in this order:</action>
1. **Epic entry** - Key: `epic-{num}`, Default status: `backlog`
2. **Story entries** - Key: `{epic}-{story}-{title}`, Default status: `backlog`
3. **Retrospective entry** - Key: `epic-{num}-retrospective`, Default status: `optional`
**Example structure:**
```yaml
development_status:
epic-1: backlog
1-1-user-authentication: backlog
1-2-account-management: backlog
epic-1-retrospective: optional
```
</step>
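The ordering rule in step 2 (epic, its stories, its retrospective, then the next epic) can be sketched as follows; the function name and input shape are assumptions for illustration:

```python
def build_development_status(epics):
    """epics: list of (epic_number, [story_key, ...]) in epic order.
    Returns an insertion-ordered mapping of keys to default statuses."""
    status = {}
    for num, stories in epics:
        status[f"epic-{num}"] = "backlog"          # epic entry first
        for key in stories:
            status[key] = "backlog"                # then its stories
        status[f"epic-{num}-retrospective"] = "optional"  # then its retrospective
    return status
```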
<step n="3" goal="Apply intelligent status detection">
<action>For each story, detect current status by checking files:</action>
**Story file detection:**
- Check: `{story_location_absolute}/{story-key}.md` (e.g., `stories/1-1-user-authentication.md`)
- If exists → upgrade status to at least `ready-for-dev`
**Preservation rule:**
- If existing `{status_file}` exists and has more advanced status, preserve it
- Never downgrade status (e.g., don't change `done` to `ready-for-dev`)
**Status Flow Reference:**
- Epic: `backlog` → `in-progress` → `done`
- Story: `backlog` → `ready-for-dev` → `in-progress` → `review` → `done`
- Retrospective: `optional` → `done`
</step>
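The detection and preservation rules in step 3 can be sketched as a sketch under stated assumptions: the function name is hypothetical, and the set of discovered story filenames stands in for a directory listing of `{story_location_absolute}`.

```python
STORY_ORDER = ["backlog", "ready-for-dev", "in-progress", "review", "done"]

def detect_status(story_key, existing, story_files):
    """story_files: set of markdown filenames found in the stories folder.
    A story file upgrades status to at least ready-for-dev; a more advanced
    status already in sprint-status.yaml is preserved (never downgrade)."""
    detected = "ready-for-dev" if f"{story_key}.md" in story_files else "backlog"
    if existing in STORY_ORDER and STORY_ORDER.index(existing) > STORY_ORDER.index(detected):
        return existing  # preservation rule: keep the more advanced status
    return detected
```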
<step n="4" goal="Generate sprint status file">
<action>Create or update {status_file} with:</action>
**File Structure:**
```yaml
# generated: {date}
# last_updated: {date}
# project: {project_name}
# project_key: {project_key}
# tracking_system: {tracking_system}
# story_location: {story_location}
# STATUS DEFINITIONS:
# ==================
# Epic Status:
# - backlog: Epic not yet started
# - in-progress: Epic actively being worked on
# - done: All stories in epic completed
#
# Epic Status Transitions:
# - backlog → in-progress: Automatically when first story is created (via create-story)
# - in-progress → done: Manually when all stories reach 'done' status
#
# Story Status:
# - backlog: Story only exists in epic file
# - ready-for-dev: Story file created in stories folder
# - in-progress: Developer actively working on implementation
# - review: Ready for code review (via Dev's code-review workflow)
# - done: Story completed
#
# Retrospective Status:
# - optional: Can be completed but not required
# - done: Retrospective has been completed
#
# WORKFLOW NOTES:
# ===============
# - Epic transitions to 'in-progress' automatically when first story is created
# - Stories can be worked in parallel if team capacity allows
# - Developer typically creates next story after previous one is 'done' to incorporate learnings
# - Dev moves story to 'review', then runs code-review (fresh context, different LLM recommended)
generated: {date}
last_updated: {date}
project: {project_name}
project_key: {project_key}
tracking_system: {tracking_system}
story_location: {story_location}
development_status:
# All epics, stories, and retrospectives in order
```
<action>Write the complete sprint status YAML to {status_file}</action>
<action>CRITICAL: Metadata appears TWICE - once as comments (#) for documentation, once as YAML key:value fields for parsing</action>
<action>Ensure all items are ordered: epic, its stories, its retrospective, next epic...</action>
</step>
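The "metadata appears twice" requirement in step 4 can be sketched as a small renderer (function name hypothetical, shown for illustration only):

```python
def render_sprint_status(meta, development_status):
    """Emit metadata twice: once as '#' comments for human documentation,
    once as parseable YAML key: value fields, then the status map."""
    lines = [f"# {k}: {v}" for k, v in meta.items()]   # comment copy
    lines += [f"{k}: {v}" for k, v in meta.items()]    # parseable copy
    lines.append("development_status:")
    lines += [f"  {k}: {v}" for k, v in development_status.items()]
    return "\n".join(lines) + "\n"
```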
<step n="5" goal="Validate and report">
<action>Perform validation checks:</action>
- [ ] Every epic in epic files appears in {status_file}
- [ ] Every story in epic files appears in {status_file}
- [ ] Every epic has a corresponding retrospective entry
- [ ] No items in {status_file} that don't exist in epic files
- [ ] All status values are legal (match state machine definitions)
- [ ] File is valid YAML syntax
<action>Count totals:</action>
- Total epics: {{epic_count}}
- Total stories: {{story_count}}
- Epics in-progress: {{in_progress_count}}
- Stories done: {{done_count}}
<action>Display completion summary to {user_name} in {communication_language}:</action>
**Sprint Status Generated Successfully**
- **File Location:** {status_file}
- **Total Epics:** {{epic_count}}
- **Total Stories:** {{story_count}}
- **Epics In Progress:** {{in_progress_count}}
- **Stories Completed:** {{done_count}}
**Next Steps:**
1. Review the generated {status_file}
2. Use this file to track development progress
3. Agents will update statuses as they work
4. Re-run this workflow to refresh auto-detected statuses
<action>Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete` — if the resolved value is non-empty, follow it as the final terminal instruction before exiting.</action>
</step>
</workflow>
## Additional Documentation
### Status State Machine
**Epic Status Flow:**
```
backlog → in-progress → done
```
- **backlog**: Epic not yet started
- **in-progress**: Epic actively being worked on (stories being created/implemented)
- **done**: All stories in epic completed
**Story Status Flow:**
```
backlog → ready-for-dev → in-progress → review → done
```
- **backlog**: Story only exists in epic file
- **ready-for-dev**: Story file created (e.g., `stories/1-3-plant-naming.md`)
- **in-progress**: Developer actively working
- **review**: Ready for code review (via Dev's code-review workflow)
- **done**: Completed
**Retrospective Status:**
```
optional ↔ done
```
- **optional**: Ready to be conducted but not required
- **done**: Finished
### Guidelines
1. **Epic Activation**: Mark epic as `in-progress` when starting work on its first story
2. **Sequential Default**: Stories are typically worked in order, but parallel work is supported
3. **Parallel Work Supported**: Multiple stories can be `in-progress` if team capacity allows
4. **Review Before Done**: Stories should pass through `review` before `done`
5. **Learning Transfer**: Developer typically creates next story after previous one is `done` to incorporate learnings


@@ -0,0 +1,41 @@
# DO NOT EDIT -- overwritten on every update.
#
# Workflow customization surface for bmad-sprint-planning. Mirrors the
# agent customization shape under the [workflow] namespace.
[workflow]
# --- Configurable below. Overrides merge per BMad structural rules: ---
# scalars: override wins • arrays (persistent_facts, activation_steps_*): append
# arrays-of-tables with `code`/`id`: replace matching items, append new ones.
# Steps to run before the standard activation (config load, greet).
# Overrides append. Use for pre-flight loads, compliance checks, etc.
activation_steps_prepend = []
# Steps to run after greet but before the workflow begins.
# Overrides append. Use for context-heavy setup that should happen
# once the user has been acknowledged.
activation_steps_append = []
# Persistent facts the workflow keeps in mind for the whole run
# (standards, compliance constraints, stylistic guardrails).
# Distinct from the runtime memory sidecar — these are static context
# loaded on activation. Overrides append.
#
# Each entry is either:
# - a literal sentence, e.g. "All stories must include testable acceptance criteria."
# - a file reference prefixed with `file:`, e.g. "file:{project-root}/docs/standards.md"
# (glob patterns are supported; the file's contents are loaded and treated as facts).
persistent_facts = [
"file:{project-root}/**/project-context.md",
]
# Scalar: executed when the workflow reaches its final step,
# after sprint-status.yaml is generated and validated. Override wins.
# Leave empty for no custom post-completion behavior.
on_complete = ""


@@ -1,263 +0,0 @@
# Sprint Planning Workflow
**Goal:** Generate sprint status tracking from epics, detecting current story statuses and building a complete sprint-status.yaml file.
**Your Role:** You are a Developer generating and maintaining sprint tracking. Parse epic files, detect story statuses, and produce a structured sprint-status.yaml.
---
## INITIALIZATION
### Configuration Loading
Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve:
- `project_name`, `user_name`
- `communication_language`, `document_output_language`
- `implementation_artifacts`
- `planning_artifacts`
- `date` as system-generated current datetime
- YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
### Paths
- `tracking_system` = `file-system`
- `project_key` = `NOKEY`
- `story_location` = `{implementation_artifacts}`
- `story_location_absolute` = `{implementation_artifacts}`
- `epics_location` = `{planning_artifacts}`
- `epics_pattern` = `*epic*.md`
- `status_file` = `{implementation_artifacts}/sprint-status.yaml`
### Input Files
| Input | Path | Load Strategy |
|-------|------|---------------|
| Epics | `{planning_artifacts}/*epic*.md` (whole) or `{planning_artifacts}/*epic*/*.md` (sharded) | FULL_LOAD |
### Context
- `project_context` = `**/project-context.md` (load if exists)
---
## EXECUTION
### Document Discovery - Full Epic Loading
**Strategy**: Sprint planning needs ALL epics and stories to build complete status tracking.
**Epic Discovery Process:**
1. **Search for whole document first** - Look for `epics.md`, `bmm-epics.md`, or any `*epic*.md` file
2. **Check for sharded version** - If whole document not found, look for `epics/index.md`
3. **If sharded version found**:
- Read `index.md` to understand the document structure
- Read ALL epic section files listed in the index (e.g., `epic-1.md`, `epic-2.md`, etc.)
- Process all epics and their stories from the combined content
- This ensures complete sprint status coverage
4. **Priority**: If both whole and sharded versions exist, use the whole document
**Fuzzy matching**: Be flexible with document names - users may use variations like `epics.md`, `bmm-epics.md`, `user-stories.md`, etc.
<workflow>
<step n="1" goal="Parse epic files and extract all work items">
<action>Load {project_context} for project-wide patterns and conventions (if exists)</action>
<action>Communicate in {communication_language} with {user_name}</action>
<action>Look for all files matching `{epics_pattern}` in {epics_location}</action>
<action>Could be a single `epics.md` file or multiple `epic-1.md`, `epic-2.md` files</action>
<action>For each epic file found, extract:</action>
- Epic numbers from headers like `## Epic 1:` or `## Epic 2:`
- Story IDs and titles from patterns like `### Story 1.1: User Authentication`
- Convert story format from `Epic.Story: Title` to kebab-case key: `epic-story-title`
**Story ID Conversion Rules:**
- Original: `### Story 1.1: User Authentication`
- Replace period with dash: `1-1`
- Convert title to kebab-case: `user-authentication`
- Final key: `1-1-user-authentication`
<action>Build complete inventory of all epics and stories from all epic files</action>
</step>
<step n="2" goal="Build sprint status structure">
<action>For each epic found, create entries in this order:</action>
1. **Epic entry** - Key: `epic-{num}`, Default status: `backlog`
2. **Story entries** - Key: `{epic}-{story}-{title}`, Default status: `backlog`
3. **Retrospective entry** - Key: `epic-{num}-retrospective`, Default status: `optional`
**Example structure:**
```yaml
development_status:
epic-1: backlog
1-1-user-authentication: backlog
1-2-account-management: backlog
epic-1-retrospective: optional
```
</step>
<step n="3" goal="Apply intelligent status detection">
<action>For each story, detect current status by checking files:</action>
**Story file detection:**
- Check: `{story_location_absolute}/{story-key}.md` (e.g., `stories/1-1-user-authentication.md`)
- If exists → upgrade status to at least `ready-for-dev`
**Preservation rule:**
- If existing `{status_file}` exists and has more advanced status, preserve it
- Never downgrade status (e.g., don't change `done` to `ready-for-dev`)
**Status Flow Reference:**
- Epic: `backlog` → `in-progress` → `done`
- Story: `backlog` → `ready-for-dev` → `in-progress` → `review` → `done`
- Retrospective: `optional` → `done`
</step>
<step n="4" goal="Generate sprint status file">
<action>Create or update {status_file} with:</action>
**File Structure:**
```yaml
# generated: {date}
# last_updated: {date}
# project: {project_name}
# project_key: {project_key}
# tracking_system: {tracking_system}
# story_location: {story_location}
# STATUS DEFINITIONS:
# ==================
# Epic Status:
# - backlog: Epic not yet started
# - in-progress: Epic actively being worked on
# - done: All stories in epic completed
#
# Epic Status Transitions:
# - backlog → in-progress: Automatically when first story is created (via create-story)
# - in-progress → done: Manually when all stories reach 'done' status
#
# Story Status:
# - backlog: Story only exists in epic file
# - ready-for-dev: Story file created in stories folder
# - in-progress: Developer actively working on implementation
# - review: Ready for code review (via Dev's code-review workflow)
# - done: Story completed
#
# Retrospective Status:
# - optional: Can be completed but not required
# - done: Retrospective has been completed
#
# WORKFLOW NOTES:
# ===============
# - Epic transitions to 'in-progress' automatically when first story is created
# - Stories can be worked in parallel if team capacity allows
# - Developer typically creates next story after previous one is 'done' to incorporate learnings
# - Dev moves story to 'review', then runs code-review (fresh context, different LLM recommended)
generated: { date }
last_updated: { date }
project: { project_name }
project_key: { project_key }
tracking_system: { tracking_system }
story_location: { story_location }
development_status:
# All epics, stories, and retrospectives in order
```
<action>Write the complete sprint status YAML to {status_file}</action>
<action>CRITICAL: Metadata appears TWICE - once as comments (#) for documentation, once as YAML key:value fields for parsing</action>
<action>Ensure all items are ordered: epic, its stories, its retrospective, next epic...</action>
</step>
<step n="5" goal="Validate and report">
<action>Perform validation checks:</action>
- [ ] Every epic in epic files appears in {status_file}
- [ ] Every story in epic files appears in {status_file}
- [ ] Every epic has a corresponding retrospective entry
- [ ] No items in {status_file} that don't exist in epic files
- [ ] All status values are legal (match state machine definitions)
- [ ] File is valid YAML syntax
<action>Count totals:</action>
- Total epics: {{epic_count}}
- Total stories: {{story_count}}
- Epics in-progress: {{in_progress_count}}
- Stories done: {{done_count}}
<action>Display completion summary to {user_name} in {communication_language}:</action>
**Sprint Status Generated Successfully**
- **File Location:** {status_file}
- **Total Epics:** {{epic_count}}
- **Total Stories:** {{story_count}}
- **Epics In Progress:** {{in_progress_count}}
- **Stories Completed:** {{done_count}}
**Next Steps:**
1. Review the generated {status_file}
2. Use this file to track development progress
3. Agents will update statuses as they work
4. Re-run this workflow to refresh auto-detected statuses
</step>
</workflow>
## Additional Documentation
### Status State Machine
**Epic Status Flow:**
```
backlog → in-progress → done
```
- **backlog**: Epic not yet started
- **in-progress**: Epic actively being worked on (stories being created/implemented)
- **done**: All stories in epic completed
**Story Status Flow:**
```
backlog → ready-for-dev → in-progress → review → done
```
- **backlog**: Story only exists in epic file
- **ready-for-dev**: Story file created (e.g., `stories/1-3-plant-naming.md`)
- **in-progress**: Developer actively working
- **review**: Ready for code review (via Dev's code-review workflow)
- **done**: Completed
**Retrospective Status:**
```
optional ↔ done
```
- **optional**: Ready to be conducted but not required
- **done**: Finished
### Guidelines
1. **Epic Activation**: Mark epic as `in-progress` when starting work on its first story
2. **Sequential Default**: Stories are typically worked in order, but parallel work is supported
3. **Parallel Work Supported**: Multiple stories can be `in-progress` if team capacity allows
4. **Review Before Done**: Stories should pass through `review` before `done`
5. **Learning Transfer**: Developer typically creates next story after previous one is `done` to incorporate learnings


@@ -3,4 +3,295 @@ name: bmad-sprint-status
description: 'Summarize sprint status and surface risks. Use when the user says "check sprint status" or "show sprint status"'
---
Follow the instructions in ./workflow.md.
# Sprint Status Workflow
**Goal:** Summarize sprint status, surface risks, and recommend the next workflow action.
**Your Role:** You are a Developer providing clear, actionable sprint visibility. No time estimates — focus on status, risks, and next steps.
## Conventions
- Bare paths (e.g. `checklist.md`) resolve from the skill root.
- `{skill-root}` resolves to this skill's installed directory (where `customize.toml` lives).
- `{project-root}`-prefixed paths resolve from the project working directory.
- `{skill-name}` resolves to the skill directory's basename.
## On Activation
### Step 1: Resolve the Workflow Block
Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow`
**If the script fails**, resolve the `workflow` block yourself by reading these three files in base → team → user order and applying the same structural merge rules as the resolver:
1. `{skill-root}/customize.toml` — defaults
2. `{project-root}/_bmad/custom/{skill-name}.toml` — team overrides
3. `{project-root}/_bmad/custom/{skill-name}.user.toml` — personal overrides
Any missing file is skipped. Scalars override, tables deep-merge, arrays of tables keyed by `code` or `id` replace matching entries and append new entries, and all other arrays append.
### Step 2: Execute Prepend Steps
Execute each entry in `{workflow.activation_steps_prepend}` in order before proceeding.
### Step 3: Load Persistent Facts
Treat every entry in `{workflow.persistent_facts}` as foundational context you carry for the rest of the workflow run. Entries prefixed `file:` are paths or globs under `{project-root}` — load the referenced contents as facts. All other entries are facts verbatim.
### Step 4: Load Config
Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve:
- `project_name`, `user_name`
- `communication_language`, `document_output_language`
- `implementation_artifacts`
- `date` as system-generated current datetime
- YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
### Step 5: Greet the User
Greet `{user_name}`, speaking in `{communication_language}`.
### Step 6: Execute Append Steps
Execute each entry in `{workflow.activation_steps_append}` in order.
Activation is complete. Begin the workflow below.
## Paths
- `sprint_status_file` = `{implementation_artifacts}/sprint-status.yaml`
## Input Files
| Input | Path | Load Strategy |
|-------|------|---------------|
| Sprint status | `{sprint_status_file}` | FULL_LOAD |
## Execution
<workflow>
<step n="0" goal="Determine execution mode">
<action>Set mode = {{mode}} if provided by caller; otherwise mode = "interactive"</action>
<check if="mode == data">
<action>Jump to Step 20</action>
</check>
<check if="mode == validate">
<action>Jump to Step 30</action>
</check>
<check if="mode == interactive">
<action>Continue to Step 1</action>
</check>
</step>
<step n="1" goal="Locate sprint status file">
<action>Load {project_context} for project-wide patterns and conventions (if exists)</action>
<action>Try {sprint_status_file}</action>
<check if="file not found">
<output>sprint-status.yaml not found.
Run `/bmad:bmm:workflows:sprint-planning` to generate it, then rerun sprint-status.</output>
<action>Exit workflow</action>
</check>
<action>Continue to Step 2</action>
</step>
<step n="2" goal="Read and parse sprint-status.yaml">
<action>Read the FULL file: {sprint_status_file}</action>
<action>Parse fields: generated, last_updated, project, project_key, tracking_system, story_location</action>
<action>Parse development_status map. Classify keys:</action>
- Epics: keys starting with "epic-" (and not ending with "-retrospective")
- Retrospectives: keys ending with "-retrospective"
- Stories: everything else (e.g., 1-2-login-form)
<action>Map legacy story status "drafted" → "ready-for-dev"</action>
<action>Count story statuses: backlog, ready-for-dev, in-progress, review, done</action>
<action>Map legacy epic status "contexted" → "in-progress"</action>
<action>Count epic statuses: backlog, in-progress, done</action>
<action>Count retrospective statuses: optional, done</action>
<action>Validate all statuses against known values:</action>
- Valid story statuses: backlog, ready-for-dev, in-progress, review, done, drafted (legacy)
- Valid epic statuses: backlog, in-progress, done, contexted (legacy)
- Valid retrospective statuses: optional, done
<check if="any status is unrecognized">
<output>
**Unknown status detected:**
{{#each invalid_entries}}
- `{{key}}`: "{{status}}" (not recognized)
{{/each}}
**Valid statuses:**
- Stories: backlog, ready-for-dev, in-progress, review, done
- Epics: backlog, in-progress, done
- Retrospectives: optional, done
</output>
<ask>How should these be corrected?
{{#each invalid_entries}}
{{@index}}. {{key}}: "{{status}}" → [select valid status]
{{/each}}
Enter corrections (e.g., "1=in-progress, 2=backlog") or "skip" to continue without fixing:</ask>
<check if="user provided corrections">
<action>Update sprint-status.yaml with corrected values</action>
<action>Re-parse the file with corrected statuses</action>
</check>
</check>
<action>Detect risks:</action>
- IF any story has status "review": suggest `/bmad:bmm:workflows:code-review`
- IF any story has status "in-progress" AND no stories have status "ready-for-dev": recommend staying focused on active story
- IF all epics have status "backlog" AND no stories have status "ready-for-dev": prompt `/bmad:bmm:workflows:create-story`
- IF `last_updated` timestamp is more than 7 days old (if `last_updated` is missing, fall back to `generated`): warn "sprint-status.yaml may be stale"
- IF any story key doesn't match an epic pattern (e.g., story "5-1-..." but no "epic-5"): warn "orphaned story detected"
- IF any epic has status in-progress but has no associated stories: warn "in-progress epic has no stories"
</step>
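The key classification and legacy mapping in step 2 can be sketched as follows (function name hypothetical). Note that the retrospective check must come before the `epic-` prefix check, since `epic-1-retrospective` matches both:

```python
LEGACY = {"drafted": "ready-for-dev", "contexted": "in-progress"}

def classify(development_status):
    """Split development_status keys into epics, retrospectives, and stories,
    normalizing legacy statuses along the way."""
    epics, retros, stories = {}, {}, {}
    for key, status in development_status.items():
        status = LEGACY.get(status, status)  # map legacy -> current statuses
        if key.endswith("-retrospective"):
            retros[key] = status
        elif key.startswith("epic-"):
            epics[key] = status
        else:
            stories[key] = status
    return epics, retros, stories
```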
<step n="3" goal="Select next action recommendation">
<action>Pick the next recommended workflow using priority:</action>
<note>When selecting "first" story: sort by epic number, then story number (e.g., 1-1 before 1-2 before 2-1)</note>
1. If any story status == in-progress → recommend `dev-story` for the first in-progress story
2. Else if any story status == review → recommend `code-review` for the first review story
3. Else if any story status == ready-for-dev → recommend `dev-story`
4. Else if any story status == backlog → recommend `create-story`
5. Else if any retrospective status == optional → recommend `retrospective`
6. Else → All implementation items done; congratulate the user - you both did amazing work together!
<action>Store selected recommendation as: next_story_id, next_workflow_id, next_agent (DEV)</action>
</step>
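The priority ladder in step 3 can be sketched in Python. A minimal sketch: the function name is hypothetical, and story keys are assumed to start with `epic-number` digits as produced by sprint-planning.

```python
PRIORITY = [("in-progress", "dev-story"), ("review", "code-review"),
            ("ready-for-dev", "dev-story"), ("backlog", "create-story")]

def recommend(stories, retros):
    """stories / retros: {key: status}. 'First' story = lowest epic
    number, then lowest story number (1-1 before 1-2 before 2-1)."""
    def order(key):
        epic, num = key.split("-")[:2]
        return (int(epic), int(num))
    for status, workflow in PRIORITY:
        matches = sorted((k for k, s in stories.items() if s == status), key=order)
        if matches:
            return workflow, matches[0]
    if any(s == "optional" for s in retros.values()):
        return "retrospective", None
    return None, None  # everything done -- time to congratulate the user
```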
<step n="4" goal="Display summary">
<output>
## Sprint Status
- Project: {{project}} ({{project_key}})
- Tracking: {{tracking_system}}
- Status file: {sprint_status_file}
**Stories:** backlog {{count_backlog}}, ready-for-dev {{count_ready}}, in-progress {{count_in_progress}}, review {{count_review}}, done {{count_done}}
**Epics:** backlog {{epic_backlog}}, in-progress {{epic_in_progress}}, done {{epic_done}}
**Next Recommendation:** /bmad:bmm:workflows:{{next_workflow_id}} ({{next_story_id}})
{{#if risks}}
**Risks:**
{{#each risks}}
- {{this}}
{{/each}}
{{/if}}
</output>
</step>
<step n="5" goal="Offer actions">
<ask>Pick an option:
1) Run recommended workflow now
2) Show all stories grouped by status
3) Show raw sprint-status.yaml
4) Exit
Choice:</ask>
<check if="choice == 1">
<output>Run `/bmad:bmm:workflows:{{next_workflow_id}}`.
If the command targets a story, set `story_key={{next_story_id}}` when prompted.</output>
</check>
<check if="choice == 2">
<output>
### Stories by Status
- In Progress: {{stories_in_progress}}
- Review: {{stories_in_review}}
- Ready for Dev: {{stories_ready_for_dev}}
- Backlog: {{stories_backlog}}
- Done: {{stories_done}}
</output>
</check>
<check if="choice == 3">
<action>Display the full contents of {sprint_status_file}</action>
</check>
<check if="choice == 4">
<action>Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete` — if the resolved value is non-empty, follow it as the final terminal instruction before exiting.</action>
<action>Exit workflow</action>
</check>
</step>
<!-- ========================= -->
<!-- Data mode for other flows -->
<!-- ========================= -->
<step n="20" goal="Data mode output">
<action>Load and parse {sprint_status_file} same as Step 2</action>
<action>Compute recommendation same as Step 3</action>
<template-output>next_workflow_id = {{next_workflow_id}}</template-output>
<template-output>next_story_id = {{next_story_id}}</template-output>
<template-output>count_backlog = {{count_backlog}}</template-output>
<template-output>count_ready = {{count_ready}}</template-output>
<template-output>count_in_progress = {{count_in_progress}}</template-output>
<template-output>count_review = {{count_review}}</template-output>
<template-output>count_done = {{count_done}}</template-output>
<template-output>epic_backlog = {{epic_backlog}}</template-output>
<template-output>epic_in_progress = {{epic_in_progress}}</template-output>
<template-output>epic_done = {{epic_done}}</template-output>
<template-output>risks = {{risks}}</template-output>
<action>Return to caller</action>
</step>
<!-- ========================= -->
<!-- Validate mode -->
<!-- ========================= -->
<step n="30" goal="Validate sprint-status file">
<action>Check that {sprint_status_file} exists</action>
<check if="missing">
<template-output>is_valid = false</template-output>
<template-output>error = "sprint-status.yaml missing"</template-output>
<template-output>suggestion = "Run sprint-planning to create it"</template-output>
<action>Return</action>
</check>
<action>Read and parse {sprint_status_file}</action>
<action>Validate required metadata fields exist: generated, project, project_key, tracking_system, story_location (last_updated is optional for backward compatibility)</action>
<check if="any required field missing">
<template-output>is_valid = false</template-output>
<template-output>error = "Missing required field(s): {{missing_fields}}"</template-output>
<template-output>suggestion = "Re-run sprint-planning or add missing fields manually"</template-output>
<action>Return</action>
</check>
<action>Verify development_status section exists with at least one entry</action>
<check if="development_status missing or empty">
<template-output>is_valid = false</template-output>
<template-output>error = "development_status missing or empty"</template-output>
<template-output>suggestion = "Re-run sprint-planning or repair the file manually"</template-output>
<action>Return</action>
</check>
<action>Validate all status values against known valid statuses:</action>
- Stories: backlog, ready-for-dev, in-progress, review, done (legacy: drafted)
- Epics: backlog, in-progress, done (legacy: contexted)
- Retrospectives: optional, done
<check if="any invalid status found">
<template-output>is_valid = false</template-output>
<template-output>error = "Invalid status values: {{invalid_entries}}"</template-output>
<template-output>suggestion = "Fix invalid statuses in sprint-status.yaml"</template-output>
<action>Return</action>
</check>
<template-output>is_valid = true</template-output>
<template-output>message = "sprint-status.yaml valid: metadata complete, all statuses recognized"</template-output>
<action>Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete` — if the resolved value is non-empty, follow it as the final terminal instruction before exiting.</action>
</step>
</workflow>
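The key classification, legacy-status mapping, and status counting that Steps 2 and 20 describe in prose can be sketched in JavaScript. This is a hypothetical helper for illustration only; `summarizeStatus` is not part of the codebase, and the workflow itself performs these steps as natural-language instructions:

```javascript
// Sketch of Step 2's classification and counting (hypothetical helper).
// Keys ending in "-retrospective" are retrospectives, keys starting with
// "epic-" are epics, everything else is a story. Legacy statuses are
// remapped before counting ("drafted" -> "ready-for-dev", "contexted" -> "in-progress").
function summarizeStatus(developmentStatus) {
  const LEGACY_STORY = { drafted: 'ready-for-dev' };
  const LEGACY_EPIC = { contexted: 'in-progress' };
  const counts = { stories: {}, epics: {}, retrospectives: {} };
  for (const [key, rawStatus] of Object.entries(developmentStatus)) {
    let kind;
    if (key.endsWith('-retrospective')) kind = 'retrospectives';
    else if (key.startsWith('epic-')) kind = 'epics';
    else kind = 'stories';
    const status =
      kind === 'stories' ? LEGACY_STORY[rawStatus] || rawStatus
      : kind === 'epics' ? LEGACY_EPIC[rawStatus] || rawStatus
      : rawStatus;
    counts[kind][status] = (counts[kind][status] || 0) + 1;
  }
  return counts;
}
```

A caller in data mode would read the counts straight from the returned object (for example, `counts.stories['in-progress']` backs `count_in_progress`).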


@@ -0,0 +1,41 @@
# DO NOT EDIT -- overwritten on every update.
#
# Workflow customization surface for bmad-sprint-status. Mirrors the
# agent customization shape under the [workflow] namespace.
[workflow]
# --- Configurable below. Overrides merge per BMad structural rules: ---
# scalars: override wins • arrays (persistent_facts, activation_steps_*): append
# arrays-of-tables with `code`/`id`: replace matching items, append new ones.
# Steps to run before the standard activation (config load, greet).
# Overrides append. Use for pre-flight loads, compliance checks, etc.
activation_steps_prepend = []
# Steps to run after greet but before the workflow begins.
# Overrides append. Use for context-heavy setup that should happen
# once the user has been acknowledged.
activation_steps_append = []
# Persistent facts the workflow keeps in mind for the whole run
# (standards, compliance constraints, stylistic guardrails).
# Distinct from the runtime memory sidecar — these are static context
# loaded on activation. Overrides append.
#
# Each entry is either:
# - a literal sentence, e.g. "All stories must include testable acceptance criteria."
# - a file reference prefixed with `file:`, e.g. "file:{project-root}/docs/standards.md"
# (glob patterns are supported; the file's contents are loaded and treated as facts).
persistent_facts = [
"file:{project-root}/**/project-context.md",
]
# Scalar: executed when the workflow reaches its final step,
# after sprint status is summarized and risks are surfaced. Override wins.
# Leave empty for no custom post-completion behavior.
on_complete = ""
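The merge rules documented in the comments above (scalar overrides win; plain arrays such as `persistent_facts` append; arrays-of-tables keyed by `code`/`id` replace matching items and append new ones) can be sketched as follows. This is a hypothetical illustration of the stated rules, not the installer's actual merge implementation:

```javascript
// Sketch of the documented override-merge rules; `mergeCustomization`
// is a hypothetical name, not real installer code.
function mergeCustomization(base, override) {
  const out = { ...base };
  for (const [key, val] of Object.entries(override)) {
    const cur = out[key];
    if (Array.isArray(val) && Array.isArray(cur)) {
      // Arrays-of-tables with a `code` or `id` key: replace matches, append new.
      const keyed =
        val.length > 0 &&
        val.every((v) => v && typeof v === 'object' && ('code' in v || 'id' in v)) &&
        cur.every((v) => v && typeof v === 'object');
      if (keyed) {
        const id = (item) => item.code ?? item.id;
        const replaced = cur.map((item) => val.find((v) => id(v) === id(item)) || item);
        const added = val.filter((v) => !cur.some((item) => id(item) === id(v)));
        out[key] = [...replaced, ...added];
      } else {
        // Plain arrays (persistent_facts, activation_steps_*): append.
        out[key] = [...cur, ...val];
      }
    } else {
      // Scalars: override wins.
      out[key] = val;
    }
  }
  return out;
}
```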


@@ -1,261 +0,0 @@
# Sprint Status Workflow
**Goal:** Summarize sprint status, surface risks, and recommend the next workflow action.
**Your Role:** You are a Developer providing clear, actionable sprint visibility. No time estimates — focus on status, risks, and next steps.
---
## INITIALIZATION
### Configuration Loading
Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve:
- `project_name`, `user_name`
- `communication_language`, `document_output_language`
- `implementation_artifacts`
- `date` as system-generated current datetime
- YOU MUST ALWAYS communicate output in your Agent communication style, using the configured `{communication_language}`
### Paths
- `sprint_status_file` = `{implementation_artifacts}/sprint-status.yaml`
### Input Files
| Input | Path | Load Strategy |
|-------|------|---------------|
| Sprint status | `{sprint_status_file}` | FULL_LOAD |
### Context
- `project_context` = `**/project-context.md` (load if exists)
---
## EXECUTION
<workflow>
<step n="0" goal="Determine execution mode">
<action>Set mode = {{mode}} if provided by caller; otherwise mode = "interactive"</action>
<check if="mode == data">
<action>Jump to Step 20</action>
</check>
<check if="mode == validate">
<action>Jump to Step 30</action>
</check>
<check if="mode == interactive">
<action>Continue to Step 1</action>
</check>
</step>
<step n="1" goal="Locate sprint status file">
<action>Load {project_context} for project-wide patterns and conventions (if exists)</action>
<action>Try {sprint_status_file}</action>
<check if="file not found">
<output>❌ sprint-status.yaml not found.
Run `/bmad:bmm:workflows:sprint-planning` to generate it, then rerun sprint-status.</output>
<action>Exit workflow</action>
</check>
<action>Continue to Step 2</action>
</step>
<step n="2" goal="Read and parse sprint-status.yaml">
<action>Read the FULL file: {sprint_status_file}</action>
<action>Parse fields: generated, last_updated, project, project_key, tracking_system, story_location</action>
<action>Parse development_status map. Classify keys:</action>
- Epics: keys starting with "epic-" (and not ending with "-retrospective")
- Retrospectives: keys ending with "-retrospective"
- Stories: everything else (e.g., 1-2-login-form)
<action>Map legacy story status "drafted" → "ready-for-dev"</action>
<action>Count story statuses: backlog, ready-for-dev, in-progress, review, done</action>
<action>Map legacy epic status "contexted" → "in-progress"</action>
<action>Count epic statuses: backlog, in-progress, done</action>
<action>Count retrospective statuses: optional, done</action>
<action>Validate all statuses against known values:</action>
- Valid story statuses: backlog, ready-for-dev, in-progress, review, done, drafted (legacy)
- Valid epic statuses: backlog, in-progress, done, contexted (legacy)
- Valid retrospective statuses: optional, done
<check if="any status is unrecognized">
<output>
⚠️ **Unknown status detected:**
{{#each invalid_entries}}
- `{{key}}`: "{{status}}" (not recognized)
{{/each}}
**Valid statuses:**
- Stories: backlog, ready-for-dev, in-progress, review, done
- Epics: backlog, in-progress, done
- Retrospectives: optional, done
</output>
<ask>How should these be corrected?
{{#each invalid_entries}}
{{@index}}. {{key}}: "{{status}}" → [select valid status]
{{/each}}
Enter corrections (e.g., "1=in-progress, 2=backlog") or "skip" to continue without fixing:</ask>
<check if="user provided corrections">
<action>Update sprint-status.yaml with corrected values</action>
<action>Re-parse the file with corrected statuses</action>
</check>
</check>
<action>Detect risks:</action>
- IF any story has status "review": suggest `/bmad:bmm:workflows:code-review`
- IF any story has status "in-progress" AND no stories have status "ready-for-dev": recommend staying focused on active story
- IF all epics have status "backlog" AND no stories have status "ready-for-dev": prompt `/bmad:bmm:workflows:create-story`
- IF the `last_updated` timestamp is more than 7 days old (falling back to `generated` when `last_updated` is missing): warn "sprint-status.yaml may be stale"
- IF any story key doesn't match an epic pattern (e.g., story "5-1-..." but no "epic-5"): warn "orphaned story detected"
- IF any epic has status in-progress but has no associated stories: warn "in-progress epic has no stories"
</step>
<step n="3" goal="Select next action recommendation">
<action>Pick the next recommended workflow using priority:</action>
<note>When selecting "first" story: sort by epic number, then story number (e.g., 1-1 before 1-2 before 2-1)</note>
1. If any story status == in-progress → recommend `dev-story` for the first in-progress story
2. Else if any story status == review → recommend `code-review` for the first review story
3. Else if any story status == ready-for-dev → recommend `dev-story`
4. Else if any story status == backlog → recommend `create-story`
5. Else if any retrospective status == optional → recommend `retrospective`
6. Else → All implementation items done; congratulate the user - you both did amazing work together!
<action>Store selected recommendation as: next_story_id, next_workflow_id, next_agent (DEV)</action>
</step>
<step n="4" goal="Display summary">
<output>
## 📊 Sprint Status
- Project: {{project}} ({{project_key}})
- Tracking: {{tracking_system}}
- Status file: {sprint_status_file}
**Stories:** backlog {{count_backlog}}, ready-for-dev {{count_ready}}, in-progress {{count_in_progress}}, review {{count_review}}, done {{count_done}}
**Epics:** backlog {{epic_backlog}}, in-progress {{epic_in_progress}}, done {{epic_done}}
**Next Recommendation:** /bmad:bmm:workflows:{{next_workflow_id}} ({{next_story_id}})
{{#if risks}}
**Risks:**
{{#each risks}}
- {{this}}
{{/each}}
{{/if}}
</output>
</step>
<step n="5" goal="Offer actions">
<ask>Pick an option:
1) Run recommended workflow now
2) Show all stories grouped by status
3) Show raw sprint-status.yaml
4) Exit
Choice:</ask>
<check if="choice == 1">
<output>Run `/bmad:bmm:workflows:{{next_workflow_id}}`.
If the command targets a story, set `story_key={{next_story_id}}` when prompted.</output>
</check>
<check if="choice == 2">
<output>
### Stories by Status
- In Progress: {{stories_in_progress}}
- Review: {{stories_in_review}}
- Ready for Dev: {{stories_ready_for_dev}}
- Backlog: {{stories_backlog}}
- Done: {{stories_done}}
</output>
</check>
<check if="choice == 3">
<action>Display the full contents of {sprint_status_file}</action>
</check>
<check if="choice == 4">
<action>Exit workflow</action>
</check>
</step>
<!-- ========================= -->
<!-- Data mode for other flows -->
<!-- ========================= -->
<step n="20" goal="Data mode output">
<action>Load and parse {sprint_status_file} same as Step 2</action>
<action>Compute recommendation same as Step 3</action>
<template-output>next_workflow_id = {{next_workflow_id}}</template-output>
<template-output>next_story_id = {{next_story_id}}</template-output>
<template-output>count_backlog = {{count_backlog}}</template-output>
<template-output>count_ready = {{count_ready}}</template-output>
<template-output>count_in_progress = {{count_in_progress}}</template-output>
<template-output>count_review = {{count_review}}</template-output>
<template-output>count_done = {{count_done}}</template-output>
<template-output>epic_backlog = {{epic_backlog}}</template-output>
<template-output>epic_in_progress = {{epic_in_progress}}</template-output>
<template-output>epic_done = {{epic_done}}</template-output>
<template-output>risks = {{risks}}</template-output>
<action>Return to caller</action>
</step>
<!-- ========================= -->
<!-- Validate mode -->
<!-- ========================= -->
<step n="30" goal="Validate sprint-status file">
<action>Check that {sprint_status_file} exists</action>
<check if="missing">
<template-output>is_valid = false</template-output>
<template-output>error = "sprint-status.yaml missing"</template-output>
<template-output>suggestion = "Run sprint-planning to create it"</template-output>
<action>Return</action>
</check>
<action>Read and parse {sprint_status_file}</action>
<action>Validate required metadata fields exist: generated, project, project_key, tracking_system, story_location (last_updated is optional for backward compatibility)</action>
<check if="any required field missing">
<template-output>is_valid = false</template-output>
<template-output>error = "Missing required field(s): {{missing_fields}}"</template-output>
<template-output>suggestion = "Re-run sprint-planning or add missing fields manually"</template-output>
<action>Return</action>
</check>
<action>Verify development_status section exists with at least one entry</action>
<check if="development_status missing or empty">
<template-output>is_valid = false</template-output>
<template-output>error = "development_status missing or empty"</template-output>
<template-output>suggestion = "Re-run sprint-planning or repair the file manually"</template-output>
<action>Return</action>
</check>
<action>Validate all status values against known valid statuses:</action>
- Stories: backlog, ready-for-dev, in-progress, review, done (legacy: drafted)
- Epics: backlog, in-progress, done (legacy: contexted)
- Retrospectives: optional, done
<check if="any invalid status found">
<template-output>is_valid = false</template-output>
<template-output>error = "Invalid status values: {{invalid_entries}}"</template-output>
<template-output>suggestion = "Fix invalid statuses in sprint-status.yaml"</template-output>
<action>Return</action>
</check>
<template-output>is_valid = true</template-output>
<template-output>message = "sprint-status.yaml valid: metadata complete, all statuses recognized"</template-output>
</step>
</workflow>


@@ -2622,6 +2622,229 @@ async function runTests() {
}
}
// --- Official module picker uses git tags for external module labels ---
{
const { UI } = require('../tools/installer/ui');
const prompts = require('../tools/installer/prompts');
const channelResolver = require('../tools/installer/modules/channel-resolver');
const { ExternalModuleManager } = require('../tools/installer/modules/external-manager');
const ui = new UI();
const originalOfficialListAvailable39 = OfficialModules.prototype.listAvailable;
const originalExternalListAvailable39 = ExternalModuleManager.prototype.listAvailable;
const originalAutocomplete39 = prompts.autocompleteMultiselect;
const originalSpinner39 = prompts.spinner;
const originalWarn39 = prompts.log.warn;
const originalMessage39 = prompts.log.message;
const originalResolveChannel39 = channelResolver.resolveChannel;
const seenLabels39 = [];
const spinnerStarts39 = [];
const spinnerStops39 = [];
const warnings39 = [];
OfficialModules.prototype.listAvailable = async function () {
return {
modules: [
{
id: 'core',
name: 'BMad Core Module',
description: 'always installed',
defaultSelected: true,
},
],
};
};
ExternalModuleManager.prototype.listAvailable = async function () {
return [
{
code: 'bmb',
name: 'BMad Builder',
description: 'Builder module',
defaultSelected: false,
builtIn: false,
url: 'https://github.com/bmad-code-org/bmad-builder',
defaultChannel: 'stable',
},
{
code: 'tea',
name: 'Test Architect',
description: 'Test architecture module',
defaultSelected: false,
builtIn: false,
url: 'https://github.com/bmad-code-org/bmad-method-test-architecture-enterprise',
defaultChannel: 'stable',
},
];
};
channelResolver.resolveChannel = async function ({ repoUrl, channel }) {
if (channel !== 'stable') {
return { channel, version: channel === 'next' ? 'main' : 'unknown' };
}
if (repoUrl.includes('bmad-builder')) {
return { channel: 'stable', version: 'v1.7.0', ref: 'v1.7.0', resolvedFallback: false };
}
if (repoUrl.includes('bmad-method-test-architecture-enterprise')) {
return { channel: 'stable', version: 'v1.15.0', ref: 'v1.15.0', resolvedFallback: false };
}
throw new Error(`unexpected repo ${repoUrl}`);
};
prompts.autocompleteMultiselect = async (options) => {
seenLabels39.push(...options.options.map((opt) => opt.label));
return ['core'];
};
prompts.spinner = async () => ({
start(message) {
spinnerStarts39.push(message);
},
stop(message) {
spinnerStops39.push(message);
},
error(message) {
spinnerStops39.push(`error:${message}`);
},
});
prompts.log.warn = async (message) => {
warnings39.push(message);
};
prompts.log.message = async () => {};
try {
await ui._selectOfficialModules(
new Set(['bmb']),
new Map([
['bmb', '1.1.0'],
['core', '6.2.0'],
]),
{ global: null, nextSet: new Set(), pins: new Map(), warnings: [] },
);
assert(
seenLabels39.includes('BMad Builder (v1.1.0 → v1.7.0)'),
'official module picker shows installed-to-latest arrow from git tags',
);
assert(seenLabels39.includes('Test Architect (v1.15.0)'), 'official module picker shows latest git-tag version for fresh installs');
assert(
spinnerStarts39.includes('Checking latest module versions...'),
'official module picker wraps external lookups in a single spinner',
);
assert(spinnerStops39.includes('Checked latest module versions.'), 'official module picker stops the version-check spinner');
assert(warnings39.length === 0, 'official module picker does not warn when tag lookups succeed');
} finally {
OfficialModules.prototype.listAvailable = originalOfficialListAvailable39;
ExternalModuleManager.prototype.listAvailable = originalExternalListAvailable39;
prompts.autocompleteMultiselect = originalAutocomplete39;
prompts.spinner = originalSpinner39;
prompts.log.warn = originalWarn39;
prompts.log.message = originalMessage39;
channelResolver.resolveChannel = originalResolveChannel39;
}
}
// --- Official module picker warns and falls back to cached versions when tag lookups fail ---
{
const { UI } = require('../tools/installer/ui');
const prompts = require('../tools/installer/prompts');
const channelResolver = require('../tools/installer/modules/channel-resolver');
const { ExternalModuleManager } = require('../tools/installer/modules/external-manager');
const ui = new UI();
const tempCacheDir39 = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-picker-cache-'));
const priorCacheEnv39 = process.env.BMAD_EXTERNAL_MODULES_CACHE;
const originalOfficialListAvailable39 = OfficialModules.prototype.listAvailable;
const originalExternalListAvailable39 = ExternalModuleManager.prototype.listAvailable;
const originalAutocomplete39 = prompts.autocompleteMultiselect;
const originalSpinner39 = prompts.spinner;
const originalWarn39 = prompts.log.warn;
const originalMessage39 = prompts.log.message;
const originalResolveChannel39 = channelResolver.resolveChannel;
const seenLabels39 = [];
const warnings39 = [];
process.env.BMAD_EXTERNAL_MODULES_CACHE = tempCacheDir39;
await fs.ensureDir(path.join(tempCacheDir39, 'bmb'));
await fs.writeFile(
path.join(tempCacheDir39, 'bmb', 'package.json'),
JSON.stringify({ name: 'bmad-builder', version: '1.7.0' }, null, 2) + '\n',
);
OfficialModules.prototype.listAvailable = async function () {
return {
modules: [
{
id: 'core',
name: 'BMad Core Module',
description: 'always installed',
defaultSelected: true,
},
],
};
};
ExternalModuleManager.prototype.listAvailable = async function () {
return [
{
code: 'bmb',
name: 'BMad Builder',
description: 'Builder module',
defaultSelected: false,
builtIn: false,
url: 'https://github.com/bmad-code-org/bmad-builder',
defaultChannel: 'stable',
},
];
};
channelResolver.resolveChannel = async function () {
throw new Error('tag lookup unavailable');
};
prompts.autocompleteMultiselect = async (options) => {
seenLabels39.push(...options.options.map((opt) => opt.label));
return ['core'];
};
prompts.spinner = async () => ({
start() {},
stop() {},
error() {},
});
prompts.log.warn = async (message) => {
warnings39.push(message);
};
prompts.log.message = async () => {};
try {
await ui._selectOfficialModules(new Set(), new Map(), { global: null, nextSet: new Set(), pins: new Map(), warnings: [] });
assert(
seenLabels39.includes('BMad Builder (v1.7.0)'),
'official module picker falls back to cached/local versions when tag lookup fails',
);
assert(
warnings39.includes('Could not check latest module versions; showing cached/local versions.'),
'official module picker warns once when all latest-version lookups fail',
);
} finally {
OfficialModules.prototype.listAvailable = originalOfficialListAvailable39;
ExternalModuleManager.prototype.listAvailable = originalExternalListAvailable39;
prompts.autocompleteMultiselect = originalAutocomplete39;
prompts.spinner = originalSpinner39;
prompts.log.warn = originalWarn39;
prompts.log.message = originalMessage39;
channelResolver.resolveChannel = originalResolveChannel39;
if (priorCacheEnv39 === undefined) {
delete process.env.BMAD_EXTERNAL_MODULES_CACHE;
} else {
process.env.BMAD_EXTERNAL_MODULES_CACHE = priorCacheEnv39;
}
await fs.remove(tempCacheDir39).catch(() => {});
}
}
console.log('');
// ============================================================


@@ -0,0 +1,348 @@
/**
* Installer Channel Resolution Tests
*
* Unit tests for the pure planning/resolution modules:
* - tools/installer/modules/channel-plan.js
* - tools/installer/modules/channel-resolver.js
*
 * Neither module performs I/O beyond GitHub tag lookups (which these tests
 * don't exercise); everything else is pure semver math, so all tests are deterministic.
*
* Usage: node test/test-installer-channels.js
*/
const {
parseChannelOptions,
decideChannelForModule,
buildPlan,
orphanPinWarnings,
bundledTargetWarnings,
parsePinSpec,
} = require('../tools/installer/modules/channel-plan');
const { parseGitHubRepo, normalizeStableTag, classifyUpgrade, releaseNotesUrl } = require('../tools/installer/modules/channel-resolver');
const colors = {
reset: '',
green: '',
red: '',
yellow: '',
cyan: '',
dim: '',
};
let passed = 0;
let failed = 0;
function assert(condition, testName, errorMessage = '') {
if (condition) {
console.log(`${colors.green}✓${colors.reset} ${testName}`);
passed++;
} else {
console.log(`${colors.red}✗${colors.reset} ${testName}`);
if (errorMessage) {
console.log(` ${colors.dim}${errorMessage}${colors.reset}`);
}
failed++;
}
}
function assertEqual(actual, expected, testName) {
const ok = actual === expected;
assert(ok, testName, ok ? '' : `expected ${JSON.stringify(expected)}, got ${JSON.stringify(actual)}`);
}
function section(title) {
console.log(`\n${colors.cyan}── ${title} ──${colors.reset}`);
}
function runTests() {
// ─────────────────────────────────────────────────────────────────────────
// channel-plan.js :: parsePinSpec
// ─────────────────────────────────────────────────────────────────────────
section('channel-plan :: parsePinSpec');
{
const r = parsePinSpec('bmb=v1.2.3');
assert(r && r.code === 'bmb' && r.tag === 'v1.2.3', 'valid CODE=TAG');
}
{
const r = parsePinSpec(' cis = v0.1.0 ');
assert(r && r.code === 'cis' && r.tag === 'v0.1.0', 'trims whitespace around code and tag');
}
assert(parsePinSpec('') === null, 'empty string returns null');
assert(parsePinSpec('bmb') === null, 'missing = returns null');
assert(parsePinSpec('=v1.0.0') === null, 'leading = returns null');
assert(parsePinSpec('bmb=') === null, 'trailing = returns null');
assert(parsePinSpec(null) === null, 'null input returns null');
let undef;
assert(parsePinSpec(undef) === null, 'undefined input returns null');
assert(parsePinSpec(42) === null, 'non-string input returns null');
// ─────────────────────────────────────────────────────────────────────────
// channel-plan.js :: parseChannelOptions
// ─────────────────────────────────────────────────────────────────────────
section('channel-plan :: parseChannelOptions');
{
const r = parseChannelOptions({});
assert(r.global === null, 'empty: global is null');
assert(r.nextSet instanceof Set && r.nextSet.size === 0, 'empty: nextSet is empty Set');
assert(r.pins instanceof Map && r.pins.size === 0, 'empty: pins is empty Map');
assert(Array.isArray(r.warnings) && r.warnings.length === 0, 'empty: no warnings');
assert(r.acceptBypass === false, 'empty: acceptBypass false by default');
}
{
const r = parseChannelOptions({ channel: 'stable' });
assertEqual(r.global, 'stable', '--channel=stable sets global');
}
{
const r = parseChannelOptions({ channel: 'NEXT' });
assertEqual(r.global, 'next', '--channel is case-insensitive');
}
{
const r = parseChannelOptions({ allStable: true });
assertEqual(r.global, 'stable', '--all-stable sets global stable');
}
{
const r = parseChannelOptions({ allNext: true });
assertEqual(r.global, 'next', '--all-next sets global next');
}
{
const r = parseChannelOptions({ channel: 'bogus' });
assert(r.global === null, 'invalid --channel value is rejected (global stays null)');
assert(
r.warnings.some((w) => w.includes("Ignoring invalid --channel value 'bogus'")),
'invalid --channel produces a warning',
);
}
{
// --all-stable and --all-next conflict → warning, first-wins
const r = parseChannelOptions({ allStable: true, allNext: true });
assertEqual(r.global, 'stable', 'conflict: first flag (--all-stable) wins');
assert(
r.warnings.some((w) => w.includes('Conflicting channel flags')),
'conflict produces warning',
);
}
{
const r = parseChannelOptions({ next: ['bmb', 'cis', ' '] });
assert(r.nextSet.has('bmb') && r.nextSet.has('cis'), '--next=CODE adds to nextSet');
assert(!r.nextSet.has(''), 'blank --next entries are skipped');
}
{
const r = parseChannelOptions({ pin: ['bmb=v1.0.0', 'cis=v2.0.0'] });
assertEqual(r.pins.get('bmb'), 'v1.0.0', '--pin bmb=v1.0.0 recorded');
assertEqual(r.pins.get('cis'), 'v2.0.0', '--pin cis=v2.0.0 recorded');
}
{
const r = parseChannelOptions({ pin: ['bmb=v1.0.0', 'bmb=v1.1.0'] });
assertEqual(r.pins.get('bmb'), 'v1.1.0', 'duplicate --pin: last wins');
assert(
r.warnings.some((w) => w.includes('--pin specified multiple times')),
'duplicate --pin produces warning',
);
}
{
const r = parseChannelOptions({ pin: ['malformed-no-equals'] });
assert(r.pins.size === 0, 'malformed --pin is ignored');
assert(
r.warnings.some((w) => w.includes('malformed --pin')),
'malformed --pin warns',
);
}
{
const r = parseChannelOptions({ yes: true });
assertEqual(r.acceptBypass, true, '--yes sets acceptBypass so curator-bypass prompt is auto-confirmed');
}
{
const r = parseChannelOptions({ acceptBypass: true });
assertEqual(r.acceptBypass, true, 'explicit acceptBypass: true honored');
}
// ─────────────────────────────────────────────────────────────────────────
// channel-plan.js :: decideChannelForModule (precedence)
// ─────────────────────────────────────────────────────────────────────────
section('channel-plan :: decideChannelForModule (precedence)');
const emptyOpts = parseChannelOptions({});
{
const r = decideChannelForModule({ code: 'bmb', channelOptions: emptyOpts });
assertEqual(r.channel, 'stable', 'no signal → stable default');
assertEqual(r.source, 'default', 'source: default');
}
{
const r = decideChannelForModule({ code: 'bmb', channelOptions: emptyOpts, registryDefault: 'next' });
assertEqual(r.channel, 'next', 'registry default applied when no flags');
assertEqual(r.source, 'registry', 'source: registry');
}
{
const r = decideChannelForModule({ code: 'bmb', channelOptions: emptyOpts, registryDefault: 'bogus' });
assertEqual(r.channel, 'stable', 'invalid registry default ignored, falls to stable');
}
{
const opts = parseChannelOptions({ channel: 'next' });
const r = decideChannelForModule({ code: 'bmb', channelOptions: opts, registryDefault: 'stable' });
assertEqual(r.channel, 'next', 'global --channel beats registry default');
assertEqual(r.source, 'flag:--channel', 'source reflects --channel origin');
}
{
const opts = parseChannelOptions({ channel: 'stable', next: ['bmb'] });
const r = decideChannelForModule({ code: 'bmb', channelOptions: opts });
assertEqual(r.channel, 'next', '--next=bmb beats --channel=stable for bmb');
assertEqual(r.source, 'flag:--next', 'source: flag:--next');
}
{
const opts = parseChannelOptions({ channel: 'next', pin: ['bmb=v1.0.0'] });
const r = decideChannelForModule({ code: 'bmb', channelOptions: opts });
assertEqual(r.channel, 'pinned', '--pin beats --channel');
assertEqual(r.pin, 'v1.0.0', 'pin value carried through');
assertEqual(r.source, 'flag:--pin', 'source: flag:--pin');
}
{
const opts = parseChannelOptions({ next: ['bmb'], pin: ['bmb=v1.0.0'] });
const r = decideChannelForModule({ code: 'bmb', channelOptions: opts });
assertEqual(r.channel, 'pinned', '--pin beats --next for same code');
}
// ─────────────────────────────────────────────────────────────────────────
// channel-plan.js :: buildPlan, orphanPinWarnings, bundledTargetWarnings
// ─────────────────────────────────────────────────────────────────────────
section('channel-plan :: buildPlan / warnings');
{
const opts = parseChannelOptions({ allStable: true, pin: ['bmb=v1.0.0'] });
const plan = buildPlan({
modules: [
{ code: 'bmb', defaultChannel: 'stable' },
{ code: 'cis', defaultChannel: 'stable' },
],
channelOptions: opts,
});
assertEqual(plan.get('bmb').channel, 'pinned', 'buildPlan: bmb pinned');
assertEqual(plan.get('cis').channel, 'stable', 'buildPlan: cis stable via global');
}
{
const opts = parseChannelOptions({ pin: ['ghost=v1.0.0', 'bmb=v1.0.0'], next: ['gds'] });
const warnings = orphanPinWarnings(opts, ['bmb']);
assert(
warnings.some((w) => w.includes("--pin for 'ghost'")),
'orphanPinWarnings: flags pin for unselected module',
);
assert(
warnings.some((w) => w.includes("--next for 'gds'")),
'orphanPinWarnings: flags --next for unselected module',
);
assert(!warnings.some((w) => w.includes("'bmb'")), 'orphanPinWarnings: no warning for selected module');
}
{
const opts = parseChannelOptions({ pin: ['bmm=v1.0.0'], next: ['core'] });
const warnings = bundledTargetWarnings(opts, ['core', 'bmm']);
assert(
warnings.some((w) => w.includes('bundled module')),
'bundledTargetWarnings: warns bundled pin',
);
assert(warnings.length === 2, 'bundledTargetWarnings: both pin and next warned');
}
// ─────────────────────────────────────────────────────────────────────────
// channel-resolver.js :: parseGitHubRepo
// ─────────────────────────────────────────────────────────────────────────
section('channel-resolver :: parseGitHubRepo');
{
const r = parseGitHubRepo('https://github.com/bmad-code-org/BMAD-METHOD');
assert(r && r.owner === 'bmad-code-org' && r.repo === 'BMAD-METHOD', 'https URL basic');
}
{
const r = parseGitHubRepo('https://github.com/bmad-code-org/BMAD-METHOD.git');
assert(r && r.repo === 'BMAD-METHOD', '.git suffix stripped');
}
{
const r = parseGitHubRepo('https://github.com/bmad-code-org/BMAD-METHOD/');
assert(r && r.repo === 'BMAD-METHOD', 'trailing slash stripped');
}
{
const r = parseGitHubRepo('https://github.com/org/repo/tree/main/subdir');
assert(r && r.owner === 'org' && r.repo === 'repo', 'deep path yields owner/repo');
}
{
const r = parseGitHubRepo('git@github.com:org/repo.git');
assert(r && r.owner === 'org' && r.repo === 'repo', 'SSH URL parsed');
}
assert(parseGitHubRepo('https://gitlab.com/foo/bar') === null, 'non-github URL returns null');
assert(parseGitHubRepo('') === null, 'empty string returns null');
assert(parseGitHubRepo(null) === null, 'null input returns null');
assert(parseGitHubRepo(123) === null, 'non-string input returns null');
// ─────────────────────────────────────────────────────────────────────────
// channel-resolver.js :: normalizeStableTag
// ─────────────────────────────────────────────────────────────────────────
section('channel-resolver :: normalizeStableTag');
assertEqual(normalizeStableTag('v1.2.3'), '1.2.3', 'strips leading v');
assertEqual(normalizeStableTag('1.2.3'), '1.2.3', 'bare semver accepted');
assertEqual(normalizeStableTag('v1.2.3-alpha.1'), null, 'prerelease -alpha excluded');
assertEqual(normalizeStableTag('v1.2.3-beta'), null, 'prerelease -beta excluded');
assertEqual(normalizeStableTag('v1.2.3-rc.1'), null, 'prerelease -rc excluded');
assertEqual(normalizeStableTag('not-a-version'), null, 'invalid string returns null');
assertEqual(normalizeStableTag('v1.2'), null, 'incomplete semver returns null');
assertEqual(normalizeStableTag(null), null, 'null returns null');
assertEqual(normalizeStableTag(123), null, 'non-string returns null');
// ─────────────────────────────────────────────────────────────────────────
// channel-resolver.js :: classifyUpgrade
// ─────────────────────────────────────────────────────────────────────────
section('channel-resolver :: classifyUpgrade');
assertEqual(classifyUpgrade('v1.2.3', 'v1.2.3'), 'none', 'equal versions → none');
assertEqual(classifyUpgrade('v1.2.3', 'v1.2.2'), 'none', 'downgrade → none');
assertEqual(classifyUpgrade('v1.2.3', 'v1.2.4'), 'patch', 'patch bump');
assertEqual(classifyUpgrade('v1.2.3', 'v1.3.0'), 'minor', 'minor bump');
assertEqual(classifyUpgrade('v1.2.3', 'v2.0.0'), 'major', 'major bump');
assertEqual(classifyUpgrade('1.2.3', '1.2.4'), 'patch', 'unprefixed versions work');
assertEqual(classifyUpgrade('main', 'v1.2.3'), 'unknown', 'non-semver current → unknown');
assertEqual(classifyUpgrade('v1.2.3', 'main'), 'unknown', 'non-semver next → unknown');
assertEqual(classifyUpgrade('', ''), 'unknown', 'both empty → unknown');
// ─────────────────────────────────────────────────────────────────────────
// channel-resolver.js :: releaseNotesUrl
// ─────────────────────────────────────────────────────────────────────────
section('channel-resolver :: releaseNotesUrl');
assertEqual(
releaseNotesUrl('https://github.com/bmad-code-org/BMAD-METHOD', 'v1.2.3'),
'https://github.com/bmad-code-org/BMAD-METHOD/releases/tag/v1.2.3',
'builds standard release URL',
);
assertEqual(releaseNotesUrl('https://gitlab.com/foo/bar', 'v1.0.0'), null, 'non-github repo → null');
assertEqual(releaseNotesUrl('https://github.com/foo/bar', null), null, 'null tag → null');
assertEqual(releaseNotesUrl('', 'v1.0.0'), null, 'empty URL → null');
// ─────────────────────────────────────────────────────────────────────────
// Summary
// ─────────────────────────────────────────────────────────────────────────
console.log('');
console.log(`${colors.cyan}========================================`);
console.log('Test Results:');
console.log(` Passed: ${colors.green}${passed}${colors.reset}`);
console.log(` Failed: ${colors.red}${failed}${colors.reset}`);
console.log(`========================================${colors.reset}\n`);
if (failed === 0) {
console.log(`${colors.green}✨ All channel resolution tests passed!${colors.reset}\n`);
process.exit(0);
} else {
console.log(`${colors.red}❌ Some channel resolution tests failed${colors.reset}\n`);
process.exit(1);
}
}
try {
runTests();
} catch (error) {
console.error(`${colors.red}Test runner failed:${colors.reset}`, error.message);
console.error(error.stack);
process.exit(1);
}

View File

@ -24,6 +24,19 @@ module.exports = {
['--output-folder <path>', 'Output folder path relative to project root (default: _bmad-output)'],
['--custom-source <sources>', 'Comma-separated Git URLs or local paths to install custom modules from'],
['-y, --yes', 'Accept all defaults and skip prompts where possible'],
[
'--channel <channel>',
'Apply channel (stable|next) to all external modules being installed. --all-stable and --all-next are aliases.',
],
['--all-stable', 'Alias for --channel=stable. Resolves externals to the highest stable release tag.'],
['--all-next', 'Alias for --channel=next. Resolves externals to main HEAD.'],
['--next <code>', 'Install module <code> from main HEAD (next channel). Repeatable.', (value, prev) => [...(prev || []), value], []],
[
'--pin <spec>',
'Pin module to a specific tag: --pin CODE=TAG (e.g. --pin bmb=v1.7.0). Repeatable.',
(value, prev) => [...(prev || []), value],
[],
],
],
action: async (options) => {
try {

View File

@ -3,7 +3,7 @@
* User input comes from either UI answers or headless CLI flags.
*/
class Config {
constructor({ directory, modules, ides, skipPrompts, verbose, actionType, coreConfig, moduleConfigs, quickUpdate }) {
constructor({ directory, modules, ides, skipPrompts, verbose, actionType, coreConfig, moduleConfigs, quickUpdate, channelOptions }) {
this.directory = directory;
this.modules = Object.freeze([...modules]);
this.ides = Object.freeze([...ides]);
@ -13,6 +13,8 @@ class Config {
this.coreConfig = coreConfig;
this.moduleConfigs = moduleConfigs;
this._quickUpdate = quickUpdate;
// channelOptions carries a Map + Set; don't deep-freeze.
this.channelOptions = channelOptions || null;
Object.freeze(this);
}
@ -37,6 +39,7 @@ class Config {
coreConfig: userInput.coreConfig || {},
moduleConfigs: userInput.moduleConfigs || null,
quickUpdate: userInput._quickUpdate || false,
channelOptions: userInput.channelOptions || null,
});
}

View File

@ -601,22 +601,40 @@ class Installer {
moduleConfig: moduleConfig,
installer: this,
silent: true,
channelOptions: config.channelOptions,
},
);
// Get display name from source module.yaml and resolve the freshest version metadata we can find locally.
const sourcePath = await officialModules.findModuleSource(moduleName, { silent: true });
const sourcePath = await officialModules.findModuleSource(moduleName, {
silent: true,
channelOptions: config.channelOptions,
});
const moduleInfo = sourcePath ? await officialModules.getModuleInfo(sourcePath, moduleName, '') : null;
const displayName = moduleInfo?.name || moduleName;
const externalResolution = officialModules.externalModuleManager.getResolution(moduleName);
let communityResolution = null;
if (!externalResolution) {
const { CommunityModuleManager } = require('../modules/community-manager');
communityResolution = new CommunityModuleManager().getResolution(moduleName);
}
const resolution = externalResolution || communityResolution;
const cachedResolution = CustomModuleManager._resolutionCache.get(moduleName);
const versionInfo = await resolveModuleVersion(moduleName, {
moduleSourcePath: sourcePath,
fallbackVersion: cachedResolution?.version,
fallbackVersion: resolution?.version || cachedResolution?.version,
marketplacePluginNames: cachedResolution?.pluginName ? [cachedResolution.pluginName] : [],
});
const version = versionInfo.version || '';
addResult(displayName, 'ok', '', { moduleCode: moduleName, newVersion: version });
// Prefer the git tag recorded by the resolution (e.g. "v1.7.0") over
// the on-disk package.json (which may be ahead of the released tag).
const version = resolution?.version || versionInfo.version || '';
addResult(displayName, 'ok', '', {
moduleCode: moduleName,
newVersion: version,
newChannel: resolution?.channel || null,
newSha: resolution?.sha || null,
});
}
}
@ -1091,12 +1109,30 @@ class Installer {
let detail = '';
if (r.moduleCode && r.newVersion) {
const oldVersion = preVersions.get(r.moduleCode);
if (oldVersion && oldVersion === r.newVersion) {
detail = ` (v${r.newVersion}, no change)`;
// Format a version label for display:
// "main" → "main @ <short-sha>" (next channel shows what SHA landed)
// "v1.7.0" or "1.7.0" → "v1.7.0" (prefix 'v' when missing)
// anything else (legacy strings) → as-is
const fmt = (v, sha) => {
if (typeof v !== 'string' || !v) return '';
if (v === 'main' || v === 'HEAD') return sha ? `main @ ${sha.slice(0, 7)}` : 'main';
if (/^v?\d+\.\d+\.\d+/.test(v)) return v.startsWith('v') ? v : `v${v}`;
return v;
};
const newV = fmt(r.newVersion, r.newSha);
// 'main'/'HEAD' strings only identify the channel, not the commit, so
// we can't assert "no change" without comparing SHAs — and preVersions
// doesn't carry the old SHA. Render these as a refresh instead of a
// false-negative "no change".
const isMainLike = oldVersion === 'main' || oldVersion === 'HEAD';
if (oldVersion && oldVersion === r.newVersion && !isMainLike) {
detail = ` (${newV}, no change)`;
} else if (oldVersion && isMainLike) {
detail = ` (${newV}, refreshed)`;
} else if (oldVersion) {
detail = ` (v${oldVersion} → v${r.newVersion})`;
detail = ` (${fmt(oldVersion, r.newSha)} → ${newV})`;
} else {
detail = ` (v${r.newVersion}, installed)`;
detail = ` (${newV}, installed)`;
}
} else if (r.detail) {
detail = ` (${r.detail})`;
@ -1216,9 +1252,59 @@ class Installer {
await prompts.log.warn(`Skipping ${skippedModules.length} module(s) - no source available: ${skippedModules.join(', ')}`);
}
// Build channel options from the existing manifest FIRST so the config
// collector below (which triggers external-module clones via
// findModuleSource) knows each module's recorded channel and doesn't
// silently redecide it. Without this, modules previously on 'next' or
// 'pinned' would trigger a stable-channel tag lookup at config-collection
// time, burning GitHub API quota and potentially failing.
const manifestData = await this.manifest.read(bmadDir);
const channelOptions = { global: null, nextSet: new Set(), pins: new Map(), warnings: [] };
if (manifestData?.modulesDetailed) {
const { fetchStableTags, classifyUpgrade, parseGitHubRepo } = require('../modules/channel-resolver');
for (const entry of manifestData.modulesDetailed) {
if (!entry?.name || !entry?.channel) continue;
if (entry.channel === 'pinned' && entry.version) {
channelOptions.pins.set(entry.name, entry.version);
continue;
}
if (entry.channel === 'next') {
channelOptions.nextSet.add(entry.name);
continue;
}
// Stable: classify the available upgrade. Patches and minors fall
// through (stable default picks up the top tag). A major upgrade
// requires opt-in, so under quick-update's non-interactive semantics
// we pin to the current version to prevent a silent breaking jump.
if (entry.channel === 'stable' && entry.version && entry.repoUrl) {
const parsed = parseGitHubRepo(entry.repoUrl);
if (!parsed) continue;
try {
const tags = await fetchStableTags(parsed.owner, parsed.repo);
if (tags.length === 0) continue;
const topTag = tags[0].tag;
const cls = classifyUpgrade(entry.version, topTag);
if (cls === 'major') {
channelOptions.pins.set(entry.name, entry.version);
await prompts.log.warn(
`${entry.name} ${entry.version} → ${topTag} is a new major release; staying on ${entry.version}. ` +
`Run \`bmad install\` (Modify) with \`--pin ${entry.name}=${topTag}\` to accept.`,
);
}
} catch (error) {
// Tag lookup failed (offline, rate-limited). Stay on the current
// version rather than guessing — the existing cache is already
// at that ref, so re-using it keeps the install stable.
channelOptions.pins.set(entry.name, entry.version);
await prompts.log.warn(`Could not check ${entry.name} for updates (${error.message}); staying on ${entry.version}.`);
}
}
}
}
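The major-release gate above can be exercised in isolation. This is a dependency-free sketch (illustrative only — the real code uses `classifyUpgrade` from channel-resolver.js, which does full semver math; `quickUpdateDecision` is a hypothetical name), comparing only the major components:

```javascript
// Sketch of quick-update's stable-channel gate: patch/minor bumps fall
// through to the top tag; a major bump pins to the current version.
// Note: this sketch also pins on unparseable versions, whereas the real
// code classifies those as 'unknown' and leaves them alone.
function quickUpdateDecision(currentVersion, topTag) {
  const major = (v) => Number.parseInt(String(v).replace(/^v/, '').split('.')[0], 10);
  const cur = major(currentVersion);
  const top = major(topTag);
  if (Number.isNaN(cur) || Number.isNaN(top)) return 'pin-to-current';
  return top > cur ? 'pin-to-current' : 'follow-stable';
}

console.log(quickUpdateDecision('v1.7.0', 'v2.0.0')); // major jump: stay pinned
console.log(quickUpdateDecision('v1.7.0', 'v1.8.0')); // minor: follow the top tag
```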
// Load existing configs and collect new fields (if any)
await prompts.log.info('Checking for new configuration options...');
const quickModules = new OfficialModules();
const quickModules = new OfficialModules({ channelOptions });
await quickModules.loadExistingConfig(projectDir);
let promptedForNewFields = false;
@ -1257,6 +1343,7 @@ class Installer {
_quickUpdate: true,
_preserveModules: skippedModules,
_existingModules: installedModules,
channelOptions,
};
await this.install(installConfig);

View File

@ -349,7 +349,22 @@ class ManifestGenerator {
npmPackage: versionInfo.npmPackage,
repoUrl: versionInfo.repoUrl,
};
if (versionInfo.localPath) moduleEntry.localPath = versionInfo.localPath;
// Preserve channel/sha from the resolution (external/community/custom)
// or from the existing entry if this is a no-change rewrite.
const channel = versionInfo.channel ?? existing?.channel;
const sha = versionInfo.sha ?? existing?.sha;
if (channel) moduleEntry.channel = channel;
if (sha) moduleEntry.sha = sha;
if (versionInfo.localPath || existing?.localPath) {
moduleEntry.localPath = versionInfo.localPath || existing.localPath;
}
if (versionInfo.rawSource || existing?.rawSource) {
moduleEntry.rawSource = versionInfo.rawSource || existing.rawSource;
}
const regTag = versionInfo.registryApprovedTag ?? existing?.registryApprovedTag;
const regSha = versionInfo.registryApprovedSha ?? existing?.registryApprovedSha;
if (regTag) moduleEntry.registryApprovedTag = regTag;
if (regSha) moduleEntry.registryApprovedSha = regSha;
updatedModules.push(moduleEntry);
}

View File

@ -1,9 +1,20 @@
const path = require('node:path');
const https = require('node:https');
const { execFile } = require('node:child_process');
const { promisify } = require('node:util');
const fs = require('../fs-native');
const crypto = require('node:crypto');
const { resolveModuleVersion } = require('../modules/version-resolver');
const prompts = require('../prompts');
const execFileAsync = promisify(execFile);
const NPM_LOOKUP_TIMEOUT_MS = 10_000;
const NPM_PACKAGE_NAME_PATTERN = /^(?:@[a-z0-9][a-z0-9._~-]*\/)?[a-z0-9][a-z0-9._~-]*$/;
function isValidNpmPackageName(packageName) {
return typeof packageName === 'string' && NPM_PACKAGE_NAME_PATTERN.test(packageName);
}
class Manifest {
/**
* Create a new manifest
@ -180,7 +191,12 @@ class Manifest {
npmPackage: options.npmPackage || null,
repoUrl: options.repoUrl || null,
};
if (options.channel) entry.channel = options.channel;
if (options.sha) entry.sha = options.sha;
if (options.localPath) entry.localPath = options.localPath;
if (options.rawSource) entry.rawSource = options.rawSource;
if (options.registryApprovedTag) entry.registryApprovedTag = options.registryApprovedTag;
if (options.registryApprovedSha) entry.registryApprovedSha = options.registryApprovedSha;
manifest.modules.push(entry);
} else {
// Module exists, update its version info
@ -192,6 +208,11 @@ class Manifest {
npmPackage: options.npmPackage === undefined ? existing.npmPackage : options.npmPackage,
repoUrl: options.repoUrl === undefined ? existing.repoUrl : options.repoUrl,
localPath: options.localPath === undefined ? existing.localPath : options.localPath,
channel: options.channel === undefined ? existing.channel : options.channel,
sha: options.sha === undefined ? existing.sha : options.sha,
rawSource: options.rawSource === undefined ? existing.rawSource : options.rawSource,
registryApprovedTag: options.registryApprovedTag === undefined ? existing.registryApprovedTag : options.registryApprovedTag,
registryApprovedSha: options.registryApprovedSha === undefined ? existing.registryApprovedSha : options.registryApprovedSha,
lastUpdated: new Date().toISOString(),
};
}
@ -275,12 +296,17 @@ class Manifest {
const moduleInfo = await extMgr.getModuleByCode(moduleName);
if (moduleInfo) {
const externalResolution = extMgr.getResolution(moduleName);
const versionInfo = await resolveModuleVersion(moduleName, { moduleSourcePath });
return {
version: versionInfo.version,
// Git tag recorded during install trumps the on-disk package.json
// version, so the manifest carries "v1.7.0" instead of "1.7.0".
version: externalResolution?.version || versionInfo.version,
source: 'external',
npmPackage: moduleInfo.npmPackage || null,
repoUrl: moduleInfo.url || null,
channel: externalResolution?.channel || null,
sha: externalResolution?.sha || null,
};
}
@ -289,15 +315,20 @@ class Manifest {
const communityMgr = new CommunityModuleManager();
const communityInfo = await communityMgr.getModuleByCode(moduleName);
if (communityInfo) {
const communityResolution = communityMgr.getResolution(moduleName);
const versionInfo = await resolveModuleVersion(moduleName, {
moduleSourcePath,
fallbackVersion: communityInfo.version,
});
return {
version: versionInfo.version || communityInfo.version,
version: communityResolution?.version || versionInfo.version || communityInfo.version,
source: 'community',
npmPackage: communityInfo.npmPackage || null,
repoUrl: communityInfo.url || null,
channel: communityResolution?.channel || null,
sha: communityResolution?.sha || null,
registryApprovedTag: communityResolution?.registryApprovedTag || null,
registryApprovedSha: communityResolution?.registryApprovedSha || null,
};
}
@ -312,12 +343,17 @@ class Manifest {
fallbackVersion: resolved?.version,
marketplacePluginNames: resolved?.pluginName ? [resolved.pluginName] : [],
});
const hasGitClone = !!resolved?.repoUrl;
return {
version: versionInfo.version,
// Prefer the git ref we actually cloned over the package.json version.
version: resolved?.cloneRef || (hasGitClone ? 'main' : versionInfo.version),
source: 'custom',
npmPackage: null,
repoUrl: resolved?.repoUrl || null,
localPath: resolved?.localPath || null,
channel: hasGitClone ? (resolved?.cloneRef ? 'pinned' : 'next') : null,
sha: resolved?.cloneSha || null,
rawSource: resolved?.rawInput || null,
};
}
@ -337,23 +373,22 @@ class Manifest {
* @returns {string|null} Latest version or null
*/
async fetchNpmVersion(packageName) {
try {
const https = require('node:https');
const { execSync } = require('node:child_process');
if (!isValidNpmPackageName(packageName)) {
return null;
}
try {
// Try using npm view first (more reliable)
try {
const result = execSync(`npm view ${packageName} version`, {
const { stdout } = await execFileAsync('npm', ['view', packageName, 'version'], {
encoding: 'utf8',
stdio: 'pipe',
timeout: 10_000,
timeout: NPM_LOOKUP_TIMEOUT_MS,
});
return result.trim();
return stdout.trim();
} catch {
// Fallback to npm registry API
return new Promise((resolve, reject) => {
https
.get(`https://registry.npmjs.org/${packageName}`, (res) => {
return new Promise((resolve) => {
const request = https.get(`https://registry.npmjs.org/${encodeURIComponent(packageName)}`, (res) => {
let data = '';
res.on('data', (chunk) => (data += chunk));
res.on('end', () => {
@ -364,8 +399,14 @@ class Manifest {
resolve(null);
}
});
})
.on('error', () => resolve(null));
});
request.setTimeout(NPM_LOOKUP_TIMEOUT_MS, () => {
request.destroy();
resolve(null);
});
request.on('error', () => resolve(null));
});
}
} catch {

View File

@ -0,0 +1,203 @@
/**
* Channel plan: the per-module resolution decision applied at install time.
*
* A "plan entry" for a module is:
* { channel: 'stable'|'next'|'pinned', pin?: string }
*
* We build the plan from:
* 1. CLI flags (--channel / --all-* / --next=CODE / --pin CODE=TAG)
* 2. Interactive answers (the "all stable?" gate + per-module picker)
* 3. Registry defaults (default_channel from registry-fallback.yaml / official.yaml)
* 4. Hardcoded fallback 'stable'
*
* Precedence: --pin > --next=CODE > --channel (global) > registry default > 'stable'.
*
* This module is pure. No prompts, no git, no filesystem.
*/
const VALID_CHANNELS = new Set(['stable', 'next']);
/**
* Parse raw commander options into a structured channel options object.
*
* @param {Object} options - raw command-line options
* @returns {{
* global: 'stable'|'next'|null,
* nextSet: Set<string>,
* pins: Map<string, string>,
* warnings: string[]
* }}
*/
function parseChannelOptions(options = {}) {
const warnings = [];
// Global channel from --channel / --all-stable / --all-next.
let global = null;
const aliases = [];
if (options.channel) aliases.push({ flag: '--channel', value: normalizeChannel(options.channel, warnings, '--channel') });
if (options.allStable) aliases.push({ flag: '--all-stable', value: 'stable' });
if (options.allNext) aliases.push({ flag: '--all-next', value: 'next' });
const distinct = new Set(aliases.map((a) => a.value).filter(Boolean));
if (distinct.size > 1) {
warnings.push(
`Conflicting channel flags: ${aliases
.filter((a) => a.value)
.map((a) => a.flag + '=' + a.value)
.join(', ')}. Using first: ${aliases.find((a) => a.value).flag}.`,
);
}
const firstValid = aliases.find((a) => a.value);
if (firstValid) global = firstValid.value;
// --next=CODE (repeatable)
const nextSet = new Set();
for (const code of options.next || []) {
const trimmed = String(code).trim();
if (!trimmed) continue;
nextSet.add(trimmed);
}
// --pin CODE=TAG (repeatable)
const pins = new Map();
for (const spec of options.pin || []) {
const parsed = parsePinSpec(spec);
if (!parsed) {
warnings.push(`Ignoring malformed --pin value '${spec}'. Expected CODE=TAG.`);
continue;
}
if (pins.has(parsed.code)) {
warnings.push(`--pin specified multiple times for '${parsed.code}'. Using last: ${parsed.tag}.`);
}
pins.set(parsed.code, parsed.tag);
}
// --yes auto-confirms the community-module curator-bypass prompt so
// headless installs with --next=/--pin for a community module don't hang.
const acceptBypass = options.yes === true || options.acceptBypass === true;
return { global, nextSet, pins, warnings, acceptBypass };
}
function normalizeChannel(raw, warnings, flagName) {
if (typeof raw !== 'string') return null;
const lower = raw.trim().toLowerCase();
if (VALID_CHANNELS.has(lower)) return lower;
warnings.push(`Ignoring invalid ${flagName} value '${raw}'. Expected one of: stable, next.`);
return null;
}
function parsePinSpec(spec) {
if (typeof spec !== 'string') return null;
const idx = spec.indexOf('=');
if (idx <= 0 || idx === spec.length - 1) return null;
const code = spec.slice(0, idx).trim();
const tag = spec.slice(idx + 1).trim();
if (!code || !tag) return null;
return { code, tag };
}
/**
* Build a per-module plan entry, applying precedence.
*
* @param {Object} args
* @param {string} args.code
* @param {Object} args.channelOptions - from parseChannelOptions
* @param {string} [args.registryDefault] - module's default_channel, if any
* @returns {{channel: 'stable'|'next'|'pinned', pin?: string, source: string}}
* source describes where the decision came from, for logging / debugging.
*/
function decideChannelForModule({ code, channelOptions, registryDefault }) {
const { global, nextSet, pins } = channelOptions || { nextSet: new Set(), pins: new Map() };
if (pins && pins.has(code)) {
return { channel: 'pinned', pin: pins.get(code), source: 'flag:--pin' };
}
if (nextSet && nextSet.has(code)) {
return { channel: 'next', source: 'flag:--next' };
}
if (global) {
return { channel: global, source: 'flag:--channel' };
}
if (registryDefault && VALID_CHANNELS.has(registryDefault)) {
return { channel: registryDefault, source: 'registry' };
}
return { channel: 'stable', source: 'default' };
}
/**
* Build a full channel plan map for a set of modules.
*
* @param {Object} args
* @param {Array<{code: string, defaultChannel?: string, builtIn?: boolean}>} args.modules
* Only the modules that need a channel entry; callers should filter out
* bundled modules (core/bmm) before calling.
* @param {Object} args.channelOptions - from parseChannelOptions
* @returns {Map<string, {channel: string, pin?: string, source: string}>}
*/
function buildPlan({ modules, channelOptions }) {
const plan = new Map();
for (const mod of modules || []) {
plan.set(
mod.code,
decideChannelForModule({
code: mod.code,
channelOptions,
registryDefault: mod.defaultChannel,
}),
);
}
return plan;
}
/**
* Report any --pin CODE=TAG entries that don't correspond to a selected module.
* These get warned about but don't abort the install.
*/
function orphanPinWarnings(channelOptions, selectedCodes) {
const warnings = [];
const selected = new Set(selectedCodes || []);
for (const code of channelOptions?.pins?.keys() || []) {
if (!selected.has(code)) {
warnings.push(`--pin for '${code}' has no effect (module not selected).`);
}
}
for (const code of channelOptions?.nextSet || []) {
if (!selected.has(code)) {
warnings.push(`--next for '${code}' has no effect (module not selected).`);
}
}
return warnings;
}
/**
* Warn when --pin / --next targets a bundled module (core, bmm). Those are
 * shipped inside the installer binary; there's no git clone to override, so
* the flag has no effect. Users who actually want a prerelease core/bmm
* should use `npx bmad-method@next install`.
*/
function bundledTargetWarnings(channelOptions, bundledCodes) {
const warnings = [];
const bundled = new Set(bundledCodes || []);
const hint = '(bundled module; use `npx bmad-method@next install` for a prerelease)';
for (const code of channelOptions?.pins?.keys() || []) {
if (bundled.has(code)) {
warnings.push(`--pin for '${code}' has no effect ${hint}.`);
}
}
for (const code of channelOptions?.nextSet || []) {
if (bundled.has(code)) {
warnings.push(`--next for '${code}' has no effect ${hint}.`);
}
}
return warnings;
}
module.exports = {
parseChannelOptions,
decideChannelForModule,
buildPlan,
orphanPinWarnings,
bundledTargetWarnings,
parsePinSpec,
};
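The precedence chain documented at the top of this file (`--pin` > `--next=CODE` > `--channel` > registry default > 'stable') can be exercised with a minimal re-implementation. Illustrative only; `decide` is a hypothetical name, not part of the module's API:

```javascript
// Minimal re-implementation of the documented precedence, for illustration.
function decide(code, { global = null, nextSet = new Set(), pins = new Map() } = {}, registryDefault) {
  if (pins.has(code)) return { channel: 'pinned', pin: pins.get(code) };
  if (nextSet.has(code)) return { channel: 'next' };
  if (global) return { channel: global };
  if (registryDefault === 'stable' || registryDefault === 'next') return { channel: registryDefault };
  return { channel: 'stable' };
}

const opts = { global: 'next', nextSet: new Set(['cis']), pins: new Map([['bmb', 'v1.7.0']]) };
console.log(decide('bmb', opts)); // pin beats the global channel
console.log(decide('gds', opts)); // no pin/next entry: global 'next' applies
console.log(decide('tea', {}, 'stable')); // no flags at all: registry default
```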

View File

@ -0,0 +1,241 @@
const https = require('node:https');
const semver = require('semver');
/**
* Channel resolver for external and community modules.
*
* A "channel" is the resolution strategy that decides which ref of a module
* to clone when no explicit version is supplied:
* - stable: highest pure-semver git tag (excludes -alpha/-beta/-rc)
* - next: main branch HEAD
* - pinned: an explicit user-supplied tag
*
* This module is pure (no prompts, no git, no filesystem). It only talks to
* the GitHub tags API and performs semver math. Clone logic lives in the
* module managers that call resolveChannel().
*/
const GITHUB_API_BASE = 'https://api.github.com';
const DEFAULT_TIMEOUT_MS = 10_000;
const USER_AGENT = 'bmad-method-installer';
// Per-process cache: { 'owner/repo' => Array<{tag, version}> sorted desc } of pure-semver tags.
const tagCache = new Map();
/**
* Parse a GitHub repo URL into { owner, repo }. Returns null if the URL is
* not a GitHub URL the resolver can handle.
*/
function parseGitHubRepo(url) {
if (!url || typeof url !== 'string') return null;
const trimmed = url
.trim()
.replace(/\.git$/, '')
.replace(/\/$/, '');
// https://github.com/owner/repo
const httpsMatch = trimmed.match(/^https?:\/\/github\.com\/([^/]+)\/([^/]+)(?:\/.*)?$/i);
if (httpsMatch) return { owner: httpsMatch[1], repo: httpsMatch[2] };
// git@github.com:owner/repo
const sshMatch = trimmed.match(/^git@github\.com:([^/]+)\/([^/]+)$/i);
if (sshMatch) return { owner: sshMatch[1], repo: sshMatch[2] };
return null;
}
function fetchJson(url, { timeout = DEFAULT_TIMEOUT_MS } = {}) {
const headers = {
'User-Agent': USER_AGENT,
Accept: 'application/vnd.github+json',
'X-GitHub-Api-Version': '2022-11-28',
};
if (process.env.GITHUB_TOKEN) {
headers.Authorization = `Bearer ${process.env.GITHUB_TOKEN}`;
}
return new Promise((resolve, reject) => {
const req = https.get(url, { headers, timeout }, (res) => {
let body = '';
res.on('data', (chunk) => (body += chunk));
res.on('end', () => {
if (res.statusCode < 200 || res.statusCode >= 300) {
const err = new Error(`GitHub API ${res.statusCode} for ${url}: ${body.slice(0, 200)}`);
err.statusCode = res.statusCode;
return reject(err);
}
try {
resolve(JSON.parse(body));
} catch (error) {
reject(new Error(`Failed to parse GitHub response: ${error.message}`));
}
});
});
req.on('error', reject);
req.on('timeout', () => {
req.destroy();
reject(new Error(`GitHub API request timed out: ${url}`));
});
});
}
/**
* Strip a leading 'v' and return a valid semver string, or null if the tag
* is not valid semver or is a prerelease (contains -alpha/-beta/-rc/etc.).
*/
function normalizeStableTag(tagName) {
if (typeof tagName !== 'string') return null;
const stripped = tagName.startsWith('v') ? tagName.slice(1) : tagName;
const valid = semver.valid(stripped);
if (!valid) return null;
// Exclude prereleases. semver.prerelease returns null for pure releases.
if (semver.prerelease(valid)) return null;
return valid;
}
/**
* Fetch pure-semver tags (highest first) from a GitHub repo.
* Cached per-process per owner/repo.
*
* @returns {Promise<Array<{tag: string, version: string}>>}
* tag is the original ref name (e.g. "v1.7.0"), version is the cleaned
* semver (e.g. "1.7.0").
*/
async function fetchStableTags(owner, repo, { timeout } = {}) {
const cacheKey = `${owner}/${repo}`;
if (tagCache.has(cacheKey)) return tagCache.get(cacheKey);
// GitHub returns up to 100 tags per page; one page is plenty for our modules.
const url = `${GITHUB_API_BASE}/repos/${owner}/${repo}/tags?per_page=100`;
const raw = await fetchJson(url, { timeout });
if (!Array.isArray(raw)) {
throw new TypeError(`Unexpected response from ${url}`);
}
const stable = [];
for (const entry of raw) {
const version = normalizeStableTag(entry?.name);
if (version) stable.push({ tag: entry.name, version });
}
stable.sort((a, b) => semver.rcompare(a.version, b.version));
tagCache.set(cacheKey, stable);
return stable;
}
/**
* Resolve a channel plan for a single module into a git-clonable ref.
*
* @param {Object} args
* @param {'stable'|'next'|'pinned'} args.channel
* @param {string} [args.pin] - Required when channel === 'pinned'
* @param {string} args.repoUrl - Module's git URL (for tag lookup)
* @returns {Promise<{channel, ref, version}>} where
* ref: the git ref to pass to `git clone --branch`, or null for HEAD (next)
* version: the resolved version string (tag name for stable/pinned, 'main' for next)
*
 * Throws on:
 *  - pinned without a pin value
 *  - stable when the GitHub tag lookup itself fails (offline, rate-limited)
 *
 * Falls back to next-channel semantics and sets resolvedFallback=true when the
 * repo URL is not a parseable GitHub URL or stable resolution turns up no tags.
*/
async function resolveChannel({ channel, pin, repoUrl, timeout }) {
if (channel === 'pinned') {
if (!pin) throw new Error('resolveChannel: pinned channel requires a pin value');
return { channel: 'pinned', ref: pin, version: pin, resolvedFallback: false };
}
if (channel === 'next') {
return { channel: 'next', ref: null, version: 'main', resolvedFallback: false };
}
if (channel === 'stable') {
const parsed = parseGitHubRepo(repoUrl);
if (!parsed) {
// No GitHub URL — caller must handle by falling back to next.
return { channel: 'next', ref: null, version: 'main', resolvedFallback: true, reason: 'not-a-github-url' };
}
try {
const tags = await fetchStableTags(parsed.owner, parsed.repo, { timeout });
if (tags.length === 0) {
return { channel: 'next', ref: null, version: 'main', resolvedFallback: true, reason: 'no-stable-tags' };
}
const top = tags[0];
return { channel: 'stable', ref: top.tag, version: top.tag, resolvedFallback: false };
} catch (error) {
// Propagate the error; callers decide whether to fall back or abort.
error.message = `Failed to resolve stable channel for ${parsed.owner}/${parsed.repo}: ${error.message}`;
throw error;
}
}
throw new Error(`resolveChannel: unknown channel '${channel}'`);
}
/**
* Verify that a specific tag exists in a GitHub repo. Used to validate
* --pin values before the user sits through a long clone that then fails.
*/
async function tagExists(owner, repo, tagName, { timeout } = {}) {
const url = `${GITHUB_API_BASE}/repos/${owner}/${repo}/git/refs/tags/${encodeURIComponent(tagName)}`;
try {
await fetchJson(url, { timeout });
return true;
} catch (error) {
if (error.statusCode === 404) return false;
throw error;
}
}
/**
* Classify the semver delta between two versions.
* - 'none' same version (or downgrade; treated same)
* - 'patch' same major.minor, higher patch
* - 'minor' same major, higher minor
* - 'major' different major
* - 'unknown' either version is not valid semver; caller should treat as major
*/
function classifyUpgrade(currentVersion, newVersion) {
const current = semver.valid(semver.coerce(currentVersion));
const next = semver.valid(semver.coerce(newVersion));
if (!current || !next) return 'unknown';
if (semver.lte(next, current)) return 'none';
const diff = semver.diff(current, next);
if (diff === 'patch') return 'patch';
if (diff === 'minor' || diff === 'preminor') return 'minor';
if (diff === 'major' || diff === 'premajor') return 'major';
// prepatch, prerelease — treat conservatively as minor (prereleases shouldn't
// normally surface here since stable channel filters them out).
return 'minor';
}
/**
* Build the GitHub release notes URL for a resolved tag.
* Returns null if the repo URL isn't a GitHub URL.
*/
function releaseNotesUrl(repoUrl, tag) {
const parsed = parseGitHubRepo(repoUrl);
if (!parsed || !tag) return null;
return `https://github.com/${parsed.owner}/${parsed.repo}/releases/tag/${encodeURIComponent(tag)}`;
}
/**
* Test-only: clear the per-process tag cache.
*/
function _clearTagCache() {
tagCache.clear();
}
module.exports = {
parseGitHubRepo,
fetchStableTags,
resolveChannel,
tagExists,
classifyUpgrade,
releaseNotesUrl,
normalizeStableTag,
_clearTagCache,
};
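The two network-free branches of `resolveChannel` reduce to a small synchronous sketch (illustrative only — the real function is async and additionally resolves the 'stable' channel via the GitHub tags API; `resolvePureChannel` is a hypothetical name):

```javascript
// Synchronous sketch of resolveChannel's pure branches.
function resolvePureChannel({ channel, pin }) {
  if (channel === 'pinned') {
    if (!pin) throw new Error('pinned channel requires a pin value');
    return { channel: 'pinned', ref: pin, version: pin, resolvedFallback: false };
  }
  if (channel === 'next') {
    // A null ref tells callers to clone default-branch HEAD.
    return { channel: 'next', ref: null, version: 'main', resolvedFallback: false };
  }
  throw new Error(`sketch only covers pinned/next, got '${channel}'`);
}

console.log(resolvePureChannel({ channel: 'pinned', pin: 'v1.7.0' })); // ref is the tag itself
console.log(resolvePureChannel({ channel: 'next' })); // ref null, version 'main'
```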

View File

@ -4,6 +4,8 @@ const path = require('node:path');
const { execSync } = require('node:child_process');
const prompts = require('../prompts');
const { RegistryClient } = require('./registry-client');
const { decideChannelForModule } = require('./channel-plan');
const { parseGitHubRepo, tagExists } = require('./channel-resolver');
const MARKETPLACE_OWNER = 'bmad-code-org';
const MARKETPLACE_REPO = 'bmad-plugins-marketplace';
@ -15,13 +17,29 @@ const MARKETPLACE_REF = 'main';
* Returns empty results when the registry is unreachable.
 * Community modules are pinned to the approved SHA when one is set; otherwise HEAD is used.
*/
function quoteShellRef(ref) {
if (typeof ref !== 'string' || !/^[\w.\-+/]+$/.test(ref)) {
throw new Error(`Unsafe ref name: ${JSON.stringify(ref)}`);
}
return `"${ref}"`;
}
class CommunityModuleManager {
// moduleCode → { channel, version, sha, registryApprovedTag, registryApprovedSha, repoUrl, bypassedCurator }
// Shared across all instances; the manifest writer often uses a fresh instance.
static _resolutions = new Map();
constructor() {
this._client = new RegistryClient();
this._cachedIndex = null;
this._cachedCategories = null;
}
/** Get the most recent channel resolution for a community module. */
getResolution(moduleCode) {
return CommunityModuleManager._resolutions.get(moduleCode) || null;
}
// ─── Data Loading ──────────────────────────────────────────────────────────
/**
@@ -196,12 +214,49 @@ class CommunityModuleManager {
return await prompts.spinner();
};
const sha = moduleInfo.approvedSha;
// ─── Resolve channel plan ──────────────────────────────────────────────
// Default community behavior (stable channel) honors the curator's
// approved SHA. --next=CODE and --pin CODE=TAG override the curator; we
// warn the user before bypassing the approved version.
const planEntry = decideChannelForModule({
code: moduleCode,
channelOptions: options.channelOptions,
registryDefault: 'stable',
});
const approvedSha = moduleInfo.approvedSha;
const approvedTag = moduleInfo.approvedTag;
let bypassedCurator = false;
if (planEntry.channel !== 'stable') {
bypassedCurator = true;
if (!silent) {
const approvedLabel = approvedTag || approvedSha || 'curator-approved version';
await prompts.log.warn(
`WARNING: Installing '${moduleCode}' from ${
planEntry.channel === 'pinned' ? `tag ${planEntry.pin}` : 'main HEAD'
} bypasses the curator-approved ${approvedLabel}. Proceed only if you trust this source.`,
);
if (!options.channelOptions?.acceptBypass) {
const proceed = await prompts.confirm({
message: `Continue installing '${moduleCode}' with curator bypass?`,
default: false,
});
if (!proceed) {
throw new Error(`Install of community module '${moduleCode}' cancelled by user.`);
}
}
}
}
let needsDependencyInstall = false;
let wasNewClone = false;
if (await fs.pathExists(moduleCacheDir)) {
// Already cloned - update to latest HEAD
// Already cloned — refresh to the correct ref for the resolved channel.
// A pinned install must not reset to origin/HEAD (it would silently drift
// to main on every re-install). Stable + approvedSha is handled below
// by the curator-SHA checkout logic.
const fetchSpinner = await createSpinner();
fetchSpinner.start(`Checking ${moduleInfo.displayName}...`);
try {
@@ -211,10 +266,24 @@ class CommunityModuleManager {
stdio: ['ignore', 'pipe', 'pipe'],
env: { ...process.env, GIT_TERMINAL_PROMPT: '0' },
});
if (planEntry.channel === 'pinned') {
// Fetch the pin tag specifically and check it out.
execSync(`git fetch --depth 1 origin ${quoteShellRef(planEntry.pin)} --no-tags`, {
cwd: moduleCacheDir,
stdio: ['ignore', 'pipe', 'pipe'],
env: { ...process.env, GIT_TERMINAL_PROMPT: '0' },
});
execSync('git checkout --quiet FETCH_HEAD', {
cwd: moduleCacheDir,
stdio: ['ignore', 'pipe', 'pipe'],
});
} else {
// stable (approvedSha path re-checks out below) and next: track main.
execSync('git reset --hard origin/HEAD', {
cwd: moduleCacheDir,
stdio: ['ignore', 'pipe', 'pipe'],
});
}
const newRef = execSync('git rev-parse HEAD', { cwd: moduleCacheDir, stdio: 'pipe' }).toString().trim();
if (currentRef !== newRef) needsDependencyInstall = true;
fetchSpinner.stop(`Verified ${moduleInfo.displayName}`);
@@ -231,10 +300,17 @@ class CommunityModuleManager {
const fetchSpinner = await createSpinner();
fetchSpinner.start(`Fetching ${moduleInfo.displayName}...`);
try {
if (planEntry.channel === 'pinned') {
execSync(`git clone --depth 1 --branch ${quoteShellRef(planEntry.pin)} "${moduleInfo.url}" "${moduleCacheDir}"`, {
stdio: ['ignore', 'pipe', 'pipe'],
env: { ...process.env, GIT_TERMINAL_PROMPT: '0' },
});
} else {
execSync(`git clone --depth 1 "${moduleInfo.url}" "${moduleCacheDir}"`, {
stdio: ['ignore', 'pipe', 'pipe'],
env: { ...process.env, GIT_TERMINAL_PROMPT: '0' },
});
}
fetchSpinner.stop(`Fetched ${moduleInfo.displayName}`);
needsDependencyInstall = true;
} catch (error) {
@@ -243,18 +319,19 @@ class CommunityModuleManager {
}
}
// If pinned to a specific SHA, check out that exact commit.
// Refuse to install if the approved SHA cannot be reached - security requirement.
if (sha) {
// ─── Check out the resolved ref per channel ──────────────────────────
if (planEntry.channel === 'stable' && approvedSha) {
// Default path: pin to the curator-approved SHA. Refuse install if the SHA
// is unreachable (tag may have been deleted or rewritten) — security requirement.
const headSha = execSync('git rev-parse HEAD', { cwd: moduleCacheDir, stdio: 'pipe' }).toString().trim();
if (headSha !== sha) {
if (headSha !== approvedSha) {
try {
execSync(`git fetch --depth 1 origin ${sha}`, {
execSync(`git fetch --depth 1 origin ${quoteShellRef(approvedSha)}`, {
cwd: moduleCacheDir,
stdio: ['ignore', 'pipe', 'pipe'],
env: { ...process.env, GIT_TERMINAL_PROMPT: '0' },
});
execSync(`git checkout ${sha}`, {
execSync(`git checkout ${quoteShellRef(approvedSha)}`, {
cwd: moduleCacheDir,
stdio: ['ignore', 'pipe', 'pipe'],
});
@@ -262,12 +339,37 @@ class CommunityModuleManager {
} catch {
await fs.remove(moduleCacheDir);
throw new Error(
`Community module '${moduleCode}' could not be pinned to its approved commit (${sha}). ` +
`Installation refused for security. The module registry entry may need updating.`,
`Community module '${moduleCode}' could not be pinned to its approved commit (${approvedSha}). ` +
`Installation refused for security. The module registry entry may need updating, ` +
`or use --next=${moduleCode} / --pin ${moduleCode}=<tag> to explicitly bypass.`,
);
}
}
} else if (planEntry.channel === 'stable' && !approvedSha) {
// Registry data gap: tag or SHA missing. Warn but proceed at HEAD (pre-existing behavior).
if (!silent) {
await prompts.log.warn(`Community module '${moduleCode}' has no curator-approved SHA in the registry; installing from main HEAD.`);
}
} else if (planEntry.channel === 'pinned') {
// Pinned: the clone/fetch above already checked out the tag (via --branch
// or FETCH_HEAD), so no additional checkout is needed here.
}
// else: 'next' channel — already at origin/HEAD from the fetch/reset above.
// Record the resolution so the manifest writer can pick up channel/version/sha.
const installedSha = execSync('git rev-parse HEAD', { cwd: moduleCacheDir, stdio: 'pipe' }).toString().trim();
const recordedVersion =
planEntry.channel === 'pinned' ? planEntry.pin : planEntry.channel === 'next' ? 'main' : approvedTag || installedSha.slice(0, 7);
CommunityModuleManager._resolutions.set(moduleCode, {
channel: planEntry.channel,
version: recordedVersion,
sha: installedSha,
registryApprovedTag: approvedTag || null,
registryApprovedSha: approvedSha || null,
repoUrl: moduleInfo.url,
bypassedCurator,
planSource: planEntry.source,
});
// Install dependencies if needed
const packageJsonPath = path.join(moduleCacheDir, 'package.json');
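The `recordedVersion` ternary above reduces to a small per-channel rule; a standalone sketch:

```javascript
// Manifest version per channel: pinned uses the pin tag, next records 'main',
// stable prefers the curator-approved tag and falls back to a short SHA.
function recordedVersion(channel, pin, approvedTag, installedSha) {
  if (channel === 'pinned') return pin;
  if (channel === 'next') return 'main';
  return approvedTag || installedSha.slice(0, 7); // stable
}

console.log(recordedVersion('pinned', 'v2.1.0', null, 'deadbeefcafe')); // v2.1.0
console.log(recordedVersion('next', null, 'v2.0.0', 'deadbeefcafe'));  // main
console.log(recordedVersion('stable', null, null, 'deadbeefcafe'));    // deadbee
```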

View File

@@ -4,6 +4,13 @@ const path = require('node:path');
const { execSync } = require('node:child_process');
const prompts = require('../prompts');
function quoteCustomRef(ref) {
if (typeof ref !== 'string' || !/^[\w.\-+/]+$/.test(ref)) {
throw new Error(`Unsafe ref name: ${JSON.stringify(ref)}`);
}
return `"${ref}"`;
}
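`quoteCustomRef` is an allow-list, not an escape routine: anything outside the ref character class is rejected outright. Exercising it (the function body is copied from above):

```javascript
// Copied from quoteCustomRef above: allow-list the ref, then wrap in quotes.
function quoteCustomRef(ref) {
  if (typeof ref !== 'string' || !/^[\w.\-+/]+$/.test(ref)) {
    throw new Error(`Unsafe ref name: ${JSON.stringify(ref)}`);
  }
  return `"${ref}"`;
}

console.log(quoteCustomRef('v1.2.3'));         // "v1.2.3"
console.log(quoteCustomRef('release/2024.1')); // slashes are fine for branch-style refs
try {
  quoteCustomRef('v1.0"; rm -rf ~');           // shell metacharacters: throws
} catch (error) {
  console.log(error.message);
}
```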
/**
* Manages custom modules installed from user-provided sources.
* Supports any Git host (GitHub, GitLab, Bitbucket, self-hosted) and local file paths.
@@ -38,8 +45,8 @@ class CustomModuleManager {
};
}
const trimmed = input.trim();
if (!trimmed) {
const trimmedRaw = input.trim();
if (!trimmedRaw) {
return {
type: null,
cloneUrl: null,
@@ -52,8 +59,53 @@ class CustomModuleManager {
};
}
// Extract optional @<tag-or-branch> suffix from the end of the input.
// Accepted ref characters: letters, digits, dot, hyphen, underscore, plus, slash.
// Raw commit SHAs are NOT supported here — `git clone --branch` can't take
// them; use --pin at the module level or check out the SHA manually.
// Only strip when the tail looks like a ref, so we don't disturb
// URLs without a version spec or the SSH protocol's `git@host:...` prefix.
let trimmed = trimmedRaw;
let versionSuffix = null;
const lastAt = trimmedRaw.lastIndexOf('@');
// `lastAt > 0` skips inputs that start with @; the checks below keep the @
// in `git@github.com:...` intact unless a ref-shaped tail follows a complete repo URL or path.
if (lastAt > 0) {
const candidate = trimmedRaw.slice(lastAt + 1);
const before = trimmedRaw.slice(0, lastAt);
// candidate must be ref-shaped and must not itself look like a URL / SSH host
if (/^[\w.\-+/]+$/.test(candidate) && !candidate.includes(':')) {
// Rule: the @ is a version suffix only if `before` looks like a complete
// URL or local path; this leaves the @ in `git@host:owner/repo` untouched.
const beforeLooksLikeRepo =
before.startsWith('/') ||
before.startsWith('./') ||
before.startsWith('../') ||
before.startsWith('~') ||
/^https?:\/\//i.test(before) ||
/^git@[^:]+:.+/.test(before);
if (beforeLooksLikeRepo) {
versionSuffix = candidate;
trimmed = before;
}
}
}
// Local path detection: starts with /, ./, ../, or ~
if (trimmed.startsWith('/') || trimmed.startsWith('./') || trimmed.startsWith('../') || trimmed.startsWith('~')) {
if (versionSuffix) {
return {
type: 'local',
cloneUrl: null,
subdir: null,
localPath: null,
cacheKey: null,
displayName: null,
isValid: false,
error: 'Local paths do not support @version suffixes',
};
}
return this._parseLocalPath(trimmed);
}
@@ -66,6 +118,8 @@ class CustomModuleManager {
cloneUrl: trimmed,
subdir: null,
localPath: null,
version: versionSuffix || null,
rawInput: trimmedRaw,
cacheKey: `${host}/${owner}/${repo}`,
displayName: `${owner}/${repo}`,
isValid: true,
@@ -79,29 +133,47 @@ class CustomModuleManager {
const [, host, owner, repo, remainder] = httpsMatch;
const cloneUrl = `https://${host}/${owner}/${repo}`;
let subdir = null;
let urlRef = null; // branch/tag extracted from /tree/<ref>/subdir
if (remainder) {
// Extract subdir from deep path patterns used by various Git hosts
const deepPathPatterns = [
/^\/(?:-\/)?tree\/[^/]+\/(.+)$/, // GitHub /tree/branch/path, GitLab /-/tree/branch/path
/^\/(?:-\/)?blob\/[^/]+\/(.+)$/, // /blob/branch/path (treat same as tree)
/^\/src\/[^/]+\/(.+)$/, // Gitea/Forgejo /src/branch/path
{ regex: /^\/(?:-\/)?tree\/([^/]+)\/(.+)$/, refIdx: 1, pathIdx: 2 }, // GitHub, GitLab
{ regex: /^\/(?:-\/)?blob\/([^/]+)\/(.+)$/, refIdx: 1, pathIdx: 2 },
{ regex: /^\/src\/([^/]+)\/(.+)$/, refIdx: 1, pathIdx: 2 }, // Gitea/Forgejo
];
// Also match `/tree/<ref>` with no subdir
const refOnlyPatterns = [/^\/(?:-\/)?tree\/([^/]+?)\/?$/, /^\/(?:-\/)?blob\/([^/]+?)\/?$/, /^\/src\/([^/]+?)\/?$/];
for (const pattern of deepPathPatterns) {
const match = remainder.match(pattern);
for (const p of deepPathPatterns) {
const match = remainder.match(p.regex);
if (match) {
subdir = match[1].replace(/\/$/, ''); // strip trailing slash
urlRef = match[p.refIdx];
subdir = match[p.pathIdx].replace(/\/$/, '');
break;
}
}
if (!subdir) {
for (const r of refOnlyPatterns) {
const match = remainder.match(r);
if (match) {
urlRef = match[1];
break;
}
}
}
}
// Precedence: explicit @version suffix > URL /tree/<ref> path segment.
const version = versionSuffix || urlRef || null;
return {
type: 'url',
cloneUrl,
subdir,
localPath: null,
version,
rawInput: trimmedRaw,
cacheKey: `${host}/${owner}/${repo}`,
displayName: `${owner}/${repo}`,
isValid: true,
@@ -255,6 +327,10 @@ class CustomModuleManager {
const silent = options.silent || false;
const displayName = parsed.displayName;
// Pin override: --pin CODE=TAG resolved at module-selection time overrides
// any @version suffix present in the URL.
const effectiveVersion = options.pinOverride || parsed.version || null;
await fs.ensureDir(path.dirname(repoCacheDir));
const createSpinner = async () => {
@@ -264,8 +340,23 @@ class CustomModuleManager {
return await prompts.spinner();
};
// If an existing cache exists but was cloned at a different version, re-clone.
// Tracked via .bmad-source.json's recorded version.
if (await fs.pathExists(repoCacheDir)) {
// Update existing clone
let cachedVersion = null;
try {
const existing = await fs.readJson(path.join(repoCacheDir, '.bmad-source.json'));
cachedVersion = existing?.version || null;
} catch {
// no metadata; treat as mismatched to be safe if a version was requested
}
if ((effectiveVersion || null) !== (cachedVersion || null)) {
await fs.remove(repoCacheDir);
}
}
if (await fs.pathExists(repoCacheDir)) {
// Update existing clone (same version as before)
const fetchSpinner = await createSpinner();
fetchSpinner.start(`Updating ${displayName}...`);
try {
@@ -274,10 +365,25 @@ class CustomModuleManager {
stdio: ['ignore', 'pipe', 'pipe'],
env: { ...process.env, GIT_TERMINAL_PROMPT: '0' },
});
if (effectiveVersion) {
// Fetch the ref as either a tag or a branch — `origin <ref>` works
// for both, whereas `origin tag <ref>` fails for branch refs parsed
// out of /tree/<branch>/... URLs.
execSync(`git fetch --depth 1 origin ${quoteCustomRef(effectiveVersion)} --no-tags`, {
cwd: repoCacheDir,
stdio: ['ignore', 'pipe', 'pipe'],
env: { ...process.env, GIT_TERMINAL_PROMPT: '0' },
});
execSync(`git checkout --quiet FETCH_HEAD`, {
cwd: repoCacheDir,
stdio: ['ignore', 'pipe', 'pipe'],
});
} else {
execSync('git reset --hard origin/HEAD', {
cwd: repoCacheDir,
stdio: ['ignore', 'pipe', 'pipe'],
});
}
fetchSpinner.stop(`Updated ${displayName}`);
} catch {
fetchSpinner.error(`Update failed, re-downloading ${displayName}`);
@@ -287,25 +393,44 @@ class CustomModuleManager {
if (!(await fs.pathExists(repoCacheDir))) {
const fetchSpinner = await createSpinner();
fetchSpinner.start(`Cloning ${displayName}...`);
fetchSpinner.start(`Cloning ${displayName}${effectiveVersion ? ` @ ${effectiveVersion}` : ''}...`);
try {
if (effectiveVersion) {
execSync(`git clone --depth 1 --branch ${quoteCustomRef(effectiveVersion)} "${parsed.cloneUrl}" "${repoCacheDir}"`, {
stdio: ['ignore', 'pipe', 'pipe'],
env: { ...process.env, GIT_TERMINAL_PROMPT: '0' },
});
} else {
execSync(`git clone --depth 1 "${parsed.cloneUrl}" "${repoCacheDir}"`, {
stdio: ['ignore', 'pipe', 'pipe'],
env: { ...process.env, GIT_TERMINAL_PROMPT: '0' },
});
}
fetchSpinner.stop(`Cloned ${displayName}`);
} catch (error_) {
fetchSpinner.error(`Failed to clone ${displayName}`);
throw new Error(`Failed to clone ${parsed.cloneUrl}: ${error_.message}`);
const refSuffix = effectiveVersion ? `@${effectiveVersion}` : '';
throw new Error(`Failed to clone ${parsed.cloneUrl}${refSuffix}: ${error_.message}`);
}
}
// Record the resolved SHA for the manifest writer.
let resolvedSha = null;
try {
resolvedSha = execSync('git rev-parse HEAD', { cwd: repoCacheDir, stdio: 'pipe' }).toString().trim();
} catch {
// swallow — a non-git repo (local path) wouldn't reach here anyway
}
// Write source metadata for later URL reconstruction
const metadataPath = path.join(repoCacheDir, '.bmad-source.json');
await fs.writeJson(metadataPath, {
cloneUrl: parsed.cloneUrl,
cacheKey: parsed.cacheKey,
displayName: parsed.displayName,
version: effectiveVersion || null,
rawInput: parsed.rawInput || sourceInput,
sha: resolvedSha,
clonedAt: new Date().toISOString(),
});
@@ -346,10 +471,26 @@ class CustomModuleManager {
const resolver = new PluginResolver();
const resolved = await resolver.resolve(repoPath, plugin);
// Read clone metadata (written by cloneRepo) so we can pick up the
// resolved git ref + SHA for manifest recording.
let cloneMetadata = null;
if (sourceUrl) {
try {
cloneMetadata = await fs.readJson(path.join(repoPath, '.bmad-source.json'));
} catch {
// no metadata — local-source or legacy cache
}
}
// Stamp source info onto each resolved module for manifest tracking
for (const mod of resolved) {
if (sourceUrl) mod.repoUrl = sourceUrl;
if (localPath) mod.localPath = localPath;
if (cloneMetadata) {
mod.cloneRef = cloneMetadata.version || null;
mod.cloneSha = cloneMetadata.sha || null;
mod.rawInput = cloneMetadata.rawInput || null;
}
CustomModuleManager._resolutionCache.set(mod.code, mod);
}
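The @-suffix split implemented in `parseSource` above behaves like this standalone sketch (same regexes and precedence, minus the URL/subdir handling):

```javascript
// Split an optional trailing @<ref> off a repo source string, leaving the
// @ in SSH-style `git@host:...` alone unless a ref-shaped tail follows it.
function splitVersionSuffix(trimmedRaw) {
  let trimmed = trimmedRaw;
  let versionSuffix = null;
  const lastAt = trimmedRaw.lastIndexOf('@');
  if (lastAt > 0) {
    const candidate = trimmedRaw.slice(lastAt + 1);
    const before = trimmedRaw.slice(0, lastAt);
    if (/^[\w.\-+/]+$/.test(candidate) && !candidate.includes(':')) {
      const beforeLooksLikeRepo =
        before.startsWith('/') || before.startsWith('./') || before.startsWith('../') ||
        before.startsWith('~') || /^https?:\/\//i.test(before) || /^git@[^:]+:.+/.test(before);
      if (beforeLooksLikeRepo) {
        versionSuffix = candidate;
        trimmed = before;
      }
    }
  }
  return { trimmed, versionSuffix };
}

console.log(splitVersionSuffix('https://github.com/acme/mod@v1.2.3'));
console.log(splitVersionSuffix('git@github.com:acme/mod'));    // SSH @ is left alone
console.log(splitVersionSuffix('git@github.com:acme/mod@v2')); // ...but a trailing ref still splits
```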

View File

@@ -5,6 +5,46 @@
const yaml = require('yaml');
const prompts = require('../prompts');
const { RegistryClient } = require('./registry-client');
const { resolveChannel, tagExists, parseGitHubRepo } = require('./channel-resolver');
const { decideChannelForModule } = require('./channel-plan');
const VALID_CHANNELS = new Set(['stable', 'next', 'pinned']);
function normalizeChannelName(raw) {
if (typeof raw !== 'string') return null;
const lower = raw.trim().toLowerCase();
return VALID_CHANNELS.has(lower) ? lower : null;
}
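`normalizeChannelName` tolerates case and surrounding whitespace but maps anything unrecognized to `null` rather than guessing (the body is copied from above):

```javascript
// Copied from normalizeChannelName above: case/whitespace tolerant lookup.
const VALID_CHANNELS = new Set(['stable', 'next', 'pinned']);

function normalizeChannelName(raw) {
  if (typeof raw !== 'string') return null;
  const lower = raw.trim().toLowerCase();
  return VALID_CHANNELS.has(lower) ? lower : null;
}

console.log(normalizeChannelName('  Stable ')); // stable
console.log(normalizeChannelName('beta'));      // null: unknown names are rejected
console.log(normalizeChannelName(undefined));   // null: non-strings are rejected
```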
/**
 * Conservative quoting for ref names passed to git commands. Refs are
 * user-typed (--pin) or come from the GitHub API. Allow only word characters,
 * dots, hyphens, pluses, and slashes (enough for semver tags and
 * branch-style refs); anything else throws.
*/
function quoteShell(ref) {
if (typeof ref !== 'string' || !/^[\w.\-+/]+$/.test(ref)) {
throw new Error(`Unsafe ref name: ${JSON.stringify(ref)}`);
}
return `"${ref}"`;
}
async function readChannelMarker(markerPath) {
try {
if (!(await fs.pathExists(markerPath))) return null;
const content = await fs.readFile(markerPath, 'utf8');
return JSON.parse(content);
} catch {
return null;
}
}
async function writeChannelMarker(markerPath, data) {
try {
await fs.writeFile(markerPath, JSON.stringify({ ...data, writtenAt: new Date().toISOString() }, null, 2));
} catch {
// Best-effort: marker is an optimization, not a correctness requirement.
}
}
const MARKETPLACE_OWNER = 'bmad-code-org';
const MARKETPLACE_REPO = 'bmad-plugins-marketplace';
@@ -19,10 +59,25 @@ const FALLBACK_CONFIG_PATH = path.join(__dirname, 'registry-fallback.yaml');
* @class ExternalModuleManager
*/
class ExternalModuleManager {
// moduleCode → { channel, version, ref, sha, repoUrl, resolvedFallback }
// Populated when cloneExternalModule resolves a channel. Shared across all
// instances so the manifest writer (which often instantiates a fresh
// ExternalModuleManager) sees resolutions made during install.
static _resolutions = new Map();
constructor() {
this._client = new RegistryClient();
}
/**
* Get the most recent channel resolution for a module (if any).
* @param {string} moduleCode
* @returns {Object|null}
*/
getResolution(moduleCode) {
return ExternalModuleManager._resolutions.get(moduleCode) || null;
}
/**
* Load the official modules registry from GitHub, falling back to the
* bundled YAML file if the fetch fails.
@@ -75,6 +130,7 @@ class ExternalModuleManager {
defaultSelected: mod.default_selected === true || mod.defaultSelected === true,
type: mod.type || 'bmad-org',
npmPackage: mod.npm_package || mod.npmPackage || null,
defaultChannel: normalizeChannelName(mod.default_channel || mod.defaultChannel) || 'stable',
builtIn: mod.built_in === true,
isExternal: mod.built_in !== true,
};
@@ -120,10 +176,15 @@ class ExternalModuleManager {
}
/**
* Clone an external module repository to cache
* Clone an external module repository to cache, resolving the requested
* channel (stable / next / pinned) to a concrete git ref.
*
* @param {string} moduleCode - Code of the external module
* @param {Object} options - Clone options
* @param {boolean} options.silent - Suppress spinner output
* @param {boolean} [options.silent] - Suppress spinner output
* @param {Object} [options.channelOptions] - Parsed channel flags. See
* modules/channel-plan.js. When absent, the module installs on its
* registry-declared default channel (typically 'stable').
* @returns {string} Path to the cloned repository
*/
async cloneExternalModule(moduleCode, options = {}) {
@@ -161,18 +222,132 @@ class ExternalModuleManager {
return await prompts.spinner();
};
// Track if we need to install dependencies
// ─── Resolve channel plan ─────────────────────────────────────────────
// Post-install callers (config generation, directory setup, help catalog
// rebuild) invoke findModuleSource/cloneExternalModule without
// channelOptions just to locate the module's files. Those calls must not
// redecide the channel — the install step already chose one, cloned the
// right ref, and recorded a resolution. If we re-resolve without flags,
// we'd snap back to stable and overwrite a pinned install.
const hasExplicitChannelInput =
options.channelOptions &&
(options.channelOptions.global ||
(options.channelOptions.nextSet && options.channelOptions.nextSet.size > 0) ||
(options.channelOptions.pins && options.channelOptions.pins.size > 0));
const existingResolution = ExternalModuleManager._resolutions.get(moduleCode);
const haveUsableCache = await fs.pathExists(moduleCacheDir);
if (!hasExplicitChannelInput && existingResolution && haveUsableCache) {
// This is a look-up only; the module is already installed at its chosen
// ref. Skip cloning and return the cached path unchanged.
return moduleCacheDir;
}
const planEntry = decideChannelForModule({
code: moduleCode,
channelOptions: options.channelOptions,
registryDefault: moduleInfo.defaultChannel,
});
// Same-plan short-circuit: a single install calls cloneExternalModule
// several times (config collection, directory setup, help-catalog rebuild)
// with the same channelOptions. The first call resolves + clones; later
// calls with an identical plan and a valid cache should return immediately
// instead of re-running resolveChannel() and `git fetch` (slow; can fail
// on flaky networks even though the tagCache dedupes the GitHub API hit).
if (existingResolution && haveUsableCache && existingResolution.channel === planEntry.channel) {
const samePin = planEntry.channel !== 'pinned' || existingResolution.version === planEntry.pin;
if (samePin) return moduleCacheDir;
}
let resolved;
try {
resolved = await resolveChannel({
channel: planEntry.channel,
pin: planEntry.pin,
repoUrl: moduleInfo.url,
});
} catch (error) {
// Tag-API failure (rate limit, transient network). If we already have
// a usable cache at a recorded ref, treat this as "couldn't check for
// updates" and re-use the cached version silently — that's the right
// call for an update/quick-update, since the semantics don't change
// and the user isn't worse off than before they ran this command.
const cachedMarker = await readChannelMarker(path.join(moduleCacheDir, '.bmad-channel.json'));
if (cachedMarker?.channel && (await fs.pathExists(moduleCacheDir))) {
if (!silent) {
await prompts.log.warn(
`Could not check for updates to ${moduleInfo.name} (${error.message}); using cached ${cachedMarker.version || cachedMarker.channel}.`,
);
}
ExternalModuleManager._resolutions.set(moduleCode, {
channel: cachedMarker.channel,
version: cachedMarker.version || 'main',
ref: cachedMarker.version && cachedMarker.version !== 'main' ? cachedMarker.version : null,
sha: cachedMarker.sha,
repoUrl: moduleInfo.url,
resolvedFallback: false,
planSource: 'cached',
});
return moduleCacheDir;
}
// No cache to fall back on — this is effectively a fresh install with
// no offline safety net. Surface a clear error with actionable guidance.
const isRateLimited = /rate limit/i.test(error.message);
const hint = isRateLimited
? process.env.GITHUB_TOKEN
? 'Your GITHUB_TOKEN may have expired or been rate-limited on its own budget. Try a different token or wait for the reset.'
: 'Set a GITHUB_TOKEN env var (any personal access token with public-repo read) to raise the 60-req/hour anonymous limit.'
: `Check your network connection, or rerun with \`--next=${moduleCode}\` / \`--pin ${moduleCode}=<tag>\` to skip the tag lookup.`;
throw new Error(`Could not resolve stable tag for '${moduleCode}' (${error.message}). ${hint}`);
}
if (resolved.resolvedFallback && !silent) {
if (resolved.reason === 'no-stable-tags') {
await prompts.log.warn(`No stable releases found for ${moduleInfo.name}; installing from main.`);
} else if (resolved.reason === 'not-a-github-url') {
await prompts.log.warn(`Cannot determine stable tags for ${moduleInfo.name} (non-GitHub URL); installing from main.`);
}
}
// Validate pin before we burn time cloning. Best-effort: skip on non-GitHub URLs.
if (planEntry.channel === 'pinned') {
const parsed = parseGitHubRepo(moduleInfo.url);
if (parsed) {
try {
const exists = await tagExists(parsed.owner, parsed.repo, planEntry.pin);
if (!exists) {
throw new Error(`Tag '${planEntry.pin}' not found in ${parsed.owner}/${parsed.repo}.`);
}
} catch (error) {
if (error.message?.includes('not found')) throw error;
// Network hiccup on tag verification — let the clone attempt fail clearly.
}
}
}
// ─── Clone or update cache by resolved channel ────────────────────────
const markerPath = path.join(moduleCacheDir, '.bmad-channel.json');
const currentMarker = await readChannelMarker(markerPath);
const needsChannelReset = currentMarker && currentMarker.channel !== resolved.channel;
let needsDependencyInstall = false;
let wasNewClone = false;
// Check if already cloned
if (needsChannelReset && (await fs.pathExists(moduleCacheDir))) {
// Channel changed (e.g. user switched stable→next). Blow away and re-clone
// to avoid tangling shallow clones of different refs.
await fs.remove(moduleCacheDir);
}
if (await fs.pathExists(moduleCacheDir)) {
// Try to update if it's a git repo
// Cache exists on the right channel. Refresh the ref.
const fetchSpinner = await createSpinner();
fetchSpinner.start(`Fetching ${moduleInfo.name}...`);
try {
const currentRef = execSync('git rev-parse HEAD', { cwd: moduleCacheDir, stdio: 'pipe' }).toString().trim();
// Fetch and reset to remote - works better with shallow clones than pull
const currentSha = execSync('git rev-parse HEAD', { cwd: moduleCacheDir, stdio: 'pipe' }).toString().trim();
if (resolved.channel === 'next') {
execSync('git fetch origin --depth 1', {
cwd: moduleCacheDir,
stdio: ['ignore', 'pipe', 'pipe'],
@@ -183,16 +358,24 @@ class ExternalModuleManager {
stdio: ['ignore', 'pipe', 'pipe'],
env: { ...process.env, GIT_TERMINAL_PROMPT: '0' },
});
const newRef = execSync('git rev-parse HEAD', { cwd: moduleCacheDir, stdio: 'pipe' }).toString().trim();
fetchSpinner.stop(`Fetched ${moduleInfo.name}`);
// Force dependency install if we got new code
if (currentRef !== newRef) {
needsDependencyInstall = true;
} else {
// stable or pinned — fetch the specific tag and check it out.
execSync(`git fetch --depth 1 origin tag ${quoteShell(resolved.ref)} --no-tags`, {
cwd: moduleCacheDir,
stdio: ['ignore', 'pipe', 'pipe'],
env: { ...process.env, GIT_TERMINAL_PROMPT: '0' },
});
execSync(`git checkout --quiet FETCH_HEAD`, {
cwd: moduleCacheDir,
stdio: ['ignore', 'pipe', 'pipe'],
});
}
const newSha = execSync('git rev-parse HEAD', { cwd: moduleCacheDir, stdio: 'pipe' }).toString().trim();
fetchSpinner.stop(`Fetched ${moduleInfo.name}`);
if (currentSha !== newSha) needsDependencyInstall = true;
} catch {
fetchSpinner.error(`Fetch failed, re-downloading ${moduleInfo.name}`);
// If update fails, remove and re-clone
await fs.remove(moduleCacheDir);
wasNewClone = true;
}
@@ -200,22 +383,41 @@ class ExternalModuleManager {
wasNewClone = true;
}
// Clone if not exists or was removed
if (wasNewClone) {
const fetchSpinner = await createSpinner();
fetchSpinner.start(`Fetching ${moduleInfo.name}...`);
try {
if (resolved.channel === 'next') {
execSync(`git clone --depth 1 "${moduleInfo.url}" "${moduleCacheDir}"`, {
stdio: ['ignore', 'pipe', 'pipe'],
env: { ...process.env, GIT_TERMINAL_PROMPT: '0' },
});
} else {
execSync(`git clone --depth 1 --branch ${quoteShell(resolved.ref)} "${moduleInfo.url}" "${moduleCacheDir}"`, {
stdio: ['ignore', 'pipe', 'pipe'],
env: { ...process.env, GIT_TERMINAL_PROMPT: '0' },
});
}
fetchSpinner.stop(`Fetched ${moduleInfo.name}`);
} catch (error) {
fetchSpinner.error(`Failed to fetch ${moduleInfo.name}`);
throw new Error(`Failed to clone external module '${moduleCode}': ${error.message}`);
throw new Error(`Failed to clone external module '${moduleCode}' at ${resolved.version}: ${error.message}`);
}
}
// Record resolution (channel + tag + SHA) for the manifest writer to pick up.
const sha = execSync('git rev-parse HEAD', { cwd: moduleCacheDir, stdio: 'pipe' }).toString().trim();
ExternalModuleManager._resolutions.set(moduleCode, {
channel: resolved.channel,
version: resolved.version,
ref: resolved.ref,
sha,
repoUrl: moduleInfo.url,
resolvedFallback: !!resolved.resolvedFallback,
planSource: planEntry.source,
});
await writeChannelMarker(markerPath, { channel: resolved.channel, version: resolved.version, sha });
// Install dependencies if package.json exists
const packageJsonPath = path.join(moduleCacheDir, 'package.json');
const nodeModulesPath = path.join(moduleCacheDir, 'node_modules');

View File

@@ -15,6 +15,11 @@ class OfficialModules {
// Tracked during interactive config collection so {directory_name}
// placeholder defaults can be resolved in buildQuestion().
this.currentProjectDir = null;
// Install-time channel flag state. Set by Config.build once, then used as
// the default for every findModuleSource/cloneExternalModule call so that
// pre-install config collection and the install step agree on which ref
// to clone.
this.channelOptions = options.channelOptions || null;
}
/**
@@ -38,7 +43,7 @@ class OfficialModules {
* @returns {OfficialModules}
*/
static async build(config, paths) {
const instance = new OfficialModules();
const instance = new OfficialModules({ channelOptions: config.channelOptions });
// Pre-collected by UI or quickUpdate — store and load existing for path-change detection
if (config.moduleConfigs) {
@@ -196,6 +201,12 @@ class OfficialModules {
* @returns {string|null} Path to the module source or null if not found
*/
async findModuleSource(moduleCode, options = {}) {
// Inherit channelOptions from the install-scoped instance when the caller
// didn't pass one explicitly. Keeps pre-install config collection and the
// actual install step looking at the same git ref.
if (options.channelOptions === undefined && this.channelOptions) {
options = { ...options, channelOptions: this.channelOptions };
}
const projectRoot = getProjectRoot();
// Check for core module (directly under src/core-skills)
@@ -214,13 +225,13 @@ class OfficialModules {
}
}
// Check external official modules
// Check external official modules (pass channelOptions so channel plan applies)
const externalSource = await this.externalModuleManager.findExternalModuleSource(moduleCode, options);
if (externalSource) {
return externalSource;
}
// Check community modules
// Check community modules (pass channelOptions for --next/--pin overrides)
const { CommunityModuleManager } = require('./community-manager');
const communityMgr = new CommunityModuleManager();
const communitySource = await communityMgr.findModuleSource(moduleCode, options);
@@ -258,7 +269,10 @@ class OfficialModules {
return this.installFromResolution(resolved, bmadDir, fileTrackingCallback, options);
}
const sourcePath = await this.findModuleSource(moduleName, { silent: options.silent });
const sourcePath = await this.findModuleSource(moduleName, {
silent: options.silent,
channelOptions: options.channelOptions,
});
const targetPath = path.join(bmadDir, moduleName);
if (!sourcePath) {
@@ -281,11 +295,24 @@ class OfficialModules {
const manifestObj = new Manifest();
const versionInfo = await manifestObj.getModuleVersionInfo(moduleName, bmadDir, sourcePath);
// Pick up channel resolution recorded by whichever manager did the clone.
const externalResolution = this.externalModuleManager.getResolution(moduleName);
let communityResolution = null;
if (!externalResolution) {
const { CommunityModuleManager } = require('./community-manager');
communityResolution = new CommunityModuleManager().getResolution(moduleName);
}
const resolution = externalResolution || communityResolution;
await manifestObj.addModule(bmadDir, moduleName, {
version: versionInfo.version,
version: resolution?.version || versionInfo.version,
source: versionInfo.source,
npmPackage: versionInfo.npmPackage,
repoUrl: versionInfo.repoUrl,
channel: resolution?.channel,
sha: resolution?.sha,
registryApprovedTag: communityResolution?.registryApprovedTag,
registryApprovedSha: communityResolution?.registryApprovedSha,
});
return { success: true, module: moduleName, path: targetPath, versionInfo };
@@ -333,18 +360,37 @@ class OfficialModules {
await this.createModuleDirectories(resolved.code, bmadDir, options);
}
// Update manifest
// Update manifest. For custom modules, derive channel from the git ref:
// cloneRef present → pinned at that ref
// cloneRef absent → next (main HEAD)
// local path → no channel concept
const { Manifest } = require('../core/manifest');
const manifestObj = new Manifest();
await manifestObj.addModule(bmadDir, resolved.code, {
version: resolved.version || null,
const hasGitClone = !!resolved.repoUrl;
const manifestEntry = {
version: resolved.cloneRef || (hasGitClone ? 'main' : resolved.version || null),
source: 'custom',
npmPackage: null,
repoUrl: resolved.repoUrl || null,
});
};
if (hasGitClone) {
manifestEntry.channel = resolved.cloneRef ? 'pinned' : 'next';
if (resolved.cloneSha) manifestEntry.sha = resolved.cloneSha;
if (resolved.rawInput) manifestEntry.rawSource = resolved.rawInput;
}
if (resolved.localPath) manifestEntry.localPath = resolved.localPath;
await manifestObj.addModule(bmadDir, resolved.code, manifestEntry);
return { success: true, module: resolved.code, path: targetPath, versionInfo: { version: resolved.version || '' } };
return {
success: true,
module: resolved.code,
path: targetPath,
// Match the manifestEntry.version expression above so downstream summary
// lines show the cloned ref (tag or 'main') instead of the on-disk
// package.json version for git-backed custom installs.
versionInfo: { version: resolved.cloneRef || (hasGitClone ? 'main' : resolved.version || '') },
};
}
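The manifest-entry branch above can be read as a small pure function. The sketch below mirrors the fields the diff uses (`repoUrl`, `cloneRef`, `cloneSha`, `version`, `localPath`, `rawInput`); it is an illustration of the derivation rules, not the shipped code.

```javascript
// Sketch: derive a custom module's manifest entry from its resolved source.
// A git clone with a ref is 'pinned'; a clone without one tracks 'next'
// (main HEAD); a local path has no channel concept at all.
function buildCustomManifestEntry(resolved) {
  const hasGitClone = Boolean(resolved.repoUrl);
  const entry = {
    // Git installs record the cloned ref (tag) or 'main'; local installs
    // fall back to the on-disk package.json version.
    version: resolved.cloneRef || (hasGitClone ? 'main' : resolved.version || null),
    source: 'custom',
    npmPackage: null,
    repoUrl: resolved.repoUrl || null,
  };
  if (hasGitClone) {
    entry.channel = resolved.cloneRef ? 'pinned' : 'next';
    if (resolved.cloneSha) entry.sha = resolved.cloneSha;
    if (resolved.rawInput) entry.rawSource = resolved.rawInput;
  }
  if (resolved.localPath) entry.localPath = resolved.localPath;
  return entry;
}
```

A pinned clone therefore records both the tag and the SHA, while a local-path install carries neither a channel nor a SHA.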
/**

View File

@@ -1,6 +1,10 @@
# Fallback module registry — used only when the BMad Marketplace repo
# (bmad-code-org/bmad-plugins-marketplace) is unreachable.
# The remote registry/official.yaml is the source of truth.
#
# default_channel (optional) — the install channel when the user does not
# override with --channel/--pin/--next. Valid values: stable | next.
# Omit to inherit the installer's hardcoded default (stable).
modules:
bmad-builder:
@@ -12,6 +16,7 @@ modules:
defaultSelected: false
type: bmad-org
npmPackage: bmad-builder
default_channel: stable
bmad-creative-intelligence-suite:
url: https://github.com/bmad-code-org/bmad-module-creative-intelligence-suite
@@ -22,6 +27,7 @@ modules:
defaultSelected: false
type: bmad-org
npmPackage: bmad-creative-intelligence-suite
default_channel: stable
bmad-game-dev-studio:
url: https://github.com/bmad-code-org/bmad-module-game-dev-studio.git
@@ -32,6 +38,7 @@ modules:
defaultSelected: false
type: bmad-org
npmPackage: bmad-game-dev-studio
default_channel: stable
bmad-method-test-architecture-enterprise:
url: https://github.com/bmad-code-org/bmad-method-test-architecture-enterprise
@@ -42,3 +49,4 @@ modules:
defaultSelected: false
type: bmad-org
npmPackage: bmad-method-test-architecture-enterprise
default_channel: stable
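The registry comment above describes a fallback order for the install channel. A minimal sketch of that precedence, with hypothetical names (the real resolver lives in `channel-plan.js` and is not reproduced here):

```javascript
// Sketch: CLI channel flags win, then the module's default_channel from the
// registry, then the installer's hardcoded default ('stable'). Unknown
// registry values are ignored rather than trusted.
function effectiveChannel({ cliChannel = null, registryDefault = null } = {}) {
  if (cliChannel) return cliChannel;
  if (registryDefault === 'stable' || registryDefault === 'next') return registryDefault;
  return 'stable'; // installer default when nothing else applies
}
```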

View File

@@ -1,19 +1,107 @@
const path = require('node:path');
const os = require('node:os');
const semver = require('semver');
const fs = require('./fs-native');
const { CLIUtils } = require('./cli-utils');
const { ExternalModuleManager } = require('./modules/external-manager');
const { resolveModuleVersion } = require('./modules/version-resolver');
const { Manifest } = require('./core/manifest');
const {
parseChannelOptions,
buildPlan,
decideChannelForModule,
orphanPinWarnings,
bundledTargetWarnings,
} = require('./modules/channel-plan');
const channelResolver = require('./modules/channel-resolver');
const prompts = require('./prompts');
const manifest = new Manifest();
/**
* Read a module version from the freshest local metadata available.
* @param {string} moduleCode - Module code (e.g., 'core', 'bmm', 'cis')
* @returns {string} Version string or empty string
* Format a resolved version for display in installer labels.
* Semver-like values are normalized to a single leading "v".
* @param {string|null|undefined} version
* @returns {string}
*/
async function getModuleVersion(moduleCode) {
function formatDisplayVersion(version) {
const trimmed = typeof version === 'string' ? version.trim() : '';
if (!trimmed) return '';
const normalized = semver.valid(semver.coerce(trimmed));
if (normalized) {
return `v${normalized}`;
}
return trimmed;
}
/**
* Build the display label for a module, showing an upgrade arrow when an
* installed semver differs from the latest resolvable semver.
* @param {string} name
* @param {string} latestVersion
* @param {string} installedVersion
* @returns {string}
*/
function buildModuleLabel(name, latestVersion, installedVersion = '') {
const latestDisplay = formatDisplayVersion(latestVersion);
if (!latestDisplay) return name;
const installedDisplay = formatDisplayVersion(installedVersion);
const latestSemver = semver.valid(semver.coerce(latestVersion || ''));
const installedSemver = semver.valid(semver.coerce(installedVersion || ''));
if (installedDisplay && latestSemver && installedSemver && semver.neq(installedSemver, latestSemver)) {
return `${name} (${installedDisplay} → ${latestDisplay})`;
}
return `${name} (${latestDisplay})`;
}
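`buildModuleLabel` leans on the `semver` package; the standalone sketch below reimplements the same label rules with a naive version normalizer so it runs without dependencies. It is an illustration of the behavior, not the shipped helper.

```javascript
// Naive stand-in for semver.valid(semver.coerce(...)): pull out up to three
// numeric components and pad to MAJOR.MINOR.PATCH with a leading 'v'.
function normalize(v) {
  const m = /\d+(?:\.\d+){0,2}/.exec(String(v || ''));
  if (!m) return '';
  const parts = m[0].split('.');
  while (parts.length < 3) parts.push('0');
  return `v${parts.join('.')}`;
}

// Label rules: no latest version -> bare name; installed differs from
// latest -> upgrade arrow; otherwise just the latest version.
function moduleLabel(name, latest, installed = '') {
  const latestDisplay = normalize(latest) || (latest ? String(latest).trim() : '');
  if (!latestDisplay) return name;
  const installedDisplay = normalize(installed);
  if (installedDisplay && normalize(latest) && installedDisplay !== normalize(latest)) {
    return `${name} (${installedDisplay} → ${latestDisplay})`;
  }
  return `${name} (${latestDisplay})`;
}
```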
/**
* Resolve the version to show for a module picker entry. External modules use
* the same channel/tag resolver as installs; bundled modules fall back to local
* source metadata.
* @param {string} moduleCode - Module code (e.g., 'core', 'bmm', 'cis')
* @param {Object} options
* @param {string|null} [options.repoUrl] - Module repository URL for tag resolution
* @param {string|null} [options.registryDefault] - Registry default channel
* @param {Object|null} [options.channelOptions] - Parsed installer channel options
* @returns {Promise<{version: string, lookupAttempted: boolean, lookupSucceeded: boolean}>}
*/
async function getModuleVersion(moduleCode, { repoUrl = null, registryDefault = null, channelOptions = null } = {}) {
if (repoUrl) {
const plan = decideChannelForModule({
code: moduleCode,
channelOptions,
registryDefault,
});
try {
const resolved = await channelResolver.resolveChannel({
channel: plan.channel,
pin: plan.pin,
repoUrl,
});
if (resolved?.version) {
return {
version: resolved.version,
lookupAttempted: plan.channel === 'stable',
lookupSucceeded: true,
};
}
} catch {
// Fall back to local metadata when tag resolution is unavailable.
}
}
const versionInfo = await resolveModuleVersion(moduleCode);
return versionInfo.version || '';
return {
version: versionInfo.version || '',
lookupAttempted: !!repoUrl,
lookupSucceeded: false,
};
}
/**
@@ -33,6 +121,13 @@ class UI {
const messageLoader = new MessageLoader();
await messageLoader.displayStartMessage();
// Parse channel flags (--channel/--all-*/--next=/--pin) once. Warnings
// are surfaced immediately so the user sees them before any git ops run.
const channelOptions = parseChannelOptions(options);
for (const warning of channelOptions.warnings) {
await prompts.log.warn(warning);
}
// Get directory from options or prompt
let confirmedDirectory;
if (options.directory) {
@@ -114,7 +209,7 @@ class UI {
// Return early with modify configuration
if (actionType === 'update') {
// Get existing installation info
const { installedModuleIds } = await this.getExistingInstallation(confirmedDirectory);
const { installedModuleIds, installedModuleVersions } = await this.getExistingInstallation(confirmedDirectory);
await prompts.log.message(`Found existing modules: ${[...installedModuleIds].join(', ')}`);
@@ -136,7 +231,7 @@ class UI {
`Non-interactive mode (--yes): using default modules (installed + defaults): ${selectedModules.join(', ')}`,
);
} else {
selectedModules = await this.selectAllModules(installedModuleIds);
selectedModules = await this.selectAllModules(installedModuleIds, installedModuleVersions, channelOptions);
}
// Resolve custom sources from --custom-source flag
@@ -152,10 +247,38 @@ class UI {
selectedModules.unshift('core');
}
// For existing installs, resolve per-module update decisions BEFORE
// we clone anything. Reads the existing manifest's recorded channel
// per module and prompts the user on available upgrades (patch/minor
// default Y, major default N). Legacy entries with no channel are
// migrated here too. Mutates channelOptions.pins to lock rejections.
await this._resolveUpdateChannels({
bmadDir,
selectedModules,
channelOptions,
yes: options.yes || false,
});
// Get tool selection
const toolSelection = await this.promptToolSelection(confirmedDirectory, options);
const moduleConfigs = await this.collectModuleConfigs(confirmedDirectory, selectedModules, options);
const moduleConfigs = await this.collectModuleConfigs(confirmedDirectory, selectedModules, {
...options,
channelOptions,
});
// Warn about --pin/--next flags that refer to modules the user didn't
// select, or that target bundled modules (core/bmm) where channel
// flags don't apply.
{
const bundledCodes = await this._bundledModuleCodes();
for (const warning of [
...orphanPinWarnings(channelOptions, selectedModules),
...bundledTargetWarnings(channelOptions, bundledCodes),
]) {
await prompts.log.warn(warning);
}
}
return {
actionType: 'update',
@@ -166,12 +289,13 @@ class UI {
coreConfig: moduleConfigs.core || {},
moduleConfigs: moduleConfigs,
skipPrompts: options.yes || false,
channelOptions,
};
}
}
// This section is only for new installations (update returns early above)
const { installedModuleIds } = await this.getExistingInstallation(confirmedDirectory);
const { installedModuleIds, installedModuleVersions } = await this.getExistingInstallation(confirmedDirectory);
// Unified module selection - all modules in one grouped multiselect
let selectedModules;
@@ -190,7 +314,7 @@ class UI {
selectedModules = await this.getDefaultModules(installedModuleIds);
await prompts.log.info(`Using default modules (--yes flag): ${selectedModules.join(', ')}`);
} else {
selectedModules = await this.selectAllModules(installedModuleIds);
selectedModules = await this.selectAllModules(installedModuleIds, installedModuleVersions, channelOptions);
}
// Resolve custom sources from --custom-source flag
@@ -205,8 +329,31 @@ class UI {
if (!selectedModules.includes('core')) {
selectedModules.unshift('core');
}
// Interactive channel gate: "Ready to install (all stable)? [Y/n]"
// Only shown for fresh installs with no channel flags and an external module
// selected. Non-interactive installs skip this and fall through to the
// registry default (stable) or whatever flags were supplied.
await this._interactiveChannelGate({ options, channelOptions, selectedModules });
let toolSelection = await this.promptToolSelection(confirmedDirectory, options);
const moduleConfigs = await this.collectModuleConfigs(confirmedDirectory, selectedModules, options);
const moduleConfigs = await this.collectModuleConfigs(confirmedDirectory, selectedModules, {
...options,
channelOptions,
});
// Warn about --pin/--next flags that refer to modules the user didn't
// select, or that target bundled modules (core/bmm) where channel
// flags don't apply.
{
const bundledCodes = await this._bundledModuleCodes();
for (const warning of [
...orphanPinWarnings(channelOptions, selectedModules),
...bundledTargetWarnings(channelOptions, bundledCodes),
]) {
await prompts.log.warn(warning);
}
}
return {
actionType: 'install',
@ -217,6 +364,7 @@ class UI {
coreConfig: moduleConfigs.core || {},
moduleConfigs: moduleConfigs,
skipPrompts: options.yes || false,
channelOptions,
};
}
@@ -465,7 +613,7 @@ class UI {
/**
* Get existing installation info and installed modules
* @param {string} directory - Installation directory
* @returns {Object} Object with existingInstall, installedModuleIds, and bmadDir
* @returns {Object} Object with existingInstall, installedModuleIds, installedModuleVersions, and bmadDir
*/
async getExistingInstallation(directory) {
const { ExistingInstall } = require('./core/existing-install');
@@ -474,8 +622,26 @@ class UI {
const { bmadDir } = await installer.findBmadDir(directory);
const existingInstall = await ExistingInstall.detect(bmadDir);
const installedModuleIds = new Set(existingInstall.moduleIds);
const installedModuleVersions = new Map();
const manifestModules = await manifest.getAllModuleVersions(bmadDir);
return { existingInstall, installedModuleIds, bmadDir };
for (const module of manifestModules) {
if (module?.name && module.version) {
installedModuleVersions.set(module.name, module.version);
}
}
for (const module of existingInstall.modules) {
if (module?.id && module.version && module.version !== 'unknown' && !installedModuleVersions.has(module.id)) {
installedModuleVersions.set(module.id, module.version);
}
}
if (existingInstall.hasCore && existingInstall.version && !installedModuleVersions.has('core')) {
installedModuleVersions.set('core', existingInstall.version);
}
return { existingInstall, installedModuleIds, installedModuleVersions, bmadDir };
}
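The version map built in `getExistingInstallation` follows a three-level precedence. This sketch restates it as a pure function with illustrative parameter names (the real method reads the manifest and `ExistingInstall` directly):

```javascript
// Sketch of the precedence above: manifest entries win, then per-module
// detected metadata (skipping 'unknown'), then the core install version
// as a last resort.
function mergeInstalledVersions(manifestModules, detectedModules, coreVersion) {
  const versions = new Map();
  for (const m of manifestModules) {
    if (m && m.name && m.version) versions.set(m.name, m.version);
  }
  for (const m of detectedModules) {
    if (m && m.id && m.version && m.version !== 'unknown' && !versions.has(m.id)) {
      versions.set(m.id, m.version);
    }
  }
  if (coreVersion && !versions.has('core')) versions.set('core', coreVersion);
  return versions;
}
```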
/**
@@ -488,7 +654,7 @@ class UI {
*/
async collectModuleConfigs(directory, modules, options = {}) {
const { OfficialModules } = require('./modules/official-modules');
const configCollector = new OfficialModules();
const configCollector = new OfficialModules({ channelOptions: options.channelOptions });
// Seed core config from CLI options if provided
if (options.userName || options.communicationLanguage || options.documentOutputLanguage || options.outputFolder) {
@@ -556,11 +722,13 @@ class UI {
/**
* Select all modules across three tiers: official, community, and custom URL.
* @param {Set} installedModuleIds - Currently installed module IDs
* @param {Map<string, string>} installedModuleVersions - Installed module versions from the local manifest
* @param {Object|null} channelOptions - Parsed installer channel options
* @returns {Array} Selected module codes (excluding core)
*/
async selectAllModules(installedModuleIds = new Set()) {
async selectAllModules(installedModuleIds = new Set(), installedModuleVersions = new Map(), channelOptions = null) {
// Phase 1: Official modules
const officialSelected = await this._selectOfficialModules(installedModuleIds);
const officialSelected = await this._selectOfficialModules(installedModuleIds, installedModuleVersions, channelOptions);
// Determine which installed modules are NOT official (community or custom).
// These must be preserved even if the user declines to browse community/custom.
@@ -596,9 +764,11 @@ class UI {
* Select official modules using autocompleteMultiselect.
* Extracted from the original selectAllModules - unchanged behavior.
* @param {Set} installedModuleIds - Currently installed module IDs
* @param {Map<string, string>} installedModuleVersions - Installed module versions from the local manifest
* @param {Object|null} channelOptions - Parsed installer channel options
* @returns {Array} Selected official module codes
*/
async _selectOfficialModules(installedModuleIds = new Set()) {
async _selectOfficialModules(installedModuleIds = new Set(), installedModuleVersions = new Map(), channelOptions = null) {
// Built-in modules (core, bmm) come from local source, not the registry
const { OfficialModules } = require('./modules/official-modules');
const builtInModules = (await new OfficialModules().listAvailable()).modules || [];
@@ -611,15 +781,18 @@ class UI {
const initialValues = [];
const lockedValues = ['core'];
const buildModuleEntry = async (code, name, description, isDefault) => {
const buildModuleEntry = async (code, name, description, isDefault, repoUrl = null, registryDefault = null) => {
const isInstalled = installedModuleIds.has(code);
const version = await getModuleVersion(code);
const label = version ? `${name} (v${version})` : name;
const installedVersion = installedModuleVersions.get(code) || '';
const versionState = await getModuleVersion(code, { repoUrl, registryDefault, channelOptions });
const label = buildModuleLabel(name, versionState.version, installedVersion);
return {
label,
value: code,
hint: description,
selected: isInstalled || isDefault,
lookupAttempted: versionState.lookupAttempted,
lookupSucceeded: versionState.lookupSucceeded,
};
};
@@ -636,12 +809,38 @@ class UI {
}
// Add external registry modules (skip built-in duplicates)
for (const mod of registryModules) {
if (mod.builtIn || builtInCodes.has(mod.code)) continue;
const entry = await buildModuleEntry(mod.code, mod.name, mod.description, mod.defaultSelected);
const externalRegistryModules = registryModules.filter((mod) => !mod.builtIn && !builtInCodes.has(mod.code));
let externalRegistryEntries = [];
if (externalRegistryModules.length > 0) {
const spinner = await prompts.spinner();
spinner.start('Checking latest module versions...');
externalRegistryEntries = await Promise.all(
externalRegistryModules.map(async (mod) => ({
code: mod.code,
entry: await buildModuleEntry(
mod.code,
mod.name,
mod.description,
mod.defaultSelected,
mod.url || null,
mod.defaultChannel || null,
),
})),
);
spinner.stop('Checked latest module versions.');
const attemptedLookups = externalRegistryEntries.filter(({ entry }) => entry.lookupAttempted).length;
const successfulLookups = externalRegistryEntries.filter(({ entry }) => entry.lookupSucceeded).length;
if (attemptedLookups > 0 && successfulLookups === 0) {
await prompts.log.warn('Could not check latest module versions; showing cached/local versions.');
}
}
for (const { code, entry } of externalRegistryEntries) {
allOptions.push({ label: entry.label, value: entry.value, hint: entry.hint });
if (entry.selected) {
initialValues.push(mod.code);
initialValues.push(code);
}
}
@@ -1563,6 +1762,349 @@ class UI {
});
await prompts.log.message('Selected tools:\n' + toolLines.join('\n'));
}
/**
* Return the set of module codes the registry marks as built-in (core, bmm).
* These ship with the installer binary and have no per-module channel.
*/
async _bundledModuleCodes() {
const externalManager = new ExternalModuleManager();
try {
const modules = await externalManager.listAvailable();
return modules.filter((m) => m.builtIn).map((m) => m.code);
} catch {
// Registry unreachable — fall back to the known bundled codes.
return ['core', 'bmm'];
}
}
/**
* Fast-path channel gate: confirm "all stable" or open the per-module picker.
*
* Skipped when:
* - running non-interactively (--yes)
* - the user already passed channel flags (--channel / --pin / --next)
* - no externals/community modules are selected
*
* Mutates channelOptions.pins and channelOptions.nextSet to reflect picker choices.
*/
async _interactiveChannelGate({ options, channelOptions, selectedModules }) {
if (options.yes) return;
// If the user already declared their channel intent via flags, trust them
// and skip the gate.
const haveFlagIntent = channelOptions.global || channelOptions.nextSet.size > 0 || channelOptions.pins.size > 0;
if (haveFlagIntent) return;
// Figure out which selected modules actually get a channel (externals +
// community modules). Bundled core/bmm and custom modules skip the picker.
const externalManager = new ExternalModuleManager();
const externals = await externalManager.listAvailable();
const externalByCode = new Map(externals.map((m) => [m.code, m]));
const { CommunityModuleManager } = require('./modules/community-manager');
const communityMgr = new CommunityModuleManager();
const community = await communityMgr.listAll();
const communityByCode = new Map(community.map((m) => [m.code, m]));
const channelSelectable = selectedModules.filter((code) => {
const info = externalByCode.get(code) || communityByCode.get(code);
return info && !info.builtIn;
});
if (channelSelectable.length === 0) return;
const fastPath = await prompts.confirm({
message: `Ready to install (all stable)? Pick "n" to customize channels or pin versions.`,
default: true,
});
if (fastPath) return; // stable for all, registry default applies
// Customize path: per-module picker.
const { fetchStableTags, parseGitHubRepo } = require('./modules/channel-resolver');
for (const code of channelSelectable) {
const info = externalByCode.get(code) || communityByCode.get(code);
const repoUrl = info.url;
// Try to pre-resolve the top stable tag so we can surface it in the picker.
let stableLabel = 'stable (released version)';
try {
const parsed = repoUrl ? parseGitHubRepo(repoUrl) : null;
if (parsed) {
const tags = await fetchStableTags(parsed.owner, parsed.repo);
if (tags.length > 0) {
stableLabel = `stable ${tags[0].tag} (released version)`;
}
}
} catch {
// fall through with the generic label
}
const choice = await prompts.select({
message: `${code}: choose a channel`,
choices: [
{ name: stableLabel, value: 'stable' },
{ name: 'next (main HEAD \u2014 current development)', value: 'next' },
{ name: 'pin (specific version)', value: 'pin' },
],
default: 'stable',
});
if (choice === 'next') {
channelOptions.nextSet.add(code);
} else if (choice === 'pin') {
const pinValue = await prompts.text({
message: `Enter a version tag for '${code}' (e.g. v1.6.0):`,
validate: (value) => {
if (!value || !/^[\w.\-+/]+$/.test(String(value).trim())) {
return 'Must be a non-empty tag name (letters, digits, dots, hyphens).';
}
},
});
channelOptions.pins.set(code, String(pinValue).trim());
}
// 'stable' is the default; nothing to record.
}
}
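The gate's customize path mutates `channelOptions` in place: `nextSet` collects modules tracking main HEAD, `pins` maps module codes to tags, and stable records nothing. A minimal sketch of that mutation (the wrapper function name is hypothetical; the field names follow the diff):

```javascript
// Sketch: record a per-module picker choice into the shared channelOptions.
// 'stable' is deliberately a no-op — the registry default already resolves
// to the latest released tag.
function applyPickerChoice(channelOptions, code, choice, pinTag = null) {
  if (choice === 'next') {
    channelOptions.nextSet.add(code);
  } else if (choice === 'pin' && pinTag) {
    channelOptions.pins.set(code, pinTag.trim());
  }
  return channelOptions;
}
```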
/**
* Resolve channel decisions for an update over an existing install.
*
* For each selected external/community module:
* - Read the recorded channel from the existing manifest.
* - On `stable`: query tags; if a newer stable exists, classify the diff
* and prompt. Patch/minor default Y; major defaults N. `--yes` accepts
* defaults (patches/minors) but NOT majors: a major under --yes stays
* frozen unless the user also passes `--pin CODE=NEW_TAG`.
* - On `next`: no prompt (pull HEAD).
* - On `pinned`: no prompt (stays pinned).
* - No channel recorded and `version: null`: one-time migration prompt
* ("Switch to stable / Keep on next").
*
* Decisions that freeze the current version are applied by adding a pin to
* `channelOptions.pins` so downstream clone logic honors them.
*/
async _resolveUpdateChannels({ bmadDir, selectedModules, channelOptions, yes }) {
const { Manifest } = require('./core/manifest');
const manifestObj = new Manifest();
const manifest = await manifestObj.read(bmadDir);
const existingByName = new Map();
for (const m of manifest?.modulesDetailed || []) {
if (m?.name) existingByName.set(m.name, m);
}
if (existingByName.size === 0) return;
const externalManager = new ExternalModuleManager();
const externals = await externalManager.listAvailable();
const externalByCode = new Map(externals.map((m) => [m.code, m]));
const { CommunityModuleManager } = require('./modules/community-manager');
const communityMgr = new CommunityModuleManager();
const community = await communityMgr.listAll();
const communityByCode = new Map(community.map((m) => [m.code, m]));
const { fetchStableTags, classifyUpgrade, releaseNotesUrl, parseGitHubRepo } = require('./modules/channel-resolver');
// Interactive-only: offer a one-time gate to review / switch channels for
// selected modules that are already installed. Default N so normal Modify
// flows (add/remove modules) aren't interrupted.
let reviewChannels = false;
if (!yes) {
const existingWithChannel = selectedModules.filter((code) => {
const prev = existingByName.get(code);
if (!prev) return false;
const info = externalByCode.get(code) || communityByCode.get(code);
return info && !info.builtIn;
});
if (existingWithChannel.length > 0) {
reviewChannels = await prompts.confirm({
message: 'Review channel assignments (stable / next / pin) for your existing modules?',
default: false,
});
}
}
for (const code of selectedModules) {
const prev = existingByName.get(code);
if (!prev) continue;
const info = externalByCode.get(code) || communityByCode.get(code);
if (!info) continue;
// Bundled modules (core/bmm) ship with the installer binary itself —
// their version is stapled to the CLI version, not a git tag. Skip
// tag-API lookups for them; the "upgrade" mechanism is `npx bmad@X install`.
if (info.builtIn) continue;
const repoUrl = info.url;
const parsed = repoUrl ? parseGitHubRepo(repoUrl) : null;
// Legacy migration: manifest carries no channel and a null/empty
// version. Offer the one-time pick between stable and next.
const recordedChannel = prev.channel || null;
const needsMigration = !recordedChannel && (prev.version == null || prev.version === '');
if (needsMigration) {
if (yes) {
// Conservative headless default: stable.
continue;
}
const chosen = await prompts.select({
message: `${code}: your existing install tracks the main branch. Switch to stable releases (recommended for production), or keep on main?`,
choices: [
{ name: 'Switch to stable', value: 'stable' },
{ name: 'Keep on main (next)', value: 'next' },
],
default: 'stable',
});
if (chosen === 'next') channelOptions.nextSet.add(code);
continue;
}
// Optional channel-switch offer. Fires only when the user opted in via
// the gate above. 'keep' falls through to the existing per-channel
// logic (which runs upgrade classification for stable). Any switch
// records the new intent into channelOptions and skips upgrade prompts.
if (reviewChannels && recordedChannel) {
const switchChoices = [
{
name: `Keep on '${recordedChannel}'${prev.version ? ` @ ${prev.version}` : ''}`,
value: 'keep',
},
];
if (recordedChannel !== 'stable') {
switchChoices.push({ name: 'Switch to stable (released version)', value: 'stable' });
}
if (recordedChannel !== 'next') {
switchChoices.push({ name: 'Switch to next (main HEAD)', value: 'next' });
}
switchChoices.push({ name: 'Pin to a specific version tag', value: 'pin' });
const choice = await prompts.select({
message: `${code} channel:`,
choices: switchChoices,
default: 'keep',
});
if (choice === 'next') {
channelOptions.nextSet.add(code);
continue;
}
if (choice === 'pin') {
const pinValue = await prompts.text({
message: `Enter a version tag for '${code}' (e.g. v1.6.0):`,
validate: (value) => {
if (!value || !/^[\w.\-+/]+$/.test(String(value).trim())) {
return 'Must be a non-empty tag name (letters, digits, dots, hyphens).';
}
},
});
channelOptions.pins.set(code, String(pinValue).trim());
continue;
}
if (choice === 'stable') {
// Switch to stable: install at the top stable tag without an
// upgrade-classification prompt (the user explicitly opted in).
// Also warm the tag cache here so the actual clone step doesn't
// need a second GitHub API call (can hit rate limits).
if (parsed) {
try {
await fetchStableTags(parsed.owner, parsed.repo);
} catch {
// best effort; clone step will surface any failure
}
}
continue;
}
// 'keep' → fall through with recordedChannel below.
}
if (recordedChannel === 'pinned' || recordedChannel === 'next') {
// Respect any explicit channel intent the user already expressed via
// CLI flags (--channel / --all-* / --next=CODE / --pin CODE=TAG) or
// via the interactive review gate above. Only auto-re-assert the
// recorded channel when the user hasn't opted into anything else —
// otherwise --all-stable (or a review "switch to stable") would be
// silently clobbered by the prior channel.
const alreadyDecided = channelOptions.global || channelOptions.nextSet.has(code) || channelOptions.pins.has(code);
if (!alreadyDecided) {
if (recordedChannel === 'pinned' && prev.version) {
channelOptions.pins.set(code, prev.version);
} else if (recordedChannel === 'next') {
channelOptions.nextSet.add(code);
}
}
continue;
}
// Stable channel: check for a newer released tag.
if (!parsed) continue;
// Respect explicit CLI intent (--pin / --next=CODE / --all-*) and any
// choice the user already made in the earlier review gate. Without this
// guard the upgrade classifier below would unconditionally call
// `channelOptions.pins.set(code, prev.version)` on decline/major-refuse/
// fetch-error, silently clobbering the user's override.
const alreadyDecided = channelOptions.global || channelOptions.nextSet.has(code) || channelOptions.pins.has(code);
if (alreadyDecided) continue;
let tags;
try {
tags = await fetchStableTags(parsed.owner, parsed.repo);
} catch (error) {
await prompts.log.warn(`Could not check for updates on ${code} (${error.message}). Leaving at ${prev.version}.`);
if (prev.version) channelOptions.pins.set(code, prev.version);
continue;
}
if (!tags || tags.length === 0) continue;
const topTag = tags[0].tag; // e.g. "v1.7.0"
const currentTag = prev.version || '';
const diffClass = classifyUpgrade(currentTag, topTag);
if (diffClass === 'none') continue; // already at or above top tag
const notes = releaseNotesUrl(repoUrl, topTag);
let accept;
if (diffClass === 'major') {
if (yes) {
// Major under --yes is refused by design.
await prompts.log.warn(
`${code} ${currentTag} → ${topTag} is a new major release; staying on ${currentTag}. ` +
`To accept, rerun with --pin ${code}=${topTag}.`,
);
channelOptions.pins.set(code, currentTag);
continue;
}
accept = await prompts.confirm({
message:
`${code} ${topTag} available — new major release (may change behavior).` +
(notes ? ` Release notes: ${notes}.` : '') +
' Upgrade?',
default: false,
});
} else if (diffClass === 'minor') {
if (yes) {
accept = true;
} else {
accept = await prompts.confirm({
message: `${code} ${topTag} available (new features).` + (notes ? ` Release notes: ${notes}.` : '') + ' Upgrade?',
default: true,
});
}
} else {
// patch
if (yes) {
accept = true;
} else {
accept = await prompts.confirm({
message: `${code} ${topTag} available. Upgrade?`,
default: true,
});
}
}
if (!accept && currentTag) {
// Freeze the current version by pinning it for this run.
channelOptions.pins.set(code, currentTag);
}
}
}
}
module.exports = { UI };
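The stable-channel prompting in `_resolveUpdateChannels` reduces to a small decision matrix: the upgrade class from `classifyUpgrade` crossed with `--yes`. The sketch below restates that documented policy as a pure function (names are illustrative, not part of the shipped module):

```javascript
// Sketch of the upgrade-prompt policy: patch/minor prompt with default Y
// and auto-accept under --yes; major prompts with default N and is always
// refused under --yes (the user must pass --pin CODE=TAG to take it).
function upgradeDecision(diffClass, yes) {
  if (diffClass === 'none') return { prompt: false, accept: false };
  if (diffClass === 'major') {
    return yes ? { prompt: false, accept: false } : { prompt: true, accept: false };
  }
  // patch / minor
  return yes ? { prompt: false, accept: true } : { prompt: true, accept: true };
}
```

The `accept` field here stands for the prompt's default answer; interactively the user can of course override it.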