From 3d824d4c0f459582917f23bf1b1d7149dd4f88dc Mon Sep 17 00:00:00 2001 From: Brian Date: Fri, 24 Apr 2026 08:20:30 -0500 Subject: [PATCH 01/23] feat(installer): channel-based version resolution + interactive channel management (#2305) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * feat(installer): channel-based version resolution for external modules Adds stable/next/pinned channel resolution so external/community modules install at released git tags by default instead of tracking main HEAD. Manifest now records channel, resolved version, and SHA per module for reproducible installs. CLI flags: --channel, --all-stable, --all-next, --next=CODE (repeatable), --pin CODE=TAG (repeatable). Precedence: pin > next > channel > registry default > stable. --yes accepts patch/minor upgrades but refuses majors. Interactive "Ready to install (all stable)?" gate with a per-module picker (stable/next/pin) when declined. Re-install prompts classify tag diffs as patch/minor/major with semver-class-dependent defaults. Legacy version:null manifests get a one-time migration prompt. Custom modules gain an optional @<ref> URL suffix for pinning (https, ssh, /tree/<ref>/subdir forms supported; local paths rejected). Community modules honor --next/--pin overrides with a curator-bypass warning; default path still enforces the approved SHA. Quick-update now reads the manifest's recorded channel per module so pinned installs don't silently roll forward. * feat(installer): interactive channel switch, upgrade refusal, unified docs Builds on the channel-resolution foundation. The installer now lets users flip a module between stable, next, and pinned after install — either interactively via a "Review channel assignments?" gate, or by flag. Quick and modify re-installs classify stable upgrades; under non-interactive flows, patches and minors apply automatically but majors are refused with a pointer to --pin.
Fallback behavior for GitHub rate-limit / network failures is now cache-aware: re-installs reuse the recorded ref silently; fresh installs abort with actionable guidance (set GITHUB_TOKEN or use --next/--pin). Bundled modules (core, bmm) warn when targeted by --pin or --next so users aren't left wondering why the flag had no effect. Install summary labels no longer mangle "main" into "vmain"; next-channel entries render as "main @ <sha>" instead. Bundled modules are now correctly skipped from all channel prompts and tag-API lookups. Docs consolidated into a single how-to. install-bmad.md now covers the interactive flow, the channel model (stable/next/pinned plus the npm dist-tag axis for core/bmm), the re-install upgrade prompts, the full flag reference, copy-paste recipes, and troubleshooting. The old non-interactive-installation.md is reduced to a redirect stub. * fix(installer): review fixes + unit tests for channel resolution - ui.js: import parseGitHubRepo; fixes ReferenceError in the interactive channel picker's stable-tag pre-resolve path. - community-manager: pinned modules now fetch+checkout the pin tag on cache refresh instead of resetting to origin/HEAD (was silently drifting to main on re-install). - channel-plan: parseChannelOptions returns acceptBypass so --yes auto-confirms the curator-bypass prompt; headless --next/--pin installs of community modules no longer hang. - community-manager: simplify recordedVersion (dead ternary branch). - custom-module-manager: drop "or sha" from the @<ref> comment (git clone --branch rejects raw SHAs); update-path fetches origin so /tree/<ref>/ URLs work too. - install-bmad.md: rename "Headless / CI installs" to "Headless CI installs" so the stub's #headless-ci-installs anchor resolves. - test/test-installer-channels.js: 83 unit tests for channel-plan and channel-resolver pure modules; wired into npm test as test:channels.
* fix(installer): address CodeRabbit review findings - ui.js: skip stable-channel upgrade classification when the user has already declared intent via --pin/--next=<code>/--channel or the review gate. Prevents the decline / major-refused / fetch-error branches from silently overwriting an explicit pin with prev.version. - external-manager.js: short-circuit cloneExternalModule when the requested plan matches an existing in-process resolution and the cache is valid. Avoids redundant resolveChannel() + git fetch on every same-plan lookup in a single install. - installer.js: fall back to CommunityModuleManager.getResolution() when no external resolution exists, so community module result rows carry newChannel/newSha instead of null under --next/--pin. - installer.js: don't label a module as "no change" when its version string is 'main'/'HEAD' — the SHA may have moved and preVersions doesn't track the prior SHA. Show "(refreshed)" instead. - official-modules.js: match versionInfo.version to the manifest's cloneRef || (hasGitClone ? 'main' : version) expression so summary lines report the cloned ref for git-backed custom installs. - install-bmad.md: clarify that sha is only written for git-backed modules and that rerunning the same --modules on another machine does not reproduce stable-channel installs — convert recorded tags into explicit --pin flags for cross-machine reproducibility.
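The precedence rule described above (pin > next > channel > registry default > stable) can be sketched in a few lines. This is an illustrative reduction under assumed shapes, not the shipped code — the real logic lives in tools/installer/modules/channel-plan.js, and the helper name here is hypothetical:

```javascript
// Illustrative sketch of per-module channel precedence (hypothetical helper;
// the shipped decideChannelForModule in channel-plan.js may differ in detail).
function decideChannelSketch(code, { pins, nextSet, global: globalChannel }, registryDefault) {
  if (pins.has(code)) return { channel: 'pinned', tag: pins.get(code) }; // --pin CODE=TAG wins
  if (nextSet.has(code)) return { channel: 'next' }; // then --next=CODE
  if (globalChannel) return { channel: globalChannel }; // then --channel / --all-*
  return { channel: registryDefault || 'stable' }; // registry default, else stable
}

// Input shaped like `--all-next --pin cis=v0.2.0`:
const channelOpts = { pins: new Map([['cis', 'v0.2.0']]), nextSet: new Set(), global: 'next' };
console.log(decideChannelSketch('cis', channelOpts, null)); // cis stays pinned despite --all-next
console.log(decideChannelSketch('bmb', channelOpts, null)); // bmb lands on next
```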
--- docs/how-to/install-bmad.md | 260 +++++++---- docs/how-to/non-interactive-installation.md | 192 +------- package-lock.json | 42 +- package.json | 3 +- test/test-installer-channels.js | 348 +++++++++++++++ tools/installer/commands/install.js | 13 + tools/installer/core/config.js | 5 +- tools/installer/core/installer.js | 105 ++++- tools/installer/core/manifest-generator.js | 17 +- tools/installer/core/manifest.js | 31 +- tools/installer/modules/channel-plan.js | 203 +++++++++ tools/installer/modules/channel-resolver.js | 241 ++++++++++ tools/installer/modules/community-manager.js | 138 +++++- .../modules/custom-module-manager.js | 179 +++++++- tools/installer/modules/external-manager.js | 260 +++++++++-- tools/installer/modules/official-modules.js | 66 ++- .../installer/modules/registry-fallback.yaml | 8 + tools/installer/ui.js | 410 +++++++++++++++++- 18 files changed, 2122 insertions(+), 399 deletions(-) create mode 100644 test/test-installer-channels.js create mode 100644 tools/installer/modules/channel-plan.js create mode 100644 tools/installer/modules/channel-resolver.js diff --git a/docs/how-to/install-bmad.md b/docs/how-to/install-bmad.md index e0d276d51..616e6e430 100644 --- a/docs/how-to/install-bmad.md +++ b/docs/how-to/install-bmad.md @@ -1,122 +1,226 @@ --- title: 'How to Install BMad' -description: Step-by-step guide to installing BMad in your project +description: Install, update, and pin BMad for local development, teams, and CI sidebar: order: 1 --- -Use the `npx bmad-method install` command to set up BMad in your project with your choice of modules and AI tools. - -If you want to use a non interactive installer and provide all install options on the command line, see [this guide](./non-interactive-installation.md). +Use `npx bmad-method install` to set up BMad in your project. One command handles first installs, upgrades, channel switching, and scripted CI runs. This page covers all of it. 
## When to Use This - Starting a new project with BMad -- Adding BMad to an existing codebase -- Update the existing BMad Installation +- Adding or removing modules on an existing install +- Switching a module to main-HEAD or pinning to a specific release +- Scripting installs for CI pipelines, Dockerfiles, or enterprise rollouts :::note[Prerequisites] -- **Node.js** 20+ (required for the installer) -- **Git** (recommended) -- **AI tool** (Claude Code, Cursor, or similar) - ::: +- **Node.js** 20+ (the installer requires it) +- **Git** (for cloning external modules) +- **An AI tool** such as Claude Code or Cursor — or install without one using `--tools none` -## Steps +::: -### 1. Run the Installer +## First-time install (the fast path) ```bash npx bmad-method install ``` -:::tip[Want the newest prerelease build?] -Use the `next` dist-tag: +The interactive flow asks you five things: + +1. Installation directory (defaults to the current working directory) +2. Which modules to install (checkboxes for core, bmm, bmb, cis, gds, tea) +3. **"Ready to install (all stable)?"** — Yes accepts the latest released tag for every external module +4. Which AI tools/IDEs to integrate with (claude-code, cursor, and others) +5. Per-module config (name, language, output folder) + +Accept the defaults and you land on the latest stable release of every module, configured for your chosen tool. + +:::tip[Just want the newest prerelease?] ```bash npx bmad-method@next install ``` -This gets you newer changes earlier, with a higher chance of churn than the default install. +Runs the prerelease installer, which ships a newer snapshot of core and bmm. More churn, fewer delays between development and release. ::: -:::tip[Bleeding edge] -To install the latest from the main branch (may be unstable): +## Picking a specific version + +Two independent axes control what ends up on disk. 
+ +### Axis 1: external module channels + +Every external module — bmb, cis, gds, tea, and any community module — installs on one of three channels: + +| Channel | What gets installed | Who picks this | +| ------------------ | ---------------------------------------------------------------------------- | --------------------------------------- | +| `stable` (default) | Highest released semver tag. Prereleases like `v2.0.0-alpha.1` are excluded. | Most users | +| `next` | Main branch HEAD at install time | Contributors, early adopters | +| `pinned` | A specific tag you name | Enterprise installs, CI reproducibility | + +Channels are per-module. You can run bmb on `next` while leaving cis on `stable` — the flags below let you mix freely. + +### Axis 2: installer binary version + +The `bmad-method` npm package itself has two dist-tags: + +| Command | What you get | +| ------------------------------------- | ----------------------------------------------------------------- | +| `npx bmad-method install` (`@latest`) | Latest stable installer release | +| `npx bmad-method@next install` | Latest prerelease installer, auto-published on every push to main | + +**The installer binary determines your core and bmm versions.** Those two modules ship bundled inside the installer package rather than being cloned from separate repos. + +### Why core and bmm don't have their own channel + +They're stapled to the installer binary you ran: + +- `npx bmad-method install` → latest stable core and bmm +- `npx bmad-method@next install` → prerelease core and bmm +- `node /path/to/local-checkout/tools/installer/bmad-cli.js install` → whatever your local checkout has + +`--pin bmm=v6.3.0` and `--next=bmm` have no effect on bundled modules, and the installer warns you when you try. A future release extracts bmm from the installer package; once that ships, bmm gets a proper channel selector like bmb has today.
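The `stable` row's rule — highest released semver tag, prereleases excluded — can be sketched as follows. This is a simplified stand-in, assuming the real resolver in tools/installer/modules/channel-resolver.js behaves similarly; the function name is illustrative:

```javascript
// Simplified stand-in for stable-channel tag selection: keep only plain
// vX.Y.Z tags (prerelease suffixes like -alpha.1 fail the match), then
// take the highest by numeric major/minor/patch comparison.
function pickStableTagSketch(tags) {
  const release = /^v?(\d+)\.(\d+)\.(\d+)$/;
  const parsed = tags
    .map((t) => {
      const m = release.exec(t);
      return m && { tag: t, nums: [Number(m[1]), Number(m[2]), Number(m[3])] };
    })
    .filter(Boolean)
    .sort((a, b) => b.nums[0] - a.nums[0] || b.nums[1] - a.nums[1] || b.nums[2] - a.nums[2]);
  return parsed.length > 0 ? parsed[0].tag : null;
}

console.log(pickStableTagSketch(['v1.7.0', 'v2.0.0-alpha.1', 'v1.7.1', 'main'])); // → v1.7.1
```

Note how both `main` and the `-alpha.1` prerelease fall out of consideration — only released tags compete.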
+ +## Updating an existing install + +Running `npx bmad-method install` in a directory that already contains `_bmad/` gives you a menu: + +| Choice | What it does | +| ------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------- | +| **Quick Update** | Re-runs the install with your existing settings. Refreshes files, applies patches and minor stable upgrades, refuses major upgrades. Fast, non-interactive. | +| **Modify Install** | Full interactive flow. Add or remove modules, reconfigure settings, optionally review and switch channels for existing modules. | + +### Upgrade prompts + +When Modify detects a newer stable tag for a module you've installed on `stable`, it classifies the diff and prompts accordingly: + +| Upgrade type | Example | Default | +| ------------ | --------------- | ------- | +| Patch | v1.7.0 → v1.7.1 | Y | +| Minor | v1.7.0 → v1.8.0 | Y | +| Major | v1.7.0 → v2.0.0 | **N** | + +Major defaults to N because breaking changes frequently surface as "instability" when they weren't expected. The prompt includes a GitHub release-notes URL so you can read what changed before accepting. + +Under `--yes`, patch and minor upgrades apply automatically. Majors stay frozen — pass `--pin <code>=<tag>` to accept non-interactively. + +### Switching a module's channel + +**Interactively:** choose Modify → answer **Yes** to "Review channel assignments?" → each external module offers Keep, Switch to stable, Switch to next, or Pin to a tag. + +**Via flags:** the recipes in the next section cover the common cases.
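The patch/minor/major classification in the table above is plain semver math. The shipped classifier is classifyUpgrade in tools/installer/modules/channel-resolver.js; the version below is a simplified sketch of the assumed behavior, not the real implementation:

```javascript
// Simplified sketch of upgrade classification between two release tags.
// Assumes plain vX.Y.Z tags; the real classifyUpgrade may handle more cases.
function classifyUpgradeSketch(fromTag, toTag) {
  const parse = (t) => t.replace(/^v/, '').split('.').map(Number);
  const [fromMajor, fromMinor] = parse(fromTag);
  const [toMajor, toMinor] = parse(toTag);
  if (toMajor !== fromMajor) return 'major'; // v1.7.0 → v2.0.0: prompt defaults to N
  if (toMinor !== fromMinor) return 'minor'; // v1.7.0 → v1.8.0: prompt defaults to Y
  return 'patch'; // v1.7.0 → v1.7.1: prompt defaults to Y
}

console.log(classifyUpgradeSketch('v1.7.0', 'v2.0.0')); // → major
```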
+ +## Headless CI installs + +### Flag reference + +| Flag | Purpose | +| ------------------------------------------------------------------------------------------ | ---------------------------------------------------------------------------------- | +| `--yes`, `-y` | Skip all prompts; accept flag values + defaults | +| `--directory <path>` | Install into this directory (default: current working dir) | +| `--modules <ids>` | Exact module set. Core is auto-added. Not a delta — list everything you want kept. | +| `--tools <ids>` or `--tools none` | IDE/tool selection. `none` skips tool config entirely. | +| `--action <action>` | `install`, `update`, or `quick-update`. Defaults based on existing install state. | +| `--custom-source <sources>` | Install custom modules from Git URLs or local paths | +| `--channel <stable|next>` | Apply to all externals (aliased as `--all-stable` / `--all-next`) | +| `--all-stable` | Alias for `--channel=stable` | +| `--all-next` | Alias for `--channel=next` | +| `--next=<code>` | Put one module on next. Repeatable. | +| `--pin <code>=<tag>` | Pin one module to a specific tag. Repeatable. | +| `--user-name`, `--communication-language`, `--document-output-language`, `--output-folder` | Override per-user config defaults | + +Precedence when flags overlap: `--pin` beats `--next=<code>` beats `--channel` / `--all-*` beats the registry default (`stable`). + +:::note[Example resolution] +`--all-next --pin cis=v0.2.0` puts bmb, gds, and tea on next while pinning cis to v0.2.0.
+::: + +### Recipes + +**Default install — latest stable for everything:** ```bash -npx github:bmad-code-org/BMAD-METHOD install +npx bmad-method install --yes --modules bmm,bmb,cis --tools claude-code ``` +**Enterprise pin — reproducible byte-for-byte:** + +```bash +npx bmad-method install --yes \ + --modules bmm,bmb,cis \ + --pin bmb=v1.7.0 --pin cis=v0.2.0 \ + --tools claude-code +``` + +**Bleeding edge — externals on main HEAD:** + +```bash +npx bmad-method install --yes --modules bmm,bmb --all-next --tools claude-code +``` + +**Add a module to an existing install** (keep everything else): + +```bash +npx bmad-method install --yes --action update \ + --modules bmm,bmb,gds \ + --tools none +``` + +**Mix channels — bmb on next, gds on stable:** + +```bash +npx bmad-method install --yes --action update \ + --modules bmm,bmb,cis,gds \ + --next=bmb \ + --tools none +``` + +:::caution[Rate limit on shared IPs] +Anonymous GitHub API calls are capped at 60/hour per IP. A single install hits the API once per external module to resolve the stable tag. Offices behind NAT, CI runner pools, and VPNs can collectively exhaust this. + +Set `GITHUB_TOKEN=` in the environment to raise the limit to 5000/hour per account. Any public-repo-read PAT works; no scopes are required. ::: -### 2. Choose Installation Location +## What got installed -The installer will ask where to install BMad files: +After any install, `_bmad/_config/manifest.yaml` records exactly what's on disk: -- Current directory (recommended for new projects if you created the directory yourself and ran from within the directory) -- Custom path - -### 3. Select Your AI Tools - -Pick which AI tools you use: - -- Claude Code -- Cursor -- Others - -Each tool has its own way of integrating skills. The installer creates tiny prompt files to activate workflows and agents — it just puts them where your tool expects to find them. 
- -:::note[Enabling Skills] -Some platforms require skills to be explicitly enabled in settings before they appear. If you install BMad and don't see the skills, check your platform's settings or ask your AI assistant how to enable skills. -::: - -### 4. Choose Modules - -The installer shows available modules. Select whichever ones you need — most users just want **BMad Method** (the software development module). - -### 5. Follow the Prompts - -The installer guides you through the rest — settings, tool integrations, etc. - -## What You Get - -```text -your-project/ -├── _bmad/ -│ ├── bmm/ # Your selected modules -│ │ └── config.yaml # Module settings (if you ever need to change them) -│ ├── core/ # Required core module -│ └── ... -├── _bmad-output/ # Generated artifacts -├── .claude/ # Claude Code skills (if using Claude Code) -│ └── skills/ -│ ├── bmad-help/ -│ ├── bmad-persona/ -│ └── ... -└── .cursor/ # Cursor skills (if using Cursor) - └── skills/ - └── ... +```yaml +modules: + - name: bmb + version: v1.7.0 # the tag, or "main" for next + channel: stable # stable | next | pinned + sha: 86033fc9aeae2ca6d52c7cdb675c1f4bf17fc1c1 + source: external + repoUrl: https://github.com/bmad-code-org/bmad-builder ``` -## Verify Installation +The `sha` field is written for git-backed modules (external, community, and URL-based custom). Bundled modules (core, bmm) and local-path custom modules don't have one — their code travels with the installer binary or your filesystem, not a cloneable ref. -Run `bmad-help` to verify everything works and see what to do next. +For cross-machine reproducibility, don't rely on rerunning the same `--modules` command. Stable-channel installs resolve to the highest released tag **at install time**, so a later rerun lands on whatever has been released since. 
Convert the recorded tags from `manifest.yaml` into explicit `--pin` flags on the target machine, e.g.: -**BMad-Help is your intelligent guide** that will: - -- Confirm your installation is working -- Show what's available based on your installed modules -- Recommend your first step - -You can also ask it questions: - -``` -bmad-help I just installed, what should I do first? -bmad-help What are my options for a SaaS project? +```bash +npx bmad-method install --yes --modules bmb,cis \ + --pin bmb=v1.7.0 --pin cis=v0.4.2 --tools none ``` ## Troubleshooting -**Installer throws an error** — Copy-paste the output into your AI assistant and let it figure it out. +### "Could not resolve stable tag" or "API rate limit exceeded" -**Installer worked but something doesn't work later** — Your AI needs BMad context to help. See [How to Get Answers About BMad](./get-answers-about-bmad.md) for how to point your AI at the right sources. +You've hit GitHub's 60/hr anonymous limit. Set `GITHUB_TOKEN` and retry. If you already have a token set, it may be expired or rate-limited on its own budget — try a different token or wait for the hourly reset. + +### "Tag 'vX.Y.Z' not found" + +The tag you passed to `--pin` doesn't exist in the module's repo. Check the repo's releases page on GitHub for valid tags. + +### A pinned install keeps upgrading + +Pinned installs don't upgrade. Quick-update applies patches and minors on stable channel only; it won't touch `pinned` or `next`. If a pinned install changed, open `_bmad/_config/manifest.yaml` — `channel: pinned` plus a fixed `version` and `sha` should hold across runs unless you explicitly override via flags. + +### `--pin bmm=X` didn't do anything + +bmm is a bundled module — `--pin` and `--next=` don't apply. Use `npx bmad-method@next install` for a prerelease core/bmm, or check out the bmad-bmm repo and run the installer locally to get unreleased changes. 
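Converting recorded tags into explicit `--pin` flags, as the reproducibility note above recommends, is mechanical enough to script. A hypothetical sketch — field names mirror the manifest.yaml excerpt in this guide, but this helper is not part of the installer:

```javascript
// Hypothetical helper: emit --pin flags for git-backed stable-channel modules
// so the same versions reproduce on another machine. Field names follow the
// manifest.yaml excerpt; this is illustrative, not installer code.
function pinFlagsFromManifest(manifest) {
  return (manifest.modules || [])
    .filter((m) => m.channel === 'stable' && m.version && m.version !== 'main')
    .map((m) => `--pin ${m.name}=${m.version}`)
    .join(' ');
}

const exampleManifest = {
  modules: [
    { name: 'bmb', version: 'v1.7.0', channel: 'stable', source: 'external' },
    { name: 'cis', version: 'v0.4.2', channel: 'stable', source: 'external' },
  ],
};
console.log(pinFlagsFromManifest(exampleManifest)); // → --pin bmb=v1.7.0 --pin cis=v0.4.2
```

Next-channel entries (version `main`) are skipped on purpose — a SHA, not a tag, would be needed to reproduce those, and `--pin` takes tags.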
diff --git a/docs/how-to/non-interactive-installation.md b/docs/how-to/non-interactive-installation.md index 817c9120a..bfae38d7a 100644 --- a/docs/how-to/non-interactive-installation.md +++ b/docs/how-to/non-interactive-installation.md @@ -1,196 +1,10 @@ --- title: Non-Interactive Installation -description: Install BMad using command-line flags for CI/CD pipelines and automated deployments +description: Headless / CI install docs have moved sidebar: order: 2 --- -Use command-line flags to install BMad non-interactively. This is useful for: - -## When to Use This - -- Automated deployments and CI/CD pipelines -- Scripted installations -- Batch installations across multiple projects -- Quick installations with known configurations - -:::note[Prerequisites] -Requires [Node.js](https://nodejs.org) v20+ and `npx` (included with npm). -::: - -## Available Flags - -### Installation Options - -| Flag | Description | Example | -| --------------------------- | ----------------------------------------------------------------------------------- | ---------------------------------------------- | -| `--directory ` | Installation directory | `--directory ~/projects/myapp` | -| `--modules ` | Comma-separated module IDs | `--modules bmm,bmb` | -| `--tools ` | Comma-separated tool/IDE IDs (use `none` to skip) | `--tools claude-code,cursor` or `--tools none` | -| `--action ` | Action for existing installations: `install` (default), `update`, or `quick-update` | `--action quick-update` | -| `--custom-source ` | Comma-separated Git URLs or local paths for custom modules | `--custom-source /path/to/module` | - -### Core Configuration - -| Flag | Description | Default | -| ----------------------------------- | ----------------------------------------------- | --------------- | -| `--user-name ` | Name for agents to use | System username | -| `--communication-language ` | Agent communication language | English | -| `--document-output-language ` | Document output language | English | -| 
`--output-folder ` | Output folder path (see resolution rules below) | `_bmad-output` | - -#### Output Folder Path Resolution - -The value passed to `--output-folder` (or entered interactively) is resolved according to these rules: - -| Input type | Example | Resolved as | -| ---------------------------- | -------------------------- | ---------------------------------------------------------- | -| Relative path (default) | `_bmad-output` | `/_bmad-output` | -| Relative path with traversal | `../../shared-outputs` | Normalized absolute path — e.g. `/Users/me/shared-outputs` | -| Absolute path | `/Users/me/shared-outputs` | Used as-is — project root is **not** prepended | - -The resolved path is what agents and workflows use at runtime when writing output files. Using an absolute path or a traversal-based relative path lets you direct all generated artifacts to a directory outside your project tree — useful for shared or monorepo setups. - -### Other Options - -| Flag | Description | -| ------------- | ------------------------------------------- | -| `-y, --yes` | Accept all defaults and skip prompts | -| `-d, --debug` | Enable debug output for manifest generation | - -## Module IDs - -Available module IDs for the `--modules` flag: - -- `bmm` — BMad Method Master -- `bmb` — BMad Builder - -Check the [BMad registry](https://github.com/bmad-code-org) for available external modules. - -## Tool/IDE IDs - -Available tool IDs for the `--tools` flag: - -**Preferred:** `claude-code`, `cursor` - -Run `npx bmad-method install` interactively once to see the full current list of supported tools, or check the [platform codes configuration](https://github.com/bmad-code-org/BMAD-METHOD/blob/main/tools/installer/ide/platform-codes.yaml). 
- -## Installation Modes - -| Mode | Description | Example | -| --------------------- | --------------------------------------------- | ------------------------------------------------------------------------------------------------- | -| Fully non-interactive | Provide all flags to skip all prompts | `npx bmad-method install --directory . --modules bmm --tools claude-code --yes` | -| Semi-interactive | Provide some flags; BMad prompts for the rest | `npx bmad-method install --directory . --modules bmm` | -| Defaults only | Accept all defaults with `-y` | `npx bmad-method install --yes` | -| Custom source only | Install core + custom module(s) | `npx bmad-method install --directory . --custom-source /path/to/module --tools claude-code --yes` | -| Without tools | Skip tool/IDE configuration | `npx bmad-method install --modules bmm --tools none` | - -## Examples - -### CI/CD Pipeline Installation - -```bash -#!/bin/bash -# install-bmad.sh - -npx bmad-method install \ - --directory "${GITHUB_WORKSPACE}" \ - --modules bmm \ - --tools claude-code \ - --user-name "CI Bot" \ - --communication-language English \ - --document-output-language English \ - --output-folder _bmad-output \ - --yes -``` - -### Update Existing Installation - -```bash -npx bmad-method install \ - --directory ~/projects/myapp \ - --action update \ - --modules bmm,bmb,custom-module -``` - -### Quick Update (Preserve Settings) - -```bash -npx bmad-method install \ - --directory ~/projects/myapp \ - --action quick-update -``` - -### Install from Custom Source - -Install a module from a local path or any Git host: - -```bash -npx bmad-method install \ - --directory . \ - --custom-source /path/to/my-module \ - --tools claude-code \ - --yes -``` - -Combine with official modules: - -```bash -npx bmad-method install \ - --directory . 
\ - --modules bmm \ - --custom-source https://gitlab.com/myorg/my-module \ - --tools claude-code \ - --yes -``` - -:::note[Custom source behavior] -When `--custom-source` is used without `--modules`, only core and the custom modules are installed. Add `--modules` to include official modules as well. See [Install Custom and Community Modules](./install-custom-modules.md) for details. -::: - -## What You Get - -- A fully configured `_bmad/` directory in your project -- Agents and workflows configured for your selected modules and tools -- A `_bmad-output/` folder for generated artifacts - -## Validation and Error Handling - -BMad validates all provided flags: - -- **Directory** — Must be a valid path with write permissions -- **Modules** — Warns about invalid module IDs (but won't fail) -- **Tools** — Warns about invalid tool IDs (but won't fail) -- **Action** — Must be one of: `install`, `update`, `quick-update` - -Invalid values will either: - -1. Show an error and exit (for critical options like directory) -2. Show a warning and skip (for optional items) -3. Fall back to interactive prompts (for missing required values) - -:::tip[Best Practices] - -- Use absolute paths for `--directory` to avoid ambiguity -- Use an absolute path for `--output-folder` when you want artifacts written outside the project tree (e.g. a shared monorepo outputs directory) -- Test flags locally before using in CI/CD pipelines -- Combine with `-y` for truly unattended installations -- Use `--debug` if you encounter issues during installation - ::: - -## Troubleshooting - -### Installation fails with "Invalid directory" - -- The directory path must exist (or its parent must exist) -- You need write permissions -- The path must be absolute or correctly relative to the current directory - -### Module not found - -- Verify the module ID is correct -- External modules must be available in the registry - -:::note[Still stuck?] 
-Run with `--debug` for detailed output, try interactive mode to isolate the issue, or report at . +:::note[This page has moved] +Headless and CI install flags, channel selection, and pinning now live in the unified [How to Install BMad](./install-bmad.md) guide. Jump to the [Headless / CI installs](./install-bmad.md#headless-ci-installs) section for the flag reference and copy-paste recipes. ::: diff --git a/package-lock.json b/package-lock.json index bfd60ee1e..d547eff9a 100644 --- a/package-lock.json +++ b/package-lock.json @@ -15,7 +15,6 @@ "chalk": "^4.1.2", "commander": "^14.0.0", "csv-parse": "^6.1.0", - "fs-extra": "^11.3.0", "glob": "^11.0.3", "ignore": "^7.0.5", "js-yaml": "^4.1.0", @@ -25,8 +24,8 @@ "yaml": "^2.7.0" }, "bin": { - "bmad": "tools/bmad-npx-wrapper.js", - "bmad-method": "tools/bmad-npx-wrapper.js" + "bmad": "tools/installer/bmad-cli.js", + "bmad-method": "tools/installer/bmad-cli.js" }, "devDependencies": { "@astrojs/sitemap": "^3.6.0", @@ -46,6 +45,7 @@ "prettier": "^3.7.4", "prettier-plugin-packagejson": "^2.5.19", "sharp": "^0.33.5", + "unist-util-visit": "^5.1.0", "yaml-eslint-parser": "^1.2.3", "yaml-lint": "^1.7.0" }, @@ -6975,20 +6975,6 @@ "url": "https://github.com/sponsors/isaacs" } }, - "node_modules/fs-extra": { - "version": "11.3.3", - "resolved": "https://registry.npmjs.org/fs-extra/-/fs-extra-11.3.3.tgz", - "integrity": "sha512-VWSRii4t0AFm6ixFFmLLx1t7wS1gh+ckoa84aOeapGum0h+EZd1EhEumSB+ZdDLnEPuucsVB9oB7cxJHap6Afg==", - "license": "MIT", - "dependencies": { - "graceful-fs": "^4.2.0", - "jsonfile": "^6.0.1", - "universalify": "^2.0.0" - }, - "engines": { - "node": ">=14.14" - } - }, "node_modules/fs.realpath": { "version": "1.0.0", "resolved": "https://registry.npmjs.org/fs.realpath/-/fs.realpath-1.0.0.tgz", @@ -7227,6 +7213,7 @@ "version": "4.2.11", "resolved": "https://registry.npmjs.org/graceful-fs/-/graceful-fs-4.2.11.tgz", "integrity": 
"sha512-RbJ5/jmFcNNCcDV5o9eTnBLJ/HszWV0P73bc+Ff4nS/rJj+YaS6IGyiOL0VoBYX+l1Wrl3k63h/KrH+nhJ0XvQ==", + "dev": true, "license": "ISC" }, "node_modules/h3": { @@ -9066,18 +9053,6 @@ "dev": true, "license": "MIT" }, - "node_modules/jsonfile": { - "version": "6.2.0", - "resolved": "https://registry.npmjs.org/jsonfile/-/jsonfile-6.2.0.tgz", - "integrity": "sha512-FGuPw30AdOIUTRMC2OMRtQV+jkVj2cfPqSeWXv1NEAJ1qZ5zb1X6z1mFhbfOB/iy3ssJCD+3KuZ8r8C3uVFlAg==", - "license": "MIT", - "dependencies": { - "universalify": "^2.0.0" - }, - "optionalDependencies": { - "graceful-fs": "^4.1.6" - } - }, "node_modules/katex": { "version": "0.16.28", "resolved": "https://registry.npmjs.org/katex/-/katex-0.16.28.tgz", @@ -13607,15 +13582,6 @@ "url": "https://opencollective.com/unified" } }, - "node_modules/universalify": { - "version": "2.0.1", - "resolved": "https://registry.npmjs.org/universalify/-/universalify-2.0.1.tgz", - "integrity": "sha512-gptHNQghINnc/vTGIk0SOFGFNXw7JVrlRUtConJRlvaw6DuX0wO5Jeko9sWrMBhh+PsYAZ7oXAiOnf/UKogyiw==", - "license": "MIT", - "engines": { - "node": ">= 10.0.0" - } - }, "node_modules/unrs-resolver": { "version": "1.11.1", "resolved": "https://registry.npmjs.org/unrs-resolver/-/unrs-resolver-1.11.1.tgz", diff --git a/package.json b/package.json index a26398fdf..c1e8b4941 100644 --- a/package.json +++ b/package.json @@ -41,7 +41,8 @@ "prepare": "command -v husky >/dev/null 2>&1 && husky || exit 0", "quality": "npm run format:check && npm run lint && npm run lint:md && npm run docs:build && npm run test:install && npm run validate:refs && npm run validate:skills", "rebundle": "node tools/installer/bundlers/bundle-web.js rebundle", - "test": "npm run test:refs && npm run test:install && npm run lint && npm run lint:md && npm run format:check", + "test": "npm run test:refs && npm run test:install && npm run test:channels && npm run lint && npm run lint:md && npm run format:check", + "test:channels": "node test/test-installer-channels.js", "test:install": "node 
test/test-installation-components.js", "test:refs": "node test/test-file-refs-csv.js", "validate:refs": "node tools/validate-file-refs.js --strict", diff --git a/test/test-installer-channels.js b/test/test-installer-channels.js new file mode 100644 index 000000000..48fedf70e --- /dev/null +++ b/test/test-installer-channels.js @@ -0,0 +1,348 @@ +/** + * Installer Channel Resolution Tests + * + * Unit tests for the pure planning/resolution modules: + * - tools/installer/modules/channel-plan.js + * - tools/installer/modules/channel-resolver.js + * + * Neither module does I/O outside of GitHub tag lookups (which we don't + * exercise here) and semver math. All tests are deterministic. + * + * Usage: node test/test-installer-channels.js + */ + +const { + parseChannelOptions, + decideChannelForModule, + buildPlan, + orphanPinWarnings, + bundledTargetWarnings, + parsePinSpec, +} = require('../tools/installer/modules/channel-plan'); + +const { parseGitHubRepo, normalizeStableTag, classifyUpgrade, releaseNotesUrl } = require('../tools/installer/modules/channel-resolver'); + +const colors = { + reset: '', + green: '', + red: '', + yellow: '', + cyan: '', + dim: '', +}; + +let passed = 0; +let failed = 0; + +function assert(condition, testName, errorMessage = '') { + if (condition) { + console.log(`${colors.green}✓${colors.reset} ${testName}`); + passed++; + } else { + console.log(`${colors.red}✗${colors.reset} ${testName}`); + if (errorMessage) { + console.log(` ${colors.dim}${errorMessage}${colors.reset}`); + } + failed++; + } +} + +function assertEqual(actual, expected, testName) { + const ok = actual === expected; + assert(ok, testName, ok ? 
'' : `expected ${JSON.stringify(expected)}, got ${JSON.stringify(actual)}`); +} + +function section(title) { + console.log(`\n${colors.cyan}── ${title} ──${colors.reset}`); +} + +function runTests() { + // ───────────────────────────────────────────────────────────────────────── + // channel-plan.js :: parsePinSpec + // ───────────────────────────────────────────────────────────────────────── + section('channel-plan :: parsePinSpec'); + + { + const r = parsePinSpec('bmb=v1.2.3'); + assert(r && r.code === 'bmb' && r.tag === 'v1.2.3', 'valid CODE=TAG'); + } + { + const r = parsePinSpec(' cis = v0.1.0 '); + assert(r && r.code === 'cis' && r.tag === 'v0.1.0', 'trims whitespace around code and tag'); + } + assert(parsePinSpec('') === null, 'empty string returns null'); + assert(parsePinSpec('bmb') === null, 'missing = returns null'); + assert(parsePinSpec('=v1.0.0') === null, 'leading = returns null'); + assert(parsePinSpec('bmb=') === null, 'trailing = returns null'); + assert(parsePinSpec(null) === null, 'null input returns null'); + let undef; + assert(parsePinSpec(undef) === null, 'undefined input returns null'); + assert(parsePinSpec(42) === null, 'non-string input returns null'); + + // ───────────────────────────────────────────────────────────────────────── + // channel-plan.js :: parseChannelOptions + // ───────────────────────────────────────────────────────────────────────── + section('channel-plan :: parseChannelOptions'); + + { + const r = parseChannelOptions({}); + assert(r.global === null, 'empty: global is null'); + assert(r.nextSet instanceof Set && r.nextSet.size === 0, 'empty: nextSet is empty Set'); + assert(r.pins instanceof Map && r.pins.size === 0, 'empty: pins is empty Map'); + assert(Array.isArray(r.warnings) && r.warnings.length === 0, 'empty: no warnings'); + assert(r.acceptBypass === false, 'empty: acceptBypass false by default'); + } + { + const r = parseChannelOptions({ channel: 'stable' }); + assertEqual(r.global, 'stable', 
'--channel=stable sets global'); + } + { + const r = parseChannelOptions({ channel: 'NEXT' }); + assertEqual(r.global, 'next', '--channel is case-insensitive'); + } + { + const r = parseChannelOptions({ allStable: true }); + assertEqual(r.global, 'stable', '--all-stable sets global stable'); + } + { + const r = parseChannelOptions({ allNext: true }); + assertEqual(r.global, 'next', '--all-next sets global next'); + } + { + const r = parseChannelOptions({ channel: 'bogus' }); + assert(r.global === null, 'invalid --channel value is rejected (global stays null)'); + assert( + r.warnings.some((w) => w.includes("Ignoring invalid --channel value 'bogus'")), + 'invalid --channel produces a warning', + ); + } + { + // --all-stable and --all-next conflict → warning, first-wins + const r = parseChannelOptions({ allStable: true, allNext: true }); + assertEqual(r.global, 'stable', 'conflict: first flag (--all-stable) wins'); + assert( + r.warnings.some((w) => w.includes('Conflicting channel flags')), + 'conflict produces warning', + ); + } + { + const r = parseChannelOptions({ next: ['bmb', 'cis', ' '] }); + assert(r.nextSet.has('bmb') && r.nextSet.has('cis'), '--next=CODE adds to nextSet'); + assert(!r.nextSet.has(''), 'blank --next entries are skipped'); + } + { + const r = parseChannelOptions({ pin: ['bmb=v1.0.0', 'cis=v2.0.0'] }); + assertEqual(r.pins.get('bmb'), 'v1.0.0', '--pin bmb=v1.0.0 recorded'); + assertEqual(r.pins.get('cis'), 'v2.0.0', '--pin cis=v2.0.0 recorded'); + } + { + const r = parseChannelOptions({ pin: ['bmb=v1.0.0', 'bmb=v1.1.0'] }); + assertEqual(r.pins.get('bmb'), 'v1.1.0', 'duplicate --pin: last wins'); + assert( + r.warnings.some((w) => w.includes('--pin specified multiple times')), + 'duplicate --pin produces warning', + ); + } + { + const r = parseChannelOptions({ pin: ['malformed-no-equals'] }); + assert(r.pins.size === 0, 'malformed --pin is ignored'); + assert( + r.warnings.some((w) => w.includes('malformed --pin')), + 'malformed --pin warns', + 
); + } + { + const r = parseChannelOptions({ yes: true }); + assertEqual(r.acceptBypass, true, '--yes sets acceptBypass so curator-bypass prompt is auto-confirmed'); + } + { + const r = parseChannelOptions({ acceptBypass: true }); + assertEqual(r.acceptBypass, true, 'explicit acceptBypass: true honored'); + } + + // ───────────────────────────────────────────────────────────────────────── + // channel-plan.js :: decideChannelForModule (precedence) + // ───────────────────────────────────────────────────────────────────────── + section('channel-plan :: decideChannelForModule (precedence)'); + + const emptyOpts = parseChannelOptions({}); + + { + const r = decideChannelForModule({ code: 'bmb', channelOptions: emptyOpts }); + assertEqual(r.channel, 'stable', 'no signal → stable default'); + assertEqual(r.source, 'default', 'source: default'); + } + { + const r = decideChannelForModule({ code: 'bmb', channelOptions: emptyOpts, registryDefault: 'next' }); + assertEqual(r.channel, 'next', 'registry default applied when no flags'); + assertEqual(r.source, 'registry', 'source: registry'); + } + { + const r = decideChannelForModule({ code: 'bmb', channelOptions: emptyOpts, registryDefault: 'bogus' }); + assertEqual(r.channel, 'stable', 'invalid registry default ignored, falls to stable'); + } + { + const opts = parseChannelOptions({ channel: 'next' }); + const r = decideChannelForModule({ code: 'bmb', channelOptions: opts, registryDefault: 'stable' }); + assertEqual(r.channel, 'next', 'global --channel beats registry default'); + assertEqual(r.source, 'flag:--channel', 'source reflects --channel origin'); + } + { + const opts = parseChannelOptions({ channel: 'stable', next: ['bmb'] }); + const r = decideChannelForModule({ code: 'bmb', channelOptions: opts }); + assertEqual(r.channel, 'next', '--next=bmb beats --channel=stable for bmb'); + assertEqual(r.source, 'flag:--next', 'source: flag:--next'); + } + { + const opts = parseChannelOptions({ channel: 'next', pin: 
['bmb=v1.0.0'] }); + const r = decideChannelForModule({ code: 'bmb', channelOptions: opts }); + assertEqual(r.channel, 'pinned', '--pin beats --channel'); + assertEqual(r.pin, 'v1.0.0', 'pin value carried through'); + assertEqual(r.source, 'flag:--pin', 'source: flag:--pin'); + } + { + const opts = parseChannelOptions({ next: ['bmb'], pin: ['bmb=v1.0.0'] }); + const r = decideChannelForModule({ code: 'bmb', channelOptions: opts }); + assertEqual(r.channel, 'pinned', '--pin beats --next for same code'); + } + + // ───────────────────────────────────────────────────────────────────────── + // channel-plan.js :: buildPlan, orphanPinWarnings, bundledTargetWarnings + // ───────────────────────────────────────────────────────────────────────── + section('channel-plan :: buildPlan / warnings'); + + { + const opts = parseChannelOptions({ allStable: true, pin: ['bmb=v1.0.0'] }); + const plan = buildPlan({ + modules: [ + { code: 'bmb', defaultChannel: 'stable' }, + { code: 'cis', defaultChannel: 'stable' }, + ], + channelOptions: opts, + }); + assertEqual(plan.get('bmb').channel, 'pinned', 'buildPlan: bmb pinned'); + assertEqual(plan.get('cis').channel, 'stable', 'buildPlan: cis stable via global'); + } + { + const opts = parseChannelOptions({ pin: ['ghost=v1.0.0', 'bmb=v1.0.0'], next: ['gds'] }); + const warnings = orphanPinWarnings(opts, ['bmb']); + assert( + warnings.some((w) => w.includes("--pin for 'ghost'")), + 'orphanPinWarnings: flags pin for unselected module', + ); + assert( + warnings.some((w) => w.includes("--next for 'gds'")), + 'orphanPinWarnings: flags --next for unselected module', + ); + assert(!warnings.some((w) => w.includes("'bmb'")), 'orphanPinWarnings: no warning for selected module'); + } + { + const opts = parseChannelOptions({ pin: ['bmm=v1.0.0'], next: ['core'] }); + const warnings = bundledTargetWarnings(opts, ['core', 'bmm']); + assert( + warnings.some((w) => w.includes('bundled module')), + 'bundledTargetWarnings: warns bundled pin', + ); + 
assert(warnings.length === 2, 'bundledTargetWarnings: both pin and next warned'); + } + + // ───────────────────────────────────────────────────────────────────────── + // channel-resolver.js :: parseGitHubRepo + // ───────────────────────────────────────────────────────────────────────── + section('channel-resolver :: parseGitHubRepo'); + + { + const r = parseGitHubRepo('https://github.com/bmad-code-org/BMAD-METHOD'); + assert(r && r.owner === 'bmad-code-org' && r.repo === 'BMAD-METHOD', 'https URL basic'); + } + { + const r = parseGitHubRepo('https://github.com/bmad-code-org/BMAD-METHOD.git'); + assert(r && r.repo === 'BMAD-METHOD', '.git suffix stripped'); + } + { + const r = parseGitHubRepo('https://github.com/bmad-code-org/BMAD-METHOD/'); + assert(r && r.repo === 'BMAD-METHOD', 'trailing slash stripped'); + } + { + const r = parseGitHubRepo('https://github.com/org/repo/tree/main/subdir'); + assert(r && r.owner === 'org' && r.repo === 'repo', 'deep path yields owner/repo'); + } + { + const r = parseGitHubRepo('git@github.com:org/repo.git'); + assert(r && r.owner === 'org' && r.repo === 'repo', 'SSH URL parsed'); + } + assert(parseGitHubRepo('https://gitlab.com/foo/bar') === null, 'non-github URL returns null'); + assert(parseGitHubRepo('') === null, 'empty string returns null'); + assert(parseGitHubRepo(null) === null, 'null input returns null'); + assert(parseGitHubRepo(123) === null, 'non-string input returns null'); + + // ───────────────────────────────────────────────────────────────────────── + // channel-resolver.js :: normalizeStableTag + // ───────────────────────────────────────────────────────────────────────── + section('channel-resolver :: normalizeStableTag'); + + assertEqual(normalizeStableTag('v1.2.3'), '1.2.3', 'strips leading v'); + assertEqual(normalizeStableTag('1.2.3'), '1.2.3', 'bare semver accepted'); + assertEqual(normalizeStableTag('v1.2.3-alpha.1'), null, 'prerelease -alpha excluded'); + assertEqual(normalizeStableTag('v1.2.3-beta'), 
null, 'prerelease -beta excluded'); + assertEqual(normalizeStableTag('v1.2.3-rc.1'), null, 'prerelease -rc excluded'); + assertEqual(normalizeStableTag('not-a-version'), null, 'invalid string returns null'); + assertEqual(normalizeStableTag('v1.2'), null, 'incomplete semver returns null'); + assertEqual(normalizeStableTag(null), null, 'null returns null'); + assertEqual(normalizeStableTag(123), null, 'non-string returns null'); + + // ───────────────────────────────────────────────────────────────────────── + // channel-resolver.js :: classifyUpgrade + // ───────────────────────────────────────────────────────────────────────── + section('channel-resolver :: classifyUpgrade'); + + assertEqual(classifyUpgrade('v1.2.3', 'v1.2.3'), 'none', 'equal versions → none'); + assertEqual(classifyUpgrade('v1.2.3', 'v1.2.2'), 'none', 'downgrade → none'); + assertEqual(classifyUpgrade('v1.2.3', 'v1.2.4'), 'patch', 'patch bump'); + assertEqual(classifyUpgrade('v1.2.3', 'v1.3.0'), 'minor', 'minor bump'); + assertEqual(classifyUpgrade('v1.2.3', 'v2.0.0'), 'major', 'major bump'); + assertEqual(classifyUpgrade('1.2.3', '1.2.4'), 'patch', 'unprefixed versions work'); + assertEqual(classifyUpgrade('main', 'v1.2.3'), 'unknown', 'non-semver current → unknown'); + assertEqual(classifyUpgrade('v1.2.3', 'main'), 'unknown', 'non-semver next → unknown'); + assertEqual(classifyUpgrade('', ''), 'unknown', 'both empty → unknown'); + + // ───────────────────────────────────────────────────────────────────────── + // channel-resolver.js :: releaseNotesUrl + // ───────────────────────────────────────────────────────────────────────── + section('channel-resolver :: releaseNotesUrl'); + + assertEqual( + releaseNotesUrl('https://github.com/bmad-code-org/BMAD-METHOD', 'v1.2.3'), + 'https://github.com/bmad-code-org/BMAD-METHOD/releases/tag/v1.2.3', + 'builds standard release URL', + ); + assertEqual(releaseNotesUrl('https://gitlab.com/foo/bar', 'v1.0.0'), null, 'non-github repo → null'); + 
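The classifyUpgrade assertions above pin down its full contract: downgrades and equal versions both collapse to 'none', the highest bumped component names the upgrade class, and any non-semver operand yields 'unknown'. A minimal self-contained sketch that satisfies that contract — illustrative only; the shipped resolver presumably leans on the semver package rather than hand-rolled parsing:

```javascript
// Sketch of the classifyUpgrade contract exercised by the tests above.
// Hand-rolled comparison; the real module may use the semver package instead.
function classifyUpgrade(current, next) {
  // Accept "1.2.3" or "v1.2.3"; anything else (e.g. "main", "") is not semver.
  const parse = (v) => {
    if (typeof v !== 'string') return null;
    const m = /^v?(\d+)\.(\d+)\.(\d+)$/.exec(v.trim());
    return m ? m.slice(1).map(Number) : null;
  };
  const a = parse(current);
  const b = parse(next);
  if (!a || !b) return 'unknown';
  if (b[0] !== a[0]) return b[0] > a[0] ? 'major' : 'none'; // major bump or downgrade
  if (b[1] !== a[1]) return b[1] > a[1] ? 'minor' : 'none';
  if (b[2] !== a[2]) return b[2] > a[2] ? 'patch' : 'none';
  return 'none'; // equal versions
}
```

Note the asymmetry the tests encode: upgrades are classified by the highest bumped component, while downgrades map to 'none' rather than a negative class of their own.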
assertEqual(releaseNotesUrl('https://github.com/foo/bar', null), null, 'null tag → null'); + assertEqual(releaseNotesUrl('', 'v1.0.0'), null, 'empty URL → null'); + + // ───────────────────────────────────────────────────────────────────────── + // Summary + // ───────────────────────────────────────────────────────────────────────── + console.log(''); + console.log(`${colors.cyan}========================================`); + console.log('Test Results:'); + console.log(` Passed: ${colors.green}${passed}${colors.reset}`); + console.log(` Failed: ${colors.red}${failed}${colors.reset}`); + console.log(`========================================${colors.reset}\n`); + + if (failed === 0) { + console.log(`${colors.green}✨ All channel resolution tests passed!${colors.reset}\n`); + process.exit(0); + } else { + console.log(`${colors.red}❌ Some channel resolution tests failed${colors.reset}\n`); + process.exit(1); + } +} + +try { + runTests(); +} catch (error) { + console.error(`${colors.red}Test runner failed:${colors.reset}`, error.message); + console.error(error.stack); + process.exit(1); +} diff --git a/tools/installer/commands/install.js b/tools/installer/commands/install.js index c6ec46ceb..e10a0c96a 100644 --- a/tools/installer/commands/install.js +++ b/tools/installer/commands/install.js @@ -24,6 +24,19 @@ module.exports = { ['--output-folder <path>', 'Output folder path relative to project root (default: _bmad-output)'], ['--custom-source <sources>', 'Comma-separated Git URLs or local paths to install custom modules from'], ['-y, --yes', 'Accept all defaults and skip prompts where possible'], + [ + '--channel <channel>', + 'Apply channel (stable|next) to all external modules being installed. --all-stable and --all-next are aliases.', + ], + ['--all-stable', 'Alias for --channel=stable. Resolves externals to the highest stable release tag.'], + ['--all-next', 'Alias for --channel=next. Resolves externals to main HEAD.'], + ['--next <code>', 'Install module from main HEAD (next channel). 
Repeatable.', (value, prev) => [...(prev || []), value], []], + [ + '--pin <spec>', + 'Pin module to a specific tag: --pin CODE=TAG (e.g. --pin bmb=v1.7.0). Repeatable.', + (value, prev) => [...(prev || []), value], + [], + ], ], action: async (options) => { try { diff --git a/tools/installer/core/config.js b/tools/installer/core/config.js index c844e2d00..bc359fed9 100644 --- a/tools/installer/core/config.js +++ b/tools/installer/core/config.js @@ -3,7 +3,7 @@ * User input comes from either UI answers or headless CLI flags. */ class Config { - constructor({ directory, modules, ides, skipPrompts, verbose, actionType, coreConfig, moduleConfigs, quickUpdate }) { + constructor({ directory, modules, ides, skipPrompts, verbose, actionType, coreConfig, moduleConfigs, quickUpdate, channelOptions }) { this.directory = directory; this.modules = Object.freeze([...modules]); this.ides = Object.freeze([...ides]); @@ -13,6 +13,8 @@ class Config { this.coreConfig = coreConfig; this.moduleConfigs = moduleConfigs; this._quickUpdate = quickUpdate; + // channelOptions carry a Map + Set; don't deep-freeze. + this.channelOptions = channelOptions || null; Object.freeze(this); } @@ -37,6 +39,7 @@ class Config { coreConfig: userInput.coreConfig || {}, moduleConfigs: userInput.moduleConfigs || null, quickUpdate: userInput._quickUpdate || false, + channelOptions: userInput.channelOptions || null, }); } diff --git a/tools/installer/core/installer.js b/tools/installer/core/installer.js index faf0b262d..ef6e8662f 100644 --- a/tools/installer/core/installer.js +++ b/tools/installer/core/installer.js @@ -601,22 +601,40 @@ class Installer { moduleConfig: moduleConfig, installer: this, silent: true, + channelOptions: config.channelOptions, }, ); // Get display name from source module.yaml and resolve the freshest version metadata we can find locally.
- const sourcePath = await officialModules.findModuleSource(moduleName, { silent: true }); + const sourcePath = await officialModules.findModuleSource(moduleName, { + silent: true, + channelOptions: config.channelOptions, + }); const moduleInfo = sourcePath ? await officialModules.getModuleInfo(sourcePath, moduleName, '') : null; const displayName = moduleInfo?.name || moduleName; + const externalResolution = officialModules.externalModuleManager.getResolution(moduleName); + let communityResolution = null; + if (!externalResolution) { + const { CommunityModuleManager } = require('../modules/community-manager'); + communityResolution = new CommunityModuleManager().getResolution(moduleName); + } + const resolution = externalResolution || communityResolution; const cachedResolution = CustomModuleManager._resolutionCache.get(moduleName); const versionInfo = await resolveModuleVersion(moduleName, { moduleSourcePath: sourcePath, - fallbackVersion: cachedResolution?.version, + fallbackVersion: resolution?.version || cachedResolution?.version, marketplacePluginNames: cachedResolution?.pluginName ? [cachedResolution.pluginName] : [], }); - const version = versionInfo.version || ''; - addResult(displayName, 'ok', '', { moduleCode: moduleName, newVersion: version }); + // Prefer the git tag recorded by the resolution (e.g. "v1.7.0") over + // the on-disk package.json (which may be ahead of the released tag). 
+ const version = resolution?.version || versionInfo.version || ''; + addResult(displayName, 'ok', '', { + moduleCode: moduleName, + newVersion: version, + newChannel: resolution?.channel || null, + newSha: resolution?.sha || null, + }); + } } @@ -1091,12 +1109,30 @@ class Installer { let detail = ''; if (r.moduleCode && r.newVersion) { const oldVersion = preVersions.get(r.moduleCode); - if (oldVersion && oldVersion === r.newVersion) { - detail = ` (v${r.newVersion}, no change)`; + // Format a version label for display: + // "main" → "main @ <short-sha>" (next channel shows what SHA landed) + // "v1.7.0" or "1.7.0" → "v1.7.0" (prefix 'v' when missing) + // anything else (legacy strings) → as-is + const fmt = (v, sha) => { + if (typeof v !== 'string' || !v) return ''; + if (v === 'main' || v === 'HEAD') return sha ? `main @ ${sha.slice(0, 7)}` : 'main'; + if (/^v?\d+\.\d+\.\d+/.test(v)) return v.startsWith('v') ? v : `v${v}`; + return v; + }; + const newV = fmt(r.newVersion, r.newSha); + // 'main'/'HEAD' strings only identify the channel, not the commit, so + // we can't assert "no change" without comparing SHAs — and preVersions + // doesn't carry the old SHA. Render these as a refresh instead of a + // false-negative "no change".
+ const isMainLike = oldVersion === 'main' || oldVersion === 'HEAD'; + if (oldVersion && oldVersion === r.newVersion && !isMainLike) { + detail = ` (${newV}, no change)`; + } else if (oldVersion && isMainLike) { + detail = ` (${newV}, refreshed)`; } else if (oldVersion) { - detail = ` (v${oldVersion} → v${r.newVersion})`; + detail = ` (${fmt(oldVersion, r.newSha)} → ${newV})`; } else { - detail = ` (v${r.newVersion}, installed)`; + detail = ` (${newV}, installed)`; } } else if (r.detail) { detail = ` (${r.detail})`; @@ -1216,9 +1252,59 @@ class Installer { await prompts.log.warn(`Skipping ${skippedModules.length} module(s) - no source available: ${skippedModules.join(', ')}`); } + // Build channel options from the existing manifest FIRST so the config + // collector below (which triggers external-module clones via + // findModuleSource) knows each module's recorded channel and doesn't + // silently redecide it. Without this, modules previously on 'next' or + // 'pinned' would trigger a stable-channel tag lookup at config-collection + // time, burning GitHub API quota and potentially failing. + const manifestData = await this.manifest.read(bmadDir); + const channelOptions = { global: null, nextSet: new Set(), pins: new Map(), warnings: [] }; + if (manifestData?.modulesDetailed) { + const { fetchStableTags, classifyUpgrade, parseGitHubRepo } = require('../modules/channel-resolver'); + for (const entry of manifestData.modulesDetailed) { + if (!entry?.name || !entry?.channel) continue; + if (entry.channel === 'pinned' && entry.version) { + channelOptions.pins.set(entry.name, entry.version); + continue; + } + if (entry.channel === 'next') { + channelOptions.nextSet.add(entry.name); + continue; + } + // Stable: classify the available upgrade. Patches and minors fall + // through (stable default picks up the top tag). 
A major upgrade + // requires opt-in, so under quick-update's non-interactive semantics + // we pin to the current version to prevent a silent breaking jump. + if (entry.channel === 'stable' && entry.version && entry.repoUrl) { + const parsed = parseGitHubRepo(entry.repoUrl); + if (!parsed) continue; + try { + const tags = await fetchStableTags(parsed.owner, parsed.repo); + if (tags.length === 0) continue; + const topTag = tags[0].tag; + const cls = classifyUpgrade(entry.version, topTag); + if (cls === 'major') { + channelOptions.pins.set(entry.name, entry.version); + await prompts.log.warn( + `${entry.name} ${entry.version} → ${topTag} is a new major release; staying on ${entry.version}. ` + + `Run \`bmad install\` (Modify) with \`--pin ${entry.name}=${topTag}\` to accept.`, + ); + } + } catch (error) { + // Tag lookup failed (offline, rate-limited). Stay on the current + // version rather than guessing — the existing cache is already + // at that ref, so re-using it keeps the install stable. 
+ channelOptions.pins.set(entry.name, entry.version); + await prompts.log.warn(`Could not check ${entry.name} for updates (${error.message}); staying on ${entry.version}.`); + } + } + } + } + // Load existing configs and collect new fields (if any) await prompts.log.info('Checking for new configuration options...'); - const quickModules = new OfficialModules(); + const quickModules = new OfficialModules({ channelOptions }); await quickModules.loadExistingConfig(projectDir); let promptedForNewFields = false; @@ -1257,6 +1343,7 @@ class Installer { _quickUpdate: true, _preserveModules: skippedModules, _existingModules: installedModules, + channelOptions, }; await this.install(installConfig); diff --git a/tools/installer/core/manifest-generator.js b/tools/installer/core/manifest-generator.js index 206325638..eb1012036 100644 --- a/tools/installer/core/manifest-generator.js +++ b/tools/installer/core/manifest-generator.js @@ -349,7 +349,22 @@ class ManifestGenerator { npmPackage: versionInfo.npmPackage, repoUrl: versionInfo.repoUrl, }; - if (versionInfo.localPath) moduleEntry.localPath = versionInfo.localPath; + // Preserve channel/sha from the resolution (external/community/custom) + // or from the existing entry if this is a no-change rewrite. + const channel = versionInfo.channel ?? existing?.channel; + const sha = versionInfo.sha ?? existing?.sha; + if (channel) moduleEntry.channel = channel; + if (sha) moduleEntry.sha = sha; + if (versionInfo.localPath || existing?.localPath) { + moduleEntry.localPath = versionInfo.localPath || existing.localPath; + } + if (versionInfo.rawSource || existing?.rawSource) { + moduleEntry.rawSource = versionInfo.rawSource || existing.rawSource; + } + const regTag = versionInfo.registryApprovedTag ?? existing?.registryApprovedTag; + const regSha = versionInfo.registryApprovedSha ?? 
existing?.registryApprovedSha; + if (regTag) moduleEntry.registryApprovedTag = regTag; + if (regSha) moduleEntry.registryApprovedSha = regSha; updatedModules.push(moduleEntry); } diff --git a/tools/installer/core/manifest.js b/tools/installer/core/manifest.js index f20c2397f..ffe0de4ad 100644 --- a/tools/installer/core/manifest.js +++ b/tools/installer/core/manifest.js @@ -180,7 +180,12 @@ class Manifest { npmPackage: options.npmPackage || null, repoUrl: options.repoUrl || null, }; + if (options.channel) entry.channel = options.channel; + if (options.sha) entry.sha = options.sha; if (options.localPath) entry.localPath = options.localPath; + if (options.rawSource) entry.rawSource = options.rawSource; + if (options.registryApprovedTag) entry.registryApprovedTag = options.registryApprovedTag; + if (options.registryApprovedSha) entry.registryApprovedSha = options.registryApprovedSha; manifest.modules.push(entry); } else { // Module exists, update its version info @@ -192,6 +197,11 @@ class Manifest { npmPackage: options.npmPackage === undefined ? existing.npmPackage : options.npmPackage, repoUrl: options.repoUrl === undefined ? existing.repoUrl : options.repoUrl, localPath: options.localPath === undefined ? existing.localPath : options.localPath, + channel: options.channel === undefined ? existing.channel : options.channel, + sha: options.sha === undefined ? existing.sha : options.sha, + rawSource: options.rawSource === undefined ? existing.rawSource : options.rawSource, + registryApprovedTag: options.registryApprovedTag === undefined ? existing.registryApprovedTag : options.registryApprovedTag, + registryApprovedSha: options.registryApprovedSha === undefined ? 
existing.registryApprovedSha : options.registryApprovedSha, lastUpdated: new Date().toISOString(), }; } @@ -275,12 +285,17 @@ class Manifest { const moduleInfo = await extMgr.getModuleByCode(moduleName); if (moduleInfo) { + const externalResolution = extMgr.getResolution(moduleName); const versionInfo = await resolveModuleVersion(moduleName, { moduleSourcePath }); return { - version: versionInfo.version, + // Git tag recorded during install trumps the on-disk package.json + // version, so the manifest carries "v1.7.0" instead of "1.7.0". + version: externalResolution?.version || versionInfo.version, source: 'external', npmPackage: moduleInfo.npmPackage || null, repoUrl: moduleInfo.url || null, + channel: externalResolution?.channel || null, + sha: externalResolution?.sha || null, }; } @@ -289,15 +304,20 @@ class Manifest { const communityMgr = new CommunityModuleManager(); const communityInfo = await communityMgr.getModuleByCode(moduleName); if (communityInfo) { + const communityResolution = communityMgr.getResolution(moduleName); const versionInfo = await resolveModuleVersion(moduleName, { moduleSourcePath, fallbackVersion: communityInfo.version, }); return { - version: versionInfo.version || communityInfo.version, + version: communityResolution?.version || versionInfo.version || communityInfo.version, source: 'community', npmPackage: communityInfo.npmPackage || null, repoUrl: communityInfo.url || null, + channel: communityResolution?.channel || null, + sha: communityResolution?.sha || null, + registryApprovedTag: communityResolution?.registryApprovedTag || null, + registryApprovedSha: communityResolution?.registryApprovedSha || null, }; } @@ -312,12 +332,17 @@ class Manifest { fallbackVersion: resolved?.version, marketplacePluginNames: resolved?.pluginName ? [resolved.pluginName] : [], }); + const hasGitClone = !!resolved?.repoUrl; return { - version: versionInfo.version, + // Prefer the git ref we actually cloned over the package.json version. 
+ version: resolved?.cloneRef || (hasGitClone ? 'main' : versionInfo.version), source: 'custom', npmPackage: null, repoUrl: resolved?.repoUrl || null, localPath: resolved?.localPath || null, + channel: hasGitClone ? (resolved?.cloneRef ? 'pinned' : 'next') : null, + sha: resolved?.cloneSha || null, + rawSource: resolved?.rawInput || null, }; } diff --git a/tools/installer/modules/channel-plan.js b/tools/installer/modules/channel-plan.js new file mode 100644 index 000000000..97581bd35 --- /dev/null +++ b/tools/installer/modules/channel-plan.js @@ -0,0 +1,203 @@ +/** + * Channel plan: the per-module resolution decision applied at install time. + * + * A "plan entry" for a module is: + * { channel: 'stable'|'next'|'pinned', pin?: string } + * + * We build the plan from: + * 1. CLI flags (--channel / --all-* / --next=CODE / --pin CODE=TAG) + * 2. Interactive answers (the "all stable?" gate + per-module picker) + * 3. Registry defaults (default_channel from registry-fallback.yaml / official.yaml) + * 4. Hardcoded fallback 'stable' + * + * Precedence: --pin > --next=CODE > --channel (global) > registry default > 'stable'. + * + * This module is pure. No prompts, no git, no filesystem. + */ + +const VALID_CHANNELS = new Set(['stable', 'next']); + +/** + * Parse raw commander options into a structured channel options object. + * + * @param {Object} options - raw command-line options + * @returns {{ + * global: 'stable'|'next'|null, + * nextSet: Set<string>, + * pins: Map<string, string>, + * warnings: string[], + * acceptBypass: boolean + * }} + */ +function parseChannelOptions(options = {}) { + const warnings = []; + + // Global channel from --channel / --all-stable / --all-next.
+ let global = null; + const aliases = []; + if (options.channel) aliases.push({ flag: '--channel', value: normalizeChannel(options.channel, warnings, '--channel') }); + if (options.allStable) aliases.push({ flag: '--all-stable', value: 'stable' }); + if (options.allNext) aliases.push({ flag: '--all-next', value: 'next' }); + + const distinct = new Set(aliases.map((a) => a.value).filter(Boolean)); + if (distinct.size > 1) { + warnings.push( + `Conflicting channel flags: ${aliases + .filter((a) => a.value) + .map((a) => a.flag + '=' + a.value) + .join(', ')}. Using first: ${aliases.find((a) => a.value).flag}.`, + ); + } + const firstValid = aliases.find((a) => a.value); + if (firstValid) global = firstValid.value; + + // --next=CODE (repeatable) + const nextSet = new Set(); + for (const code of options.next || []) { + const trimmed = String(code).trim(); + if (!trimmed) continue; + nextSet.add(trimmed); + } + + // --pin CODE=TAG (repeatable) + const pins = new Map(); + for (const spec of options.pin || []) { + const parsed = parsePinSpec(spec); + if (!parsed) { + warnings.push(`Ignoring malformed --pin value '${spec}'. Expected CODE=TAG.`); + continue; + } + if (pins.has(parsed.code)) { + warnings.push(`--pin specified multiple times for '${parsed.code}'. Using last: ${parsed.tag}.`); + } + pins.set(parsed.code, parsed.tag); + } + + // --yes auto-confirms the community-module curator-bypass prompt so + // headless installs with --next=/--pin for a community module don't hang. + const acceptBypass = options.yes === true || options.acceptBypass === true; + + return { global, nextSet, pins, warnings, acceptBypass }; +} + +function normalizeChannel(raw, warnings, flagName) { + if (typeof raw !== 'string') return null; + const lower = raw.trim().toLowerCase(); + if (VALID_CHANNELS.has(lower)) return lower; + warnings.push(`Ignoring invalid ${flagName} value '${raw}'. 
Expected one of: stable, next.`); + return null; +} + +function parsePinSpec(spec) { + if (typeof spec !== 'string') return null; + const idx = spec.indexOf('='); + if (idx <= 0 || idx === spec.length - 1) return null; + const code = spec.slice(0, idx).trim(); + const tag = spec.slice(idx + 1).trim(); + if (!code || !tag) return null; + return { code, tag }; +} + +/** + * Build a per-module plan entry, applying precedence. + * + * @param {Object} args + * @param {string} args.code + * @param {Object} args.channelOptions - from parseChannelOptions + * @param {string} [args.registryDefault] - module's default_channel, if any + * @returns {{channel: 'stable'|'next'|'pinned', pin?: string, source: string}} + * source describes where the decision came from, for logging / debugging. + */ +function decideChannelForModule({ code, channelOptions, registryDefault }) { + const { global, nextSet, pins } = channelOptions || { nextSet: new Set(), pins: new Map() }; + + if (pins && pins.has(code)) { + return { channel: 'pinned', pin: pins.get(code), source: 'flag:--pin' }; + } + if (nextSet && nextSet.has(code)) { + return { channel: 'next', source: 'flag:--next' }; + } + if (global) { + return { channel: global, source: 'flag:--channel' }; + } + if (registryDefault && VALID_CHANNELS.has(registryDefault)) { + return { channel: registryDefault, source: 'registry' }; + } + return { channel: 'stable', source: 'default' }; +} + +/** + * Build a full channel plan map for a set of modules. + * + * @param {Object} args + * @param {Array<{code: string, defaultChannel?: string, builtIn?: boolean}>} args.modules + * Only the modules that need a channel entry; callers should filter out + * bundled modules (core/bmm) before calling. 
+ * @param {Object} args.channelOptions - from parseChannelOptions + * @returns {Map} + */ +function buildPlan({ modules, channelOptions }) { + const plan = new Map(); + for (const mod of modules || []) { + plan.set( + mod.code, + decideChannelForModule({ + code: mod.code, + channelOptions, + registryDefault: mod.defaultChannel, + }), + ); + } + return plan; +} + +/** + * Report any --pin CODE=TAG entries that don't correspond to a selected module. + * These get warned about but don't abort the install. + */ +function orphanPinWarnings(channelOptions, selectedCodes) { + const warnings = []; + const selected = new Set(selectedCodes || []); + for (const code of channelOptions?.pins?.keys() || []) { + if (!selected.has(code)) { + warnings.push(`--pin for '${code}' has no effect (module not selected).`); + } + } + for (const code of channelOptions?.nextSet || []) { + if (!selected.has(code)) { + warnings.push(`--next for '${code}' has no effect (module not selected).`); + } + } + return warnings; +} + +/** + * Warn when --pin / --next targets a bundled module (core, bmm). Those are + * shipped inside the installer binary — there's no git clone to override, so + * the flag has no effect. Users who actually want a prerelease core/bmm + * should use `npx bmad-method@next install`. 
+ */ +function bundledTargetWarnings(channelOptions, bundledCodes) { + const warnings = []; + const bundled = new Set(bundledCodes || []); + const hint = '(bundled module; use `npx bmad-method@next install` for a prerelease)'; + for (const code of channelOptions?.pins?.keys() || []) { + if (bundled.has(code)) { + warnings.push(`--pin for '${code}' has no effect ${hint}.`); + } + } + for (const code of channelOptions?.nextSet || []) { + if (bundled.has(code)) { + warnings.push(`--next for '${code}' has no effect ${hint}.`); + } + } + return warnings; +} + +module.exports = { + parseChannelOptions, + decideChannelForModule, + buildPlan, + orphanPinWarnings, + bundledTargetWarnings, + parsePinSpec, +}; diff --git a/tools/installer/modules/channel-resolver.js b/tools/installer/modules/channel-resolver.js new file mode 100644 index 000000000..c6e347f13 --- /dev/null +++ b/tools/installer/modules/channel-resolver.js @@ -0,0 +1,241 @@ +const https = require('node:https'); +const semver = require('semver'); + +/** + * Channel resolver for external and community modules. + * + * A "channel" is the resolution strategy that decides which ref of a module + * to clone when no explicit version is supplied: + * - stable: highest pure-semver git tag (excludes -alpha/-beta/-rc) + * - next: main branch HEAD + * - pinned: an explicit user-supplied tag + * + * This module is pure (no prompts, no git, no filesystem). It only talks to + * the GitHub tags API and performs semver math. Clone logic lives in the + * module managers that call resolveChannel(). + */ + +const GITHUB_API_BASE = 'https://api.github.com'; +const DEFAULT_TIMEOUT_MS = 10_000; +const USER_AGENT = 'bmad-method-installer'; + +// Per-process cache: { 'owner/repo' => string[] sorted desc } of pure-semver tags. +const tagCache = new Map(); + +/** + * Parse a GitHub repo URL into { owner, repo }. Returns null if the URL is + * not a GitHub URL the resolver can handle. 
+ */
+function parseGitHubRepo(url) {
+  if (!url || typeof url !== 'string') return null;
+  const trimmed = url
+    .trim()
+    .replace(/\.git$/, '')
+    .replace(/\/$/, '');
+
+  // https://github.com/owner/repo
+  const httpsMatch = trimmed.match(/^https?:\/\/github\.com\/([^/]+)\/([^/]+)(?:\/.*)?$/i);
+  if (httpsMatch) return { owner: httpsMatch[1], repo: httpsMatch[2] };
+
+  // git@github.com:owner/repo
+  const sshMatch = trimmed.match(/^git@github\.com:([^/]+)\/([^/]+)$/i);
+  if (sshMatch) return { owner: sshMatch[1], repo: sshMatch[2] };
+
+  return null;
+}
+
+function fetchJson(url, { timeout = DEFAULT_TIMEOUT_MS } = {}) {
+  const headers = {
+    'User-Agent': USER_AGENT,
+    Accept: 'application/vnd.github+json',
+    'X-GitHub-Api-Version': '2022-11-28',
+  };
+  if (process.env.GITHUB_TOKEN) {
+    headers.Authorization = `Bearer ${process.env.GITHUB_TOKEN}`;
+  }
+
+  return new Promise((resolve, reject) => {
+    const req = https.get(url, { headers, timeout }, (res) => {
+      let body = '';
+      res.on('data', (chunk) => (body += chunk));
+      res.on('end', () => {
+        if (res.statusCode < 200 || res.statusCode >= 300) {
+          const err = new Error(`GitHub API ${res.statusCode} for ${url}: ${body.slice(0, 200)}`);
+          err.statusCode = res.statusCode;
+          return reject(err);
+        }
+        try {
+          resolve(JSON.parse(body));
+        } catch (error) {
+          reject(new Error(`Failed to parse GitHub response: ${error.message}`));
+        }
+      });
+    });
+    req.on('error', reject);
+    req.on('timeout', () => {
+      req.destroy();
+      reject(new Error(`GitHub API request timed out: ${url}`));
+    });
+  });
+}
+
+/**
+ * Strip a leading 'v' and return a valid semver string, or null if the tag
+ * is not valid semver or is a prerelease (contains -alpha/-beta/-rc/etc.).
+ */
+function normalizeStableTag(tagName) {
+  if (typeof tagName !== 'string') return null;
+  const stripped = tagName.startsWith('v') ? tagName.slice(1) : tagName;
+  const valid = semver.valid(stripped);
+  if (!valid) return null;
+  // Exclude prereleases. semver.prerelease returns null for pure releases.
+  if (semver.prerelease(valid)) return null;
+  return valid;
+}
+
+/**
+ * Fetch pure-semver tags (highest first) from a GitHub repo.
+ * Cached per-process per owner/repo.
+ *
+ * @returns {Promise<Array<{tag: string, version: string}>>}
+ *   tag is the original ref name (e.g. "v1.7.0"), version is the cleaned
+ *   semver (e.g. "1.7.0").
+ */
+async function fetchStableTags(owner, repo, { timeout } = {}) {
+  const cacheKey = `${owner}/${repo}`;
+  if (tagCache.has(cacheKey)) return tagCache.get(cacheKey);
+
+  // GitHub returns up to 100 tags per page; one page is plenty for our modules.
+  const url = `${GITHUB_API_BASE}/repos/${owner}/${repo}/tags?per_page=100`;
+  const raw = await fetchJson(url, { timeout });
+  if (!Array.isArray(raw)) {
+    throw new TypeError(`Unexpected response from ${url}`);
+  }
+
+  const stable = [];
+  for (const entry of raw) {
+    const version = normalizeStableTag(entry?.name);
+    if (version) stable.push({ tag: entry.name, version });
+  }
+  stable.sort((a, b) => semver.rcompare(a.version, b.version));
+
+  tagCache.set(cacheKey, stable);
+  return stable;
+}
+
+/**
+ * Resolve a channel plan for a single module into a git-clonable ref.
+ *
+ * @param {Object} args
+ * @param {'stable'|'next'|'pinned'} args.channel
+ * @param {string} [args.pin] - Required when channel === 'pinned'
+ * @param {string} args.repoUrl - Module's git URL (for tag lookup)
+ * @returns {Promise<{channel, ref, version}>} where
+ *   ref: the git ref to pass to `git clone --branch`, or null for HEAD (next)
+ *   version: the resolved version string (tag name for stable/pinned, 'main' for next)
+ *
+ * Throws on:
+ *   - pinned without a pin value
+ *   - stable with no GitHub repo parseable from the URL (pass through to caller to fall back)
+ *
+ * Falls back to next-channel semantics and sets resolvedFallback=true when
+ * stable resolution turns up no tags.
+ */ +async function resolveChannel({ channel, pin, repoUrl, timeout }) { + if (channel === 'pinned') { + if (!pin) throw new Error('resolveChannel: pinned channel requires a pin value'); + return { channel: 'pinned', ref: pin, version: pin, resolvedFallback: false }; + } + + if (channel === 'next') { + return { channel: 'next', ref: null, version: 'main', resolvedFallback: false }; + } + + if (channel === 'stable') { + const parsed = parseGitHubRepo(repoUrl); + if (!parsed) { + // No GitHub URL — caller must handle by falling back to next. + return { channel: 'next', ref: null, version: 'main', resolvedFallback: true, reason: 'not-a-github-url' }; + } + + try { + const tags = await fetchStableTags(parsed.owner, parsed.repo, { timeout }); + if (tags.length === 0) { + return { channel: 'next', ref: null, version: 'main', resolvedFallback: true, reason: 'no-stable-tags' }; + } + const top = tags[0]; + return { channel: 'stable', ref: top.tag, version: top.tag, resolvedFallback: false }; + } catch (error) { + // Propagate the error; callers decide whether to fall back or abort. + error.message = `Failed to resolve stable channel for ${parsed.owner}/${parsed.repo}: ${error.message}`; + throw error; + } + } + + throw new Error(`resolveChannel: unknown channel '${channel}'`); +} + +/** + * Verify that a specific tag exists in a GitHub repo. Used to validate + * --pin values before the user sits through a long clone that then fails. + */ +async function tagExists(owner, repo, tagName, { timeout } = {}) { + const url = `${GITHUB_API_BASE}/repos/${owner}/${repo}/git/refs/tags/${encodeURIComponent(tagName)}`; + try { + await fetchJson(url, { timeout }); + return true; + } catch (error) { + if (error.statusCode === 404) return false; + throw error; + } +} + +/** + * Classify the semver delta between two versions. 
+ * - 'none' → same version (or downgrade; treated same) + * - 'patch' → same major.minor, higher patch + * - 'minor' → same major, higher minor + * - 'major' → different major + * - 'unknown' → either version is not valid semver; caller should treat as major + */ +function classifyUpgrade(currentVersion, newVersion) { + const current = semver.valid(semver.coerce(currentVersion)); + const next = semver.valid(semver.coerce(newVersion)); + if (!current || !next) return 'unknown'; + if (semver.lte(next, current)) return 'none'; + const diff = semver.diff(current, next); + if (diff === 'patch') return 'patch'; + if (diff === 'minor' || diff === 'preminor') return 'minor'; + if (diff === 'major' || diff === 'premajor') return 'major'; + // prepatch, prerelease — treat conservatively as minor (prereleases shouldn't + // normally surface here since stable channel filters them out). + return 'minor'; +} + +/** + * Build the GitHub release notes URL for a resolved tag. + * Returns null if the repo URL isn't a GitHub URL. + */ +function releaseNotesUrl(repoUrl, tag) { + const parsed = parseGitHubRepo(repoUrl); + if (!parsed || !tag) return null; + return `https://github.com/${parsed.owner}/${parsed.repo}/releases/tag/${encodeURIComponent(tag)}`; +} + +/** + * Test-only: clear the per-process tag cache. 
+ */ +function _clearTagCache() { + tagCache.clear(); +} + +module.exports = { + parseGitHubRepo, + fetchStableTags, + resolveChannel, + tagExists, + classifyUpgrade, + releaseNotesUrl, + normalizeStableTag, + _clearTagCache, +}; diff --git a/tools/installer/modules/community-manager.js b/tools/installer/modules/community-manager.js index aff54ca44..04904a7e1 100644 --- a/tools/installer/modules/community-manager.js +++ b/tools/installer/modules/community-manager.js @@ -4,6 +4,8 @@ const path = require('node:path'); const { execSync } = require('node:child_process'); const prompts = require('../prompts'); const { RegistryClient } = require('./registry-client'); +const { decideChannelForModule } = require('./channel-plan'); +const { parseGitHubRepo, tagExists } = require('./channel-resolver'); const MARKETPLACE_OWNER = 'bmad-code-org'; const MARKETPLACE_REPO = 'bmad-plugins-marketplace'; @@ -15,13 +17,29 @@ const MARKETPLACE_REF = 'main'; * Returns empty results when the registry is unreachable. * Community modules are pinned to approved SHA when set; uses HEAD otherwise. */ +function quoteShellRef(ref) { + if (typeof ref !== 'string' || !/^[\w.\-+/]+$/.test(ref)) { + throw new Error(`Unsafe ref name: ${JSON.stringify(ref)}`); + } + return `"${ref}"`; +} + class CommunityModuleManager { + // moduleCode → { channel, version, sha, registryApprovedTag, registryApprovedSha, repoUrl, bypassedCurator } + // Shared across all instances; the manifest writer often uses a fresh instance. + static _resolutions = new Map(); + constructor() { this._client = new RegistryClient(); this._cachedIndex = null; this._cachedCategories = null; } + /** Get the most recent channel resolution for a community module. 
*/ + getResolution(moduleCode) { + return CommunityModuleManager._resolutions.get(moduleCode) || null; + } + // ─── Data Loading ────────────────────────────────────────────────────────── /** @@ -196,12 +214,49 @@ class CommunityModuleManager { return await prompts.spinner(); }; - const sha = moduleInfo.approvedSha; + // ─── Resolve channel plan ────────────────────────────────────────────── + // Default community behavior (stable channel) honors the curator's + // approved SHA. --next=CODE and --pin CODE=TAG override the curator; we + // warn the user before bypassing the approved version. + const planEntry = decideChannelForModule({ + code: moduleCode, + channelOptions: options.channelOptions, + registryDefault: 'stable', + }); + + const approvedSha = moduleInfo.approvedSha; + const approvedTag = moduleInfo.approvedTag; + + let bypassedCurator = false; + if (planEntry.channel !== 'stable') { + bypassedCurator = true; + if (!silent) { + const approvedLabel = approvedTag || approvedSha || 'curator-approved version'; + await prompts.log.warn( + `WARNING: Installing '${moduleCode}' from ${ + planEntry.channel === 'pinned' ? `tag ${planEntry.pin}` : 'main HEAD' + } bypasses the curator-approved ${approvedLabel}. Proceed only if you trust this source.`, + ); + if (!options.channelOptions?.acceptBypass) { + const proceed = await prompts.confirm({ + message: `Continue installing '${moduleCode}' with curator bypass?`, + default: false, + }); + if (!proceed) { + throw new Error(`Install of community module '${moduleCode}' cancelled by user.`); + } + } + } + } + let needsDependencyInstall = false; let wasNewClone = false; if (await fs.pathExists(moduleCacheDir)) { - // Already cloned - update to latest HEAD + // Already cloned — refresh to the correct ref for the resolved channel. + // A pinned install must not reset to origin/HEAD (it would silently drift + // to main on every re-install). Stable + approvedSha is handled below + // by the curator-SHA checkout logic. 
const fetchSpinner = await createSpinner(); fetchSpinner.start(`Checking ${moduleInfo.displayName}...`); try { @@ -211,10 +266,24 @@ class CommunityModuleManager { stdio: ['ignore', 'pipe', 'pipe'], env: { ...process.env, GIT_TERMINAL_PROMPT: '0' }, }); - execSync('git reset --hard origin/HEAD', { - cwd: moduleCacheDir, - stdio: ['ignore', 'pipe', 'pipe'], - }); + if (planEntry.channel === 'pinned') { + // Fetch the pin tag specifically and check it out. + execSync(`git fetch --depth 1 origin ${quoteShellRef(planEntry.pin)} --no-tags`, { + cwd: moduleCacheDir, + stdio: ['ignore', 'pipe', 'pipe'], + env: { ...process.env, GIT_TERMINAL_PROMPT: '0' }, + }); + execSync('git checkout --quiet FETCH_HEAD', { + cwd: moduleCacheDir, + stdio: ['ignore', 'pipe', 'pipe'], + }); + } else { + // stable (approvedSha path re-checks out below) and next: track main. + execSync('git reset --hard origin/HEAD', { + cwd: moduleCacheDir, + stdio: ['ignore', 'pipe', 'pipe'], + }); + } const newRef = execSync('git rev-parse HEAD', { cwd: moduleCacheDir, stdio: 'pipe' }).toString().trim(); if (currentRef !== newRef) needsDependencyInstall = true; fetchSpinner.stop(`Verified ${moduleInfo.displayName}`); @@ -231,10 +300,17 @@ class CommunityModuleManager { const fetchSpinner = await createSpinner(); fetchSpinner.start(`Fetching ${moduleInfo.displayName}...`); try { - execSync(`git clone --depth 1 "${moduleInfo.url}" "${moduleCacheDir}"`, { - stdio: ['ignore', 'pipe', 'pipe'], - env: { ...process.env, GIT_TERMINAL_PROMPT: '0' }, - }); + if (planEntry.channel === 'pinned') { + execSync(`git clone --depth 1 --branch ${quoteShellRef(planEntry.pin)} "${moduleInfo.url}" "${moduleCacheDir}"`, { + stdio: ['ignore', 'pipe', 'pipe'], + env: { ...process.env, GIT_TERMINAL_PROMPT: '0' }, + }); + } else { + execSync(`git clone --depth 1 "${moduleInfo.url}" "${moduleCacheDir}"`, { + stdio: ['ignore', 'pipe', 'pipe'], + env: { ...process.env, GIT_TERMINAL_PROMPT: '0' }, + }); + } fetchSpinner.stop(`Fetched 
${moduleInfo.displayName}`); needsDependencyInstall = true; } catch (error) { @@ -243,18 +319,19 @@ class CommunityModuleManager { } } - // If pinned to a specific SHA, check out that exact commit. - // Refuse to install if the approved SHA cannot be reached - security requirement. - if (sha) { + // ─── Check out the resolved ref per channel ────────────────────────── + if (planEntry.channel === 'stable' && approvedSha) { + // Default path: pin to the curator-approved SHA. Refuse install if the SHA + // is unreachable (tag may have been deleted or rewritten) — security requirement. const headSha = execSync('git rev-parse HEAD', { cwd: moduleCacheDir, stdio: 'pipe' }).toString().trim(); - if (headSha !== sha) { + if (headSha !== approvedSha) { try { - execSync(`git fetch --depth 1 origin ${sha}`, { + execSync(`git fetch --depth 1 origin ${quoteShellRef(approvedSha)}`, { cwd: moduleCacheDir, stdio: ['ignore', 'pipe', 'pipe'], env: { ...process.env, GIT_TERMINAL_PROMPT: '0' }, }); - execSync(`git checkout ${sha}`, { + execSync(`git checkout ${quoteShellRef(approvedSha)}`, { cwd: moduleCacheDir, stdio: ['ignore', 'pipe', 'pipe'], }); @@ -262,12 +339,37 @@ class CommunityModuleManager { } catch { await fs.remove(moduleCacheDir); throw new Error( - `Community module '${moduleCode}' could not be pinned to its approved commit (${sha}). ` + - `Installation refused for security. The module registry entry may need updating.`, + `Community module '${moduleCode}' could not be pinned to its approved commit (${approvedSha}). ` + + `Installation refused for security. The module registry entry may need updating, ` + + `or use --next=${moduleCode} / --pin ${moduleCode}= to explicitly bypass.`, ); } } + } else if (planEntry.channel === 'stable' && !approvedSha) { + // Registry data gap: tag or SHA missing. Warn but proceed at HEAD (pre-existing behavior). 
+ if (!silent) { + await prompts.log.warn(`Community module '${moduleCode}' has no curator-approved SHA in the registry; installing from main HEAD.`); + } + } else if (planEntry.channel === 'pinned') { + // We cloned the tag directly above (via --branch), but ensure HEAD matches. + // No additional checkout needed. } + // else: 'next' channel — already at origin/HEAD from the fetch/reset above. + + // Record the resolution so the manifest writer can pick up channel/version/sha. + const installedSha = execSync('git rev-parse HEAD', { cwd: moduleCacheDir, stdio: 'pipe' }).toString().trim(); + const recordedVersion = + planEntry.channel === 'pinned' ? planEntry.pin : planEntry.channel === 'next' ? 'main' : approvedTag || installedSha.slice(0, 7); + CommunityModuleManager._resolutions.set(moduleCode, { + channel: planEntry.channel, + version: recordedVersion, + sha: installedSha, + registryApprovedTag: approvedTag || null, + registryApprovedSha: approvedSha || null, + repoUrl: moduleInfo.url, + bypassedCurator, + planSource: planEntry.source, + }); // Install dependencies if needed const packageJsonPath = path.join(moduleCacheDir, 'package.json'); diff --git a/tools/installer/modules/custom-module-manager.js b/tools/installer/modules/custom-module-manager.js index 482c4dc43..f6a26ba37 100644 --- a/tools/installer/modules/custom-module-manager.js +++ b/tools/installer/modules/custom-module-manager.js @@ -4,6 +4,13 @@ const path = require('node:path'); const { execSync } = require('node:child_process'); const prompts = require('../prompts'); +function quoteCustomRef(ref) { + if (typeof ref !== 'string' || !/^[\w.\-+/]+$/.test(ref)) { + throw new Error(`Unsafe ref name: ${JSON.stringify(ref)}`); + } + return `"${ref}"`; +} + /** * Manages custom modules installed from user-provided sources. * Supports any Git host (GitHub, GitLab, Bitbucket, self-hosted) and local file paths. 
@@ -38,8 +45,8 @@ class CustomModuleManager { }; } - const trimmed = input.trim(); - if (!trimmed) { + const trimmedRaw = input.trim(); + if (!trimmedRaw) { return { type: null, cloneUrl: null, @@ -52,8 +59,53 @@ class CustomModuleManager { }; } + // Extract optional @ suffix from the end of the input. + // Semver-valid characters: letters, digits, dot, hyphen, underscore, plus, slash. + // Raw commit SHAs are NOT supported here — `git clone --branch` can't take + // them; use --pin at the module level or check out the SHA manually. + // Only strip when the tail looks like a ref, so we don't disturb + // URLs without a version spec or the SSH protocol's `git@host:...` prefix. + let trimmed = trimmedRaw; + let versionSuffix = null; + const lastAt = trimmedRaw.lastIndexOf('@'); + // Skip if @ is part of git@github.com:... (first char cannot be stripped as version) + // and skip if @ appears before the path rather than after a ref-shaped tail. + if (lastAt > 0) { + const candidate = trimmedRaw.slice(lastAt + 1); + const before = trimmedRaw.slice(0, lastAt); + // candidate must be ref-shaped and must not itself look like a URL / SSH host + if (/^[\w.\-+/]+$/.test(candidate) && !candidate.includes(':')) { + // Avoid consuming the @ in `git@host:owner/repo` — `before` wouldn't end with a path separator + // in that case. Require that the @ comes after the host/path, not inside the auth segment. + // Rule: the @ is a version suffix only if `before` looks like a complete URL or local path. 
+ const beforeLooksLikeRepo = + before.startsWith('/') || + before.startsWith('./') || + before.startsWith('../') || + before.startsWith('~') || + /^https?:\/\//i.test(before) || + /^git@[^:]+:.+/.test(before); + if (beforeLooksLikeRepo) { + versionSuffix = candidate; + trimmed = before; + } + } + } + // Local path detection: starts with /, ./, ../, or ~ if (trimmed.startsWith('/') || trimmed.startsWith('./') || trimmed.startsWith('../') || trimmed.startsWith('~')) { + if (versionSuffix) { + return { + type: 'local', + cloneUrl: null, + subdir: null, + localPath: null, + cacheKey: null, + displayName: null, + isValid: false, + error: 'Local paths do not support @version suffixes', + }; + } return this._parseLocalPath(trimmed); } @@ -66,6 +118,8 @@ class CustomModuleManager { cloneUrl: trimmed, subdir: null, localPath: null, + version: versionSuffix || null, + rawInput: trimmedRaw, cacheKey: `${host}/${owner}/${repo}`, displayName: `${owner}/${repo}`, isValid: true, @@ -79,29 +133,47 @@ class CustomModuleManager { const [, host, owner, repo, remainder] = httpsMatch; const cloneUrl = `https://${host}/${owner}/${repo}`; let subdir = null; + let urlRef = null; // branch/tag extracted from /tree//subdir if (remainder) { // Extract subdir from deep path patterns used by various Git hosts const deepPathPatterns = [ - /^\/(?:-\/)?tree\/[^/]+\/(.+)$/, // GitHub /tree/branch/path, GitLab /-/tree/branch/path - /^\/(?:-\/)?blob\/[^/]+\/(.+)$/, // /blob/branch/path (treat same as tree) - /^\/src\/[^/]+\/(.+)$/, // Gitea/Forgejo /src/branch/path + { regex: /^\/(?:-\/)?tree\/([^/]+)\/(.+)$/, refIdx: 1, pathIdx: 2 }, // GitHub, GitLab + { regex: /^\/(?:-\/)?blob\/([^/]+)\/(.+)$/, refIdx: 1, pathIdx: 2 }, + { regex: /^\/src\/([^/]+)\/(.+)$/, refIdx: 1, pathIdx: 2 }, // Gitea/Forgejo ]; + // Also match `/tree/` with no subdir + const refOnlyPatterns = [/^\/(?:-\/)?tree\/([^/]+?)\/?$/, /^\/(?:-\/)?blob\/([^/]+?)\/?$/, /^\/src\/([^/]+?)\/?$/]; - for (const pattern of deepPathPatterns) 
{ - const match = remainder.match(pattern); + for (const p of deepPathPatterns) { + const match = remainder.match(p.regex); if (match) { - subdir = match[1].replace(/\/$/, ''); // strip trailing slash + urlRef = match[p.refIdx]; + subdir = match[p.pathIdx].replace(/\/$/, ''); break; } } + if (!subdir) { + for (const r of refOnlyPatterns) { + const match = remainder.match(r); + if (match) { + urlRef = match[1]; + break; + } + } + } } + // Precedence: explicit @version suffix > URL /tree/ path segment. + const version = versionSuffix || urlRef || null; + return { type: 'url', cloneUrl, subdir, localPath: null, + version, + rawInput: trimmedRaw, cacheKey: `${host}/${owner}/${repo}`, displayName: `${owner}/${repo}`, isValid: true, @@ -255,6 +327,10 @@ class CustomModuleManager { const silent = options.silent || false; const displayName = parsed.displayName; + // Pin override: --pin CODE=TAG resolved at module-selection time overrides + // any @version suffix present in the URL. + const effectiveVersion = options.pinOverride || parsed.version || null; + await fs.ensureDir(path.dirname(repoCacheDir)); const createSpinner = async () => { @@ -264,8 +340,23 @@ class CustomModuleManager { return await prompts.spinner(); }; + // If an existing cache exists but was cloned at a different version, re-clone. + // Tracked via .bmad-source.json's recorded version. 
if (await fs.pathExists(repoCacheDir)) { - // Update existing clone + let cachedVersion = null; + try { + const existing = await fs.readJson(path.join(repoCacheDir, '.bmad-source.json')); + cachedVersion = existing?.version || null; + } catch { + // no metadata; treat as mismatched to be safe if a version was requested + } + if ((effectiveVersion || null) !== (cachedVersion || null)) { + await fs.remove(repoCacheDir); + } + } + + if (await fs.pathExists(repoCacheDir)) { + // Update existing clone (same version as before) const fetchSpinner = await createSpinner(); fetchSpinner.start(`Updating ${displayName}...`); try { @@ -274,10 +365,25 @@ class CustomModuleManager { stdio: ['ignore', 'pipe', 'pipe'], env: { ...process.env, GIT_TERMINAL_PROMPT: '0' }, }); - execSync('git reset --hard origin/HEAD', { - cwd: repoCacheDir, - stdio: ['ignore', 'pipe', 'pipe'], - }); + if (effectiveVersion) { + // Fetch the ref as either a tag or a branch — `origin ` works + // for both, whereas `origin tag ` fails for branch refs parsed + // out of /tree//... URLs. + execSync(`git fetch --depth 1 origin ${quoteCustomRef(effectiveVersion)} --no-tags`, { + cwd: repoCacheDir, + stdio: ['ignore', 'pipe', 'pipe'], + env: { ...process.env, GIT_TERMINAL_PROMPT: '0' }, + }); + execSync(`git checkout --quiet FETCH_HEAD`, { + cwd: repoCacheDir, + stdio: ['ignore', 'pipe', 'pipe'], + }); + } else { + execSync('git reset --hard origin/HEAD', { + cwd: repoCacheDir, + stdio: ['ignore', 'pipe', 'pipe'], + }); + } fetchSpinner.stop(`Updated ${displayName}`); } catch { fetchSpinner.error(`Update failed, re-downloading ${displayName}`); @@ -287,25 +393,44 @@ class CustomModuleManager { if (!(await fs.pathExists(repoCacheDir))) { const fetchSpinner = await createSpinner(); - fetchSpinner.start(`Cloning ${displayName}...`); + fetchSpinner.start(`Cloning ${displayName}${effectiveVersion ? 
` @ ${effectiveVersion}` : ''}...`); try { - execSync(`git clone --depth 1 "${parsed.cloneUrl}" "${repoCacheDir}"`, { - stdio: ['ignore', 'pipe', 'pipe'], - env: { ...process.env, GIT_TERMINAL_PROMPT: '0' }, - }); + if (effectiveVersion) { + execSync(`git clone --depth 1 --branch ${quoteCustomRef(effectiveVersion)} "${parsed.cloneUrl}" "${repoCacheDir}"`, { + stdio: ['ignore', 'pipe', 'pipe'], + env: { ...process.env, GIT_TERMINAL_PROMPT: '0' }, + }); + } else { + execSync(`git clone --depth 1 "${parsed.cloneUrl}" "${repoCacheDir}"`, { + stdio: ['ignore', 'pipe', 'pipe'], + env: { ...process.env, GIT_TERMINAL_PROMPT: '0' }, + }); + } fetchSpinner.stop(`Cloned ${displayName}`); } catch (error_) { fetchSpinner.error(`Failed to clone ${displayName}`); - throw new Error(`Failed to clone ${parsed.cloneUrl}: ${error_.message}`); + const refSuffix = effectiveVersion ? `@${effectiveVersion}` : ''; + throw new Error(`Failed to clone ${parsed.cloneUrl}${refSuffix}: ${error_.message}`); } } + // Record the resolved SHA for the manifest writer. + let resolvedSha = null; + try { + resolvedSha = execSync('git rev-parse HEAD', { cwd: repoCacheDir, stdio: 'pipe' }).toString().trim(); + } catch { + // swallow — a non-git repo (local path) wouldn't reach here anyway + } + // Write source metadata for later URL reconstruction const metadataPath = path.join(repoCacheDir, '.bmad-source.json'); await fs.writeJson(metadataPath, { cloneUrl: parsed.cloneUrl, cacheKey: parsed.cacheKey, displayName: parsed.displayName, + version: effectiveVersion || null, + rawInput: parsed.rawInput || sourceInput, + sha: resolvedSha, clonedAt: new Date().toISOString(), }); @@ -346,10 +471,26 @@ class CustomModuleManager { const resolver = new PluginResolver(); const resolved = await resolver.resolve(repoPath, plugin); + // Read clone metadata (written by cloneRepo) so we can pick up the + // resolved git ref + SHA for manifest recording. 
+ let cloneMetadata = null; + if (sourceUrl) { + try { + cloneMetadata = await fs.readJson(path.join(repoPath, '.bmad-source.json')); + } catch { + // no metadata — local-source or legacy cache + } + } + // Stamp source info onto each resolved module for manifest tracking for (const mod of resolved) { if (sourceUrl) mod.repoUrl = sourceUrl; if (localPath) mod.localPath = localPath; + if (cloneMetadata) { + mod.cloneRef = cloneMetadata.version || null; + mod.cloneSha = cloneMetadata.sha || null; + mod.rawInput = cloneMetadata.rawInput || null; + } CustomModuleManager._resolutionCache.set(mod.code, mod); } diff --git a/tools/installer/modules/external-manager.js b/tools/installer/modules/external-manager.js index b91d353af..7d2add4fb 100644 --- a/tools/installer/modules/external-manager.js +++ b/tools/installer/modules/external-manager.js @@ -5,6 +5,46 @@ const { execSync } = require('node:child_process'); const yaml = require('yaml'); const prompts = require('../prompts'); const { RegistryClient } = require('./registry-client'); +const { resolveChannel, tagExists, parseGitHubRepo } = require('./channel-resolver'); +const { decideChannelForModule } = require('./channel-plan'); + +const VALID_CHANNELS = new Set(['stable', 'next', 'pinned']); + +function normalizeChannelName(raw) { + if (typeof raw !== 'string') return null; + const lower = raw.trim().toLowerCase(); + return VALID_CHANNELS.has(lower) ? lower : null; +} + +/** + * Conservative quoting for tag names passed to git commands. Tags are + * user-typed (--pin) or come from the GitHub API. Only allow the semver + * character class we use to tag BMad releases; anything else throws. 
+ */ +function quoteShell(ref) { + if (typeof ref !== 'string' || !/^[\w.\-+/]+$/.test(ref)) { + throw new Error(`Unsafe ref name: ${JSON.stringify(ref)}`); + } + return `"${ref}"`; +} + +async function readChannelMarker(markerPath) { + try { + if (!(await fs.pathExists(markerPath))) return null; + const content = await fs.readFile(markerPath, 'utf8'); + return JSON.parse(content); + } catch { + return null; + } +} + +async function writeChannelMarker(markerPath, data) { + try { + await fs.writeFile(markerPath, JSON.stringify({ ...data, writtenAt: new Date().toISOString() }, null, 2)); + } catch { + // Best-effort: marker is an optimization, not a correctness requirement. + } +} const MARKETPLACE_OWNER = 'bmad-code-org'; const MARKETPLACE_REPO = 'bmad-plugins-marketplace'; @@ -19,10 +59,25 @@ const FALLBACK_CONFIG_PATH = path.join(__dirname, 'registry-fallback.yaml'); * @class ExternalModuleManager */ class ExternalModuleManager { + // moduleCode → { channel, version, ref, sha, repoUrl, resolvedFallback } + // Populated when cloneExternalModule resolves a channel. Shared across all + // instances so the manifest writer (which often instantiates a fresh + // ExternalModuleManager) sees resolutions made during install. + static _resolutions = new Map(); + constructor() { this._client = new RegistryClient(); } + /** + * Get the most recent channel resolution for a module (if any). + * @param {string} moduleCode + * @returns {Object|null} + */ + getResolution(moduleCode) { + return ExternalModuleManager._resolutions.get(moduleCode) || null; + } + /** * Load the official modules registry from GitHub, falling back to the * bundled YAML file if the fetch fails. 
@@ -75,6 +130,7 @@ class ExternalModuleManager { defaultSelected: mod.default_selected === true || mod.defaultSelected === true, type: mod.type || 'bmad-org', npmPackage: mod.npm_package || mod.npmPackage || null, + defaultChannel: normalizeChannelName(mod.default_channel || mod.defaultChannel) || 'stable', builtIn: mod.built_in === true, isExternal: mod.built_in !== true, }; @@ -120,10 +176,15 @@ class ExternalModuleManager { } /** - * Clone an external module repository to cache + * Clone an external module repository to cache, resolving the requested + * channel (stable / next / pinned) to a concrete git ref. + * * @param {string} moduleCode - Code of the external module * @param {Object} options - Clone options - * @param {boolean} options.silent - Suppress spinner output + * @param {boolean} [options.silent] - Suppress spinner output + * @param {Object} [options.channelOptions] - Parsed channel flags. See + * modules/channel-plan.js. When absent, the module installs on its + * registry-declared default channel (typically 'stable'). * @returns {string} Path to the cloned repository */ async cloneExternalModule(moduleCode, options = {}) { @@ -161,38 +222,160 @@ class ExternalModuleManager { return await prompts.spinner(); }; - // Track if we need to install dependencies + // ─── Resolve channel plan ───────────────────────────────────────────── + // Post-install callers (config generation, directory setup, help catalog + // rebuild) invoke findModuleSource/cloneExternalModule without + // channelOptions just to locate the module's files. Those calls must not + // redecide the channel — the install step already chose one, cloned the + // right ref, and recorded a resolution. If we re-resolve without flags, + // we'd snap back to stable and overwrite a pinned install. 
+ const hasExplicitChannelInput = + options.channelOptions && + (options.channelOptions.global || + (options.channelOptions.nextSet && options.channelOptions.nextSet.size > 0) || + (options.channelOptions.pins && options.channelOptions.pins.size > 0)); + const existingResolution = ExternalModuleManager._resolutions.get(moduleCode); + const haveUsableCache = await fs.pathExists(moduleCacheDir); + + if (!hasExplicitChannelInput && existingResolution && haveUsableCache) { + // This is a look-up only; the module is already installed at its chosen + // ref. Skip cloning and return the cached path unchanged. + return moduleCacheDir; + } + + const planEntry = decideChannelForModule({ + code: moduleCode, + channelOptions: options.channelOptions, + registryDefault: moduleInfo.defaultChannel, + }); + + // Same-plan short-circuit: a single install calls cloneExternalModule + // several times (config collection, directory setup, help-catalog rebuild) + // with the same channelOptions. The first call resolves + clones; later + // calls with an identical plan and a valid cache should return immediately + // instead of re-running resolveChannel() and `git fetch` (slow; can fail + // on flaky networks even though the tagCache dedupes the GitHub API hit). + if (existingResolution && haveUsableCache && existingResolution.channel === planEntry.channel) { + const samePin = planEntry.channel !== 'pinned' || existingResolution.version === planEntry.pin; + if (samePin) return moduleCacheDir; + } + + let resolved; + try { + resolved = await resolveChannel({ + channel: planEntry.channel, + pin: planEntry.pin, + repoUrl: moduleInfo.url, + }); + } catch (error) { + // Tag-API failure (rate limit, transient network). 
If we already have + // a usable cache at a recorded ref, treat this as "couldn't check for + // updates" and re-use the cached version silently — that's the right + // call for an update/quick-update, since the semantics don't change + // and the user isn't worse off than before they ran this command. + const cachedMarker = await readChannelMarker(path.join(moduleCacheDir, '.bmad-channel.json')); + if (cachedMarker?.channel && (await fs.pathExists(moduleCacheDir))) { + if (!silent) { + await prompts.log.warn( + `Could not check for updates to ${moduleInfo.name} (${error.message}); using cached ${cachedMarker.version || cachedMarker.channel}.`, + ); + } + ExternalModuleManager._resolutions.set(moduleCode, { + channel: cachedMarker.channel, + version: cachedMarker.version || 'main', + ref: cachedMarker.version && cachedMarker.version !== 'main' ? cachedMarker.version : null, + sha: cachedMarker.sha, + repoUrl: moduleInfo.url, + resolvedFallback: false, + planSource: 'cached', + }); + return moduleCacheDir; + } + // No cache to fall back on — this is effectively a fresh install with + // no offline safety net. Surface a clear error with actionable guidance. + const isRateLimited = /rate limit/i.test(error.message); + const hint = isRateLimited + ? process.env.GITHUB_TOKEN + ? 'Your GITHUB_TOKEN may have expired or been rate-limited on its own budget. Try a different token or wait for the reset.' + : 'Set a GITHUB_TOKEN env var (any personal access token with public-repo read) to raise the 60-req/hour anonymous limit.' + : `Check your network connection, or rerun with \`--next=${moduleCode}\` / \`--pin ${moduleCode}=<tag>\` to skip the tag lookup.`; + throw new Error(`Could not resolve stable tag for '${moduleCode}' (${error.message}). 
${hint}`); + } + + if (resolved.resolvedFallback && !silent) { + if (resolved.reason === 'no-stable-tags') { + await prompts.log.warn(`No stable releases found for ${moduleInfo.name}; installing from main.`); + } else if (resolved.reason === 'not-a-github-url') { + await prompts.log.warn(`Cannot determine stable tags for ${moduleInfo.name} (non-GitHub URL); installing from main.`); + } + } + + // Validate pin before we burn time cloning. Best-effort: skip on non-GitHub URLs. + if (planEntry.channel === 'pinned') { + const parsed = parseGitHubRepo(moduleInfo.url); + if (parsed) { + try { + const exists = await tagExists(parsed.owner, parsed.repo, planEntry.pin); + if (!exists) { + throw new Error(`Tag '${planEntry.pin}' not found in ${parsed.owner}/${parsed.repo}.`); + } + } catch (error) { + if (error.message?.includes('not found')) throw error; + // Network hiccup on tag verification — let the clone attempt fail clearly. + } + } + } + + // ─── Clone or update cache by resolved channel ──────────────────────── + const markerPath = path.join(moduleCacheDir, '.bmad-channel.json'); + const currentMarker = await readChannelMarker(markerPath); + const needsChannelReset = currentMarker && currentMarker.channel !== resolved.channel; + let needsDependencyInstall = false; let wasNewClone = false; - // Check if already cloned + if (needsChannelReset && (await fs.pathExists(moduleCacheDir))) { + // Channel changed (e.g. user switched stable→next). Blow away and re-clone + // to avoid tangling shallow clones of different refs. + await fs.remove(moduleCacheDir); + } + if (await fs.pathExists(moduleCacheDir)) { - // Try to update if it's a git repo + // Cache exists on the right channel. Refresh the ref. 
const fetchSpinner = await createSpinner(); fetchSpinner.start(`Fetching ${moduleInfo.name}...`); try { - const currentRef = execSync('git rev-parse HEAD', { cwd: moduleCacheDir, stdio: 'pipe' }).toString().trim(); - // Fetch and reset to remote - works better with shallow clones than pull - execSync('git fetch origin --depth 1', { - cwd: moduleCacheDir, - stdio: ['ignore', 'pipe', 'pipe'], - env: { ...process.env, GIT_TERMINAL_PROMPT: '0' }, - }); - execSync('git reset --hard origin/HEAD', { - cwd: moduleCacheDir, - stdio: ['ignore', 'pipe', 'pipe'], - env: { ...process.env, GIT_TERMINAL_PROMPT: '0' }, - }); - const newRef = execSync('git rev-parse HEAD', { cwd: moduleCacheDir, stdio: 'pipe' }).toString().trim(); + const currentSha = execSync('git rev-parse HEAD', { cwd: moduleCacheDir, stdio: 'pipe' }).toString().trim(); - fetchSpinner.stop(`Fetched ${moduleInfo.name}`); - // Force dependency install if we got new code - if (currentRef !== newRef) { - needsDependencyInstall = true; + if (resolved.channel === 'next') { + execSync('git fetch origin --depth 1', { + cwd: moduleCacheDir, + stdio: ['ignore', 'pipe', 'pipe'], + env: { ...process.env, GIT_TERMINAL_PROMPT: '0' }, + }); + execSync('git reset --hard origin/HEAD', { + cwd: moduleCacheDir, + stdio: ['ignore', 'pipe', 'pipe'], + env: { ...process.env, GIT_TERMINAL_PROMPT: '0' }, + }); + } else { + // stable or pinned — fetch the specific tag and check it out. 
+ execSync(`git fetch --depth 1 origin tag ${quoteShell(resolved.ref)} --no-tags`, { + cwd: moduleCacheDir, + stdio: ['ignore', 'pipe', 'pipe'], + env: { ...process.env, GIT_TERMINAL_PROMPT: '0' }, + }); + execSync(`git checkout --quiet FETCH_HEAD`, { + cwd: moduleCacheDir, + stdio: ['ignore', 'pipe', 'pipe'], + }); } + + const newSha = execSync('git rev-parse HEAD', { cwd: moduleCacheDir, stdio: 'pipe' }).toString().trim(); + fetchSpinner.stop(`Fetched ${moduleInfo.name}`); + if (currentSha !== newSha) needsDependencyInstall = true; } catch { fetchSpinner.error(`Fetch failed, re-downloading ${moduleInfo.name}`); - // If update fails, remove and re-clone await fs.remove(moduleCacheDir); wasNewClone = true; } @@ -200,22 +383,41 @@ class ExternalModuleManager { wasNewClone = true; } - // Clone if not exists or was removed if (wasNewClone) { const fetchSpinner = await createSpinner(); fetchSpinner.start(`Fetching ${moduleInfo.name}...`); try { - execSync(`git clone --depth 1 "${moduleInfo.url}" "${moduleCacheDir}"`, { - stdio: ['ignore', 'pipe', 'pipe'], - env: { ...process.env, GIT_TERMINAL_PROMPT: '0' }, - }); + if (resolved.channel === 'next') { + execSync(`git clone --depth 1 "${moduleInfo.url}" "${moduleCacheDir}"`, { + stdio: ['ignore', 'pipe', 'pipe'], + env: { ...process.env, GIT_TERMINAL_PROMPT: '0' }, + }); + } else { + execSync(`git clone --depth 1 --branch ${quoteShell(resolved.ref)} "${moduleInfo.url}" "${moduleCacheDir}"`, { + stdio: ['ignore', 'pipe', 'pipe'], + env: { ...process.env, GIT_TERMINAL_PROMPT: '0' }, + }); + } fetchSpinner.stop(`Fetched ${moduleInfo.name}`); } catch (error) { fetchSpinner.error(`Failed to fetch ${moduleInfo.name}`); - throw new Error(`Failed to clone external module '${moduleCode}': ${error.message}`); + throw new Error(`Failed to clone external module '${moduleCode}' at ${resolved.version}: ${error.message}`); } } + // Record resolution (channel + tag + SHA) for the manifest writer to pick up. 
+ const sha = execSync('git rev-parse HEAD', { cwd: moduleCacheDir, stdio: 'pipe' }).toString().trim(); + ExternalModuleManager._resolutions.set(moduleCode, { + channel: resolved.channel, + version: resolved.version, + ref: resolved.ref, + sha, + repoUrl: moduleInfo.url, + resolvedFallback: !!resolved.resolvedFallback, + planSource: planEntry.source, + }); + await writeChannelMarker(markerPath, { channel: resolved.channel, version: resolved.version, sha }); + // Install dependencies if package.json exists const packageJsonPath = path.join(moduleCacheDir, 'package.json'); const nodeModulesPath = path.join(moduleCacheDir, 'node_modules'); diff --git a/tools/installer/modules/official-modules.js b/tools/installer/modules/official-modules.js index 49b555541..baafa7faf 100644 --- a/tools/installer/modules/official-modules.js +++ b/tools/installer/modules/official-modules.js @@ -15,6 +15,11 @@ class OfficialModules { // Tracked during interactive config collection so {directory_name} // placeholder defaults can be resolved in buildQuestion(). this.currentProjectDir = null; + // Install-time channel flag state. Set by Config.build once, then used as + // the default for every findModuleSource/cloneExternalModule call so that + // pre-install config collection and the install step agree on which ref + // to clone. 
+ this.channelOptions = options.channelOptions || null; } /** @@ -38,7 +43,7 @@ class OfficialModules { * @returns {OfficialModules} */ static async build(config, paths) { - const instance = new OfficialModules(); + const instance = new OfficialModules({ channelOptions: config.channelOptions }); // Pre-collected by UI or quickUpdate — store and load existing for path-change detection if (config.moduleConfigs) { @@ -196,6 +201,12 @@ class OfficialModules { * @returns {string|null} Path to the module source or null if not found */ async findModuleSource(moduleCode, options = {}) { + // Inherit channelOptions from the install-scoped instance when the caller + // didn't pass one explicitly. Keeps pre-install config collection and the + // actual install step looking at the same git ref. + if (options.channelOptions === undefined && this.channelOptions) { + options = { ...options, channelOptions: this.channelOptions }; + } const projectRoot = getProjectRoot(); // Check for core module (directly under src/core-skills) @@ -214,13 +225,13 @@ class OfficialModules { } } - // Check external official modules + // Check external official modules (pass channelOptions so channel plan applies) const externalSource = await this.externalModuleManager.findExternalModuleSource(moduleCode, options); if (externalSource) { return externalSource; } - // Check community modules + // Check community modules (pass channelOptions for --next/--pin overrides) const { CommunityModuleManager } = require('./community-manager'); const communityMgr = new CommunityModuleManager(); const communitySource = await communityMgr.findModuleSource(moduleCode, options); @@ -258,7 +269,10 @@ class OfficialModules { return this.installFromResolution(resolved, bmadDir, fileTrackingCallback, options); } - const sourcePath = await this.findModuleSource(moduleName, { silent: options.silent }); + const sourcePath = await this.findModuleSource(moduleName, { + silent: options.silent, + channelOptions: 
options.channelOptions, + }); const targetPath = path.join(bmadDir, moduleName); if (!sourcePath) { @@ -281,11 +295,24 @@ class OfficialModules { const manifestObj = new Manifest(); const versionInfo = await manifestObj.getModuleVersionInfo(moduleName, bmadDir, sourcePath); + // Pick up channel resolution recorded by whichever manager did the clone. + const externalResolution = this.externalModuleManager.getResolution(moduleName); + let communityResolution = null; + if (!externalResolution) { + const { CommunityModuleManager } = require('./community-manager'); + communityResolution = new CommunityModuleManager().getResolution(moduleName); + } + const resolution = externalResolution || communityResolution; + await manifestObj.addModule(bmadDir, moduleName, { - version: versionInfo.version, + version: resolution?.version || versionInfo.version, source: versionInfo.source, npmPackage: versionInfo.npmPackage, repoUrl: versionInfo.repoUrl, + channel: resolution?.channel, + sha: resolution?.sha, + registryApprovedTag: communityResolution?.registryApprovedTag, + registryApprovedSha: communityResolution?.registryApprovedSha, }); return { success: true, module: moduleName, path: targetPath, versionInfo }; @@ -333,18 +360,37 @@ class OfficialModules { await this.createModuleDirectories(resolved.code, bmadDir, options); } - // Update manifest + // Update manifest. For custom modules, derive channel from the git ref: + // cloneRef present → pinned at that ref + // cloneRef absent → next (main HEAD) + // local path → no channel concept const { Manifest } = require('../core/manifest'); const manifestObj = new Manifest(); - await manifestObj.addModule(bmadDir, resolved.code, { - version: resolved.version || null, + const hasGitClone = !!resolved.repoUrl; + const manifestEntry = { + version: resolved.cloneRef || (hasGitClone ? 
'main' : resolved.version || null), source: 'custom', npmPackage: null, repoUrl: resolved.repoUrl || null, - }); + }; + if (hasGitClone) { + manifestEntry.channel = resolved.cloneRef ? 'pinned' : 'next'; + if (resolved.cloneSha) manifestEntry.sha = resolved.cloneSha; + if (resolved.rawInput) manifestEntry.rawSource = resolved.rawInput; + } + if (resolved.localPath) manifestEntry.localPath = resolved.localPath; + await manifestObj.addModule(bmadDir, resolved.code, manifestEntry); - return { success: true, module: resolved.code, path: targetPath, versionInfo: { version: resolved.version || '' } }; + return { + success: true, + module: resolved.code, + path: targetPath, + // Match the manifestEntry.version expression above so downstream summary + // lines show the cloned ref (tag or 'main') instead of the on-disk + // package.json version for git-backed custom installs. + versionInfo: { version: resolved.cloneRef || (hasGitClone ? 'main' : resolved.version || '') }, + }; } /** diff --git a/tools/installer/modules/registry-fallback.yaml b/tools/installer/modules/registry-fallback.yaml index 29b2cc07d..52bc4b4fc 100644 --- a/tools/installer/modules/registry-fallback.yaml +++ b/tools/installer/modules/registry-fallback.yaml @@ -1,6 +1,10 @@ # Fallback module registry — used only when the BMad Marketplace repo # (bmad-code-org/bmad-plugins-marketplace) is unreachable. # The remote registry/official.yaml is the source of truth. +# +# default_channel (optional) — the install channel when the user does not +# override with --channel/--pin/--next. Valid values: stable | next. +# Omit to inherit the installer's hardcoded default (stable). 
modules: bmad-builder: @@ -12,6 +16,7 @@ modules: defaultSelected: false type: bmad-org npmPackage: bmad-builder + default_channel: stable bmad-creative-intelligence-suite: url: https://github.com/bmad-code-org/bmad-module-creative-intelligence-suite @@ -22,6 +27,7 @@ modules: defaultSelected: false type: bmad-org npmPackage: bmad-creative-intelligence-suite + default_channel: stable bmad-game-dev-studio: url: https://github.com/bmad-code-org/bmad-module-game-dev-studio.git @@ -32,6 +38,7 @@ modules: defaultSelected: false type: bmad-org npmPackage: bmad-game-dev-studio + default_channel: stable bmad-method-test-architecture-enterprise: url: https://github.com/bmad-code-org/bmad-method-test-architecture-enterprise @@ -42,3 +49,4 @@ modules: defaultSelected: false type: bmad-org npmPackage: bmad-method-test-architecture-enterprise + default_channel: stable diff --git a/tools/installer/ui.js b/tools/installer/ui.js index 26b3619c1..030ef5a3b 100644 --- a/tools/installer/ui.js +++ b/tools/installer/ui.js @@ -4,6 +4,7 @@ const fs = require('./fs-native'); const { CLIUtils } = require('./cli-utils'); const { ExternalModuleManager } = require('./modules/external-manager'); const { resolveModuleVersion } = require('./modules/version-resolver'); +const { parseChannelOptions, buildPlan, orphanPinWarnings, bundledTargetWarnings } = require('./modules/channel-plan'); const prompts = require('./prompts'); /** @@ -33,6 +34,13 @@ class UI { const messageLoader = new MessageLoader(); await messageLoader.displayStartMessage(); + // Parse channel flags (--channel/--all-*/--next=/--pin) once. Warnings + // are surfaced immediately so the user sees them before any git ops run. 
+ const channelOptions = parseChannelOptions(options); + for (const warning of channelOptions.warnings) { + await prompts.log.warn(warning); + } + // Get directory from options or prompt let confirmedDirectory; if (options.directory) { @@ -152,10 +160,38 @@ class UI { selectedModules.unshift('core'); } + // For existing installs, resolve per-module update decisions BEFORE + // we clone anything. Reads the existing manifest's recorded channel + // per module and prompts the user on available upgrades (patch/minor + // default Y, major default N). Legacy entries with no channel are + // migrated here too. Mutates channelOptions.pins to lock rejections. + await this._resolveUpdateChannels({ + bmadDir, + selectedModules, + channelOptions, + yes: options.yes || false, + }); + // Get tool selection const toolSelection = await this.promptToolSelection(confirmedDirectory, options); - const moduleConfigs = await this.collectModuleConfigs(confirmedDirectory, selectedModules, options); + const moduleConfigs = await this.collectModuleConfigs(confirmedDirectory, selectedModules, { + ...options, + channelOptions, + }); + + // Warn about --pin/--next flags that refer to modules the user didn't + // select, or that target bundled modules (core/bmm) where channel + // flags don't apply. + { + const bundledCodes = await this._bundledModuleCodes(); + for (const warning of [ + ...orphanPinWarnings(channelOptions, selectedModules), + ...bundledTargetWarnings(channelOptions, bundledCodes), + ]) { + await prompts.log.warn(warning); + } + } return { actionType: 'update', @@ -166,6 +202,7 @@ class UI { coreConfig: moduleConfigs.core || {}, moduleConfigs: moduleConfigs, skipPrompts: options.yes || false, + channelOptions, }; } } @@ -205,8 +242,31 @@ class UI { if (!selectedModules.includes('core')) { selectedModules.unshift('core'); } + + // Interactive channel gate: "Ready to install (all stable)? 
[Y/n]" + // Only shown for fresh installs with no channel flags and an external module + // selected. Non-interactive installs skip this and fall through to the + // registry default (stable) or whatever flags were supplied. + await this._interactiveChannelGate({ options, channelOptions, selectedModules }); + let toolSelection = await this.promptToolSelection(confirmedDirectory, options); - const moduleConfigs = await this.collectModuleConfigs(confirmedDirectory, selectedModules, options); + const moduleConfigs = await this.collectModuleConfigs(confirmedDirectory, selectedModules, { + ...options, + channelOptions, + }); + + // Warn about --pin/--next flags that refer to modules the user didn't + // select, or that target bundled modules (core/bmm) where channel + // flags don't apply. + { + const bundledCodes = await this._bundledModuleCodes(); + for (const warning of [ + ...orphanPinWarnings(channelOptions, selectedModules), + ...bundledTargetWarnings(channelOptions, bundledCodes), + ]) { + await prompts.log.warn(warning); + } + } return { actionType: 'install', @@ -217,6 +277,7 @@ class UI { coreConfig: moduleConfigs.core || {}, moduleConfigs: moduleConfigs, skipPrompts: options.yes || false, + channelOptions, }; } @@ -488,7 +549,7 @@ class UI { */ async collectModuleConfigs(directory, modules, options = {}) { const { OfficialModules } = require('./modules/official-modules'); - const configCollector = new OfficialModules(); + const configCollector = new OfficialModules({ channelOptions: options.channelOptions }); // Seed core config from CLI options if provided if (options.userName || options.communicationLanguage || options.documentOutputLanguage || options.outputFolder) { @@ -1563,6 +1624,349 @@ class UI { }); await prompts.log.message('Selected tools:\n' + toolLines.join('\n')); } + + /** + * Return the set of module codes the registry marks as built-in (core, bmm). + * These ship with the installer binary and have no per-module channel. 
+ */ + async _bundledModuleCodes() { + const externalManager = new ExternalModuleManager(); + try { + const modules = await externalManager.listAvailable(); + return modules.filter((m) => m.builtIn).map((m) => m.code); + } catch { + // Registry unreachable — fall back to the known bundled codes. + return ['core', 'bmm']; + } + } + + /** + * Fast-path channel gate: confirm "all stable" or open the per-module picker. + * + * Skipped when: + * - running non-interactively (--yes) + * - the user already passed channel flags (--channel / --pin / --next) + * - no externals/community modules are selected + * + * Mutates channelOptions.pins and channelOptions.nextSet to reflect picker choices. + */ + async _interactiveChannelGate({ options, channelOptions, selectedModules }) { + if (options.yes) return; + // If the user already declared their channel intent via flags, trust them + // and skip the gate. + const haveFlagIntent = channelOptions.global || channelOptions.nextSet.size > 0 || channelOptions.pins.size > 0; + if (haveFlagIntent) return; + + // Figure out which selected modules actually get a channel (externals + + // community modules). Bundled core/bmm and custom modules skip the picker. + const externalManager = new ExternalModuleManager(); + const externals = await externalManager.listAvailable(); + const externalByCode = new Map(externals.map((m) => [m.code, m])); + + const { CommunityModuleManager } = require('./modules/community-manager'); + const communityMgr = new CommunityModuleManager(); + const community = await communityMgr.listAll(); + const communityByCode = new Map(community.map((m) => [m.code, m])); + + const channelSelectable = selectedModules.filter((code) => { + const info = externalByCode.get(code) || communityByCode.get(code); + return info && !info.builtIn; + }); + if (channelSelectable.length === 0) return; + + const fastPath = await prompts.confirm({ + message: `Ready to install (all stable)? 
Pick "n" to customize channels or pin versions.`, + default: true, + }); + if (fastPath) return; // stable for all, registry default applies + + // Customize path: per-module picker. + const { fetchStableTags, parseGitHubRepo } = require('./modules/channel-resolver'); + + for (const code of channelSelectable) { + const info = externalByCode.get(code) || communityByCode.get(code); + const repoUrl = info.url; + + // Try to pre-resolve the top stable tag so we can surface it in the picker. + let stableLabel = 'stable (released version)'; + try { + const parsed = repoUrl ? parseGitHubRepo(repoUrl) : null; + if (parsed) { + const tags = await fetchStableTags(parsed.owner, parsed.repo); + if (tags.length > 0) { + stableLabel = `stable ${tags[0].tag} (released version)`; + } + } + } catch { + // fall through with the generic label + } + + const choice = await prompts.select({ + message: `${code}: choose a channel`, + choices: [ + { name: stableLabel, value: 'stable' }, + { name: 'next (main HEAD \u2014 current development)', value: 'next' }, + { name: 'pin (specific version)', value: 'pin' }, + ], + default: 'stable', + }); + + if (choice === 'next') { + channelOptions.nextSet.add(code); + } else if (choice === 'pin') { + const pinValue = await prompts.text({ + message: `Enter a version tag for '${code}' (e.g. v1.6.0):`, + validate: (value) => { + if (!value || !/^[\w.\-+/]+$/.test(String(value).trim())) { + return 'Must be a non-empty tag name (letters, digits, dots, hyphens).'; + } + }, + }); + channelOptions.pins.set(code, String(pinValue).trim()); + } + // 'stable' is the default; nothing to record. + } + } + + /** + * Resolve channel decisions for an update over an existing install. + * + * For each selected external/community module: + * - Read the recorded channel from the existing manifest. + * - On `stable`: query tags; if a newer stable exists, classify the diff + * and prompt. Patch/minor default Y; major defaults N. 
`--yes` accepts + * defaults (patches/minors) but NOT majors — a major under --yes stays + * frozen unless the user also passes `--pin CODE=NEW_TAG`. + * - On `next`: no prompt (pull HEAD). + * - On `pinned`: no prompt (stays pinned). + * - No channel recorded and `version: null`: one-time migration prompt + * ("Switch to stable / Keep on next"). + * + * Decisions that freeze the current version are applied by adding a pin to + * `channelOptions.pins` so downstream clone logic honors them. + */ + async _resolveUpdateChannels({ bmadDir, selectedModules, channelOptions, yes }) { + const { Manifest } = require('./core/manifest'); + const manifestObj = new Manifest(); + const manifest = await manifestObj.read(bmadDir); + const existingByName = new Map(); + for (const m of manifest?.modulesDetailed || []) { + if (m?.name) existingByName.set(m.name, m); + } + if (existingByName.size === 0) return; + + const externalManager = new ExternalModuleManager(); + const externals = await externalManager.listAvailable(); + const externalByCode = new Map(externals.map((m) => [m.code, m])); + + const { CommunityModuleManager } = require('./modules/community-manager'); + const communityMgr = new CommunityModuleManager(); + const community = await communityMgr.listAll(); + const communityByCode = new Map(community.map((m) => [m.code, m])); + + const { fetchStableTags, classifyUpgrade, releaseNotesUrl } = require('./modules/channel-resolver'); + const { parseGitHubRepo } = require('./modules/channel-resolver'); + + // Interactive-only: offer a one-time gate to review / switch channels for + // selected modules that are already installed. Default N so normal Modify + // flows (add/remove modules) aren't interrupted. 
+ let reviewChannels = false; + if (!yes) { + const existingWithChannel = selectedModules.filter((code) => { + const prev = existingByName.get(code); + if (!prev) return false; + const info = externalByCode.get(code) || communityByCode.get(code); + return info && !info.builtIn; + }); + if (existingWithChannel.length > 0) { + reviewChannels = await prompts.confirm({ + message: 'Review channel assignments (stable / next / pin) for your existing modules?', + default: false, + }); + } + } + + for (const code of selectedModules) { + const prev = existingByName.get(code); + if (!prev) continue; + + const info = externalByCode.get(code) || communityByCode.get(code); + if (!info) continue; + // Bundled modules (core/bmm) ship with the installer binary itself — + // their version is stapled to the CLI version, not a git tag. Skip + // tag-API lookups for them; the "upgrade" mechanism is `npx bmad@X install`. + if (info.builtIn) continue; + + const repoUrl = info.url; + const parsed = repoUrl ? parseGitHubRepo(repoUrl) : null; + + // Legacy migration: manifest carries no channel and a null/empty + // version. Offer the one-time pick between stable and next. + const recordedChannel = prev.channel || null; + const needsMigration = !recordedChannel && (prev.version == null || prev.version === ''); + if (needsMigration) { + if (yes) { + // Conservative headless default: stable. + continue; + } + const chosen = await prompts.select({ + message: `${code}: your existing install tracks the main branch. Switch to stable releases (recommended for production), or keep on main?`, + choices: [ + { name: 'Switch to stable', value: 'stable' }, + { name: 'Keep on main (next)', value: 'next' }, + ], + default: 'stable', + }); + if (chosen === 'next') channelOptions.nextSet.add(code); + continue; + } + + // Optional channel-switch offer. Fires only when the user opted in via + // the gate above. 
'keep' falls through to the existing per-channel + // logic (which runs upgrade classification for stable). Any switch + // records the new intent into channelOptions and skips upgrade prompts. + if (reviewChannels && recordedChannel) { + const switchChoices = [ + { + name: `Keep on '${recordedChannel}'${prev.version ? ` @ ${prev.version}` : ''}`, + value: 'keep', + }, + ]; + if (recordedChannel !== 'stable') { + switchChoices.push({ name: 'Switch to stable (released version)', value: 'stable' }); + } + if (recordedChannel !== 'next') { + switchChoices.push({ name: 'Switch to next (main HEAD)', value: 'next' }); + } + switchChoices.push({ name: 'Pin to a specific version tag', value: 'pin' }); + + const choice = await prompts.select({ + message: `${code} channel:`, + choices: switchChoices, + default: 'keep', + }); + + if (choice === 'next') { + channelOptions.nextSet.add(code); + continue; + } + if (choice === 'pin') { + const pinValue = await prompts.text({ + message: `Enter a version tag for '${code}' (e.g. v1.6.0):`, + validate: (value) => { + if (!value || !/^[\w.\-+/]+$/.test(String(value).trim())) { + return 'Must be a non-empty tag name (letters, digits, dots, hyphens).'; + } + }, + }); + channelOptions.pins.set(code, String(pinValue).trim()); + continue; + } + if (choice === 'stable') { + // Switch to stable: install at the top stable tag without an + // upgrade-classification prompt (the user explicitly opted in). + // Also warm the tag cache here so the actual clone step doesn't + // need a second GitHub API call (can hit rate limits). + if (parsed) { + try { + await fetchStableTags(parsed.owner, parsed.repo); + } catch { + // best effort; clone step will surface any failure + } + } + continue; + } + // 'keep' → fall through with recordedChannel below. 
+ } + + if (recordedChannel === 'pinned' || recordedChannel === 'next') { + // Respect any explicit channel intent the user already expressed via + // CLI flags (--channel / --all-* / --next=CODE / --pin CODE=TAG) or + // via the interactive review gate above. Only auto-re-assert the + // recorded channel when the user hasn't opted into anything else — + // otherwise --all-stable (or a review "switch to stable") would be + // silently clobbered by the prior channel. + const alreadyDecided = channelOptions.global || channelOptions.nextSet.has(code) || channelOptions.pins.has(code); + if (!alreadyDecided) { + if (recordedChannel === 'pinned' && prev.version) { + channelOptions.pins.set(code, prev.version); + } else if (recordedChannel === 'next') { + channelOptions.nextSet.add(code); + } + } + continue; + } + + // Stable channel: check for a newer released tag. + if (!parsed) continue; + // Respect explicit CLI intent (--pin / --next=CODE / --all-*) and any + // choice the user already made in the earlier review gate. Without this + // guard the upgrade classifier below would unconditionally call + // `channelOptions.pins.set(code, prev.version)` on decline/major-refuse/ + // fetch-error, silently clobbering the user's override. + const alreadyDecided = channelOptions.global || channelOptions.nextSet.has(code) || channelOptions.pins.has(code); + if (alreadyDecided) continue; + let tags; + try { + tags = await fetchStableTags(parsed.owner, parsed.repo); + } catch (error) { + await prompts.log.warn(`Could not check for updates on ${code} (${error.message}). Leaving at ${prev.version}.`); + if (prev.version) channelOptions.pins.set(code, prev.version); + continue; + } + if (!tags || tags.length === 0) continue; + const topTag = tags[0].tag; // e.g. 
"v1.7.0" + const currentTag = prev.version || ''; + const diffClass = classifyUpgrade(currentTag, topTag); + + if (diffClass === 'none') continue; // already at or above top tag + + const notes = releaseNotesUrl(repoUrl, topTag); + let accept; + if (diffClass === 'major') { + if (yes) { + // Major under --yes is refused by design. + await prompts.log.warn( + `${code} ${currentTag} → ${topTag} is a new major release; staying on ${currentTag}. ` + + `To accept, rerun with --pin ${code}=${topTag}.`, + ); + channelOptions.pins.set(code, currentTag); + continue; + } + accept = await prompts.confirm({ + message: + `${code} ${topTag} available — new major release (may change behavior).` + + (notes ? ` Release notes: ${notes}.` : '') + + ' Upgrade?', + default: false, + }); + } else if (diffClass === 'minor') { + if (yes) { + accept = true; + } else { + accept = await prompts.confirm({ + message: `${code} ${topTag} available (new features).` + (notes ? ` Release notes: ${notes}.` : '') + ' Upgrade?', + default: true, + }); + } + } else { + // patch + if (yes) { + accept = true; + } else { + accept = await prompts.confirm({ + message: `${code} ${topTag} available. Upgrade?`, + default: true, + }); + } + } + + if (!accept && currentTag) { + // Freeze the current version by pinning it for this run. 
+ channelOptions.pins.set(code, currentTag); + } + } + } } module.exports = { UI }; From 0533976753643750408e4d61ac357b2f6a219155 Mon Sep 17 00:00:00 2001 From: Murat K Ozcan <34237651+muratkeremozcan@users.noreply.github.com> Date: Fri, 24 Apr 2026 13:13:56 -0500 Subject: [PATCH 02/23] fix: installer live version for external modules (#2307) * resolved merge conflict * fix: addressed PR comments * fix: use git tags for installer module versions --- test/test-installation-components.js | 223 +++++++++++++++++++++++++++ tools/installer/core/manifest.js | 60 ++++--- tools/installer/ui.js | 182 +++++++++++++++++++--- 3 files changed, 421 insertions(+), 44 deletions(-) diff --git a/test/test-installation-components.js b/test/test-installation-components.js index 24cf782e5..58d6c7d8f 100644 --- a/test/test-installation-components.js +++ b/test/test-installation-components.js @@ -2622,6 +2622,229 @@ async function runTests() { } } + // --- Official module picker uses git tags for external module labels --- + { + const { UI } = require('../tools/installer/ui'); + const prompts = require('../tools/installer/prompts'); + const channelResolver = require('../tools/installer/modules/channel-resolver'); + const { ExternalModuleManager } = require('../tools/installer/modules/external-manager'); + + const ui = new UI(); + const originalOfficialListAvailable39 = OfficialModules.prototype.listAvailable; + const originalExternalListAvailable39 = ExternalModuleManager.prototype.listAvailable; + const originalAutocomplete39 = prompts.autocompleteMultiselect; + const originalSpinner39 = prompts.spinner; + const originalWarn39 = prompts.log.warn; + const originalMessage39 = prompts.log.message; + const originalResolveChannel39 = channelResolver.resolveChannel; + + const seenLabels39 = []; + const spinnerStarts39 = []; + const spinnerStops39 = []; + const warnings39 = []; + + OfficialModules.prototype.listAvailable = async function () { + return { + modules: [ + { + id: 'core', + name: 
'BMad Core Module', + description: 'always installed', + defaultSelected: true, + }, + ], + }; + }; + + ExternalModuleManager.prototype.listAvailable = async function () { + return [ + { + code: 'bmb', + name: 'BMad Builder', + description: 'Builder module', + defaultSelected: false, + builtIn: false, + url: 'https://github.com/bmad-code-org/bmad-builder', + defaultChannel: 'stable', + }, + { + code: 'tea', + name: 'Test Architect', + description: 'Test architecture module', + defaultSelected: false, + builtIn: false, + url: 'https://github.com/bmad-code-org/bmad-method-test-architecture-enterprise', + defaultChannel: 'stable', + }, + ]; + }; + + channelResolver.resolveChannel = async function ({ repoUrl, channel }) { + if (channel !== 'stable') { + return { channel, version: channel === 'next' ? 'main' : 'unknown' }; + } + if (repoUrl.includes('bmad-builder')) { + return { channel: 'stable', version: 'v1.7.0', ref: 'v1.7.0', resolvedFallback: false }; + } + if (repoUrl.includes('bmad-method-test-architecture-enterprise')) { + return { channel: 'stable', version: 'v1.15.0', ref: 'v1.15.0', resolvedFallback: false }; + } + throw new Error(`unexpected repo ${repoUrl}`); + }; + + prompts.autocompleteMultiselect = async (options) => { + seenLabels39.push(...options.options.map((opt) => opt.label)); + return ['core']; + }; + prompts.spinner = async () => ({ + start(message) { + spinnerStarts39.push(message); + }, + stop(message) { + spinnerStops39.push(message); + }, + error(message) { + spinnerStops39.push(`error:${message}`); + }, + }); + prompts.log.warn = async (message) => { + warnings39.push(message); + }; + prompts.log.message = async () => {}; + + try { + await ui._selectOfficialModules( + new Set(['bmb']), + new Map([ + ['bmb', '1.1.0'], + ['core', '6.2.0'], + ]), + { global: null, nextSet: new Set(), pins: new Map(), warnings: [] }, + ); + + assert( + seenLabels39.includes('BMad Builder (v1.1.0 → v1.7.0)'), + 'official module picker shows installed-to-latest 
arrow from git tags', + ); + assert(seenLabels39.includes('Test Architect (v1.15.0)'), 'official module picker shows latest git-tag version for fresh installs'); + assert( + spinnerStarts39.includes('Checking latest module versions...'), + 'official module picker wraps external lookups in a single spinner', + ); + assert(spinnerStops39.includes('Checked latest module versions.'), 'official module picker stops the version-check spinner'); + assert(warnings39.length === 0, 'official module picker does not warn when tag lookups succeed'); + } finally { + OfficialModules.prototype.listAvailable = originalOfficialListAvailable39; + ExternalModuleManager.prototype.listAvailable = originalExternalListAvailable39; + prompts.autocompleteMultiselect = originalAutocomplete39; + prompts.spinner = originalSpinner39; + prompts.log.warn = originalWarn39; + prompts.log.message = originalMessage39; + channelResolver.resolveChannel = originalResolveChannel39; + } + } + + // --- Official module picker warns and falls back to cached versions when tag lookups fail --- + { + const { UI } = require('../tools/installer/ui'); + const prompts = require('../tools/installer/prompts'); + const channelResolver = require('../tools/installer/modules/channel-resolver'); + const { ExternalModuleManager } = require('../tools/installer/modules/external-manager'); + + const ui = new UI(); + const tempCacheDir39 = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-picker-cache-')); + const priorCacheEnv39 = process.env.BMAD_EXTERNAL_MODULES_CACHE; + const originalOfficialListAvailable39 = OfficialModules.prototype.listAvailable; + const originalExternalListAvailable39 = ExternalModuleManager.prototype.listAvailable; + const originalAutocomplete39 = prompts.autocompleteMultiselect; + const originalSpinner39 = prompts.spinner; + const originalWarn39 = prompts.log.warn; + const originalMessage39 = prompts.log.message; + const originalResolveChannel39 = channelResolver.resolveChannel; + + const seenLabels39 = 
[]; + const warnings39 = []; + + process.env.BMAD_EXTERNAL_MODULES_CACHE = tempCacheDir39; + await fs.ensureDir(path.join(tempCacheDir39, 'bmb')); + await fs.writeFile( + path.join(tempCacheDir39, 'bmb', 'package.json'), + JSON.stringify({ name: 'bmad-builder', version: '1.7.0' }, null, 2) + '\n', + ); + + OfficialModules.prototype.listAvailable = async function () { + return { + modules: [ + { + id: 'core', + name: 'BMad Core Module', + description: 'always installed', + defaultSelected: true, + }, + ], + }; + }; + + ExternalModuleManager.prototype.listAvailable = async function () { + return [ + { + code: 'bmb', + name: 'BMad Builder', + description: 'Builder module', + defaultSelected: false, + builtIn: false, + url: 'https://github.com/bmad-code-org/bmad-builder', + defaultChannel: 'stable', + }, + ]; + }; + + channelResolver.resolveChannel = async function () { + throw new Error('tag lookup unavailable'); + }; + + prompts.autocompleteMultiselect = async (options) => { + seenLabels39.push(...options.options.map((opt) => opt.label)); + return ['core']; + }; + prompts.spinner = async () => ({ + start() {}, + stop() {}, + error() {}, + }); + prompts.log.warn = async (message) => { + warnings39.push(message); + }; + prompts.log.message = async () => {}; + + try { + await ui._selectOfficialModules(new Set(), new Map(), { global: null, nextSet: new Set(), pins: new Map(), warnings: [] }); + + assert( + seenLabels39.includes('BMad Builder (v1.7.0)'), + 'official module picker falls back to cached/local versions when tag lookup fails', + ); + assert( + warnings39.includes('Could not check latest module versions; showing cached/local versions.'), + 'official module picker warns once when all latest-version lookups fail', + ); + } finally { + OfficialModules.prototype.listAvailable = originalOfficialListAvailable39; + ExternalModuleManager.prototype.listAvailable = originalExternalListAvailable39; + prompts.autocompleteMultiselect = originalAutocomplete39; + 
prompts.spinner = originalSpinner39; + prompts.log.warn = originalWarn39; + prompts.log.message = originalMessage39; + channelResolver.resolveChannel = originalResolveChannel39; + if (priorCacheEnv39 === undefined) { + delete process.env.BMAD_EXTERNAL_MODULES_CACHE; + } else { + process.env.BMAD_EXTERNAL_MODULES_CACHE = priorCacheEnv39; + } + await fs.remove(tempCacheDir39).catch(() => {}); + } + } + console.log(''); // ============================================================ diff --git a/tools/installer/core/manifest.js b/tools/installer/core/manifest.js index ffe0de4ad..d604bf2fe 100644 --- a/tools/installer/core/manifest.js +++ b/tools/installer/core/manifest.js @@ -1,9 +1,20 @@ const path = require('node:path'); +const https = require('node:https'); +const { execFile } = require('node:child_process'); +const { promisify } = require('node:util'); const fs = require('../fs-native'); const crypto = require('node:crypto'); const { resolveModuleVersion } = require('../modules/version-resolver'); const prompts = require('../prompts'); +const execFileAsync = promisify(execFile); +const NPM_LOOKUP_TIMEOUT_MS = 10_000; +const NPM_PACKAGE_NAME_PATTERN = /^(?:@[a-z0-9][a-z0-9._~-]*\/)?[a-z0-9][a-z0-9._~-]*$/; + +function isValidNpmPackageName(packageName) { + return typeof packageName === 'string' && NPM_PACKAGE_NAME_PATTERN.test(packageName); +} + class Manifest { /** * Create a new manifest @@ -362,35 +373,40 @@ class Manifest { * @returns {string|null} Latest version or null */ async fetchNpmVersion(packageName) { - try { - const https = require('node:https'); - const { execSync } = require('node:child_process'); + if (!isValidNpmPackageName(packageName)) { + return null; + } + try { // Try using npm view first (more reliable) try { - const result = execSync(`npm view ${packageName} version`, { + const { stdout } = await execFileAsync('npm', ['view', packageName, 'version'], { encoding: 'utf8', - stdio: 'pipe', - timeout: 10_000, + timeout: NPM_LOOKUP_TIMEOUT_MS, 
}); - return result.trim(); + return stdout.trim(); } catch { // Fallback to npm registry API - return new Promise((resolve, reject) => { - https - .get(`https://registry.npmjs.org/${packageName}`, (res) => { - let data = ''; - res.on('data', (chunk) => (data += chunk)); - res.on('end', () => { - try { - const pkg = JSON.parse(data); - resolve(pkg['dist-tags']?.latest || pkg.version || null); - } catch { - resolve(null); - } - }); - }) - .on('error', () => resolve(null)); + return new Promise((resolve) => { + const request = https.get(`https://registry.npmjs.org/${encodeURIComponent(packageName)}`, (res) => { + let data = ''; + res.on('data', (chunk) => (data += chunk)); + res.on('end', () => { + try { + const pkg = JSON.parse(data); + resolve(pkg['dist-tags']?.latest || pkg.version || null); + } catch { + resolve(null); + } + }); + }); + + request.setTimeout(NPM_LOOKUP_TIMEOUT_MS, () => { + request.destroy(); + resolve(null); + }); + + request.on('error', () => resolve(null)); }); } } catch { diff --git a/tools/installer/ui.js b/tools/installer/ui.js index 030ef5a3b..f2f6e31c1 100644 --- a/tools/installer/ui.js +++ b/tools/installer/ui.js @@ -1,20 +1,107 @@ const path = require('node:path'); const os = require('node:os'); +const semver = require('semver'); const fs = require('./fs-native'); const { CLIUtils } = require('./cli-utils'); const { ExternalModuleManager } = require('./modules/external-manager'); const { resolveModuleVersion } = require('./modules/version-resolver'); -const { parseChannelOptions, buildPlan, orphanPinWarnings, bundledTargetWarnings } = require('./modules/channel-plan'); +const { Manifest } = require('./core/manifest'); +const { + parseChannelOptions, + buildPlan, + decideChannelForModule, + orphanPinWarnings, + bundledTargetWarnings, +} = require('./modules/channel-plan'); +const channelResolver = require('./modules/channel-resolver'); const prompts = require('./prompts'); +const manifest = new Manifest(); + /** - * Read a module version 
from the freshest local metadata available. - * @param {string} moduleCode - Module code (e.g., 'core', 'bmm', 'cis') - * @returns {string} Version string or empty string + * Format a resolved version for display in installer labels. + * Semver-like values are normalized to a single leading "v". + * @param {string|null|undefined} version + * @returns {string} */ -async function getModuleVersion(moduleCode) { +function formatDisplayVersion(version) { + const trimmed = typeof version === 'string' ? version.trim() : ''; + if (!trimmed) return ''; + + const normalized = semver.valid(semver.coerce(trimmed)); + if (normalized) { + return `v${normalized}`; + } + + return trimmed; +} + +/** + * Build the display label for a module, showing an upgrade arrow when an + * installed semver differs from the latest resolvable semver. + * @param {string} name + * @param {string} latestVersion + * @param {string} installedVersion + * @returns {string} + */ +function buildModuleLabel(name, latestVersion, installedVersion = '') { + const latestDisplay = formatDisplayVersion(latestVersion); + if (!latestDisplay) return name; + + const installedDisplay = formatDisplayVersion(installedVersion); + const latestSemver = semver.valid(semver.coerce(latestVersion || '')); + const installedSemver = semver.valid(semver.coerce(installedVersion || '')); + + if (installedDisplay && latestSemver && installedSemver && semver.neq(installedSemver, latestSemver)) { + return `${name} (${installedDisplay} → ${latestDisplay})`; + } + + return `${name} (${latestDisplay})`; +} + +/** + * Resolve the version to show for a module picker entry. External modules use + * the same channel/tag resolver as installs; bundled modules fall back to local + * source metadata. 
+ * @param {string} moduleCode - Module code (e.g., 'core', 'bmm', 'cis') + * @param {Object} options + * @param {string|null} [options.repoUrl] - Module repository URL for tag resolution + * @param {string|null} [options.registryDefault] - Registry default channel + * @param {Object|null} [options.channelOptions] - Parsed installer channel options + * @returns {Promise<{version: string, lookupAttempted: boolean, lookupSucceeded: boolean}>} + */ +async function getModuleVersion(moduleCode, { repoUrl = null, registryDefault = null, channelOptions = null } = {}) { + if (repoUrl) { + const plan = decideChannelForModule({ + code: moduleCode, + channelOptions, + registryDefault, + }); + + try { + const resolved = await channelResolver.resolveChannel({ + channel: plan.channel, + pin: plan.pin, + repoUrl, + }); + if (resolved?.version) { + return { + version: resolved.version, + lookupAttempted: plan.channel === 'stable', + lookupSucceeded: true, + }; + } + } catch { + // Fall back to local metadata when tag resolution is unavailable. 
+ } + } + const versionInfo = await resolveModuleVersion(moduleCode); - return versionInfo.version || ''; + return { + version: versionInfo.version || '', + lookupAttempted: !!repoUrl, + lookupSucceeded: false, + }; } /** @@ -122,7 +209,7 @@ class UI { // Return early with modify configuration if (actionType === 'update') { // Get existing installation info - const { installedModuleIds } = await this.getExistingInstallation(confirmedDirectory); + const { installedModuleIds, installedModuleVersions } = await this.getExistingInstallation(confirmedDirectory); await prompts.log.message(`Found existing modules: ${[...installedModuleIds].join(', ')}`); @@ -144,7 +231,7 @@ class UI { `Non-interactive mode (--yes): using default modules (installed + defaults): ${selectedModules.join(', ')}`, ); } else { - selectedModules = await this.selectAllModules(installedModuleIds); + selectedModules = await this.selectAllModules(installedModuleIds, installedModuleVersions, channelOptions); } // Resolve custom sources from --custom-source flag @@ -208,7 +295,7 @@ class UI { } // This section is only for new installations (update returns early above) - const { installedModuleIds } = await this.getExistingInstallation(confirmedDirectory); + const { installedModuleIds, installedModuleVersions } = await this.getExistingInstallation(confirmedDirectory); // Unified module selection - all modules in one grouped multiselect let selectedModules; @@ -227,7 +314,7 @@ class UI { selectedModules = await this.getDefaultModules(installedModuleIds); await prompts.log.info(`Using default modules (--yes flag): ${selectedModules.join(', ')}`); } else { - selectedModules = await this.selectAllModules(installedModuleIds); + selectedModules = await this.selectAllModules(installedModuleIds, installedModuleVersions, channelOptions); } // Resolve custom sources from --custom-source flag @@ -526,7 +613,7 @@ class UI { /** * Get existing installation info and installed modules * @param {string} directory - 
Installation directory - * @returns {Object} Object with existingInstall, installedModuleIds, and bmadDir + * @returns {Object} Object with existingInstall, installedModuleIds, installedModuleVersions, and bmadDir */ async getExistingInstallation(directory) { const { ExistingInstall } = require('./core/existing-install'); @@ -535,8 +622,26 @@ class UI { const { bmadDir } = await installer.findBmadDir(directory); const existingInstall = await ExistingInstall.detect(bmadDir); const installedModuleIds = new Set(existingInstall.moduleIds); + const installedModuleVersions = new Map(); + const manifestModules = await manifest.getAllModuleVersions(bmadDir); - return { existingInstall, installedModuleIds, bmadDir }; + for (const module of manifestModules) { + if (module?.name && module.version) { + installedModuleVersions.set(module.name, module.version); + } + } + + for (const module of existingInstall.modules) { + if (module?.id && module.version && module.version !== 'unknown' && !installedModuleVersions.has(module.id)) { + installedModuleVersions.set(module.id, module.version); + } + } + + if (existingInstall.hasCore && existingInstall.version && !installedModuleVersions.has('core')) { + installedModuleVersions.set('core', existingInstall.version); + } + + return { existingInstall, installedModuleIds, installedModuleVersions, bmadDir }; } /** @@ -617,11 +722,13 @@ class UI { /** * Select all modules across three tiers: official, community, and custom URL. 
* @param {Set} installedModuleIds - Currently installed module IDs + * @param {Map} installedModuleVersions - Installed module versions from the local manifest + * @param {Object|null} channelOptions - Parsed installer channel options * @returns {Array} Selected module codes (excluding core) */ - async selectAllModules(installedModuleIds = new Set()) { + async selectAllModules(installedModuleIds = new Set(), installedModuleVersions = new Map(), channelOptions = null) { // Phase 1: Official modules - const officialSelected = await this._selectOfficialModules(installedModuleIds); + const officialSelected = await this._selectOfficialModules(installedModuleIds, installedModuleVersions, channelOptions); // Determine which installed modules are NOT official (community or custom). // These must be preserved even if the user declines to browse community/custom. @@ -657,9 +764,11 @@ class UI { * Select official modules using autocompleteMultiselect. * Extracted from the original selectAllModules - unchanged behavior. 
* @param {Set} installedModuleIds - Currently installed module IDs + * @param {Map} installedModuleVersions - Installed module versions from the local manifest + * @param {Object|null} channelOptions - Parsed installer channel options * @returns {Array} Selected official module codes */ - async _selectOfficialModules(installedModuleIds = new Set()) { + async _selectOfficialModules(installedModuleIds = new Set(), installedModuleVersions = new Map(), channelOptions = null) { // Built-in modules (core, bmm) come from local source, not the registry const { OfficialModules } = require('./modules/official-modules'); const builtInModules = (await new OfficialModules().listAvailable()).modules || []; @@ -672,15 +781,18 @@ class UI { const initialValues = []; const lockedValues = ['core']; - const buildModuleEntry = async (code, name, description, isDefault) => { + const buildModuleEntry = async (code, name, description, isDefault, repoUrl = null, registryDefault = null) => { const isInstalled = installedModuleIds.has(code); - const version = await getModuleVersion(code); - const label = version ? 
`${name} (v${version})` : name; + const installedVersion = installedModuleVersions.get(code) || ''; + const versionState = await getModuleVersion(code, { repoUrl, registryDefault, channelOptions }); + const label = buildModuleLabel(name, versionState.version, installedVersion); return { label, value: code, hint: description, selected: isInstalled || isDefault, + lookupAttempted: versionState.lookupAttempted, + lookupSucceeded: versionState.lookupSucceeded, }; }; @@ -697,12 +809,38 @@ class UI { } // Add external registry modules (skip built-in duplicates) - for (const mod of registryModules) { - if (mod.builtIn || builtInCodes.has(mod.code)) continue; - const entry = await buildModuleEntry(mod.code, mod.name, mod.description, mod.defaultSelected); + const externalRegistryModules = registryModules.filter((mod) => !mod.builtIn && !builtInCodes.has(mod.code)); + let externalRegistryEntries = []; + if (externalRegistryModules.length > 0) { + const spinner = await prompts.spinner(); + spinner.start('Checking latest module versions...'); + + externalRegistryEntries = await Promise.all( + externalRegistryModules.map(async (mod) => ({ + code: mod.code, + entry: await buildModuleEntry( + mod.code, + mod.name, + mod.description, + mod.defaultSelected, + mod.url || null, + mod.defaultChannel || null, + ), + })), + ); + + spinner.stop('Checked latest module versions.'); + + const attemptedLookups = externalRegistryEntries.filter(({ entry }) => entry.lookupAttempted).length; + const successfulLookups = externalRegistryEntries.filter(({ entry }) => entry.lookupSucceeded).length; + if (attemptedLookups > 0 && successfulLookups === 0) { + await prompts.log.warn('Could not check latest module versions; showing cached/local versions.'); + } + } + for (const { code, entry } of externalRegistryEntries) { allOptions.push({ label: entry.label, value: entry.value, hint: entry.hint }); if (entry.selected) { - initialValues.push(mod.code); + initialValues.push(code); } } From 
e7a213ed07e4b676130af12386428abc4f8c794a Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?J=C3=A9r=C3=B4me=20Revillard?= Date: Sat, 25 Apr 2026 00:45:25 +0200 Subject: [PATCH 03/23] feat: uniform customize.toml support across all BMM workflows (#2308) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * feat: extend customize.toml to all 6 developer-execution workflows (#2303) Add uniform customization support to dev-story, code-review, sprint-planning, sprint-status, quick-dev, and checkpoint-preview, matching the same 4 extension points (activation_steps_prepend, activation_steps_append, persistent_facts, on_complete) already available on 17 BMM workflows from PR #2287. - Create customize.toml for each workflow - Add 6-step activation block to SKILL.md (merge workflow.md content in, delete workflow.md per PR #2287 pattern) - Wire on_complete at terminal steps (inline for XML workflows, ## On Complete section for step-file workflows) - Fix pre-existing step number reference in dev-story (Step 6 → 9) * fix: correct goto step="6" → step="9" in dev-story The XML goto at line 203 still pointed to step 6 ("Author comprehensive tests") instead of step 9 ("Story completion and mark for review"), which is the actual completion gate. This was the same class of pre-existing bug fixed in the text (M-1) but missed in the XML action. 
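The four extension points this patch wires in can be sketched with a hypothetical team override — the file name, facts, and step text below are illustrative, not taken from any shipped module:

```toml
# Hypothetical team override at _bmad/custom/bmad-dev-story.toml.
# Merged over the skill's customize.toml per the BMad structural rules:
# scalars override, arrays append.
[workflow]

# Appends to the skill's (empty) default prepend list.
activation_steps_prepend = [
  "Verify the working tree is clean before modifying story files.",
]

# Appends to the skill defaults; `file:` entries are loaded as facts.
persistent_facts = [
  "All stories must include testable acceptance criteria.",
  "file:{project-root}/docs/standards.md",
]

# Scalar: this override wins over the skill's empty default.
on_complete = "Summarize applied changes as changelog-style bullets."
```

A personal `{skill-name}.user.toml` alongside it would merge on top of both layers under the same rules.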
--------- Co-authored-by: Brian --- .../bmad-checkpoint-preview/SKILL.md | 59 ++- .../bmad-checkpoint-preview/customize.toml | 41 ++ .../bmad-checkpoint-preview/step-05-wrapup.md | 6 + .../bmad-code-review/SKILL.md | 86 +++- .../bmad-code-review/customize.toml | 41 ++ .../bmad-code-review/steps/step-04-present.md | 6 + .../bmad-code-review/workflow.md | 55 -- .../4-implementation/bmad-dev-story/SKILL.md | 481 +++++++++++++++++- .../bmad-dev-story/customize.toml | 41 ++ .../bmad-dev-story/workflow.md | 450 ---------------- .../4-implementation/bmad-quick-dev/SKILL.md | 107 +++- .../bmad-quick-dev/customize.toml | 41 ++ .../bmad-quick-dev/step-05-present.md | 6 + .../bmad-quick-dev/step-oneshot.md | 6 + .../bmad-quick-dev/workflow.md | 76 --- .../bmad-sprint-planning/SKILL.md | 295 ++++++++++- .../bmad-sprint-planning/customize.toml | 41 ++ .../bmad-sprint-planning/workflow.md | 263 ---------- .../bmad-sprint-status/SKILL.md | 293 ++++++++++- .../bmad-sprint-status/customize.toml | 41 ++ .../bmad-sprint-status/workflow.md | 261 ---------- 21 files changed, 1576 insertions(+), 1120 deletions(-) create mode 100644 src/bmm-skills/4-implementation/bmad-checkpoint-preview/customize.toml create mode 100644 src/bmm-skills/4-implementation/bmad-code-review/customize.toml delete mode 100644 src/bmm-skills/4-implementation/bmad-code-review/workflow.md create mode 100644 src/bmm-skills/4-implementation/bmad-dev-story/customize.toml delete mode 100644 src/bmm-skills/4-implementation/bmad-dev-story/workflow.md create mode 100644 src/bmm-skills/4-implementation/bmad-quick-dev/customize.toml delete mode 100644 src/bmm-skills/4-implementation/bmad-quick-dev/workflow.md create mode 100644 src/bmm-skills/4-implementation/bmad-sprint-planning/customize.toml delete mode 100644 src/bmm-skills/4-implementation/bmad-sprint-planning/workflow.md create mode 100644 src/bmm-skills/4-implementation/bmad-sprint-status/customize.toml delete mode 100644 
src/bmm-skills/4-implementation/bmad-sprint-status/workflow.md diff --git a/src/bmm-skills/4-implementation/bmad-checkpoint-preview/SKILL.md b/src/bmm-skills/4-implementation/bmad-checkpoint-preview/SKILL.md index 2cfd04420..101dcf2bc 100644 --- a/src/bmm-skills/4-implementation/bmad-checkpoint-preview/SKILL.md +++ b/src/bmm-skills/4-implementation/bmad-checkpoint-preview/SKILL.md @@ -7,7 +7,55 @@ description: 'LLM-assisted human-in-the-loop review. Make sense of a change, foc **Goal:** Guide a human through reviewing a change — from purpose and context into details. -You are assisting the user in reviewing a change. +**Your Role:** You are assisting the user in reviewing a change. + +## Conventions + +- Bare paths (e.g. `step-01-orientation.md`) resolve from the skill root. +- `{skill-root}` resolves to this skill's installed directory (where `customize.toml` lives). +- `{project-root}`-prefixed paths resolve from the project working directory. +- `{skill-name}` resolves to the skill directory's basename. + +## On Activation + +### Step 1: Resolve the Workflow Block + +Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow` + +**If the script fails**, resolve the `workflow` block yourself by reading these three files in base → team → user order and applying the same structural merge rules as the resolver: + +1. `{skill-root}/customize.toml` — defaults +2. `{project-root}/_bmad/custom/{skill-name}.toml` — team overrides +3. `{project-root}/_bmad/custom/{skill-name}.user.toml` — personal overrides + +Any missing file is skipped. Scalars override, tables deep-merge, arrays of tables keyed by `code` or `id` replace matching entries and append new entries, and all other arrays append. + +### Step 2: Execute Prepend Steps + +Execute each entry in `{workflow.activation_steps_prepend}` in order before proceeding. 
+ +### Step 3: Load Persistent Facts + +Treat every entry in `{workflow.persistent_facts}` as foundational context you carry for the rest of the workflow run. Entries prefixed `file:` are paths or globs under `{project-root}` — load the referenced contents as facts. All other entries are facts verbatim. + +### Step 4: Load Config + +Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve: + +- `implementation_artifacts` +- `planning_artifacts` +- `communication_language` +- `document_output_language` + +### Step 5: Greet the User + +Greet the user, speaking in `{communication_language}`. + +### Step 6: Execute Append Steps + +Execute each entry in `{workflow.activation_steps_append}` in order. + +Activation is complete. Begin the workflow below. ## Global Step Rules (apply to every step) @@ -15,15 +63,6 @@ You are assisting the user in reviewing a change. - **Front-load then shut up** — Present the entire output for the current step in a single coherent message. Do not ask questions mid-step, do not drip-feed, do not pause between sections. - **Language** — Speak in `{communication_language}`. Write any file output in `{document_output_language}`. -## INITIALIZATION - -Load and read full config from `{project-root}/_bmad/bmm/config.yaml` and resolve: - -- `implementation_artifacts` -- `planning_artifacts` -- `communication_language` -- `document_output_language` - ## FIRST STEP Read fully and follow `./step-01-orientation.md` to begin. diff --git a/src/bmm-skills/4-implementation/bmad-checkpoint-preview/customize.toml b/src/bmm-skills/4-implementation/bmad-checkpoint-preview/customize.toml new file mode 100644 index 000000000..2f9b034ac --- /dev/null +++ b/src/bmm-skills/4-implementation/bmad-checkpoint-preview/customize.toml @@ -0,0 +1,41 @@ +# DO NOT EDIT -- overwritten on every update. +# +# Workflow customization surface for bmad-checkpoint-preview. Mirrors the +# agent customization shape under the [workflow] namespace. 
+ +[workflow] + +# --- Configurable below. Overrides merge per BMad structural rules: --- +# scalars: override wins • arrays (persistent_facts, activation_steps_*): append +# arrays-of-tables with `code`/`id`: replace matching items, append new ones. + +# Steps to run before the standard activation (config load, greet). +# Overrides append. Use for pre-flight loads, compliance checks, etc. + +activation_steps_prepend = [] + +# Steps to run after greet but before the workflow begins. +# Overrides append. Use for context-heavy setup that should happen +# once the user has been acknowledged. + +activation_steps_append = [] + +# Persistent facts the workflow keeps in mind for the whole run +# (standards, compliance constraints, stylistic guardrails). +# Distinct from the runtime memory sidecar — these are static context +# loaded on activation. Overrides append. +# +# Each entry is either: +# - a literal sentence, e.g. "All stories must include testable acceptance criteria." +# - a file reference prefixed with `file:`, e.g. "file:{project-root}/docs/standards.md" +# (glob patterns are supported; the file's contents are loaded and treated as facts). + +persistent_facts = [ + "file:{project-root}/**/project-context.md", +] + +# Scalar: executed when the workflow reaches its final step, +# after the review decision (approve/rework/discuss) is made. Override wins. +# Leave empty for no custom post-completion behavior. + +on_complete = "" diff --git a/src/bmm-skills/4-implementation/bmad-checkpoint-preview/step-05-wrapup.md b/src/bmm-skills/4-implementation/bmad-checkpoint-preview/step-05-wrapup.md index 5f293d56c..346a1c535 100644 --- a/src/bmm-skills/4-implementation/bmad-checkpoint-preview/step-05-wrapup.md +++ b/src/bmm-skills/4-implementation/bmad-checkpoint-preview/step-05-wrapup.md @@ -22,3 +22,9 @@ HALT — do not proceed until the user makes their choice. - **Approve**: Acknowledge briefly. 
If the human wants to patch something before shipping, help apply the fix interactively. If reviewing a PR, offer to approve via `gh pr review --approve` — but confirm with the human before executing, since this is a visible action on a shared resource. - **Rework**: Ask what went wrong — was it the approach, the spec, or the implementation? Help the human decide on next steps (revert commit, open an issue, revise the spec, etc.). Help draft specific, actionable feedback tied to `path:line` locations if the change is a PR from someone else. - **Discuss**: Open conversation — answer questions, explore concerns, dig into any aspect. After discussion, return to the decision prompt above. + +## On Complete + +Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete` + +If the resolved `workflow.on_complete` is non-empty, follow it as the final terminal instruction before exiting. diff --git a/src/bmm-skills/4-implementation/bmad-code-review/SKILL.md b/src/bmm-skills/4-implementation/bmad-code-review/SKILL.md index 32f020af7..44223f11a 100644 --- a/src/bmm-skills/4-implementation/bmad-code-review/SKILL.md +++ b/src/bmm-skills/4-implementation/bmad-code-review/SKILL.md @@ -3,4 +3,88 @@ name: bmad-code-review description: 'Review code changes adversarially using parallel review layers (Blind Hunter, Edge Case Hunter, Acceptance Auditor) with structured triage into actionable categories. Use when the user says "run code review" or "review this code"' --- -Follow the instructions in ./workflow.md. +# Code Review Workflow + +**Goal:** Review code changes adversarially using parallel review layers and structured triage. + +**Your Role:** You are an elite code reviewer. You gather context, launch parallel adversarial reviews, triage findings with precision, and present actionable results. No noise, no filler. + +## Conventions + +- Bare paths (e.g. `checklist.md`) resolve from the skill root. 
+- `{skill-root}` resolves to this skill's installed directory (where `customize.toml` lives). +- `{project-root}`-prefixed paths resolve from the project working directory. +- `{skill-name}` resolves to the skill directory's basename. + +## On Activation + +### Step 1: Resolve the Workflow Block + +Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow` + +**If the script fails**, resolve the `workflow` block yourself by reading these three files in base → team → user order and applying the same structural merge rules as the resolver: + +1. `{skill-root}/customize.toml` — defaults +2. `{project-root}/_bmad/custom/{skill-name}.toml` — team overrides +3. `{project-root}/_bmad/custom/{skill-name}.user.toml` — personal overrides + +Any missing file is skipped. Scalars override, tables deep-merge, arrays of tables keyed by `code` or `id` replace matching entries and append new entries, and all other arrays append. + +### Step 2: Execute Prepend Steps + +Execute each entry in `{workflow.activation_steps_prepend}` in order before proceeding. + +### Step 3: Load Persistent Facts + +Treat every entry in `{workflow.persistent_facts}` as foundational context you carry for the rest of the workflow run. Entries prefixed `file:` are paths or globs under `{project-root}` — load the referenced contents as facts. All other entries are facts verbatim. 
+ +### Step 4: Load Config + +Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve: + +- `project_name`, `planning_artifacts`, `implementation_artifacts`, `user_name` +- `communication_language`, `document_output_language`, `user_skill_level` +- `date` as system-generated current datetime +- `sprint_status` = `{implementation_artifacts}/sprint-status.yaml` +- `project_context` = `**/project-context.md` (load if exists) +- CLAUDE.md / memory files (load if exist) +- YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}` + +### Step 5: Greet the User + +Greet `{user_name}`, speaking in `{communication_language}`. + +### Step 6: Execute Append Steps + +Execute each entry in `{workflow.activation_steps_append}` in order. + +Activation is complete. Begin the workflow below. + +## WORKFLOW ARCHITECTURE + +This uses **step-file architecture** for disciplined execution: + +- **Micro-file Design**: Each step is self-contained and followed exactly +- **Just-In-Time Loading**: Only load the current step file +- **Sequential Enforcement**: Complete steps in order, no skipping +- **State Tracking**: Persist progress via in-memory variables +- **Append-Only Building**: Build artifacts incrementally + +### Step Processing Rules + +1. **READ COMPLETELY**: Read the entire step file before acting +2. **FOLLOW SEQUENCE**: Execute sections in order +3. **WAIT FOR INPUT**: Halt at checkpoints and wait for human +4. 
**LOAD NEXT**: When directed, read fully and follow the next step file + +### Critical Rules (NO EXCEPTIONS) + +- **NEVER** load multiple step files simultaneously +- **ALWAYS** read entire step file before execution +- **NEVER** skip steps or optimize the sequence +- **ALWAYS** follow the exact instructions in the step file +- **ALWAYS** halt at checkpoints and wait for human input + +## FIRST STEP + +Read fully and follow: `./steps/step-01-gather-context.md` diff --git a/src/bmm-skills/4-implementation/bmad-code-review/customize.toml b/src/bmm-skills/4-implementation/bmad-code-review/customize.toml new file mode 100644 index 000000000..26ba792f9 --- /dev/null +++ b/src/bmm-skills/4-implementation/bmad-code-review/customize.toml @@ -0,0 +1,41 @@ +# DO NOT EDIT -- overwritten on every update. +# +# Workflow customization surface for bmad-code-review. Mirrors the +# agent customization shape under the [workflow] namespace. + +[workflow] + +# --- Configurable below. Overrides merge per BMad structural rules: --- +# scalars: override wins • arrays (persistent_facts, activation_steps_*): append +# arrays-of-tables with `code`/`id`: replace matching items, append new ones. + +# Steps to run before the standard activation (config load, greet). +# Overrides append. Use for pre-flight loads, compliance checks, etc. + +activation_steps_prepend = [] + +# Steps to run after greet but before the workflow begins. +# Overrides append. Use for context-heavy setup that should happen +# once the user has been acknowledged. + +activation_steps_append = [] + +# Persistent facts the workflow keeps in mind for the whole run +# (standards, compliance constraints, stylistic guardrails). +# Distinct from the runtime memory sidecar — these are static context +# loaded on activation. Overrides append. +# +# Each entry is either: +# - a literal sentence, e.g. "All stories must include testable acceptance criteria." +# - a file reference prefixed with `file:`, e.g. 
"file:{project-root}/docs/standards.md" +# (glob patterns are supported; the file's contents are loaded and treated as facts). + +persistent_facts = [ + "file:{project-root}/**/project-context.md", +] + +# Scalar: executed when the workflow reaches its final step, +# after review findings are presented and sprint status is synced. Override wins. +# Leave empty for no custom post-completion behavior. + +on_complete = "" diff --git a/src/bmm-skills/4-implementation/bmad-code-review/steps/step-04-present.md b/src/bmm-skills/4-implementation/bmad-code-review/steps/step-04-present.md index 2a6a70e44..1697c769c 100644 --- a/src/bmm-skills/4-implementation/bmad-code-review/steps/step-04-present.md +++ b/src/bmm-skills/4-implementation/bmad-code-review/steps/step-04-present.md @@ -124,3 +124,9 @@ Present the user with follow-up options: > 3. **Done** — end the workflow **HALT** — I am waiting for your choice. Do not proceed until the user selects an option. + +## On Complete + +Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete` + +If the resolved `workflow.on_complete` is non-empty, follow it as the final terminal instruction before exiting. diff --git a/src/bmm-skills/4-implementation/bmad-code-review/workflow.md b/src/bmm-skills/4-implementation/bmad-code-review/workflow.md deleted file mode 100644 index 2cad2d870..000000000 --- a/src/bmm-skills/4-implementation/bmad-code-review/workflow.md +++ /dev/null @@ -1,55 +0,0 @@ ---- -main_config: '{project-root}/_bmad/bmm/config.yaml' ---- - -# Code Review Workflow - -**Goal:** Review code changes adversarially using parallel review layers and structured triage. - -**Your Role:** You are an elite code reviewer. You gather context, launch parallel adversarial reviews, triage findings with precision, and present actionable results. No noise, no filler. 
- - -## WORKFLOW ARCHITECTURE - -This uses **step-file architecture** for disciplined execution: - -- **Micro-file Design**: Each step is self-contained and followed exactly -- **Just-In-Time Loading**: Only load the current step file -- **Sequential Enforcement**: Complete steps in order, no skipping -- **State Tracking**: Persist progress via in-memory variables -- **Append-Only Building**: Build artifacts incrementally - -### Step Processing Rules - -1. **READ COMPLETELY**: Read the entire step file before acting -2. **FOLLOW SEQUENCE**: Execute sections in order -3. **WAIT FOR INPUT**: Halt at checkpoints and wait for human -4. **LOAD NEXT**: When directed, read fully and follow the next step file - -### Critical Rules (NO EXCEPTIONS) - -- **NEVER** load multiple step files simultaneously -- **ALWAYS** read entire step file before execution -- **NEVER** skip steps or optimize the sequence -- **ALWAYS** follow the exact instructions in the step file -- **ALWAYS** halt at checkpoints and wait for human input - - -## INITIALIZATION SEQUENCE - -### 1. Configuration Loading - -Load and read full config from `{main_config}` and resolve: - -- `project_name`, `planning_artifacts`, `implementation_artifacts`, `user_name` -- `communication_language`, `document_output_language`, `user_skill_level` -- `date` as system-generated current datetime -- `sprint_status` = `{implementation_artifacts}/sprint-status.yaml` -- `project_context` = `**/project-context.md` (load if exists) -- CLAUDE.md / memory files (load if exist) - -YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`. - -### 2. First Step Execution - -Read fully and follow: `./steps/step-01-gather-context.md` to begin the workflow. 
diff --git a/src/bmm-skills/4-implementation/bmad-dev-story/SKILL.md b/src/bmm-skills/4-implementation/bmad-dev-story/SKILL.md index 0eb505cc7..218b234ab 100644 --- a/src/bmm-skills/4-implementation/bmad-dev-story/SKILL.md +++ b/src/bmm-skills/4-implementation/bmad-dev-story/SKILL.md @@ -3,4 +3,483 @@ name: bmad-dev-story description: 'Execute story implementation following a context-filled story spec file. Use when the user says "dev this story [story file]" or "implement the next story in the sprint plan"' --- -Follow the instructions in ./workflow.md. +# Dev Story Workflow + +**Goal:** Execute story implementation following a context-filled story spec file. + +**Your Role:** Developer implementing the story. +- Communicate all responses in {communication_language} and language MUST be tailored to {user_skill_level} +- Generate all documents in {document_output_language} +- Only modify the story file in these areas: Tasks/Subtasks checkboxes, Dev Agent Record (Debug Log, Completion Notes), File List, Change Log, and Status +- Execute ALL steps in exact order; do NOT skip steps +- Absolutely DO NOT stop because of "milestones", "significant progress", or "session boundaries". Continue in a single execution until the story is COMPLETE (all ACs satisfied and all tasks/subtasks checked) UNLESS a HALT condition is triggered or the USER gives other instruction. +- Do NOT schedule a "next session" or request review pauses unless a HALT condition applies. Only Step 9 decides completion. +- User skill level ({user_skill_level}) affects conversation style ONLY, not code updates. + +## Conventions + +- Bare paths (e.g. `steps/step-01-init.md`) resolve from the skill root. +- `{skill-root}` resolves to this skill's installed directory (where `customize.toml` lives). +- `{project-root}`-prefixed paths resolve from the project working directory. +- `{skill-name}` resolves to the skill directory's basename.
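+These conventions amount to a simple substitute-and-join pass; a minimal sketch under that assumption (the helper name is hypothetical):
+
+```python
+from pathlib import Path
+
+def resolve_path(raw, skill_root, project_root):
+    """Apply the path conventions: expand {skill-root}, {project-root},
+    and {skill-name}; anything still relative is a bare path and
+    resolves from the skill root."""
+    path = raw.replace("{skill-root}", str(skill_root))
+    path = path.replace("{project-root}", str(project_root))
+    path = path.replace("{skill-name}", Path(skill_root).name)
+    if not Path(path).is_absolute():
+        path = str(Path(skill_root) / path)  # bare paths: skill-root relative
+    return path
+```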
+ +## On Activation + +### Step 1: Resolve the Workflow Block + +Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow` + +**If the script fails**, resolve the `workflow` block yourself by reading these three files in base → team → user order and applying the same structural merge rules as the resolver: + +1. `{skill-root}/customize.toml` — defaults +2. `{project-root}/_bmad/custom/{skill-name}.toml` — team overrides +3. `{project-root}/_bmad/custom/{skill-name}.user.toml` — personal overrides + +Any missing file is skipped. Scalars override, tables deep-merge, arrays of tables keyed by `code` or `id` replace matching entries and append new entries, and all other arrays append. + +### Step 2: Execute Prepend Steps + +Execute each entry in `{workflow.activation_steps_prepend}` in order before proceeding. + +### Step 3: Load Persistent Facts + +Treat every entry in `{workflow.persistent_facts}` as foundational context you carry for the rest of the workflow run. Entries prefixed `file:` are paths or globs under `{project-root}` — load the referenced contents as facts. All other entries are facts verbatim. + +### Step 4: Load Config + +Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve: + +- `project_name`, `user_name` +- `communication_language`, `document_output_language` +- `user_skill_level` +- `implementation_artifacts` +- `date` as system-generated current datetime + +### Step 5: Greet the User + +Greet `{user_name}`, speaking in `{communication_language}`. + +### Step 6: Execute Append Steps + +Execute each entry in `{workflow.activation_steps_append}` in order. + +Activation is complete. Begin the workflow below. 
+ +## Paths + +- `story_file` = `` (explicit story path; auto-discovered if empty) +- `sprint_status` = `{implementation_artifacts}/sprint-status.yaml` + +## Execution + + + Communicate all responses in {communication_language} and language MUST be tailored to {user_skill_level} + Generate all documents in {document_output_language} + Only modify the story file in these areas: Tasks/Subtasks checkboxes, Dev Agent Record (Debug Log, Completion Notes), File List, + Change Log, and Status + Execute ALL steps in exact order; do NOT skip steps + Absolutely DO NOT stop because of "milestones", "significant progress", or "session boundaries". Continue in a single execution + until the story is COMPLETE (all ACs satisfied and all tasks/subtasks checked) UNLESS a HALT condition is triggered or the USER gives + other instruction. + Do NOT schedule a "next session" or request review pauses unless a HALT condition applies. Only Step 9 decides completion. + User skill level ({user_skill_level}) affects conversation style ONLY, not code updates. + + + + Use {{story_path}} directly + Read COMPLETE story file + Extract story_key from filename or metadata + + + + + + MUST read COMPLETE sprint-status.yaml file from start to end to preserve order + Load the FULL file: {{sprint_status}} + Read ALL lines from beginning to end - do not skip any content + Parse the development_status section completely to understand story order + + Find the FIRST story (by reading in order from top to bottom) where: + - Key matches pattern: number-number-name (e.g., "1-2-user-auth") + - NOT an epic key (epic-X) or retrospective (epic-X-retrospective) + - Status value equals "ready-for-dev" + + + + 📋 No ready-for-dev stories found in sprint-status.yaml + + **Current Sprint Status:** {{sprint_status_summary}} + + **What would you like to do?** + 1. Run `create-story` to create next story from epics with comprehensive context + 2. 
Run `*validate-create-story` to improve existing stories before development (recommended quality check) + 3. Specify a particular story file to develop (provide full path) + 4. Check {{sprint_status}} file to see current sprint status + + 💡 **Tip:** Stories in `ready-for-dev` may not have been validated. Consider running `validate-create-story` first for a quality + check. + + Choose option [1], [2], [3], or [4], or specify story file path: + + + HALT - Run create-story to create next story + + + + HALT - Run validate-create-story to improve existing stories + + + + Provide the story file path to develop: + Store user-provided story path as {{story_path}} + + + + + Loading {{sprint_status}} for detailed status review... + Display detailed sprint status analysis + HALT - User can review sprint status and provide story path + + + + Store user-provided story path as {{story_path}} + + + + + + + + Search {implementation_artifacts} for stories directly + Find stories with "ready-for-dev" status in files + Look for story files matching pattern: *-*-*.md + Read each candidate story file to check Status section + + + 📋 No ready-for-dev stories found + + **Available Options:** + 1. Run `create-story` to create next story from epics with comprehensive context + 2. Run `*validate-create-story` to improve existing stories + 3. Specify which story to develop + + What would you like to do? Choose option [1], [2], or [3]: + + + HALT - Run create-story to create next story + + + + HALT - Run validate-create-story to improve existing stories + + + + It's unclear what story you want developed. 
Please provide the full path to the story file: + Store user-provided story path as {{story_path}} + Continue with provided story file + + + + + Use discovered story file and extract story_key + + + + Store the found story_key (e.g., "1-2-user-authentication") for later status updates + Find matching story file in {implementation_artifacts} using story_key pattern: {{story_key}}.md + Read COMPLETE story file from discovered path + + + + Parse sections: Story, Acceptance Criteria, Tasks/Subtasks, Dev Notes, Dev Agent Record, File List, Change Log, Status + + Load comprehensive context from story file's Dev Notes section + Extract developer guidance from Dev Notes: architecture requirements, previous learnings, technical specifications + Use enhanced story context to inform implementation decisions and approaches + + Identify first incomplete task (unchecked [ ]) in Tasks/Subtasks + + + Completion sequence + + HALT: "Cannot develop story without access to story file" + ASK user to clarify or HALT + + + + Load all available context to inform implementation + + Load {project_context} for coding standards and project-wide patterns (if exists) + Parse sections: Story, Acceptance Criteria, Tasks/Subtasks, Dev Notes, Dev Agent Record, File List, Change Log, Status + Load comprehensive context from story file's Dev Notes section + Extract developer guidance from Dev Notes: architecture requirements, previous learnings, technical specifications + Use enhanced story context to inform implementation decisions and approaches + ✅ **Context Loaded** + Story and project context available for implementation + + + + + Determine if this is a fresh start or continuation after code review + + Check if "Senior Developer Review (AI)" section exists in the story file + Check if "Review Follow-ups (AI)" subsection exists under Tasks/Subtasks + + + Set review_continuation = true + Extract from "Senior Developer Review (AI)" section: + - Review outcome (Approve/Changes Requested/Blocked) + - 
Review date + - Total action items with checkboxes (count checked vs unchecked) + - Severity breakdown (High/Med/Low counts) + + Count unchecked [ ] review follow-up tasks in "Review Follow-ups (AI)" subsection + Store list of unchecked review items as {{pending_review_items}} + + ⏯️ **Resuming Story After Code Review** ({{review_date}}) + + **Review Outcome:** {{review_outcome}} + **Action Items:** {{unchecked_review_count}} remaining to address + **Priorities:** {{high_count}} High, {{med_count}} Medium, {{low_count}} Low + + **Strategy:** Will prioritize review follow-up tasks (marked [AI-Review]) before continuing with regular tasks. + + + + + Set review_continuation = false + Set {{pending_review_items}} = empty + + 🚀 **Starting Fresh Implementation** + + Story: {{story_key}} + Story Status: {{current_status}} + First incomplete task: {{first_task_description}} + + + + + + + Load the FULL file: {{sprint_status}} + Read all development_status entries to find {{story_key}} + Get current status value for development_status[{{story_key}}] + + + Update the story in the sprint status report to = "in-progress" + Update last_updated field to current date + 🚀 Starting work on story {{story_key}} + Status updated: ready-for-dev → in-progress + + + + + ⏯️ Resuming work on story {{story_key}} + Story is already marked in-progress + + + + + ⚠️ Unexpected story status: {{current_status}} + Expected ready-for-dev or in-progress. Continuing anyway... 
+ + + + Store {{current_sprint_status}} for later use + + + + ℹ️ No sprint status file exists - story progress will be tracked in story file only + Set {{current_sprint_status}} = "no-sprint-tracking" + + + + + FOLLOW THE STORY FILE TASKS/SUBTASKS SEQUENCE EXACTLY AS WRITTEN - NO DEVIATION + + Review the current task/subtask from the story file - this is your authoritative implementation guide + Plan implementation following red-green-refactor cycle + + + Write FAILING tests first for the task/subtask functionality + Confirm tests fail before implementation - this validates test correctness + + + Implement MINIMAL code to make tests pass + Run tests to confirm they now pass + Handle error conditions and edge cases as specified in task/subtask + + + Improve code structure while keeping tests green + Ensure code follows architecture patterns and coding standards from Dev Notes + + Document technical approach and decisions in Dev Agent Record → Implementation Plan + + HALT: "Additional dependencies need user approval" + HALT and request guidance + HALT: "Cannot proceed without necessary configuration files" + + NEVER implement anything not mapped to a specific task/subtask in the story file + NEVER proceed to next task until current task/subtask is complete AND tests pass + Execute continuously without pausing until all tasks/subtasks are complete or explicit HALT condition + Do NOT propose to pause for review until Step 9 completion gates are satisfied + + + + Create unit tests for business logic and core functionality introduced/changed by the task + Add integration tests for component interactions specified in story requirements + Include end-to-end tests for critical user flows when story requirements demand them + Cover edge cases and error handling scenarios identified in story Dev Notes + + + + Determine how to run tests for this repo (infer test framework from project structure) + Run all existing tests to ensure no regressions + Run the new tests to verify 
implementation correctness + Run linting and code quality checks if configured in project + Validate implementation meets ALL story acceptance criteria; enforce quantitative thresholds explicitly + STOP and fix before continuing - identify breaking changes immediately + STOP and fix before continuing - ensure implementation correctness + + + + NEVER mark a task complete unless ALL conditions are met - NO LYING OR CHEATING + + + Verify ALL tests for this task/subtask ACTUALLY EXIST and PASS 100% + Confirm implementation matches EXACTLY what the task/subtask specifies - no extra features + Validate that ALL acceptance criteria related to this task are satisfied + Run full test suite to ensure NO regressions introduced + + + + Extract review item details (severity, description, related AC/file) + Add to resolution tracking list: {{resolved_review_items}} + + + Mark task checkbox [x] in "Tasks/Subtasks → Review Follow-ups (AI)" section + + + Find matching action item in "Senior Developer Review (AI) → Action Items" section by matching description + Mark that action item checkbox [x] as resolved + + Add to Dev Agent Record → Completion Notes: "✅ Resolved review finding [{{severity}}]: {{description}}" + + + + + ONLY THEN mark the task (and subtasks) checkbox with [x] + Update File List section with ALL new, modified, or deleted files (paths relative to repo root) + Add completion notes to Dev Agent Record summarizing what was ACTUALLY implemented and tested + + + + DO NOT mark task complete - fix issues first + HALT if unable to fix validation failures + + + + Count total resolved review items in this session + Add Change Log entry: "Addressed code review findings - {{resolved_count}} items resolved (Date: {{date}})" + + + Save the story file + Determine if more incomplete tasks remain + + Next task + + + Completion + + + + + Verify ALL tasks and subtasks are marked [x] (re-scan the story document now) + Run the full regression suite (do not skip) + Confirm File List 
includes every changed file + Execute enhanced definition-of-done validation + Update the story Status to: "review" + + + Validate definition-of-done checklist with essential requirements: + - All tasks/subtasks marked complete with [x] + - Implementation satisfies every Acceptance Criterion + - Unit tests for core functionality added/updated + - Integration tests for component interactions added when required + - End-to-end tests for critical flows added when story demands them + - All tests pass (no regressions, new tests successful) + - Code quality checks pass (linting, static analysis if configured) + - File List includes every new/modified/deleted file (relative paths) + - Dev Agent Record contains implementation notes + - Change Log includes summary of changes + - Only permitted story sections were modified + + + + + Load the FULL file: {sprint_status} + Find development_status key matching {{story_key}} + Verify current status is "in-progress" (expected previous state) + Update development_status[{{story_key}}] = "review" + Update last_updated field to current date + Save file, preserving ALL comments and structure including STATUS DEFINITIONS + ✅ Story status updated to "review" in sprint-status.yaml + + + + ℹ️ Story status updated to "review" in story file (no sprint tracking configured) + + + + ⚠️ Story file updated, but sprint-status update failed: {{story_key}} not found + + Story status is set to "review" in file, but sprint-status.yaml may be out of sync. 
+ + + + + HALT - Complete remaining tasks before marking ready for review + HALT - Fix regression issues before completing + HALT - Update File List with all changed files + HALT - Address DoD failures before completing + + + + Execute the enhanced definition-of-done checklist using the validation framework + Prepare a concise summary in Dev Agent Record → Completion Notes + + Communicate to {user_name} that story implementation is complete and ready for review + Summarize key accomplishments: story ID, story key, title, key changes made, tests added, files modified + Provide the story file path and current status (now "review") + + Based on {user_skill_level}, ask if user needs any explanations about: + - What was implemented and how it works + - Why certain technical decisions were made + - How to test or verify the changes + - Any patterns, libraries, or approaches used + - Anything else they'd like clarified + + + + Provide clear, contextual explanations tailored to {user_skill_level} + Use examples and references to specific code when helpful + + + Once explanations are complete (or user indicates no questions), suggest logical next steps + Recommended next steps (flexible based on project setup): + - Review the implemented story and test the changes + - Verify all acceptance criteria are met + - Ensure deployment readiness if applicable + - Run `code-review` workflow for peer review + - Optional: If Test Architect module installed, run `/bmad:tea:automate` to expand guardrail tests + + + 💡 **Tip:** For best results, run `code-review` using a **different** LLM than the one that implemented this story. + + Suggest checking {sprint_status} to see project progress + + Remain flexible - allow user to choose their own path or ask for other assistance + Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete` — if the resolved value is non-empty, follow it as the final terminal instruction before exiting. 
+ + + diff --git a/src/bmm-skills/4-implementation/bmad-dev-story/customize.toml b/src/bmm-skills/4-implementation/bmad-dev-story/customize.toml new file mode 100644 index 000000000..84f5dcbe4 --- /dev/null +++ b/src/bmm-skills/4-implementation/bmad-dev-story/customize.toml @@ -0,0 +1,41 @@ +# DO NOT EDIT -- overwritten on every update. +# +# Workflow customization surface for bmad-dev-story. Mirrors the +# agent customization shape under the [workflow] namespace. + +[workflow] + +# --- Configurable below. Overrides merge per BMad structural rules: --- +# scalars: override wins • arrays (persistent_facts, activation_steps_*): append +# arrays-of-tables with `code`/`id`: replace matching items, append new ones. + +# Steps to run before the standard activation (config load, greet). +# Overrides append. Use for pre-flight loads, compliance checks, etc. + +activation_steps_prepend = [] + +# Steps to run after greet but before the workflow begins. +# Overrides append. Use for context-heavy setup that should happen +# once the user has been acknowledged. + +activation_steps_append = [] + +# Persistent facts the workflow keeps in mind for the whole run +# (standards, compliance constraints, stylistic guardrails). +# Distinct from the runtime memory sidecar — these are static context +# loaded on activation. Overrides append. +# +# Each entry is either: +# - a literal sentence, e.g. "All stories must include testable acceptance criteria." +# - a file reference prefixed with `file:`, e.g. "file:{project-root}/docs/standards.md" +# (glob patterns are supported; the file's contents are loaded and treated as facts). + +persistent_facts = [ + "file:{project-root}/**/project-context.md", +] + +# Scalar: executed when the workflow reaches its final step, +# after the story implementation is complete and status is updated. Override wins. +# Leave empty for no custom post-completion behavior. 
+ +on_complete = "" diff --git a/src/bmm-skills/4-implementation/bmad-dev-story/workflow.md b/src/bmm-skills/4-implementation/bmad-dev-story/workflow.md deleted file mode 100644 index 4164479c3..000000000 --- a/src/bmm-skills/4-implementation/bmad-dev-story/workflow.md +++ /dev/null @@ -1,450 +0,0 @@ -# Dev Story Workflow - -**Goal:** Execute story implementation following a context filled story spec file. - -**Your Role:** Developer implementing the story. -- Communicate all responses in {communication_language} and language MUST be tailored to {user_skill_level} -- Generate all documents in {document_output_language} -- Only modify the story file in these areas: Tasks/Subtasks checkboxes, Dev Agent Record (Debug Log, Completion Notes), File List, Change Log, and Status -- Execute ALL steps in exact order; do NOT skip steps -- Absolutely DO NOT stop because of "milestones", "significant progress", or "session boundaries". Continue in a single execution until the story is COMPLETE (all ACs satisfied and all tasks/subtasks checked) UNLESS a HALT condition is triggered or the USER gives other instruction. -- Do NOT schedule a "next session" or request review pauses unless a HALT condition applies. Only Step 6 decides completion. -- User skill level ({user_skill_level}) affects conversation style ONLY, not code updates. 
- ---- - -## INITIALIZATION - -### Configuration Loading - -Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve: - -- `project_name`, `user_name` -- `communication_language`, `document_output_language` -- `user_skill_level` -- `implementation_artifacts` -- `date` as system-generated current datetime - -### Paths - -- `story_file` = `` (explicit story path; auto-discovered if empty) -- `sprint_status` = `{implementation_artifacts}/sprint-status.yaml` - -### Context - -- `project_context` = `**/project-context.md` (load if exists) - ---- - -## EXECUTION - - - Communicate all responses in {communication_language} and language MUST be tailored to {user_skill_level} - Generate all documents in {document_output_language} - Only modify the story file in these areas: Tasks/Subtasks checkboxes, Dev Agent Record (Debug Log, Completion Notes), File List, - Change Log, and Status - Execute ALL steps in exact order; do NOT skip steps - Absolutely DO NOT stop because of "milestones", "significant progress", or "session boundaries". Continue in a single execution - until the story is COMPLETE (all ACs satisfied and all tasks/subtasks checked) UNLESS a HALT condition is triggered or the USER gives - other instruction. - Do NOT schedule a "next session" or request review pauses unless a HALT condition applies. Only Step 6 decides completion. - User skill level ({user_skill_level}) affects conversation style ONLY, not code updates. 
- - - - Use {{story_path}} directly - Read COMPLETE story file - Extract story_key from filename or metadata - - - - - - MUST read COMPLETE sprint-status.yaml file from start to end to preserve order - Load the FULL file: {{sprint_status}} - Read ALL lines from beginning to end - do not skip any content - Parse the development_status section completely to understand story order - - Find the FIRST story (by reading in order from top to bottom) where: - - Key matches pattern: number-number-name (e.g., "1-2-user-auth") - - NOT an epic key (epic-X) or retrospective (epic-X-retrospective) - - Status value equals "ready-for-dev" - - - - 📋 No ready-for-dev stories found in sprint-status.yaml - - **Current Sprint Status:** {{sprint_status_summary}} - - **What would you like to do?** - 1. Run `create-story` to create next story from epics with comprehensive context - 2. Run `*validate-create-story` to improve existing stories before development (recommended quality check) - 3. Specify a particular story file to develop (provide full path) - 4. Check {{sprint_status}} file to see current sprint status - - 💡 **Tip:** Stories in `ready-for-dev` may not have been validated. Consider running `validate-create-story` first for a quality - check. - - Choose option [1], [2], [3], or [4], or specify story file path: - - - HALT - Run create-story to create next story - - - - HALT - Run validate-create-story to improve existing stories - - - - Provide the story file path to develop: - Store user-provided story path as {{story_path}} - - - - - Loading {{sprint_status}} for detailed status review... 
- Display detailed sprint status analysis - HALT - User can review sprint status and provide story path - - - - Store user-provided story path as {{story_path}} - - - - - - - - Search {implementation_artifacts} for stories directly - Find stories with "ready-for-dev" status in files - Look for story files matching pattern: *-*-*.md - Read each candidate story file to check Status section - - - 📋 No ready-for-dev stories found - - **Available Options:** - 1. Run `create-story` to create next story from epics with comprehensive context - 2. Run `*validate-create-story` to improve existing stories - 3. Specify which story to develop - - What would you like to do? Choose option [1], [2], or [3]: - - - HALT - Run create-story to create next story - - - - HALT - Run validate-create-story to improve existing stories - - - - It's unclear what story you want developed. Please provide the full path to the story file: - Store user-provided story path as {{story_path}} - Continue with provided story file - - - - - Use discovered story file and extract story_key - - - - Store the found story_key (e.g., "1-2-user-authentication") for later status updates - Find matching story file in {implementation_artifacts} using story_key pattern: {{story_key}}.md - Read COMPLETE story file from discovered path - - - - Parse sections: Story, Acceptance Criteria, Tasks/Subtasks, Dev Notes, Dev Agent Record, File List, Change Log, Status - - Load comprehensive context from story file's Dev Notes section - Extract developer guidance from Dev Notes: architecture requirements, previous learnings, technical specifications - Use enhanced story context to inform implementation decisions and approaches - - Identify first incomplete task (unchecked [ ]) in Tasks/Subtasks - - - Completion sequence - - HALT: "Cannot develop story without access to story file" - ASK user to clarify or HALT - - - - Load all available context to inform implementation - - Load {project_context} for coding standards and 
project-wide patterns (if exists) - Parse sections: Story, Acceptance Criteria, Tasks/Subtasks, Dev Notes, Dev Agent Record, File List, Change Log, Status - Load comprehensive context from story file's Dev Notes section - Extract developer guidance from Dev Notes: architecture requirements, previous learnings, technical specifications - Use enhanced story context to inform implementation decisions and approaches - ✅ **Context Loaded** - Story and project context available for implementation - - - - - Determine if this is a fresh start or continuation after code review - - Check if "Senior Developer Review (AI)" section exists in the story file - Check if "Review Follow-ups (AI)" subsection exists under Tasks/Subtasks - - - Set review_continuation = true - Extract from "Senior Developer Review (AI)" section: - - Review outcome (Approve/Changes Requested/Blocked) - - Review date - - Total action items with checkboxes (count checked vs unchecked) - - Severity breakdown (High/Med/Low counts) - - Count unchecked [ ] review follow-up tasks in "Review Follow-ups (AI)" subsection - Store list of unchecked review items as {{pending_review_items}} - - ⏯️ **Resuming Story After Code Review** ({{review_date}}) - - **Review Outcome:** {{review_outcome}} - **Action Items:** {{unchecked_review_count}} remaining to address - **Priorities:** {{high_count}} High, {{med_count}} Medium, {{low_count}} Low - - **Strategy:** Will prioritize review follow-up tasks (marked [AI-Review]) before continuing with regular tasks. 
- - - - - Set review_continuation = false - Set {{pending_review_items}} = empty - - 🚀 **Starting Fresh Implementation** - - Story: {{story_key}} - Story Status: {{current_status}} - First incomplete task: {{first_task_description}} - - - - - - - Load the FULL file: {{sprint_status}} - Read all development_status entries to find {{story_key}} - Get current status value for development_status[{{story_key}}] - - - Update the story in the sprint status report to = "in-progress" - Update last_updated field to current date - 🚀 Starting work on story {{story_key}} - Status updated: ready-for-dev → in-progress - - - - - ⏯️ Resuming work on story {{story_key}} - Story is already marked in-progress - - - - - ⚠️ Unexpected story status: {{current_status}} - Expected ready-for-dev or in-progress. Continuing anyway... - - - - Store {{current_sprint_status}} for later use - - - - ℹ️ No sprint status file exists - story progress will be tracked in story file only - Set {{current_sprint_status}} = "no-sprint-tracking" - - - - - FOLLOW THE STORY FILE TASKS/SUBTASKS SEQUENCE EXACTLY AS WRITTEN - NO DEVIATION - - Review the current task/subtask from the story file - this is your authoritative implementation guide - Plan implementation following red-green-refactor cycle - - - Write FAILING tests first for the task/subtask functionality - Confirm tests fail before implementation - this validates test correctness - - - Implement MINIMAL code to make tests pass - Run tests to confirm they now pass - Handle error conditions and edge cases as specified in task/subtask - - - Improve code structure while keeping tests green - Ensure code follows architecture patterns and coding standards from Dev Notes - - Document technical approach and decisions in Dev Agent Record → Implementation Plan - - HALT: "Additional dependencies need user approval" - HALT and request guidance - HALT: "Cannot proceed without necessary configuration files" - - NEVER implement anything not mapped to a specific 
task/subtask in the story file - NEVER proceed to next task until current task/subtask is complete AND tests pass - Execute continuously without pausing until all tasks/subtasks are complete or explicit HALT condition - Do NOT propose to pause for review until Step 9 completion gates are satisfied - - - - Create unit tests for business logic and core functionality introduced/changed by the task - Add integration tests for component interactions specified in story requirements - Include end-to-end tests for critical user flows when story requirements demand them - Cover edge cases and error handling scenarios identified in story Dev Notes - - - - Determine how to run tests for this repo (infer test framework from project structure) - Run all existing tests to ensure no regressions - Run the new tests to verify implementation correctness - Run linting and code quality checks if configured in project - Validate implementation meets ALL story acceptance criteria; enforce quantitative thresholds explicitly - STOP and fix before continuing - identify breaking changes immediately - STOP and fix before continuing - ensure implementation correctness - - - - NEVER mark a task complete unless ALL conditions are met - NO LYING OR CHEATING - - - Verify ALL tests for this task/subtask ACTUALLY EXIST and PASS 100% - Confirm implementation matches EXACTLY what the task/subtask specifies - no extra features - Validate that ALL acceptance criteria related to this task are satisfied - Run full test suite to ensure NO regressions introduced - - - - Extract review item details (severity, description, related AC/file) - Add to resolution tracking list: {{resolved_review_items}} - - - Mark task checkbox [x] in "Tasks/Subtasks → Review Follow-ups (AI)" section - - - Find matching action item in "Senior Developer Review (AI) → Action Items" section by matching description - Mark that action item checkbox [x] as resolved - - Add to Dev Agent Record → Completion Notes: "✅ Resolved review 
finding [{{severity}}]: {{description}}" - - - - - ONLY THEN mark the task (and subtasks) checkbox with [x] - Update File List section with ALL new, modified, or deleted files (paths relative to repo root) - Add completion notes to Dev Agent Record summarizing what was ACTUALLY implemented and tested - - - - DO NOT mark task complete - fix issues first - HALT if unable to fix validation failures - - - - Count total resolved review items in this session - Add Change Log entry: "Addressed code review findings - {{resolved_count}} items resolved (Date: {{date}})" - - - Save the story file - Determine if more incomplete tasks remain - - Next task - - - Completion - - - - - Verify ALL tasks and subtasks are marked [x] (re-scan the story document now) - Run the full regression suite (do not skip) - Confirm File List includes every changed file - Execute enhanced definition-of-done validation - Update the story Status to: "review" - - - Validate definition-of-done checklist with essential requirements: - - All tasks/subtasks marked complete with [x] - - Implementation satisfies every Acceptance Criterion - - Unit tests for core functionality added/updated - - Integration tests for component interactions added when required - - End-to-end tests for critical flows added when story demands them - - All tests pass (no regressions, new tests successful) - - Code quality checks pass (linting, static analysis if configured) - - File List includes every new/modified/deleted file (relative paths) - - Dev Agent Record contains implementation notes - - Change Log includes summary of changes - - Only permitted story sections were modified - - - - - Load the FULL file: {sprint_status} - Find development_status key matching {{story_key}} - Verify current status is "in-progress" (expected previous state) - Update development_status[{{story_key}}] = "review" - Update last_updated field to current date - Save file, preserving ALL comments and structure including STATUS DEFINITIONS - ✅ 
Story status updated to "review" in sprint-status.yaml - - - - ℹ️ Story status updated to "review" in story file (no sprint tracking configured) - - - - ⚠️ Story file updated, but sprint-status update failed: {{story_key}} not found - - Story status is set to "review" in file, but sprint-status.yaml may be out of sync. - - - - - HALT - Complete remaining tasks before marking ready for review - HALT - Fix regression issues before completing - HALT - Update File List with all changed files - HALT - Address DoD failures before completing - - - - Execute the enhanced definition-of-done checklist using the validation framework - Prepare a concise summary in Dev Agent Record → Completion Notes - - Communicate to {user_name} that story implementation is complete and ready for review - Summarize key accomplishments: story ID, story key, title, key changes made, tests added, files modified - Provide the story file path and current status (now "review") - - Based on {user_skill_level}, ask if user needs any explanations about: - - What was implemented and how it works - - Why certain technical decisions were made - - How to test or verify the changes - - Any patterns, libraries, or approaches used - - Anything else they'd like clarified - - - - Provide clear, contextual explanations tailored to {user_skill_level} - Use examples and references to specific code when helpful - - - Once explanations are complete (or user indicates no questions), suggest logical next steps - Recommended next steps (flexible based on project setup): - - Review the implemented story and test the changes - - Verify all acceptance criteria are met - - Ensure deployment readiness if applicable - - Run `code-review` workflow for peer review - - Optional: If Test Architect module installed, run `/bmad:tea:automate` to expand guardrail tests - - - 💡 **Tip:** For best results, run `code-review` using a **different** LLM than the one that implemented this story. 
- - Suggest checking {sprint_status} to see project progress - - Remain flexible - allow user to choose their own path or ask for other assistance - - - diff --git a/src/bmm-skills/4-implementation/bmad-quick-dev/SKILL.md b/src/bmm-skills/4-implementation/bmad-quick-dev/SKILL.md index b2f0df476..f5326fc3f 100644 --- a/src/bmm-skills/4-implementation/bmad-quick-dev/SKILL.md +++ b/src/bmm-skills/4-implementation/bmad-quick-dev/SKILL.md @@ -3,4 +3,109 @@ name: bmad-quick-dev description: 'Implements any user intent, requirement, story, bug fix or change request by producing clean working code artifacts that follow the project''s existing architecture, patterns and conventions. Use when the user wants to build, fix, tweak, refactor, add or modify any code, component or feature.' --- -Follow the instructions in ./workflow.md. +# Quick Dev New Preview Workflow + +**Goal:** Turn user intent into a hardened, reviewable artifact. + +**CRITICAL:** If a step says "read fully and follow step-XX", you read and follow step-XX. No exceptions. + +## READY FOR DEVELOPMENT STANDARD + +A specification is "Ready for Development" when: + +- **Actionable**: Every task has a file path and specific action. +- **Logical**: Tasks ordered by dependency. +- **Testable**: All ACs use Given/When/Then. +- **Complete**: No placeholders or TBDs. + +## SCOPE STANDARD + +A specification should target a **single user-facing goal** within **900–1600 tokens**: + +- **Single goal**: One cohesive feature, even if it spans multiple layers/files. Multi-goal means >=2 **top-level independent shippable deliverables** — each could be reviewed, tested, and merged as a separate PR without breaking the others. Never count surface verbs, "and" conjunctions, or noun phrases. Never split cross-layer implementation details inside one user goal. 
+ - Split: "add dark mode toggle AND refactor auth to JWT AND build admin dashboard" + - Don't split: "add validation and display errors" / "support drag-and-drop AND paste AND retry" +- **900–1600 tokens**: Optimal range for LLM consumption. Below 900 risks ambiguity; above 1600 risks context-rot in implementation agents. +- **Neither limit is a gate.** Both are proposals with user override. + +## Conventions + +- Bare paths (e.g. `step-01-clarify-and-route.md`) resolve from the skill root. +- `{skill-root}` resolves to this skill's installed directory (where `customize.toml` lives). +- `{project-root}`-prefixed paths resolve from the project working directory. +- `{skill-name}` resolves to the skill directory's basename. + +## On Activation + +### Step 1: Resolve the Workflow Block + +Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow` + +**If the script fails**, resolve the `workflow` block yourself by reading these three files in base → team → user order and applying the same structural merge rules as the resolver: + +1. `{skill-root}/customize.toml` — defaults +2. `{project-root}/_bmad/custom/{skill-name}.toml` — team overrides +3. `{project-root}/_bmad/custom/{skill-name}.user.toml` — personal overrides + +Any missing file is skipped. Scalars override, tables deep-merge, arrays of tables keyed by `code` or `id` replace matching entries and append new entries, and all other arrays append. + +### Step 2: Execute Prepend Steps + +Execute each entry in `{workflow.activation_steps_prepend}` in order before proceeding. + +### Step 3: Load Persistent Facts + +Treat every entry in `{workflow.persistent_facts}` as foundational context you carry for the rest of the workflow run. Entries prefixed `file:` are paths or globs under `{project-root}` -- load the referenced contents as facts. All other entries are facts verbatim. 
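The `file:` expansion in Step 3 can be sketched as a small helper. This is a minimal illustration, not part of the skill itself: the function name `load_persistent_facts` and the exact handling of the `{project-root}` placeholder are assumptions; only the rule it implements (globs under the project root load file contents, everything else is a verbatim fact) comes from the step above.

```python
from pathlib import Path


def load_persistent_facts(entries, project_root):
    """Expand persistent_facts entries (illustrative sketch).

    Entries prefixed `file:` are treated as glob patterns under the
    project root and their file contents become facts; all other
    entries are kept verbatim as literal facts.
    """
    facts = []
    for entry in entries:
        if entry.startswith("file:"):
            # Drop the prefix and the {project-root}/ placeholder so the
            # remainder can be globbed relative to the project root.
            pattern = entry[len("file:"):].replace("{project-root}/", "")
            for path in sorted(Path(project_root).glob(pattern)):
                facts.append(path.read_text())
        else:
            facts.append(entry)  # literal sentence, carried as-is
    return facts
```

Glob order is sorted here for determinism; the real resolver may order matches differently.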
+ +### Step 4: Load Config + +Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve: + +- `project_name`, `planning_artifacts`, `implementation_artifacts`, `user_name` +- `communication_language`, `document_output_language`, `user_skill_level` +- `date` as system-generated current datetime +- `sprint_status` = `{implementation_artifacts}/sprint-status.yaml` +- `project_context` = `**/project-context.md` (load if exists) +- CLAUDE.md / memory files (load if exist) +- YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}` +- Language MUST be tailored to `{user_skill_level}` +- Generate all documents in `{document_output_language}` + +### Step 5: Greet the User + +Greet `{user_name}`, speaking in `{communication_language}`. + +### Step 6: Execute Append Steps + +Execute each entry in `{workflow.activation_steps_append}` in order. + +Activation is complete. Begin the workflow below. + +## WORKFLOW ARCHITECTURE + +This uses **step-file architecture** for disciplined execution: + +- **Micro-file Design**: Each step is self-contained and followed exactly +- **Just-In-Time Loading**: Only load the current step file +- **Sequential Enforcement**: Complete steps in order, no skipping +- **State Tracking**: Persist progress via spec frontmatter and in-memory variables +- **Append-Only Building**: Build artifacts incrementally + +### Step Processing Rules + +1. **READ COMPLETELY**: Read the entire step file before acting +2. **FOLLOW SEQUENCE**: Execute sections in order +3. **WAIT FOR INPUT**: Halt at checkpoints and wait for human +4. 
**LOAD NEXT**: When directed, read fully and follow the next step file + +### Critical Rules (NO EXCEPTIONS) + +- **NEVER** load multiple step files simultaneously +- **ALWAYS** read entire step file before execution +- **NEVER** skip steps or optimize the sequence +- **ALWAYS** follow the exact instructions in the step file +- **ALWAYS** halt at checkpoints and wait for human input + +## FIRST STEP + +Read fully and follow: `./step-01-clarify-and-route.md` to begin the workflow. diff --git a/src/bmm-skills/4-implementation/bmad-quick-dev/customize.toml b/src/bmm-skills/4-implementation/bmad-quick-dev/customize.toml new file mode 100644 index 000000000..351465443 --- /dev/null +++ b/src/bmm-skills/4-implementation/bmad-quick-dev/customize.toml @@ -0,0 +1,41 @@ +# DO NOT EDIT -- overwritten on every update. +# +# Workflow customization surface for bmad-quick-dev. Mirrors the +# agent customization shape under the [workflow] namespace. + +[workflow] + +# --- Configurable below. Overrides merge per BMad structural rules: --- +# scalars: override wins • arrays (persistent_facts, activation_steps_*): append +# arrays-of-tables with `code`/`id`: replace matching items, append new ones. + +# Steps to run before the standard activation (config load, greet). +# Overrides append. Use for pre-flight loads, compliance checks, etc. + +activation_steps_prepend = [] + +# Steps to run after greet but before the workflow begins. +# Overrides append. Use for context-heavy setup that should happen +# once the user has been acknowledged. + +activation_steps_append = [] + +# Persistent facts the workflow keeps in mind for the whole run +# (standards, compliance constraints, stylistic guardrails). +# Distinct from the runtime memory sidecar — these are static context +# loaded on activation. Overrides append. +# +# Each entry is either: +# - a literal sentence, e.g. "All stories must include testable acceptance criteria." +# - a file reference prefixed with `file:`, e.g. 
"file:{project-root}/docs/standards.md" +# (glob patterns are supported; the file's contents are loaded and treated as facts). + +persistent_facts = [ + "file:{project-root}/**/project-context.md", +] + +# Scalar: executed when the workflow reaches its final step, +# after implementation is complete and explanations are provided. Override wins. +# Leave empty for no custom post-completion behavior. + +on_complete = "" diff --git a/src/bmm-skills/4-implementation/bmad-quick-dev/step-05-present.md b/src/bmm-skills/4-implementation/bmad-quick-dev/step-05-present.md index 6b1a1501b..5efe96164 100644 --- a/src/bmm-skills/4-implementation/bmad-quick-dev/step-05-present.md +++ b/src/bmm-skills/4-implementation/bmad-quick-dev/step-05-present.md @@ -70,3 +70,9 @@ Display summary of your work to the user, including the commit hash if one was c - Offer to push and/or create a pull request. Workflow complete. + +## On Complete + +Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete` + +If the resolved `workflow.on_complete` is non-empty, follow it as the final terminal instruction before exiting. diff --git a/src/bmm-skills/4-implementation/bmad-quick-dev/step-oneshot.md b/src/bmm-skills/4-implementation/bmad-quick-dev/step-oneshot.md index 62192c74a..72078b34d 100644 --- a/src/bmm-skills/4-implementation/bmad-quick-dev/step-oneshot.md +++ b/src/bmm-skills/4-implementation/bmad-quick-dev/step-oneshot.md @@ -63,3 +63,9 @@ If version control is available and the tree is dirty, create a local commit wit HALT and wait for human input. Workflow complete. + +## On Complete + +Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete` + +If the resolved `workflow.on_complete` is non-empty, follow it as the final terminal instruction before exiting. 
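The On Complete hook above can be wrapped in a small helper for agents that script the step. A hedged sketch only: the assumptions that the resolver prints the resolved scalar to stdout and exits non-zero on failure are mine, as is the name `resolve_on_complete`; the non-empty check mirrors the "leave empty for no custom post-completion behavior" default in `customize.toml`.

```python
import subprocess


def resolve_on_complete(project_root, skill_root):
    """Ask the customization resolver for workflow.on_complete.

    Returns the resolved instruction string, or None when the value is
    empty or the resolver cannot be run (no custom hook configured).
    """
    cmd = [
        "python3",
        f"{project_root}/_bmad/scripts/resolve_customization.py",
        "--skill", skill_root,
        "--key", "workflow.on_complete",
    ]
    try:
        result = subprocess.run(cmd, capture_output=True, text=True)
    except OSError:
        return None  # python3 missing or not runnable
    if result.returncode != 0:
        return None  # resolver failed; fall back to no hook
    return result.stdout.strip() or None
```

A caller would then follow the returned instruction only when it is not None, matching the "if non-empty, follow it" rule above.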
diff --git a/src/bmm-skills/4-implementation/bmad-quick-dev/workflow.md b/src/bmm-skills/4-implementation/bmad-quick-dev/workflow.md deleted file mode 100644 index 8e13989fb..000000000 --- a/src/bmm-skills/4-implementation/bmad-quick-dev/workflow.md +++ /dev/null @@ -1,76 +0,0 @@ ---- -main_config: '{project-root}/_bmad/bmm/config.yaml' ---- - -# Quick Dev New Preview Workflow - -**Goal:** Turn user intent into a hardened, reviewable artifact. - -**CRITICAL:** If a step says "read fully and follow step-XX", you read and follow step-XX. No exceptions. - - -## READY FOR DEVELOPMENT STANDARD - -A specification is "Ready for Development" when: - -- **Actionable**: Every task has a file path and specific action. -- **Logical**: Tasks ordered by dependency. -- **Testable**: All ACs use Given/When/Then. -- **Complete**: No placeholders or TBDs. - - -## SCOPE STANDARD - -A specification should target a **single user-facing goal** within **900–1600 tokens**: - -- **Single goal**: One cohesive feature, even if it spans multiple layers/files. Multi-goal means >=2 **top-level independent shippable deliverables** — each could be reviewed, tested, and merged as a separate PR without breaking the others. Never count surface verbs, "and" conjunctions, or noun phrases. Never split cross-layer implementation details inside one user goal. - - Split: "add dark mode toggle AND refactor auth to JWT AND build admin dashboard" - - Don't split: "add validation and display errors" / "support drag-and-drop AND paste AND retry" -- **900–1600 tokens**: Optimal range for LLM consumption. Below 900 risks ambiguity; above 1600 risks context-rot in implementation agents. -- **Neither limit is a gate.** Both are proposals with user override. 
- - -## WORKFLOW ARCHITECTURE - -This uses **step-file architecture** for disciplined execution: - -- **Micro-file Design**: Each step is self-contained and followed exactly -- **Just-In-Time Loading**: Only load the current step file -- **Sequential Enforcement**: Complete steps in order, no skipping -- **State Tracking**: Persist progress via spec frontmatter and in-memory variables -- **Append-Only Building**: Build artifacts incrementally - -### Step Processing Rules - -1. **READ COMPLETELY**: Read the entire step file before acting -2. **FOLLOW SEQUENCE**: Execute sections in order -3. **WAIT FOR INPUT**: Halt at checkpoints and wait for human -4. **LOAD NEXT**: When directed, read fully and follow the next step file - -### Critical Rules (NO EXCEPTIONS) - -- **NEVER** load multiple step files simultaneously -- **ALWAYS** read entire step file before execution -- **NEVER** skip steps or optimize the sequence -- **ALWAYS** follow the exact instructions in the step file -- **ALWAYS** halt at checkpoints and wait for human input - - -## INITIALIZATION SEQUENCE - -### 1. Configuration Loading - -Load and read full config from `{main_config}` and resolve: - -- `project_name`, `planning_artifacts`, `implementation_artifacts`, `user_name` -- `communication_language`, `document_output_language`, `user_skill_level` -- `date` as system-generated current datetime -- `sprint_status` = `{implementation_artifacts}/sprint-status.yaml` -- `project_context` = `**/project-context.md` (load if exists) -- CLAUDE.md / memory files (load if exist) - -YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`. - -### 2. First Step Execution - -Read fully and follow: `./step-01-clarify-and-route.md` to begin the workflow. 
diff --git a/src/bmm-skills/4-implementation/bmad-sprint-planning/SKILL.md b/src/bmm-skills/4-implementation/bmad-sprint-planning/SKILL.md index 85783cf00..25266d716 100644 --- a/src/bmm-skills/4-implementation/bmad-sprint-planning/SKILL.md +++ b/src/bmm-skills/4-implementation/bmad-sprint-planning/SKILL.md @@ -3,4 +3,297 @@ name: bmad-sprint-planning description: 'Generate sprint status tracking from epics. Use when the user says "run sprint planning" or "generate sprint plan"' --- -Follow the instructions in ./workflow.md. +# Sprint Planning Workflow + +**Goal:** Generate sprint status tracking from epics, detecting current story statuses and building a complete sprint-status.yaml file. + +**Your Role:** You are a Developer generating and maintaining sprint tracking. Parse epic files, detect story statuses, and produce a structured sprint-status.yaml. + +## Conventions + +- Bare paths (e.g. `checklist.md`) resolve from the skill root. +- `{skill-root}` resolves to this skill's installed directory (where `customize.toml` lives). +- `{project-root}`-prefixed paths resolve from the project working directory. +- `{skill-name}` resolves to the skill directory's basename. + +## On Activation + +### Step 1: Resolve the Workflow Block + +Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow` + +**If the script fails**, resolve the `workflow` block yourself by reading these three files in base → team → user order and applying the same structural merge rules as the resolver: + +1. `{skill-root}/customize.toml` — defaults +2. `{project-root}/_bmad/custom/{skill-name}.toml` — team overrides +3. `{project-root}/_bmad/custom/{skill-name}.user.toml` — personal overrides + +Any missing file is skipped. Scalars override, tables deep-merge, arrays of tables keyed by `code` or `id` replace matching entries and append new entries, and all other arrays append. 
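The structural merge rules in Step 1's fallback can be sketched as a recursive function. This is an illustrative approximation of the rules as stated (scalars override, tables deep-merge, arrays of tables keyed by `code` or `id` replace matching entries and append new ones, other arrays append), not the resolver's actual implementation; applying base → team → user is two successive calls.

```python
def merge_layer(base, override):
    """Merge one customization layer onto another (illustrative sketch)."""
    if isinstance(base, dict) and isinstance(override, dict):
        # Tables deep-merge key by key.
        merged = dict(base)
        for key, value in override.items():
            merged[key] = merge_layer(base[key], value) if key in base else value
        return merged
    if isinstance(base, list) and isinstance(override, list):
        keyed = base and all(
            isinstance(x, dict) and ("code" in x or "id" in x)
            for x in base + override
        )
        if keyed:
            # Arrays of tables: replace entries with a matching key,
            # append entries whose key is new.
            merged = list(base)
            for item in override:
                key = item.get("code") or item.get("id")
                for i, existing in enumerate(merged):
                    if (existing.get("code") or existing.get("id")) == key:
                        merged[i] = item
                        break
                else:
                    merged.append(item)
            return merged
        return base + override  # plain arrays append
    return override  # scalars: override wins
```

For the three-file chain this would be `merge_layer(merge_layer(defaults, team), user)`, with any missing layer treated as an empty table.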
+ +### Step 2: Execute Prepend Steps + +Execute each entry in `{workflow.activation_steps_prepend}` in order before proceeding. + +### Step 3: Load Persistent Facts + +Treat every entry in `{workflow.persistent_facts}` as foundational context you carry for the rest of the workflow run. Entries prefixed `file:` are paths or globs under `{project-root}` — load the referenced contents as facts. All other entries are facts verbatim. + +### Step 4: Load Config + +Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve: + +- `project_name`, `user_name` +- `communication_language`, `document_output_language` +- `implementation_artifacts` +- `planning_artifacts` +- `date` as system-generated current datetime +- YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}` +- Generate all documents in `{document_output_language}` + +### Step 5: Greet the User + +Greet `{user_name}`, speaking in `{communication_language}`. + +### Step 6: Execute Append Steps + +Execute each entry in `{workflow.activation_steps_append}` in order. + +Activation is complete. Begin the workflow below. + +## Paths + +- `tracking_system` = `file-system` +- `project_key` = `NOKEY` +- `story_location` = `{implementation_artifacts}` +- `story_location_absolute` = `{implementation_artifacts}` +- `epics_location` = `{planning_artifacts}` +- `epics_pattern` = `*epic*.md` +- `status_file` = `{implementation_artifacts}/sprint-status.yaml` + +## Input Files + +| Input | Path | Load Strategy | +|-------|------|---------------| +| Epics | `{planning_artifacts}/*epic*.md` (whole) or `{planning_artifacts}/*epic*/*.md` (sharded) | FULL_LOAD | + +## Execution + +### Document Discovery - Full Epic Loading + +**Strategy**: Sprint planning needs ALL epics and stories to build complete status tracking. + +**Epic Discovery Process:** + +1. **Search for whole document first** - Look for `epics.md`, `bmm-epics.md`, or any `*epic*.md` file +2. 
**Check for sharded version** - If whole document not found, look for `epics/index.md` +3. **If sharded version found**: + - Read `index.md` to understand the document structure + - Read ALL epic section files listed in the index (e.g., `epic-1.md`, `epic-2.md`, etc.) + - Process all epics and their stories from the combined content + - This ensures complete sprint status coverage +4. **Priority**: If both whole and sharded versions exist, use the whole document + +**Fuzzy matching**: Be flexible with document names - users may use variations like `epics.md`, `bmm-epics.md`, `user-stories.md`, etc. + + + + +Load {project_context} for project-wide patterns and conventions (if exists) +Communicate in {communication_language} with {user_name} +Look for all files matching `{epics_pattern}` in {epics_location} +Could be a single `epics.md` file or multiple `epic-1.md`, `epic-2.md` files + +For each epic file found, extract: + +- Epic numbers from headers like `## Epic 1:` or `## Epic 2:` +- Story IDs and titles from patterns like `### Story 1.1: User Authentication` +- Convert story format from `Epic.Story: Title` to kebab-case key: `epic-story-title` + +**Story ID Conversion Rules:** + +- Original: `### Story 1.1: User Authentication` +- Replace period with dash: `1-1` +- Convert title to kebab-case: `user-authentication` +- Final key: `1-1-user-authentication` + +Build complete inventory of all epics and stories from all epic files + + + +For each epic found, create entries in this order: + +1. **Epic entry** - Key: `epic-{num}`, Default status: `backlog` +2. **Story entries** - Key: `{epic}-{story}-{title}`, Default status: `backlog` +3. 
**Retrospective entry** - Key: `epic-{num}-retrospective`, Default status: `optional` + +**Example structure:** + +```yaml +development_status: + epic-1: backlog + 1-1-user-authentication: backlog + 1-2-account-management: backlog + epic-1-retrospective: optional +``` + + + + +For each story, detect current status by checking files: + +**Story file detection:** + +- Check: `{story_location_absolute}/{story-key}.md` (e.g., `stories/1-1-user-authentication.md`) +- If exists → upgrade status to at least `ready-for-dev` + +**Preservation rule:** + +- If existing `{status_file}` exists and has more advanced status, preserve it +- Never downgrade status (e.g., don't change `done` to `ready-for-dev`) + +**Status Flow Reference:** + +- Epic: `backlog` → `in-progress` → `done` +- Story: `backlog` → `ready-for-dev` → `in-progress` → `review` → `done` +- Retrospective: `optional` ↔ `done` + + + +Create or update {status_file} with: + +**File Structure:** + +```yaml +# generated: {date} +# last_updated: {date} +# project: {project_name} +# project_key: {project_key} +# tracking_system: {tracking_system} +# story_location: {story_location} + +# STATUS DEFINITIONS: +# ================== +# Epic Status: +# - backlog: Epic not yet started +# - in-progress: Epic actively being worked on +# - done: All stories in epic completed +# +# Epic Status Transitions: +# - backlog → in-progress: Automatically when first story is created (via create-story) +# - in-progress → done: Manually when all stories reach 'done' status +# +# Story Status: +# - backlog: Story only exists in epic file +# - ready-for-dev: Story file created in stories folder +# - in-progress: Developer actively working on implementation +# - review: Ready for code review (via Dev's code-review workflow) +# - done: Story completed +# +# Retrospective Status: +# - optional: Can be completed but not required +# - done: Retrospective has been completed +# +# WORKFLOW NOTES: +# =============== +# - Epic transitions to 
'in-progress' automatically when first story is created +# - Stories can be worked in parallel if team capacity allows +# - Developer typically creates next story after previous one is 'done' to incorporate learnings +# - Dev moves story to 'review', then runs code-review (fresh context, different LLM recommended) + +generated: { date } +last_updated: { date } +project: { project_name } +project_key: { project_key } +tracking_system: { tracking_system } +story_location: { story_location } + +development_status: + # All epics, stories, and retrospectives in order +``` + +Write the complete sprint status YAML to {status_file} +CRITICAL: Metadata appears TWICE - once as comments (#) for documentation, once as YAML key:value fields for parsing +Ensure all items are ordered: epic, its stories, its retrospective, next epic... + + + +Perform validation checks: + +- [ ] Every epic in epic files appears in {status_file} +- [ ] Every story in epic files appears in {status_file} +- [ ] Every epic has a corresponding retrospective entry +- [ ] No items in {status_file} that don't exist in epic files +- [ ] All status values are legal (match state machine definitions) +- [ ] File is valid YAML syntax + +Count totals: + +- Total epics: {{epic_count}} +- Total stories: {{story_count}} +- Epics in-progress: {{in_progress_count}} +- Stories done: {{done_count}} + +Display completion summary to {user_name} in {communication_language}: + +**Sprint Status Generated Successfully** + +- **File Location:** {status_file} +- **Total Epics:** {{epic_count}} +- **Total Stories:** {{story_count}} +- **Epics In Progress:** {{in_progress_count}} +- **Stories Completed:** {{done_count}} + +**Next Steps:** + +1. Review the generated {status_file} +2. Use this file to track development progress +3. Agents will update statuses as they work +4. 
Re-run this workflow to refresh auto-detected statuses + +Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete` — if the resolved value is non-empty, follow it as the final terminal instruction before exiting. + + + + +## Additional Documentation + +### Status State Machine + +**Epic Status Flow:** + +``` +backlog → in-progress → done +``` + +- **backlog**: Epic not yet started +- **in-progress**: Epic actively being worked on (stories being created/implemented) +- **done**: All stories in epic completed + +**Story Status Flow:** + +``` +backlog → ready-for-dev → in-progress → review → done +``` + +- **backlog**: Story only exists in epic file +- **ready-for-dev**: Story file created (e.g., `stories/1-3-plant-naming.md`) +- **in-progress**: Developer actively working +- **review**: Ready for code review (via Dev's code-review workflow) +- **done**: Completed + +**Retrospective Status:** + +``` +optional ↔ done +``` + +- **optional**: Ready to be conducted but not required +- **done**: Finished + +### Guidelines + +1. **Epic Activation**: Mark epic as `in-progress` when starting work on its first story +2. **Sequential Default**: Stories are typically worked in order, but parallel work is supported +3. **Parallel Work Supported**: Multiple stories can be `in-progress` if team capacity allows +4. **Review Before Done**: Stories should pass through `review` before `done` +5. **Learning Transfer**: Developer typically creates next story after previous one is `done` to incorporate learnings diff --git a/src/bmm-skills/4-implementation/bmad-sprint-planning/customize.toml b/src/bmm-skills/4-implementation/bmad-sprint-planning/customize.toml new file mode 100644 index 000000000..bc89e8230 --- /dev/null +++ b/src/bmm-skills/4-implementation/bmad-sprint-planning/customize.toml @@ -0,0 +1,41 @@ +# DO NOT EDIT -- overwritten on every update. +# +# Workflow customization surface for bmad-sprint-planning. 
Mirrors the +# agent customization shape under the [workflow] namespace. + +[workflow] + +# --- Configurable below. Overrides merge per BMad structural rules: --- +# scalars: override wins • arrays (persistent_facts, activation_steps_*): append +# arrays-of-tables with `code`/`id`: replace matching items, append new ones. + +# Steps to run before the standard activation (config load, greet). +# Overrides append. Use for pre-flight loads, compliance checks, etc. + +activation_steps_prepend = [] + +# Steps to run after greet but before the workflow begins. +# Overrides append. Use for context-heavy setup that should happen +# once the user has been acknowledged. + +activation_steps_append = [] + +# Persistent facts the workflow keeps in mind for the whole run +# (standards, compliance constraints, stylistic guardrails). +# Distinct from the runtime memory sidecar — these are static context +# loaded on activation. Overrides append. +# +# Each entry is either: +# - a literal sentence, e.g. "All stories must include testable acceptance criteria." +# - a file reference prefixed with `file:`, e.g. "file:{project-root}/docs/standards.md" +# (glob patterns are supported; the file's contents are loaded and treated as facts). + +persistent_facts = [ + "file:{project-root}/**/project-context.md", +] + +# Scalar: executed when the workflow reaches its final step, +# after sprint-status.yaml is generated and validated. Override wins. +# Leave empty for no custom post-completion behavior. + +on_complete = "" diff --git a/src/bmm-skills/4-implementation/bmad-sprint-planning/workflow.md b/src/bmm-skills/4-implementation/bmad-sprint-planning/workflow.md deleted file mode 100644 index 99a2e2528..000000000 --- a/src/bmm-skills/4-implementation/bmad-sprint-planning/workflow.md +++ /dev/null @@ -1,263 +0,0 @@ -# Sprint Planning Workflow - -**Goal:** Generate sprint status tracking from epics, detecting current story statuses and building a complete sprint-status.yaml file. 
- -**Your Role:** You are a Developer generating and maintaining sprint tracking. Parse epic files, detect story statuses, and produce a structured sprint-status.yaml. - ---- - -## INITIALIZATION - -### Configuration Loading - -Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve: - -- `project_name`, `user_name` -- `communication_language`, `document_output_language` -- `implementation_artifacts` -- `planning_artifacts` -- `date` as system-generated current datetime -- YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}` - -### Paths - -- `tracking_system` = `file-system` -- `project_key` = `NOKEY` -- `story_location` = `{implementation_artifacts}` -- `story_location_absolute` = `{implementation_artifacts}` -- `epics_location` = `{planning_artifacts}` -- `epics_pattern` = `*epic*.md` -- `status_file` = `{implementation_artifacts}/sprint-status.yaml` - -### Input Files - -| Input | Path | Load Strategy | -|-------|------|---------------| -| Epics | `{planning_artifacts}/*epic*.md` (whole) or `{planning_artifacts}/*epic*/*.md` (sharded) | FULL_LOAD | - -### Context - -- `project_context` = `**/project-context.md` (load if exists) - ---- - -## EXECUTION - -### Document Discovery - Full Epic Loading - -**Strategy**: Sprint planning needs ALL epics and stories to build complete status tracking. - -**Epic Discovery Process:** - -1. **Search for whole document first** - Look for `epics.md`, `bmm-epics.md`, or any `*epic*.md` file -2. **Check for sharded version** - If whole document not found, look for `epics/index.md` -3. **If sharded version found**: - - Read `index.md` to understand the document structure - - Read ALL epic section files listed in the index (e.g., `epic-1.md`, `epic-2.md`, etc.) - - Process all epics and their stories from the combined content - - This ensures complete sprint status coverage -4. 
**Priority**: If both whole and sharded versions exist, use the whole document - -**Fuzzy matching**: Be flexible with document names - users may use variations like `epics.md`, `bmm-epics.md`, `user-stories.md`, etc. - - - - -Load {project_context} for project-wide patterns and conventions (if exists) -Communicate in {communication_language} with {user_name} -Look for all files matching `{epics_pattern}` in {epics_location} -Could be a single `epics.md` file or multiple `epic-1.md`, `epic-2.md` files - -For each epic file found, extract: - -- Epic numbers from headers like `## Epic 1:` or `## Epic 2:` -- Story IDs and titles from patterns like `### Story 1.1: User Authentication` -- Convert story format from `Epic.Story: Title` to kebab-case key: `epic-story-title` - -**Story ID Conversion Rules:** - -- Original: `### Story 1.1: User Authentication` -- Replace period with dash: `1-1` -- Convert title to kebab-case: `user-authentication` -- Final key: `1-1-user-authentication` - -Build complete inventory of all epics and stories from all epic files - - - -For each epic found, create entries in this order: - -1. **Epic entry** - Key: `epic-{num}`, Default status: `backlog` -2. **Story entries** - Key: `{epic}-{story}-{title}`, Default status: `backlog` -3. 
**Retrospective entry** - Key: `epic-{num}-retrospective`, Default status: `optional` - -**Example structure:** - -```yaml -development_status: - epic-1: backlog - 1-1-user-authentication: backlog - 1-2-account-management: backlog - epic-1-retrospective: optional -``` - - - - -For each story, detect current status by checking files: - -**Story file detection:** - -- Check: `{story_location_absolute}/{story-key}.md` (e.g., `stories/1-1-user-authentication.md`) -- If exists → upgrade status to at least `ready-for-dev` - -**Preservation rule:** - -- If existing `{status_file}` exists and has more advanced status, preserve it -- Never downgrade status (e.g., don't change `done` to `ready-for-dev`) - -**Status Flow Reference:** - -- Epic: `backlog` → `in-progress` → `done` -- Story: `backlog` → `ready-for-dev` → `in-progress` → `review` → `done` -- Retrospective: `optional` ↔ `done` - - - -Create or update {status_file} with: - -**File Structure:** - -```yaml -# generated: {date} -# last_updated: {date} -# project: {project_name} -# project_key: {project_key} -# tracking_system: {tracking_system} -# story_location: {story_location} - -# STATUS DEFINITIONS: -# ================== -# Epic Status: -# - backlog: Epic not yet started -# - in-progress: Epic actively being worked on -# - done: All stories in epic completed -# -# Epic Status Transitions: -# - backlog → in-progress: Automatically when first story is created (via create-story) -# - in-progress → done: Manually when all stories reach 'done' status -# -# Story Status: -# - backlog: Story only exists in epic file -# - ready-for-dev: Story file created in stories folder -# - in-progress: Developer actively working on implementation -# - review: Ready for code review (via Dev's code-review workflow) -# - done: Story completed -# -# Retrospective Status: -# - optional: Can be completed but not required -# - done: Retrospective has been completed -# -# WORKFLOW NOTES: -# =============== -# - Epic transitions to 
'in-progress' automatically when first story is created -# - Stories can be worked in parallel if team capacity allows -# - Developer typically creates next story after previous one is 'done' to incorporate learnings -# - Dev moves story to 'review', then runs code-review (fresh context, different LLM recommended) - -generated: { date } -last_updated: { date } -project: { project_name } -project_key: { project_key } -tracking_system: { tracking_system } -story_location: { story_location } - -development_status: - # All epics, stories, and retrospectives in order -``` - -Write the complete sprint status YAML to {status_file} -CRITICAL: Metadata appears TWICE - once as comments (#) for documentation, once as YAML key:value fields for parsing -Ensure all items are ordered: epic, its stories, its retrospective, next epic... - - - -Perform validation checks: - -- [ ] Every epic in epic files appears in {status_file} -- [ ] Every story in epic files appears in {status_file} -- [ ] Every epic has a corresponding retrospective entry -- [ ] No items in {status_file} that don't exist in epic files -- [ ] All status values are legal (match state machine definitions) -- [ ] File is valid YAML syntax - -Count totals: - -- Total epics: {{epic_count}} -- Total stories: {{story_count}} -- Epics in-progress: {{in_progress_count}} -- Stories done: {{done_count}} - -Display completion summary to {user_name} in {communication_language}: - -**Sprint Status Generated Successfully** - -- **File Location:** {status_file} -- **Total Epics:** {{epic_count}} -- **Total Stories:** {{story_count}} -- **Epics In Progress:** {{in_progress_count}} -- **Stories Completed:** {{done_count}} - -**Next Steps:** - -1. Review the generated {status_file} -2. Use this file to track development progress -3. Agents will update statuses as they work -4. 
Re-run this workflow to refresh auto-detected statuses - - - - - -## Additional Documentation - -### Status State Machine - -**Epic Status Flow:** - -``` -backlog → in-progress → done -``` - -- **backlog**: Epic not yet started -- **in-progress**: Epic actively being worked on (stories being created/implemented) -- **done**: All stories in epic completed - -**Story Status Flow:** - -``` -backlog → ready-for-dev → in-progress → review → done -``` - -- **backlog**: Story only exists in epic file -- **ready-for-dev**: Story file created (e.g., `stories/1-3-plant-naming.md`) -- **in-progress**: Developer actively working -- **review**: Ready for code review (via Dev's code-review workflow) -- **done**: Completed - -**Retrospective Status:** - -``` -optional ↔ done -``` - -- **optional**: Ready to be conducted but not required -- **done**: Finished - -### Guidelines - -1. **Epic Activation**: Mark epic as `in-progress` when starting work on its first story -2. **Sequential Default**: Stories are typically worked in order, but parallel work is supported -3. **Parallel Work Supported**: Multiple stories can be `in-progress` if team capacity allows -4. **Review Before Done**: Stories should pass through `review` before `done` -5. **Learning Transfer**: Developer typically creates next story after previous one is `done` to incorporate learnings diff --git a/src/bmm-skills/4-implementation/bmad-sprint-status/SKILL.md b/src/bmm-skills/4-implementation/bmad-sprint-status/SKILL.md index 3a15968e8..c52a84947 100644 --- a/src/bmm-skills/4-implementation/bmad-sprint-status/SKILL.md +++ b/src/bmm-skills/4-implementation/bmad-sprint-status/SKILL.md @@ -3,4 +3,295 @@ name: bmad-sprint-status description: 'Summarize sprint status and surface risks. Use when the user says "check sprint status" or "show sprint status"' --- -Follow the instructions in ./workflow.md. +# Sprint Status Workflow + +**Goal:** Summarize sprint status, surface risks, and recommend the next workflow action. 
+ +**Your Role:** You are a Developer providing clear, actionable sprint visibility. No time estimates — focus on status, risks, and next steps. + +## Conventions + +- Bare paths (e.g. `checklist.md`) resolve from the skill root. +- `{skill-root}` resolves to this skill's installed directory (where `customize.toml` lives). +- `{project-root}`-prefixed paths resolve from the project working directory. +- `{skill-name}` resolves to the skill directory's basename. + +## On Activation + +### Step 1: Resolve the Workflow Block + +Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow` + +**If the script fails**, resolve the `workflow` block yourself by reading these three files in base → team → user order and applying the same structural merge rules as the resolver: + +1. `{skill-root}/customize.toml` — defaults +2. `{project-root}/_bmad/custom/{skill-name}.toml` — team overrides +3. `{project-root}/_bmad/custom/{skill-name}.user.toml` — personal overrides + +Any missing file is skipped. Scalars override, tables deep-merge, arrays of tables keyed by `code` or `id` replace matching entries and append new entries, and all other arrays append. + +### Step 2: Execute Prepend Steps + +Execute each entry in `{workflow.activation_steps_prepend}` in order before proceeding. + +### Step 3: Load Persistent Facts + +Treat every entry in `{workflow.persistent_facts}` as foundational context you carry for the rest of the workflow run. Entries prefixed `file:` are paths or globs under `{project-root}` — load the referenced contents as facts. All other entries are facts verbatim. 
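The structural merge rules in Step 1 can be sketched in a few lines. This is a minimal illustration only, assuming each TOML layer has already been parsed into a plain dict (e.g., with `tomllib`); the helper names here are invented for the sketch and are not the actual API of `resolve_customization.py`:

```python
def _key_of(item):
    # Tables are matched on their `code` field, falling back to `id`.
    return item.get("code") or item.get("id")


def merge_keyed(current, incoming):
    """Arrays of tables keyed by `code`/`id`: replace matches, append new ones."""
    merged = list(current)
    for item in incoming:
        key = _key_of(item)
        for pos, existing in enumerate(merged):
            if key is not None and _key_of(existing) == key:
                merged[pos] = item  # replace the matching table
                break
        else:
            merged.append(item)     # no match: append as a new entry
    return merged


def merge_layer(base, override):
    """Apply one override layer per the BMad structural merge rules."""
    merged = dict(base)
    for key, value in override.items():
        current = merged.get(key)
        if isinstance(current, dict) and isinstance(value, dict):
            merged[key] = merge_layer(current, value)      # tables deep-merge
        elif isinstance(current, list) and isinstance(value, list):
            if all(isinstance(i, dict) for i in current + value):
                merged[key] = merge_keyed(current, value)  # arrays of tables
            else:
                merged[key] = current + value              # plain arrays append
        else:
            merged[key] = value                            # scalars: override wins
    return merged
```

Resolving the full block is then three applications of `merge_layer` in base → team → user order, where a missing file simply contributes an empty dict. Under these rules a team-level `on_complete` string replaces the base value, while a team-level `persistent_facts` list appends to it, matching the behavior described above.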
+ +### Step 4: Load Config + +Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve: + +- `project_name`, `user_name` +- `communication_language`, `document_output_language` +- `implementation_artifacts` +- `date` as system-generated current datetime +- YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}` + +### Step 5: Greet the User + +Greet `{user_name}`, speaking in `{communication_language}`. + +### Step 6: Execute Append Steps + +Execute each entry in `{workflow.activation_steps_append}` in order. + +Activation is complete. Begin the workflow below. + +## Paths + +- `sprint_status_file` = `{implementation_artifacts}/sprint-status.yaml` + +## Input Files + +| Input | Path | Load Strategy | +|-------|------|---------------| +| Sprint status | `{sprint_status_file}` | FULL_LOAD | + +## Execution + + + + + Set mode = {{mode}} if provided by caller; otherwise mode = "interactive" + + + Jump to Step 20 + + + + Jump to Step 30 + + + + Continue to Step 1 + + + + + Load {project_context} for project-wide patterns and conventions (if exists) + Try {sprint_status_file} + + sprint-status.yaml not found. +Run `/bmad:bmm:workflows:sprint-planning` to generate it, then rerun sprint-status. + Exit workflow + + Continue to Step 2 + + + + Read the FULL file: {sprint_status_file} + Parse fields: generated, last_updated, project, project_key, tracking_system, story_location + Parse development_status map. 
Classify keys: +- Epics: keys starting with "epic-" (and not ending with "-retrospective") +- Retrospectives: keys ending with "-retrospective" +- Stories: everything else (e.g., 1-2-login-form) + Map legacy story status "drafted" → "ready-for-dev" + Count story statuses: backlog, ready-for-dev, in-progress, review, done + Map legacy epic status "contexted" → "in-progress" + Count epic statuses: backlog, in-progress, done + Count retrospective statuses: optional, done + +Validate all statuses against known values: + +- Valid story statuses: backlog, ready-for-dev, in-progress, review, done, drafted (legacy) +- Valid epic statuses: backlog, in-progress, done, contexted (legacy) +- Valid retrospective statuses: optional, done + + + +**Unknown status detected:** +{{#each invalid_entries}} + +- `{{key}}`: "{{status}}" (not recognized) + {{/each}} + +**Valid statuses:** + +- Stories: backlog, ready-for-dev, in-progress, review, done +- Epics: backlog, in-progress, done +- Retrospectives: optional, done + + How should these be corrected? + {{#each invalid_entries}} + {{@index}}. {{key}}: "{{status}}" → [select valid status] + {{/each}} + +Enter corrections (e.g., "1=in-progress, 2=backlog") or "skip" to continue without fixing: + +Update sprint-status.yaml with corrected values +Re-parse the file with corrected statuses + + + +Detect risks: + +- IF any story has status "review": suggest `/bmad:bmm:workflows:code-review` +- IF any story has status "in-progress" AND no stories have status "ready-for-dev": recommend staying focused on active story +- IF all epics have status "backlog" AND no stories have status "ready-for-dev": prompt `/bmad:bmm:workflows:create-story` +- IF `last_updated` timestamp is more than 7 days old (if `last_updated` is missing, fall back to `generated`): warn "sprint-status.yaml may be stale" +- IF any story key doesn't match an epic pattern (e.g., story "5-1-..." 
but no "epic-5"): warn "orphaned story detected" +- IF any epic has status in-progress but has no associated stories: warn "in-progress epic has no stories" + + + + Pick the next recommended workflow using priority: + When selecting "first" story: sort by epic number, then story number (e.g., 1-1 before 1-2 before 2-1) + 1. If any story status == in-progress → recommend `dev-story` for the first in-progress story + 2. Else if any story status == review → recommend `code-review` for the first review story + 3. Else if any story status == ready-for-dev → recommend `dev-story` + 4. Else if any story status == backlog → recommend `create-story` + 5. Else if any retrospective status == optional → recommend `retrospective` + 6. Else → All implementation items done; congratulate the user - you both did amazing work together! + Store selected recommendation as: next_story_id, next_workflow_id, next_agent (DEV) + + + + +## Sprint Status + +- Project: {{project}} ({{project_key}}) +- Tracking: {{tracking_system}} +- Status file: {sprint_status_file} + +**Stories:** backlog {{count_backlog}}, ready-for-dev {{count_ready}}, in-progress {{count_in_progress}}, review {{count_review}}, done {{count_done}} + +**Epics:** backlog {{epic_backlog}}, in-progress {{epic_in_progress}}, done {{epic_done}} + +**Next Recommendation:** /bmad:bmm:workflows:{{next_workflow_id}} ({{next_story_id}}) + +{{#if risks}} +**Risks:** +{{#each risks}} + +- {{this}} + {{/each}} + {{/if}} + + + + + + Pick an option: +1) Run recommended workflow now +2) Show all stories grouped by status +3) Show raw sprint-status.yaml +4) Exit +Choice: + + + Run `/bmad:bmm:workflows:{{next_workflow_id}}`. +If the command targets a story, set `story_key={{next_story_id}}` when prompted. 
+ + + + +### Stories by Status +- In Progress: {{stories_in_progress}} +- Review: {{stories_in_review}} +- Ready for Dev: {{stories_ready_for_dev}} +- Backlog: {{stories_backlog}} +- Done: {{stories_done}} + + + + + Display the full contents of {sprint_status_file} + + + + Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete` — if the resolved value is non-empty, follow it as the final terminal instruction before exiting. + Exit workflow + + + + + + + + + Load and parse {sprint_status_file} same as Step 2 + Compute recommendation same as Step 3 + next_workflow_id = {{next_workflow_id}} + next_story_id = {{next_story_id}} + count_backlog = {{count_backlog}} + count_ready = {{count_ready}} + count_in_progress = {{count_in_progress}} + count_review = {{count_review}} + count_done = {{count_done}} + epic_backlog = {{epic_backlog}} + epic_in_progress = {{epic_in_progress}} + epic_done = {{epic_done}} + risks = {{risks}} + Return to caller + + + + + + + + Check that {sprint_status_file} exists + + is_valid = false + error = "sprint-status.yaml missing" + suggestion = "Run sprint-planning to create it" + Return + + +Read and parse {sprint_status_file} + +Validate required metadata fields exist: generated, project, project_key, tracking_system, story_location (last_updated is optional for backward compatibility) + +is_valid = false +error = "Missing required field(s): {{missing_fields}}" +suggestion = "Re-run sprint-planning or add missing fields manually" +Return + + +Verify development_status section exists with at least one entry + +is_valid = false +error = "development_status missing or empty" +suggestion = "Re-run sprint-planning or repair the file manually" +Return + + +Validate all status values against known valid statuses: + +- Stories: backlog, ready-for-dev, in-progress, review, done (legacy: drafted) +- Epics: backlog, in-progress, done (legacy: contexted) +- Retrospectives: optional, done + + 
is_valid = false + error = "Invalid status values: {{invalid_entries}}" + suggestion = "Fix invalid statuses in sprint-status.yaml" + Return + + +is_valid = true +message = "sprint-status.yaml valid: metadata complete, all statuses recognized" +Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete` — if the resolved value is non-empty, follow it as the final terminal instruction before exiting. + + + diff --git a/src/bmm-skills/4-implementation/bmad-sprint-status/customize.toml b/src/bmm-skills/4-implementation/bmad-sprint-status/customize.toml new file mode 100644 index 000000000..c3c5600c4 --- /dev/null +++ b/src/bmm-skills/4-implementation/bmad-sprint-status/customize.toml @@ -0,0 +1,41 @@ +# DO NOT EDIT -- overwritten on every update. +# +# Workflow customization surface for bmad-sprint-status. Mirrors the +# agent customization shape under the [workflow] namespace. + +[workflow] + +# --- Configurable below. Overrides merge per BMad structural rules: --- +# scalars: override wins • arrays (persistent_facts, activation_steps_*): append +# arrays-of-tables with `code`/`id`: replace matching items, append new ones. + +# Steps to run before the standard activation (config load, greet). +# Overrides append. Use for pre-flight loads, compliance checks, etc. + +activation_steps_prepend = [] + +# Steps to run after greet but before the workflow begins. +# Overrides append. Use for context-heavy setup that should happen +# once the user has been acknowledged. + +activation_steps_append = [] + +# Persistent facts the workflow keeps in mind for the whole run +# (standards, compliance constraints, stylistic guardrails). +# Distinct from the runtime memory sidecar — these are static context +# loaded on activation. Overrides append. +# +# Each entry is either: +# - a literal sentence, e.g. "All stories must include testable acceptance criteria." +# - a file reference prefixed with `file:`, e.g. 
"file:{project-root}/docs/standards.md" +# (glob patterns are supported; the file's contents are loaded and treated as facts). + +persistent_facts = [ + "file:{project-root}/**/project-context.md", +] + +# Scalar: executed when the workflow reaches its final step, +# after sprint status is summarized and risks are surfaced. Override wins. +# Leave empty for no custom post-completion behavior. + +on_complete = "" diff --git a/src/bmm-skills/4-implementation/bmad-sprint-status/workflow.md b/src/bmm-skills/4-implementation/bmad-sprint-status/workflow.md deleted file mode 100644 index 7b72c717c..000000000 --- a/src/bmm-skills/4-implementation/bmad-sprint-status/workflow.md +++ /dev/null @@ -1,261 +0,0 @@ -# Sprint Status Workflow - -**Goal:** Summarize sprint status, surface risks, and recommend the next workflow action. - -**Your Role:** You are a Developer providing clear, actionable sprint visibility. No time estimates — focus on status, risks, and next steps. - ---- - -## INITIALIZATION - -### Configuration Loading - -Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve: - -- `project_name`, `user_name` -- `communication_language`, `document_output_language` -- `implementation_artifacts` -- `date` as system-generated current datetime -- YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}` - -### Paths - -- `sprint_status_file` = `{implementation_artifacts}/sprint-status.yaml` - -### Input Files - -| Input | Path | Load Strategy | -|-------|------|---------------| -| Sprint status | `{sprint_status_file}` | FULL_LOAD | - -### Context - -- `project_context` = `**/project-context.md` (load if exists) - ---- - -## EXECUTION - - - - - Set mode = {{mode}} if provided by caller; otherwise mode = "interactive" - - - Jump to Step 20 - - - - Jump to Step 30 - - - - Continue to Step 1 - - - - - Load {project_context} for project-wide patterns and conventions (if exists) - Try {sprint_status_file} - - ❌ 
sprint-status.yaml not found. -Run `/bmad:bmm:workflows:sprint-planning` to generate it, then rerun sprint-status. - Exit workflow - - Continue to Step 2 - - - - Read the FULL file: {sprint_status_file} - Parse fields: generated, last_updated, project, project_key, tracking_system, story_location - Parse development_status map. Classify keys: - - Epics: keys starting with "epic-" (and not ending with "-retrospective") - - Retrospectives: keys ending with "-retrospective" - - Stories: everything else (e.g., 1-2-login-form) - Map legacy story status "drafted" → "ready-for-dev" - Count story statuses: backlog, ready-for-dev, in-progress, review, done - Map legacy epic status "contexted" → "in-progress" - Count epic statuses: backlog, in-progress, done - Count retrospective statuses: optional, done - -Validate all statuses against known values: - -- Valid story statuses: backlog, ready-for-dev, in-progress, review, done, drafted (legacy) -- Valid epic statuses: backlog, in-progress, done, contexted (legacy) -- Valid retrospective statuses: optional, done - - - -⚠️ **Unknown status detected:** -{{#each invalid_entries}} - -- `{{key}}`: "{{status}}" (not recognized) - {{/each}} - -**Valid statuses:** - -- Stories: backlog, ready-for-dev, in-progress, review, done -- Epics: backlog, in-progress, done -- Retrospectives: optional, done - - How should these be corrected? - {{#each invalid_entries}} - {{@index}}. 
{{key}}: "{{status}}" → [select valid status] - {{/each}} - -Enter corrections (e.g., "1=in-progress, 2=backlog") or "skip" to continue without fixing: - -Update sprint-status.yaml with corrected values -Re-parse the file with corrected statuses - - - -Detect risks: - -- IF any story has status "review": suggest `/bmad:bmm:workflows:code-review` -- IF any story has status "in-progress" AND no stories have status "ready-for-dev": recommend staying focused on active story -- IF all epics have status "backlog" AND no stories have status "ready-for-dev": prompt `/bmad:bmm:workflows:create-story` -- IF `last_updated` timestamp is more than 7 days old (or `last_updated` is missing, fall back to `generated`): warn "sprint-status.yaml may be stale" -- IF any story key doesn't match an epic pattern (e.g., story "5-1-..." but no "epic-5"): warn "orphaned story detected" -- IF any epic has status in-progress but has no associated stories: warn "in-progress epic has no stories" - - - - Pick the next recommended workflow using priority: - When selecting "first" story: sort by epic number, then story number (e.g., 1-1 before 1-2 before 2-1) - 1. If any story status == in-progress → recommend `dev-story` for the first in-progress story - 2. Else if any story status == review → recommend `code-review` for the first review story - 3. Else if any story status == ready-for-dev → recommend `dev-story` - 4. Else if any story status == backlog → recommend `create-story` - 5. Else if any retrospective status == optional → recommend `retrospective` - 6. Else → All implementation items done; congratulate the user - you both did amazing work together! 
- Store selected recommendation as: next_story_id, next_workflow_id, next_agent (DEV) - - - - -## 📊 Sprint Status - -- Project: {{project}} ({{project_key}}) -- Tracking: {{tracking_system}} -- Status file: {sprint_status_file} - -**Stories:** backlog {{count_backlog}}, ready-for-dev {{count_ready}}, in-progress {{count_in_progress}}, review {{count_review}}, done {{count_done}} - -**Epics:** backlog {{epic_backlog}}, in-progress {{epic_in_progress}}, done {{epic_done}} - -**Next Recommendation:** /bmad:bmm:workflows:{{next_workflow_id}} ({{next_story_id}}) - -{{#if risks}} -**Risks:** -{{#each risks}} - -- {{this}} - {{/each}} - {{/if}} - - - - - - Pick an option: -1) Run recommended workflow now -2) Show all stories grouped by status -3) Show raw sprint-status.yaml -4) Exit -Choice: - - - Run `/bmad:bmm:workflows:{{next_workflow_id}}`. -If the command targets a story, set `story_key={{next_story_id}}` when prompted. - - - - -### Stories by Status -- In Progress: {{stories_in_progress}} -- Review: {{stories_in_review}} -- Ready for Dev: {{stories_ready_for_dev}} -- Backlog: {{stories_backlog}} -- Done: {{stories_done}} - - - - - Display the full contents of {sprint_status_file} - - - - Exit workflow - - - - - - - - - Load and parse {sprint_status_file} same as Step 2 - Compute recommendation same as Step 3 - next_workflow_id = {{next_workflow_id}} - next_story_id = {{next_story_id}} - count_backlog = {{count_backlog}} - count_ready = {{count_ready}} - count_in_progress = {{count_in_progress}} - count_review = {{count_review}} - count_done = {{count_done}} - epic_backlog = {{epic_backlog}} - epic_in_progress = {{epic_in_progress}} - epic_done = {{epic_done}} - risks = {{risks}} - Return to caller - - - - - - - - Check that {sprint_status_file} exists - - is_valid = false - error = "sprint-status.yaml missing" - suggestion = "Run sprint-planning to create it" - Return - - -Read and parse {sprint_status_file} - -Validate required metadata fields exist: generated, 
project, project_key, tracking_system, story_location (last_updated is optional for backward compatibility) - -is_valid = false -error = "Missing required field(s): {{missing_fields}}" -suggestion = "Re-run sprint-planning or add missing fields manually" -Return - - -Verify development_status section exists with at least one entry - -is_valid = false -error = "development_status missing or empty" -suggestion = "Re-run sprint-planning or repair the file manually" -Return - - -Validate all status values against known valid statuses: - -- Stories: backlog, ready-for-dev, in-progress, review, done (legacy: drafted) -- Epics: backlog, in-progress, done (legacy: contexted) -- Retrospectives: optional, done - - is_valid = false - error = "Invalid status values: {{invalid_entries}}" - suggestion = "Fix invalid statuses in sprint-status.yaml" - Return - - -is_valid = true -message = "sprint-status.yaml valid: metadata complete, all statuses recognized" - - - From c29b72ecc0177f98658eaa3233e5a4fbf47b8c9b Mon Sep 17 00:00:00 2001 From: Pablo Ontiveros Date: Sat, 25 Apr 2026 01:21:10 +0200 Subject: [PATCH 04/23] fix(create-story): read UPDATE files before generating dev notes (#2274) When a story modifies existing files, create-story must read those files before generating dev notes. Without this, dev agents improvise design decisions without knowing the current state of the code, leading to regressions caught only at review time. Adds a step at the end of Step 3 (Architecture analysis) that reads every file marked UPDATE in the architecture directory structure and documents its current state, what the story changes, and what must be preserved. 
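The UPDATE-file scan the commit describes can be illustrated with a short sketch. This is hypothetical code, since the actual step is prose guidance in SKILL.md; it assumes a directory-structure listing where each file line carries an UPDATE or NEW annotation:

```javascript
// Hypothetical sketch only: the real step is prose guidance, not code.
// Assumes architecture lines like "src/auth/session.js  (UPDATE: add refresh flow)".
function filesToReadBeforeDevNotes(architectureText) {
  const updates = [];
  for (const line of architectureText.split('\n')) {
    const m = line.match(/^\s*([\w./-]+\.\w+)\s+\(UPDATE\b/);
    // NEW-marked files are skipped: they have no current state to preserve.
    if (m) updates.push(m[1]);
  }
  return updates;
}
```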
Fixes #2273 Co-authored-by: Brian Madison --- .../4-implementation/bmad-create-story/SKILL.md | 12 ++++++++++++ 1 file changed, 12 insertions(+) diff --git a/src/bmm-skills/4-implementation/bmad-create-story/SKILL.md b/src/bmm-skills/4-implementation/bmad-create-story/SKILL.md index b746b9f57..cf14039c1 100644 --- a/src/bmm-skills/4-implementation/bmad-create-story/SKILL.md +++ b/src/bmm-skills/4-implementation/bmad-create-story/SKILL.md @@ -302,6 +302,18 @@ Activation is complete. Begin the workflow below. processes - **Integration Patterns:** External service integrations, data flows Extract any story-specific requirements that the developer MUST follow Identify any architectural decisions that override previous patterns + + + 📂 READ FILES BEING MODIFIED — skipping this is the primary cause of implementation failures and review cycles + From the architecture directory structure, identify every file marked UPDATE (not NEW) that this story will touch + Read each relevant UPDATE file completely. For each one, document in dev notes: + - Current state: what it does today (state machine, API calls, data shapes, existing behaviors) + - What this story changes: the specific sections or behaviors being modified + - What must be preserved: existing interactions and behaviors the story must not break + + A story implementation must leave the system working end-to-end — not just satisfy its stated ACs. + If a behavior is required for the feature to work correctly in the existing system, it is a requirement + whether or not it is explicitly written in the story. The dev agent owns this. From 9ff9d6f8f301e162bbcc6b37d5b1028fb27fd0b4 Mon Sep 17 00:00:00 2001 From: Yahya Bin Naveed <57190471+TheAntiFlash@users.noreply.github.com> Date: Sat, 25 Apr 2026 04:22:09 +0500 Subject: [PATCH 05/23] feat: add Kimi Code CLI support (#2302) Adds kimi-code to both platform-codes.yaml files so Kimi Code CLI is available as an install target via the config-driven installer. 
Skills are installed to .kimi/skills/, which is the project-level skills directory per the official Kimi Code CLI documentation. Closes #1630 Co-authored-by: Brian --- tools/installer/ide/platform-codes.yaml | 6 ++++++ tools/platform-codes.yaml | 6 ++++++ 2 files changed, 12 insertions(+) diff --git a/tools/installer/ide/platform-codes.yaml b/tools/installer/ide/platform-codes.yaml index 4b08046f1..1899473c0 100644 --- a/tools/installer/ide/platform-codes.yaml +++ b/tools/installer/ide/platform-codes.yaml @@ -114,6 +114,12 @@ platforms: - .kilocode/workflows target_dir: .kilocode/skills + kimi-code: + name: "Kimi Code" + preferred: false + installer: + target_dir: .kimi/skills + kiro: name: "Kiro" preferred: false diff --git a/tools/platform-codes.yaml b/tools/platform-codes.yaml index 7227af0ce..f57e9ef5c 100644 --- a/tools/platform-codes.yaml +++ b/tools/platform-codes.yaml @@ -103,6 +103,12 @@ platforms: category: ide description: "AI coding platform" + kimi-code: + name: "Kimi Code" + preferred: false + category: cli + description: "Moonshot AI's Kimi Code CLI" + crush: name: "Crush" preferred: false From 314fe69d14bc9dcdbc5e918f7859b2f692b925bf Mon Sep 17 00:00:00 2001 From: Brian Date: Fri, 24 Apr 2026 22:31:01 -0500 Subject: [PATCH 06/23] docs: add v6.4.0 changelog entry (#2310) --- CHANGELOG.md | 62 ++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 62 insertions(+) diff --git a/CHANGELOG.md b/CHANGELOG.md index b67ee2f62..bcd28889a 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,5 +1,67 @@ # Changelog +## v6.4.0 - 2026-04-24 + +### ✨ Headline + +**Full agent and workflow customization across the entire BMad Method.** Every agent and workflow in BMM, Core, CIS, GDS, and TEA can now be customized via TOML overrides in `_bmad/custom/`. Customize agents to apply tooling, version control, or behavior changes across whole groups of workflows. Drop in fine-grained per-workflow overrides where you need them. 
Built for power users who want BMad to fit their stack without forking. + +**Stable and bleeding-edge release channels, standardized across all modules.** Pick `stable` or `next` per module, pin specific versions, and switch channels interactively or via CLI flags (`--channel`, `--all-stable`, `--all-next`, `--next=CODE`, `--pin CODE=TAG`). Same model across BMM, Core, and every external module. + +### 💥 Breaking Changes + +* Customization is now TOML-based; the briefly introduced YAML-based customization is no longer supported (#2284, #2283) + +### 🎁 Features + +**Customization framework** + +* TOML-based agent and workflow customization with flat schema, structural merge rules (scalars, tables, code-keyed arrays, append arrays), and `persistent_facts` unification (#2284) +* Central `_bmad/config.toml` surface with four-file architecture (`config.toml`, `config.user.toml`, `custom/config.toml`, `custom/config.user.toml`) for agent roster and scope-partitioned install answers (#2285) +* `customize.toml` support extended to 17 bmm-skills workflows with flattened SKILL.md architecture and standardized `[workflow]` block (#2287) +* `customize.toml` extended to all six developer-execution workflows: bmad-dev-story, bmad-code-review, bmad-sprint-planning, bmad-sprint-status, bmad-quick-dev, bmad-checkpoint-preview (#2308) +* `bmad-customize` skill — guided authoring of TOML overrides in `_bmad/custom/` with stdlib-only resolver verification (#2289) +* Wire `on_complete` hook into all 23 workflow terminal steps with full customize.toml documentation (#2290) + +**Release channels & installer** + +* Channel-based version resolution for external modules with interactive channel management (`stable` / `next` / `pinned`) and CLI flags (`--channel`, `--all-stable`, `--all-next`, `--next=CODE`, `--pin CODE=TAG`) (#2305) +* GitHub API as primary fetch with raw CDN fallback in installer registry client to support corporate proxies (#2248) + +**Other** + +* Kimi Code CLI support 
for installing BMM skills in `.kimi/skills/` (#2302) +* `bmad-create-story` now reads every UPDATE-marked file before generating dev notes so brownfield stories preserve current behavior instead of improvising at implementation time (#2274) +* Sync `sprint-status.yaml` from quick-dev on epic-story implementation with idempotent writes tracking `in-progress` and `review` transitions (#2234) +* Enforce model parity for all code review subagents to match orchestrator session capability for improved rare-event detection (#2236) +* Set `team: software-development` on all six BMM agents for unified grouping in party-mode and retrospective skills (#2286) + +### 🐛 Bug Fixes + +* PRD workflow no longer silently de-scopes user requirements or invents MVP/Growth/Vision phasing; requires explicit confirmation before any scope reduction (#1927) +* Installer shows live npm version for external modules instead of stale cached metadata (#2307) +* Resolve external-module agents from cache during manifest write so agents land in `config.toml` (#2295) +* Fix installer version resolution for external modules with shared resolver preferring package.json > module.yaml > marketplace.json (#2298) +* Replace fs-extra with native `node:fs` to prevent file loss during multi-module installs from deferred retry-queue races (#2253) +* Add `move()` and overwrite support to fs-native wrapper for directory migrations during upgrades (#2253) +* Stop skill scanner from recursing into discovered skills to prevent spurious errors on nested template files (#2255) +* Source built-in modules locally in installer UI to preserve core and bmm in module list when registry is unreachable (#2251) +* Remove dead Batch-apply option from code-review patch menu and rename apply options for clarity (#2225) + +### ♻️ Refactoring + +* Remove 1,683 lines of dead code: three entirely dead files (agent-command-generator.js, bmad-artifacts.js, module-injections.js) and ~50 unused exports across installer modules (#2247) 
+* Remove dead template and agent-command pipeline from installer; SKILL.md directory copying is the sole installation path (#2244) + +### 📚 Documentation + +* Sync and update Vietnamese (vi-VN) docs with missing pages and refreshed translations (#2291, #2222) +* Sync French (fr-FR) translations with upstream, restore Amelia as dev agent, fix sidebar ordering (#2231) +* Add Czech (cs-CZ) `analysis-phase.md` translation; normalize typographic quotes (#2240, #2241, #2242) +* Add missing Chinese (zh-CN) translations for 3 documents (#2254) +* Update stale Analyst agent triggers and add PRFAQ link (#2238) +* Remove Bob from workflow map diagrams reflecting consolidation into Amelia in v6.3.0 (#2252) + ## v6.3.0 - 2026-04-09 ### 💥 Breaking Changes From 119712200115c835521b1a72f209f4a8f1b10901 Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" Date: Sat, 25 Apr 2026 03:34:02 +0000 Subject: [PATCH 07/23] chore(release): v6.4.0 [skip ci] --- package-lock.json | 4 ++-- package.json | 2 +- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/package-lock.json b/package-lock.json index d547eff9a..0bd26eff7 100644 --- a/package-lock.json +++ b/package-lock.json @@ -1,12 +1,12 @@ { "name": "bmad-method", - "version": "6.3.0", + "version": "6.4.0", "lockfileVersion": 3, "requires": true, "packages": { "": { "name": "bmad-method", - "version": "6.3.0", + "version": "6.4.0", "license": "MIT", "dependencies": { "@clack/core": "^1.0.0", diff --git a/package.json b/package.json index c1e8b4941..f34e2e84b 100644 --- a/package.json +++ b/package.json @@ -1,7 +1,7 @@ { "$schema": "https://json.schemastore.org/package.json", "name": "bmad-method", - "version": "6.3.0", + "version": "6.4.0", "description": "Breakthrough Method of Agile AI-driven Development", "keywords": [ "agile", From 01cc32540b5f4eb3c0f6befb5b6c7084250cdd66 Mon Sep 17 00:00:00 2001 From: Brian Date: Sat, 25 Apr 2026 21:14:00 -0500 Subject: [PATCH 08/23] feat(installer): expand to 42 platforms with shared 
target_dir coordination (#2313) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * refactor(installer): replace legacy_targets auto-cleanup with upgrade warnings Removes the legacy_targets YAML field and its install-time auto-migration of pre-v6.1.0 directories (.claude/commands, .opencode/agents, etc.). On install, surface a warning instead: read manifest version and scan 24 known legacy paths, then print rm -rf commands the user can run themselves. Also deletes orphan tools/platform-codes.yaml (never loaded by any code) and fixes a stale URL in the cs translation. * feat(installer): consolidate to .agents/skills and add global_target_dir for all platforms Updates platform-codes.yaml against verified primary docs for all 24 supported platforms. 14 platforms (auggie, codex, crush, cursor, gemini, github-copilot, kilo, kimi-code, opencode, pi, roo, rovo-dev, windsurf) move their project target_dir to the cross-tool .agents/skills/ standard. Junie moves from the broken .agents/skills/ to its own .junie/skills/ per JetBrains docs. Adds global_target_dir to every platform: 11 share ~/.agents/skills/, Crush uses XDG ~/.config/agents/skills/, Codex global stays ~/.codex/skills/, the rest are tool-specific. Ona and Trae omit global (no documented home path). Note: installer logic does not yet dedupe writes for platforms sharing a target_dir — users installing multiple .agents/skills/ tools together will overwrite the same files (harmless on install, but uninstalling one clears the dir for the others). Coordination logic is the next step. * feat(installer): add 18 new platforms, dedup shared target_dir, ownership-aware cleanup Adds 18 platforms from the verified Vercel list (adal, amp, bob, command-code, cortex, droid, firebender, goose, kode, mistral-vibe, mux, neovate, openclaw, openhands, pochi, replit, warp, zencoder). Marks codex and github-copilot as preferred alongside claude-code and cursor. 
Coordination for platforms sharing a target_dir: - IdeManager.setupBatch dedups skill writes when multiple selected platforms point at the same target_dir (e.g. .agents/skills/). The first platform writes, peers skip the redundant wipe-and-rewrite. Result reports the same count and target dir for every member so the install summary is consistent. - IdeManager.cleanupByList accepts remainingIdes; when removing one platform from a shared dir while another co-installed platform still owns it, the target_dir wipe is skipped. Platform-specific hooks (copilot markers, kilo modes, rovodev prompts) still run. - _setupIdes uses setupBatch; _removeDeselectedIdes passes remainingIdes so partial reconfigure preserves shared skills. Skill ownership now uses skill-manifest.csv canonicalIds, not the bmad- prefix. This unblocks custom modules that ship skills with non-bmad names (e.g. fred-cool-skill). Affected sites: - _config-driven.detect: reads canonicalIds from the project's bmadDir - _config-driven.findAncestorConflict: reads canonicalIds from the ancestor's own bmadDir, falling back to the prefix only when no manifest exists - legacy-warnings.findStaleLegacyDirs: same canonicalId-based detection Migration warnings: LEGACY_SKILL_PATHS adds 12 skill dirs that moved to the .agents/skills/ standard (cursor, gemini, github-copilot, kimi, opencode, pi, roo, rovodev, windsurf, plus their globals). Users with stale skills in those locations get a one-line warning with the rm command per dir. New shared helper tools/installer/ide/shared/installed-skills.js exposes getInstalledCanonicalIds(bmadDir) and isBmadOwnedEntry(entry, canonicalIds). Tests: 9 new assertions across two suites covering dedup, partial uninstall preservation, and custom-module skill detection. All 286 tests pass. 
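The canonical-id ownership check can be sketched as follows. The function name and signature come from the commit message above; the body is an assumed illustration, and the real implementation in installed-skills.js may differ:

```javascript
// Sketch of ownership detection by canonical id rather than name prefix.
// entry: a directory/file name in the platform's skills dir.
// canonicalIds: Set of ids read from skill-manifest.csv, or null if absent.
function isBmadOwnedEntry(entry, canonicalIds) {
  if (canonicalIds && canonicalIds.size > 0) {
    // Custom-module skills with non-bmad names (e.g. fred-cool-skill) are caught here.
    return canonicalIds.has(entry);
  }
  // Fallback when no manifest exists: the old prefix heuristic.
  return entry.startsWith('bmad-');
}
```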
* fix(installer): setupBatch must not claim a shared target_dir on failure If the first platform's setup throws or returns success: false, the dedup map previously still recorded the claim with skillCount: 0, causing every peer sharing the target_dir to skip its install — leaving the dir empty/broken behind a cascade of misleading "shares with X" rows. Now the claim is only recorded when the install succeeded and wrote skills. On failure, the next peer becomes the new first writer and recovers. Adds Suite 40b regression test that monkey-patches cursor.setup to throw and verifies gemini still populates the shared dir. * fix(installer): address PR #2313 review findings Three issues raised by augmentcode and coderabbit bot reviewers: 1. _removeDeselectedIdes silently swallowed cleanup failures after the refactor to cleanupByList. The old per-IDE try/catch logged a warning; the new path discarded the result array. Now logs a warning per failed ide so failures stay visible. 2. The legacy-dir cleanup hint printed `rm -rf ""/bmad*` which both matched bmad-os-* utility skills the user should keep AND missed the custom-module skills (e.g. fred-cool-skill) that the new canonical-id detection now finds. Findings now carry the exact entry names from the scan, and the warning prints one precise rm line per entry. 3. warnPreNativeSkillsLegacy did unguarded fs reads at install start. A permission/IO error would have aborted the whole install. Wrapped the call site in try/catch so legacy-scan failures only emit a warning. 
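The fixed claim rule for shared target_dirs can be sketched roughly like this. It illustrates the invariant only, with hypothetical shapes; the actual logic lives in IdeManager.setupBatch:

```javascript
// Sketch of the fixed claim rule: a platform only "claims" a shared target_dir
// after a successful install that actually wrote skills. Hypothetical shape;
// not the actual IdeManager code.
function setupBatch(platforms, installFn) {
  const claims = new Map(); // targetDir -> { platform, skillCount }
  const results = [];
  for (const p of platforms) {
    const prior = claims.get(p.targetDir);
    if (prior) {
      // A peer already populated this dir; skip the redundant wipe-and-rewrite
      // but report the same count so the install summary stays consistent.
      results.push({ platform: p.code, sharedWith: prior.platform, skillCount: prior.skillCount });
      continue;
    }
    let res;
    try {
      res = installFn(p);
    } catch {
      res = { success: false, skillCount: 0 };
    }
    // Only record the claim on success with skills written; on failure the
    // next peer becomes the first writer and recovers the shared dir.
    if (res.success && res.skillCount > 0) {
      claims.set(p.targetDir, { platform: p.code, skillCount: res.skillCount });
    }
    results.push({ platform: p.code, ...res });
  }
  return results;
}
```

Under this rule, if the first writer into `.agents/skills/` fails, the second platform in the batch still installs instead of skipping against an empty claim.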
--- .../cs/how-to/non-interactive-installation.md | 2 +- test/test-installation-components.js | 426 ++++++++---------- .../docs/native-skills-migration-checklist.md | 4 - tools/installer/core/installer.js | 43 +- tools/installer/core/legacy-warnings.js | 151 +++++++ tools/installer/ide/_config-driven.js | 141 ++---- tools/installer/ide/manager.js | 85 +++- tools/installer/ide/platform-codes.yaml | 226 +++++++--- .../installer/ide/shared/installed-skills.js | 50 ++ tools/platform-codes.yaml | 175 ------- 10 files changed, 685 insertions(+), 618 deletions(-) create mode 100644 tools/installer/core/legacy-warnings.js create mode 100644 tools/installer/ide/shared/installed-skills.js delete mode 100644 tools/platform-codes.yaml diff --git a/docs/cs/how-to/non-interactive-installation.md b/docs/cs/how-to/non-interactive-installation.md index 12ea31eb3..4d784f923 100644 --- a/docs/cs/how-to/non-interactive-installation.md +++ b/docs/cs/how-to/non-interactive-installation.md @@ -60,7 +60,7 @@ Dostupná ID nástrojů pro příznak `--tools`: **Preferované:** `claude-code`, `cursor` -Spusťte `npx bmad-method install` interaktivně jednou pro zobrazení aktuálního seznamu podporovaných nástrojů, nebo zkontrolujte [konfiguraci kódů platforem](https://github.com/bmad-code-org/BMAD-METHOD/blob/main/tools/cli/installers/lib/ide/platform-codes.yaml). +Spusťte `npx bmad-method install` interaktivně jednou pro zobrazení aktuálního seznamu podporovaných nástrojů, nebo zkontrolujte [konfiguraci kódů platforem](https://github.com/bmad-code-org/BMAD-METHOD/blob/main/tools/installer/ide/platform-codes.yaml). 
## Režimy instalace diff --git a/test/test-installation-components.js b/test/test-installation-components.js index 58d6c7d8f..4827afcbf 100644 --- a/test/test-installation-components.js +++ b/test/test-installation-components.js @@ -139,19 +139,10 @@ async function runTests() { const platformCodes = await loadPlatformCodes(); const windsurfInstaller = platformCodes.platforms.windsurf?.installer; - assert(windsurfInstaller?.target_dir === '.windsurf/skills', 'Windsurf target_dir uses native skills path'); - - assert( - Array.isArray(windsurfInstaller?.legacy_targets) && windsurfInstaller.legacy_targets.includes('.windsurf/workflows'), - 'Windsurf installer cleans legacy workflow output', - ); + assert(windsurfInstaller?.target_dir === '.agents/skills', 'Windsurf target_dir uses native skills path'); const tempProjectDir = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-windsurf-test-')); const installedBmadDir = await createTestBmadFixture(); - const legacyDir = path.join(tempProjectDir, '.windsurf', 'workflows', 'bmad-legacy-dir'); - await fs.ensureDir(legacyDir); - await fs.writeFile(path.join(tempProjectDir, '.windsurf', 'workflows', 'bmad-legacy.md'), 'legacy\n'); - await fs.writeFile(path.join(legacyDir, 'SKILL.md'), 'legacy\n'); const ideManager = new IdeManager(); await ideManager.ensureInitialized(); @@ -162,11 +153,9 @@ async function runTests() { assert(result.success === true, 'Windsurf setup succeeds against temp project'); - const skillFile = path.join(tempProjectDir, '.windsurf', 'skills', 'bmad-master', 'SKILL.md'); + const skillFile = path.join(tempProjectDir, '.agents', 'skills', 'bmad-master', 'SKILL.md'); assert(await fs.pathExists(skillFile), 'Windsurf install writes SKILL.md directory output'); - assert(!(await fs.pathExists(path.join(tempProjectDir, '.windsurf', 'workflows'))), 'Windsurf setup removes legacy workflows dir'); - await fs.remove(tempProjectDir); await fs.remove(path.dirname(installedBmadDir)); } catch (error) { @@ -187,17 +176,8 @@ 
async function runTests() { assert(kiroInstaller?.target_dir === '.kiro/skills', 'Kiro target_dir uses native skills path'); - assert( - Array.isArray(kiroInstaller?.legacy_targets) && kiroInstaller.legacy_targets.includes('.kiro/steering'), - 'Kiro installer cleans legacy steering output', - ); - const tempProjectDir = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-kiro-test-')); const installedBmadDir = await createTestBmadFixture(); - const legacyDir = path.join(tempProjectDir, '.kiro', 'steering', 'bmad-legacy-dir'); - await fs.ensureDir(legacyDir); - await fs.writeFile(path.join(tempProjectDir, '.kiro', 'steering', 'bmad-legacy.md'), 'legacy\n'); - await fs.writeFile(path.join(legacyDir, 'SKILL.md'), 'legacy\n'); const ideManager = new IdeManager(); await ideManager.ensureInitialized(); @@ -211,8 +191,6 @@ async function runTests() { const skillFile = path.join(tempProjectDir, '.kiro', 'skills', 'bmad-master', 'SKILL.md'); assert(await fs.pathExists(skillFile), 'Kiro install writes SKILL.md directory output'); - assert(!(await fs.pathExists(path.join(tempProjectDir, '.kiro', 'steering'))), 'Kiro setup removes legacy steering dir'); - await fs.remove(tempProjectDir); await fs.remove(path.dirname(installedBmadDir)); } catch (error) { @@ -233,17 +211,8 @@ async function runTests() { assert(antigravityInstaller?.target_dir === '.agent/skills', 'Antigravity target_dir uses native skills path'); - assert( - Array.isArray(antigravityInstaller?.legacy_targets) && antigravityInstaller.legacy_targets.includes('.agent/workflows'), - 'Antigravity installer cleans legacy workflow output', - ); - const tempProjectDir = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-antigravity-test-')); const installedBmadDir = await createTestBmadFixture(); - const legacyDir = path.join(tempProjectDir, '.agent', 'workflows', 'bmad-legacy-dir'); - await fs.ensureDir(legacyDir); - await fs.writeFile(path.join(tempProjectDir, '.agent', 'workflows', 'bmad-legacy.md'), 'legacy\n'); - await 
fs.writeFile(path.join(legacyDir, 'SKILL.md'), 'legacy\n'); const ideManager = new IdeManager(); await ideManager.ensureInitialized(); @@ -257,8 +226,6 @@ async function runTests() { const skillFile = path.join(tempProjectDir, '.agent', 'skills', 'bmad-master', 'SKILL.md'); assert(await fs.pathExists(skillFile), 'Antigravity install writes SKILL.md directory output'); - assert(!(await fs.pathExists(path.join(tempProjectDir, '.agent', 'workflows'))), 'Antigravity setup removes legacy workflows dir'); - await fs.remove(tempProjectDir); await fs.remove(path.dirname(installedBmadDir)); } catch (error) { @@ -277,12 +244,7 @@ async function runTests() { const platformCodes = await loadPlatformCodes(); const auggieInstaller = platformCodes.platforms.auggie?.installer; - assert(auggieInstaller?.target_dir === '.augment/skills', 'Auggie target_dir uses native skills path'); - - assert( - Array.isArray(auggieInstaller?.legacy_targets) && auggieInstaller.legacy_targets.includes('.augment/commands'), - 'Auggie installer cleans legacy command output', - ); + assert(auggieInstaller?.target_dir === '.agents/skills', 'Auggie target_dir uses native skills path'); assert( auggieInstaller?.ancestor_conflict_check !== true, @@ -291,10 +253,6 @@ async function runTests() { const tempProjectDir = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-auggie-test-')); const installedBmadDir = await createTestBmadFixture(); - const legacyDir = path.join(tempProjectDir, '.augment', 'commands', 'bmad-legacy-dir'); - await fs.ensureDir(legacyDir); - await fs.writeFile(path.join(tempProjectDir, '.augment', 'commands', 'bmad-legacy.md'), 'legacy\n'); - await fs.writeFile(path.join(legacyDir, 'SKILL.md'), 'legacy\n'); const ideManager = new IdeManager(); await ideManager.ensureInitialized(); @@ -305,11 +263,9 @@ async function runTests() { assert(result.success === true, 'Auggie setup succeeds against temp project'); - const skillFile = path.join(tempProjectDir, '.augment', 'skills', 'bmad-master', 
'SKILL.md'); + const skillFile = path.join(tempProjectDir, '.agents', 'skills', 'bmad-master', 'SKILL.md'); assert(await fs.pathExists(skillFile), 'Auggie install writes SKILL.md directory output'); - assert(!(await fs.pathExists(path.join(tempProjectDir, '.augment', 'commands'))), 'Auggie setup removes legacy commands dir'); - await fs.remove(tempProjectDir); await fs.remove(path.dirname(installedBmadDir)); } catch (error) { @@ -328,30 +284,10 @@ async function runTests() { const platformCodes = await loadPlatformCodes(); const opencodeInstaller = platformCodes.platforms.opencode?.installer; - assert(opencodeInstaller?.target_dir === '.opencode/skills', 'OpenCode target_dir uses native skills path'); - - assert( - Array.isArray(opencodeInstaller?.legacy_targets) && - ['.opencode/agents', '.opencode/commands', '.opencode/agent', '.opencode/command'].every((legacyTarget) => - opencodeInstaller.legacy_targets.includes(legacyTarget), - ), - 'OpenCode installer cleans split legacy agent and command output', - ); + assert(opencodeInstaller?.target_dir === '.agents/skills', 'OpenCode target_dir uses native skills path'); const tempProjectDir = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-opencode-test-')); const installedBmadDir = await createTestBmadFixture(); - const legacyDirs = [ - path.join(tempProjectDir, '.opencode', 'agents', 'bmad-legacy-agent'), - path.join(tempProjectDir, '.opencode', 'commands', 'bmad-legacy-command'), - path.join(tempProjectDir, '.opencode', 'agent', 'bmad-legacy-agent-singular'), - path.join(tempProjectDir, '.opencode', 'command', 'bmad-legacy-command-singular'), - ]; - - for (const legacyDir of legacyDirs) { - await fs.ensureDir(legacyDir); - await fs.writeFile(path.join(legacyDir, 'SKILL.md'), 'legacy\n'); - await fs.writeFile(path.join(path.dirname(legacyDir), `${path.basename(legacyDir)}.md`), 'legacy\n'); - } const ideManager = new IdeManager(); await ideManager.ensureInitialized(); @@ -362,16 +298,9 @@ async function runTests() { 
assert(result.success === true, 'OpenCode setup succeeds against temp project'); - const skillFile = path.join(tempProjectDir, '.opencode', 'skills', 'bmad-master', 'SKILL.md'); + const skillFile = path.join(tempProjectDir, '.agents', 'skills', 'bmad-master', 'SKILL.md'); assert(await fs.pathExists(skillFile), 'OpenCode install writes SKILL.md directory output'); - for (const legacyDir of ['agents', 'commands', 'agent', 'command']) { - assert( - !(await fs.pathExists(path.join(tempProjectDir, '.opencode', legacyDir))), - `OpenCode setup removes legacy .opencode/${legacyDir} dir`, - ); - } - await fs.remove(tempProjectDir); await fs.remove(path.dirname(installedBmadDir)); } catch (error) { @@ -392,16 +321,8 @@ async function runTests() { assert(claudeInstaller?.target_dir === '.claude/skills', 'Claude Code target_dir uses native skills path'); - assert( - Array.isArray(claudeInstaller?.legacy_targets) && claudeInstaller.legacy_targets.includes('.claude/commands'), - 'Claude Code installer cleans legacy command output', - ); - const tempProjectDir9 = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-claude-code-test-')); const installedBmadDir9 = await createTestBmadFixture(); - const legacyDir9 = path.join(tempProjectDir9, '.claude', 'commands'); - await fs.ensureDir(legacyDir9); - await fs.writeFile(path.join(legacyDir9, 'bmad-legacy.md'), 'legacy\n'); const ideManager9 = new IdeManager(); await ideManager9.ensureInitialized(); @@ -420,8 +341,6 @@ async function runTests() { const nameMatch9 = skillContent9.match(/^name:\s*(.+)$/m); assert(nameMatch9 && nameMatch9[1].trim() === 'bmad-master', 'Claude Code skill name frontmatter matches directory name exactly'); - assert(!(await fs.pathExists(legacyDir9)), 'Claude Code setup removes legacy commands dir'); - await fs.remove(tempProjectDir9); await fs.remove(path.dirname(installedBmadDir9)); } catch (error) { @@ -444,16 +363,8 @@ async function runTests() { assert(codexInstaller?.target_dir === '.agents/skills', 'Codex 
target_dir uses native skills path');
-    assert(
-      Array.isArray(codexInstaller?.legacy_targets) && codexInstaller.legacy_targets.includes('.codex/prompts'),
-      'Codex installer cleans legacy prompt output',
-    );
-
     const tempProjectDir11 = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-codex-test-'));
     const installedBmadDir11 = await createTestBmadFixture();
-    const legacyDir11 = path.join(tempProjectDir11, '.codex', 'prompts');
-    await fs.ensureDir(legacyDir11);
-    await fs.writeFile(path.join(legacyDir11, 'bmad-legacy.md'), 'legacy\n');

     const ideManager11 = new IdeManager();
     await ideManager11.ensureInitialized();
@@ -472,8 +383,6 @@ async function runTests() {
     const nameMatch11 = skillContent11.match(/^name:\s*(.+)$/m);
     assert(nameMatch11 && nameMatch11[1].trim() === 'bmad-master', 'Codex skill name frontmatter matches directory name exactly');

-    assert(!(await fs.pathExists(legacyDir11)), 'Codex setup removes legacy prompts dir');
-
     await fs.remove(tempProjectDir11);
     await fs.remove(path.dirname(installedBmadDir11));
   } catch (error) {
@@ -494,20 +403,12 @@ async function runTests() {
     const platformCodes13 = await loadPlatformCodes();
     const cursorInstaller = platformCodes13.platforms.cursor?.installer;
-    assert(cursorInstaller?.target_dir === '.cursor/skills', 'Cursor target_dir uses native skills path');
-
-    assert(
-      Array.isArray(cursorInstaller?.legacy_targets) && cursorInstaller.legacy_targets.includes('.cursor/commands'),
-      'Cursor installer cleans legacy command output',
-    );
+    assert(cursorInstaller?.target_dir === '.agents/skills', 'Cursor target_dir uses native skills path');

     assert(!cursorInstaller?.ancestor_conflict_check, 'Cursor installer does not enable ancestor conflict checks');

     const tempProjectDir13c = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-cursor-test-'));
     const installedBmadDir13c = await createTestBmadFixture();
-    const legacyDir13c = path.join(tempProjectDir13c, '.cursor', 'commands');
-    await fs.ensureDir(legacyDir13c);
-    await fs.writeFile(path.join(legacyDir13c, 'bmad-legacy.md'), 'legacy\n');

     const ideManager13c = new IdeManager();
     await ideManager13c.ensureInitialized();
@@ -518,7 +419,7 @@ async function runTests() {
     assert(result13c.success === true, 'Cursor setup succeeds against temp project');

-    const skillFile13c = path.join(tempProjectDir13c, '.cursor', 'skills', 'bmad-master', 'SKILL.md');
+    const skillFile13c = path.join(tempProjectDir13c, '.agents', 'skills', 'bmad-master', 'SKILL.md');
     assert(await fs.pathExists(skillFile13c), 'Cursor install writes SKILL.md directory output');

     // Verify name frontmatter matches directory name
@@ -526,8 +427,6 @@ async function runTests() {
     const nameMatch13c = skillContent13c.match(/^name:\s*(.+)$/m);
     assert(nameMatch13c && nameMatch13c[1].trim() === 'bmad-master', 'Cursor skill name frontmatter matches directory name exactly');

-    assert(!(await fs.pathExists(legacyDir13c)), 'Cursor setup removes legacy commands dir');
-
     await fs.remove(tempProjectDir13c);
     await fs.remove(path.dirname(installedBmadDir13c));
   } catch (error) {
@@ -546,19 +445,10 @@ async function runTests() {
     const platformCodes13 = await loadPlatformCodes();
     const rooInstaller = platformCodes13.platforms.roo?.installer;
-    assert(rooInstaller?.target_dir === '.roo/skills', 'Roo target_dir uses native skills path');
-
-    assert(
-      Array.isArray(rooInstaller?.legacy_targets) && rooInstaller.legacy_targets.includes('.roo/commands'),
-      'Roo installer cleans legacy command output',
-    );
+    assert(rooInstaller?.target_dir === '.agents/skills', 'Roo target_dir uses native skills path');

     const tempProjectDir13 = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-roo-test-'));
     const installedBmadDir13 = await createTestBmadFixture();
-    const legacyDir13 = path.join(tempProjectDir13, '.roo', 'commands', 'bmad-legacy-dir');
-    await fs.ensureDir(legacyDir13);
-    await fs.writeFile(path.join(tempProjectDir13, '.roo', 'commands', 'bmad-legacy.md'), 'legacy\n');
-    await fs.writeFile(path.join(legacyDir13, 'SKILL.md'), 'legacy\n');

     const ideManager13 = new IdeManager();
     await ideManager13.ensureInitialized();
@@ -569,7 +459,7 @@ async function runTests() {
     assert(result13.success === true, 'Roo setup succeeds against temp project');

-    const skillFile13 = path.join(tempProjectDir13, '.roo', 'skills', 'bmad-master', 'SKILL.md');
+    const skillFile13 = path.join(tempProjectDir13, '.agents', 'skills', 'bmad-master', 'SKILL.md');
     assert(await fs.pathExists(skillFile13), 'Roo install writes SKILL.md directory output');

     // Verify name frontmatter matches directory name (Roo constraint: lowercase alphanumeric + hyphens)
@@ -580,8 +470,6 @@ async function runTests() {
       'Roo skill name frontmatter matches directory name exactly (lowercase alphanumeric + hyphens)',
     );

-    assert(!(await fs.pathExists(path.join(tempProjectDir13, '.roo', 'commands'))), 'Roo setup removes legacy commands dir');
-
     // Reinstall/upgrade: run setup again over existing skills output
     const result13b = await ideManager13.setup('roo', tempProjectDir13, installedBmadDir13, {
       silent: true,
@@ -615,31 +503,13 @@ async function runTests() {
     const platformCodes17 = await loadPlatformCodes();
     const copilotInstaller = platformCodes17.platforms['github-copilot']?.installer;
-    assert(copilotInstaller?.target_dir === '.github/skills', 'GitHub Copilot target_dir uses native skills path');
-
-    assert(
-      Array.isArray(copilotInstaller?.legacy_targets) && copilotInstaller.legacy_targets.includes('.github/agents'),
-      'GitHub Copilot installer cleans legacy agents output',
-    );
-
-    assert(
-      Array.isArray(copilotInstaller?.legacy_targets) && copilotInstaller.legacy_targets.includes('.github/prompts'),
-      'GitHub Copilot installer cleans legacy prompts output',
-    );
+    assert(copilotInstaller?.target_dir === '.agents/skills', 'GitHub Copilot target_dir uses native skills path');

     const tempProjectDir17 = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-copilot-test-'));
     const installedBmadDir17 = await createTestBmadFixture();
-    // Create legacy .github/agents/ and .github/prompts/ files
-    const legacyAgentsDir17 = path.join(tempProjectDir17, '.github', 'agents');
-    const legacyPromptsDir17 = path.join(tempProjectDir17, '.github', 'prompts');
-    await fs.ensureDir(legacyAgentsDir17);
-    await fs.ensureDir(legacyPromptsDir17);
-    await fs.writeFile(path.join(legacyAgentsDir17, 'bmad-legacy.agent.md'), 'legacy agent\n');
-    await fs.writeFile(path.join(legacyPromptsDir17, 'bmad-legacy.prompt.md'), 'legacy prompt\n');
-
-    // Create legacy copilot-instructions.md with BMAD markers
     const copilotInstructionsPath17 = path.join(tempProjectDir17, '.github', 'copilot-instructions.md');
+    await fs.ensureDir(path.dirname(copilotInstructionsPath17));
     await fs.writeFile(
       copilotInstructionsPath17,
       'User content before\n\nBMAD generated content\n\nUser content after\n',
@@ -654,7 +524,7 @@ async function runTests() {
     assert(result17.success === true, 'GitHub Copilot setup succeeds against temp project');

-    const skillFile17 = path.join(tempProjectDir17, '.github', 'skills', 'bmad-master', 'SKILL.md');
+    const skillFile17 = path.join(tempProjectDir17, '.agents', 'skills', 'bmad-master', 'SKILL.md');
     assert(await fs.pathExists(skillFile17), 'GitHub Copilot install writes SKILL.md directory output');

     // Verify name frontmatter matches directory name
@@ -662,10 +532,6 @@ async function runTests() {
     const nameMatch17 = skillContent17.match(/^name:\s*(.+)$/m);
     assert(nameMatch17 && nameMatch17[1].trim() === 'bmad-master', 'GitHub Copilot skill name frontmatter matches directory name exactly');

-    assert(!(await fs.pathExists(legacyAgentsDir17)), 'GitHub Copilot setup removes legacy agents dir');
-
-    assert(!(await fs.pathExists(legacyPromptsDir17)), 'GitHub Copilot setup removes legacy prompts dir');
-
     // Verify copilot-instructions.md BMAD markers were stripped but user content preserved
     const cleanedInstructions17 = await fs.readFile(copilotInstructionsPath17, 'utf8');
     assert(
@@ -697,17 +563,8 @@ async function runTests() {
     assert(clineInstaller?.target_dir === '.cline/skills', 'Cline target_dir uses native skills path');

-    assert(
-      Array.isArray(clineInstaller?.legacy_targets) && clineInstaller.legacy_targets.includes('.clinerules/workflows'),
-      'Cline installer cleans legacy workflow output',
-    );
-
     const tempProjectDir18 = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-cline-test-'));
     const installedBmadDir18 = await createTestBmadFixture();
-    const legacyDir18 = path.join(tempProjectDir18, '.clinerules', 'workflows', 'bmad-legacy-dir');
-    await fs.ensureDir(legacyDir18);
-    await fs.writeFile(path.join(tempProjectDir18, '.clinerules', 'workflows', 'bmad-legacy.md'), 'legacy\n');
-    await fs.writeFile(path.join(legacyDir18, 'SKILL.md'), 'legacy\n');

     const ideManager18 = new IdeManager();
     await ideManager18.ensureInitialized();
@@ -726,8 +583,6 @@ async function runTests() {
     const nameMatch18 = skillContent18.match(/^name:\s*(.+)$/m);
     assert(nameMatch18 && nameMatch18[1].trim() === 'bmad-master', 'Cline skill name frontmatter matches directory name exactly');

-    assert(!(await fs.pathExists(path.join(tempProjectDir18, '.clinerules', 'workflows'))), 'Cline setup removes legacy workflows dir');
-
     // Reinstall/upgrade: run setup again over existing skills output
     const result18b = await ideManager18.setup('cline', tempProjectDir18, installedBmadDir18, {
       silent: true,
@@ -757,17 +612,8 @@ async function runTests() {
     assert(codebuddyInstaller?.target_dir === '.codebuddy/skills', 'CodeBuddy target_dir uses native skills path');

-    assert(
-      Array.isArray(codebuddyInstaller?.legacy_targets) && codebuddyInstaller.legacy_targets.includes('.codebuddy/commands'),
-      'CodeBuddy installer cleans legacy command output',
-    );
-
     const tempProjectDir19 = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-codebuddy-test-'));
     const installedBmadDir19 = await createTestBmadFixture();
-    const legacyDir19 = path.join(tempProjectDir19, '.codebuddy', 'commands', 'bmad-legacy-dir');
-    await fs.ensureDir(legacyDir19);
-    await fs.writeFile(path.join(tempProjectDir19, '.codebuddy', 'commands', 'bmad-legacy.md'), 'legacy\n');
-    await fs.writeFile(path.join(legacyDir19, 'SKILL.md'), 'legacy\n');

     const ideManager19 = new IdeManager();
     await ideManager19.ensureInitialized();
@@ -785,8 +631,6 @@ async function runTests() {
     const nameMatch19 = skillContent19.match(/^name:\s*(.+)$/m);
     assert(nameMatch19 && nameMatch19[1].trim() === 'bmad-master', 'CodeBuddy skill name frontmatter matches directory name exactly');

-    assert(!(await fs.pathExists(path.join(tempProjectDir19, '.codebuddy', 'commands'))), 'CodeBuddy setup removes legacy commands dir');
-
     const result19b = await ideManager19.setup('codebuddy', tempProjectDir19, installedBmadDir19, {
       silent: true,
       selectedModules: ['bmm'],
@@ -813,19 +657,10 @@ async function runTests() {
     const platformCodes20 = await loadPlatformCodes();
     const crushInstaller = platformCodes20.platforms.crush?.installer;
-    assert(crushInstaller?.target_dir === '.crush/skills', 'Crush target_dir uses native skills path');
-
-    assert(
-      Array.isArray(crushInstaller?.legacy_targets) && crushInstaller.legacy_targets.includes('.crush/commands'),
-      'Crush installer cleans legacy command output',
-    );
+    assert(crushInstaller?.target_dir === '.agents/skills', 'Crush target_dir uses native skills path');

     const tempProjectDir20 = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-crush-test-'));
     const installedBmadDir20 = await createTestBmadFixture();
-    const legacyDir20 = path.join(tempProjectDir20, '.crush', 'commands', 'bmad-legacy-dir');
-    await fs.ensureDir(legacyDir20);
-    await fs.writeFile(path.join(tempProjectDir20, '.crush', 'commands', 'bmad-legacy.md'), 'legacy\n');
-    await fs.writeFile(path.join(legacyDir20, 'SKILL.md'), 'legacy\n');

     const ideManager20 = new IdeManager();
     await ideManager20.ensureInitialized();
@@ -836,15 +671,13 @@ async function runTests() {
     assert(result20.success === true, 'Crush setup succeeds against temp project');

-    const skillFile20 = path.join(tempProjectDir20, '.crush', 'skills', 'bmad-master', 'SKILL.md');
+    const skillFile20 = path.join(tempProjectDir20, '.agents', 'skills', 'bmad-master', 'SKILL.md');
     assert(await fs.pathExists(skillFile20), 'Crush install writes SKILL.md directory output');

     const skillContent20 = await fs.readFile(skillFile20, 'utf8');
     const nameMatch20 = skillContent20.match(/^name:\s*(.+)$/m);
     assert(nameMatch20 && nameMatch20[1].trim() === 'bmad-master', 'Crush skill name frontmatter matches directory name exactly');

-    assert(!(await fs.pathExists(path.join(tempProjectDir20, '.crush', 'commands'))), 'Crush setup removes legacy commands dir');
-
     const result20b = await ideManager20.setup('crush', tempProjectDir20, installedBmadDir20, {
       silent: true,
       selectedModules: ['bmm'],
@@ -873,16 +706,8 @@ async function runTests() {
     assert(traeInstaller?.target_dir === '.trae/skills', 'Trae target_dir uses native skills path');

-    assert(
-      Array.isArray(traeInstaller?.legacy_targets) && traeInstaller.legacy_targets.includes('.trae/rules'),
-      'Trae installer cleans legacy rules output',
-    );
-
     const tempProjectDir21 = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-trae-test-'));
     const installedBmadDir21 = await createTestBmadFixture();
-    const legacyDir21 = path.join(tempProjectDir21, '.trae', 'rules');
-    await fs.ensureDir(legacyDir21);
-    await fs.writeFile(path.join(legacyDir21, 'bmad-legacy.md'), 'legacy\n');

     const ideManager21 = new IdeManager();
     await ideManager21.ensureInitialized();
@@ -900,8 +725,6 @@ async function runTests() {
     const nameMatch21 = skillContent21.match(/^name:\s*(.+)$/m);
     assert(nameMatch21 && nameMatch21[1].trim() === 'bmad-master', 'Trae skill name frontmatter matches directory name exactly');

-    assert(!(await fs.pathExists(path.join(tempProjectDir21, '.trae', 'rules'))), 'Trae setup removes legacy rules dir');
-
     const result21b = await ideManager21.setup('trae', tempProjectDir21, installedBmadDir21, {
       silent: true,
       selectedModules: ['bmm'],
@@ -930,12 +753,7 @@ async function runTests() {
     assert(!kiloConfig22?.suspended, 'KiloCoder is not suspended');

-    assert(kiloConfig22?.installer?.target_dir === '.kilocode/skills', 'KiloCoder target_dir uses native skills path');
-
-    assert(
-      Array.isArray(kiloConfig22?.installer?.legacy_targets) && kiloConfig22.installer.legacy_targets.includes('.kilocode/workflows'),
-      'KiloCoder installer cleans legacy workflows output',
-    );
+    assert(kiloConfig22?.installer?.target_dir === '.agents/skills', 'KiloCoder target_dir uses native skills path');

     const ideManager22 = new IdeManager();
     await ideManager22.ensureInitialized();
@@ -950,11 +768,6 @@ async function runTests() {
     const tempProjectDir22 = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-kilo-test-'));
     const installedBmadDir22 = await createTestBmadFixture();

-    // Pre-populate legacy Kilo artifacts that should be cleaned up
-    const legacyDir22 = path.join(tempProjectDir22, '.kilocode', 'workflows');
-    await fs.ensureDir(legacyDir22);
-    await fs.writeFile(path.join(legacyDir22, 'bmad-legacy.md'), 'legacy\n');
-
     const result22 = await ideManager22.setup('kilo', tempProjectDir22, installedBmadDir22, {
       silent: true,
       selectedModules: ['bmm'],
@@ -962,15 +775,13 @@ async function runTests() {
     assert(result22.success === true, 'KiloCoder setup succeeds against temp project');

-    const skillFile22 = path.join(tempProjectDir22, '.kilocode', 'skills', 'bmad-master', 'SKILL.md');
+    const skillFile22 = path.join(tempProjectDir22, '.agents', 'skills', 'bmad-master', 'SKILL.md');
     assert(await fs.pathExists(skillFile22), 'KiloCoder install writes SKILL.md directory output');

     const skillContent22 = await fs.readFile(skillFile22, 'utf8');
     const nameMatch22 = skillContent22.match(/^name:\s*(.+)$/m);
     assert(nameMatch22 && nameMatch22[1].trim() === 'bmad-master', 'KiloCoder skill name frontmatter matches directory name exactly');

-    assert(!(await fs.pathExists(path.join(tempProjectDir22, '.kilocode', 'workflows'))), 'KiloCoder setup removes legacy workflows dir');
-
     const result22b = await ideManager22.setup('kilo', tempProjectDir22, installedBmadDir22, {
       silent: true,
       selectedModules: ['bmm'],
@@ -997,18 +808,10 @@ async function runTests() {
     const platformCodes23 = await loadPlatformCodes();
     const geminiInstaller = platformCodes23.platforms.gemini?.installer;
-    assert(geminiInstaller?.target_dir === '.gemini/skills', 'Gemini target_dir uses native skills path');
-
-    assert(
-      Array.isArray(geminiInstaller?.legacy_targets) && geminiInstaller.legacy_targets.includes('.gemini/commands'),
-      'Gemini installer cleans legacy commands output',
-    );
+    assert(geminiInstaller?.target_dir === '.agents/skills', 'Gemini target_dir uses native skills path');

     const tempProjectDir23 = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-gemini-test-'));
     const installedBmadDir23 = await createTestBmadFixture();
-    const legacyDir23 = path.join(tempProjectDir23, '.gemini', 'commands');
-    await fs.ensureDir(legacyDir23);
-    await fs.writeFile(path.join(legacyDir23, 'bmad-legacy.toml'), 'legacy\n');

     const ideManager23 = new IdeManager();
     await ideManager23.ensureInitialized();
@@ -1019,15 +822,13 @@ async function runTests() {
     assert(result23.success === true, 'Gemini setup succeeds against temp project');

-    const skillFile23 = path.join(tempProjectDir23, '.gemini', 'skills', 'bmad-master', 'SKILL.md');
+    const skillFile23 = path.join(tempProjectDir23, '.agents', 'skills', 'bmad-master', 'SKILL.md');
     assert(await fs.pathExists(skillFile23), 'Gemini install writes SKILL.md directory output');

     const skillContent23 = await fs.readFile(skillFile23, 'utf8');
     const nameMatch23 = skillContent23.match(/^name:\s*(.+)$/m);
     assert(nameMatch23 && nameMatch23[1].trim() === 'bmad-master', 'Gemini skill name frontmatter matches directory name exactly');

-    assert(!(await fs.pathExists(path.join(tempProjectDir23, '.gemini', 'commands'))), 'Gemini setup removes legacy commands dir');
-
     const result23b = await ideManager23.setup('gemini', tempProjectDir23, installedBmadDir23, {
       silent: true,
       selectedModules: ['bmm'],
@@ -1055,16 +856,9 @@ async function runTests() {
     const iflowInstaller = platformCodes24.platforms.iflow?.installer;
     assert(iflowInstaller?.target_dir === '.iflow/skills', 'iFlow target_dir uses native skills path');
-    assert(
-      Array.isArray(iflowInstaller?.legacy_targets) && iflowInstaller.legacy_targets.includes('.iflow/commands'),
-      'iFlow installer cleans legacy commands output',
-    );

     const tempProjectDir24 = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-iflow-test-'));
     const installedBmadDir24 = await createTestBmadFixture();
-    const legacyDir24 = path.join(tempProjectDir24, '.iflow', 'commands');
-    await fs.ensureDir(legacyDir24);
-    await fs.writeFile(path.join(legacyDir24, 'bmad-legacy.md'), 'legacy\n');

     const ideManager24 = new IdeManager();
     await ideManager24.ensureInitialized();
@@ -1083,8 +877,6 @@ async function runTests() {
     const nameMatch24 = skillContent24.match(/^name:\s*(.+)$/m);
     assert(nameMatch24 && nameMatch24[1].trim() === 'bmad-master', 'iFlow skill name frontmatter matches directory name exactly');

-    assert(!(await fs.pathExists(path.join(tempProjectDir24, '.iflow', 'commands'))), 'iFlow setup removes legacy commands dir');
-
     await fs.remove(tempProjectDir24);
     await fs.remove(path.dirname(installedBmadDir24));
   } catch (error) {
@@ -1104,16 +896,9 @@ async function runTests() {
     const qwenInstaller = platformCodes25.platforms.qwen?.installer;
     assert(qwenInstaller?.target_dir === '.qwen/skills', 'QwenCoder target_dir uses native skills path');
-    assert(
-      Array.isArray(qwenInstaller?.legacy_targets) && qwenInstaller.legacy_targets.includes('.qwen/commands'),
-      'QwenCoder installer cleans legacy commands output',
-    );

     const tempProjectDir25 = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-qwen-test-'));
     const installedBmadDir25 = await createTestBmadFixture();
-    const legacyDir25 = path.join(tempProjectDir25, '.qwen', 'commands');
-    await fs.ensureDir(legacyDir25);
-    await fs.writeFile(path.join(legacyDir25, 'bmad-legacy.md'), 'legacy\n');

     const ideManager25 = new IdeManager();
     await ideManager25.ensureInitialized();
@@ -1132,8 +917,6 @@ async function runTests() {
     const nameMatch25 = skillContent25.match(/^name:\s*(.+)$/m);
     assert(nameMatch25 && nameMatch25[1].trim() === 'bmad-master', 'QwenCoder skill name frontmatter matches directory name exactly');

-    assert(!(await fs.pathExists(path.join(tempProjectDir25, '.qwen', 'commands'))), 'QwenCoder setup removes legacy commands dir');
-
     await fs.remove(tempProjectDir25);
     await fs.remove(path.dirname(installedBmadDir25));
   } catch (error) {
@@ -1152,17 +935,10 @@ async function runTests() {
     const platformCodes26 = await loadPlatformCodes();
     const rovoInstaller = platformCodes26.platforms['rovo-dev']?.installer;
-    assert(rovoInstaller?.target_dir === '.rovodev/skills', 'Rovo Dev target_dir uses native skills path');
-    assert(
-      Array.isArray(rovoInstaller?.legacy_targets) && rovoInstaller.legacy_targets.includes('.rovodev/workflows'),
-      'Rovo Dev installer cleans legacy workflows output',
-    );
+    assert(rovoInstaller?.target_dir === '.agents/skills', 'Rovo Dev target_dir uses native skills path');

     const tempProjectDir26 = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-rovodev-test-'));
     const installedBmadDir26 = await createTestBmadFixture();
-    const legacyDir26 = path.join(tempProjectDir26, '.rovodev', 'workflows');
-    await fs.ensureDir(legacyDir26);
-    await fs.writeFile(path.join(legacyDir26, 'bmad-legacy.md'), 'legacy\n');

     // Create a prompts.yml with BMAD entries and a user entry
     const yaml26 = require('yaml');
@@ -1173,6 +949,7 @@ async function runTests() {
         { name: 'my-custom-prompt', description: 'User prompt', content_file: 'custom.md' },
       ],
     });
+    await fs.ensureDir(path.dirname(promptsPath26));
     await fs.writeFile(promptsPath26, promptsContent26);

     const ideManager26 = new IdeManager();
@@ -1184,7 +961,7 @@ async function runTests() {
     assert(result26.success === true, 'Rovo Dev setup succeeds against temp project');

-    const skillFile26 = path.join(tempProjectDir26, '.rovodev', 'skills', 'bmad-master', 'SKILL.md');
+    const skillFile26 = path.join(tempProjectDir26, '.agents', 'skills', 'bmad-master', 'SKILL.md');
     assert(await fs.pathExists(skillFile26), 'Rovo Dev install writes SKILL.md directory output');

     // Verify name frontmatter matches directory name
@@ -1192,8 +969,6 @@ async function runTests() {
     const nameMatch26 = skillContent26.match(/^name:\s*(.+)$/m);
     assert(nameMatch26 && nameMatch26[1].trim() === 'bmad-master', 'Rovo Dev skill name frontmatter matches directory name exactly');

-    assert(!(await fs.pathExists(path.join(tempProjectDir26, '.rovodev', 'workflows'))), 'Rovo Dev setup removes legacy workflows dir');
-
     // Verify prompts.yml cleanup: BMAD entries removed, user entry preserved
     const cleanedPrompts26 = yaml26.parse(await fs.readFile(promptsPath26, 'utf8'));
     assert(
@@ -1295,7 +1070,7 @@ async function runTests() {
     const platformCodes28 = await loadPlatformCodes();
     const piInstaller = platformCodes28.platforms.pi?.installer;
-    assert(piInstaller?.target_dir === '.pi/skills', 'Pi target_dir uses native skills path');
+    assert(piInstaller?.target_dir === '.agents/skills', 'Pi target_dir uses native skills path');

     tempProjectDir28 = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-pi-test-'));
     installedBmadDir28 = await createTestBmadFixture();
@@ -1325,7 +1100,7 @@ async function runTests() {
     const detectedAfter28 = await ideManager28.detectInstalledIdes(tempProjectDir28);
     assert(detectedAfter28.includes('pi'), 'Pi is detected after install');

-    const skillFile28 = path.join(tempProjectDir28, '.pi', 'skills', 'bmad-master', 'SKILL.md');
+    const skillFile28 = path.join(tempProjectDir28, '.agents', 'skills', 'bmad-master', 'SKILL.md');
     assert(await fs.pathExists(skillFile28), 'Pi install writes SKILL.md directory output');

     // Parse YAML frontmatter between --- markers
@@ -1607,7 +1382,7 @@ async function runTests() {
     });
     assert(result.success === true, 'Antigravity setup succeeds with overlapping skill names');

-    assert(result.detail === '1 skills', 'Installer detail reports skill count');
+    assert(result.detail === '1 skills → .agent/skills', 'Installer detail reports skill count and target dir');
     assert(result.handlerResult.results.skillDirectories === 1, 'Result exposes unique skill directory count');
     assert(result.handlerResult.results.skills === 1, 'Result retains verbatim skill count');
     assert(
@@ -2847,6 +2622,157 @@ async function runTests() {

   console.log('');

+  // ============================================================
+  // Test Suite 40: Shared target_dir coordination
+  // ============================================================
+  console.log(`${colors.yellow}Test Suite 40: Shared target_dir coordination${colors.reset}\n`);
+
+  try {
+    // Cursor and Gemini both use .agents/skills — verify they coordinate.
+    clearCache();
+    const platformCodes40 = await loadPlatformCodes();
+    const cursorTarget = platformCodes40.platforms.cursor?.installer?.target_dir;
+    const geminiTarget = platformCodes40.platforms.gemini?.installer?.target_dir;
+    assert(cursorTarget === '.agents/skills' && geminiTarget === '.agents/skills', 'Cursor and Gemini share .agents/skills target_dir');
+
+    const tempProjectDir40 = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-shared-target-'));
+    const installedBmadDir40 = await createTestBmadFixture();
+
+    const ideManager40 = new IdeManager();
+    await ideManager40.ensureInitialized();
+
+    // Run setupBatch with both platforms — second should skip skill write.
+    const batchResults = await ideManager40.setupBatch(['cursor', 'gemini'], tempProjectDir40, installedBmadDir40, {
+      silent: true,
+      selectedModules: ['core'],
+    });
+
+    assert(batchResults.length === 2, 'setupBatch returns one result per IDE');
+    assert(batchResults[0].success === true, 'First platform (cursor) succeeds');
+    assert(batchResults[1].success === true, 'Second platform (gemini) succeeds');
+    assert(
+      batchResults[1].handlerResult?.results?.sharedTargetHandledByPeer === true,
+      'Second platform marked sharedTargetHandledByPeer (skipped redundant write)',
+    );
+
+    // Skill should be present in the shared dir after batch.
+    const sharedDir = path.join(tempProjectDir40, '.agents', 'skills');
+    const sharedDirEntries = await fs.readdir(sharedDir);
+    assert(sharedDirEntries.includes('bmad-master'), 'Shared .agents/skills/ contains bmad-master after batched install');
+
+    // Now uninstall just cursor while gemini remains. Skills must survive.
+    const cleanupResults = await ideManager40.cleanupByList(tempProjectDir40, ['cursor'], {
+      silent: true,
+      remainingIdes: ['gemini'],
+    });
+    assert(cleanupResults[0].skippedTarget === true, 'Cursor cleanup skips target_dir wipe when Gemini remains');
+    const stillThere = await fs.readdir(sharedDir);
+    assert(stillThere.includes('bmad-master'), 'bmad-master still present after partial uninstall (gemini still installed)');
+
+    // (Cleanup of the last sharing platform requires bmadDir to be inside
+    // projectDir to compute removalSet; that's the production layout. The
+    // fixture above keeps bmad in a separate temp dir, so test 41 below
+    // exercises the in-project layout instead.)
+
+    await fs.remove(tempProjectDir40).catch(() => {});
+    await fs.remove(path.dirname(installedBmadDir40)).catch(() => {});
+  } catch (error) {
+    console.log(`${colors.red}Test Suite 40 setup failed: ${error.message}${colors.reset}`);
+    failed++;
+  }
+
+  console.log('');
+
+  // ============================================================
+  // Test Suite 40b: setupBatch — failed first writer does not poison peers
+  // ============================================================
+  console.log(`${colors.yellow}Test Suite 40b: setupBatch resilience to first-writer failure${colors.reset}\n`);
+
+  try {
+    const tempProjectDir40b = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-batch-fail-'));
+    const installedBmadDir40b = await createTestBmadFixture();
+
+    const ideManager40b = new IdeManager();
+    await ideManager40b.ensureInitialized();
+
+    // Force cursor's setup() to fail. With the bug, gemini would see the
+    // claimed target and skip — leaving .agents/skills/ empty.
+    const cursorHandler40b = ideManager40b.handlers.get('cursor');
+    const originalSetup = cursorHandler40b.setup.bind(cursorHandler40b);
+    cursorHandler40b.setup = async () => {
+      throw new Error('Simulated cursor failure');
+    };
+
+    const batchResults40b = await ideManager40b.setupBatch(['cursor', 'gemini'], tempProjectDir40b, installedBmadDir40b, {
+      silent: true,
+      selectedModules: ['core'],
+    });
+
+    // Restore so other tests aren't affected.
+    cursorHandler40b.setup = originalSetup;
+
+    assert(batchResults40b[0].success === false, 'Cursor reports failure');
+    assert(batchResults40b[1].success === true, 'Gemini still succeeds despite cursor failure');
+    assert(
+      batchResults40b[1].handlerResult?.results?.sharedTargetHandledByPeer !== true,
+      'Gemini does NOT skip its own write — it becomes the new first writer',
+    );
+
+    const sharedDir40b = path.join(tempProjectDir40b, '.agents', 'skills');
+    const entries40b = await fs.readdir(sharedDir40b);
+    assert(entries40b.includes('bmad-master'), 'Shared dir is populated by gemini after cursor failure');
+
+    await fs.remove(tempProjectDir40b).catch(() => {});
+    await fs.remove(path.dirname(installedBmadDir40b)).catch(() => {});
+  } catch (error) {
+    console.log(`${colors.red}Test Suite 40b setup failed: ${error.message}${colors.reset}`);
+    failed++;
+  }
+
+  console.log('');
+
+  // ============================================================
+  // Test Suite 41: Custom-module skill ownership (non-bmad prefix)
+  // ============================================================
+  console.log(`${colors.yellow}Test Suite 41: Custom-module skill ownership${colors.reset}\n`);
+
+  try {
+    // A custom module can ship a skill with any canonicalId (e.g. "fred-cool-skill").
+    // detect() must recognize it as BMAD-owned via the manifest, not the bmad- prefix.
+    const fixtureRoot41 = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-custom-prefix-'));
+    const bmadDir41 = path.join(fixtureRoot41, '_bmad');
+    await fs.ensureDir(path.join(bmadDir41, '_config'));
+    await fs.writeFile(
+      path.join(bmadDir41, '_config', 'skill-manifest.csv'),
+      [
+        'canonicalId,name,description,module,path',
+        '"fred-cool-skill","fred-cool-skill","Custom module skill","fred","_bmad/fred/skills/fred-cool-skill/SKILL.md"',
+        '',
+      ].join('\n'),
+    );
+    const fredSkill = path.join(bmadDir41, 'fred', 'skills', 'fred-cool-skill');
+    await fs.ensureDir(fredSkill);
+    await fs.writeFile(
+      path.join(fredSkill, 'SKILL.md'),
+      ['---', 'name: fred-cool-skill', 'description: Custom module skill', '---', '', 'A custom module skill.'].join('\n'),
+    );
+
+    const ideManager41 = new IdeManager();
+    await ideManager41.ensureInitialized();
+    await ideManager41.setup('cursor', fixtureRoot41, bmadDir41, { silent: true, selectedModules: ['fred'] });
+
+    const cursorHandler = ideManager41.handlers.get('cursor');
+    const detected = await cursorHandler.detect(fixtureRoot41);
+    assert(detected === true, 'detect() recognizes non-bmad-prefixed skill as BMAD-owned via skill-manifest.csv');
+
+    await fs.remove(fixtureRoot41).catch(() => {});
+  } catch (error) {
+    console.log(`${colors.red}Test Suite 41 setup failed: ${error.message}${colors.reset}`);
+    failed++;
+  }
+
+  console.log('');
+
   // ============================================================
   // Summary
   // ============================================================
diff --git a/tools/docs/native-skills-migration-checklist.md b/tools/docs/native-skills-migration-checklist.md
index 80c6a9296..e8fa4ad34 100644
--- a/tools/docs/native-skills-migration-checklist.md
+++ b/tools/docs/native-skills-migration-checklist.md
@@ -222,7 +222,6 @@ Support assumption: full Agent Skills support. Gemini CLI docs confirm workspace
 - [x] Confirm Gemini CLI native skills path is `.gemini/skills/{skill-name}/SKILL.md` (per [geminicli.com/docs/cli/skills](https://geminicli.com/docs/cli/skills/))
 - [x] Implement native skills output — target_dir `.gemini/skills`, skill_format true, template_type default (replaces TOML templates)
-- [x] Add legacy cleanup for `.gemini/commands` (via `legacy_targets`)
 - [x] Test fresh install — skills written to `.gemini/skills/bmad-master/SKILL.md` with correct frontmatter
 - [x] Test reinstall/upgrade from legacy TOML command output — legacy dir removed, skills installed
 - [x] Confirm no ancestor conflict protection is needed — Gemini CLI uses workspace > user > extension precedence, no ancestor directory inheritance
@@ -236,7 +235,6 @@ Support assumption: full Agent Skills support. iFlow docs confirm workspace skil
 - [x] Confirm iFlow native skills path is `.iflow/skills/{skill-name}/SKILL.md`
 - [x] Implement native skills output — target_dir `.iflow/skills`, skill_format true, template_type default
-- [x] Add legacy cleanup for `.iflow/commands` (via `legacy_targets`)
 - [x] Test fresh install — skills written to `.iflow/skills/bmad-master/SKILL.md`
 - [x] Test legacy cleanup — legacy commands dir removed
 - [x] Implement/extend automated tests — 6 assertions in test suite 24
@@ -249,7 +247,6 @@ Support assumption: full Agent Skills support. Qwen Code supports workspace skil
 - [x] Confirm QwenCoder native skills path is `.qwen/skills/{skill-name}/SKILL.md`
 - [x] Implement native skills output — target_dir `.qwen/skills`, skill_format true, template_type default
-- [x] Add legacy cleanup for `.qwen/commands` (via `legacy_targets`)
 - [x] Test fresh install — skills written to `.qwen/skills/bmad-master/SKILL.md`
 - [x] Test legacy cleanup — legacy commands dir removed
 - [x] Implement/extend automated tests — 6 assertions in test suite 25
@@ -262,7 +259,6 @@ Support assumption: full Agent Skills support. Rovo Dev now supports workspace s
 - [x] Confirm Rovo Dev native skills path is `.rovodev/skills/{skill-name}/SKILL.md` (per Atlassian blog)
 - [x] Replace 257-line custom `rovodev.js` with config-driven entry in `platform-codes.yaml`
-- [x] Add legacy cleanup for `.rovodev/workflows` (via `legacy_targets`) and BMAD entries in `prompts.yml` (via `cleanupRovoDevPrompts()` in `_config-driven.js`)
 - [x] Test fresh install — skills written to `.rovodev/skills/bmad-master/SKILL.md`
 - [x] Test legacy cleanup — legacy workflows dir removed, `prompts.yml` BMAD entries stripped while preserving user entries
 - [x] Implement/extend automated tests — 8 assertions in test suite 26
diff --git a/tools/installer/core/installer.js b/tools/installer/core/installer.js
index ef6e8662f..a68193bc6 100644
--- a/tools/installer/core/installer.js
+++ b/tools/installer/core/installer.js
@@ -14,6 +14,7 @@ const { ExternalModuleManager } = require('../modules/external-manager');
 const { resolveModuleVersion } = require('../modules/version-resolver');
 const { ExistingInstall } = require('./existing-install');
+const { warnPreNativeSkillsLegacy } = require('./legacy-warnings');

 class Installer {
   constructor() {
@@ -41,6 +42,16 @@ class Installer {
     const officialModules = await OfficialModules.build(config, paths);
     const existingInstall = await ExistingInstall.detect(paths.bmadDir);

+    try {
+      await warnPreNativeSkillsLegacy({
+        projectRoot: paths.projectRoot,
+        existingVersion: existingInstall.installed ? existingInstall.version : null,
+      });
+    } catch (error) {
+      // Legacy-dir scan is informational; never let it abort install.
+      await prompts.log.warn(`Warning: Could not check for legacy BMAD entries: ${error.message}`);
+    }
+
     if (existingInstall.installed) {
       await this._removeDeselectedModules(existingInstall, config, paths);
       updateState = await this._prepareUpdateState(paths, config, existingInstall, officialModules);
@@ -183,15 +194,16 @@ class Installer {
     if (toRemove.length === 0) return;

-    await this.ideManager.ensureInitialized();
-    for (const ide of toRemove) {
-      try {
-        const handler = this.ideManager.handlers.get(ide);
-        if (handler) {
-          await handler.cleanup(paths.projectRoot);
-        }
-      } catch (error) {
-        await prompts.log.warn(`Warning: Failed to remove ${ide}: ${error.message}`);
+    // Pass the newly-selected list as remainingIdes so cleanupByList skips
+    // target_dir wipes for IDEs whose directory is still owned by a peer
+    // (e.g. removing 'cursor' while 'gemini' remains — both share .agents/skills).
+    const results = await this.ideManager.cleanupByList(paths.projectRoot, toRemove, {
+      remainingIdes: [...newlySelected],
+    });
+
+    for (const result of results || []) {
+      if (result && result.success === false) {
+        await prompts.log.warn(`Warning: Failed to remove ${result.ide}: ${result.error || 'unknown error'}`);
       }
     }
   }
@@ -342,13 +354,14 @@ class Installer {
       return;
     }

-    for (const ide of validIdes) {
-      const setupResult = await this.ideManager.setup(ide, paths.projectRoot, paths.bmadDir, {
-        selectedModules: allModules || [],
-        verbose: config.verbose,
-        previousSkillIds,
-      });
+    const setupResults = await this.ideManager.setupBatch(validIdes, paths.projectRoot, paths.bmadDir, {
+      selectedModules: allModules || [],
+      verbose: config.verbose,
+      previousSkillIds,
+    });

+    for (const setupResult of setupResults) {
+      const ide = setupResult.ide;
       if (setupResult.success) {
         addResult(ide, 'ok', setupResult.detail || '');
       } else {
diff --git a/tools/installer/core/legacy-warnings.js b/tools/installer/core/legacy-warnings.js
new file mode 100644
index 000000000..e3098b82b
--- /dev/null
+++ b/tools/installer/core/legacy-warnings.js
@@ -0,0 +1,151 @@
+const os = require('node:os');
+const path = require('node:path');
+const semver = require('semver');
+const fs = require('../fs-native');
+const prompts = require('../prompts');
+const { BMAD_FOLDER_NAME } = require('../ide/shared/path-utils');
+const { getInstalledCanonicalIds, isBmadOwnedEntry } = require('../ide/shared/installed-skills');
+
+const MIN_NATIVE_SKILLS_VERSION = '6.1.0';
+
+// Pre-v6.1.0 paths: BMAD used to install commands/workflows/etc in tool-specific dirs.
+// In v6.1.0 BMAD switched to native SKILL.md format.
+const LEGACY_COMMAND_PATHS = [
+  '.agent/workflows',
+  '.augment/commands',
+  '.claude/commands',
+  '.clinerules/workflows',
+  '.codex/prompts',
+  '~/.codex/prompts',
+  '.codebuddy/commands',
+  '.crush/commands',
+  '.cursor/commands',
+  '.gemini/commands',
+  '.github/agents',
+  '.github/prompts',
+  '.iflow/commands',
+  '.kilocode/workflows',
+  '.kiro/steering',
+  '.opencode/agents',
+  '.opencode/commands',
+  '.opencode/agent',
+  '.opencode/command',
+  '.qwen/commands',
+  '.roo/commands',
+  '.rovodev/workflows',
+  '.trae/rules',
+  '.windsurf/workflows',
+];
+
+// Skill paths that moved to the cross-tool .agents/skills/ standard.
+// Users upgrading from a prior install may have stale BMAD skills here that
+// the AI tool will load alongside the new ones, causing duplicates.
+const LEGACY_SKILL_PATHS = [
+  '.augment/skills',
+  '~/.augment/skills',
+  '.codex/skills',
+  '.crush/skills',
+  '.cursor/skills',
+  '~/.cursor/skills',
+  '.gemini/skills',
+  '~/.gemini/skills',
+  '.github/skills',
+  '~/.github/skills',
+  '.kilocode/skills',
+  '.kimi/skills',
+  '~/.kimi/skills',
+  '.opencode/skills',
+  '~/.opencode/skills',
+  '.pi/skills',
+  '~/.pi/skills',
+  '.roo/skills',
+  '~/.roo/skills',
+  '.rovodev/skills',
+  '~/.rovodev/skills',
+  '.windsurf/skills',
+  '~/.windsurf/skills',
+  '~/.codeium/windsurf/skills',
+];
+
+const LEGACY_PATHS = [...LEGACY_COMMAND_PATHS, ...LEGACY_SKILL_PATHS];
+
+function expandPath(p) {
+  if (p === '~') return os.homedir();
+  if (p.startsWith('~/')) return path.join(os.homedir(), p.slice(2));
+  return p;
+}
+
+function resolveLegacyPath(projectRoot, p) {
+  if (path.isAbsolute(p) || p.startsWith('~')) return expandPath(p);
+  return path.join(projectRoot, p);
+}
+
+async function findStaleLegacyDirs(projectRoot) {
+  const bmadDir = path.join(projectRoot, BMAD_FOLDER_NAME);
+  const canonicalIds = await getInstalledCanonicalIds(bmadDir);
+
+  const findings = [];
+  for (const legacyPath of LEGACY_PATHS) {
+    const resolved = resolveLegacyPath(projectRoot, legacyPath);
+    if (!(await fs.pathExists(resolved))) continue;
+    try {
+      const entries = await fs.readdir(resolved);
+      const bmadEntries = entries.filter((e) => isBmadOwnedEntry(e, canonicalIds));
+      if (bmadEntries.length > 0) {
+        findings.push({ path: resolved, displayPath: legacyPath, count: bmadEntries.length, entries: bmadEntries });
+      }
+    } catch {
+      // Unreadable dir — skip
+    }
+  }
+  return findings;
+}
+
+function isPreNativeSkillsVersion(version) {
+  if (!version) return false;
+  const coerced = semver.valid(version) || semver.valid(semver.coerce(version));
+  if (!coerced) return false;
+  return semver.lt(coerced, MIN_NATIVE_SKILLS_VERSION);
+}
+
+async function warnPreNativeSkillsLegacy({ projectRoot, existingVersion } = {}) {
+  const versionTriggered =
isPreNativeSkillsVersion(existingVersion); + const staleDirs = await findStaleLegacyDirs(projectRoot); + + if (!versionTriggered && staleDirs.length === 0) return; + + if (versionTriggered) { + await prompts.log.warn( + `Detected previous BMAD install v${existingVersion} (pre-${MIN_NATIVE_SKILLS_VERSION}). ` + + `BMAD switched to native skills format in v${MIN_NATIVE_SKILLS_VERSION}; old command/workflow directories from your prior install may still be present.`, + ); + } + + if (staleDirs.length > 0) { + await prompts.log.warn( + `Found stale BMAD entries in ${staleDirs.length} legacy location(s) that the new installer no longer manages. ` + + `Your AI tool may load these alongside the new skills, causing duplicates. Remove them manually:`, + ); + for (const finding of staleDirs) { + // Print each entry by exact name. A `bmad*` glob would (a) miss + // custom-module skills the canonicalId scan now picks up, and + // (b) match bmad-os-* utility skills the user should keep. + const entries = finding.entries || []; + for (const entry of entries) { + await prompts.log.message(` rm -rf "${path.join(finding.path, entry)}"`); + } + } + } else if (versionTriggered) { + await prompts.log.message( + ' No stale legacy directories detected, but if your AI tool shows duplicate BMAD commands after install, check for old `bmad-*` entries in tool-specific dirs (e.g. 
.claude/commands, .cursor/commands).', + ); + } +} + +module.exports = { + warnPreNativeSkillsLegacy, + findStaleLegacyDirs, + isPreNativeSkillsVersion, + LEGACY_PATHS, + MIN_NATIVE_SKILLS_VERSION, +}; diff --git a/tools/installer/ide/_config-driven.js b/tools/installer/ide/_config-driven.js index 563818f67..737e10862 100644 --- a/tools/installer/ide/_config-driven.js +++ b/tools/installer/ide/_config-driven.js @@ -1,10 +1,10 @@ -const os = require('node:os'); const path = require('node:path'); const fs = require('../fs-native'); const yaml = require('yaml'); const prompts = require('../prompts'); const csv = require('csv-parse/sync'); const { BMAD_FOLDER_NAME } = require('./shared/path-utils'); +const { getInstalledCanonicalIds, isBmadOwnedEntry } = require('./shared/installed-skills'); /** * Config-driven IDE setup handler @@ -16,7 +16,7 @@ const { BMAD_FOLDER_NAME } = require('./shared/path-utils'); * Features: * - Config-driven from platform-codes.yaml * - Verbatim skill installation from skill-manifest.csv - * - Legacy directory cleanup and IDE-specific marker removal + * - IDE-specific marker removal (copilot-instructions, kilo modes, rovodev prompts) */ class ConfigDrivenIdeSetup { constructor(platformCode, platformConfig) { @@ -44,16 +44,20 @@ class ConfigDrivenIdeSetup { async detect(projectDir) { if (!this.configDir) return false; - const dir = path.join(projectDir || process.cwd(), this.configDir); - if (await fs.pathExists(dir)) { - try { - const entries = await fs.readdir(dir); - return entries.some((e) => typeof e === 'string' && e.startsWith('bmad')); - } catch { - return false; - } + const root = projectDir || process.cwd(); + const dir = path.join(root, this.configDir); + if (!(await fs.pathExists(dir))) return false; + + let entries; + try { + entries = await fs.readdir(dir); + } catch { + return false; } - return false; + + const bmadDir = await this._findBmadDir(root); + const canonicalIds = await getInstalledCanonicalIds(bmadDir); + return 
entries.some((e) => isBmadOwnedEntry(e, canonicalIds)); } /** @@ -92,6 +96,12 @@ class ConfigDrivenIdeSetup { return { success: false, reason: 'no-config' }; } + // When a peer platform in the same install batch owns this target_dir, + // skip the skill write — the peer has already populated it. + if (options.skipTarget) { + return { success: true, results: { skills: 0, sharedTargetHandledByPeer: true } }; + } + if (this.installerConfig.target_dir) { return this.installToTarget(projectDir, bmadDir, this.installerConfig, options); } @@ -222,27 +232,6 @@ class ConfigDrivenIdeSetup { removalSet = new Set(); } - // Migrate legacy target directories (e.g. .opencode/agent → .opencode/agents) - // Legacy dirs are abandoned entirely, so use prefix matching (null removalSet) - if (this.installerConfig?.legacy_targets) { - const legacyDirsExist = await Promise.all( - this.installerConfig.legacy_targets.map((d) => - this.isGlobalPath(d) ? fs.pathExists(d.replace(/^~/, os.homedir())) : fs.pathExists(path.join(projectDir, d)), - ), - ); - if (legacyDirsExist.some(Boolean)) { - if (!options.silent) await prompts.log.message(' Migrating legacy directories...'); - for (const legacyDir of this.installerConfig.legacy_targets) { - if (this.isGlobalPath(legacyDir)) { - await this.warnGlobalLegacy(legacyDir, options); - } else { - await this.cleanupTarget(projectDir, legacyDir, options, null); - await this.removeEmptyParents(projectDir, legacyDir); - } - } - } - } - // Strip BMAD markers from copilot-instructions.md if present if (this.name === 'github-copilot') { await this.cleanupCopilotInstructions(projectDir, options); @@ -258,47 +247,17 @@ class ConfigDrivenIdeSetup { await this.cleanupRovoDevPrompts(projectDir, options); } + // Skip target_dir cleanup when a peer platform owns this directory + // (set during dedup'd install or when uninstalling one of several + // platforms that share the same target_dir). 
+ if (options.skipTarget) return; + // Clean current target directory if (this.installerConfig?.target_dir) { await this.cleanupTarget(projectDir, this.installerConfig.target_dir, options, removalSet); } } - /** - * Check if a path is global (starts with ~ or is absolute) - * @param {string} p - Path to check - * @returns {boolean} - */ - isGlobalPath(p) { - return p.startsWith('~') || path.isAbsolute(p); - } - - /** - * Warn about stale BMAD files in a global legacy directory (never auto-deletes) - * @param {string} legacyDir - Legacy directory path (may start with ~) - * @param {Object} options - Options (silent, etc.) - */ - async warnGlobalLegacy(legacyDir, options = {}) { - try { - const expanded = legacyDir.startsWith('~/') - ? path.join(os.homedir(), legacyDir.slice(2)) - : legacyDir === '~' - ? os.homedir() - : legacyDir; - - if (!(await fs.pathExists(expanded))) return; - - const entries = await fs.readdir(expanded); - const bmadFiles = entries.filter((e) => typeof e === 'string' && e.startsWith('bmad')); - - if (bmadFiles.length > 0 && !options.silent) { - await prompts.log.warn(`Found ${bmadFiles.length} stale BMAD file(s) in ${expanded}. Remove manually: rm ${expanded}/bmad-*`); - } - } catch { - // Errors reading global paths are silently ignored - } - } - /** * Find the _bmad directory in a project * @param {string} projectDir - Project directory @@ -426,8 +385,8 @@ class ConfigDrivenIdeSetup { // Always preserve bmad-os-* utility skills regardless of cleanup mode if (entry.startsWith('bmad-os-')) continue; - // Surgical removal from set, or legacy prefix matching when set is null - const shouldRemove = removalSet ? removalSet.has(entry) : entry.startsWith('bmad'); + // Surgical removal from set, or fallback to manifest+prefix detection when null + const shouldRemove = removalSet ? 
removalSet.has(entry) : isBmadOwnedEntry(entry, null); if (shouldRemove) { try { @@ -590,10 +549,9 @@ class ConfigDrivenIdeSetup { try { if (await fs.pathExists(candidatePath)) { const entries = await fs.readdir(candidatePath); - const hasBmad = entries.some( - (e) => typeof e === 'string' && e.toLowerCase().startsWith('bmad') && !e.toLowerCase().startsWith('bmad-os-'), - ); - if (hasBmad) { + const ancestorBmadDir = await this._findBmadDir(current); + const canonicalIds = await getInstalledCanonicalIds(ancestorBmadDir); + if (entries.some((e) => isBmadOwnedEntry(e, canonicalIds))) { return candidatePath; } } @@ -605,43 +563,6 @@ class ConfigDrivenIdeSetup { return null; } - - /** - * Walk up ancestor directories from relativeDir toward projectDir, removing each if empty - * Stops at projectDir boundary — never removes projectDir itself - * @param {string} projectDir - Project root (boundary) - * @param {string} relativeDir - Relative directory to start from - */ - async removeEmptyParents(projectDir, relativeDir) { - const resolvedProject = path.resolve(projectDir); - let current = relativeDir; - let last = null; - while (current && current !== '.' 
&& current !== last) { - last = current; - const fullPath = path.resolve(projectDir, current); - // Boundary guard: never traverse outside projectDir - if (!fullPath.startsWith(resolvedProject + path.sep) && fullPath !== resolvedProject) break; - try { - if (!(await fs.pathExists(fullPath))) { - // Dir already gone — advance current; last is reset at top of next iteration - current = path.dirname(current); - continue; - } - const remaining = await fs.readdir(fullPath); - if (remaining.length > 0) break; - await fs.rmdir(fullPath); - } catch (error) { - // ENOTEMPTY: TOCTOU race (file added between readdir and rmdir) — skip level, continue upward - // ENOENT: dir removed by another process between pathExists and rmdir — skip level, continue upward - if (error.code === 'ENOTEMPTY' || error.code === 'ENOENT') { - current = path.dirname(current); - continue; - } - break; // fatal error (e.g. EACCES) — stop upward walk - } - current = path.dirname(current); - } - } } module.exports = { ConfigDrivenIdeSetup }; diff --git a/tools/installer/ide/manager.js b/tools/installer/ide/manager.js index ac49a8773..6370e4f41 100644 --- a/tools/installer/ide/manager.js +++ b/tools/installer/ide/manager.js @@ -160,8 +160,18 @@ class IdeManager { let detail = ''; if (handlerResult && handlerResult.results) { const r = handlerResult.results; - const count = r.skillDirectories || r.skills || 0; - if (count > 0) detail = `${count} skills`; + let count = r.skillDirectories || r.skills || 0; + // Dedup'd platform: report the count its peer wrote so the user sees + // a consistent picture across all platforms sharing the dir. 
+ if (count === 0 && r.sharedTargetHandledByPeer && options.sharedSkillCount) { + count = options.sharedSkillCount; + } + const targetDir = handler.installerConfig?.target_dir || null; + if (count > 0 && targetDir) { + detail = `${count} skills → ${targetDir}`; + } else if (count > 0) { + detail = `${count} skills`; + } } // Propagate handler's success status (default true for backward compat) const success = handlerResult?.success !== false; @@ -172,6 +182,57 @@ class IdeManager { } } + /** + * Run setup for multiple IDEs as a single batch. + * Dedupes work when several selected platforms share the same target_dir: + * the first platform owns the directory write, peers skip it. + * @param {Array} ideList - IDE names to set up + * @param {string} projectDir + * @param {string} bmadDir + * @param {Object} [options] - Forwarded to each handler.setup + * @returns {Promise} Per-IDE results + */ + async setupBatch(ideList, projectDir, bmadDir, options = {}) { + await this.ensureInitialized(); + const results = []; + // target_dir → { firstIde, skillCount } from the platform that actually wrote it + const claimedTargets = new Map(); + + for (const ideName of ideList) { + const handler = this.handlers.get(ideName.toLowerCase()); + if (!handler) { + results.push(await this.setup(ideName, projectDir, bmadDir, options)); + continue; + } + + const target = handler.installerConfig?.target_dir || null; + const claim = target ? claimedTargets.get(target) : null; + const skipTarget = !!claim; + + const result = await this.setup(ideName, projectDir, bmadDir, { + ...options, + skipTarget, + sharedWith: claim?.firstIde || null, + sharedTarget: target, + sharedSkillCount: claim?.skillCount || 0, + }); + + if (target && !claim) { + const writtenCount = result.handlerResult?.results?.skillDirectories || result.handlerResult?.results?.skills || 0; + // Only claim the target when the install actually succeeded and wrote skills. 
+ // If the first platform fails (ancestor conflict, exception, etc.), leave the + // dir unclaimed so the next peer becomes the new first writer instead of + // silently skipping into a broken/empty target_dir. + if (result.success && writtenCount > 0) { + claimedTargets.set(target, { firstIde: ideName, skillCount: writtenCount }); + } + } + results.push(result); + } + + return results; + } + /** * Cleanup IDE configurations * @param {string} projectDir - Project directory @@ -198,6 +259,8 @@ class IdeManager { * @param {string} projectDir - Project directory * @param {Array} ideList - List of IDE names to clean up * @param {Object} [options] - Cleanup options passed through to handlers + * options.remainingIdes - IDE names still installed after this cleanup; used + * to skip target_dir wipe when a co-installed platform shares the dir. * @returns {Array} Results array */ async cleanupByList(projectDir, ideList, options = {}) { @@ -211,13 +274,27 @@ class IdeManager { // Build lowercase lookup for case-insensitive matching const lowercaseHandlers = new Map([...this.handlers.entries()].map(([k, v]) => [k.toLowerCase(), v])); + // Resolve target_dirs for IDEs that will remain installed after this cleanup + const remainingTargets = new Set(); + if (Array.isArray(options.remainingIdes)) { + for (const remaining of options.remainingIdes) { + const h = lowercaseHandlers.get(String(remaining).toLowerCase()); + const t = h?.installerConfig?.target_dir; + if (t) remainingTargets.add(t); + } + } + for (const ideName of ideList) { const handler = lowercaseHandlers.get(ideName.toLowerCase()); if (!handler) continue; + const target = handler.installerConfig?.target_dir || null; + const skipTarget = target && remainingTargets.has(target); + const cleanupOptions = skipTarget ? 
{ ...options, skipTarget: true } : options; + try { - await handler.cleanup(projectDir, options); - results.push({ ide: ideName, success: true }); + await handler.cleanup(projectDir, cleanupOptions); + results.push({ ide: ideName, success: true, skippedTarget: !!skipTarget }); } catch (error) { results.push({ ide: ideName, success: false, error: error.message }); } diff --git a/tools/installer/ide/platform-codes.yaml b/tools/installer/ide/platform-codes.yaml index 1899473c0..0f49a7fbe 100644 --- a/tools/installer/ide/platform-codes.yaml +++ b/tools/installer/ide/platform-codes.yaml @@ -5,128 +5,203 @@ # preferred: Whether shown as a recommended option on install # suspended: (optional) Message explaining why install is blocked # installer: -# target_dir: Directory where skill directories are installed -# legacy_targets: (optional) Old target dirs to clean up on reinstall +# target_dir: Directory where skill directories are installed (project/workspace) +# global_target_dir: (optional) User-home directory for global install # ancestor_conflict_check: (optional) Refuse install when ancestor dir has BMAD files +# +# Multiple platforms may share the same target_dir or global_target_dir — many tools +# read from the shared `.agents/skills/` and `~/.agents/skills/` cross-tool standard. +# Paths verified against each tool's primary docs as of 2026-04-25. 
platforms: + adal: + name: "AdaL" + preferred: false + installer: + target_dir: .adal/skills + global_target_dir: ~/.adal/skills + + amp: + name: "Sourcegraph Amp" + preferred: false + installer: + target_dir: .agents/skills + global_target_dir: ~/.config/agents/skills + antigravity: name: "Google Antigravity" preferred: false installer: - legacy_targets: - - .agent/workflows target_dir: .agent/skills + global_target_dir: ~/.gemini/antigravity/skills auggie: name: "Auggie" preferred: false installer: - legacy_targets: - - .augment/commands - target_dir: .augment/skills + target_dir: .agents/skills + global_target_dir: ~/.agents/skills + + bob: + name: "IBM Bob" + preferred: false + installer: + target_dir: .bob/skills + global_target_dir: ~/.bob/skills claude-code: name: "Claude Code" preferred: true installer: - legacy_targets: - - .claude/commands target_dir: .claude/skills + global_target_dir: ~/.claude/skills cline: name: "Cline" preferred: false installer: - legacy_targets: - - .clinerules/workflows target_dir: .cline/skills + global_target_dir: ~/.cline/skills codex: name: "Codex" - preferred: false + preferred: true installer: - legacy_targets: - - .codex/prompts - - ~/.codex/prompts target_dir: .agents/skills + global_target_dir: ~/.codex/skills codebuddy: name: "CodeBuddy" preferred: false installer: - legacy_targets: - - .codebuddy/commands target_dir: .codebuddy/skills + global_target_dir: ~/.codebuddy/skills + + command-code: + name: "Command Code" + preferred: false + installer: + target_dir: .agents/skills + global_target_dir: ~/.agents/skills + + cortex: + name: "Snowflake Cortex Code" + preferred: false + installer: + target_dir: .cortex/skills + global_target_dir: ~/.snowflake/cortex/skills crush: name: "Crush" preferred: false installer: - legacy_targets: - - .crush/commands - target_dir: .crush/skills + target_dir: .agents/skills + global_target_dir: ~/.config/agents/skills cursor: name: "Cursor" preferred: true installer: - legacy_targets: - - 
.cursor/commands - target_dir: .cursor/skills + target_dir: .agents/skills + global_target_dir: ~/.agents/skills + + droid: + name: "Factory Droid" + preferred: false + installer: + target_dir: .factory/skills + global_target_dir: ~/.factory/skills + + firebender: + name: "Firebender" + preferred: false + installer: + target_dir: .firebender/skills + global_target_dir: ~/.agents/skills gemini: name: "Gemini CLI" preferred: false installer: - legacy_targets: - - .gemini/commands - target_dir: .gemini/skills + target_dir: .agents/skills + global_target_dir: ~/.agents/skills github-copilot: name: "GitHub Copilot" + preferred: true + installer: + target_dir: .agents/skills + global_target_dir: ~/.agents/skills + + goose: + name: "Block Goose" preferred: false installer: - legacy_targets: - - .github/agents - - .github/prompts - target_dir: .github/skills + target_dir: .agents/skills + global_target_dir: ~/.config/agents/skills iflow: name: "iFlow" preferred: false installer: - legacy_targets: - - .iflow/commands target_dir: .iflow/skills + global_target_dir: ~/.iflow/skills junie: name: "Junie" preferred: false installer: - target_dir: .agents/skills + target_dir: .junie/skills + global_target_dir: ~/.junie/skills kilo: name: "KiloCoder" preferred: false installer: - legacy_targets: - - .kilocode/workflows - target_dir: .kilocode/skills + target_dir: .agents/skills + global_target_dir: ~/.kilocode/skills kimi-code: name: "Kimi Code" preferred: false installer: - target_dir: .kimi/skills + target_dir: .agents/skills + global_target_dir: ~/.agents/skills kiro: name: "Kiro" preferred: false installer: - legacy_targets: - - .kiro/steering target_dir: .kiro/skills + global_target_dir: ~/.kiro/skills + + kode: + name: "Kode" + preferred: false + installer: + target_dir: .kode/skills + global_target_dir: ~/.kode/skills + + mistral-vibe: + name: "Mistral Vibe" + preferred: false + installer: + target_dir: .agents/skills + global_target_dir: ~/.vibe/skills + + mux: + name: 
"Mux" + preferred: false + installer: + target_dir: .agents/skills + global_target_dir: ~/.agents/skills + + neovate: + name: "Neovate" + preferred: false + installer: + target_dir: .neovate/skills + global_target_dir: ~/.neovate/skills ona: name: "Ona" @@ -134,65 +209,98 @@ platforms: installer: target_dir: .ona/skills + openclaw: + name: "OpenClaw" + preferred: false + installer: + target_dir: .agents/skills + global_target_dir: ~/.agents/skills + opencode: name: "OpenCode" preferred: false installer: - legacy_targets: - - .opencode/agents - - .opencode/commands - - .opencode/agent - - .opencode/command - target_dir: .opencode/skills + target_dir: .agents/skills + global_target_dir: ~/.agents/skills + + openhands: + name: "OpenHands" + preferred: false + installer: + target_dir: .agents/skills + global_target_dir: ~/.agents/skills pi: name: "Pi" preferred: false installer: - target_dir: .pi/skills + target_dir: .agents/skills + global_target_dir: ~/.agents/skills + + pochi: + name: "Pochi" + preferred: false + installer: + target_dir: .agents/skills + global_target_dir: ~/.agents/skills qoder: name: "Qoder" preferred: false installer: target_dir: .qoder/skills + global_target_dir: ~/.qoder/skills qwen: name: "QwenCoder" preferred: false installer: - legacy_targets: - - .qwen/commands target_dir: .qwen/skills + global_target_dir: ~/.qwen/skills + + replit: + name: "Replit Agent" + preferred: false + installer: + target_dir: .agents/skills roo: name: "Roo Code" preferred: false installer: - legacy_targets: - - .roo/commands - target_dir: .roo/skills + target_dir: .agents/skills + global_target_dir: ~/.agents/skills rovo-dev: name: "Rovo Dev" preferred: false installer: - legacy_targets: - - .rovodev/workflows - target_dir: .rovodev/skills + target_dir: .agents/skills + global_target_dir: ~/.agents/skills trae: name: "Trae" preferred: false installer: - legacy_targets: - - .trae/rules target_dir: .trae/skills + warp: + name: "Warp" + preferred: false + installer: + 
target_dir: .agents/skills + global_target_dir: ~/.agents/skills + windsurf: name: "Windsurf" preferred: false installer: - legacy_targets: - - .windsurf/workflows - target_dir: .windsurf/skills + target_dir: .agents/skills + global_target_dir: ~/.agents/skills + + zencoder: + name: "Zencoder" + preferred: false + installer: + target_dir: .zencoder/skills + global_target_dir: ~/.zencoder/skills diff --git a/tools/installer/ide/shared/installed-skills.js b/tools/installer/ide/shared/installed-skills.js new file mode 100644 index 000000000..7c68f990f --- /dev/null +++ b/tools/installer/ide/shared/installed-skills.js @@ -0,0 +1,50 @@ +const path = require('node:path'); +const fs = require('../../fs-native'); +const csv = require('csv-parse/sync'); + +/** + * Read the global skill-manifest.csv and return the set of canonicalIds. + * These define which directory entries in a target_dir are BMAD-owned, regardless + * of whether they happen to start with "bmad-" (custom modules can ship skills + * with any prefix, e.g. "fred-cool-skill"). + * + * @param {string} bmadDir - Path to the _bmad install directory + * @returns {Promise>} Set of canonicalIds, or empty set if manifest missing + */ +async function getInstalledCanonicalIds(bmadDir) { + const ids = new Set(); + if (!bmadDir) return ids; + + const csvPath = path.join(bmadDir, '_config', 'skill-manifest.csv'); + if (!(await fs.pathExists(csvPath))) return ids; + + try { + const content = await fs.readFile(csvPath, 'utf8'); + const records = csv.parse(content, { columns: true, skip_empty_lines: true }); + for (const record of records) { + if (record.canonicalId) ids.add(record.canonicalId); + } + } catch { + // Unreadable/invalid manifest — treat as no info + } + + return ids; +} + +/** + * Test whether a directory entry is BMAD-owned. + * Prefers the manifest's canonicalIds; falls back to the legacy "bmad" prefix + * when no manifest is available (early install, ancestor lookup with no bmad dir). 
+ * + * @param {string} entry - Directory entry name + * @param {Set|null} canonicalIds - From getInstalledCanonicalIds, or null + * @returns {boolean} + */ +function isBmadOwnedEntry(entry, canonicalIds) { + if (!entry || typeof entry !== 'string') return false; + if (entry.toLowerCase().startsWith('bmad-os-')) return false; + if (canonicalIds && canonicalIds.size > 0) return canonicalIds.has(entry); + return entry.toLowerCase().startsWith('bmad'); +} + +module.exports = { getInstalledCanonicalIds, isBmadOwnedEntry }; diff --git a/tools/platform-codes.yaml b/tools/platform-codes.yaml deleted file mode 100644 index f57e9ef5c..000000000 --- a/tools/platform-codes.yaml +++ /dev/null @@ -1,175 +0,0 @@ -# BMAD Platform Codes Configuration -# Central configuration for all platform/IDE codes used in the BMAD system -# -# This file defines the standardized platform codes that are used throughout -# the installation system to identify different platforms (IDEs, tools, etc.) -# -# Format: -# code: Platform identifier used internally -# name: Display name shown to users -# preferred: Whether this platform is shown as a recommended option on install -# category: Type of platform (ide, tool, service, etc.) 
- -platforms: - # Recommended Platforms - claude-code: - name: "Claude Code" - preferred: true - category: cli - description: "Anthropic's official CLI for Claude" - - cursor: - name: "Cursor" - preferred: true - category: ide - description: "AI-first code editor" - - # Other IDEs and Tools - cline: - name: "Cline" - preferred: false - category: ide - description: "AI coding assistant" - - opencode: - name: "OpenCode" - preferred: false - category: ide - description: "OpenCode terminal coding assistant" - - codebuddy: - name: "CodeBuddy" - preferred: false - category: ide - description: "Tencent Cloud Code Assistant - AI-powered coding companion" - - auggie: - name: "Auggie" - preferred: false - category: cli - description: "AI development tool" - - roo: - name: "Roo Code" - preferred: false - category: ide - description: "Enhanced Cline fork" - - rovo-dev: - name: "Rovo Dev" - preferred: false - category: ide - description: "Atlassian's Rovo development environment" - - kiro: - name: "Kiro" - preferred: false - category: ide - description: "Amazon's AI-powered IDE" - - github-copilot: - name: "GitHub Copilot" - preferred: false - category: ide - description: "GitHub's AI pair programmer" - - codex: - name: "Codex" - preferred: false - category: cli - description: "OpenAI Codex integration" - - qwen: - name: "QwenCoder" - preferred: false - category: ide - description: "Qwen AI coding assistant" - - gemini: - name: "Gemini CLI" - preferred: false - category: cli - description: "Google's CLI for Gemini" - - iflow: - name: "iFlow" - preferred: false - category: ide - description: "AI workflow automation" - - kilo: - name: "KiloCoder" - preferred: false - category: ide - description: "AI coding platform" - - kimi-code: - name: "Kimi Code" - preferred: false - category: cli - description: "Moonshot AI's Kimi Code CLI" - - crush: - name: "Crush" - preferred: false - category: ide - description: "AI development assistant" - - antigravity: - name: "Google Antigravity" - 
preferred: false - category: ide - description: "Google's AI development environment" - - trae: - name: "Trae" - preferred: false - category: ide - description: "AI coding tool" - - windsurf: - name: "Windsurf" - preferred: false - category: ide - description: "AI-powered IDE with cascade flows" - - junie: - name: "Junie" - preferred: false - category: cli - description: "AI coding agent by JetBrains" - - ona: - name: "Ona" - preferred: false - category: ide - description: "Ona AI development environment" - -# Platform categories -categories: - ide: - name: "Integrated Development Environment" - description: "Full-featured code editors with AI assistance" - - cli: - name: "Command Line Interface" - description: "Terminal-based tools" - - tool: - name: "Development Tool" - description: "Standalone development utilities" - - service: - name: "Cloud Service" - description: "Cloud-based development platforms" - - extension: - name: "Editor Extension" - description: "Plugins for existing editors" - -# Naming conventions and rules -conventions: - code_format: "lowercase-kebab-case" - name_format: "Title Case" - max_code_length: 20 - allowed_characters: "a-z0-9-" From 1d35acfd8440798cc1eea2496ccb5e1ec8691985 Mon Sep 17 00:00:00 2001 From: Brian Date: Sat, 25 Apr 2026 21:24:43 -0500 Subject: [PATCH 09/23] docs: add v6.5.0 changelog entry (#2314) --- CHANGELOG.md | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/CHANGELOG.md b/CHANGELOG.md index bcd28889a..bbb0373a4 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,5 +1,12 @@ # Changelog +## v6.5.0 - 2026-04-26 + +### 🎁 Features + +* Support for 18 new agent platforms: AdaL, Sourcegraph Amp, IBM Bob, Command Code, Snowflake Cortex Code, Factory Droid, Firebender, Block Goose, Kode, Mistral Vibe, Mux, Neovate, OpenClaw, OpenHands, Pochi, Replit Agent, Warp, Zencoder — bringing total supported platforms to 42 (#2313) +* All platforms that support the cross-tool `.agents/skills/` standard now use it (#2313) + ## v6.4.0 
- 2026-04-24 ### ✨ Headline From 69cbeb4d07f318180c3d610c511381b9f494e786 Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" Date: Sun, 26 Apr 2026 02:25:31 +0000 Subject: [PATCH 10/23] chore(release): v6.5.0 [skip ci] --- package-lock.json | 4 ++-- package.json | 2 +- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/package-lock.json b/package-lock.json index 0bd26eff7..2a9d9657f 100644 --- a/package-lock.json +++ b/package-lock.json @@ -1,12 +1,12 @@ { "name": "bmad-method", - "version": "6.4.0", + "version": "6.5.0", "lockfileVersion": 3, "requires": true, "packages": { "": { "name": "bmad-method", - "version": "6.4.0", + "version": "6.5.0", "license": "MIT", "dependencies": { "@clack/core": "^1.0.0", diff --git a/package.json b/package.json index f34e2e84b..023b3c41f 100644 --- a/package.json +++ b/package.json @@ -1,7 +1,7 @@ { "$schema": "https://json.schemastore.org/package.json", "name": "bmad-method", - "version": "6.4.0", + "version": "6.5.0", "description": "Breakthrough Method of Agile AI-driven Development", "keywords": [ "agile", From 88b9a1c8421e1ad15288df00059d5b4f1ed85af3 Mon Sep 17 00:00:00 2001 From: Brian Date: Sat, 25 Apr 2026 22:08:44 -0500 Subject: [PATCH 11/23] fix(installer): remove pre-v6.2.0 wrapper skills on update (closes #2309) (#2315) Adds 32 entries to removals.txt covering the module-prefixed wrapper skill names used pre-v6.2.0 (bmad-bmm-* and bmad-agent-bmm-*). Users upgrading from v6.0.x / v6.1.x had these installed in their IDE skill directories, but the v6.2.0 architecture switch dropped the module prefix and the cleanup never knew the old names. Stale wrappers stayed behind alongside the new self-contained skills, causing duplicates and broken-file errors when invoked (referenced files no longer exist). The removals.txt entries get added to the cleanup removalSet on every install/update, so the next install run for an upgrading user removes the stale wrappers automatically. 
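The removal mechanism can be sketched as follows. This is an illustrative sketch, not installer code — `planRemovals` and the sample file contents are hypothetical; the real cleanup lives in `cleanupTarget`, which also honors the manifest's canonicalIds. It shows the two invariants the change relies on: entries parsed from removals.txt join the removalSet, while `bmad-os-*` utility skills are always preserved.

```javascript
// Hypothetical sketch: how removals.txt entries select stale wrapper skills
// for deletion from an IDE skill directory.
function planRemovals(dirEntries, removalsTxt) {
  // Comment lines and blanks in removals.txt are ignored.
  const removalSet = new Set(
    removalsTxt
      .split('\n')
      .map((line) => line.trim())
      .filter((line) => line && !line.startsWith('#')),
  );
  // bmad-os-* utility skills are always preserved, mirroring cleanupTarget.
  return dirEntries.filter((e) => !e.startsWith('bmad-os-') && removalSet.has(e));
}

const removalsTxt = [
  '# Pre-v6.2.0 wrapper skills',
  'bmad-bmm-create-prd',
  'bmad-agent-bmm-dev',
].join('\n');

console.log(planRemovals(['bmad-bmm-create-prd', 'bmad-os-help', 'bmad-dev-story'], removalsTxt));
// → ['bmad-bmm-create-prd']
```

Note that `bmad-dev-story` survives: current-architecture skills are never in removals.txt, so an upgrade only deletes the names the old layout used.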
--- removals.txt | 37 +++++++++++++++++++++++++++++++++++++ 1 file changed, 37 insertions(+) diff --git a/removals.txt b/removals.txt index 81a2b5dce..5a7659dd2 100644 --- a/removals.txt +++ b/removals.txt @@ -15,3 +15,40 @@ bmad-quick-spec bmad-quick-flow bmad-quick-dev-new-preview bmad-init + +# Pre-v6.2.0 wrapper skills (module-prefixed naming, dropped in v6.2.0). +# Users upgrading from v6.0.x / v6.1.x had these installed and the cleanup +# never knew to remove them; they remained alongside the new self-contained +# skills causing duplicates and broken-file errors. See issue #2309. +bmad-agent-bmm-analyst +bmad-agent-bmm-architect +bmad-agent-bmm-dev +bmad-agent-bmm-pm +bmad-agent-bmm-qa +bmad-agent-bmm-quick-flow-solo-dev +bmad-agent-bmm-sm +bmad-agent-bmm-tech-writer +bmad-agent-bmm-ux-designer +bmad-bmm-check-implementation-readiness +bmad-bmm-code-review +bmad-bmm-correct-course +bmad-bmm-create-architecture +bmad-bmm-create-epics-and-stories +bmad-bmm-create-prd +bmad-bmm-create-product-brief +bmad-bmm-create-story +bmad-bmm-create-ux-design +bmad-bmm-dev-story +bmad-bmm-document-project +bmad-bmm-domain-research +bmad-bmm-edit-prd +bmad-bmm-generate-project-context +bmad-bmm-market-research +bmad-bmm-qa-generate-e2e-tests +bmad-bmm-quick-dev +bmad-bmm-quick-spec +bmad-bmm-retrospective +bmad-bmm-sprint-planning +bmad-bmm-sprint-status +bmad-bmm-technical-research +bmad-bmm-validate-prd From 7baa30c567fe8a7e7189f7d65b2282e4290875a5 Mon Sep 17 00:00:00 2001 From: Brian Date: Sun, 26 Apr 2026 10:30:41 -0500 Subject: [PATCH 12/23] fix(publish): advance @next dist-tag after stable release (#2320) * fix(publish): advance @next dist-tag after stable release When a stable release publishes via workflow_dispatch, @latest can leapfrog the existing @next prerelease (e.g. latest=6.5.0 while next=6.4.1-next.0), turning `npx bmad-method@next install` into a silent downgrade until the next qualifying push to main republishes a fresh -next.0. 
- publish.yaml: after stable publish, repoint @next at the just-published stable version. The existing derive-prerelease step picks max(latest, next) as its base, so subsequent push-driven prereleases bump from there. - bmad-cli.js: checkForUpdate was querying the @beta dist-tag (which this package does not use). Replace string-matching with semver.prerelease() and query @next for prerelease users. * fix(publish): harden next-tag advance step and broaden path filter - continue-on-error on the dist-tag advance: failure leaves @next stale until the next push-driven prerelease, which is recoverable; failing the job after a successful publish + git tag + GH release is not. - Status echo so release-log triage can confirm the advance ran. - Add removals.txt to the push-trigger path filter. Installer-affecting changes outside src/** (like the post-6.5.0 removals.txt fix) should still trigger a fresh -next.0 publish. --- .github/workflows/publish.yaml | 17 +++++++++++++++++ tools/installer/bmad-cli.js | 11 ++++------- 2 files changed, 21 insertions(+), 7 deletions(-) diff --git a/.github/workflows/publish.yaml b/.github/workflows/publish.yaml index 0079a5e81..696ac8f6a 100644 --- a/.github/workflows/publish.yaml +++ b/.github/workflows/publish.yaml @@ -7,6 +7,7 @@ on: - "src/**" - "tools/installer/**" - "package.json" + - "removals.txt" workflow_dispatch: inputs: channel: @@ -135,6 +136,22 @@ jobs: env: GH_TOKEN: ${{ secrets.GITHUB_TOKEN }} + - name: Advance @next dist-tag to stable + if: github.event_name == 'workflow_dispatch' && inputs.channel == 'latest' + # Failure here leaves @next stale until the next push-driven prerelease + # republishes — annoying but not release-breaking. Don't fail the job + # after a successful stable publish + tag + GH release. + continue-on-error: true + run: | + # Without this, @latest can leapfrog @next (e.g. latest=6.5.0 while + # next=6.4.1-next.0) and `npx bmad-method@next install` silently + # downgrades users. 
Point @next at the just-published stable so + # @next >= @latest always holds; the next push-driven prerelease will + # bump from this base via the existing derive step above. + VERSION=$(node -p 'require("./package.json").version') + npm dist-tag add "bmad-method@${VERSION}" next + echo "Advanced @next dist-tag to ${VERSION}" + - name: Notify Discord if: github.event_name == 'workflow_dispatch' && inputs.channel == 'latest' continue-on-error: true diff --git a/tools/installer/bmad-cli.js b/tools/installer/bmad-cli.js index 042714e45..a108b3a44 100755 --- a/tools/installer/bmad-cli.js +++ b/tools/installer/bmad-cli.js @@ -23,13 +23,10 @@ checkForUpdate().catch(() => { async function checkForUpdate() { try { - // For beta versions, check the beta tag; otherwise check latest - const isBeta = - packageJson.version.includes('Beta') || - packageJson.version.includes('beta') || - packageJson.version.includes('alpha') || - packageJson.version.includes('rc'); - const tag = isBeta ? 'beta' : 'latest'; + // Prereleases (e.g. 6.5.1-next.0) live on the `next` dist-tag; stable + // releases live on `latest`. semver.prerelease() returns null for stable, + // so this correctly routes pre-1.0-next/rc/etc. without string matching. + const tag = semver.prerelease(packageJson.version) ? 
'next' : 'latest'; const result = execSync(`npm view ${packageName}@${tag} version`, { encoding: 'utf8', From 04cfde145418392ac119a8d027d96c82555c6251 Mon Sep 17 00:00:00 2001 From: Brian Date: Sun, 26 Apr 2026 10:54:38 -0500 Subject: [PATCH 13/23] fix(installer): mirror launch channel as default for external modules (#2321) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * fix(installer): mirror launch channel as default for external modules When the user runs `npx bmad-method@next install`, the installer itself runs from a prerelease, but the interactive channel gate previously hardcoded "(all stable)" — defaulting tea/community modules to stable while bmad-method itself was on next. The bleeding-edge launch did not flow through. Detect the installer's own version via semver.prerelease() and default the gate (and per-module picker) to match — "all next" for prerelease launches, "all stable" for stable. Users keep full control: hit "n" to customize per module, or pass explicit --channel / --pin / --next flags to override. * fix(installer): seed channelOptions before module picker, not gate CodeRabbit caught a label/install mismatch in the previous approach: the module picker resolves version labels via decideChannelForModule, which runs before _interactiveChannelGate. With channelOptions.global still null at picker time, labels rendered from stable tags — then the gate flipped global to 'next' and externals installed from main HEAD. Net effect on @next launches: "tea (v1.6.0)" in the picker, but install pulled HEAD. Move the launch detection up into promptInstall, immediately after parseChannelOptions. Seeding channelOptions.global = 'next' before the picker makes labels resolve from main HEAD (matching the install) and lets the existing gate's haveFlagIntent check skip cleanly — the @next user already declared their intent by typing it. 
Per-module customization remains available via --pin / --next / --channel flags, same as for any pre-set global. --- tools/installer/ui.js | 29 ++++++++++++++++++++++++++--- 1 file changed, 26 insertions(+), 3 deletions(-) diff --git a/tools/installer/ui.js b/tools/installer/ui.js index f2f6e31c1..4ec0ef118 100644 --- a/tools/installer/ui.js +++ b/tools/installer/ui.js @@ -2,6 +2,7 @@ const path = require('node:path'); const os = require('node:os'); const semver = require('semver'); const fs = require('./fs-native'); +const installerPackageJson = require('../../package.json'); const { CLIUtils } = require('./cli-utils'); const { ExternalModuleManager } = require('./modules/external-manager'); const { resolveModuleVersion } = require('./modules/version-resolver'); @@ -128,6 +129,24 @@ class UI { await prompts.log.warn(warning); } + // When the user launched the installer from a prerelease (npx bmad-method@next), + // mirror that intent for external modules: seed the global channel to 'next' so + // the module picker's version labels resolve from main HEAD (matching what + // actually gets installed) and the interactive channel gate skips — the user + // already declared "next" intent by typing @next. Explicit channel flags + // override this seed. + if ( + semver.prerelease(installerPackageJson.version) !== null && + !channelOptions.global && + channelOptions.nextSet.size === 0 && + channelOptions.pins.size === 0 + ) { + channelOptions.global = 'next'; + await prompts.log.info( + 'Launched from a prerelease — installing all external modules from main HEAD (next channel). Pass --all-stable or --pin to override.', + ); + } + // Get directory from options or prompt let confirmedDirectory; if (options.directory) { @@ -332,8 +351,10 @@ class UI { // Interactive channel gate: "Ready to install (all stable)? [Y/n]" // Only shown for fresh installs with no channel flags and an external module - // selected. 
Non-interactive installs skip this and fall through to the - // registry default (stable) or whatever flags were supplied. + // selected. Skipped for prerelease launches because channelOptions.global + // was already seeded to 'next' upstream. Non-interactive installs skip this + // and fall through to the registry default (stable) or whatever flags were + // supplied. await this._interactiveChannelGate({ options, channelOptions, selectedModules }); let toolSelection = await this.promptToolSelection(confirmedDirectory, options); @@ -1783,7 +1804,9 @@ class UI { * * Skipped when: * - running non-interactively (--yes) - * - the user already passed channel flags (--channel / --pin / --next) + * - the user already passed channel flags (--channel / --pin / --next), OR + * the installer was launched from a prerelease (which seeds + * channelOptions.global = 'next' upstream in promptInstall) * - no externals/community modules are selected * * Mutates channelOptions.pins and channelOptions.nextSet to reflect picker choices. 
From be85e5b4a01664f2f4a2a80c9960f65bb30f8b22 Mon Sep 17 00:00:00 2001 From: Curtis Ide <60450113+cidemaxio@users.noreply.github.com> Date: Sun, 26 Apr 2026 11:55:56 -0600 Subject: [PATCH 14/23] fix(installer): support local custom-source modules in resolveInstalledModuleYaml and TOML key (#2316) - resolveInstalledModuleYaml: fall back to CustomModuleManager._resolutionCache for local custom-source modules (external cache path doesn't exist for these); refactor candidate-path search into shared searchRoot() helper; add *-setup/assets/module.yaml BMB standard path - manifest-generator: use module code field (not display name) as TOML section key [modules.X] Co-authored-by: cidemaxio --- tools/installer/core/manifest-generator.js | 11 +++- tools/installer/project-root.js | 62 ++++++++++++++++------ 2 files changed, 56 insertions(+), 17 deletions(-) diff --git a/tools/installer/core/manifest-generator.js b/tools/installer/core/manifest-generator.js index eb1012036..f7b5d0084 100644 --- a/tools/installer/core/manifest-generator.js +++ b/tools/installer/core/manifest-generator.js @@ -435,6 +435,9 @@ class ManifestGenerator { // this means user-scoped keys (e.g. user_name) could mis-file into the // team config, so the operator should notice. const scopeByModuleKey = {}; + // Maps installer moduleName (may be full display name) → module code field + // from module.yaml, so TOML sections use [modules.] not [modules.]. 
+ const codeByModuleName = {}; for (const moduleName of this.updatedModules) { const moduleYamlPath = await resolveInstalledModuleYaml(moduleName); if (!moduleYamlPath) { @@ -447,6 +450,7 @@ class ManifestGenerator { try { const parsed = yaml.parse(await fs.readFile(moduleYamlPath, 'utf8')); if (!parsed || typeof parsed !== 'object') continue; + if (parsed.code) codeByModuleName[moduleName] = parsed.code; scopeByModuleKey[moduleName] = {}; for (const [key, value] of Object.entries(parsed)) { if (value && typeof value === 'object' && 'prompt' in value) { @@ -545,6 +549,9 @@ class ManifestGenerator { if (moduleName === 'core') continue; const cfg = moduleConfigs[moduleName]; if (!cfg || Object.keys(cfg).length === 0) continue; + // Use the module's code field from module.yaml as the TOML key so the + // section is [modules.mdo] not [modules.MDO: Maxio DevOps Operations]. + const sectionKey = codeByModuleName[moduleName] || moduleName; // Only filter out spread-from-core pollution when we actually know // this module's prompt schema. 
For external/marketplace modules whose // module.yaml isn't in the src tree, fall through as all-team so we @@ -552,14 +559,14 @@ class ManifestGenerator { const haveSchema = Object.keys(scopeByModuleKey[moduleName] || {}).length > 0; const { team: modTeam, user: modUser } = partition(moduleName, cfg, haveSchema); if (Object.keys(modTeam).length > 0) { - teamLines.push(`[modules.${moduleName}]`); + teamLines.push(`[modules.${sectionKey}]`); for (const [key, value] of Object.entries(modTeam)) { teamLines.push(`${key} = ${formatTomlValue(value)}`); } teamLines.push(''); } if (Object.keys(modUser).length > 0) { - userLines.push(`[modules.${moduleName}]`); + userLines.push(`[modules.${sectionKey}]`); for (const [key, value] of Object.entries(modUser)) { userLines.push(`${key} = ${formatTomlValue(value)}`); } diff --git a/tools/installer/project-root.js b/tools/installer/project-root.js index 1cdc30566..123bd5978 100644 --- a/tools/installer/project-root.js +++ b/tools/installer/project-root.js @@ -86,6 +86,8 @@ function getExternalModuleCachePath(moduleName, ...segments) { * Built-in modules (core, bmm) live under . External official modules are * cloned into ~/.bmad/cache/external-modules// with varying internal * layouts (some at src/module.yaml, some at skills/module.yaml, some nested). + * Local custom-source modules are not cached; their path is read from the + * CustomModuleManager resolution cache set during the same install run. * This mirrors the candidate-path search in * ExternalModuleManager.findExternalModuleSource but performs no git/network * work, which keeps it safe to call during manifest writing. 
@@ -97,26 +99,56 @@ async function resolveInstalledModuleYaml(moduleName) { const builtIn = path.join(getModulePath(moduleName), 'module.yaml'); if (await fs.pathExists(builtIn)) return builtIn; - const cacheRoot = getExternalModuleCachePath(moduleName); - if (!(await fs.pathExists(cacheRoot))) return null; + // Search a resolved root directory using the same candidate-path pattern. + async function searchRoot(root) { + for (const dir of ['skills', 'src']) { + const direct = path.join(root, dir, 'module.yaml'); + if (await fs.pathExists(direct)) return direct; - for (const dir of ['skills', 'src']) { - const direct = path.join(cacheRoot, dir, 'module.yaml'); - if (await fs.pathExists(direct)) return direct; - - const dirPath = path.join(cacheRoot, dir); - if (await fs.pathExists(dirPath)) { - const entries = await fs.readdir(dirPath, { withFileTypes: true }); - for (const entry of entries) { - if (!entry.isDirectory()) continue; - const nested = path.join(dirPath, entry.name, 'module.yaml'); - if (await fs.pathExists(nested)) return nested; + const dirPath = path.join(root, dir); + if (await fs.pathExists(dirPath)) { + const entries = await fs.readdir(dirPath, { withFileTypes: true }); + for (const entry of entries) { + if (!entry.isDirectory()) continue; + const nested = path.join(dirPath, entry.name, 'module.yaml'); + if (await fs.pathExists(nested)) return nested; + } } } + + // BMB standard: {setup-skill}/assets/module.yaml (setup skill is any *-setup directory) + const rootEntries = await fs.readdir(root, { withFileTypes: true }); + for (const entry of rootEntries) { + if (!entry.isDirectory() || !entry.name.endsWith('-setup')) continue; + const setupAssets = path.join(root, entry.name, 'assets', 'module.yaml'); + if (await fs.pathExists(setupAssets)) return setupAssets; + } + + const atRoot = path.join(root, 'module.yaml'); + if (await fs.pathExists(atRoot)) return atRoot; + return null; } - const atRoot = path.join(cacheRoot, 'module.yaml'); - if (await 
fs.pathExists(atRoot)) return atRoot; + const cacheRoot = getExternalModuleCachePath(moduleName); + if (await fs.pathExists(cacheRoot)) { + const found = await searchRoot(cacheRoot); + if (found) return found; + } + + // Fallback: local custom-source modules store their source path in the + // CustomModuleManager resolution cache populated during the same install run. + // Match by code OR name since callers may use either form. + try { + const { CustomModuleManager } = require('./modules/custom-module-manager'); + for (const [, mod] of CustomModuleManager._resolutionCache) { + if ((mod.code === moduleName || mod.name === moduleName) && mod.localPath) { + const found = await searchRoot(mod.localPath); + if (found) return found; + } + } + } catch { + // Resolution cache unavailable — continue + } return null; } From 350688df67335a932b7bd9ba914640b46453e5e3 Mon Sep 17 00:00:00 2001 From: Brian Date: Sun, 26 Apr 2026 15:53:36 -0500 Subject: [PATCH 15/23] fix(installer): resolve url-source custom modules from custom-modules cache (#2323) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * fix(installer): resolve url-source custom modules from custom-modules cache resolveInstalledModuleYaml previously only searched ~/.bmad/cache/external-modules/, so modules installed via --custom-source (cached at ~/.bmad/cache/custom-modules////) could not be located on re-install runs. This caused warnings during npx bmad-method install: [warn] collectAgentsFromModuleYaml: could not locate module.yaml for '' [warn] writeCentralConfig: could not locate module.yaml for '' Adds a fallback that walks the custom-modules cache via _findCacheRepoRoots (identifying repo roots by .bmad-source.json or .claude-plugin/, not marketplace.json, so direct-mode modules are also covered), reuses the same searchRoot candidate-path logic, and matches by the discovered yaml's code or name field. 
Works without needing _resolutionCache to be populated, which fixes the re-install scenario where no --custom-source flag is passed. Closes #2312 * fix(installer): enumerate all module.yamls when walking custom-modules cache A url-source custom-modules repo can host multiple plugins in discovery mode (e.g. skills/module-a/module.yaml and skills/module-b/module.yaml). The previous walk used searchRoot which returned only the first match, so asking for module-b would surface module-a's yaml, fail the code/name check, and skip the repo entirely — never inspecting module-b. Splits the candidate-path traversal into searchRootAll (returns every module.yaml in priority order) and a thin searchRoot wrapper for the existing single-module fallbacks. The custom-modules walk now iterates every yaml per repo and matches each against code or name. --- tools/installer/project-root.js | 63 ++++++++++++++++++++++++++++----- 1 file changed, 54 insertions(+), 9 deletions(-) diff --git a/tools/installer/project-root.js b/tools/installer/project-root.js index 123bd5978..f883c8a2e 100644 --- a/tools/installer/project-root.js +++ b/tools/installer/project-root.js @@ -1,5 +1,6 @@ const path = require('node:path'); const os = require('node:os'); +const yaml = require('yaml'); const fs = require('./fs-native'); /** @@ -86,8 +87,11 @@ function getExternalModuleCachePath(moduleName, ...segments) { * Built-in modules (core, bmm) live under . External official modules are * cloned into ~/.bmad/cache/external-modules// with varying internal * layouts (some at src/module.yaml, some at skills/module.yaml, some nested). - * Local custom-source modules are not cached; their path is read from the - * CustomModuleManager resolution cache set during the same install run. + * Url-source custom modules are cloned into ~/.bmad/cache/custom-modules//// + * and are resolved by walking the cache and matching `code` or `name` from the + * discovered module.yaml. 
Local custom-source modules are not cached; their + * path is read from the CustomModuleManager resolution cache set during the + * same install run. * This mirrors the candidate-path search in * ExternalModuleManager.findExternalModuleSource but performs no git/network * work, which keeps it safe to call during manifest writing. @@ -99,11 +103,14 @@ async function resolveInstalledModuleYaml(moduleName) { const builtIn = path.join(getModulePath(moduleName), 'module.yaml'); if (await fs.pathExists(builtIn)) return builtIn; - // Search a resolved root directory using the same candidate-path pattern. - async function searchRoot(root) { + // Collect every module.yaml under a root using the standard candidate paths. + // Url-source repos can host multiple plugins (discovery mode), so we need all + // matches, not just the first. Returned in priority order. + async function searchRootAll(root) { + const results = []; for (const dir of ['skills', 'src']) { const direct = path.join(root, dir, 'module.yaml'); - if (await fs.pathExists(direct)) return direct; + if (await fs.pathExists(direct)) results.push(direct); const dirPath = path.join(root, dir); if (await fs.pathExists(dirPath)) { @@ -111,7 +118,7 @@ async function resolveInstalledModuleYaml(moduleName) { for (const entry of entries) { if (!entry.isDirectory()) continue; const nested = path.join(dirPath, entry.name, 'module.yaml'); - if (await fs.pathExists(nested)) return nested; + if (await fs.pathExists(nested)) results.push(nested); } } } @@ -121,12 +128,19 @@ async function resolveInstalledModuleYaml(moduleName) { for (const entry of rootEntries) { if (!entry.isDirectory() || !entry.name.endsWith('-setup')) continue; const setupAssets = path.join(root, entry.name, 'assets', 'module.yaml'); - if (await fs.pathExists(setupAssets)) return setupAssets; + if (await fs.pathExists(setupAssets)) results.push(setupAssets); } const atRoot = path.join(root, 'module.yaml'); - if (await fs.pathExists(atRoot)) return atRoot; - 
return null; + if (await fs.pathExists(atRoot)) results.push(atRoot); + return results; + } + + // Backwards-compatible single-result variant for the existing external-cache + // and resolution-cache fallbacks (one module per root by construction). + async function searchRoot(root) { + const all = await searchRootAll(root); + return all.length > 0 ? all[0] : null; } const cacheRoot = getExternalModuleCachePath(moduleName); @@ -150,6 +164,37 @@ async function resolveInstalledModuleYaml(moduleName) { // Resolution cache unavailable — continue } + // Fallback: url-source custom modules cloned to ~/.bmad/cache/custom-modules/. + // Walk every cached repo, enumerate ALL module.yaml files via searchRootAll + // (a single repo can host multiple plugins in discovery mode), and match by + // the yaml's `code` or `name` field. This works on re-install runs where + // _resolutionCache is empty and covers both discovery-mode (with marketplace.json) + // and direct-mode modules, since we identify repo roots by .bmad-source.json + // (written by cloneRepo) or .claude-plugin/ rather than by marketplace.json. 
+ try { + const customCacheDir = path.join(os.homedir(), '.bmad', 'cache', 'custom-modules'); + if (await fs.pathExists(customCacheDir)) { + const { CustomModuleManager } = require('./modules/custom-module-manager'); + const customMgr = new CustomModuleManager(); + const repoRoots = await customMgr._findCacheRepoRoots(customCacheDir); + for (const { repoPath } of repoRoots) { + const candidates = await searchRootAll(repoPath); + for (const candidate of candidates) { + try { + const parsed = yaml.parse(await fs.readFile(candidate, 'utf8')); + if (parsed && (parsed.code === moduleName || parsed.name === moduleName)) { + return candidate; + } + } catch { + // Malformed yaml — skip + } + } + } + } + } catch { + // Custom-modules cache walk failed — continue + } + return null; } From 1ad1f91e382f5b6d2547b93d9a85e0aea5b31a93 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?AJ=20C=C3=B4t=C3=A9?= <57828010+anderewrey@users.noreply.github.com> Date: Sun, 26 Apr 2026 19:37:56 -0300 Subject: [PATCH 16/23] feat(workflows): add brownfield epic scoping to detect file churn (#1823) (#1826) Add design completeness gate, file overlap check, and validation to prevent unnecessary file churn when epics target the same component. --- .../steps/step-02-design-epics.md | 38 +++++++++++++++++-- .../steps/step-04-final-validation.md | 6 +++ 2 files changed, 40 insertions(+), 4 deletions(-) diff --git a/src/bmm-skills/3-solutioning/bmad-create-epics-and-stories/steps/step-02-design-epics.md b/src/bmm-skills/3-solutioning/bmad-create-epics-and-stories/steps/step-02-design-epics.md index 00dd285e1..937f2df22 100644 --- a/src/bmm-skills/3-solutioning/bmad-create-epics-and-stories/steps/step-02-design-epics.md +++ b/src/bmm-skills/3-solutioning/bmad-create-epics-and-stories/steps/step-02-design-epics.md @@ -55,7 +55,8 @@ Load {planning_artifacts}/epics.md and review: 2. **Requirements Grouping**: Group related FRs that deliver cohesive user outcomes 3. 
**Incremental Delivery**: Each epic should deliver value independently 4. **Logical Flow**: Natural progression from user's perspective -5. **🔗 Dependency-Free Within Epic**: Stories within an epic must NOT depend on future stories +5. **Dependency-Free Within Epic**: Stories within an epic must NOT depend on future stories +6. **Implementation Efficiency**: Consider consolidating epics that all modify the same core files into fewer epics **⚠️ CRITICAL PRINCIPLE:** Organize by USER VALUE, not technical layers: @@ -74,6 +75,18 @@ Organize by USER VALUE, not technical layers: - Epic 3: Frontend Components (creates reusable components) - **No user value** - Epic 4: Deployment Pipeline (CI/CD setup) - **No user value** +**❌ WRONG Epic Examples (File Churn on Same Component):** + +- Epic 1: File Upload (modifies model, controller, web form, web API) +- Epic 2: File Status (modifies model, controller, web form, web API) +- Epic 3: File Access permissions (modifies model, controller, web form, web API) +- All three epics touch the same files — consolidate into one epic with ordered stories + +**✅ CORRECT Alternative:** + +- Epic 1: File Management Enhancement (upload, status, permissions as stories within one epic) +- Rationale: Single component, fully pre-designed, no feedback loop between epics + **🔗 DEPENDENCY RULES:** - Each epic must deliver COMPLETE functionality for its domain @@ -82,21 +95,38 @@ Organize by USER VALUE, not technical layers: ### 3. Design Epic Structure Collaboratively -**Step A: Identify User Value Themes** +**Step A: Assess Context and Identify Themes** + +First, assess how much of the solution design is already validated (Architecture, UX, Test Design). +When the outcome is certain and direction changes between epics are unlikely, prefer fewer but larger epics. +Split into multiple epics when there is a genuine risk boundary or when early feedback could change direction +of following epics. 
+ +Then, identify user value themes: - Look for natural groupings in the FRs - Identify user journeys or workflows - Consider user types and their goals **Step B: Propose Epic Structure** -For each proposed epic: + +For each proposed epic (considering whether epics share the same core files): 1. **Epic Title**: User-centric, value-focused 2. **User Outcome**: What users can accomplish after this epic 3. **FR Coverage**: Which FR numbers this epic addresses 4. **Implementation Notes**: Any technical or UX considerations -**Step C: Create the epics_list** +**Step C: Review for File Overlap** + +Assess whether multiple proposed epics repeatedly target the same core files. If overlap is significant: + +- Distinguish meaningful overlap (same component end-to-end) from incidental sharing +- Ask whether to consolidate into one epic with ordered stories +- If confirmed, merge the epic FRs into a single epic, preserving dependency flow: each story must still fit within + a single dev agent's context + +**Step D: Create the epics_list** Format the epics_list as: diff --git a/src/bmm-skills/3-solutioning/bmad-create-epics-and-stories/steps/step-04-final-validation.md b/src/bmm-skills/3-solutioning/bmad-create-epics-and-stories/steps/step-04-final-validation.md index 6b6839097..6d2dd9dfa 100644 --- a/src/bmm-skills/3-solutioning/bmad-create-epics-and-stories/steps/step-04-final-validation.md +++ b/src/bmm-skills/3-solutioning/bmad-create-epics-and-stories/steps/step-04-final-validation.md @@ -90,6 +90,12 @@ Review the complete epic and story breakdown to ensure EVERY FR is covered: - Dependencies flow naturally - Foundation stories only setup what's needed - No big upfront technical work +- **File Churn Check:** Do multiple epics repeatedly modify the same core files? 
+ - Assess whether the overlap pattern suggests unnecessary churn or is incidental + - If overlap is significant: Validate that splitting provides genuine value (risk mitigation, feedback loops, context size limits) + - If no justification for the split: Recommend consolidation into fewer epics + - ❌ WRONG: Multiple epics each modify the same core files with no feedback loop between them + - ✅ RIGHT: Epics target distinct files/components, OR consolidation was explicitly considered and rejected with rationale ### 5. Dependency Validation (CRITICAL) From 6ff74ba662e19c7591654d75833a46099783b9e5 Mon Sep 17 00:00:00 2001 From: Brian Date: Sun, 26 Apr 2026 22:50:47 -0500 Subject: [PATCH 17/23] fix(installer): route community installs through PluginResolver when marketplace.json ships (#2331) * fix(installer): route community installs through PluginResolver when marketplace.json ships Community-catalog installs ignored .claude-plugin/marketplace.json, so modules that nest module.yaml inside a setup skill's assets/ directory (e.g. Strategy 2 in PluginResolver) ended up half-installed: only module-help.csv and the generated config.yaml landed in _bmad//, while the actual skill source trees and module.yaml never got copied. The install would silently emit "could not locate module.yaml" warnings and leave .agents/skills/ without the module's skills. The fix wires the existing PluginResolver onto the community path: - CommunityModuleManager.cloneModule now detects marketplace.json after the clone+ref-checkout completes and runs PluginResolver. The resolution is stamped with channel/sha/registryApprovedTag/registryApprovedSha and cached in _pluginResolutions, mirroring the existing _resolutions cache. - OfficialModules.install consults the community plugin resolution and delegates to installFromResolution (the same code path custom-source installs already use). 
installFromResolution branches on communitySource to write source: 'community' with the registry's approved tag/sha and channel. - resolveInstalledModuleYaml now searches the community-modules cache root in addition to the external-modules cache, and the BMB setup-skill detector walks src/skills/ and skills/ (not just the repo root) so collectAgentsFromModuleYaml and writeCentralConfig can find module.yaml in nested marketplace-plugin layouts. Backward compatibility: repos without marketplace.json (e.g. WDS, which declares module_definition: src/module.yaml at the root) continue through the legacy findModuleSource path with no behavior change. Verified against the live zarlor/suno-band-manager community module and a 23-check fixture suite covering Suno-shape, WDS-shape, and bare-repo layouts. * fix(installer): harden community marketplace.json resolution path Address review feedback on the community marketplace.json install path: - Wrap PluginResolver.resolve() in try/catch so a malformed plugin entry falls through to the legacy install path with a warn instead of crashing cloneModule. - Stop mutating the resolver's return object; shallow-clone before stamping community provenance so install state cannot leak back into resolver-owned objects. - Warn when _selectPluginForModule lands on the single-plugin fallback with a name that doesn't match the registry code or module_definition hint, so a misconfigured marketplace.json can't silently install the wrong plugin. - Add CommunityModuleManager.resolveFromCache() and call it from OfficialModules.install() when the in-process plugin cache is empty, so callers that reach install() without pre-cloning still get the marketplace-aware path. Reuses an existing channel resolution when present, otherwise synthesizes a stable-channel stub from the registry entry plus the cached repo's HEAD. - Align installFromResolution()'s returned versionInfo.version with manifestEntry.version precedence (communityVersion || cloneRef || ...)
so downstream summaries match what was written to the manifest. Tests: lint, format:check, lint:md, test:install (290), test:channels (83), test:refs (7) all green. --- tools/installer/modules/community-manager.js | 220 +++++++++++++++++++ tools/installer/modules/official-modules.js | 46 +++- tools/installer/project-root.js | 28 ++- 3 files changed, 277 insertions(+), 17 deletions(-) diff --git a/tools/installer/modules/community-manager.js b/tools/installer/modules/community-manager.js index 04904a7e1..192e8f701 100644 --- a/tools/installer/modules/community-manager.js +++ b/tools/installer/modules/community-manager.js @@ -29,6 +29,11 @@ class CommunityModuleManager { // Shared across all instances; the manifest writer often uses a fresh instance. static _resolutions = new Map(); + // moduleCode → ResolvedModule (from PluginResolver) when the cloned repo ships + // a `.claude-plugin/marketplace.json`. Lets community installs reuse the same + // skill-level install pipeline as custom-source installs (installFromResolution). + static _pluginResolutions = new Map(); + constructor() { this._client = new RegistryClient(); this._cachedIndex = null; @@ -40,6 +45,11 @@ class CommunityModuleManager { return CommunityModuleManager._resolutions.get(moduleCode) || null; } + /** Get the marketplace.json-derived plugin resolution for a community module, if any. */ + getPluginResolution(moduleCode) { + return CommunityModuleManager._pluginResolutions.get(moduleCode) || null; + } + // ─── Data Loading ────────────────────────────────────────────────────────── /** @@ -371,6 +381,18 @@ class CommunityModuleManager { planSource: planEntry.source, }); + // If the repo ships a marketplace.json, route through PluginResolver so the + // skill-level install pipeline (installFromResolution) handles the copy. + // Repos without marketplace.json fall through to the legacy findModuleSource + // path unchanged. 
+ await this._tryResolveMarketplacePlugin(moduleCacheDir, moduleInfo, { + channel: planEntry.channel, + version: recordedVersion, + sha: installedSha, + approvedTag, + approvedSha, + }); + // Install dependencies if needed const packageJsonPath = path.join(moduleCacheDir, 'package.json'); if ((needsDependencyInstall || wasNewClone) && (await fs.pathExists(packageJsonPath))) { @@ -392,6 +414,204 @@ class CommunityModuleManager { return moduleCacheDir; } + // ─── Marketplace.json Resolution ────────────────────────────────────────── + + /** + * Detect `.claude-plugin/marketplace.json` in a cloned community repo and + * route through PluginResolver. When successful, caches the resolution so + * OfficialModulesManager.install() can route the copy through + * installFromResolution() — the same path used by custom-source installs. + * + * Silent no-op when marketplace.json is absent or the resolver returns no + * matches; the legacy findModuleSource path then handles the install. + * + * @param {string} repoPath - Absolute path to the cloned repo + * @param {Object} moduleInfo - Normalized community module info + * @param {Object} resolution - Resolution metadata from cloneModule + * @param {string} resolution.channel - Channel ('stable' | 'next' | 'pinned') + * @param {string} resolution.version - Recorded version string + * @param {string} resolution.sha - Resolved git SHA + * @param {string|null} resolution.approvedTag - Registry approved tag + * @param {string|null} resolution.approvedSha - Registry approved SHA + */ + async _tryResolveMarketplacePlugin(repoPath, moduleInfo, resolution) { + const marketplacePath = path.join(repoPath, '.claude-plugin', 'marketplace.json'); + if (!(await fs.pathExists(marketplacePath))) return; + + let marketplaceData; + try { + marketplaceData = JSON.parse(await fs.readFile(marketplacePath, 'utf8')); + } catch { + // Malformed marketplace.json — fall through to legacy path. 
+ return; + } + + const plugins = Array.isArray(marketplaceData?.plugins) ? marketplaceData.plugins : []; + if (plugins.length === 0) return; + + const selection = this._selectPluginForModule(plugins, moduleInfo); + if (!selection) { + await this._safeWarn( + `Community module '${moduleInfo.code}' ships marketplace.json but no plugin entry matches the registry code. ` + + `Falling back to legacy install path.`, + ); + return; + } + + if (selection.source === 'single-fallback') { + // Single-entry marketplace.json whose plugin name doesn't match the registry + // code or the module_definition hint. Most likely correct, but worth surfacing + // in case marketplace.json is misconfigured and we'd install the wrong plugin. + await this._safeWarn( + `Community module '${moduleInfo.code}' picked the only plugin in marketplace.json ('${selection.plugin?.name}') ` + + `because no name or module_definition match was found. Verify marketplace.json if the install looks wrong.`, + ); + } + + const { PluginResolver } = require('./plugin-resolver'); + const resolver = new PluginResolver(); + let resolved; + try { + resolved = await resolver.resolve(repoPath, selection.plugin); + } catch (error) { + // PluginResolver threw (malformed plugin entry, missing files, etc.). + // Honor the silent-fallthrough contract — warn and let the legacy + // findModuleSource path handle the install. + await this._safeWarn( + `PluginResolver failed for community module '${moduleInfo.code}': ${error.message}. ` + `Falling back to legacy install path.`, + ); + return; + } + if (!resolved || resolved.length === 0) return; + + // The registry registers a single code per module. If the resolver returns + // multiple modules (Strategy 4: multiple standalone skills), accept only + // the entry whose code matches the registry. Other entries are ignored — + // they belong to plugins not registered in the community catalog. 
+ const matched = resolved.find((mod) => mod.code === moduleInfo.code) || (resolved.length === 1 ? resolved[0] : null); + if (!matched) return; + + // Shallow-clone before stamping provenance — the resolver may cache or reuse + // its return objects, and we don't want install-specific fields leaking back. + const stamped = { + ...matched, + code: moduleInfo.code, + repoUrl: moduleInfo.url, + cloneRef: resolution.channel === 'pinned' ? resolution.version : resolution.approvedTag || null, + cloneSha: resolution.sha, + communitySource: true, + communityChannel: resolution.channel, + communityVersion: resolution.version, + registryApprovedTag: resolution.approvedTag, + registryApprovedSha: resolution.approvedSha, + }; + + CommunityModuleManager._pluginResolutions.set(moduleInfo.code, stamped); + } + + /** + * Lazy fallback: resolve marketplace.json straight from the on-disk cache + * when `_pluginResolutions` is empty (e.g. callers that reach `install()` + * without `cloneModule` having populated the cache earlier in this process). + * + * Reuses an existing channel resolution if present; otherwise synthesizes a + * minimal stable-channel stub from the registry entry + the cached repo's + * current HEAD. Returns the cached plugin resolution if one is produced, + * otherwise null (caller falls back to the legacy path). 
+ * + * @param {string} moduleCode + * @returns {Promise} + */ + async resolveFromCache(moduleCode) { + const existing = this.getPluginResolution(moduleCode); + if (existing) return existing; + + const cacheRepoDir = path.join(this.getCacheDir(), moduleCode); + const marketplacePath = path.join(cacheRepoDir, '.claude-plugin', 'marketplace.json'); + if (!(await fs.pathExists(marketplacePath))) return null; + + let moduleInfo; + try { + moduleInfo = await this.getModuleByCode(moduleCode); + } catch { + return null; + } + if (!moduleInfo) return null; + + let channelResolution = this.getResolution(moduleCode); + if (!channelResolution) { + let sha = ''; + try { + sha = execSync('git rev-parse HEAD', { cwd: cacheRepoDir, stdio: 'pipe' }).toString().trim(); + } catch { + // Not a git repo or unreadable — give up and let the legacy path run. + return null; + } + channelResolution = { + channel: 'stable', + version: moduleInfo.approvedTag || sha.slice(0, 7), + sha, + registryApprovedTag: moduleInfo.approvedTag || null, + registryApprovedSha: moduleInfo.approvedSha || null, + }; + } + + await this._tryResolveMarketplacePlugin(cacheRepoDir, moduleInfo, { + channel: channelResolution.channel, + version: channelResolution.version, + sha: channelResolution.sha, + approvedTag: channelResolution.registryApprovedTag, + approvedSha: channelResolution.registryApprovedSha, + }); + + return this.getPluginResolution(moduleCode); + } + + /** + * Best-effort warning emitter. `prompts.log.warn` may be undefined in some + * harnesses and may return a rejected promise — swallow both cases so a + * fallthrough warning can never crash the install. + */ + async _safeWarn(message) { + try { + const result = prompts.log?.warn?.(message); + if (result && typeof result.then === 'function') await result; + } catch { + /* ignore */ + } + } + + /** + * Pick which plugin entry from marketplace.json represents this community module. + * Precedence: + * 1. 
Exact match on `plugin.name === moduleInfo.code` + * 2. Trailing directory of `module_definition` matches `plugin.name` + * 3. Single plugin in marketplace.json — accepted with a warning so a + * mismatched-but-uniquely-named plugin doesn't install silently. + * Otherwise null (caller falls back to legacy path). + * + * @returns {{plugin: Object, source: 'name'|'hint'|'single-fallback'}|null} + */ + _selectPluginForModule(plugins, moduleInfo) { + const byCode = plugins.find((p) => p && p.name === moduleInfo.code); + if (byCode) return { plugin: byCode, source: 'name' }; + + if (moduleInfo.moduleDefinition) { + // module_definition like "src/skills/suno-setup/assets/module.yaml" → + // hint segment "suno-setup". Match that against plugin names. + const segments = moduleInfo.moduleDefinition.split('/').filter(Boolean); + const setupIdx = segments.findIndex((s) => s.endsWith('-setup')); + if (setupIdx !== -1) { + const hint = segments[setupIdx]; + const byHint = plugins.find((p) => p && p.name === hint); + if (byHint) return { plugin: byHint, source: 'hint' }; + } + } + + if (plugins.length === 1) return { plugin: plugins[0], source: 'single-fallback' }; + return null; + } + // ─── Source Finding ─────────────────────────────────────────────────────── /** diff --git a/tools/installer/modules/official-modules.js b/tools/installer/modules/official-modules.js index baafa7faf..4bd1e56b3 100644 --- a/tools/installer/modules/official-modules.js +++ b/tools/installer/modules/official-modules.js @@ -269,6 +269,21 @@ class OfficialModules { return this.installFromResolution(resolved, bmadDir, fileTrackingCallback, options); } + // Community modules whose cloned repo ships marketplace.json get the same + // skill-level install treatment as custom-source installs. If the in-process + // cache wasn't populated (e.g. 
caller skipped the pre-clone phase), fall + // back to resolving directly from `~/.bmad/cache/community-modules//` + // so we don't silently regress to the legacy half-install path. + const { CommunityModuleManager } = require('./community-manager'); + const communityMgr = new CommunityModuleManager(); + let communityResolved = communityMgr.getPluginResolution(moduleName); + if (!communityResolved) { + communityResolved = await communityMgr.resolveFromCache(moduleName); + } + if (communityResolved) { + return this.installFromResolution(communityResolved, bmadDir, fileTrackingCallback, options); + } + const sourcePath = await this.findModuleSource(moduleName, { silent: options.silent, channelOptions: options.channelOptions, @@ -360,21 +375,27 @@ class OfficialModules { await this.createModuleDirectories(resolved.code, bmadDir, options); } - // Update manifest. For custom modules, derive channel from the git ref: - // cloneRef present → pinned at that ref - // cloneRef absent → next (main HEAD) - // local path → no channel concept + // Update manifest. For community installs we honor the channel resolved by + // CommunityModuleManager (stable/next/pinned) and propagate the registry's + // approved tag/sha. For custom-source installs we derive channel from the + // cloneRef (present → pinned, absent → next; local paths have no channel). const { Manifest } = require('../core/manifest'); const manifestObj = new Manifest(); const hasGitClone = !!resolved.repoUrl; + const isCommunity = resolved.communitySource === true; const manifestEntry = { - version: resolved.cloneRef || (hasGitClone ? 'main' : resolved.version || null), - source: 'custom', + version: resolved.communityVersion || resolved.cloneRef || (hasGitClone ? 'main' : resolved.version || null), + source: isCommunity ? 
'community' : 'custom', npmPackage: null, repoUrl: resolved.repoUrl || null, }; - if (hasGitClone) { + if (isCommunity) { + if (resolved.communityChannel) manifestEntry.channel = resolved.communityChannel; + if (resolved.cloneSha) manifestEntry.sha = resolved.cloneSha; + if (resolved.registryApprovedTag) manifestEntry.registryApprovedTag = resolved.registryApprovedTag; + if (resolved.registryApprovedSha) manifestEntry.registryApprovedSha = resolved.registryApprovedSha; + } else if (hasGitClone) { manifestEntry.channel = resolved.cloneRef ? 'pinned' : 'next'; if (resolved.cloneSha) manifestEntry.sha = resolved.cloneSha; if (resolved.rawInput) manifestEntry.rawSource = resolved.rawInput; @@ -386,10 +407,13 @@ class OfficialModules { success: true, module: resolved.code, path: targetPath, - // Match the manifestEntry.version expression above so downstream summary - // lines show the cloned ref (tag or 'main') instead of the on-disk - // package.json version for git-backed custom installs. - versionInfo: { version: resolved.cloneRef || (hasGitClone ? 'main' : resolved.version || '') }, + // Mirror the manifestEntry.version precedence above so downstream summary + // lines show the same string we just wrote to disk (community installs + // use the registry-approved tag via `communityVersion`; custom git-backed + // installs show the cloned ref or 'main'). + versionInfo: { + version: resolved.communityVersion || resolved.cloneRef || (hasGitClone ? 
'main' : resolved.version || ''), + }, }; } diff --git a/tools/installer/project-root.js b/tools/installer/project-root.js index f883c8a2e..84ecde5b0 100644 --- a/tools/installer/project-root.js +++ b/tools/installer/project-root.js @@ -123,12 +123,18 @@ async function resolveInstalledModuleYaml(moduleName) { } } - // BMB standard: {setup-skill}/assets/module.yaml (setup skill is any *-setup directory) - const rootEntries = await fs.readdir(root, { withFileTypes: true }); - for (const entry of rootEntries) { - if (!entry.isDirectory() || !entry.name.endsWith('-setup')) continue; - const setupAssets = path.join(root, entry.name, 'assets', 'module.yaml'); - if (await fs.pathExists(setupAssets)) results.push(setupAssets); + // BMB standard: {setup-skill}/assets/module.yaml (setup skill is any *-setup directory). + // Check at the repo root, and also under src/skills/ and skills/ since + // marketplace plugins commonly nest skills under src/skills//. + const setupSearchRoots = [root, path.join(root, 'src', 'skills'), path.join(root, 'skills')]; + for (const setupRoot of setupSearchRoots) { + if (!(await fs.pathExists(setupRoot))) continue; + const entries = await fs.readdir(setupRoot, { withFileTypes: true }); + for (const entry of entries) { + if (!entry.isDirectory() || !entry.name.endsWith('-setup')) continue; + const setupAssets = path.join(setupRoot, entry.name, 'assets', 'module.yaml'); + if (await fs.pathExists(setupAssets)) results.push(setupAssets); + } } const atRoot = path.join(root, 'module.yaml'); @@ -149,6 +155,16 @@ async function resolveInstalledModuleYaml(moduleName) { if (found) return found; } + // Community modules are cloned to ~/.bmad/cache/community-modules// + // (parallel to the external-modules cache used above). Search there too so + // collectAgentsFromModuleYaml and writeCentralConfig can locate community + // module.yaml files regardless of how nested the layout is. 
+ const communityCacheRoot = path.join(os.homedir(), '.bmad', 'cache', 'community-modules', moduleName); + if (await fs.pathExists(communityCacheRoot)) { + const found = await searchRoot(communityCacheRoot); + if (found) return found; + } + // Fallback: local custom-source modules store their source path in the // CustomModuleManager resolution cache populated during the same install run. // Match by code OR name since callers may use either form. From b4d73b7dafa8bdb5ed63a128ad70ee9bd74a6604 Mon Sep 17 00:00:00 2001 From: LanyGuan <88873443+LanyGuan@users.noreply.github.com> Date: Tue, 28 Apr 2026 08:58:38 +0800 Subject: [PATCH 18/23] Fix installer custom modules http (#2344) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * fix(installer): preserve http protocol in custom module clone URLs Previously, parseSource() hardcoded 'https://' when building cloneUrl, forcing http:// Git URLs (e.g., internal LAN hosts) to upgrade to https. This broke cloning for self-hosted Git servers that only serve over HTTP. - Capture the protocol from the regex match instead of discarding it - Update JSDoc and inline comments to document HTTP support - Update install-custom-modules docs (EN, ZH, VN) to list HTTP URL type Fixes the --custom-source flag for http:// addresses. * docs(installer): update JSDoc to mention HTTP support in cloneRepo Add HTTP to the cloneRepo method's JSDoc param description. Also fixes minor spacing in empty arrow functions (formatting). * docs(installer): fix JSDoc annotation for cloneRepo param Correct @param backtick escaping in cloneRepo JSDoc. Also documents HTTP as a supported protocol alongside HTTPS and SSH. 
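The protocol-preserving parse this patch describes can be sketched as follows. `parseCloneUrl` is a hypothetical standalone helper written for illustration (the real logic lives inside `parseSource()`); the regex is the one the diff below introduces, with capture group 1 keeping the original scheme instead of hardcoding `https://`:

```javascript
// Minimal sketch of the protocol-preserving clone-URL parse.
// Returns the base clone URL, or null for non-URL input (e.g. local paths).
function parseCloneUrl(input) {
  const m = input
    .trim()
    .match(/^(https?):\/\/([^/]+)\/([^/]+)\/([^/.]+?)(?:\.git)?(\/.*)?$/);
  if (!m) return null;
  const [, protocol, host, owner, repo] = m;
  // Group 1 preserves http:// for self-hosted LAN Git servers
  // rather than silently upgrading the scheme to https://.
  return `${protocol}://${host}/${owner}/${repo}`;
}
```

Deep paths such as `/tree/main/my-module` still match: the trailing capture group absorbs them, so only the base repo URL is returned for cloning.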
--------- Co-authored-by: 关惠民 <9155544@qq.com> --- docs/how-to/install-custom-modules.md | 1 + docs/vi-vn/how-to/install-custom-modules.md | 1 + docs/zh-cn/how-to/install-custom-modules.md | 1 + .../installer/modules/custom-module-manager.js | 17 +++++++++-------- 4 files changed, 12 insertions(+), 8 deletions(-) diff --git a/docs/how-to/install-custom-modules.md b/docs/how-to/install-custom-modules.md index 288415afa..c4a38d41d 100644 --- a/docs/how-to/install-custom-modules.md +++ b/docs/how-to/install-custom-modules.md @@ -68,6 +68,7 @@ Select **Yes**, then provide a source: | Input Type | Example | | --------------------- | ------------------------------------------------- | | HTTPS URL (any host) | `https://github.com/org/repo` | +| HTTP URL (any host) | `http://host/org/repo` | | HTTPS URL with subdir | `https://github.com/org/repo/tree/main/my-module` | | SSH URL | `git@github.com:org/repo.git` | | Local path | `/Users/me/projects/my-module` | diff --git a/docs/vi-vn/how-to/install-custom-modules.md b/docs/vi-vn/how-to/install-custom-modules.md index 59ca36560..0b4064f1c 100644 --- a/docs/vi-vn/how-to/install-custom-modules.md +++ b/docs/vi-vn/how-to/install-custom-modules.md @@ -68,6 +68,7 @@ Chọn **Yes**, rồi nhập nguồn: | Loại đầu vào | Ví dụ | | --------------------- | ------------------------------------------------- | | HTTPS URL trên bất kỳ host nào | `https://github.com/org/repo` | +| HTTP URL trên bất kỳ host nào | `http://host/org/repo` | | HTTPS URL trỏ vào một thư mục con | `https://github.com/org/repo/tree/main/my-module` | | SSH URL | `git@github.com:org/repo.git` | | Đường dẫn cục bộ | `/Users/me/projects/my-module` | diff --git a/docs/zh-cn/how-to/install-custom-modules.md b/docs/zh-cn/how-to/install-custom-modules.md index 6b35c5df0..00193a3ed 100644 --- a/docs/zh-cn/how-to/install-custom-modules.md +++ b/docs/zh-cn/how-to/install-custom-modules.md @@ -68,6 +68,7 @@ Would you like to install from a custom source (Git URL or local path)? 
| 输入类型 | 示例 | | -------- | ---- | | HTTPS URL(任意主机) | `https://github.com/org/repo` | +| HTTP URL(任意主机) | `http://host/org/repo` | | 带子目录的 HTTPS URL | `https://github.com/org/repo/tree/main/my-module` | | SSH URL | `git@github.com:org/repo.git` | | 本地路径 | `/Users/me/projects/my-module` | diff --git a/tools/installer/modules/custom-module-manager.js b/tools/installer/modules/custom-module-manager.js index f6a26ba37..92644a934 100644 --- a/tools/installer/modules/custom-module-manager.js +++ b/tools/installer/modules/custom-module-manager.js @@ -24,8 +24,9 @@ class CustomModuleManager { /** * Parse a user-provided source input into a structured descriptor. - * Accepts local file paths, HTTPS Git URLs, and SSH Git URLs. - * For HTTPS URLs with deep paths (e.g., /tree/main/subdir), extracts the subdir. + * Accepts local file paths, HTTPS Git URLs, HTTP Git URLs, and SSH Git URLs. + * For HTTPS/HTTP URLs with deep paths (e.g., /tree/main/subdir), extracts the subdir. + * The original protocol (http or https) is preserved in the returned cloneUrl. * * @param {string} input - URL or local file path * @returns {Object} Parsed source descriptor: @@ -127,11 +128,11 @@ class CustomModuleManager { }; } - // HTTPS URL: https://host/owner/repo[/tree/branch/subdir][.git] - const httpsMatch = trimmed.match(/^https?:\/\/([^/]+)\/([^/]+)\/([^/.]+?)(?:\.git)?(\/.*)?$/); + // HTTPS/HTTP URL: https://host/owner/repo[/tree/branch/subdir][.git] + const httpsMatch = trimmed.match(/^(https?):\/\/([^/]+)\/([^/]+)\/([^/.]+?)(?:\.git)?(\/.*)?$/); if (httpsMatch) { - const [, host, owner, repo, remainder] = httpsMatch; - const cloneUrl = `https://${host}/${owner}/${repo}`; + const [, protocol, host, owner, repo, remainder] = httpsMatch; + const cloneUrl = `${protocol}://${host}/${owner}/${repo}`; let subdir = null; let urlRef = null; // branch/tag extracted from /tree//subdir @@ -311,7 +312,7 @@ class CustomModuleManager { /** * Clone a custom module repository to cache. 
* Supports any Git host (GitHub, GitLab, Bitbucket, self-hosted, etc.). - * @param {string} sourceInput - Git URL (HTTPS or SSH) + * @param {string} sourceInput - Git URL (HTTPS, HTTP, or SSH) * @param {Object} [options] - Clone options * @param {boolean} [options.silent] - Suppress spinner output * @param {boolean} [options.skipInstall] - Skip npm install (for browsing before user confirms) @@ -335,7 +336,7 @@ class CustomModuleManager { const createSpinner = async () => { if (silent) { - return { start() {}, stop() {}, error() {} }; + return { start() { }, stop() { }, error() { } }; } return await prompts.spinner(); }; From 3e89b30b3cdd3b2b30a8e6e5d2a2309a9d95eaed Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?J=C3=A9r=C3=B4me=20Revillard?= Date: Tue, 28 Apr 2026 03:49:21 +0200 Subject: [PATCH 19/23] fix: use full update path when --custom-source is passed with --yes (#2336) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * fix: use full update path when --custom-source is passed with --yes When --yes is used on an existing install, the installer auto-selects quick-update. However, quick-update never re-clones custom module repos — it only reads whatever is already in the cache. This means --custom-source with a new version tag (e.g. @1.1.0) is silently ignored and the previously cached version (e.g. 1.0.1) is reported as "already up to date". Default to the full update path when --custom-source is present, so the custom repo gets re-cloned at the requested version. Also ensure all installed modules are included in the selection when --yes is combined with --custom-source, preventing previously installed modules from being removed. 
* fix: address review feedback on choices.find() and comment clarity * style: prettier fix for empty-body methods in custom-module-manager --------- Co-authored-by: Brian --- tools/installer/modules/custom-module-manager.js | 2 +- tools/installer/ui.js | 14 ++++++++++---- 2 files changed, 11 insertions(+), 5 deletions(-) diff --git a/tools/installer/modules/custom-module-manager.js b/tools/installer/modules/custom-module-manager.js index 92644a934..ca3e52325 100644 --- a/tools/installer/modules/custom-module-manager.js +++ b/tools/installer/modules/custom-module-manager.js @@ -336,7 +336,7 @@ class CustomModuleManager { const createSpinner = async () => { if (silent) { - return { start() { }, stop() { }, error() { } }; + return { start() {}, stop() {}, error() {} }; } return await prompts.spinner(); }; diff --git a/tools/installer/ui.js b/tools/installer/ui.js index 4ec0ef118..7b720743b 100644 --- a/tools/installer/ui.js +++ b/tools/installer/ui.js @@ -200,12 +200,15 @@ class UI { actionType = options.action; await prompts.log.info(`Using action from command-line: ${actionType}`); } else if (options.yes) { - // Default to quick-update if available, otherwise first available choice + // Default to quick-update if available, unless flags that require the + // full update path are present (e.g. --custom-source which re-clones + // modules at a new version — quick-update skips that entirely). if (choices.length === 0) { throw new Error('No valid actions available for this installation'); } const hasQuickUpdate = choices.some((c) => c.value === 'quick-update'); - actionType = hasQuickUpdate ? 'quick-update' : choices[0].value; + const needsFullUpdate = !!options.customSource; + actionType = hasQuickUpdate && !needsFullUpdate ? 
'quick-update' : (choices.find((c) => c.value === 'update') || choices[0]).value; await prompts.log.info(`Non-interactive mode (--yes): defaulting to ${actionType}`); } else { actionType = await prompts.select({ @@ -241,8 +244,11 @@ class UI { .map((m) => m.trim()) .filter(Boolean); await prompts.log.info(`Using modules from command-line: ${selectedModules.join(', ')}`); - } else if (options.customSource) { - // Custom source without --modules: start with empty list (core added below) + } else if (options.customSource && !options.yes) { + // Custom source without --modules or --yes: start with empty list + // (only custom source modules + core will be installed). + // When --yes is also set, fall through to the --yes branch so all + // installed modules are included alongside the custom source modules. selectedModules = []; } else if (options.yes) { selectedModules = await this.getDefaultModules(installedModuleIds); From 7ee5fa313bcae045945d2df80b7fbef67873eeb4 Mon Sep 17 00:00:00 2001 From: Brian Date: Mon, 27 Apr 2026 23:01:23 -0500 Subject: [PATCH 20/23] fix(installer): require --tools for fresh --yes installs; remove --tools none (#2346) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * fix(installer): require --tools for fresh --yes installs; remove --tools none (closes #2326) Fresh non-interactive installs without --tools previously produced a config-only install (~35 files vs ~1400 in the manifest) with no warning and a "BMAD is ready to use" success card, leaving slash commands unreachable. --tools none was an explicit opt-in for the same broken state. Now: fresh install + -y without --tools throws a helpful error pointing at --list-tools. --tools none is rejected as an unknown ID. Empty and typo'd tool IDs are also rejected. Existing-install paths (--action update, quick-update, modify) are unchanged - they continue to reuse previously-configured tools when --tools is omitted. 
Adds --list-tools flag that prints all 42 supported tool IDs (id, name, target_dir, preferred star) sourced from platform-codes.yaml. English docs updated; localized docs (vi-vn, fr, cs, etc.) will sync via the normal translation pass. * fix(installer): address review for #2326 — single source of truth, drop dead code, add tests - Refactor formatPlatformList to use IdeManager so --list-tools and --tools validation see the same set of platforms. Eliminates the drift where suspended platforms appeared in --list-tools but were rejected at validation. - Drop unused getValidPlatformIds export. - Flatten redundant block scope around the throw in the --yes-without-tools branch (refactor leftover). - Drop dead String() defensive cast (Commander always passes a string). - Add Test Suite 42: 8 unit tests covering _parseToolsFlag empty/whitespace/ unknown/typo cases plus an integration check that --list-tools output and --tools validation agree on the ID set. * fix(installer): close --tools "" bypass and drop hardcoded tool count - Replace truthy `if (options.tools)` guard with `!== undefined` in both upgrade and fresh-install branches. Empty string now reaches _parseToolsFlag and produces the specific "passed empty" error instead of falling through to a generic message (fresh-install) or being silently ignored (existing-install). - Drop the hardcoded "42 supported tools" count from the prereqs in install-bmad.md so the doc doesn't drift as platform-codes.yaml changes. Addresses augment / coderabbit review on #2346. 
--- docs/how-to/install-bmad.md | 15 ++--- test/test-installation-components.js | 88 +++++++++++++++++++++++++++ tools/installer/commands/install.js | 11 +++- tools/installer/ide/platform-codes.js | 43 +++++++++++++ tools/installer/ui.js | 80 ++++++++++++++++-------- 5 files changed, 202 insertions(+), 35 deletions(-) diff --git a/docs/how-to/install-bmad.md b/docs/how-to/install-bmad.md index 616e6e430..6651143d6 100644 --- a/docs/how-to/install-bmad.md +++ b/docs/how-to/install-bmad.md @@ -18,7 +18,7 @@ Use `npx bmad-method install` to set up BMad in your project. One command handle - **Node.js** 20+ (the installer requires it) - **Git** (for cloning external modules) -- **An AI tool** such as Claude Code or Cursor — or install without one using `--tools none` +- **An AI tool** such as Claude Code or Cursor (run `npx bmad-method install --list-tools` to see all supported tools) ::: @@ -122,7 +122,8 @@ Under `--yes`, patch and minor upgrades apply automatically. Majors stay frozen | `--yes`, `-y` | Skip all prompts; accept flag values + defaults | | `--directory ` | Install into this directory (default: current working dir) | | `--modules ` | Exact module set. Core is auto-added. Not a delta — list everything you want kept. | -| `--tools ` or `--tools none` | IDE/tool selection. `none` skips tool config entirely. | +| `--tools ` | IDE/tool selection. Required for fresh `--yes` installs. Run `--list-tools` for valid IDs. | +| `--list-tools` | Print all supported tool/IDE IDs (with target directories) and exit. | | `--action ` | `install`, `update`, or `quick-update`. Defaults based on existing install state. 
| | `--custom-source ` | Install custom modules from Git URLs or local paths | | `--channel ` | Apply to all externals (aliased as `--all-stable` / `--all-next`) | @@ -165,17 +166,17 @@ npx bmad-method install --yes --modules bmm,bmb --all-next --tools claude-code ```bash npx bmad-method install --yes --action update \ - --modules bmm,bmb,gds \ - --tools none + --modules bmm,bmb,gds ``` +`--tools` is omitted intentionally — `--action update` reuses the tools configured during the first install. + **Mix channels — bmb on next, gds on stable:** ```bash npx bmad-method install --yes --action update \ --modules bmm,bmb,cis,gds \ - --next=bmb \ - --tools none + --next=bmb ``` :::caution[Rate limit on shared IPs] @@ -204,7 +205,7 @@ For cross-machine reproducibility, don't rely on rerunning the same `--modules` ```bash npx bmad-method install --yes --modules bmb,cis \ - --pin bmb=v1.7.0 --pin cis=v0.4.2 --tools none + --pin bmb=v1.7.0 --pin cis=v0.4.2 --tools claude-code ``` ## Troubleshooting diff --git a/test/test-installation-components.js b/test/test-installation-components.js index 4827afcbf..f63f1b446 100644 --- a/test/test-installation-components.js +++ b/test/test-installation-components.js @@ -2773,6 +2773,94 @@ async function runTests() { console.log(''); + // ============================================================ + // Test Suite 42: --tools flag parsing & validation (#2326) + // ============================================================ + console.log(`${colors.yellow}Test Suite 42: --tools flag parsing & validation${colors.reset}\n`); + try { + const { UI } = require('../tools/installer/ui'); + const ui = new UI(); + const known = new Set(['claude-code', 'cursor', 'windsurf']); + + assert( + JSON.stringify(ui._parseToolsFlag('claude-code', known)) === JSON.stringify(['claude-code']), + 'parseToolsFlag returns single ID', + ); + + assert( + JSON.stringify(ui._parseToolsFlag('claude-code,cursor', known)) === JSON.stringify(['claude-code', 'cursor']), + 
'parseToolsFlag returns multiple IDs', + ); + + assert( + JSON.stringify(ui._parseToolsFlag(' claude-code , cursor ', known)) === JSON.stringify(['claude-code', 'cursor']), + 'parseToolsFlag trims whitespace', + ); + + let emptyErr; + try { + ui._parseToolsFlag('', known); + } catch (error) { + emptyErr = error; + } + assert( + emptyErr && emptyErr.expected === true && /empty/i.test(emptyErr.message), + 'parseToolsFlag rejects empty string with expected=true', + ); + + let commasOnlyErr; + try { + ui._parseToolsFlag(' , , ', known); + } catch (error) { + commasOnlyErr = error; + } + assert(commasOnlyErr && commasOnlyErr.expected === true, 'parseToolsFlag rejects whitespace/comma-only input'); + + let noneErr; + try { + ui._parseToolsFlag('none', known); + } catch (error) { + noneErr = error; + } + assert(noneErr && noneErr.expected === true && /Unknown tool ID/.test(noneErr.message), 'parseToolsFlag rejects "none" as unknown ID'); + + let typoErr; + try { + ui._parseToolsFlag('claude-code,claude-cdoe', known); + } catch (error) { + typoErr = error; + } + const typoHeader = typoErr ? typoErr.message.split('\n')[0] : ''; + assert( + typoErr && typoErr.expected === true && /claude-cdoe/.test(typoHeader) && !/claude-code/.test(typoHeader), + 'parseToolsFlag reports only the unknown ID in error header (valid ones not listed as unknown)', + ); + + // --list-tools and --tools validation must agree on what counts as a valid ID. + const { formatPlatformList } = require('../tools/installer/ide/platform-codes'); + const { IdeManager } = require('../tools/installer/ide/manager'); + const ideManager42 = new IdeManager(); + await ideManager42.ensureInitialized(); + const validIds = new Set(ideManager42.getAvailableIdes().map((i) => i.value)); + const listed = await formatPlatformList(); + // Each entry line starts with ' *' (preferred) or ' ' (other), followed by the ID, then padding. 
+ const entryLines = listed.split('\n').filter((l) => /^( \*| {2})[a-z]/.test(l)); + const listedIds = entryLines.map((l) => l.trim().replace(/^\*/, '').split(/\s+/)[0]); + const missingFromList = [...validIds].filter((id) => !listedIds.includes(id)); + const extraInList = listedIds.filter((id) => !validIds.has(id)); + assert( + missingFromList.length === 0 && extraInList.length === 0, + '--list-tools output matches the IDs that --tools accepts', + `Missing from list: ${missingFromList.join(',') || '(none)'}; Extra in list: ${extraInList.join(',') || '(none)'}`, + ); + } catch (error) { + console.log(`${colors.red}Test Suite 42 setup failed: ${error.message}${colors.reset}`); + console.log(error.stack); + failed++; + } + + console.log(''); + // ============================================================ // Summary // ============================================================ diff --git a/tools/installer/commands/install.js b/tools/installer/commands/install.js index e10a0c96a..55adcfb9c 100644 --- a/tools/installer/commands/install.js +++ b/tools/installer/commands/install.js @@ -15,8 +15,9 @@ module.exports = { ['--modules ', 'Comma-separated list of module IDs to install (e.g., "bmm,bmb")'], [ '--tools ', - 'Comma-separated list of tool/IDE IDs to configure (e.g., "claude-code,cursor"). Use "none" to skip tool configuration.', + 'Comma-separated list of tool/IDE IDs to configure (e.g., "claude-code,cursor"). Required for fresh non-interactive (--yes) installs. 
Run with --list-tools to see all valid IDs.', ], + ['--list-tools', 'Print all supported tool/IDE IDs (with target directories) and exit.'], ['--action ', 'Action type for existing installations: install, update, or quick-update'], ['--user-name ', 'Name for agents to use (default: system username)'], ['--communication-language ', 'Language for agent communication (default: English)'], @@ -40,6 +41,12 @@ module.exports = { ], action: async (options) => { try { + if (options.listTools) { + const { formatPlatformList } = require('../ide/platform-codes'); + process.stdout.write((await formatPlatformList()) + '\n'); + process.exit(0); + } + // Set debug flag as environment variable for all components if (options.debug) { process.env.BMAD_DEBUG_MANIFEST = 'true'; @@ -81,7 +88,7 @@ module.exports = { } else { await prompts.log.error(`Installation failed: ${error.message}`); } - if (error.stack) { + if (error.stack && !error.expected) { await prompts.log.message(error.stack); } } catch { diff --git a/tools/installer/ide/platform-codes.js b/tools/installer/ide/platform-codes.js index f29be8fcb..6d1aa9180 100644 --- a/tools/installer/ide/platform-codes.js +++ b/tools/installer/ide/platform-codes.js @@ -31,7 +31,50 @@ function clearCache() { _cachedPlatformCodes = null; } +/** + * Format the installable platform list for human-readable output (used by --list-tools). + * Sourced from IdeManager so this view matches what --tools accepts at install time + * (suspended platforms excluded). + * @returns {Promise} Formatted multi-line string with id, name, target_dir, preferred flag. 
+ */ +async function formatPlatformList() { + const { IdeManager } = require('./manager'); + const ideManager = new IdeManager(); + await ideManager.ensureInitialized(); + + const entries = ideManager.getAvailableIdes().map((ide) => { + const handler = ideManager.handlers.get(ide.value); + return { + id: ide.value, + name: ide.name, + targetDir: handler?.installerConfig?.target_dir || '', + preferred: ide.preferred, + }; + }); + + const idWidth = Math.max(...entries.map((e) => e.id.length), 'ID'.length); + const nameWidth = Math.max(...entries.map((e) => e.name.length), 'Name'.length); + + const pad = (s, w) => s + ' '.repeat(Math.max(0, w - s.length)); + const lines = [ + `Supported tool IDs (pass via --tools [,...]):`, + '', + ` ${pad('ID', idWidth)} ${pad('Name', nameWidth)} Target dir`, + ` ${pad('-'.repeat(idWidth), idWidth)} ${pad('-'.repeat(nameWidth), nameWidth)} ${'-'.repeat(10)}`, + ]; + + for (const e of entries) { + const star = e.preferred ? ' *' : ' '; + lines.push(`${star}${pad(e.id, idWidth)} ${pad(e.name, nameWidth)} ${e.targetDir}`); + } + + lines.push('', '* = recommended / preferred', '', 'Example: bmad-method install --modules bmm --tools claude-code'); + + return lines.join('\n'); +} + module.exports = { loadPlatformCodes, clearCache, + formatPlatformList, }; diff --git a/tools/installer/ui.js b/tools/installer/ui.js index 7b720743b..1200c37ea 100644 --- a/tools/installer/ui.js +++ b/tools/installer/ui.js @@ -404,6 +404,37 @@ class UI { * @param {Object} options - Command-line options * @returns {Object} Tool configuration */ + _parseToolsFlag(toolsArg, allKnownValues) { + const selectedIdes = toolsArg + .split(',') + .map((t) => t.trim()) + .filter(Boolean); + + if (selectedIdes.length === 0) { + const err = new Error( + '--tools was passed empty. Provide at least one tool ID (e.g. 
--tools claude-code) or run with --list-tools to see valid IDs.', + ); + err.expected = true; + throw err; + } + + const unknown = selectedIdes.filter((id) => !allKnownValues.has(id)); + if (unknown.length > 0) { + const err = new Error( + [ + `Unknown tool ID${unknown.length === 1 ? '' : 's'}: ${unknown.join(', ')}`, + '', + 'Run with --list-tools to see all valid IDs.', + 'Common: claude-code, cursor, copilot, windsurf, cline', + ].join('\n'), + ); + err.expected = true; + throw err; + } + + return selectedIdes; + } + async promptToolSelection(projectDir, options = {}) { const { ExistingInstall } = require('./core/existing-install'); const { Installer } = require('./core/installer'); @@ -438,15 +469,10 @@ class UI { const allTools = [...preferredIdes, ...otherIdes]; // Non-interactive: handle --tools and --yes flags before interactive prompt - if (options.tools) { - if (options.tools.toLowerCase() === 'none') { - await prompts.log.info('Skipping tool configuration (--tools none)'); - return { ides: [], skipIde: true }; - } - const selectedIdes = options.tools - .split(',') - .map((t) => t.trim()) - .filter(Boolean); + // Use !== undefined so an explicit --tools "" falls through to _parseToolsFlag and + // gets a specific "passed empty" error instead of being silently ignored. 
+ if (options.tools !== undefined) { + const selectedIdes = this._parseToolsFlag(options.tools, allKnownValues); await prompts.log.info(`Using tools from command-line: ${selectedIdes.join(', ')}`); await this.displaySelectedTools(selectedIdes, preferredIdes, allTools); return { ides: selectedIdes, skipIde: false }; @@ -522,21 +548,13 @@ class UI { let selectedIdes = []; - // Check if tools are provided via command-line - if (options.tools) { - // Check for explicit "none" value to skip tool installation - if (options.tools.toLowerCase() === 'none') { - await prompts.log.info('Skipping tool configuration (--tools none)'); - return { ides: [], skipIde: true }; - } else { - selectedIdes = options.tools - .split(',') - .map((t) => t.trim()) - .filter(Boolean); - await prompts.log.info(`Using tools from command-line: ${selectedIdes.join(', ')}`); - await this.displaySelectedTools(selectedIdes, preferredIdes, allTools); - return { ides: selectedIdes, skipIde: false }; - } + // Check if tools are provided via command-line. + // Use !== undefined so an explicit --tools "" still hits _parseToolsFlag's empty-value error. 
+ if (options.tools !== undefined) { + selectedIdes = this._parseToolsFlag(options.tools, allKnownValues); + await prompts.log.info(`Using tools from command-line: ${selectedIdes.join(', ')}`); + await this.displaySelectedTools(selectedIdes, preferredIdes, allTools); + return { ides: selectedIdes, skipIde: false }; } else if (options.yes) { // If --yes flag is set, skip tool prompt and use previously configured tools or empty if (configuredIdes.length > 0) { @@ -544,8 +562,18 @@ class UI { await this.displaySelectedTools(configuredIdes, preferredIdes, allTools); return { ides: configuredIdes, skipIde: false }; } else { - await prompts.log.info('Skipping tool configuration (--yes flag, no previous tools)'); - return { ides: [], skipIde: true }; + const err = new Error( + [ + '--tools is required for non-interactive install (--yes / -y) when no tools are previously configured.', + '', + 'Common: claude-code, cursor, copilot, windsurf, cline', + 'See all supported tools: bmad-method install --list-tools', + '', + 'Example: bmad-method install --modules bmm --tools claude-code -y', + ].join('\n'), + ); + err.expected = true; + throw err; } } From 815600e4ca20ebce88f993a8bc9caaab20391ee2 Mon Sep 17 00:00:00 2001 From: Brian Date: Mon, 27 Apr 2026 23:14:23 -0500 Subject: [PATCH 21/23] fix(create-architecture): unprime step-07 validation checklist (#2292) (#2347) step-07-validation template shipped with all 16 completeness checkboxes pre-checked and Overall Status hard-coded to READY FOR IMPLEMENTATION, defeating the gate. Reset checkboxes to unchecked, replace status with a templated choice tied to the checklist and gap analysis, and instruct the agent to only mark items the validation actually confirms. 
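In miniature, the gate this change restores reads as a small decision rule. A hypothetical sketch follows (the helper name and its inputs are illustrative; the template itself encodes this rule in prose, and the agent, not code, applies it):

```javascript
// Hypothetical sketch of the templated status choice: derive Overall Status
// from validation results instead of hard-coding READY FOR IMPLEMENTATION.
// totalItems mirrors the template's 16 completeness checkboxes.
function overallStatus({ checkedItems, totalItems = 16, openCriticalGaps, gatingItemsUnchecked }) {
  // NOT READY: any open Critical Gap, or an unchecked item in the gating
  // sections (Requirements Analysis / Architectural Decisions).
  if (openCriticalGaps > 0 || gatingItemsUnchecked > 0) return 'NOT READY';
  // READY only when every checklist item was actually confirmed.
  if (checkedItems === totalItems) return 'READY FOR IMPLEMENTATION';
  return 'READY WITH MINOR GAPS';
}
```

Under a rule like this, a freshly generated document cannot start out READY; status only improves as validation confirms items.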
Closes #2292 --- .../steps/step-07-validation.md | 44 ++++++++++--------- 1 file changed, 23 insertions(+), 21 deletions(-) diff --git a/src/bmm-skills/3-solutioning/bmad-create-architecture/steps/step-07-validation.md b/src/bmm-skills/3-solutioning/bmad-create-architecture/steps/step-07-validation.md index 3275c5db2..246071a6a 100644 --- a/src/bmm-skills/3-solutioning/bmad-create-architecture/steps/step-07-validation.md +++ b/src/bmm-skills/3-solutioning/bmad-create-architecture/steps/step-07-validation.md @@ -227,37 +227,39 @@ Prepare the content to append to the document: ### Architecture Completeness Checklist -**✅ Requirements Analysis** +Mark each item `[x]` only if validation confirms it; leave `[ ]` if it is missing, partial, or unverified. Any unchecked item must be reflected in the Gap Analysis above and in the Overall Status below. -- [x] Project context thoroughly analyzed -- [x] Scale and complexity assessed -- [x] Technical constraints identified -- [x] Cross-cutting concerns mapped +**Requirements Analysis** -**✅ Architectural Decisions** +- [ ] Project context thoroughly analyzed +- [ ] Scale and complexity assessed +- [ ] Technical constraints identified +- [ ] Cross-cutting concerns mapped -- [x] Critical decisions documented with versions -- [x] Technology stack fully specified -- [x] Integration patterns defined -- [x] Performance considerations addressed +**Architectural Decisions** -**✅ Implementation Patterns** +- [ ] Critical decisions documented with versions +- [ ] Technology stack fully specified +- [ ] Integration patterns defined +- [ ] Performance considerations addressed -- [x] Naming conventions established -- [x] Structure patterns defined -- [x] Communication patterns specified -- [x] Process patterns documented +**Implementation Patterns** -**✅ Project Structure** +- [ ] Naming conventions established +- [ ] Structure patterns defined +- [ ] Communication patterns specified +- [ ] Process patterns documented -- [x] Complete 
directory structure defined -- [x] Component boundaries established -- [x] Integration points mapped -- [x] Requirements to structure mapping complete +**Project Structure** + +- [ ] Complete directory structure defined +- [ ] Component boundaries established +- [ ] Integration points mapped +- [ ] Requirements to structure mapping complete ### Architecture Readiness Assessment -**Overall Status:** READY FOR IMPLEMENTATION +**Overall Status:** {{READY FOR IMPLEMENTATION | READY WITH MINOR GAPS | NOT READY}} (choose READY FOR IMPLEMENTATION only when all 16 checklist items are `[x]` and no Critical Gaps remain; choose NOT READY when any Critical Gap is open or any Requirements Analysis or Architectural Decisions item is unchecked; otherwise READY WITH MINOR GAPS) **Confidence Level:** {{high/medium/low}} based on validation results From 3da984a4911017c8a7614a92ebc66bb3c92b862e Mon Sep 17 00:00:00 2001 From: Brian Date: Mon, 27 Apr 2026 23:31:59 -0500 Subject: [PATCH 22/23] fix(config): promote project_name to core (closes #2279) (#2348) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * fix(config): promote project_name to core, fixes #2279 project_name was a bmm-specific prompt despite being a universal project-level concept used by every module — including core skills like bmad-brainstorming, which loads from _bmad/core/config.yaml and was silently broken because project_name lived under bmm. Users without bmm installed could not run brainstorming at all. Move: - src/core-skills/module.yaml: declare project_name with prompt "What is your project called?" and default {directory_name}, matching what bmm previously had. - src/bmm-skills/module.yaml: remove the bmm definition; add project_name to the "Variables from Core Config inserted" header comment so contributors can see what's inherited. 
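A promotion like this leaves prior answers stranded in the old module bucket, so they need re-homing under core. A standalone sketch of that re-homing (hypothetical helper, simplified; not the installer's actual API):

```javascript
// Hypothetical sketch: hoist keys that are now core-owned out of per-module
// config buckets. An already-present core value wins; the stale module-side
// copy is removed either way, so later core/module partitioning stays clean.
function hoistCoreKeys(config, coreKeys) {
  // Guard against a malformed core bucket (scalar or array from a bad parse).
  const core = config.core && typeof config.core === 'object' && !Array.isArray(config.core) ? config.core : {};
  config.core = core;
  for (const [moduleName, cfg] of Object.entries(config)) {
    if (moduleName === 'core' || !cfg || typeof cfg !== 'object' || Array.isArray(cfg)) continue;
    for (const key of Object.keys(cfg)) {
      if (!coreKeys.has(key)) continue;
      if (!(key in core)) core[key] = cfg[key]; // hoist only if core lacks it
      delete cfg[key]; // always strip the legacy module-side copy
    }
  }
  return config;
}
```

The same shape generalizes to any future module-to-core promotion: the set of core-owned keys is the only input that changes.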
Migration for existing installs: - tools/installer/modules/official-modules.js: after loadExistingConfig reads each per-module config.yaml, hoist any keys that are now declared in core but appear under non-core modules. Without this, the partition logic in writeCentralConfig (which strips core keys from non-core buckets) would silently drop the user's prior project_name on the next quick-update. Generic — handles project_name today and any future module→core promotions. - The hoist preserves precedence: an existing core value beats a stale module-side copy. --yes seed: - tools/installer/ui.js: add project_name to the hardcoded core seed (using path.basename(directory) to match the {directory_name} default) so non-interactive fresh installs populate it. Without this the seed silently omits project_name and core skills fall back to literals. Tests: - test/test-installation-components.js Suite 43 (9 assertions) covers the schema move, the loadExistingConfig hoist, and the precedence rule. - Suite 35 fixture updated: project_name moved from bmm bucket to core, with a stale bmm copy left in place to verify it gets stripped. Verified manually: - Fresh install -y: project_name lands in [core] of config.toml. - Existing install with project_name in bmm/config.yaml: quick-update hoists it to [core] and strips it from [modules.bmm]. * fix(installer): harden config-load against malformed config.yaml Per augment review on #2348: loadExistingConfig stored any truthy yaml.parse result (including scalars like '42'), which would later crash _hoistCoreKeysFromLegacyModuleConfigs at \`key in cfg\` with "Cannot use 'in' operator to search for ... in 42". - loadExistingConfig: only keep parses that are plain objects (not scalars or arrays). A corrupt config.yaml is now treated the same as a parse error — skipped, not crashed-on. 
- _hoistCoreKeysFromLegacyModuleConfigs: belt-and-suspenders type guards on _existingConfig.core (in case it's populated by some other path) and on each module cfg in the loop. - Test Suite 43 adds 2 assertions covering a scalar core/config.yaml: loadExistingConfig must not crash, and bmm.project_name must still hoist into a clean core bucket. --- src/bmm-skills/module.yaml | 6 +- src/core-skills/module.yaml | 5 + test/test-installation-components.js | 126 +++++++++++++++++++- tools/installer/modules/official-modules.js | 54 ++++++++- tools/installer/ui.js | 3 + 5 files changed, 186 insertions(+), 8 deletions(-) diff --git a/src/bmm-skills/module.yaml b/src/bmm-skills/module.yaml index cf3232614..490de183c 100644 --- a/src/bmm-skills/module.yaml +++ b/src/bmm-skills/module.yaml @@ -5,15 +5,11 @@ default_selected: true # This module will be selected by default for new install # Variables from Core Config inserted: ## user_name +## project_name ## communication_language ## document_output_language ## output_folder -project_name: - prompt: "What is your project called?" - default: "{directory_name}" - result: "{value}" - user_skill_level: prompt: - "What is your development experience level?" diff --git a/src/core-skills/module.yaml b/src/core-skills/module.yaml index 0ccc68a78..b2b2650fb 100644 --- a/src/core-skills/module.yaml +++ b/src/core-skills/module.yaml @@ -11,6 +11,11 @@ user_name: default: "BMad" result: "{value}" +project_name: + prompt: "What is your project called?" + default: "{directory_name}" + result: "{value}" + communication_language: prompt: "What language should agents use when chatting with you?" 
scope: user diff --git a/test/test-installation-components.js b/test/test-installation-components.js index f63f1b446..a8bf77756 100644 --- a/test/test-installation-components.js +++ b/test/test-installation-components.js @@ -1813,12 +1813,12 @@ async function runTests() { const moduleConfigs = { core: { user_name: 'TestUser', + project_name: 'demo-project', communication_language: 'Spanish', document_output_language: 'English', output_folder: '_bmad-output', }, bmm: { - project_name: 'demo-project', user_skill_level: 'expert', planning_artifacts: '{project-root}/_bmad-output/planning-artifacts', implementation_artifacts: '{project-root}/_bmad-output/implementation-artifacts', @@ -1826,7 +1826,10 @@ async function runTests() { // Spread-from-core pollution: legacy per-module config.yaml merges // core values into every module; writeCentralConfig must strip these // from [modules.bmm] so core values only live in [core]. + // project_name is now a core key (#2279), so it joins user_name etc. + // as a spread-from-core key that must be stripped. 
user_name: 'TestUser', + project_name: 'stale-bmm-copy', communication_language: 'Spanish', document_output_language: 'English', output_folder: '_bmad-output', @@ -1874,6 +1877,7 @@ async function runTests() { assert(teamContent.includes('[core]'), 'config.toml has [core] section'); assert(teamContent.includes('document_output_language = "English"'), 'Team-scope core key lands in config.toml'); assert(teamContent.includes('output_folder = "_bmad-output"'), 'Team-scope output_folder lands in config.toml'); + assert(teamContent.includes('project_name = "demo-project"'), 'project_name lands in [core] (core key as of #2279)'); assert(!teamContent.includes('user_name'), 'user_name (scope: user) is absent from config.toml'); assert(!teamContent.includes('communication_language'), 'communication_language (scope: user) is absent from config.toml'); @@ -1888,7 +1892,9 @@ async function runTests() { assert(bmmTeamMatch !== null, 'config.toml has [modules.bmm] section'); if (bmmTeamMatch) { const bmmTeamBlock = bmmTeamMatch[0]; - assert(bmmTeamBlock.includes('project_name = "demo-project"'), 'bmm team-scope key lands under [modules.bmm]'); + assert(bmmTeamBlock.includes('planning_artifacts'), 'bmm-owned team-scope key (planning_artifacts) lands under [modules.bmm]'); + assert(!bmmTeamBlock.includes('project_name'), 'project_name stripped from [modules.bmm] (now a core key, #2279)'); + assert(!bmmTeamBlock.includes('stale-bmm-copy'), 'stale bmm-copy of project_name not leaked into config.toml'); assert(!bmmTeamBlock.includes('user_name'), 'user_name stripped from [modules.bmm] (core-key pollution)'); assert(!bmmTeamBlock.includes('communication_language'), 'communication_language stripped from [modules.bmm]'); assert(!bmmTeamBlock.includes('user_skill_level'), 'user_skill_level (scope: user) absent from [modules.bmm] in config.toml'); @@ -2861,6 +2867,122 @@ async function runTests() { console.log(''); + // ============================================================ + // Test 
Suite 43: project_name promoted to core + hoist migration (#2279) + // ============================================================ + console.log(`${colors.yellow}Test Suite 43: project_name in core + hoist migration${colors.reset}\n`); + try { + const yamlLib = require('yaml'); + const coreSchemaPath = path.join(__dirname, '..', 'src', 'core-skills', 'module.yaml'); + const bmmSchemaPath = path.join(__dirname, '..', 'src', 'bmm-skills', 'module.yaml'); + const coreSchema = yamlLib.parse(await fs.readFile(coreSchemaPath, 'utf8')); + const bmmSchema = yamlLib.parse(await fs.readFile(bmmSchemaPath, 'utf8')); + + assert( + coreSchema.project_name && coreSchema.project_name.prompt && coreSchema.project_name.default === '{directory_name}', + 'core/module.yaml declares project_name with {directory_name} default', + ); + + assert(coreSchema.project_name.scope === undefined, 'project_name has no user scope (project-scoped, not user-scoped)'); + + assert(bmmSchema.project_name === undefined, 'bmm/module.yaml no longer declares project_name (now inherited from core)'); + + // Set up a mock existing install: bmm directory has project_name (legacy), + // core has user_name but not project_name. After hoist, project_name should + // move to core, leaving bmm with only its own keys. 
+ const fixtureRoot43 = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-fixture-43-')); + const bmadDir43 = path.join(fixtureRoot43, '_bmad'); + await fs.ensureDir(path.join(bmadDir43, '_config')); + await fs.writeFile(path.join(bmadDir43, '_config', 'manifest.yaml'), 'modules: []\n', 'utf8'); + await fs.ensureDir(path.join(bmadDir43, 'core')); + await fs.ensureDir(path.join(bmadDir43, 'bmm')); + await fs.writeFile(path.join(bmadDir43, 'core', 'config.yaml'), 'user_name: alice\n', 'utf8'); + await fs.writeFile( + path.join(bmadDir43, 'bmm', 'config.yaml'), + 'project_name: legacy-from-bmm\nuser_skill_level: intermediate\n', + 'utf8', + ); + + const officialModules43 = new OfficialModules(); + await officialModules43.loadExistingConfig(fixtureRoot43); + + assert( + officialModules43.existingConfig.core?.project_name === 'legacy-from-bmm', + 'loadExistingConfig hoists bmm.project_name to core on existing-install upgrade', + ); + + assert( + !('project_name' in (officialModules43.existingConfig.bmm || {})), + 'loadExistingConfig removes project_name from bmm after hoisting', + ); + + assert( + officialModules43.existingConfig.bmm?.user_skill_level === 'intermediate', + 'loadExistingConfig leaves non-core bmm keys (user_skill_level) untouched', + ); + + assert(officialModules43.existingConfig.core?.user_name === 'alice', 'loadExistingConfig preserves pre-existing core values'); + + // Precedence: if core already has the key, hoist must NOT overwrite it. 
+ const fixtureRoot43b = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-fixture-43b-')); + const bmadDir43b = path.join(fixtureRoot43b, '_bmad'); + await fs.ensureDir(path.join(bmadDir43b, '_config')); + await fs.writeFile(path.join(bmadDir43b, '_config', 'manifest.yaml'), 'modules: []\n', 'utf8'); + await fs.ensureDir(path.join(bmadDir43b, 'core')); + await fs.ensureDir(path.join(bmadDir43b, 'bmm')); + await fs.writeFile(path.join(bmadDir43b, 'core', 'config.yaml'), 'project_name: from-core\n', 'utf8'); + await fs.writeFile(path.join(bmadDir43b, 'bmm', 'config.yaml'), 'project_name: stale-from-bmm\n', 'utf8'); + + const officialModules43b = new OfficialModules(); + await officialModules43b.loadExistingConfig(fixtureRoot43b); + + assert(officialModules43b.existingConfig.core?.project_name === 'from-core', 'hoist does not overwrite an existing core value'); + + assert( + !('project_name' in (officialModules43b.existingConfig.bmm || {})), + 'hoist still strips the duplicate from bmm so writeCentralConfig partition stays clean', + ); + + // Malformed config.yaml (parses to a scalar) must not crash loadExistingConfig + // or the hoist pass — they should treat it as "no config for that module" + // and continue. Regression for augment review on PR #2348. + const fixtureRoot43c = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-fixture-43c-')); + const bmadDir43c = path.join(fixtureRoot43c, '_bmad'); + await fs.ensureDir(path.join(bmadDir43c, '_config')); + await fs.writeFile(path.join(bmadDir43c, '_config', 'manifest.yaml'), 'modules: []\n', 'utf8'); + await fs.ensureDir(path.join(bmadDir43c, 'core')); + await fs.ensureDir(path.join(bmadDir43c, 'bmm')); + // Scalar YAML — yaml.parse returns the literal 42 (truthy non-object). + // Pre-fix this crashed _hoistCoreKeysFromLegacyModuleConfigs with + // "Cannot use 'in' operator to search for 'project_name' in 42". 
+ await fs.writeFile(path.join(bmadDir43c, 'core', 'config.yaml'), '42\n', 'utf8'); + await fs.writeFile(path.join(bmadDir43c, 'bmm', 'config.yaml'), 'project_name: rescued\n', 'utf8'); + + const officialModules43c = new OfficialModules(); + let crashErr; + try { + await officialModules43c.loadExistingConfig(fixtureRoot43c); + } catch (error) { + crashErr = error; + } + assert(!crashErr, 'loadExistingConfig does not crash on a scalar core/config.yaml', crashErr?.stack); + + assert( + officialModules43c.existingConfig.core?.project_name === 'rescued', + 'scalar core gets replaced with {} and bmm.project_name still hoists in', + ); + + await fs.remove(fixtureRoot43).catch(() => {}); + await fs.remove(fixtureRoot43b).catch(() => {}); + await fs.remove(fixtureRoot43c).catch(() => {}); + } catch (error) { + console.log(`${colors.red}Test Suite 43 setup failed: ${error.message}${colors.reset}`); + console.log(error.stack); + failed++; + } + + console.log(''); + // ============================================================ // Summary // ============================================================ diff --git a/tools/installer/modules/official-modules.js b/tools/installer/modules/official-modules.js index 4bd1e56b3..615daba86 100644 --- a/tools/installer/modules/official-modules.js +++ b/tools/installer/modules/official-modules.js @@ -903,7 +903,10 @@ class OfficialModules { try { const content = await fs.readFile(moduleConfigPath, 'utf8'); const moduleConfig = yaml.parse(content); - if (moduleConfig) { + // Only keep plain object parses. A corrupt config.yaml that parses + // to a scalar or array would crash later code that does `key in cfg` + // / `Object.keys(cfg)`; treat it the same as a parse error. 
+ if (moduleConfig && typeof moduleConfig === 'object' && !Array.isArray(moduleConfig)) { this._existingConfig[entry.name] = moduleConfig; foundAny = true; } @@ -914,9 +917,58 @@ class OfficialModules { } } + if (foundAny) { + await this._hoistCoreKeysFromLegacyModuleConfigs(); + } + return foundAny; } + /** + * Migrate prior answers when a key has moved from a non-core module to core + * (e.g. project_name moving from bmm to core in #2279). Without this, the + * partition logic in writeCentralConfig drops the value from the bmm bucket + * (because it's now a core key) without re-homing it under [core], so the + * user's prior answer silently disappears on the next install/quick-update. + */ + async _hoistCoreKeysFromLegacyModuleConfigs() { + const coreSchemaPath = path.join(getSourcePath(), 'core-skills', 'module.yaml'); + if (!(await fs.pathExists(coreSchemaPath))) return; + + let coreSchema; + try { + coreSchema = yaml.parse(await fs.readFile(coreSchemaPath, 'utf8')); + } catch { + return; + } + if (!coreSchema || typeof coreSchema !== 'object') return; + + const coreKeys = new Set( + Object.entries(coreSchema) + .filter(([, v]) => v && typeof v === 'object' && 'prompt' in v) + .map(([k]) => k), + ); + if (coreKeys.size === 0) return; + + // Belt-and-suspenders: loadExistingConfig already filters non-object parses, + // but anyone calling _hoistCoreKeysFromLegacyModuleConfigs in isolation (or + // future code paths populating _existingConfig directly) shouldn't be able + // to crash this with a scalar / array. + const existingCore = this._existingConfig.core; + this._existingConfig.core = existingCore && typeof existingCore === 'object' && !Array.isArray(existingCore) ? 
existingCore : {}; + + for (const [moduleName, cfg] of Object.entries(this._existingConfig)) { + if (moduleName === 'core' || !cfg || typeof cfg !== 'object' || Array.isArray(cfg)) continue; + for (const key of Object.keys(cfg)) { + if (!coreKeys.has(key)) continue; + if (!(key in this._existingConfig.core)) { + this._existingConfig.core[key] = cfg[key]; + } + delete cfg[key]; + } + } + } + /** * Pre-scan module schemas to gather metadata for the configuration gateway prompt. * Returns info about which modules have configurable options. diff --git a/tools/installer/ui.js b/tools/installer/ui.js index 1200c37ea..12501b3f2 100644 --- a/tools/installer/ui.js +++ b/tools/installer/ui.js @@ -758,6 +758,9 @@ class UI { const defaultUsername = safeUsername.charAt(0).toUpperCase() + safeUsername.slice(1); configCollector.collectedConfig.core = { user_name: defaultUsername, + // {directory_name} default per src/core-skills/module.yaml — matches what the + // interactive flow resolves via buildQuestion()'s {directory_name} placeholder. + project_name: path.basename(directory), communication_language: 'English', document_output_language: 'English', output_folder: '_bmad-output', From 48a7ec8bffd0c762cf44629a02816956ad0e8cc8 Mon Sep 17 00:00:00 2001 From: Brian Date: Mon, 27 Apr 2026 23:54:21 -0500 Subject: [PATCH 23/23] fix: align bmad-help.csv with documented schema and clean up source rows (#2278) (#2349) * fix(installer): preserve module-help.csv schema in merged bmad-help.csv (#2278) The installer's mergeModuleHelpCatalogs was rewriting the merged catalog under a different schema (module,phase,name,code,sequence,workflow-file,...) than the documented source schema in every module's module-help.csv (module,skill,display-name,menu-code,description,action,args,phase,...). Worse, the parsing assumed the wrong source column order, so column data was scrambled in the merged output. 
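The scrambling is a plain reader/writer schema mismatch, shown here in cut-down form (hypothetical 4-column slice; the real catalogs use the 13-column schema):

```javascript
// Source rows are authored under the documented header order...
const sourceHeader = ['module', 'skill', 'display-name', 'phase'];
const row = ['BMad Method', 'bmad-quick-dev', 'Quick Dev', 'anytime'];

// Buggy read: index the source by the merged catalog's old order
// (module, phase, name, ...), so every field after column 0 lands wrong.
const mergedOrderView = { module: row[0], phase: row[1], name: row[2] };

// Correct read: zip values against the header they were authored under.
const sourceView = Object.fromEntries(sourceHeader.map((h, i) => [h, row[i]]));
```

In the buggy view the skill ID masquerades as the phase, which is exactly the class of corruption the fixed merge avoids by emitting rows verbatim in the source schema.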
SKILL.md docs the source schema, so the bmad-help skill was navigating a catalog whose actual columns no longer matched its mental model. Drop the transformation and the agent enrichment columns (which had no consumers anywhere in the codebase). Emit rows verbatim in the source schema, padding short rows and filling empty module fields. Sort by module then phase, stable within phase to preserve authored order. Closes #2278 * fix(catalog): normalize module-help.csv rows to documented 13-column schema Many rows in core-skills/module-help.csv and bmm-skills/module-help.csv were missing one column between description and phase, leaving them at 12 fields instead of 13. CSV consumers that read by header position were silently mapping data into the wrong columns (description into action, phase into args, required into before, etc). Inserted an empty cell at column index 5 across all 31 affected rows to restore alignment with the documented header (module,skill,display-name,menu-code,description,action,args,phase, after,before,required,output-location,outputs). --- src/bmm-skills/module-help.csv | 42 +++++----- src/core-skills/module-help.csv | 22 +++--- tools/installer/core/installer.js | 127 +++++++----------------------- 3 files changed, 61 insertions(+), 130 deletions(-) diff --git a/src/bmm-skills/module-help.csv b/src/bmm-skills/module-help.csv index 8b824795f..78326a02e 100644 --- a/src/bmm-skills/module-help.csv +++ b/src/bmm-skills/module-help.csv @@ -1,33 +1,33 @@ module,skill,display-name,menu-code,description,action,args,phase,after,before,required,output-location,outputs BMad Method,_meta,,,,,,,,,false,https://docs.bmad-method.org/llms.txt, -BMad Method,bmad-document-project,Document Project,DP,Analyze an existing project to produce useful documentation.,,anytime,,,false,project-knowledge,* -BMad Method,bmad-generate-project-context,Generate Project Context,GPC,Scan existing codebase to generate a lean LLM-optimized project-context.md. 
Essential for brownfield projects.,,anytime,,,false,output_folder,project context -BMad Method,bmad-quick-dev,Quick Dev,QQ,Unified intent-in code-out workflow: clarify plan implement review and present.,,anytime,,,false,implementation_artifacts,spec and project implementation -BMad Method,bmad-correct-course,Correct Course,CC,Navigate significant changes. May recommend start over update PRD redo architecture sprint planning or correct epics and stories.,,anytime,,,false,planning_artifacts,change proposal +BMad Method,bmad-document-project,Document Project,DP,Analyze an existing project to produce useful documentation.,,,anytime,,,false,project-knowledge,* +BMad Method,bmad-generate-project-context,Generate Project Context,GPC,Scan existing codebase to generate a lean LLM-optimized project-context.md. Essential for brownfield projects.,,,anytime,,,false,output_folder,project context +BMad Method,bmad-quick-dev,Quick Dev,QQ,Unified intent-in code-out workflow: clarify plan implement review and present.,,,anytime,,,false,implementation_artifacts,spec and project implementation +BMad Method,bmad-correct-course,Correct Course,CC,Navigate significant changes. May recommend start over update PRD redo architecture sprint planning or correct epics and stories.,,,anytime,,,false,planning_artifacts,change proposal BMad Method,bmad-agent-tech-writer,Write Document,WD,"Describe in detail what you want, and the agent will follow documentation best practices. Multi-turn conversation with subprocess for research/review.",write,,anytime,,,false,project-knowledge,document BMad Method,bmad-agent-tech-writer,Update Standards,US,Update agent memory documentation-standards.md with your specific preferences if you discover missing document conventions.,update-standards,,anytime,,,false,_bmad/_memory/tech-writer-sidecar,standards BMad Method,bmad-agent-tech-writer,Mermaid Generate,MG,Create a Mermaid diagram based on user description. 
Will suggest diagram types if not specified.,mermaid,,anytime,,,false,planning_artifacts,mermaid diagram BMad Method,bmad-agent-tech-writer,Validate Document,VD,Review the specified document against documentation standards and best practices. Returns specific actionable improvement suggestions organized by priority.,validate,[path],anytime,,,false,planning_artifacts,validation report BMad Method,bmad-agent-tech-writer,Explain Concept,EC,Create clear technical explanations with examples and diagrams for complex concepts.,explain,[topic],anytime,,,false,project_knowledge,explanation -BMad Method,bmad-brainstorming,Brainstorm Project,BP,Expert guided facilitation through a single or multiple techniques.,,1-analysis,,,false,planning_artifacts,brainstorming session -BMad Method,bmad-market-research,Market Research,MR,"Market analysis competitive landscape customer needs and trends.",,1-analysis,,,false,"planning_artifacts|project-knowledge",research documents -BMad Method,bmad-domain-research,Domain Research,DR,Industry domain deep dive subject matter expertise and terminology.,,1-analysis,,,false,"planning_artifacts|project_knowledge",research documents -BMad Method,bmad-technical-research,Technical Research,TR,Technical feasibility architecture options and implementation approaches.,,1-analysis,,,false,"planning_artifacts|project_knowledge",research documents +BMad Method,bmad-brainstorming,Brainstorm Project,BP,Expert guided facilitation through a single or multiple techniques.,,,1-analysis,,,false,planning_artifacts,brainstorming session +BMad Method,bmad-market-research,Market Research,MR,Market analysis competitive landscape customer needs and trends.,,,1-analysis,,,false,planning_artifacts|project-knowledge,research documents +BMad Method,bmad-domain-research,Domain Research,DR,Industry domain deep dive subject matter expertise and terminology.,,,1-analysis,,,false,planning_artifacts|project_knowledge,research documents +BMad 
Method,bmad-technical-research,Technical Research,TR,Technical feasibility architecture options and implementation approaches.,,,1-analysis,,,false,planning_artifacts|project_knowledge,research documents BMad Method,bmad-product-brief,Create Brief,CB,An expert guided experience to nail down your product idea in a brief. a gentler approach than PRFAQ when you are already sure of your concept and nothing will sway you.,,-A,1-analysis,,,false,planning_artifacts,product brief BMad Method,bmad-prfaq,PRFAQ Challenge,WB,Working Backwards guided experience to forge and stress-test your product concept to ensure you have a great product that users will love and need through the PRFAQ gauntlet to determine feasibility and alignment with user needs. alternative to product brief.,,-H,1-analysis,,,false,planning_artifacts,prfaq document -BMad Method,bmad-create-prd,Create PRD,CP,Expert led facilitation to produce your Product Requirements Document.,,2-planning,,,true,planning_artifacts,prd +BMad Method,bmad-create-prd,Create PRD,CP,Expert led facilitation to produce your Product Requirements Document.,,,2-planning,,,true,planning_artifacts,prd BMad Method,bmad-validate-prd,Validate PRD,VP,,,[path],2-planning,bmad-create-prd,,false,planning_artifacts,prd validation report BMad Method,bmad-edit-prd,Edit PRD,EP,,,[path],2-planning,bmad-validate-prd,,false,planning_artifacts,updated prd -BMad Method,bmad-create-ux-design,Create UX,CU,"Guidance through realizing the plan for your UX, strongly recommended if a UI is a primary piece of the proposed project.",,2-planning,bmad-create-prd,,false,planning_artifacts,ux design -BMad Method,bmad-create-architecture,Create Architecture,CA,Guided workflow to document technical decisions.,,3-solutioning,,,true,planning_artifacts,architecture -BMad Method,bmad-create-epics-and-stories,Create Epics and Stories,CE,,,3-solutioning,bmad-create-architecture,,true,planning_artifacts,epics and stories -BMad 
Method,bmad-check-implementation-readiness,Check Implementation Readiness,IR,Ensure PRD UX Architecture and Epics Stories are aligned.,,3-solutioning,bmad-create-epics-and-stories,,true,planning_artifacts,readiness report -BMad Method,bmad-sprint-planning,Sprint Planning,SP,Kicks off implementation by producing a plan the implementation agents will follow in sequence for every story.,,4-implementation,,,true,implementation_artifacts,sprint status -BMad Method,bmad-sprint-status,Sprint Status,SS,Anytime: Summarize sprint status and route to next workflow.,,4-implementation,bmad-sprint-planning,,false,, -BMad Method,bmad-create-story,Create Story,CS,"Story cycle start: Prepare first found story in the sprint plan that is next or a specific epic/story designation.",create,,4-implementation,bmad-sprint-planning,bmad-create-story:validate,true,implementation_artifacts,story +BMad Method,bmad-create-ux-design,Create UX,CU,"Guidance through realizing the plan for your UX, strongly recommended if a UI is a primary piece of the proposed project.",,,2-planning,bmad-create-prd,,false,planning_artifacts,ux design +BMad Method,bmad-create-architecture,Create Architecture,CA,Guided workflow to document technical decisions.,,,3-solutioning,,,true,planning_artifacts,architecture +BMad Method,bmad-create-epics-and-stories,Create Epics and Stories,CE,,,,3-solutioning,bmad-create-architecture,,true,planning_artifacts,epics and stories +BMad Method,bmad-check-implementation-readiness,Check Implementation Readiness,IR,Ensure PRD UX Architecture and Epics Stories are aligned.,,,3-solutioning,bmad-create-epics-and-stories,,true,planning_artifacts,readiness report +BMad Method,bmad-sprint-planning,Sprint Planning,SP,Kicks off implementation by producing a plan the implementation agents will follow in sequence for every story.,,,4-implementation,,,true,implementation_artifacts,sprint status +BMad Method,bmad-sprint-status,Sprint Status,SS,Anytime: Summarize sprint status and route to next 
workflow.,,,4-implementation,bmad-sprint-planning,,false,, +BMad Method,bmad-create-story,Create Story,CS,Story cycle start: Prepare first found story in the sprint plan that is next or a specific epic/story designation.,create,,4-implementation,bmad-sprint-planning,bmad-create-story:validate,true,implementation_artifacts,story BMad Method,bmad-create-story,Validate Story,VS,Validates story readiness and completeness before development work begins.,validate,,4-implementation,bmad-create-story:create,bmad-dev-story,false,implementation_artifacts,story validation report -BMad Method,bmad-dev-story,Dev Story,DS,Story cycle: Execute story implementation tasks and tests then CR then back to DS if fixes needed.,,4-implementation,bmad-create-story:validate,,true,, -BMad Method,bmad-code-review,Code Review,CR,Story cycle: If issues back to DS if approved then next CS or ER if epic complete.,,4-implementation,bmad-dev-story,,false,, -BMad Method,bmad-checkpoint-preview,Checkpoint,CK,Guided walkthrough of a change from purpose and context into details. Use for human review of commits branches or PRs.,,4-implementation,,,false,, -BMad Method,bmad-qa-generate-e2e-tests,QA Automation Test,QA,Generate automated API and E2E tests for implemented code. 
NOT for code review or story validation — use CR for that.,,4-implementation,bmad-dev-story,,false,implementation_artifacts,test suite -BMad Method,bmad-retrospective,Retrospective,ER,Optional at epic end: Review completed work lessons learned and next epic or if major issues consider CC.,,4-implementation,bmad-code-review,,false,implementation_artifacts,retrospective +BMad Method,bmad-dev-story,Dev Story,DS,Story cycle: Execute story implementation tasks and tests then CR then back to DS if fixes needed.,,,4-implementation,bmad-create-story:validate,,true,, +BMad Method,bmad-code-review,Code Review,CR,Story cycle: If issues back to DS if approved then next CS or ER if epic complete.,,,4-implementation,bmad-dev-story,,false,, +BMad Method,bmad-checkpoint-preview,Checkpoint,CK,Guided walkthrough of a change from purpose and context into details. Use for human review of commits branches or PRs.,,,4-implementation,,,false,, +BMad Method,bmad-qa-generate-e2e-tests,QA Automation Test,QA,Generate automated API and E2E tests for implemented code. 
NOT for code review or story validation — use CR for that.,,,4-implementation,bmad-dev-story,,false,implementation_artifacts,test suite +BMad Method,bmad-retrospective,Retrospective,ER,Optional at epic end: Review completed work lessons learned and next epic or if major issues consider CC.,,,4-implementation,bmad-code-review,,false,implementation_artifacts,retrospective diff --git a/src/core-skills/module-help.csv b/src/core-skills/module-help.csv index f3521c743..fec435f18 100644 --- a/src/core-skills/module-help.csv +++ b/src/core-skills/module-help.csv @@ -1,13 +1,13 @@ module,skill,display-name,menu-code,description,action,args,phase,after,before,required,output-location,outputs Core,_meta,,,,,,,,,false,https://docs.bmad-method.org/llms.txt, -Core,bmad-brainstorming,Brainstorming,BSP,Use early in ideation or when stuck generating ideas.,,anytime,,,false,{output_folder}/brainstorming,brainstorming session -Core,bmad-party-mode,Party Mode,PM,Orchestrate multi-agent discussions when you need multiple perspectives or want agents to collaborate.,,anytime,,,false,, -Core,bmad-help,BMad Help,BH,,,anytime,,,false,, -Core,bmad-index-docs,Index Docs,ID,Use when LLM needs to understand available docs without loading everything.,,anytime,,,false,, -Core,bmad-shard-doc,Shard Document,SD,Use when doc becomes too large (>500 lines) to manage effectively.,[path],anytime,,,false,, -Core,bmad-editorial-review-prose,Editorial Review - Prose,EP,Use after drafting to polish written content.,[path],anytime,,,false,report located with target document,three-column markdown table with suggested fixes -Core,bmad-editorial-review-structure,Editorial Review - Structure,ES,Use when doc produced from multiple subprocesses or needs structural improvement.,[path],anytime,,,false,report located with target document, -Core,bmad-review-adversarial-general,Adversarial Review,AR,"Use for quality assurance or before finalizing deliverables. 
Code Review in other modules runs this automatically, but also useful for document reviews.",[path],anytime,,,false,, -Core,bmad-review-edge-case-hunter,Edge Case Hunter Review,ECH,Use alongside adversarial review for orthogonal coverage — method-driven not attitude-driven.,[path],anytime,,,false,, -Core,bmad-distillator,Distillator,DG,Use when you need token-efficient distillates that preserve all information for downstream LLM consumption.,[path],anytime,,,false,adjacent to source document or specified output_path,distillate markdown file(s) -Core,bmad-customize,BMad Customize,BC,"Use when you want to change how an agent or workflow behaves — add persistent facts, swap templates, insert activation hooks, or customize menus. Scans what's customizable, picks the right scope (agent vs workflow), writes the override to _bmad/custom/, and verifies the merge. No TOML hand-authoring required.",,anytime,,,false,{project-root}/_bmad/custom,TOML override files +Core,bmad-brainstorming,Brainstorming,BSP,Use early in ideation or when stuck generating ideas.,,,anytime,,,false,{output_folder}/brainstorming,brainstorming session +Core,bmad-party-mode,Party Mode,PM,Orchestrate multi-agent discussions when you need multiple perspectives or want agents to collaborate.,,,anytime,,,false,, +Core,bmad-help,BMad Help,BH,,,,anytime,,,false,, +Core,bmad-index-docs,Index Docs,ID,Use when LLM needs to understand available docs without loading everything.,,,anytime,,,false,, +Core,bmad-shard-doc,Shard Document,SD,Use when doc becomes too large (>500 lines) to manage effectively.,,[path],anytime,,,false,, +Core,bmad-editorial-review-prose,Editorial Review - Prose,EP,Use after drafting to polish written content.,,[path],anytime,,,false,report located with target document,three-column markdown table with suggested fixes +Core,bmad-editorial-review-structure,Editorial Review - Structure,ES,Use when doc produced from multiple subprocesses or needs structural 
improvement.,,[path],anytime,,,false,report located with target document, +Core,bmad-review-adversarial-general,Adversarial Review,AR,"Use for quality assurance or before finalizing deliverables. Code Review in other modules runs this automatically, but also useful for document reviews.",,[path],anytime,,,false,, +Core,bmad-review-edge-case-hunter,Edge Case Hunter Review,ECH,Use alongside adversarial review for orthogonal coverage — method-driven not attitude-driven.,,[path],anytime,,,false,, +Core,bmad-distillator,Distillator,DG,Use when you need token-efficient distillates that preserve all information for downstream LLM consumption.,,[path],anytime,,,false,adjacent to source document or specified output_path,distillate markdown file(s) +Core,bmad-customize,BMad Customize,BC,"Use when you want to change how an agent or workflow behaves — add persistent facts, swap templates, insert activation hooks, or customize menus. Scans what's customizable, picks the right scope (agent vs workflow), writes the override to _bmad/custom/, and verifies the merge. No TOML hand-authoring required.",,,anytime,,,false,{project-root}/_bmad/custom,TOML override files diff --git a/tools/installer/core/installer.js b/tools/installer/core/installer.js index a68193bc6..b91ba6bb7 100644 --- a/tools/installer/core/installer.js +++ b/tools/installer/core/installer.js @@ -923,29 +923,15 @@ class Installer { /** * Merge all module-help.csv files into a single bmad-help.csv. * Scans all installed modules for module-help.csv and merges them. - * Enriches agent info from the in-memory agent list produced by ManifestGenerator. - * Output is written to _bmad/_config/bmad-help.csv. + * Output preserves the source schema verbatim — see schema below. * @param {string} bmadDir - BMAD installation directory - * @param {Array} agentEntries - Agents collected from module.yaml (code, name, title, icon, module, ...) 
+ * @param {Array} _agentEntries - Unused; retained for call-site compatibility */ - async mergeModuleHelpCatalogs(bmadDir, agentEntries = []) { + async mergeModuleHelpCatalogs(bmadDir, _agentEntries = []) { const allRows = []; - const headerRow = - 'module,phase,name,code,sequence,workflow-file,command,required,agent-name,agent-command,agent-display-name,agent-title,options,description,output-location,outputs'; - - // Build agent lookup from the in-memory list (agent code → command + display fields). - const agentInfo = new Map(); - for (const agent of agentEntries) { - if (!agent || !agent.code) continue; - const agentCommand = agent.module ? `bmad:${agent.module}:agent:${agent.code}` : `bmad:agent:${agent.code}`; - const displayName = agent.name || agent.code; - const titleCombined = agent.icon && agent.title ? `${agent.icon} ${agent.title}` : agent.title || agent.code; - agentInfo.set(agent.code, { - command: agentCommand, - displayName, - title: titleCombined, - }); - } + const headerRow = 'module,skill,display-name,menu-code,description,action,args,phase,after,before,required,output-location,outputs'; + const COLUMN_COUNT = 13; + const PHASE_INDEX = 7; // Get all installed module directories const entries = await fs.readdir(bmadDir, { withFileTypes: true }); @@ -984,64 +970,19 @@ class Installer { // Parse the line - handle quoted fields with commas const columns = this.parseCSVLine(line); - if (columns.length >= 12) { - // Map old schema to new schema - // Old: module,phase,name,code,sequence,workflow-file,command,required,agent,options,description,output-location,outputs - // New: module,phase,name,code,sequence,workflow-file,command,required,agent-name,agent-command,agent-display-name,agent-title,options,description,output-location,outputs + if (columns.length < COLUMN_COUNT - 1) continue; - const [ - module, - phase, - name, - code, - sequence, - workflowFile, - command, - required, - agentName, - options, - description, - outputLocation, - outputs, - ] = 
columns; + // Pad short rows; truncate over-long rows + const padded = columns.slice(0, COLUMN_COUNT); + while (padded.length < COLUMN_COUNT) padded.push(''); - // Pass through _meta rows as-is (module metadata, not a skill) - if (phase === '_meta') { - const finalModule = (!module || module.trim() === '') && moduleName !== 'core' ? moduleName : module || ''; - const metaRow = [finalModule, '_meta', '', '', '', '', '', 'false', '', '', '', '', '', '', outputLocation || '', '']; - allRows.push(metaRow.map((c) => this.escapeCSVField(c)).join(',')); - continue; - } - - // If module column is empty, set it to this module's name (except for core which stays empty for universal tools) - const finalModule = (!module || module.trim() === '') && moduleName !== 'core' ? moduleName : module || ''; - - // Lookup agent info - const cleanAgentName = agentName ? agentName.trim() : ''; - const agentData = agentInfo.get(cleanAgentName) || { command: '', displayName: '', title: '' }; - - // Build new row with agent info - const newRow = [ - finalModule, - phase || '', - name || '', - code || '', - sequence || '', - workflowFile || '', - command || '', - required || 'false', - cleanAgentName, - agentData.command, - agentData.displayName, - agentData.title, - options || '', - description || '', - outputLocation || '', - outputs || '', - ]; - - allRows.push(newRow.map((c) => this.escapeCSVField(c)).join(',')); + // If module column is empty, fill with this module's name + // (core stays empty so its rows render as universal tools) + if ((!padded[0] || padded[0].trim() === '') && moduleName !== 'core') { + padded[0] = moduleName; } + + allRows.push(padded.map((c) => this.escapeCSVField(c)).join(',')); } if (process.env.BMAD_VERBOSE_INSTALL === 'true') { @@ -1053,44 +994,34 @@ class Installer { } } - // Sort by module, then phase, then sequence - allRows.sort((a, b) => { - const colsA = this.parseCSVLine(a); - const colsB = this.parseCSVLine(b); + // Sort by module, then phase. 
Stable sort preserves authored order within a phase. + const decorated = allRows.map((row, index) => ({ row, index, cols: this.parseCSVLine(row) })); + decorated.sort((a, b) => { + const moduleA = (a.cols[0] || '').toLowerCase(); + const moduleB = (b.cols[0] || '').toLowerCase(); + if (moduleA !== moduleB) return moduleA.localeCompare(moduleB); - // Module comparison (empty module/universal tools come first) - const moduleA = (colsA[0] || '').toLowerCase(); - const moduleB = (colsB[0] || '').toLowerCase(); - if (moduleA !== moduleB) { - return moduleA.localeCompare(moduleB); - } + const phaseA = a.cols[PHASE_INDEX] || ''; + const phaseB = b.cols[PHASE_INDEX] || ''; + if (phaseA !== phaseB) return phaseA.localeCompare(phaseB); - // Phase comparison - const phaseA = colsA[1] || ''; - const phaseB = colsB[1] || ''; - if (phaseA !== phaseB) { - return phaseA.localeCompare(phaseB); - } - - // Sequence comparison - const seqA = parseInt(colsA[4] || '0', 10); - const seqB = parseInt(colsB[4] || '0', 10); - return seqA - seqB; + return a.index - b.index; }); + const sortedRows = decorated.map((d) => d.row); // Write merged catalog const outputDir = path.join(bmadDir, '_config'); await fs.ensureDir(outputDir); const outputPath = path.join(outputDir, 'bmad-help.csv'); - const mergedContent = [headerRow, ...allRows].join('\n'); + const mergedContent = [headerRow, ...sortedRows].join('\n'); await fs.writeFile(outputPath, mergedContent, 'utf8'); // Track the installed file this.installedFiles.add(outputPath); if (process.env.BMAD_VERBOSE_INSTALL === 'true') { - await prompts.log.message(` Generated bmad-help.csv: ${allRows.length} workflows`); + await prompts.log.message(` Generated bmad-help.csv: ${sortedRows.length} workflows`); } }
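The normalization and ordering rules in the hunk above can be sketched in isolation — a simplified standalone version (helper names are mine, not the installer's) of the pad-to-13, module-fill, and decorate-sort-undecorate steps:

```javascript
const COLUMN_COUNT = 13; // documented module-help.csv schema width
const PHASE_INDEX = 7;   // 'phase' column in that schema

// Pad short rows, truncate over-long rows, and fill an empty module
// cell with the owning module's name (core stays empty so its rows
// render as universal tools).
function normalizeRow(columns, moduleName) {
  const padded = columns.slice(0, COLUMN_COUNT);
  while (padded.length < COLUMN_COUNT) padded.push('');
  if ((!padded[0] || padded[0].trim() === '') && moduleName !== 'core') {
    padded[0] = moduleName;
  }
  return padded;
}

// Sort by module then phase; ties fall back to the original index,
// preserving authored order within a phase (decorate-sort-undecorate).
function stableSortRows(rows) {
  const decorated = rows.map((cols, index) => ({ cols, index }));
  decorated.sort((a, b) => {
    const m = (a.cols[0] || '').toLowerCase()
      .localeCompare((b.cols[0] || '').toLowerCase());
    if (m !== 0) return m;
    const p = (a.cols[PHASE_INDEX] || '')
      .localeCompare(b.cols[PHASE_INDEX] || '');
    if (p !== 0) return p;
    return a.index - b.index;
  });
  return decorated.map((d) => d.cols);
}
```

Decorating with the original index is what makes the sort stable regardless of the engine's `Array.prototype.sort` guarantees, which is why the patch drops the old sequence-column comparison entirely.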