Compare commits

24 Commits

| Author | SHA1 | Date |
|---|---|---|
| | `e36f219c81` | |
| | `9debc165aa` | |
| | `65b810a11f` | |
| | `e6cdc93b79` | |
| | `e174bebc60` | |
| | `fcf20f1c7b` | |
| | `e011192525` | |
| | `91a57499e9` | |
| | `48a7ec8bff` | |
| | `3da984a491` | |
| | `815600e4ca` | |
| | `7ee5fa313b` | |
| | `3e89b30b3c` | |
| | `b4d73b7daf` | |
| | `6ff74ba662` | |
| | `1ad1f91e38` | |
| | `350688df67` | |
| | `be85e5b4a0` | |
| | `04cfde1454` | |
| | `7baa30c567` | |
| | `88b9a1c842` | |
| | `69cbeb4d07` | |
| | `1d35acfd84` | |
| | `01cc32540b` | |
```diff
@@ -13,7 +13,7 @@
     "name": "bmad-pro-skills",
     "source": "./",
     "description": "Next level skills for power users — advanced prompting techniques, agent management, and more.",
-    "version": "6.3.0",
+    "version": "6.6.0",
     "author": {
       "name": "Brian (BMad) Madison"
     },
```
```diff
@@ -35,7 +35,7 @@
     "name": "bmad-method-lifecycle",
     "source": "./",
     "description": "Full-lifecycle AI development framework — agents and workflows for product analysis, planning, architecture, and implementation.",
-    "version": "6.3.0",
+    "version": "6.6.0",
     "author": {
       "name": "Brian (BMad) Madison"
     },
```
```diff
@@ -7,6 +7,7 @@ on:
       - "src/**"
       - "tools/installer/**"
       - "package.json"
+      - "removals.txt"
   workflow_dispatch:
     inputs:
       channel:
```
```diff
@@ -135,6 +136,22 @@ jobs:
         env:
           GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

+      - name: Advance @next dist-tag to stable
+        if: github.event_name == 'workflow_dispatch' && inputs.channel == 'latest'
+        # Failure here leaves @next stale until the next push-driven prerelease
+        # republishes — annoying but not release-breaking. Don't fail the job
+        # after a successful stable publish + tag + GH release.
+        continue-on-error: true
+        run: |
+          # Without this, @latest can leapfrog @next (e.g. latest=6.5.0 while
+          # next=6.4.1-next.0) and `npx bmad-method@next install` silently
+          # downgrades users. Point @next at the just-published stable so
+          # @next >= @latest always holds; the next push-driven prerelease will
+          # bump from this base via the existing derive step above.
+          VERSION=$(node -p 'require("./package.json").version')
+          npm dist-tag add "bmad-method@${VERSION}" next
+          echo "Advanced @next dist-tag to ${VERSION}"
+
       - name: Notify Discord
         if: github.event_name == 'workflow_dispatch' && inputs.channel == 'latest'
         continue-on-error: true
```
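The workflow step above repairs the "leapfrog" state where the `@latest` dist-tag carries a newer base version than `@next`. As an illustrative sketch (not the workflow's actual logic — it simply repoints `@next`), the problematic condition can be detected by comparing the numeric base of each tag's version:

```python
# Illustrative only: why a stable publish must advance @next. Compares the
# two dist-tag versions by their numeric base, ignoring prerelease suffixes
# (real semver prerelease ordering is more involved than this).
def base_tuple(version: str) -> tuple:
    """Numeric base of a version string, e.g. '6.4.1-next.0' -> (6, 4, 1)."""
    base = version.split("-")[0]
    return tuple(int(p) for p in base.split("."))

latest, next_tag = "6.5.0", "6.4.1-next.0"
if base_tuple(next_tag) < base_tuple(latest):
    # This is the state the step repairs: `npx bmad-method@next install`
    # would silently hand users an older base version than @latest.
    print(f"@next ({next_tag}) is behind @latest ({latest}) — repoint @next")
```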
CHANGELOG.md (+33 lines)
```diff
@@ -1,5 +1,38 @@
 # Changelog

+## v6.6.0 - 2026-04-28
+
+### 💥 Breaking Changes
+
+* `--tools none` is no longer accepted; fresh `--yes` installs now require an explicit `--tools <id>`. Existing-install flows are unchanged. Run `npx bmad-method --list-tools` to see supported IDs (#2346)
+* `project_name` has moved from `[modules.bmm]` to `[core]` in `config.toml`. Existing installs are auto-migrated on next install/update — no manual action required (#2348)
+
+### 🎁 Features
+
+* **Non-interactive config for CI/Docker** — new `--set <module>.<key>=<value>` (repeatable) and `--list-options [module]` flags allow installer configuration without prompts. Routes values to the correct config file with prototype-pollution defenses (#2354)
+* **Brownfield epic scoping** — Create Epics and Stories workflow now detects file-overlap between epics and applies an Implementation Efficiency principle plus a design completeness gate, reducing unnecessary file churn (#1826)
+
+### 🐛 Fixes
+
+* **Custom module installer** — Azure DevOps URLs now parse correctly with multi-segment paths and `_git` prefixes (#2269); HTTP (non-HTTPS) Git URLs are preserved for self-hosted servers (#2344); community installs route through `PluginResolver` so marketplace plugins with nested `module.yaml` install all skills (#2331); URL-source modules resolve from disk cache on re-install instead of warning (#2323); local `--custom-content` modules resolve correctly and `[modules.<code>]` TOML keys use the module code rather than display name (#2316); `--yes` with `--custom-source` now runs the full update path so version tags are respected (#2336)
+* **Installer safety** — `--list-tools` flag added; empty/typo'd tool IDs rejected with specific errors (#2346)
+* **Channel and dist-tag handling** — installer launched from a prerelease (e.g. `@next`) now defaults external module channels to `next` instead of silently downgrading to stable (#2321); stable publishes advance the `@next` dist-tag so prerelease users no longer leapfrog or miss update notifications (#2320)
+* **Architecture validation gate** — step-07 validation template no longer ships pre-checked; status field is now templated against actual checklist completion (#2347)
+* **bmad-help data integrity** — `bmad-help.csv` is no longer transformed at merge time and is emitted in its documented schema; 31 misaligned rows in core/bmm `module-help.csv` repaired (#2349)
+* **Config robustness** — malformed `module.yaml` (scalars, arrays) is now rejected before crash (#2348)
+* **Legacy cleanup** — pre-v6.2.0 wrapper skills (`bmad-bmm-*`, `bmad-agent-bmm-*`) are removed automatically on upgrade so they no longer error with missing-file warnings (#2315)
+
+### 📚 Docs
+
+* Complete Chinese (zh-CN) translations for `named-agents.md` and `expand-bmad-for-your-org.md`; localized BMad Ecosystem sidebar (CIS, BMB, TEA, WDS) across zh-cn, vi-vn, fr-fr, cs-cz (#2355)
+
 ## v6.5.0 - 2026-04-26

 ### 🎁 Features

 * Support for 18 new agent platforms: AdaL, Sourcegraph Amp, IBM Bob, Command Code, Snowflake Cortex Code, Factory Droid, Firebender, Block Goose, Kode, Mistral Vibe, Mux, Neovate, OpenClaw, OpenHands, Pochi, Replit Agent, Warp, Zencoder — bringing total supported platforms to 42 (#2313)
 * All platforms that support the cross-tool `.agents/skills/` standard now use it (#2313)

 ## v6.4.0 - 2026-04-24

 ### ✨ Headline
```
```diff
@@ -52,6 +52,15 @@ Follow the installer prompts, then open your AI IDE (Claude Code, Cursor, etc.)
 npx bmad-method install --directory /path/to/project --modules bmm --tools claude-code --yes
 ```

+Override any module config option with `--set <module>.<key>=<value>` (repeatable). Run `--list-options [module]` to see locally-known official keys (built-in modules plus any external officials cached on this machine):
+
+```bash
+npx bmad-method install --yes \
+  --modules bmm --tools claude-code \
+  --set bmm.project_knowledge=research \
+  --set bmm.user_skill_level=expert
+```
+
 [See all installation options](https://docs.bmad-method.org/how-to/non-interactive-installation/)

 > **Not sure what to do?** Ask `bmad-help` — it tells you exactly what's next and what's optional. You can also ask questions like `bmad-help I just finished the architecture, what do I do next?`
```
```diff
@@ -60,7 +60,7 @@ Dostupná ID nástrojů pro příznak `--tools`:

 **Preferované:** `claude-code`, `cursor`

-Spusťte `npx bmad-method install` interaktivně jednou pro zobrazení aktuálního seznamu podporovaných nástrojů, nebo zkontrolujte [konfiguraci kódů platforem](https://github.com/bmad-code-org/BMAD-METHOD/blob/main/tools/cli/installers/lib/ide/platform-codes.yaml).
+Spusťte `npx bmad-method install` interaktivně jednou pro zobrazení aktuálního seznamu podporovaných nástrojů, nebo zkontrolujte [konfiguraci kódů platforem](https://github.com/bmad-code-org/BMAD-METHOD/blob/main/tools/installer/ide/platform-codes.yaml).

 ## Režimy instalace
```
```diff
@@ -18,7 +18,7 @@ Use `npx bmad-method install` to set up BMad in your project. One command handle

 - **Node.js** 20+ (the installer requires it)
 - **Git** (for cloning external modules)
-- **An AI tool** such as Claude Code or Cursor — or install without one using `--tools none`
+- **An AI tool** such as Claude Code or Cursor (run `npx bmad-method install --list-tools` to see all supported tools)

 :::
```
```diff
@@ -117,20 +117,23 @@ Under `--yes`, patch and minor upgrades apply automatically. Majors stay frozen

 ### Flag reference

-| Flag | Purpose |
-| --- | --- |
-| `--yes`, `-y` | Skip all prompts; accept flag values + defaults |
-| `--directory <path>` | Install into this directory (default: current working dir) |
-| `--modules <a,b,c>` | Exact module set. Core is auto-added. Not a delta — list everything you want kept. |
-| `--tools <a,b>` or `--tools none` | IDE/tool selection. `none` skips tool config entirely. |
-| `--action <type>` | `install`, `update`, or `quick-update`. Defaults based on existing install state. |
-| `--custom-source <urls>` | Install custom modules from Git URLs or local paths |
-| `--channel <stable\|next>` | Apply to all externals (aliased as `--all-stable` / `--all-next`) |
-| `--all-stable` | Alias for `--channel=stable` |
-| `--all-next` | Alias for `--channel=next` |
-| `--next=<code>` | Put one module on next. Repeatable. |
-| `--pin <code>=<tag>` | Pin one module to a specific tag. Repeatable. |
-| `--user-name`, `--communication-language`, `--document-output-language`, `--output-folder` | Override per-user config defaults |
+| Flag | Purpose |
+| --- | --- |
+| `--yes`, `-y` | Skip all prompts; accept flag values + defaults |
+| `--directory <path>` | Install into this directory (default: current working dir) |
+| `--modules <a,b,c>` | Exact module set. Core is auto-added. Not a delta — list everything you want kept. |
+| `--tools <a,b>` | IDE/tool selection. Required for fresh `--yes` installs. Run `--list-tools` for valid IDs. |
+| `--list-tools` | Print all supported tool/IDE IDs (with target directories) and exit. |
+| `--action <type>` | `install`, `update`, or `quick-update`. Defaults based on existing install state. |
+| `--custom-source <urls>` | Install custom modules from Git URLs or local paths |
+| `--channel <stable\|next>` | Apply to all externals (aliased as `--all-stable` / `--all-next`) |
+| `--all-stable` | Alias for `--channel=stable` |
+| `--all-next` | Alias for `--channel=next` |
+| `--next=<code>` | Put one module on next. Repeatable. |
+| `--pin <code>=<tag>` | Pin one module to a specific tag. Repeatable. |
+| `--set <module>.<key>=<value>` | Set any module config option non-interactively (preferred — see [Module config overrides](#module-config-overrides)). Repeatable. |
+| `--list-options [module]` | Print every `--set` key for built-in and locally-cached official modules, then exit. Pass a module code to scope to one module. |
+| `--user-name`, `--communication-language`, `--document-output-language`, `--output-folder` | Legacy shortcuts equivalent to `--set core.<key>=<value>` (still supported) |

 Precedence when flags overlap: `--pin` beats `--next=` beats `--channel` / `--all-*` beats the registry default (`stable`).
```
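That precedence rule can be expressed as a short sketch (hypothetical function and data shapes for illustration, not the installer's actual code):

```python
# Hypothetical resolver mirroring the documented precedence:
# --pin  >  --next=<code>  >  --channel / --all-*  >  registry default (stable).
def resolve_channel(code, pins=None, next_modules=None, global_channel=None):
    pins = pins or {}
    next_modules = next_modules or set()
    if code in pins:            # --pin <code>=<tag> wins outright
        return ("pinned", pins[code])
    if code in next_modules:    # --next=<code> beats the global channel
        return ("channel", "next")
    if global_channel:          # --channel / --all-stable / --all-next
        return ("channel", global_channel)
    return ("channel", "stable")  # registry default

# A pinned module ignores the global channel entirely:
print(resolve_channel("bmb", pins={"bmb": "v1.7.0"}, global_channel="next"))
# → ('pinned', 'v1.7.0')
```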
```diff
@@ -165,19 +168,56 @@ npx bmad-method install --yes --modules bmm,bmb --all-next --tools claude-code

 ```bash
 npx bmad-method install --yes --action update \
-  --modules bmm,bmb,gds \
-  --tools none
+  --modules bmm,bmb,gds
 ```

+`--tools` is omitted intentionally — `--action update` reuses the tools configured during the first install.
+
 **Mix channels — bmb on next, gds on stable:**

 ```bash
 npx bmad-method install --yes --action update \
   --modules bmm,bmb,cis,gds \
-  --next=bmb \
-  --tools none
+  --next=bmb
 ```

+### Module config overrides
+
+`--set <module>.<key>=<value>` lets you set any module config option non-interactively. It's repeatable and scales to every module — present and future. The flag is applied as a post-install patch: the installer runs its normal flow first, then `--set` upserts each value into `_bmad/config.toml` (team scope) or `_bmad/config.user.toml` (user scope), and into `_bmad/<module>/config.yaml` so declared values carry forward to the next install.
+
+**Example — install bmm with explicit project knowledge and skill level:**
+
+```bash
+npx bmad-method install --yes \
+  --modules bmm \
+  --tools claude-code \
+  --set bmm.project_knowledge=research \
+  --set bmm.user_skill_level=expert
+```
+
+**Discover available keys for a module:**
+
+```bash
+npx bmad-method install --list-options bmm
+```
+
+`--list-options` (no argument) lists every key the installer can find locally — built-in modules (`core`, `bmm`) plus any currently cached official modules. The cache is per-machine and can be cleared, so previously installed officials won't appear on a fresh checkout or an ephemeral CI worker until they're installed again. Community and custom modules aren't enumerated here; read the module's `module.yaml` directly to see what keys it declares.
+
+**How it works:**
+
+- **Routing.** The patch step looks for `[modules.<module>] <key>` (or `[core] <key>`) in `config.user.toml` first; if found there, it updates that file. Otherwise it writes to the team-scope `config.toml`. So user-scope keys (e.g. `core.user_name`, `bmm.user_skill_level`) end up in `config.user.toml` and team-scope keys end up in `config.toml`, matching the partition the installer uses.
+- **Verbatim values.** The value is written exactly as you provided it — no `result:` template rendering. To get the rendered form (e.g. `{project-root}/research`), pass it explicitly: `--set bmm.project_knowledge='{project-root}/research'`.
+- **Carry-forward, declared keys.** Values for keys declared in `module.yaml` survive subsequent installs because they're also written to `_bmad/<module>/config.yaml`, which the installer reads as the prompt default on the next run.
+- **Carry-forward, undeclared keys.** A value for a key the module's schema doesn't declare lands in `config.toml` for the current install but won't be re-emitted on the next install (the manifest writer's schema-strict partition drops unknown keys). Re-pass `--set` if you need it sticky, or edit `_bmad/config.toml` directly.
+- **No validation.** `single-select` values aren't checked against the allowed choices, and unknown keys aren't rejected — whatever you assert is written.
+- **Modules not in `--modules`.** Setting a value for a module you didn't include prints a warning and the value is dropped (no file gets created for an uninstalled module).
+
+The legacy core shortcuts (`--user-name`, `--output-folder`, etc.) still work and remain documented for backward compatibility, but `--set core.user_name=...` is equivalent.
+
+:::note[Works with quick-update]
+`--set` is a post-install patch, so it applies the same way regardless of action type. Under `bmad install --action quick-update` (or `--yes` against an existing install, where quick-update is the default), `--set` patches the central config files at the end just like a regular install.
+:::
+
 :::caution[Rate limit on shared IPs]
 Anonymous GitHub API calls are capped at 60/hour per IP. A single install hits the API once per external module to resolve the stable tag. Offices behind NAT, CI runner pools, and VPNs can collectively exhaust this.
```
```diff
@@ -204,7 +244,7 @@ For cross-machine reproducibility, don't rely on rerunning the same `--modules`

 ```bash
 npx bmad-method install --yes --modules bmb,cis \
-  --pin bmb=v1.7.0 --pin cis=v0.4.2 --tools none
+  --pin bmb=v1.7.0 --pin cis=v0.4.2 --tools claude-code
 ```

 ## Troubleshooting
```
```diff
@@ -68,6 +68,7 @@ Select **Yes**, then provide a source:

 | Input Type | Example |
 | --------------------- | ------------------------------------------------- |
 | HTTPS URL (any host)  | `https://github.com/org/repo` |
+| HTTP URL (any host)   | `http://host/org/repo` |
 | HTTPS URL with subdir | `https://github.com/org/repo/tree/main/my-module` |
 | SSH URL               | `git@github.com:org/repo.git` |
 | Local path            | `/Users/me/projects/my-module` |
```
```diff
@@ -68,6 +68,7 @@ Chọn **Yes**, rồi nhập nguồn:

 | Loại đầu vào | Ví dụ |
 | --------------------- | ------------------------------------------------- |
 | HTTPS URL trên bất kỳ host nào | `https://github.com/org/repo` |
+| HTTP URL trên bất kỳ host nào | `http://host/org/repo` |
 | HTTPS URL trỏ vào một thư mục con | `https://github.com/org/repo/tree/main/my-module` |
 | SSH URL | `git@github.com:org/repo.git` |
 | Đường dẫn cục bộ | `/Users/me/projects/my-module` |
```
New file — `named-agents.md` (zh-CN), `@@ -0,0 +1,94 @@`:

````markdown
---
title: "命名智能体"
description: 为什么 BMad 的智能体有名字、人设和自定义能力——相比菜单驱动或纯提示驱动的方案,这解锁了哪些可能性
sidebar:
  order: 1
---

你说"嘿 Mary,咱们来头脑风暴",Mary 就激活了。她用你配置的语言、以她独特的人设向你打招呼,并提醒你随时可以用 `bmad-help`。然后她跳过菜单,直接进入头脑风暴——因为你的意图已经足够明确。

这一页解释背后发生了什么,以及 BMad 为什么这样设计。

## 三足鼎立

BMad 的智能体模型建立在三个可组合的基本要素之上:

| 要素 | 提供什么 | 所在位置 |
|---|---|---|
| **技能(Skill)** | 能力——一项智能体能做的具体事(头脑风暴、撰写 PRD、实现 story) | `.claude/skills/{skill-name}/SKILL.md`(或你所用 IDE 的等价位置) |
| **命名智能体(Named Agent)** | 人设连续性——一个可辨识的身份,把一组相关技能包装在统一的语气、原则和视觉标识下 | 目录名以 `bmad-agent-*` 开头的技能 |
| **自定义(Customization)** | 让它成为你的——覆盖选项可以重塑智能体行为、添加 MCP 集成、替换模板、叠加组织规范 | `_bmad/custom/{skill-name}.toml`(团队提交的覆盖)和 `.user.toml`(个人,已 gitignore) |

抽掉任何一条腿,体验就会坍塌:

- 有技能没智能体 → 用户只能靠名称或编号在能力列表里自行查找
- 有智能体没技能 → 空有人设,没有能力
- 没有自定义 → 所有人用一模一样的开箱默认,任何组织特有需求都只能靠 fork

## 命名智能体带来了什么

BMad 内置六个命名智能体,各自对应 BMad Method 的一个阶段:

| 智能体 | 阶段 | 模块 |
|---|---|---|
| 📊 **Mary**,商业分析师 | 分析 | 市场调研、头脑风暴、产品摘要、PRFAQ |
| 📚 **Paige**,技术文档工程师 | 分析 | 项目文档、流程图、文档校验 |
| 📋 **John**,产品经理 | 规划 | PRD 创建、Epic/Story 拆分、实施就绪评审 |
| 🎨 **Sally**,UX 设计师 | 规划 | UX 设计规范 |
| 🏗️ **Winston**,系统架构师 | 方案设计 | 技术架构、一致性检查 |
| 💻 **Amelia**,高级工程师 | 实现 | Story 执行、快速开发、代码评审、Sprint 规划 |

每位智能体都有硬编码的身份(名字、职衔、专业领域)和可自定义的层(角色、原则、沟通风格、图标、菜单)。你可以重写 Mary 的原则或添加菜单项,但无法改她的名字——这是刻意为之的。品牌辨识度经得起自定义,所以"嘿 Mary"永远激活分析师,无论团队怎样塑造她的行为。

## 激活流程

调用命名智能体时,八个步骤依次执行:

1. **解析智能体配置** — 通过 Python 解析器(使用 stdlib `tomllib`)将内置 `customize.toml` 与团队覆盖和个人覆盖合并
2. **执行前置步骤** — 团队配置的任何预处理行为
3. **采用人设** — 硬编码身份加上自定义的角色、沟通风格、原则
4. **加载持久化事实** — 组织规则、合规说明,可通过 `file:` 前缀加载文件(如 `file:{project-root}/docs/project-context.md`)
5. **加载配置** — 用户名、沟通语言、输出语言、产物路径
6. **打招呼** — 个性化问候,使用配置的语言,带上智能体的 emoji 前缀让你一眼认出谁在说话
7. **执行后置步骤** — 团队配置的任何问候后设置
8. **分发或展示菜单** — 如果你的开场消息能匹配某个菜单项,直接执行;否则展示菜单等待输入

第 8 步是意图与能力的交汇点。"嘿 Mary,咱们来头脑风暴"之所以跳过菜单渲染,是因为 `bmad-brainstorming` 显然对应 Mary 菜单上的 `BP`。如果你说的比较模糊,她会简短问一句,而不是走确认仪式。如果完全不匹配,她会正常继续对话。

## 为什么不只用菜单?

菜单迫使用户迁就工具。你得记住头脑风暴在分析师智能体的 `BP` 编码下,而不是 PM 智能体上,还得知道哪个人设负责哪些功能。这些都是工具强加给你的认知负担。

命名智能体把这个关系反转了。你用任何自然的方式,对着某个人说你想做什么。智能体知道自己是谁、能做什么。当你的意图足够清晰,她就直接开始。

菜单仍然作为兜底存在——探索时展示,确定时跳过。

## 为什么不直接用空白提示?

空白提示假设你知道"魔法咒语"。"帮我头脑风暴"也许有用,但"帮我发散下我这个 SaaS 创意"可能就不灵了,而结果取决于你怎么措辞。你变成了提示工程师。

命名智能体在不牺牲自由度的前提下增加了结构。人设保持一致,能力随时可发现,`bmad-help` 永远只差一个命令。你不用猜智能体能做什么,也不需要翻手册才能用它。

## 自定义是一等公民

自定义模型让这套方案能从单个开发者扩展到整个组织。

每个智能体自带 `customize.toml` 及合理默认值。团队在 `_bmad/custom/bmad-agent-{role}.toml` 中提交覆盖。个人可以在 `.user.toml`(已 gitignore)中叠加偏好。解析器在激活时按可预测的结构化规则合并三层配置。

大多数用户从不需要手写这些文件。`bmad-customize` 技能会引导你选择目标、区分智能体/工作流作用域、撰写覆盖、验证合并结果——让自定义能力对任何理解自己意图的人开放,不限于精通 TOML 的人。

举个例子:团队提交一个文件,告诉 Amelia 查库文档时一律用 Context7 MCP 工具,本地 epics 列表找不到 story 时回退到 Linear。Amelia 分发的每个开发工作流(dev-story、quick-dev、create-story、code-review)都继承这些行为,无需改源码、无需逐工作流重复配置。

此外还有第二个自定义面,用于**跨领域关注点**:中央配置 `_bmad/config.toml` 和 `_bmad/config.user.toml`(由安装器维护,从每个模块的 `module.yaml` 重建)加上 `_bmad/custom/config.toml`(团队提交)和 `_bmad/custom/config.user.toml`(个人,已 gitignore)作为覆盖。这里存放着 **智能体花名册**——轻量级描述符,`bmad-party-mode`、`bmad-retrospective` 和 `bmad-advanced-elicitation` 等花名册消费者读取它来了解有哪些智能体可用、如何扮演它们。用团队覆盖在全组织范围重新定义某个智能体;用 `.user.toml` 覆盖添加虚构角色(Kirk、Spock、领域专家)作为个人实验——无需碰任何技能目录。每个技能的配置文件塑造 Mary **激活时的行为**;中央配置塑造其他技能**查看花名册时看到的 Mary**。

完整自定义文档和实操示例请参见:

- [如何自定义 BMad](../how-to/customize-bmad.md) — 可自定义项和合并规则的参考
- [如何为组织扩展 BMad](../how-to/expand-bmad-for-your-org.md) — 五个实操方案,覆盖智能体全局规则、工作流约定、外部发布、模板替换和花名册管理
- `bmad-customize` 技能 — 引导式编写助手,将你的意图转换为正确放置并经过验证的覆盖文件

## 更大的理念

当今大多数 AI 助手要么是菜单,要么是提示框,两者都把认知负担推给了用户。命名智能体加上可自定义技能,让你可以和一个了解项目的队友对话,并且让你的组织能塑造这个队友而不必 fork。

下次你输入"嘿 Mary,咱们来头脑风暴",她直接上手干活时,留意一下哪些事情**没有**发生。没有斜杠命令,没有菜单要翻,没有尴尬的功能介绍。这种"无感",正是设计本身。
````
@ -0,0 +1,258 @@
|
|||
---
|
||||
title: "如何为组织扩展 BMad"
|
||||
description: 五个自定义方案,无需 fork 即可重塑 BMad——涵盖智能体全局规则、工作流约定、外部发布、模板替换和花名册变更
|
||||
sidebar:
|
||||
order: 9
|
||||
---
|
||||
|
||||
BMad 的自定义机制让组织无需编辑已安装文件或 fork 技能就能重塑行为。本指南介绍五个方案,覆盖大部分企业级需求。
|
||||
|
||||
:::note[前置条件]
|
||||
|
||||
- 已在项目中安装 BMad(参见[如何安装 BMad](./install-bmad.md))
|
||||
- 熟悉自定义模型(参见[如何自定义 BMad](./customize-bmad.md))
|
||||
- PATH 中有 Python 3.11+(解析器只用标准库,不需要 `pip install`)
|
||||
:::
|
||||
|
||||
:::tip[如何应用这些方案]
|
||||
下面的**逐技能方案**(方案 1–4)可以通过运行 `bmad-customize` 技能并描述意图来应用——它会选择正确的配置面、生成覆盖文件并验证合并结果。方案 5(中央配置的花名册覆盖)超出 v1 技能范围,仍需手动编写。本文档中的方案是覆盖**什么**的权威参考;`bmad-customize` 负责处理**怎么做**的部分(针对智能体/工作流层面)。
|
||||
:::
|
||||
|
||||
## 三层心智模型
|
||||
|
||||
在选择方案之前,先理解你的覆盖落在哪一层:
|
||||
|
||||
| 层 | 覆盖文件位置 | 作用范围 |
|
||||
|---|---|---|
|
||||
| **智能体**(如 Amelia、Mary、John) | `_bmad/custom/bmad-agent-{role}.toml` 中的 `[agent]` 段 | 跟随人设进入**该智能体分发的每个工作流** |
|
||||
| **工作流**(如 product-brief、create-prd) | `_bmad/custom/{workflow-name}.toml` 中的 `[workflow]` 段 | 仅作用于该工作流的单次运行 |
|
||||
| **中央配置** | `_bmad/custom/config.toml` 中的 `[agents.*]`、`[core]`、`[modules.*]` | 花名册(party-mode、retrospective、elicitation 可用的角色)、全组织统一的安装设置 |
|
||||
|
||||
经验法则:如果规则应当在工程师做任何开发工作时生效,就自定义**开发智能体**。如果只在撰写产品摘要时生效,就自定义 **product-brief 工作流**。如果要改变"谁在场"(重命名智能体、添加自定义角色、统一产物路径),就编辑**中央配置**。
|
||||
|
||||
## 方案 1:让智能体的规则贯穿其分发的所有工作流
|
||||
|
||||
**场景:** 统一工具使用和外部系统集成,让智能体分发的每个工作流都继承这些行为。这是影响面最大的模式。
|
||||
|
||||
**示例:Amelia(开发智能体)查库文档一律用 Context7,本地 epics 列表找不到 story 时回退到 Linear。**
|
||||
|
||||
```toml
|
||||
# _bmad/custom/bmad-agent-dev.toml
|
||||
|
||||
[agent]
|
||||
|
||||
# 每次激活时加载。传递到 dev-story、quick-dev、
|
||||
# create-story、code-review、qa-generate——Amelia 分发的每个技能。
|
||||
persistent_facts = [
|
||||
"For any library documentation lookup (React, TypeScript, Zod, Prisma, etc.), call the context7 MCP tool (`mcp__context7__resolve_library_id` then `mcp__context7__get_library_docs`) before relying on training-data knowledge. Up-to-date docs trump memorized APIs.",
|
||||
"When a story reference isn't found in {planning_artifacts}/epics-and-stories.md, search Linear via `mcp__linear__search_issues` using the story ID or title before asking the user to clarify. If Linear returns a match, treat it as the authoritative story source.",
|
||||
]
|
||||
```
|
||||
|
||||
**为什么有效:** 两句话就能重塑组织内所有开发工作流,无需逐工作流重复配置、无需改源码。每个新工程师拉下仓库就自动继承这些约定。
|
||||
|
||||
**团队文件 vs 个人文件:**
|
||||
- `bmad-agent-dev.toml`:提交到 git,对整个团队生效
|
||||
- `bmad-agent-dev.user.toml`:已 gitignore,个人偏好叠加在上面
|
||||
|
||||
## 方案 2:在特定工作流中强制执行组织规范
|
||||
|
||||
**场景:** 塑造工作流输出的*内容*,使其满足合规、审计或下游消费者的要求。
|
||||
|
||||
**示例:每份产品摘要都必须包含合规字段,智能体知晓组织的发布规范。**
|
||||
|
||||
```toml
|
||||
# _bmad/custom/bmad-product-brief.toml
|
||||
|
||||
[workflow]
|
||||
|
||||
persistent_facts = [
|
||||
"Every brief must include an 'Owner' field, a 'Target Release' field, and a 'Security Review Status' field.",
|
||||
"Non-commercial briefs (internal tools, research projects) must still include a user-value section, but can omit market differentiation.",
|
||||
"file:{project-root}/docs/enterprise/brief-publishing-conventions.md",
|
||||
]
|
||||
```
|
||||
|
||||
**效果:** 这些事实在工作流激活的第 3 步加载。当智能体起草摘要时,它已了解必填字段和企业规范文档。内置默认值(`file:{project-root}/**/project-context.md`)仍会加载,因为这是追加操作。
|
||||
|
||||
## 方案 3:将完成的产出发布到外部系统
|
||||
|
||||
**场景:** 工作流生成输出后,自动发布到企业级记录系统(Confluence、Notion、SharePoint)并创建后续工作项(Jira、Linear、Asana)。
|
||||
|
||||
**示例:摘要自动发布到 Confluence,并提供可选的 Jira Epic 创建。**
|
||||
|
||||
```toml
|
||||
# _bmad/custom/bmad-product-brief.toml
|
||||
|
||||
[workflow]
|
||||
|
||||
# 终端钩子。标量覆盖会整体替换空默认值。
|
||||
on_complete = """
|
||||
Publish and offer follow-up:
|
||||
|
||||
1. Read the finalized brief file path from the prior step.
|
||||
2. Call `mcp__atlassian__confluence_create_page` with:
|
||||
- space: "PRODUCT"
|
||||
- parent: "Product Briefs"
|
||||
- title: the brief's title
|
||||
- body: the brief's markdown contents
|
||||
Capture the returned page URL.
|
||||
3. Tell the user: "Brief published to Confluence: <url>".
|
||||
4. Ask: "Want me to open a Jira epic for this brief now?"
|
||||
5. If yes, call `mcp__atlassian__jira_create_issue` with:
|
||||
- type: "Epic"
|
||||
- project: "PROD"
|
||||
- summary: the brief's title
|
||||
- description: a short summary plus a link back to the Confluence page.
|
||||
Report the epic key and URL.
|
||||
6. If no, exit cleanly.
|
||||
|
||||
If either MCP tool fails, report the failure, print the brief path,
|
||||
and ask the user to publish manually.
|
||||
"""
|
||||
```
|
||||
|
||||
**为什么用 `on_complete` 而不是 `activation_steps_append`:** `on_complete` 只在终端阶段运行一次,在工作流主输出写入之后。这是发布产物的正确时机。`activation_steps_append` 在每次激活时运行,在工作流开始之前。
|
||||
|
||||
**权衡:**
|
||||
- **Confluence 发布是非破坏性的**,完成时始终运行
|
||||
- **Jira Epic 创建对全团队可见**,会触发 Sprint 规划信号,因此需用户确认
|
||||
- **优雅降级:** 如果 MCP 工具失败,交给用户手动处理,而不是静默丢弃输出
|
||||
|
||||
## 方案 4:替换为你自己的输出模板
|
||||
|
||||
**场景:** 默认输出结构不符合组织期望的格式,或同一仓库中不同团队需要不同模板。
|
||||
|
||||
**示例:将 product-brief 工作流指向企业自有模板。**
|
||||
|
||||
```toml
|
||||
# _bmad/custom/bmad-product-brief.toml
|
||||
|
||||
[workflow]
|
||||
brief_template = "{project-root}/docs/enterprise/brief-template.md"
|
||||
```
|
||||
|
||||
**原理:** 工作流自带的 `customize.toml` 中 `brief_template = "resources/brief-template.md"`(裸路径,从技能根目录解析)。你的覆盖指向 `{project-root}` 下的文件,智能体在第 4 步读取你的模板而非内置模板。
|
||||
|
||||
**模板编写建议:**
|
||||
- 将模板放在 `{project-root}/docs/` 或 `{project-root}/_bmad/custom/templates/` 下,使它们与覆盖文件一起版本管理
|
||||
- 沿用内置模板的结构约定(章节标题、frontmatter),智能体会适配实际内容
|
||||
- 对于多团队仓库,使用 `.user.toml` 让各团队指向自己的模板,无需改动已提交的团队文件
|
||||
|
||||
## 方案 5:自定义花名册
|
||||
|
||||
**场景:** 改变 `bmad-party-mode`、`bmad-retrospective` 和 `bmad-advanced-elicitation` 等花名册驱动技能中*谁在场*,无需编辑源码或 fork。以下是三种常见变体。
|
||||
|
||||
### 5a. 在全组织范围内重塑 BMad 智能体
|
||||
|
||||
每个真实智能体都有一段安装器从 `module.yaml` 合成的描述符。覆盖它可以在所有花名册消费者中改变语气和定位:
|
||||
|
||||
```toml
|
||||
# _bmad/custom/config.toml(提交到 git——对每个开发者生效)
|
||||
|
||||
[agents.bmad-agent-analyst]
|
||||
description = "Mary the Regulatory-Aware Business Analyst — channels Porter and Minto, but lives and breathes FDA audit trails. Speaks like a forensic investigator presenting a case file."
|
||||
```
|
||||
|
||||
Party-mode 会用新描述来生成 Mary。分析师激活流程本身不受影响,因为 Mary 的行为由她的每技能 `customize.toml` 控制。这个覆盖改变的是**外部技能如何感知和介绍她**,而不是她的内部工作方式。
|
||||
|
||||
### 5b. 添加虚构或自定义智能体
|
||||
|
||||
一段完整的描述符就足以让花名册功能识别,不需要技能目录。适合在 party mode 或头脑风暴中增加性格多样性:
|
||||
|
||||
```toml
|
||||
# _bmad/custom/config.user.toml(个人——已 gitignore)
|
||||
|
||||
[agents.spock]
|
||||
team = "startrek"
|
||||
name = "Commander Spock"
|
||||
title = "Science Officer"
|
||||
icon = "🖖"
|
||||
description = "Logic first, emotion suppressed. Begins observations with 'Fascinating.' Never rounds up. Counterpoint to any argument that relies on gut instinct."
|
||||
|
||||
[agents.mccoy]
|
||||
team = "startrek"
|
||||
name = "Dr. Leonard McCoy"
|
||||
title = "Chief Medical Officer"
|
||||
icon = "⚕️"
|
||||
description = "Country doctor's warmth, short fuse. 'Dammit Jim, I'm a doctor not a ___.' Ethics-driven counterweight to Spock."
|
||||
```
|
||||
|
||||
让 party-mode "邀请企业号船员",它会按 `team = "startrek"` 过滤并生成 Spock 和 McCoy。真实的 BMad 智能体(Mary、Amelia)也可以同桌。
|
||||
|
||||
### 5c. 锁定团队安装设置
|
||||
|
||||
安装器会向每个开发者提示 `planning_artifacts` 路径等值。当组织需要一个统一答案时,在中央配置中锁定——任何开发者本地的提示回答都会在解析时被覆盖:
|
||||
|
||||
```toml
|
||||
# _bmad/custom/config.toml
|
||||
|
||||
[modules.bmm]
|
||||
planning_artifacts = "{project-root}/shared/planning"
|
||||
implementation_artifacts = "{project-root}/shared/implementation"
|
||||
|
||||
[core]
|
||||
document_output_language = "English"
|
||||
```
|
||||
|
||||
个人设置如 `user_name`、`communication_language` 或 `user_skill_level` 留在各开发者自己的 `_bmad/config.user.toml` 中。团队文件不应触碰这些。
|
||||
|
||||
**为什么用中央配置而不是逐智能体的 customize.toml:** 逐智能体文件塑造*一个*智能体激活时的行为。中央配置塑造花名册消费者*查看全局时看到的内容:*有哪些智能体、叫什么、属于哪个团队,以及整个仓库共识的安装设置。两个层面,各司其职。
|
||||
|
||||
## 在 IDE 会话文件中强化全局规则
|
||||
|
||||
BMad 的自定义在技能激活时加载。许多 IDE 工具还会在**每次会话开始时**加载一个全局指令文件,在任何技能运行之前(`CLAUDE.md`、`AGENTS.md`、`.cursor/rules/`、`.github/copilot-instructions.md` 等)。对于即使在 BMad 技能之外也应生效的规则,请在全局指令中也声明一份。
|
||||
|
||||
**何时需要"双重声明":**
|
||||
- 规则足够重要,即使在普通对话(没有激活技能)中也应遵守
|
||||
- 你需要"双保险",因为模型的训练数据默认值可能会拉偏方向
|
||||
- 规则足够精简,重复一次不会让会话文件臃肿
|
||||
|
||||
**示例:在仓库的 `CLAUDE.md` 中强化方案 1 的开发智能体规则。**
|
||||
|
||||
```markdown
|
||||
<!-- Any file-read of library docs goes through the context7 MCP tool
|
||||
(`mcp__context7__resolve_library_id` then `mcp__context7__get_library_docs`)
|
||||
before relying on training-data knowledge. -->
|
||||
```
|
||||
|
||||
一句话,每次会话加载。它与 `bmad-agent-dev.toml` 自定义配合,使规则在 Amelia 的工作流内和与助手的临时对话中都生效。各层各管各的范围:
|
||||
|
||||
| 层 | 作用范围 | 用途 |
|
||||
|---|---|---|
|
||||
| IDE 会话文件(`CLAUDE.md` / `AGENTS.md`) | 每次会话,在任何技能激活之前 | 简短的、应在 BMad 之外也生效的通用规则 |
|
||||
| BMad 智能体自定义 | 该智能体分发的每个工作流 | 智能体人设相关的行为 |
|
||||
| BMad 工作流自定义 | 单次工作流运行 | 工作流特定的输出格式、发布钩子、模板 |
|
||||
| BMad 中央配置 | 花名册 + 共享安装设置 | 谁在场、团队使用的共享路径 |
|
||||
|
||||
IDE 会话文件要**精简**。十几行精挑细选的规则比长篇大论有效得多。模型每轮都会读取它,噪声会淹没信号。

## Combining the Recipes

The five recipes compose freely. A typical enterprise `bmad-product-brief` override might set `persistent_facts` (Recipe 2), `on_complete` (Recipe 3), and `brief_template` (Recipe 4) at the same time. Agent-level rules (Recipe 1) live in a separate file named after the agent, and the central config (Recipe 5) pins the shared roster and team settings; all four take effect in parallel.

```toml
# _bmad/custom/bmad-product-brief.toml (workflow level)

[workflow]
persistent_facts = ["..."]
brief_template = "{project-root}/docs/enterprise/brief-template.md"
on_complete = """ ... """
```

```toml
# _bmad/custom/bmad-agent-analyst.toml (agent level: Mary dispatches product-brief)

[agent]
persistent_facts = ["Always include a 'Regulatory Review' section when the domain involves healthcare, finance, or children's data."]
```

The effect: Mary loads the regulatory-review rule when her persona activates. When the user picks the product-brief menu item, the workflow loads its own spec, writes to the enterprise template, and publishes to Confluence on completion. Each layer contributes, and none of it requires editing BMad source.

## Troubleshooting

**Override not taking effect?** Check that the file lives under `_bmad/custom/` and uses the exact skill directory name (e.g. `bmad-agent-dev.toml`, not `bmad-dev.toml`). See [How to customize BMad](./customize-bmad.md).

**Unsure of an MCP tool name?** Use the exact name the MCP server exposes in the current session. If in doubt, ask Claude Code to list the available MCP tools. Names hard-coded into `persistent_facts` or `on_complete` have no effect when the MCP server is not connected.
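
One way to keep such a rule from failing silently is to state the fallback inside the fact itself. A sketch, where the file name and wording are assumptions, and the tool name is carried over from the earlier example rather than guaranteed to match your session:

```toml
# _bmad/custom/bmad-agent-dev.toml (hypothetical override)
[agent]
persistent_facts = [
  "For library questions, prefer mcp__context7__get_library_docs; if the context7 MCP server is not connected, say so explicitly and fall back to training-data knowledge with a caveat.",
]
```

This way the rule still produces useful behavior when the server is down, instead of referencing a tool that does not exist in the session.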

**None of the recipes fit your scenario?** The recipes above are illustrative. The underlying mechanisms (three-layer merge, structured rules, agents carried through workflows) support many more patterns; combine them as needed.
@@ -68,6 +68,7 @@ Would you like to install from a custom source (Git URL or local path)?

| Input type | Example |
| -------- | ---- |
| HTTPS URL (any host) | `https://github.com/org/repo` |
| HTTP URL (any host) | `http://host/org/repo` |
| HTTPS URL with subdirectory | `https://github.com/org/repo/tree/main/my-module` |
| SSH URL | `git@github.com:org/repo.git` |
| Local path | `/Users/me/projects/my-module` |
@@ -1,12 +1,12 @@
{
  "name": "bmad-method",
  "version": "6.4.0",
  "version": "6.6.0",
  "lockfileVersion": 3,
  "requires": true,
  "packages": {
    "": {
      "name": "bmad-method",
      "version": "6.4.0",
      "version": "6.6.0",
      "license": "MIT",
      "dependencies": {
        "@clack/core": "^1.0.0",
@@ -1,7 +1,7 @@
{
  "$schema": "https://json.schemastore.org/package.json",
  "name": "bmad-method",
  "version": "6.4.0",
  "version": "6.6.0",
  "description": "Breakthrough Method of Agile AI-driven Development",
  "keywords": [
    "agile",

@@ -39,12 +39,13 @@
    "lint:fix": "eslint . --ext .js,.cjs,.mjs,.yaml --fix",
    "lint:md": "markdownlint-cli2 \"**/*.md\"",
    "prepare": "command -v husky >/dev/null 2>&1 && husky || exit 0",
    "quality": "npm run format:check && npm run lint && npm run lint:md && npm run docs:build && npm run test:install && npm run validate:refs && npm run validate:skills",
    "quality": "npm run format:check && npm run lint && npm run lint:md && npm run docs:build && npm run test:install && npm run test:urls && npm run validate:refs && npm run validate:skills",
    "rebundle": "node tools/installer/bundlers/bundle-web.js rebundle",
    "test": "npm run test:refs && npm run test:install && npm run test:channels && npm run lint && npm run lint:md && npm run format:check",
    "test": "npm run test:refs && npm run test:install && npm run test:urls && npm run test:channels && npm run lint && npm run lint:md && npm run format:check",
    "test:channels": "node test/test-installer-channels.js",
    "test:install": "node test/test-installation-components.js",
    "test:refs": "node test/test-file-refs-csv.js",
    "test:urls": "node test/test-parse-source-urls.js",
    "validate:refs": "node tools/validate-file-refs.js --strict",
    "validate:skills": "node tools/validate-skills.js --strict"
  },
removals.txt

@@ -15,3 +15,40 @@ bmad-quick-spec
bmad-quick-flow
bmad-quick-dev-new-preview
bmad-init

# Pre-v6.2.0 wrapper skills (module-prefixed naming, dropped in v6.2.0).
# Users upgrading from v6.0.x / v6.1.x had these installed and the cleanup
# never knew to remove them; they remained alongside the new self-contained
# skills causing duplicates and broken-file errors. See issue #2309.
bmad-agent-bmm-analyst
bmad-agent-bmm-architect
bmad-agent-bmm-dev
bmad-agent-bmm-pm
bmad-agent-bmm-qa
bmad-agent-bmm-quick-flow-solo-dev
bmad-agent-bmm-sm
bmad-agent-bmm-tech-writer
bmad-agent-bmm-ux-designer
bmad-bmm-check-implementation-readiness
bmad-bmm-code-review
bmad-bmm-correct-course
bmad-bmm-create-architecture
bmad-bmm-create-epics-and-stories
bmad-bmm-create-prd
bmad-bmm-create-product-brief
bmad-bmm-create-story
bmad-bmm-create-ux-design
bmad-bmm-dev-story
bmad-bmm-document-project
bmad-bmm-domain-research
bmad-bmm-edit-prd
bmad-bmm-generate-project-context
bmad-bmm-market-research
bmad-bmm-qa-generate-e2e-tests
bmad-bmm-quick-dev
bmad-bmm-quick-spec
bmad-bmm-retrospective
bmad-bmm-sprint-planning
bmad-bmm-sprint-status
bmad-bmm-technical-research
bmad-bmm-validate-prd
@@ -7,8 +7,8 @@
  "description": "Produces battle-tested PRFAQ document and optional LLM distillate for PRD input.",
  "supports-headless": true,
  "phase-name": "1-analysis",
  "after": ["brainstorming", "perform-research"],
  "before": ["create-prd"],
  "preceded-by": ["brainstorming", "perform-research"],
  "followed-by": ["create-prd"],
  "is-required": false,
  "output-location": "{planning_artifacts}"
}

@@ -8,8 +8,8 @@
  "description": "Produces executive product brief and optional LLM distillate for PRD input.",
  "supports-headless": true,
  "phase-name": "1-analysis",
  "after": ["brainstorming", "perform-research"],
  "before": ["create-prd"],
  "preceded-by": ["brainstorming", "perform-research"],
  "followed-by": ["create-prd"],
  "is-required": true,
  "output-location": "{planning_artifacts}"
}
@@ -227,37 +227,39 @@ Prepare the content to append to the document:

### Architecture Completeness Checklist

**✅ Requirements Analysis**
Mark each item `[x]` only if validation confirms it; leave `[ ]` if it is missing, partial, or unverified. Any unchecked item must be reflected in the Gap Analysis above and in the Overall Status below.

- [x] Project context thoroughly analyzed
- [x] Scale and complexity assessed
- [x] Technical constraints identified
- [x] Cross-cutting concerns mapped
**Requirements Analysis**

**✅ Architectural Decisions**
- [ ] Project context thoroughly analyzed
- [ ] Scale and complexity assessed
- [ ] Technical constraints identified
- [ ] Cross-cutting concerns mapped

- [x] Critical decisions documented with versions
- [x] Technology stack fully specified
- [x] Integration patterns defined
- [x] Performance considerations addressed
**Architectural Decisions**

**✅ Implementation Patterns**
- [ ] Critical decisions documented with versions
- [ ] Technology stack fully specified
- [ ] Integration patterns defined
- [ ] Performance considerations addressed

- [x] Naming conventions established
- [x] Structure patterns defined
- [x] Communication patterns specified
- [x] Process patterns documented
**Implementation Patterns**

**✅ Project Structure**
- [ ] Naming conventions established
- [ ] Structure patterns defined
- [ ] Communication patterns specified
- [ ] Process patterns documented

- [x] Complete directory structure defined
- [x] Component boundaries established
- [x] Integration points mapped
- [x] Requirements to structure mapping complete
**Project Structure**

- [ ] Complete directory structure defined
- [ ] Component boundaries established
- [ ] Integration points mapped
- [ ] Requirements to structure mapping complete

### Architecture Readiness Assessment

**Overall Status:** READY FOR IMPLEMENTATION
**Overall Status:** {{READY FOR IMPLEMENTATION | READY WITH MINOR GAPS | NOT READY}} (choose READY FOR IMPLEMENTATION only when all 16 checklist items are `[x]` and no Critical Gaps remain; choose NOT READY when any Critical Gap is open or any Requirements Analysis or Architectural Decisions item is unchecked; otherwise READY WITH MINOR GAPS)

**Confidence Level:** {{high/medium/low}} based on validation results
@@ -55,7 +55,8 @@ Load {planning_artifacts}/epics.md and review:
2. **Requirements Grouping**: Group related FRs that deliver cohesive user outcomes
3. **Incremental Delivery**: Each epic should deliver value independently
4. **Logical Flow**: Natural progression from user's perspective
5. **🔗 Dependency-Free Within Epic**: Stories within an epic must NOT depend on future stories
5. **Dependency-Free Within Epic**: Stories within an epic must NOT depend on future stories
6. **Implementation Efficiency**: Consider consolidating epics that all modify the same core files into fewer epics

**⚠️ CRITICAL PRINCIPLE:**
Organize by USER VALUE, not technical layers:

@@ -74,6 +75,18 @@ Organize by USER VALUE, not technical layers:
- Epic 3: Frontend Components (creates reusable components) - **No user value**
- Epic 4: Deployment Pipeline (CI/CD setup) - **No user value**

**❌ WRONG Epic Examples (File Churn on Same Component):**

- Epic 1: File Upload (modifies model, controller, web form, web API)
- Epic 2: File Status (modifies model, controller, web form, web API)
- Epic 3: File Access permissions (modifies model, controller, web form, web API)
- All three epics touch the same files — consolidate into one epic with ordered stories

**✅ CORRECT Alternative:**

- Epic 1: File Management Enhancement (upload, status, permissions as stories within one epic)
- Rationale: Single component, fully pre-designed, no feedback loop between epics

**🔗 DEPENDENCY RULES:**

- Each epic must deliver COMPLETE functionality for its domain

@@ -82,21 +95,38 @@ Organize by USER VALUE, not technical layers:

### 3. Design Epic Structure Collaboratively

**Step A: Identify User Value Themes**
**Step A: Assess Context and Identify Themes**

First, assess how much of the solution design is already validated (Architecture, UX, Test Design).
When the outcome is certain and direction changes between epics are unlikely, prefer fewer but larger epics.
Split into multiple epics when there is a genuine risk boundary or when early feedback could change direction
of following epics.

Then, identify user value themes:

- Look for natural groupings in the FRs
- Identify user journeys or workflows
- Consider user types and their goals

**Step B: Propose Epic Structure**
For each proposed epic:

For each proposed epic (considering whether epics share the same core files):

1. **Epic Title**: User-centric, value-focused
2. **User Outcome**: What users can accomplish after this epic
3. **FR Coverage**: Which FR numbers this epic addresses
4. **Implementation Notes**: Any technical or UX considerations

**Step C: Create the epics_list**
**Step C: Review for File Overlap**

Assess whether multiple proposed epics repeatedly target the same core files. If overlap is significant:

- Distinguish meaningful overlap (same component end-to-end) from incidental sharing
- Ask whether to consolidate into one epic with ordered stories
- If confirmed, merge the epic FRs into a single epic, preserving dependency flow: each story must still fit within
  a single dev agent's context

**Step D: Create the epics_list**

Format the epics_list as:
@@ -90,6 +90,12 @@ Review the complete epic and story breakdown to ensure EVERY FR is covered:
- Dependencies flow naturally
- Foundation stories only setup what's needed
- No big upfront technical work
- **File Churn Check:** Do multiple epics repeatedly modify the same core files?
  - Assess whether the overlap pattern suggests unnecessary churn or is incidental
  - If overlap is significant: Validate that splitting provides genuine value (risk mitigation, feedback loops, context size limits)
  - If no justification for the split: Recommend consolidation into fewer epics
  - ❌ WRONG: Multiple epics each modify the same core files with no feedback loop between them
  - ✅ RIGHT: Epics target distinct files/components, OR consolidation was explicitly considered and rejected with rationale

### 5. Dependency Validation (CRITICAL)
@@ -1,33 +1,33 @@
module,skill,display-name,menu-code,description,action,args,phase,after,before,required,output-location,outputs
module,skill,display-name,menu-code,description,action,args,phase,preceded-by,followed-by,required,output-location,outputs
BMad Method,_meta,,,,,,,,,false,https://docs.bmad-method.org/llms.txt,
BMad Method,bmad-document-project,Document Project,DP,Analyze an existing project to produce useful documentation.,,anytime,,,false,project-knowledge,*
BMad Method,bmad-generate-project-context,Generate Project Context,GPC,Scan existing codebase to generate a lean LLM-optimized project-context.md. Essential for brownfield projects.,,anytime,,,false,output_folder,project context
BMad Method,bmad-quick-dev,Quick Dev,QQ,Unified intent-in code-out workflow: clarify plan implement review and present.,,anytime,,,false,implementation_artifacts,spec and project implementation
BMad Method,bmad-correct-course,Correct Course,CC,Navigate significant changes. May recommend start over update PRD redo architecture sprint planning or correct epics and stories.,,anytime,,,false,planning_artifacts,change proposal
BMad Method,bmad-document-project,Document Project,DP,Analyze an existing project to produce useful documentation.,,,anytime,,,false,project-knowledge,*
BMad Method,bmad-generate-project-context,Generate Project Context,GPC,Scan existing codebase to generate a lean LLM-optimized project-context.md. Essential for brownfield projects.,,,anytime,,,false,output_folder,project context
BMad Method,bmad-quick-dev,Quick Dev,QQ,Unified intent-in code-out workflow: clarify plan implement review and present.,,,anytime,,,false,implementation_artifacts,spec and project implementation
BMad Method,bmad-correct-course,Correct Course,CC,Navigate significant changes. May recommend start over update PRD redo architecture sprint planning or correct epics and stories.,,,anytime,,,false,planning_artifacts,change proposal
BMad Method,bmad-agent-tech-writer,Write Document,WD,"Describe in detail what you want, and the agent will follow documentation best practices. Multi-turn conversation with subprocess for research/review.",write,,anytime,,,false,project-knowledge,document
BMad Method,bmad-agent-tech-writer,Update Standards,US,Update agent memory documentation-standards.md with your specific preferences if you discover missing document conventions.,update-standards,,anytime,,,false,_bmad/_memory/tech-writer-sidecar,standards
BMad Method,bmad-agent-tech-writer,Mermaid Generate,MG,Create a Mermaid diagram based on user description. Will suggest diagram types if not specified.,mermaid,,anytime,,,false,planning_artifacts,mermaid diagram
BMad Method,bmad-agent-tech-writer,Validate Document,VD,Review the specified document against documentation standards and best practices. Returns specific actionable improvement suggestions organized by priority.,validate,[path],anytime,,,false,planning_artifacts,validation report
BMad Method,bmad-agent-tech-writer,Explain Concept,EC,Create clear technical explanations with examples and diagrams for complex concepts.,explain,[topic],anytime,,,false,project_knowledge,explanation
BMad Method,bmad-brainstorming,Brainstorm Project,BP,Expert guided facilitation through a single or multiple techniques.,,1-analysis,,,false,planning_artifacts,brainstorming session
BMad Method,bmad-market-research,Market Research,MR,"Market analysis competitive landscape customer needs and trends.",,1-analysis,,,false,"planning_artifacts|project-knowledge",research documents
BMad Method,bmad-domain-research,Domain Research,DR,Industry domain deep dive subject matter expertise and terminology.,,1-analysis,,,false,"planning_artifacts|project_knowledge",research documents
BMad Method,bmad-technical-research,Technical Research,TR,Technical feasibility architecture options and implementation approaches.,,1-analysis,,,false,"planning_artifacts|project_knowledge",research documents
BMad Method,bmad-brainstorming,Brainstorm Project,BP,Expert guided facilitation through a single or multiple techniques.,,,1-analysis,,,false,planning_artifacts,brainstorming session
BMad Method,bmad-market-research,Market Research,MR,Market analysis competitive landscape customer needs and trends.,,,1-analysis,,,false,planning_artifacts|project-knowledge,research documents
BMad Method,bmad-domain-research,Domain Research,DR,Industry domain deep dive subject matter expertise and terminology.,,,1-analysis,,,false,planning_artifacts|project_knowledge,research documents
BMad Method,bmad-technical-research,Technical Research,TR,Technical feasibility architecture options and implementation approaches.,,,1-analysis,,,false,planning_artifacts|project_knowledge,research documents
BMad Method,bmad-product-brief,Create Brief,CB,An expert guided experience to nail down your product idea in a brief. a gentler approach than PRFAQ when you are already sure of your concept and nothing will sway you.,,-A,1-analysis,,,false,planning_artifacts,product brief
BMad Method,bmad-prfaq,PRFAQ Challenge,WB,Working Backwards guided experience to forge and stress-test your product concept to ensure you have a great product that users will love and need through the PRFAQ gauntlet to determine feasibility and alignment with user needs. alternative to product brief.,,-H,1-analysis,,,false,planning_artifacts,prfaq document
BMad Method,bmad-create-prd,Create PRD,CP,Expert led facilitation to produce your Product Requirements Document.,,2-planning,,,true,planning_artifacts,prd
BMad Method,bmad-create-prd,Create PRD,CP,Expert led facilitation to produce your Product Requirements Document.,,,2-planning,,,true,planning_artifacts,prd
BMad Method,bmad-validate-prd,Validate PRD,VP,,,[path],2-planning,bmad-create-prd,,false,planning_artifacts,prd validation report
BMad Method,bmad-edit-prd,Edit PRD,EP,,,[path],2-planning,bmad-validate-prd,,false,planning_artifacts,updated prd
BMad Method,bmad-create-ux-design,Create UX,CU,"Guidance through realizing the plan for your UX, strongly recommended if a UI is a primary piece of the proposed project.",,2-planning,bmad-create-prd,,false,planning_artifacts,ux design
BMad Method,bmad-create-architecture,Create Architecture,CA,Guided workflow to document technical decisions.,,3-solutioning,,,true,planning_artifacts,architecture
BMad Method,bmad-create-epics-and-stories,Create Epics and Stories,CE,,,3-solutioning,bmad-create-architecture,,true,planning_artifacts,epics and stories
BMad Method,bmad-check-implementation-readiness,Check Implementation Readiness,IR,Ensure PRD UX Architecture and Epics Stories are aligned.,,3-solutioning,bmad-create-epics-and-stories,,true,planning_artifacts,readiness report
BMad Method,bmad-sprint-planning,Sprint Planning,SP,Kicks off implementation by producing a plan the implementation agents will follow in sequence for every story.,,4-implementation,,,true,implementation_artifacts,sprint status
BMad Method,bmad-sprint-status,Sprint Status,SS,Anytime: Summarize sprint status and route to next workflow.,,4-implementation,bmad-sprint-planning,,false,,
BMad Method,bmad-create-story,Create Story,CS,"Story cycle start: Prepare first found story in the sprint plan that is next or a specific epic/story designation.",create,,4-implementation,bmad-sprint-planning,bmad-create-story:validate,true,implementation_artifacts,story
BMad Method,bmad-create-ux-design,Create UX,CU,"Guidance through realizing the plan for your UX, strongly recommended if a UI is a primary piece of the proposed project.",,,2-planning,bmad-create-prd,,false,planning_artifacts,ux design
BMad Method,bmad-create-architecture,Create Architecture,CA,Guided workflow to document technical decisions.,,,3-solutioning,,,true,planning_artifacts,architecture
BMad Method,bmad-create-epics-and-stories,Create Epics and Stories,CE,,,,3-solutioning,bmad-create-architecture,,true,planning_artifacts,epics and stories
BMad Method,bmad-check-implementation-readiness,Check Implementation Readiness,IR,Ensure PRD UX Architecture and Epics Stories are aligned.,,,3-solutioning,bmad-create-epics-and-stories,,true,planning_artifacts,readiness report
BMad Method,bmad-sprint-planning,Sprint Planning,SP,Kicks off implementation by producing a plan the implementation agents will follow in sequence for every story.,,,4-implementation,,,true,implementation_artifacts,sprint status
BMad Method,bmad-sprint-status,Sprint Status,SS,Anytime: Summarize sprint status and route to next workflow.,,,4-implementation,bmad-sprint-planning,,false,,
BMad Method,bmad-create-story,Create Story,CS,Story cycle start: Prepare first found story in the sprint plan that is next or a specific epic/story designation.,create,,4-implementation,bmad-sprint-planning,bmad-create-story:validate,true,implementation_artifacts,story
BMad Method,bmad-create-story,Validate Story,VS,Validates story readiness and completeness before development work begins.,validate,,4-implementation,bmad-create-story:create,bmad-dev-story,false,implementation_artifacts,story validation report
BMad Method,bmad-dev-story,Dev Story,DS,Story cycle: Execute story implementation tasks and tests then CR then back to DS if fixes needed.,,4-implementation,bmad-create-story:validate,,true,,
BMad Method,bmad-code-review,Code Review,CR,Story cycle: If issues back to DS if approved then next CS or ER if epic complete.,,4-implementation,bmad-dev-story,,false,,
BMad Method,bmad-checkpoint-preview,Checkpoint,CK,Guided walkthrough of a change from purpose and context into details. Use for human review of commits branches or PRs.,,4-implementation,,,false,,
BMad Method,bmad-qa-generate-e2e-tests,QA Automation Test,QA,Generate automated API and E2E tests for implemented code. NOT for code review or story validation — use CR for that.,,4-implementation,bmad-dev-story,,false,implementation_artifacts,test suite
BMad Method,bmad-retrospective,Retrospective,ER,Optional at epic end: Review completed work lessons learned and next epic or if major issues consider CC.,,4-implementation,bmad-code-review,,false,implementation_artifacts,retrospective
BMad Method,bmad-dev-story,Dev Story,DS,Story cycle: Execute story implementation tasks and tests then CR then back to DS if fixes needed.,,,4-implementation,bmad-create-story:validate,,true,,
BMad Method,bmad-code-review,Code Review,CR,Story cycle: If issues back to DS if approved then next CS or ER if epic complete.,,,4-implementation,bmad-dev-story,,false,,
BMad Method,bmad-checkpoint-preview,Checkpoint,CK,Guided walkthrough of a change from purpose and context into details. Use for human review of commits branches or PRs.,,,4-implementation,,,false,,
BMad Method,bmad-qa-generate-e2e-tests,QA Automation Test,QA,Generate automated API and E2E tests for implemented code. NOT for code review or story validation — use CR for that.,,,4-implementation,bmad-dev-story,,false,implementation_artifacts,test suite
BMad Method,bmad-retrospective,Retrospective,ER,Optional at epic end: Review completed work lessons learned and next epic or if major issues consider CC.,,,4-implementation,bmad-code-review,,false,implementation_artifacts,retrospective
@@ -5,15 +5,11 @@ default_selected: true # This module will be selected by default for new install

# Variables from Core Config inserted:
## user_name
## project_name
## communication_language
## document_output_language
## output_folder

project_name:
  prompt: "What is your project called?"
  default: "{directory_name}"
  result: "{value}"

user_skill_level:
  prompt:
    - "What is your development experience level?"
@@ -139,7 +139,7 @@ parts: 1

## Solution Architecture
- Plugins: skill bundles with Anthropic plugin standard as base format + bmad-manifest.json extending for BMAD-specific metadata (installer options, capabilities, help integration, phase ordering, dependencies)
- Existing manifest example: `{"module-code":"bmm","replaces-skill":"bmad-create-product-brief","capabilities":[{"name":"create-brief","menu-code":"CB","supports-headless":true,"phase-name":"1-analysis","after":["brainstorming"],"before":["create-prd"],"is-required":true}]}`
- Existing manifest example: `{"module-code":"bmm","replaces-skill":"bmad-create-product-brief","capabilities":[{"name":"create-brief","menu-code":"CB","supports-headless":true,"phase-name":"1-analysis","preceded-by":["brainstorming"],"followed-by":["create-prd"],"is-required":true}]}`
- Vercel skills CLI handles platform translation; integration pattern (wrap/fork/call) is PRD decision
- bmad-setup: global skill scanning installed bmad-manifest.json files, registering capabilities, configuring project settings; always included as base skill in every bundle (solves bootstrapping)
- bmad-update: plugin update path without full reinstall; technical approach (diff/replace/preserve customizations) is PRD decision
@@ -33,16 +33,16 @@ When this skill completes, the user should:
The catalog uses this format:

```
module,skill,display-name,menu-code,description,action,args,phase,after,before,required,output-location,outputs
module,skill,display-name,menu-code,description,action,args,phase,preceded-by,followed-by,required,output-location,outputs
```

**Phases** determine the high-level flow:
- `anytime` — available regardless of workflow state
- Numbered phases (`1-analysis`, `2-planning`, etc.) flow in order; naming varies by module

**Dependencies** determine ordering within and across phases:
- `after` — skills that should ideally complete before this one
- `before` — skills that should run after this one
**Sequencing** determines recommended ordering within and across phases (these are soft suggestions, not hard gates — see `required` for gating):
- `preceded-by` — skills that should ideally complete before this one
- `followed-by` — skills that should ideally run after this one
- Format: `skill-name` for single-action skills, `skill-name:action` for multi-action skills

**Required gates**:
@@ -1,13 +1,13 @@
module,skill,display-name,menu-code,description,action,args,phase,after,before,required,output-location,outputs
module,skill,display-name,menu-code,description,action,args,phase,preceded-by,followed-by,required,output-location,outputs
Core,_meta,,,,,,,,,false,https://docs.bmad-method.org/llms.txt,
Core,bmad-brainstorming,Brainstorming,BSP,Use early in ideation or when stuck generating ideas.,,anytime,,,false,{output_folder}/brainstorming,brainstorming session
Core,bmad-party-mode,Party Mode,PM,Orchestrate multi-agent discussions when you need multiple perspectives or want agents to collaborate.,,anytime,,,false,,
Core,bmad-help,BMad Help,BH,,,anytime,,,false,,
Core,bmad-index-docs,Index Docs,ID,Use when LLM needs to understand available docs without loading everything.,,anytime,,,false,,
Core,bmad-shard-doc,Shard Document,SD,Use when doc becomes too large (>500 lines) to manage effectively.,[path],anytime,,,false,,
Core,bmad-editorial-review-prose,Editorial Review - Prose,EP,Use after drafting to polish written content.,[path],anytime,,,false,report located with target document,three-column markdown table with suggested fixes
Core,bmad-editorial-review-structure,Editorial Review - Structure,ES,Use when doc produced from multiple subprocesses or needs structural improvement.,[path],anytime,,,false,report located with target document,
Core,bmad-review-adversarial-general,Adversarial Review,AR,"Use for quality assurance or before finalizing deliverables. Code Review in other modules runs this automatically, but also useful for document reviews.",[path],anytime,,,false,,
Core,bmad-review-edge-case-hunter,Edge Case Hunter Review,ECH,Use alongside adversarial review for orthogonal coverage — method-driven not attitude-driven.,[path],anytime,,,false,,
Core,bmad-distillator,Distillator,DG,Use when you need token-efficient distillates that preserve all information for downstream LLM consumption.,[path],anytime,,,false,adjacent to source document or specified output_path,distillate markdown file(s)
Core,bmad-customize,BMad Customize,BC,"Use when you want to change how an agent or workflow behaves — add persistent facts, swap templates, insert activation hooks, or customize menus. Scans what's customizable, picks the right scope (agent vs workflow), writes the override to _bmad/custom/, and verifies the merge. No TOML hand-authoring required.",,anytime,,,false,{project-root}/_bmad/custom,TOML override files
Core,bmad-brainstorming,Brainstorming,BSP,Use early in ideation or when stuck generating ideas.,,,anytime,,,false,{output_folder}/brainstorming,brainstorming session
Core,bmad-party-mode,Party Mode,PM,Orchestrate multi-agent discussions when you need multiple perspectives or want agents to collaborate.,,,anytime,,,false,,
Core,bmad-help,BMad Help,BH,,,,anytime,,,false,,
Core,bmad-index-docs,Index Docs,ID,Use when LLM needs to understand available docs without loading everything.,,,anytime,,,false,,
Core,bmad-shard-doc,Shard Document,SD,Use when doc becomes too large (>500 lines) to manage effectively.,,[path],anytime,,,false,,
Core,bmad-editorial-review-prose,Editorial Review - Prose,EP,Use after drafting to polish written content.,,[path],anytime,,,false,report located with target document,three-column markdown table with suggested fixes
Core,bmad-editorial-review-structure,Editorial Review - Structure,ES,Use when doc produced from multiple subprocesses or needs structural improvement.,,[path],anytime,,,false,report located with target document,
Core,bmad-review-adversarial-general,Adversarial Review,AR,"Use for quality assurance or before finalizing deliverables. Code Review in other modules runs this automatically, but also useful for document reviews.",,[path],anytime,,,false,,
Core,bmad-review-edge-case-hunter,Edge Case Hunter Review,ECH,Use alongside adversarial review for orthogonal coverage — method-driven not attitude-driven.,,[path],anytime,,,false,,
Core,bmad-distillator,Distillator,DG,Use when you need token-efficient distillates that preserve all information for downstream LLM consumption.,,[path],anytime,,,false,adjacent to source document or specified output_path,distillate markdown file(s)
Core,bmad-customize,BMad Customize,BC,"Use when you want to change how an agent or workflow behaves — add persistent facts, swap templates, insert activation hooks, or customize menus. Scans what's customizable, picks the right scope (agent vs workflow), writes the override to _bmad/custom/, and verifies the merge. No TOML hand-authoring required.",,,anytime,,,false,{project-root}/_bmad/custom,TOML override files
@@ -11,6 +11,11 @@ user_name:
  default: "BMad"
  result: "{value}"

project_name:
  prompt: "What is your project called?"
  default: "{directory_name}"
  result: "{value}"

communication_language:
  prompt: "What language should agents use when chatting with you?"
  scope: user

File diff suppressed because it is too large
@@ -0,0 +1,294 @@
/**
 * parseSource() URL parsing tests
 *
 * Verifies that CustomModuleManager.parseSource() correctly handles Git URLs
 * across arbitrary hosts and path shapes (deep paths, nested groups, browse
 * links, repo names containing dots, etc.) using host-agnostic rules.
 *
 * Usage: node test/test-parse-source-urls.js
 */

const { CustomModuleManager } = require('../tools/installer/modules/custom-module-manager');

// ANSI colors
const colors = {
  reset: '\u001B[0m',
  green: '\u001B[32m',
  red: '\u001B[31m',
  cyan: '\u001B[36m',
  dim: '\u001B[2m',
};

let passed = 0;
let failed = 0;

function assert(condition, testName, errorMessage = '') {
  if (condition) {
    console.log(`${colors.green}✓${colors.reset} ${testName}`);
    passed++;
  } else {
    console.log(`${colors.red}✗${colors.reset} ${testName}`);
    if (errorMessage) {
      console.log(`  ${colors.dim}${errorMessage}${colors.reset}`);
    }
    failed++;
  }
}

const manager = new CustomModuleManager();

// ─── Deep path shapes (4+ segments) ─────────────────────────────────────────

console.log(`\n${colors.cyan}Deep path shapes${colors.reset}\n`);

{
  // Hosts that expose the repo at a nested path like /<org>/<project>/<marker>/<repo>.
  // The parser must preserve the full path (no stripping of intermediate segments).
  const result = manager.parseSource('https://git.example.com/myorg/MyProject/_git/my-module');
  assert(result.isValid === true, 'nested-path URL is valid');
  assert(result.type === 'url', 'nested-path type is url');
  assert(
    result.cloneUrl === 'https://git.example.com/myorg/MyProject/_git/my-module',
    'nested-path cloneUrl preserves full path',
    `Got: ${result.cloneUrl}`,
  );
  assert(result.subdir === null, 'nested-path URL has no subdir');
  assert(
    result.cacheKey === 'git.example.com/myorg/MyProject/_git/my-module',
    'nested-path cacheKey includes full repo path',
    `Got: ${result.cacheKey}`,
  );
  assert(result.displayName === '_git/my-module', 'nested-path displayName uses last two segments', `Got: ${result.displayName}`);
}

{
  const result = manager.parseSource('https://git.example.com/myorg/MyProject/_git/my-module.git');
  assert(result.isValid === true, 'nested-path URL with .git suffix is valid');
  assert(
    result.cloneUrl === 'https://git.example.com/myorg/MyProject/_git/my-module',
    'nested-path .git suffix stripped from cloneUrl',
    `Got: ${result.cloneUrl}`,
  );
}

{
  // Browse links that use ?path=/... to point at a subdirectory.
  const result = manager.parseSource('https://git.example.com/myorg/MyProject/_git/my-module?path=/path/to/subdir');
  assert(result.isValid === true, 'URL with ?path= is valid');
  assert(
    result.cloneUrl === 'https://git.example.com/myorg/MyProject/_git/my-module',
    '?path= cloneUrl excludes subdir',
    `Got: ${result.cloneUrl}`,
  );
  assert(result.subdir === 'path/to/subdir', '?path= subdir correctly extracted', `Got: ${result.subdir}`);
}

// ─── Azure DevOps URLs (Issue #2268) ────────────────────────────────────────

console.log(`\n${colors.cyan}Azure DevOps URLs (Issue #2268)${colors.reset}\n`);

{
  // Modern dev.azure.com format — the exact URL from the bug report.
  const result = manager.parseSource('https://dev.azure.com/myorg/MyProject/_git/my-module');
  assert(result.isValid === true, 'ADO modern URL is valid');
  assert(result.type === 'url', 'ADO modern type is url');
  assert(
    result.cloneUrl === 'https://dev.azure.com/myorg/MyProject/_git/my-module',
    'ADO modern cloneUrl preserves full _git path',
    `Got: ${result.cloneUrl}`,
  );
  assert(
    result.cacheKey === 'dev.azure.com/myorg/MyProject/_git/my-module',
    'ADO modern cacheKey includes full path',
    `Got: ${result.cacheKey}`,
  );
  assert(result.subdir === null, 'ADO modern URL has no subdir');
}

{
  // Modern format with .git suffix
  const result = manager.parseSource('https://dev.azure.com/myorg/MyProject/_git/my-module.git');
  assert(result.isValid === true, 'ADO modern .git suffix is valid');
  assert(
    result.cloneUrl === 'https://dev.azure.com/myorg/MyProject/_git/my-module',
    'ADO modern .git suffix stripped from cloneUrl',
    `Got: ${result.cloneUrl}`,
  );
}

{
  // Modern format with ?path= subdir (browse link)
  const result = manager.parseSource('https://dev.azure.com/myorg/MyProject/_git/my-module?path=/src/skills');
  assert(result.isValid === true, 'ADO modern ?path= is valid');
  assert(
    result.cloneUrl === 'https://dev.azure.com/myorg/MyProject/_git/my-module',
    'ADO modern ?path= cloneUrl excludes subdir',
    `Got: ${result.cloneUrl}`,
  );
  assert(result.subdir === 'src/skills', 'ADO modern ?path= subdir extracted', `Got: ${result.subdir}`);
}

{
  // Legacy visualstudio.com format
  const result = manager.parseSource('https://myorg.visualstudio.com/MyProject/_git/my-module');
  assert(result.isValid === true, 'ADO legacy URL is valid');
  assert(
    result.cloneUrl === 'https://myorg.visualstudio.com/MyProject/_git/my-module',
    'ADO legacy cloneUrl preserves full path',
    `Got: ${result.cloneUrl}`,
  );
  assert(
    result.cacheKey === 'myorg.visualstudio.com/MyProject/_git/my-module',
    'ADO legacy cacheKey includes full path',
    `Got: ${result.cacheKey}`,
  );
}

{
  // Legacy format with .git suffix
  const result = manager.parseSource('https://myorg.visualstudio.com/MyProject/_git/my-module.git');
  assert(result.isValid === true, 'ADO legacy .git suffix is valid');
  assert(
    result.cloneUrl === 'https://myorg.visualstudio.com/MyProject/_git/my-module',
    'ADO legacy .git suffix stripped from cloneUrl',
    `Got: ${result.cloneUrl}`,
  );
}

{
  // Legacy format with ?path= subdir
  const result = manager.parseSource('https://myorg.visualstudio.com/MyProject/_git/my-module?path=/src');
  assert(result.isValid === true, 'ADO legacy ?path= is valid');
  assert(
    result.cloneUrl === 'https://myorg.visualstudio.com/MyProject/_git/my-module',
    'ADO legacy ?path= cloneUrl excludes subdir',
    `Got: ${result.cloneUrl}`,
  );
  assert(result.subdir === 'src', 'ADO legacy ?path= subdir extracted', `Got: ${result.subdir}`);
}

// ─── Subdomain hosts ────────────────────────────────────────────────────────

console.log(`\n${colors.cyan}Subdomain hosts${colors.reset}\n`);

{
  const result = manager.parseSource('https://myorg.example.com/MyProject/_git/my-module');
  assert(result.isValid === true, 'subdomain URL is valid');
  assert(result.type === 'url', 'subdomain type is url');
  assert(
    result.cloneUrl === 'https://myorg.example.com/MyProject/_git/my-module',
    'subdomain cloneUrl preserves full path',
    `Got: ${result.cloneUrl}`,
  );
  assert(result.subdir === null, 'subdomain URL has no subdir');
  assert(
    result.cacheKey === 'myorg.example.com/MyProject/_git/my-module',
    'subdomain cacheKey includes full repo path',
    `Got: ${result.cacheKey}`,
  );
}

// ─── Simple owner/repo URLs (regression) ────────────────────────────────────

console.log(`\n${colors.cyan}Simple owner/repo URLs (regression check)${colors.reset}\n`);

{
  const result = manager.parseSource('https://github.com/owner/repo');
  assert(result.isValid === true, 'GitHub basic URL still valid');
  assert(result.cloneUrl === 'https://github.com/owner/repo', 'GitHub cloneUrl unchanged', `Got: ${result.cloneUrl}`);
  assert(result.cacheKey === 'github.com/owner/repo', 'GitHub cacheKey unchanged', `Got: ${result.cacheKey}`);
}

{
  const result = manager.parseSource('https://github.com/owner/repo/tree/main/subdir');
  assert(result.isValid === true, 'GitHub URL with tree path still valid');
  assert(result.cloneUrl === 'https://github.com/owner/repo', 'GitHub tree URL cloneUrl correct', `Got: ${result.cloneUrl}`);
  assert(result.subdir === 'subdir', 'GitHub tree subdir still extracted', `Got: ${result.subdir}`);
}

{
  const result = manager.parseSource('git@github.com:owner/repo.git');
  assert(result.isValid === true, 'SSH URL still valid');
  assert(result.cloneUrl === 'git@github.com:owner/repo.git', 'SSH cloneUrl unchanged', `Got: ${result.cloneUrl}`);
}

// ─── Generic URL handling (any host, any path depth) ────────────────────────

console.log(`\n${colors.cyan}Generic URL handling${colors.reset}\n`);

{
  // GitLab nested groups — the old 2-segment regex would have failed this.
  const result = manager.parseSource('https://gitlab.com/group/subgroup/repo');
  assert(result.isValid === true, 'GitLab nested-group URL is valid');
  assert(
    result.cloneUrl === 'https://gitlab.com/group/subgroup/repo',
    'GitLab nested-group cloneUrl preserves full path',
    `Got: ${result.cloneUrl}`,
  );
  assert(
    result.cacheKey === 'gitlab.com/group/subgroup/repo',
    'GitLab nested-group cacheKey includes full path',
    `Got: ${result.cacheKey}`,
  );
  assert(result.displayName === 'subgroup/repo', 'GitLab nested-group displayName uses last two segments', `Got: ${result.displayName}`);
}

{
  const result = manager.parseSource('https://gitlab.com/group/subgroup/repo/-/tree/main/src/module');
  assert(result.isValid === true, 'GitLab nested-group tree URL is valid');
  assert(
    result.cloneUrl === 'https://gitlab.com/group/subgroup/repo',
    'GitLab nested-group tree cloneUrl excludes subdir',
    `Got: ${result.cloneUrl}`,
  );
  assert(result.subdir === 'src/module', 'GitLab nested-group tree subdir extracted', `Got: ${result.subdir}`);
}

{
  // Self-hosted host with a repo name containing dots — the old regex
  // explicitly excluded dots from the repo segment.
  const result = manager.parseSource('https://git.example.com/owner/my.repo.name');
  assert(result.isValid === true, 'repo name with dots is valid');
  assert(
    result.cloneUrl === 'https://git.example.com/owner/my.repo.name',
    'repo name with dots preserved in cloneUrl',
    `Got: ${result.cloneUrl}`,
  );
  assert(result.displayName === 'owner/my.repo.name', 'repo name with dots preserved in displayName', `Got: ${result.displayName}`);
}

{
  // Browser URL pointing at a ref with NO trailing subdir must still strip
  // the /tree/<ref> segment from the clone URL.
  const result = manager.parseSource('https://github.com/owner/repo/tree/main');
  assert(result.isValid === true, 'tree URL without subdir is valid');
  assert(
    result.cloneUrl === 'https://github.com/owner/repo',
    'tree URL without subdir strips ref from cloneUrl',
    `Got: ${result.cloneUrl}`,
  );
  assert(result.subdir === null, 'tree URL without subdir yields null subdir', `Got: ${result.subdir}`);
  assert(result.displayName === 'owner/repo', 'tree URL without subdir displayName is owner/repo', `Got: ${result.displayName}`);
}

{
  // Same shape for GitLab's /-/tree form and Gitea's /src/branch form.
  const gitlab = manager.parseSource('https://gitlab.com/group/repo/-/tree/main');
  assert(
    gitlab.cloneUrl === 'https://gitlab.com/group/repo' && gitlab.subdir === null,
    'GitLab /-/tree/<ref> without subdir strips ref',
    `Got: ${gitlab.cloneUrl} subdir=${gitlab.subdir}`,
  );

  const gitea = manager.parseSource('https://gitea.example.com/owner/repo/src/branch/main');
  assert(
    gitea.cloneUrl === 'https://gitea.example.com/owner/repo' && gitea.subdir === null,
    'Gitea /src/branch/<ref> without subdir strips ref',
    `Got: ${gitea.cloneUrl} subdir=${gitea.subdir}`,
  );
}

// ─── Summary ────────────────────────────────────────────────────────────────

console.log(`\n${colors.cyan}Results: ${passed} passed, ${failed} failed${colors.reset}\n`);
process.exit(failed > 0 ? 1 : 0);
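The host-agnostic rules these tests exercise (strip a trailing `.git`, read `?path=` as a subdirectory hint, strip `/tree/<ref>` browse segments) can be sketched roughly as below. This is an illustrative stand-in, not the installer's actual `parseSource()` — the function name, regexes, and returned fields are assumptions:

```javascript
// Hypothetical sketch of host-agnostic Git URL parsing. Rules shown:
// 1. "?path=/sub" query parameter names a subdirectory inside the repo.
// 2. GitHub-style "/tree/<ref>[/<subdir>]" browse segments are stripped
//    from the clone URL (and may also name a subdirectory).
// 3. A trailing ".git" is removed without touching dots elsewhere.
function parseGitUrl(source) {
  const url = new URL(source);
  let subdir = null;

  // Browse links encode the subdirectory as a query parameter.
  const pathParam = url.searchParams.get('path');
  if (pathParam) subdir = pathParam.replace(/^\/+/, '');

  // Browse-ref segment: /tree/<ref> optionally followed by a subdir.
  let repoPath = url.pathname.replace(/\/+$/, '');
  const tree = repoPath.match(/^(.*)\/tree\/[^/]+(?:\/(.*))?$/);
  if (tree) {
    repoPath = tree[1];
    if (tree[2]) subdir = tree[2];
  }

  // Only a *trailing* ".git" is stripped; "_git" path markers and dotted
  // repo names survive intact.
  repoPath = repoPath.replace(/\.git$/, '');

  return {
    cloneUrl: `${url.protocol}//${url.host}${repoPath}`,
    cacheKey: `${url.host}${repoPath}`,
    subdir,
  };
}
```

Because every rule operates on path shape rather than a host allowlist, the same code handles dev.azure.com, visualstudio.com, GitLab nested groups, and self-hosted forges alike.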
@@ -222,7 +222,6 @@ Support assumption: full Agent Skills support. Gemini CLI docs confirm workspace

- [x] Confirm Gemini CLI native skills path is `.gemini/skills/{skill-name}/SKILL.md` (per [geminicli.com/docs/cli/skills](https://geminicli.com/docs/cli/skills/))
- [x] Implement native skills output — target_dir `.gemini/skills`, skill_format true, template_type default (replaces TOML templates)
- [x] Add legacy cleanup for `.gemini/commands` (via `legacy_targets`)
- [x] Test fresh install — skills written to `.gemini/skills/bmad-master/SKILL.md` with correct frontmatter
- [x] Test reinstall/upgrade from legacy TOML command output — legacy dir removed, skills installed
- [x] Confirm no ancestor conflict protection is needed — Gemini CLI uses workspace > user > extension precedence, no ancestor directory inheritance

@@ -236,7 +235,6 @@ Support assumption: full Agent Skills support. iFlow docs confirm workspace skil

- [x] Confirm iFlow native skills path is `.iflow/skills/{skill-name}/SKILL.md`
- [x] Implement native skills output — target_dir `.iflow/skills`, skill_format true, template_type default
- [x] Add legacy cleanup for `.iflow/commands` (via `legacy_targets`)
- [x] Test fresh install — skills written to `.iflow/skills/bmad-master/SKILL.md`
- [x] Test legacy cleanup — legacy commands dir removed
- [x] Implement/extend automated tests — 6 assertions in test suite 24

@@ -249,7 +247,6 @@ Support assumption: full Agent Skills support. Qwen Code supports workspace skil

- [x] Confirm QwenCoder native skills path is `.qwen/skills/{skill-name}/SKILL.md`
- [x] Implement native skills output — target_dir `.qwen/skills`, skill_format true, template_type default
- [x] Add legacy cleanup for `.qwen/commands` (via `legacy_targets`)
- [x] Test fresh install — skills written to `.qwen/skills/bmad-master/SKILL.md`
- [x] Test legacy cleanup — legacy commands dir removed
- [x] Implement/extend automated tests — 6 assertions in test suite 25

@@ -262,7 +259,6 @@ Support assumption: full Agent Skills support. Rovo Dev now supports workspace s

- [x] Confirm Rovo Dev native skills path is `.rovodev/skills/{skill-name}/SKILL.md` (per Atlassian blog)
- [x] Replace 257-line custom `rovodev.js` with config-driven entry in `platform-codes.yaml`
- [x] Add legacy cleanup for `.rovodev/workflows` (via `legacy_targets`) and BMAD entries in `prompts.yml` (via `cleanupRovoDevPrompts()` in `_config-driven.js`)
- [x] Test fresh install — skills written to `.rovodev/skills/bmad-master/SKILL.md`
- [x] Test legacy cleanup — legacy workflows dir removed, `prompts.yml` BMAD entries stripped while preserving user entries
- [x] Implement/extend automated tests — 8 assertions in test suite 26
@@ -23,13 +23,10 @@ checkForUpdate().catch(() => {

async function checkForUpdate() {
  try {
    // For beta versions, check the beta tag; otherwise check latest
    const isBeta =
      packageJson.version.includes('Beta') ||
      packageJson.version.includes('beta') ||
      packageJson.version.includes('alpha') ||
      packageJson.version.includes('rc');
    const tag = isBeta ? 'beta' : 'latest';
    // Prereleases (e.g. 6.5.1-next.0) live on the `next` dist-tag; stable
    // releases live on `latest`. semver.prerelease() returns null for stable,
    // so this correctly routes pre-1.0-next/rc/etc. without string matching.
    const tag = semver.prerelease(packageJson.version) ? 'next' : 'latest';

    const result = execSync(`npm view ${packageName}@${tag} version`, {
      encoding: 'utf8',
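The dist-tag routing above leans on `semver.prerelease()`, which per the SemVer spec treats any hyphenated suffix after MAJOR.MINOR.PATCH as prerelease data. A minimal stand-in makes the routing concrete (illustrative only — the installer uses the real `semver` package, and `isPrerelease`/`distTagFor` are hypothetical names):

```javascript
// Minimal stand-in for the semver.prerelease(v) !== null check: a SemVer
// prerelease carries a hyphenated suffix directly after the patch number.
function isPrerelease(version) {
  return /^\d+\.\d+\.\d+-/.test(version);
}

// Route prereleases to the `next` dist-tag, stable releases to `latest`.
function distTagFor(version) {
  return isPrerelease(version) ? 'next' : 'latest';
}
```

Unlike the old `includes('beta')`-style string matching, this cannot misfire on a version whose digits happen to contain a matching substring, and it needs no list of prerelease labels.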
@@ -15,7 +15,18 @@ module.exports = {
    ['--modules <modules>', 'Comma-separated list of module IDs to install (e.g., "bmm,bmb")'],
    [
      '--tools <tools>',
      'Comma-separated list of tool/IDE IDs to configure (e.g., "claude-code,cursor"). Use "none" to skip tool configuration.',
      'Comma-separated list of tool/IDE IDs to configure (e.g., "claude-code,cursor"). Required for fresh non-interactive (--yes) installs. Run with --list-tools to see all valid IDs.',
    ],
    ['--list-tools', 'Print all supported tool/IDE IDs (with target directories) and exit.'],
    [
      '--set <spec>',
      'Set a module config option non-interactively. Spec format: <module>.<key>=<value> (e.g. bmm.project_knowledge=research). Repeatable. Run --list-options to see available keys.',
      (value, prev) => [...(prev || []), value],
      [],
    ],
    [
      '--list-options [module]',
      'List available --set keys for all locally-known official modules, or for a single module by code, then exit.',
    ],
    ['--action <type>', 'Action type for existing installations: install, update, or quick-update'],
    ['--user-name <name>', 'Name for agents to use (default: system username)'],
@@ -40,12 +51,49 @@ module.exports = {
  ],
  action: async (options) => {
    try {
      if (options.listTools) {
        const { formatPlatformList } = require('../ide/platform-codes');
        process.stdout.write((await formatPlatformList()) + '\n');
        process.exit(0);
      }

      if (options.listOptions !== undefined) {
        const { formatOptionsList } = require('../list-options');
        const moduleArg = options.listOptions === true ? null : options.listOptions;
        const { text, ok } = await formatOptionsList(moduleArg);
        const stream = ok ? process.stdout : process.stderr;
        // process.exit() forces immediate termination and can truncate the
        // buffered write when stdout/stderr is piped or captured by CI. Wait
        // for the write to flush, then set process.exitCode and return so the
        // event loop drains naturally. Non-zero exit when a single-module
        // lookup misses so a CI typo like `--list-options bmn` doesn't look
        // successful in scripts.
        await new Promise((resolve, reject) => {
          stream.write(text + '\n', (error) => (error ? reject(error) : resolve()));
        });
        process.exitCode = ok ? 0 : 1;
        return;
      }

      // Set debug flag as environment variable for all components
      if (options.debug) {
        process.env.BMAD_DEBUG_MANIFEST = 'true';
        await prompts.log.info('Debug mode enabled');
      }

      // Validate --set syntax up-front so malformed entries fail fast,
      // before we touch the network or filesystem. Parsed entries are
      // re-derived inside ui.js where overrides are seeded.
      if (options.set && options.set.length > 0) {
        const { parseSetEntries } = require('../set-overrides');
        try {
          parseSetEntries(options.set);
        } catch (error) {
          await prompts.log.error(error.message);
          process.exit(1);
        }
      }

      const config = await ui.promptInstall(options);

      // Handle cancel
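The fail-fast validation above parses repeatable `--set <module>.<key>=<value>` specs before any network or filesystem work. A rough sketch of what such a parser looks like (the real one lives in `tools/installer/set-overrides.js`; the function name, regex, and error wording here are assumptions for illustration):

```javascript
// Hypothetical sketch: turn ['bmm.project_knowledge=research', ...] into
// { bmm: { project_knowledge: 'research' } }, throwing on malformed specs
// so a typo fails before the installer touches anything.
function parseSetSpecs(specs) {
  const overrides = {};
  for (const spec of specs) {
    // <module> may not contain dots; <key> may (nested TOML keys);
    // everything after the first '=' is the raw value.
    const match = /^([^.=\s]+)\.([^=\s]+)=(.*)$/.exec(spec);
    if (!match) {
      throw new Error(`Invalid --set spec "${spec}" (expected <module>.<key>=<value>)`);
    }
    const [, module, key, value] = match;
    overrides[module] = overrides[module] || {};
    overrides[module][key] = value;
  }
  return overrides;
}
```

Validating once up front and re-deriving later (as the comment in the diff notes) keeps the CLI layer stateless: the parsed object never has to survive across the prompt flow.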
@@ -54,8 +102,13 @@ module.exports = {
        process.exit(0);
      }

      // Handle quick update separately
      // Handle quick update separately. --set is a post-install TOML patch so
      // it works the same way for quick-update as for a regular install — the
      // installer runs, then `applySetOverrides` patches the central config
      // files. Pass the parsed overrides through.
      if (config.actionType === 'quick-update') {
        const { parseSetEntries } = require('../set-overrides');
        config.setOverrides = parseSetEntries(options.set || []);
        const result = await installer.quickUpdate(config);
        await prompts.log.success('Quick update complete!');
        await prompts.log.info(`Updated ${result.moduleCount} modules with preserved settings (${result.modules.join(', ')})`);
@@ -81,7 +134,7 @@ module.exports = {
      } else {
        await prompts.log.error(`Installation failed: ${error.message}`);
      }
      if (error.stack) {
      if (error.stack && !error.expected) {
        await prompts.log.message(error.stack);
      }
    } catch {
@@ -3,7 +3,19 @@
 * User input comes from either UI answers or headless CLI flags.
 */
class Config {
  constructor({ directory, modules, ides, skipPrompts, verbose, actionType, coreConfig, moduleConfigs, quickUpdate, channelOptions }) {
  constructor({
    directory,
    modules,
    ides,
    skipPrompts,
    verbose,
    actionType,
    coreConfig,
    moduleConfigs,
    quickUpdate,
    channelOptions,
    setOverrides,
  }) {
    this.directory = directory;
    this.modules = Object.freeze([...modules]);
    this.ides = Object.freeze([...ides]);

@@ -15,6 +27,11 @@ class Config {
    this._quickUpdate = quickUpdate;
    // channelOptions carry a Map + Set; don't deep-freeze.
    this.channelOptions = channelOptions || null;
    // Parsed `--set <module>.<key>=<value>` overrides, applied as a TOML
    // patch AFTER the install finishes. Shape: { moduleCode: { key: value } }.
    // Intentionally NOT integrated with the prompt/template/schema flow; see
    // `tools/installer/set-overrides.js` for the rationale and tradeoffs.
    this.setOverrides = setOverrides || {};
    Object.freeze(this);
  }

@@ -40,6 +57,7 @@ class Config {
      moduleConfigs: userInput.moduleConfigs || null,
      quickUpdate: userInput._quickUpdate || false,
      channelOptions: userInput.channelOptions || null,
      setOverrides: userInput.setOverrides || {},
    });
  }
@@ -12,8 +12,10 @@ const { BMAD_FOLDER_NAME } = require('../ide/shared/path-utils');
const { InstallPaths } = require('./install-paths');
const { ExternalModuleManager } = require('../modules/external-manager');
const { resolveModuleVersion } = require('../modules/version-resolver');
const { MODULE_HELP_CSV_HEADER } = require('../modules/module-help-schema');

const { ExistingInstall } = require('./existing-install');
const { warnPreNativeSkillsLegacy } = require('./legacy-warnings');

class Installer {
  constructor() {

@@ -41,6 +43,16 @@ class Installer {
    const officialModules = await OfficialModules.build(config, paths);
    const existingInstall = await ExistingInstall.detect(paths.bmadDir);

    try {
      await warnPreNativeSkillsLegacy({
        projectRoot: paths.projectRoot,
        existingVersion: existingInstall.installed ? existingInstall.version : null,
      });
    } catch (error) {
      // Legacy-dir scan is informational; never let it abort install.
      await prompts.log.warn(`Warning: Could not check for legacy BMAD entries: ${error.message}`);
    }

    if (existingInstall.installed) {
      await this._removeDeselectedModules(existingInstall, config, paths);
      updateState = await this._prepareUpdateState(paths, config, existingInstall, officialModules);
@@ -183,15 +195,16 @@ class Installer {

    if (toRemove.length === 0) return;

    await this.ideManager.ensureInitialized();
    for (const ide of toRemove) {
      try {
        const handler = this.ideManager.handlers.get(ide);
        if (handler) {
          await handler.cleanup(paths.projectRoot);
        }
      } catch (error) {
        await prompts.log.warn(`Warning: Failed to remove ${ide}: ${error.message}`);
    // Pass the newly-selected list as remainingIdes so cleanupByList skips
    // target_dir wipes for IDEs whose directory is still owned by a peer
    // (e.g. removing 'cursor' while 'gemini' remains — both share .agents/skills).
    const results = await this.ideManager.cleanupByList(paths.projectRoot, toRemove, {
      remainingIdes: [...newlySelected],
    });

    for (const result of results || []) {
      if (result && result.success === false) {
        await prompts.log.warn(`Warning: Failed to remove ${result.ide}: ${result.error || 'unknown error'}`);
      }
    }
  }
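The shared-directory guard described in that comment — don't wipe a `target_dir` that a remaining IDE still owns — can be sketched as a set-difference over directory ownership. Everything here is illustrative: `dirsSafeToWipe`, `targetDirOf`, and the example directory values are hypothetical, not the installer's API:

```javascript
// Hypothetical sketch: given IDEs being removed, IDEs that remain
// selected, and a lookup from IDE id to its target directory, return
// only the directories that no remaining IDE still owns.
function dirsSafeToWipe(toRemove, remainingIdes, targetDirOf) {
  const stillOwned = new Set(remainingIdes.map(targetDirOf));
  // Deduplicate while filtering out peer-owned directories.
  return [...new Set(toRemove.map(targetDirOf))].filter((dir) => !stillOwned.has(dir));
}
```

With two IDEs sharing one skills directory, removing one of them leaves the shared directory untouched; only directories with no surviving owner are wiped.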
@@ -298,6 +311,19 @@ class Installer {
      moduleConfigs,
    });

    // Apply post-install --set TOML patches. Runs after writeCentralConfig
    // (inside generateManifests above) so the patch operates on the
    // freshly written `_bmad/config.toml` / `_bmad/config.user.toml`.
    // See `tools/installer/set-overrides.js` for routing rules.
    if (config.setOverrides && Object.keys(config.setOverrides).length > 0) {
      const { applySetOverrides } = require('../set-overrides');
      const applied = await applySetOverrides(config.setOverrides, paths.bmadDir);
      if (applied.length > 0) {
        const summary = applied.map((a) => `${a.module}.${a.key} → ${a.file}`).join(', ');
        await prompts.log.info(`Applied --set overrides: ${summary}`);
      }
    }

    message('Generating help catalog...');
    await this.mergeModuleHelpCatalogs(paths.bmadDir, manifestGen.agents);
    addResult('Help catalog', 'ok');
@@ -342,13 +368,14 @@ class Installer {
      return;
    }

    for (const ide of validIdes) {
      const setupResult = await this.ideManager.setup(ide, paths.projectRoot, paths.bmadDir, {
        selectedModules: allModules || [],
        verbose: config.verbose,
        previousSkillIds,
      });
    const setupResults = await this.ideManager.setupBatch(validIdes, paths.projectRoot, paths.bmadDir, {
      selectedModules: allModules || [],
      verbose: config.verbose,
      previousSkillIds,
    });

    for (const setupResult of setupResults) {
      const ide = setupResult.ide;
      if (setupResult.success) {
        addResult(ide, 'ok', setupResult.detail || '');
      } else {
@@ -910,29 +937,15 @@ class Installer {
  /**
   * Merge all module-help.csv files into a single bmad-help.csv.
   * Scans all installed modules for module-help.csv and merges them.
   * Enriches agent info from the in-memory agent list produced by ManifestGenerator.
   * Output is written to _bmad/_config/bmad-help.csv.
   * Output preserves the source schema verbatim — see schema below.
   * @param {string} bmadDir - BMAD installation directory
   * @param {Array<Object>} agentEntries - Agents collected from module.yaml (code, name, title, icon, module, ...)
   * @param {Array<Object>} _agentEntries - Unused; retained for call-site compatibility
   */
  async mergeModuleHelpCatalogs(bmadDir, agentEntries = []) {
  async mergeModuleHelpCatalogs(bmadDir, _agentEntries = []) {
    const allRows = [];
    const headerRow =
      'module,phase,name,code,sequence,workflow-file,command,required,agent-name,agent-command,agent-display-name,agent-title,options,description,output-location,outputs';

    // Build agent lookup from the in-memory list (agent code → command + display fields).
    const agentInfo = new Map();
    for (const agent of agentEntries) {
      if (!agent || !agent.code) continue;
      const agentCommand = agent.module ? `bmad:${agent.module}:agent:${agent.code}` : `bmad:agent:${agent.code}`;
      const displayName = agent.name || agent.code;
      const titleCombined = agent.icon && agent.title ? `${agent.icon} ${agent.title}` : agent.title || agent.code;
      agentInfo.set(agent.code, {
        command: agentCommand,
        displayName,
        title: titleCombined,
      });
    }
    const headerRow = MODULE_HELP_CSV_HEADER;
    const COLUMN_COUNT = 13;
    const PHASE_INDEX = 7;

    // Get all installed module directories
    const entries = await fs.readdir(bmadDir, { withFileTypes: true });
@@ -963,72 +976,37 @@ class Installer {
      const content = await fs.readFile(helpFilePath, 'utf8');
      const lines = content.split('\n').filter((line) => line.trim() && !line.startsWith('#'));

      let headerWarned = false;
      for (const line of lines) {
        // Skip header row
        // Header row: warn on drift from canonical schema, then skip.
        // Data rows are loaded positionally regardless, so the warning
        // is advisory — the maintainer should rename their columns.
        if (line.startsWith('module,')) {
          if (!headerWarned && line.trim() !== headerRow) {
            await prompts.log.warn(
              ` ${moduleName}/module-help.csv header does not match canonical schema. ` +
                `Expected: ${headerRow} | Found: ${line.trim()} | Data loaded positionally.`,
            );
            headerWarned = true;
          }
          continue;
        }

        // Parse the line - handle quoted fields with commas
        const columns = this.parseCSVLine(line);
        if (columns.length >= 12) {
          // Map old schema to new schema
          // Old: module,phase,name,code,sequence,workflow-file,command,required,agent,options,description,output-location,outputs
          // New: module,phase,name,code,sequence,workflow-file,command,required,agent-name,agent-command,agent-display-name,agent-title,options,description,output-location,outputs
        if (columns.length < COLUMN_COUNT - 1) continue;

          const [
            module,
            phase,
            name,
            code,
            sequence,
            workflowFile,
            command,
            required,
            agentName,
            options,
            description,
            outputLocation,
            outputs,
          ] = columns;
        // Pad short rows; truncate over-long rows
        const padded = columns.slice(0, COLUMN_COUNT);
        while (padded.length < COLUMN_COUNT) padded.push('');

          // Pass through _meta rows as-is (module metadata, not a skill)
          if (phase === '_meta') {
            const finalModule = (!module || module.trim() === '') && moduleName !== 'core' ? moduleName : module || '';
            const metaRow = [finalModule, '_meta', '', '', '', '', '', 'false', '', '', '', '', '', '', outputLocation || '', ''];
            allRows.push(metaRow.map((c) => this.escapeCSVField(c)).join(','));
            continue;
          }

          // If module column is empty, set it to this module's name (except for core which stays empty for universal tools)
          const finalModule = (!module || module.trim() === '') && moduleName !== 'core' ? moduleName : module || '';

          // Lookup agent info
          const cleanAgentName = agentName ? agentName.trim() : '';
          const agentData = agentInfo.get(cleanAgentName) || { command: '', displayName: '', title: '' };

          // Build new row with agent info
          const newRow = [
            finalModule,
            phase || '',
            name || '',
            code || '',
            sequence || '',
            workflowFile || '',
            command || '',
            required || 'false',
            cleanAgentName,
            agentData.command,
            agentData.displayName,
            agentData.title,
            options || '',
            description || '',
            outputLocation || '',
            outputs || '',
          ];

          allRows.push(newRow.map((c) => this.escapeCSVField(c)).join(','));
        // If module column is empty, fill with this module's name
        // (core stays empty so its rows render as universal tools)
        if ((!padded[0] || padded[0].trim() === '') && moduleName !== 'core') {
          padded[0] = moduleName;
        }

        allRows.push(padded.map((c) => this.escapeCSVField(c)).join(','));
      }

      if (process.env.BMAD_VERBOSE_INSTALL === 'true') {
@ -1040,44 +1018,34 @@ class Installer {
|
|||
}
|
||||
}
|
||||
|
||||
// Sort by module, then phase, then sequence
|
||||
allRows.sort((a, b) => {
|
||||
const colsA = this.parseCSVLine(a);
|
||||
const colsB = this.parseCSVLine(b);
|
||||
// Sort by module, then phase. Stable sort preserves authored order within a phase.
|
||||
const decorated = allRows.map((row, index) => ({ row, index, cols: this.parseCSVLine(row) }));
|
||||
decorated.sort((a, b) => {
|
||||
const moduleA = (a.cols[0] || '').toLowerCase();
|
||||
const moduleB = (b.cols[0] || '').toLowerCase();
|
||||
if (moduleA !== moduleB) return moduleA.localeCompare(moduleB);
|
||||
|
||||
// Module comparison (empty module/universal tools come first)
|
||||
const moduleA = (colsA[0] || '').toLowerCase();
|
||||
const moduleB = (colsB[0] || '').toLowerCase();
|
||||
if (moduleA !== moduleB) {
|
||||
return moduleA.localeCompare(moduleB);
|
||||
}
|
||||
const phaseA = a.cols[PHASE_INDEX] || '';
|
||||
const phaseB = b.cols[PHASE_INDEX] || '';
|
||||
if (phaseA !== phaseB) return phaseA.localeCompare(phaseB);
|
||||
|
||||
// Phase comparison
|
||||
const phaseA = colsA[1] || '';
|
||||
const phaseB = colsB[1] || '';
|
||||
if (phaseA !== phaseB) {
|
||||
return phaseA.localeCompare(phaseB);
|
||||
}
|
||||
|
||||
// Sequence comparison
|
||||
const seqA = parseInt(colsA[4] || '0', 10);
|
||||
const seqB = parseInt(colsB[4] || '0', 10);
|
||||
return seqA - seqB;
|
||||
return a.index - b.index;
|
||||
});
|
||||
const sortedRows = decorated.map((d) => d.row);
|
||||
|
||||
// Write merged catalog
|
||||
const outputDir = path.join(bmadDir, '_config');
|
||||
await fs.ensureDir(outputDir);
|
||||
const outputPath = path.join(outputDir, 'bmad-help.csv');
|
||||
|
||||
const mergedContent = [headerRow, ...allRows].join('\n');
|
||||
const mergedContent = [headerRow, ...sortedRows].join('\n');
|
||||
await fs.writeFile(outputPath, mergedContent, 'utf8');
|
||||
|
||||
// Track the installed file
|
||||
this.installedFiles.add(outputPath);
|
||||
|
||||
if (process.env.BMAD_VERBOSE_INSTALL === 'true') {
|
||||
await prompts.log.message(` Generated bmad-help.csv: ${allRows.length} workflows`);
|
||||
await prompts.log.message(` Generated bmad-help.csv: ${sortedRows.length} workflows`);
|
||||
}
|
||||
}
|
||||
|
||||
|
|
@ -1339,6 +1307,10 @@ class Installer {
|
|||
ides: configuredIdes,
|
||||
coreConfig: quickModules.collectedConfig.core,
|
||||
moduleConfigs: quickModules.collectedConfig,
|
||||
// Forward `--set` overrides so the post-install patch step
|
||||
// (`applySetOverrides`) runs at the end of quick-update too. The
|
||||
// installer.install path applies them after writeCentralConfig.
|
||||
setOverrides: config.setOverrides || {},
|
||||
actionType: 'install',
|
||||
_quickUpdate: true,
|
||||
_preserveModules: skippedModules,
|
||||
|
|
|
|||
|
|
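The new sort above uses a decorate-sort-undecorate pattern: pair each row with its original index and use that index as the final tie-break, so rows that compare equal on module and phase keep their authored order. A minimal sketch of that pattern in isolation (the `rows` object shape here is illustrative, not the real CSV schema):

```javascript
// Decorate with the original index, sort on real keys, tie-break on index.
// The explicit index tie-break keeps ties in authored order even if the
// host's sort were not stable.
function stableSortRows(rows) {
  const decorated = rows.map((row, index) => ({ row, index }));
  decorated.sort((a, b) => {
    const moduleCmp = a.row.module.localeCompare(b.row.module);
    if (moduleCmp !== 0) return moduleCmp;
    const phaseCmp = a.row.phase.localeCompare(b.row.phase);
    if (phaseCmp !== 0) return phaseCmp;
    return a.index - b.index; // preserve authored order within a phase
  });
  return decorated.map((d) => d.row); // undecorate
}
```

`Array.prototype.sort` has been spec-stable since ES2019, but the index tie-break also makes the intent explicit and survives any comparator refactor.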
@@ -0,0 +1,151 @@
const os = require('node:os');
const path = require('node:path');
const semver = require('semver');
const fs = require('../fs-native');
const prompts = require('../prompts');
const { BMAD_FOLDER_NAME } = require('../ide/shared/path-utils');
const { getInstalledCanonicalIds, isBmadOwnedEntry } = require('../ide/shared/installed-skills');

const MIN_NATIVE_SKILLS_VERSION = '6.1.0';

// Pre-v6.1.0 paths: BMAD used to install commands/workflows/etc in tool-specific dirs.
// In v6.1.0 BMAD switched to native SKILL.md format.
const LEGACY_COMMAND_PATHS = [
  '.agent/workflows',
  '.augment/commands',
  '.claude/commands',
  '.clinerules/workflows',
  '.codex/prompts',
  '~/.codex/prompts',
  '.codebuddy/commands',
  '.crush/commands',
  '.cursor/commands',
  '.gemini/commands',
  '.github/agents',
  '.github/prompts',
  '.iflow/commands',
  '.kilocode/workflows',
  '.kiro/steering',
  '.opencode/agents',
  '.opencode/commands',
  '.opencode/agent',
  '.opencode/command',
  '.qwen/commands',
  '.roo/commands',
  '.rovodev/workflows',
  '.trae/rules',
  '.windsurf/workflows',
];

// Skill paths that moved to the cross-tool .agents/skills/ standard.
// Users upgrading from a prior install may have stale BMAD skills here that
// the AI tool will load alongside the new ones, causing duplicates.
const LEGACY_SKILL_PATHS = [
  '.augment/skills',
  '~/.augment/skills',
  '.codex/skills',
  '.crush/skills',
  '.cursor/skills',
  '~/.cursor/skills',
  '.gemini/skills',
  '~/.gemini/skills',
  '.github/skills',
  '~/.github/skills',
  '.kilocode/skills',
  '.kimi/skills',
  '~/.kimi/skills',
  '.opencode/skills',
  '~/.opencode/skills',
  '.pi/skills',
  '~/.pi/skills',
  '.roo/skills',
  '~/.roo/skills',
  '.rovodev/skills',
  '~/.rovodev/skills',
  '.windsurf/skills',
  '~/.windsurf/skills',
  '~/.codeium/windsurf/skills',
];

const LEGACY_PATHS = [...LEGACY_COMMAND_PATHS, ...LEGACY_SKILL_PATHS];

function expandPath(p) {
  if (p === '~') return os.homedir();
  if (p.startsWith('~/')) return path.join(os.homedir(), p.slice(2));
  return p;
}

function resolveLegacyPath(projectRoot, p) {
  if (path.isAbsolute(p) || p.startsWith('~')) return expandPath(p);
  return path.join(projectRoot, p);
}

async function findStaleLegacyDirs(projectRoot) {
  const bmadDir = path.join(projectRoot, BMAD_FOLDER_NAME);
  const canonicalIds = await getInstalledCanonicalIds(bmadDir);

  const findings = [];
  for (const legacyPath of LEGACY_PATHS) {
    const resolved = resolveLegacyPath(projectRoot, legacyPath);
    if (!(await fs.pathExists(resolved))) continue;
    try {
      const entries = await fs.readdir(resolved);
      const bmadEntries = entries.filter((e) => isBmadOwnedEntry(e, canonicalIds));
      if (bmadEntries.length > 0) {
        findings.push({ path: resolved, displayPath: legacyPath, count: bmadEntries.length, entries: bmadEntries });
      }
    } catch {
      // Unreadable dir — skip
    }
  }
  return findings;
}

function isPreNativeSkillsVersion(version) {
  if (!version) return false;
  const coerced = semver.valid(version) || semver.valid(semver.coerce(version));
  if (!coerced) return false;
  return semver.lt(coerced, MIN_NATIVE_SKILLS_VERSION);
}

async function warnPreNativeSkillsLegacy({ projectRoot, existingVersion } = {}) {
  const versionTriggered = isPreNativeSkillsVersion(existingVersion);
  const staleDirs = await findStaleLegacyDirs(projectRoot);

  if (!versionTriggered && staleDirs.length === 0) return;

  if (versionTriggered) {
    await prompts.log.warn(
      `Detected previous BMAD install v${existingVersion} (pre-${MIN_NATIVE_SKILLS_VERSION}). ` +
        `BMAD switched to native skills format in v${MIN_NATIVE_SKILLS_VERSION}; old command/workflow directories from your prior install may still be present.`,
    );
  }

  if (staleDirs.length > 0) {
    await prompts.log.warn(
      `Found stale BMAD entries in ${staleDirs.length} legacy location(s) that the new installer no longer manages. ` +
        `Your AI tool may load these alongside the new skills, causing duplicates. Remove them manually:`,
    );
    for (const finding of staleDirs) {
      // Print each entry by exact name. A `bmad*` glob would (a) miss
      // custom-module skills the canonicalId scan now picks up, and
      // (b) match bmad-os-* utility skills the user should keep.
      const entries = finding.entries || [];
      for (const entry of entries) {
        await prompts.log.message(` rm -rf "${path.join(finding.path, entry)}"`);
      }
    }
  } else if (versionTriggered) {
    await prompts.log.message(
      ' No stale legacy directories detected, but if your AI tool shows duplicate BMAD commands after install, check for old `bmad-*` entries in tool-specific dirs (e.g. .claude/commands, .cursor/commands).',
    );
  }
}

module.exports = {
  warnPreNativeSkillsLegacy,
  findStaleLegacyDirs,
  isPreNativeSkillsVersion,
  LEGACY_PATHS,
  MIN_NATIVE_SKILLS_VERSION,
};
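The version gate in this file hinges on one comparison: anything below 6.1.0 predates native skills. A dependency-free sketch of that cutoff logic follows; note the real code uses the `semver` package's `coerce`/`valid`/`lt`, which additionally tolerates prerelease tags and partial versions, while this illustration only handles plain `major.minor.patch` strings:

```javascript
// Compare a version string's numeric fields against the 6.1.0 cutoff.
// Returns true only for versions strictly below the cutoff; unparseable
// or empty input is treated as "not pre-native" (matching the real helper,
// which returns false when semver cannot coerce the value).
const CUTOFF = [6, 1, 0];

function isPreNativeSkills(version) {
  if (!version) return false;
  const m = /^v?(\d+)\.(\d+)\.(\d+)/.exec(String(version).trim());
  if (!m) return false;
  const parts = [Number(m[1]), Number(m[2]), Number(m[3])];
  for (let i = 0; i < 3; i++) {
    if (parts[i] < CUTOFF[i]) return true;
    if (parts[i] > CUTOFF[i]) return false;
  }
  return false; // exactly 6.1.0 is not pre-native
}
```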
@@ -435,6 +435,9 @@ class ManifestGenerator {
// this means user-scoped keys (e.g. user_name) could mis-file into the
// team config, so the operator should notice.
const scopeByModuleKey = {};
// Maps installer moduleName (may be full display name) → module code field
// from module.yaml, so TOML sections use [modules.<code>] not [modules.<name>].
const codeByModuleName = {};
for (const moduleName of this.updatedModules) {
  const moduleYamlPath = await resolveInstalledModuleYaml(moduleName);
  if (!moduleYamlPath) {

@@ -447,6 +450,7 @@ class ManifestGenerator {
try {
  const parsed = yaml.parse(await fs.readFile(moduleYamlPath, 'utf8'));
  if (!parsed || typeof parsed !== 'object') continue;
  if (parsed.code) codeByModuleName[moduleName] = parsed.code;
  scopeByModuleKey[moduleName] = {};
  for (const [key, value] of Object.entries(parsed)) {
    if (value && typeof value === 'object' && 'prompt' in value) {

@@ -545,6 +549,9 @@ class ManifestGenerator {
if (moduleName === 'core') continue;
const cfg = moduleConfigs[moduleName];
if (!cfg || Object.keys(cfg).length === 0) continue;
// Use the module's code field from module.yaml as the TOML key so the
// section is [modules.mdo] not [modules.MDO: Maxio DevOps Operations].
const sectionKey = codeByModuleName[moduleName] || moduleName;
// Only filter out spread-from-core pollution when we actually know
// this module's prompt schema. For external/marketplace modules whose
// module.yaml isn't in the src tree, fall through as all-team so we

@@ -552,14 +559,14 @@ class ManifestGenerator {
const haveSchema = Object.keys(scopeByModuleKey[moduleName] || {}).length > 0;
const { team: modTeam, user: modUser } = partition(moduleName, cfg, haveSchema);
if (Object.keys(modTeam).length > 0) {
  teamLines.push(`[modules.${moduleName}]`);
  teamLines.push(`[modules.${sectionKey}]`);
  for (const [key, value] of Object.entries(modTeam)) {
    teamLines.push(`${key} = ${formatTomlValue(value)}`);
  }
  teamLines.push('');
}
if (Object.keys(modUser).length > 0) {
  userLines.push(`[modules.${moduleName}]`);
  userLines.push(`[modules.${sectionKey}]`);
  for (const [key, value] of Object.entries(modUser)) {
    userLines.push(`${key} = ${formatTomlValue(value)}`);
  }
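The section-key change above swaps the module's display name for its short code when emitting TOML. A minimal sketch of what those hunks build; `formatTomlValue` here is a hypothetical stand-in for the real helper (it only covers strings, numbers, and booleans), and the config keys are illustrative:

```javascript
// Build a [modules.<code>] TOML section from a flat config object,
// keyed by the module's short code rather than its display name.
function formatTomlValue(value) {
  // JSON string quoting is a valid TOML basic string for simple values.
  return typeof value === 'string' ? JSON.stringify(value) : String(value);
}

function tomlModuleSection(sectionKey, config) {
  const lines = [`[modules.${sectionKey}]`];
  for (const [key, value] of Object.entries(config)) {
    lines.push(`${key} = ${formatTomlValue(value)}`);
  }
  return lines.join('\n');
}
```

With a display-name key like `MDO: Maxio DevOps Operations`, the colon and spaces would produce an invalid bare TOML key, which is exactly what resolving `sectionKey` from `module.yaml`'s `code` field avoids.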
@@ -1,10 +1,129 @@
const os = require('node:os');
const path = require('node:path');
const fs = require('../fs-native');
const yaml = require('yaml');
const prompts = require('../prompts');
const csv = require('csv-parse/sync');
const { BMAD_FOLDER_NAME } = require('./shared/path-utils');
const { getInstalledCanonicalIds, isBmadOwnedEntry } = require('./shared/installed-skills');

// Reserved OpenCode slash commands. A skill whose canonicalId collides with
// one of these is skipped during command-pointer generation so it doesn't
// shadow a built-in.
const RESERVED_OPENCODE_COMMANDS = new Set([
  'review',
  'commit',
  'init',
  'help',
  'skills',
  'fast',
  'compact',
  'clear',
  'undo',
  'redo',
  'edit',
  'editor',
  'exit',
  'quit',
  'theme',
  'config',
  'model',
  'session',
]);

// Wrap a description for safe insertion into single-line YAML frontmatter.
// Leaves plain values untouched; double-quotes (and escapes) anything that
// could break YAML parsing or span multiple lines.
function yamlSafeSingleLine(value) {
  const collapsed = String(value)
    .replaceAll(/[\r\n]+/g, ' ')
    .trim();
  const needsQuoting = /[:#'"\\]/.test(collapsed) || /^[!&*?|>%@`[{]/.test(collapsed);
  if (!needsQuoting) return collapsed;
  const escaped = collapsed.replaceAll('\\', '\\\\').replaceAll('"', String.raw`\"`);
  return `"${escaped}"`;
}
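The helper above can be exercised in isolation to see its three behaviors: plain values pass through, newlines collapse to spaces, and anything containing YAML-significant characters gets double-quoted with escapes. The function body below is copied from the diff; only the sample descriptions are invented for illustration:

```javascript
// Same quoting helper as in the diff, standalone for demonstration.
function yamlSafeSingleLine(value) {
  const collapsed = String(value)
    .replaceAll(/[\r\n]+/g, ' ')
    .trim();
  const needsQuoting = /[:#'"\\]/.test(collapsed) || /^[!&*?|>%@`[{]/.test(collapsed);
  if (!needsQuoting) return collapsed;
  const escaped = collapsed.replaceAll('\\', '\\\\').replaceAll('"', '\\"');
  return `"${escaped}"`;
}
```

A colon mid-value (`Step 1: plan`) would otherwise read as a nested YAML mapping inside the frontmatter, and an unescaped newline would terminate the `description:` line early; both cases are why the quoting and collapsing exist.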

// Validate that a canonicalId is a safe basename — no path separators, no
// parent-dir traversal, no leading dots, only the character set we expect.
// Defense-in-depth: the manifest is trusted today, but the value flows
// directly into a file path and a malformed entry should not write outside
// the commands directory.
function isSafeCanonicalId(value) {
  return typeof value === 'string' && /^[a-zA-Z0-9][a-zA-Z0-9_.-]*$/.test(value) && !value.includes('..');
}

// Default body template for command pointer files. Used when a platform's
// installer config doesn't override `commands_body_template`. Matches
// OpenCode's native `@skills/<id>` skill-reference syntax.
const DEFAULT_COMMANDS_BODY_TEMPLATE = '@skills/{canonicalId}';

// Is this skill a persona agent (vs. a workflow/tool/standalone skill)?
// Used by platforms that surface only persona agents (e.g. Copilot's Custom
// Agents picker). Signal: the skill's source `customize.toml` has an
// `[agent]` section. This is the actual configuration source of truth —
// every BMAD persona is configured via [agent] in its customize.toml,
// every workflow uses [workflow], every standalone skill has no
// customize.toml at all. Verified against the full installed manifest:
// catches exactly the 20 description-confirmed personas across BMM, CIS,
// GDS, WDS, TEA, and correctly excludes meta-skills like
// `bmad-agent-builder` (a skill-builder workflow whose canonical id
// contains `-agent-` but which has no [agent] section because it isn't a
// persona itself).
//
// Reading the source toml — at install time the source skill directory
// (resolved from manifest record.path) still exists; cleanup runs later
// in the install flow.
async function isAgentSkill(record, bmadDir) {
  if (!record?.path || !bmadDir) return false;
  const bmadFolderName = path.basename(bmadDir);
  const bmadPrefix = bmadFolderName + '/';
  const relativePath = record.path.startsWith(bmadPrefix) ? record.path.slice(bmadPrefix.length) : record.path;
  const tomlPath = path.join(bmadDir, path.dirname(relativePath), 'customize.toml');
  if (!(await fs.pathExists(tomlPath))) return false;
  try {
    const content = await fs.readFile(tomlPath, 'utf8');
    return /^\[agent\]/m.test(content);
  } catch {
    return false;
  }
}

// Resolve placeholders in a body template. Supported placeholders:
//   {canonicalId} — the skill's canonical id
//   {target_dir} — the platform's skill install directory (e.g. .agents/skills)
//   {project-root} — left as a literal placeholder for the model/tool to expand
//     at runtime; consistent with PR #1769's templates.
function expandBodyTemplate(template, { canonicalId, targetDir }) {
  return template.replaceAll('{canonicalId}', canonicalId).replaceAll('{target_dir}', targetDir);
}

// The exact body the installer would generate for a given description and
// canonicalId, given the platform's body template. Centralised so both the
// write and the freshness-check paths agree on the canonical form.
function buildCommandPointerBody(description, canonicalId, { template, targetDir }) {
  const bodyText = expandBodyTemplate(template, { canonicalId, targetDir });
  return `---\ndescription: ${yamlSafeSingleLine(description)}\n---\n\n${bodyText}\n`;
}

// Heuristic: does an existing pointer file look like our generator's output
// (and therefore safe to refresh) versus a user-modified file (which we
// preserve)? We check the body shape rather than full equality so that
// description-only edits in the manifest can propagate without trampling
// hand edits to the body.
function looksLikeGeneratorOutput(content, canonicalId, { template, targetDir }) {
  if (typeof content !== 'string') return false;
  const trimmed = content.trim();
  const expectedTail = expandBodyTemplate(template, { canonicalId, targetDir }).trim();
  // Must end with the exact body our generator writes (post-expansion).
  if (!trimmed.endsWith(expectedTail)) return false;
  // Must start with frontmatter containing exactly one description: line.
  const fmMatch = trimmed.match(/^---\n([\S\s]*?)\n---\n/);
  if (!fmMatch) return false;
  const fmLines = fmMatch[1].split('\n').filter((l) => l.length > 0);
  if (fmLines.length !== 1) return false;
  if (!fmLines[0].startsWith('description:')) return false;
  return true;
}

/**
 * Config-driven IDE setup handler
@ -16,7 +135,7 @@ const { BMAD_FOLDER_NAME } = require('./shared/path-utils');
|
|||
* Features:
|
||||
* - Config-driven from platform-codes.yaml
|
||||
* - Verbatim skill installation from skill-manifest.csv
|
||||
* - Legacy directory cleanup and IDE-specific marker removal
|
||||
* - IDE-specific marker removal (copilot-instructions, kilo modes, rovodev prompts)
|
||||
*/
|
||||
class ConfigDrivenIdeSetup {
|
||||
constructor(platformCode, platformConfig) {
|
||||
|
|
@ -44,16 +163,20 @@ class ConfigDrivenIdeSetup {
|
|||
async detect(projectDir) {
|
||||
if (!this.configDir) return false;
|
||||
|
||||
const dir = path.join(projectDir || process.cwd(), this.configDir);
|
||||
if (await fs.pathExists(dir)) {
|
||||
try {
|
||||
const entries = await fs.readdir(dir);
|
||||
return entries.some((e) => typeof e === 'string' && e.startsWith('bmad'));
|
||||
} catch {
|
||||
return false;
|
||||
}
|
||||
const root = projectDir || process.cwd();
|
||||
const dir = path.join(root, this.configDir);
|
||||
if (!(await fs.pathExists(dir))) return false;
|
||||
|
||||
let entries;
|
||||
try {
|
||||
entries = await fs.readdir(dir);
|
||||
} catch {
|
||||
return false;
|
||||
}
|
||||
return false;
|
||||
|
||||
const bmadDir = await this._findBmadDir(root);
|
||||
const canonicalIds = await getInstalledCanonicalIds(bmadDir);
|
||||
return entries.some((e) => isBmadOwnedEntry(e, canonicalIds));
|
||||
}
|
||||
|
||||
/**
|
||||
|
|
@ -92,6 +215,18 @@ class ConfigDrivenIdeSetup {
|
|||
return { success: false, reason: 'no-config' };
|
||||
}
|
||||
|
||||
// When a peer platform in the same install batch owns this target_dir,
|
||||
// skip the skill write — the peer has already populated it. Command
|
||||
// pointers, however, write to a separate per-IDE directory and must
|
||||
// still be generated for this IDE; they are not deduped across peers.
|
||||
if (options.skipTarget) {
|
||||
const results = { skills: 0, sharedTargetHandledByPeer: true };
|
||||
if (this.installerConfig.commands_target_dir) {
|
||||
results.commands = await this.installCommandPointers(projectDir, bmadDir, this.installerConfig, options);
|
||||
}
|
||||
return { success: true, results };
|
||||
}
|
||||
|
||||
if (this.installerConfig.target_dir) {
|
||||
return this.installToTarget(projectDir, bmadDir, this.installerConfig, options);
|
||||
}
|
||||
|
|
@ -118,11 +253,157 @@ class ConfigDrivenIdeSetup {
|
|||
results.skills = await this.installVerbatimSkills(projectDir, bmadDir, targetPath, config);
|
||||
results.skillDirectories = this.skillWriteTracker.size;
|
||||
|
||||
if (config.commands_target_dir) {
|
||||
results.commands = await this.installCommandPointers(projectDir, bmadDir, config, options);
|
||||
}
|
||||
|
||||
await this.printSummary(results, target_dir, options);
|
||||
this.skillWriteTracker = null;
|
||||
return { success: true, results };
|
||||
}
|
||||
|
||||
/**
|
||||
* Generate per-skill command pointer files for IDEs that surface commands
|
||||
* separately from skills (e.g. OpenCode's `.opencode/commands/<name>.md`).
|
||||
*
|
||||
* Each pointer is a tiny markdown file whose body is `@skills/<canonicalId>`
|
||||
* so invoking `/<canonicalId>` routes the user straight to the skill instead
|
||||
* of forcing them through a `/skills` menu.
|
||||
*
|
||||
* Skips:
|
||||
* - Names that collide with reserved built-in slash commands.
|
||||
* - canonicalIds that aren't safe basename-only identifiers (defense
|
||||
* against path traversal even though the manifest is currently trusted).
|
||||
* - Existing files whose body looks user-modified (preserves hand edits);
|
||||
* pointer files matching the generator pattern get overwritten so that
|
||||
* description changes in skill-manifest.csv propagate on re-install.
|
||||
*
|
||||
* Per-file write failures are recorded and reported but do not abort the
|
||||
* rest of the install — pointer files are a non-essential adjunct to the
|
||||
* skill copy that already succeeded.
|
||||
*
|
||||
* @param {string} projectDir
|
||||
* @param {string} bmadDir
|
||||
* @param {Object} config - Installer config; reads commands_target_dir.
|
||||
* @param {Object} options - Setup options. forceCommands overwrites existing
|
||||
* files unconditionally (including hand-modified ones).
|
||||
* @returns {Promise<Object>} { created, updated, skippedExisting, skippedCollision, skippedInvalidId, writeFailures, fallbackDescription }
|
||||
*/
|
||||
async installCommandPointers(projectDir, bmadDir, config, options = {}) {
|
||||
const result = {
|
||||
created: 0,
|
||||
updated: 0,
|
||||
skippedExisting: 0,
|
||||
skippedCollision: 0,
|
||||
skippedInvalidId: 0,
|
||||
skippedFiltered: 0,
|
||||
writeFailures: 0,
|
||||
fallbackDescription: 0,
|
||||
};
|
||||
|
||||
const csvPath = path.join(bmadDir, '_config', 'skill-manifest.csv');
|
||||
if (!(await fs.pathExists(csvPath))) return result;
|
||||
|
||||
const commandsPath = path.join(projectDir, config.commands_target_dir);
|
||||
await fs.ensureDir(commandsPath);
|
||||
|
||||
// Per-platform pointer-file shape, all overrideable in platform-codes.yaml.
|
||||
const extension = config.commands_extension || '.md';
|
||||
const template = config.commands_body_template || DEFAULT_COMMANDS_BODY_TEMPLATE;
|
||||
const targetDir = config.target_dir;
|
||||
const filter = config.commands_filter || null;
|
||||
|
||||
const csvContent = await fs.readFile(csvPath, 'utf8');
|
||||
const records = csv.parse(csvContent, { columns: true, skip_empty_lines: true });
|
||||
|
||||
for (const record of records) {
|
||||
const canonicalId = record.canonicalId;
|
||||
if (!canonicalId) continue;
|
||||
|
||||
// Defensive basename validation. canonicalId comes from a trusted
|
||||
// manifest today, but the value flows directly into a file path —
|
||||
// reject anything that could escape commands_target_dir.
|
||||
if (!isSafeCanonicalId(canonicalId)) {
|
||||
result.skippedInvalidId++;
|
||||
continue;
|
||||
}
|
||||
|
||||
// Optional per-platform filter: surfaces that should only show
|
||||
// persona agents (e.g. Copilot's Custom Agents picker) skip
|
||||
// workflow/tool skills here so the picker isn't cluttered with
|
||||
// 90+ unrelated entries.
|
||||
if (filter === 'agents-only' && !(await isAgentSkill(record, bmadDir))) {
|
||||
result.skippedFiltered++;
|
||||
continue;
|
||||
}
|
||||
|
||||
// Reserved-name guard is OpenCode-specific. Other adapters that opt
|
||||
// into commands_target_dir later should declare their own reserved
|
||||
// set rather than inheriting OpenCode's.
|
||||
if (this.name === 'opencode' && RESERVED_OPENCODE_COMMANDS.has(canonicalId)) {
|
||||
result.skippedCollision++;
|
||||
continue;
|
||||
}
|
||||
|
||||
let description = (record.description || '').trim();
|
||||
if (!description) {
|
||||
description = `Run the ${canonicalId} skill`;
|
||||
result.fallbackDescription++;
|
||||
}
|
||||
|
||||
const body = buildCommandPointerBody(description, canonicalId, { template, targetDir });
|
||||
const commandFile = path.join(commandsPath, `${canonicalId}${extension}`);
|
||||
|
||||
// If a pointer file already exists, decide whether to overwrite based
|
||||
// on whether it looks like generator output (description-only diff) or
|
||||
// a user-modified file. forceCommands overrides this protection.
|
||||
if (!options.forceCommands && (await fs.pathExists(commandFile))) {
|
||||
let existing;
|
||||
try {
|
||||
existing = await fs.readFile(commandFile, 'utf8');
|
||||
} catch {
|
||||
// Treat unreadable as user-owned and skip — safer than overwriting.
|
||||
result.skippedExisting++;
|
||||
continue;
|
||||
}
|
||||
|
||||
if (existing === body) {
|
||||
// No-op idempotent re-run.
|
||||
result.skippedExisting++;
|
||||
continue;
|
||||
}
|
||||
if (looksLikeGeneratorOutput(existing, canonicalId, { template, targetDir })) {
|
||||
// Description (or other generated bit) has changed; refresh in place.
|
||||
try {
|
||||
await fs.writeFile(commandFile, body, 'utf8');
|
||||
result.updated++;
|
||||
} catch (error) {
|
||||
result.writeFailures++;
|
||||
if (!options.silent) {
|
||||
await prompts.log.warn(`Failed to update command pointer ${canonicalId}${extension}: ${error.message}`);
|
||||
}
|
||||
}
|
||||
continue;
|
||||
}
|
||||
// Hand-modified pointer — preserve it.
|
||||
result.skippedExisting++;
|
||||
continue;
|
||||
}
|
||||
|
||||
try {
|
||||
await fs.writeFile(commandFile, body, 'utf8');
|
||||
result.created++;
|
||||
} catch (error) {
|
||||
result.writeFailures++;
|
||||
if (!options.silent) {
|
||||
await prompts.log.warn(`Failed to write command pointer ${canonicalId}${extension}: ${error.message}`);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return result;
|
||||
}
|
||||
|
||||
/**
|
||||
* Install verbatim native SKILL.md directories from skill-manifest.csv.
|
||||
* Copies the entire source directory as-is into the IDE skill directory.
|
||||
|
|
@ -197,6 +478,18 @@ class ConfigDrivenIdeSetup {
|
|||
if (count > 0) {
|
||||
await prompts.log.success(`${this.name} configured: ${count} skills → ${targetDir}`);
|
||||
}
|
||||
const cmd = results.commands;
|
||||
if (cmd && (cmd.created > 0 || cmd.updated > 0) && this.installerConfig?.commands_target_dir) {
|
||||
const total = cmd.created + cmd.updated;
|
||||
const detail = cmd.updated > 0 ? `${cmd.created} new, ${cmd.updated} refreshed` : `${total}`;
|
||||
await prompts.log.success(`${this.name} commands: ${detail} → ${this.installerConfig.commands_target_dir}`);
|
||||
if (cmd.skippedCollision > 0) {
|
||||
await prompts.log.message(` (${cmd.skippedCollision} skipped — name collides with reserved slash command)`);
|
||||
}
|
||||
if (cmd.writeFailures > 0) {
|
||||
await prompts.log.warn(` (${cmd.writeFailures} pointer writes failed — see warnings above)`);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
|
|
@ -222,27 +515,6 @@ class ConfigDrivenIdeSetup {
|
|||
removalSet = new Set();
|
||||
}
|
||||
|
||||
// Migrate legacy target directories (e.g. .opencode/agent → .opencode/agents)
|
||||
// Legacy dirs are abandoned entirely, so use prefix matching (null removalSet)
|
||||
if (this.installerConfig?.legacy_targets) {
|
||||
const legacyDirsExist = await Promise.all(
|
||||
this.installerConfig.legacy_targets.map((d) =>
|
||||
this.isGlobalPath(d) ? fs.pathExists(d.replace(/^~/, os.homedir())) : fs.pathExists(path.join(projectDir, d)),
|
||||
),
|
||||
);
|
||||
if (legacyDirsExist.some(Boolean)) {
|
||||
if (!options.silent) await prompts.log.message(' Migrating legacy directories...');
|
||||
for (const legacyDir of this.installerConfig.legacy_targets) {
|
||||
if (this.isGlobalPath(legacyDir)) {
|
||||
await this.warnGlobalLegacy(legacyDir, options);
|
||||
} else {
|
||||
await this.cleanupTarget(projectDir, legacyDir, options, null);
|
||||
await this.removeEmptyParents(projectDir, legacyDir);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Strip BMAD markers from copilot-instructions.md if present
|
||||
if (this.name === 'github-copilot') {
|
||||
await this.cleanupCopilotInstructions(projectDir, options);
|
||||
|
@@ -258,47 +530,47 @@ class ConfigDrivenIdeSetup {
      await this.cleanupRovoDevPrompts(projectDir, options);
    }

    // Clean generated command pointer files in commands_target_dir.
    // Mirrors target_dir cleanup so uninstalls and skill removals don't
    // leave dangling /<canonicalId> commands pointing at missing skills.
    // Runs regardless of skipTarget — command pointers live in a per-IDE
    // directory and are not deduped across peers, so a peer-owned shared
    // skills directory does not protect this IDE's command pointers from
    // cleanup. The "currently active" set is passed so install-flow cleanup
    // (where removalSet contains skills that will be re-added moments later)
    // doesn't trample hand-edited pointers; install-flow cleanup will only
    // delete pointers for skills that are not in the new manifest.
    if (this.installerConfig?.commands_target_dir) {
      // In the install/update flow (signal: previousSkillIds was passed),
      // spare pointers whose canonicalId is still in the manifest so hand
      // edits survive a routine reinstall. In the uninstall flow (no
      // previousSkillIds — full uninstall or per-IDE removal via
      // cleanupByList), don't spare anything; the IDE itself is going away,
      // so its pointers should go with it.
      const isInstallFlow = options.previousSkillIds && options.previousSkillIds.size > 0;
      const activeSkillIds = isInstallFlow ? await this._readActiveSkillIds(resolvedBmadDir) : new Set();
      const extension = this.installerConfig.commands_extension || '.md';
      await this.cleanupCommandPointers(
        projectDir,
        this.installerConfig.commands_target_dir,
        options,
        removalSet,
        activeSkillIds,
        extension,
      );
    }

    // Skip target_dir cleanup when a peer platform owns this directory
    // (set during dedup'd install or when uninstalling one of several
    // platforms that share the same target_dir).
    if (options.skipTarget) return;

    // Clean current target directory
    if (this.installerConfig?.target_dir) {
      await this.cleanupTarget(projectDir, this.installerConfig.target_dir, options, removalSet);
    }
  }

  /**
   * Check if a path is global (starts with ~ or is absolute)
   * @param {string} p - Path to check
   * @returns {boolean}
   */
  isGlobalPath(p) {
    return p.startsWith('~') || path.isAbsolute(p);
  }

  /**
   * Warn about stale BMAD files in a global legacy directory (never auto-deletes)
   * @param {string} legacyDir - Legacy directory path (may start with ~)
   * @param {Object} options - Options (silent, etc.)
   */
  async warnGlobalLegacy(legacyDir, options = {}) {
    try {
      const expanded = legacyDir.startsWith('~/')
        ? path.join(os.homedir(), legacyDir.slice(2))
        : legacyDir === '~'
          ? os.homedir()
          : legacyDir;

      if (!(await fs.pathExists(expanded))) return;

      const entries = await fs.readdir(expanded);
      const bmadFiles = entries.filter((e) => typeof e === 'string' && e.startsWith('bmad'));

      if (bmadFiles.length > 0 && !options.silent) {
        await prompts.log.warn(`Found ${bmadFiles.length} stale BMAD file(s) in ${expanded}. Remove manually: rm ${expanded}/bmad-*`);
      }
    } catch {
      // Errors reading global paths are silently ignored
    }
  }

  /**
   * Find the _bmad directory in a project
   * @param {string} projectDir - Project directory
@@ -387,6 +659,97 @@ class ConfigDrivenIdeSetup {
    }
  }

  /**
   * Cleanup generated command pointer files for entries in removalSet.
   * Symmetric counterpart to installCommandPointers — removes
   * `<canonicalId><extension>` files whose canonicalId is in the set. Removes
   * the commands directory entirely if it ends up empty.
   * @param {string} projectDir
   * @param {string} commandsTargetDir - Relative dir (e.g. .opencode/commands)
   * @param {Object} options
   * @param {Set<string>} removalSet - canonicalIds whose pointer files to remove
   * @param {Set<string>} [activeSkillIds] - canonicalIds present in the
   *   current manifest. Pointers for IDs in this set are spared so an
   *   install-flow cleanup (where removalSet === previousSkillIds and the
   *   same skills are about to be re-installed) doesn't wipe hand-edited
   *   pointer files. Pass an empty set or omit to delete every match in
   *   removalSet (uninstall flow).
   * @param {string} [extension] - Pointer file extension (default '.md');
   *   matches the platform's commands_extension config value so cleanup
   *   correctly identifies pointer files for IDEs whose convention isn't .md
   *   (e.g. Copilot's `.agent.md`).
   */
  async cleanupCommandPointers(
    projectDir,
    commandsTargetDir,
    options = {},
    removalSet = new Set(),
    activeSkillIds = new Set(),
    extension = '.md',
  ) {
    if (!removalSet || removalSet.size === 0) return;

    const commandsPath = path.join(projectDir, commandsTargetDir);
    if (!(await fs.pathExists(commandsPath))) return;

    let entries;
    try {
      entries = await fs.readdir(commandsPath);
    } catch {
      return;
    }

    for (const entry of entries) {
      if (!entry.endsWith(extension)) continue;
      const canonicalId = entry.slice(0, -extension.length);
      if (!removalSet.has(canonicalId)) continue;
      // Spare pointers for skills that are still in the manifest; the
      // install pass will refresh them in place if their content has gone
      // stale, while preserving hand edits.
      if (activeSkillIds.has(canonicalId)) continue;
      try {
        await fs.remove(path.join(commandsPath, entry));
      } catch {
        // Skip files we can't remove.
      }
    }

    // Remove the commands directory if we emptied it.
    try {
      const remaining = await fs.readdir(commandsPath);
      if (remaining.length === 0) {
        await fs.remove(commandsPath);
      }
    } catch {
      // Directory may already be gone.
    }
  }

  /**
   * Read the canonicalIds currently present in the skill-manifest.csv.
   * Used by cleanup to distinguish "re-install of an existing skill"
   * (preserve pointer) from "skill truly being removed" (delete pointer).
   * @param {string|null} bmadDir
   * @returns {Promise<Set<string>>}
   */
  async _readActiveSkillIds(bmadDir) {
    const ids = new Set();
    if (!bmadDir) return ids;
    const csvPath = path.join(bmadDir, '_config', 'skill-manifest.csv');
    if (!(await fs.pathExists(csvPath))) return ids;
    try {
      const content = await fs.readFile(csvPath, 'utf8');
      const records = csv.parse(content, { columns: true, skip_empty_lines: true });
      for (const record of records) {
        if (record.canonicalId) ids.add(record.canonicalId);
      }
    } catch {
      // Manifest unreadable — return an empty set so cleanup falls back to
      // the conservative "delete what removalSet says" behavior.
    }
    return ids;
  }

  /**
   * Cleanup a specific target directory.
   * When removalSet is provided, only removes entries in that set.
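The manifest read above relies on csv-parse; the canonicalId extraction it performs can be sketched without that dependency, assuming a simple well-formed CSV with no quoted fields (the real reader handles more via csv-parse):

```javascript
// Minimal sketch, not the installer's csv-parse-based reader: pull the
// canonicalId column out of a simple, unquoted skill-manifest.csv.
function parseCanonicalIds(csvContent) {
  const [header, ...rows] = csvContent.trim().split('\n').map((line) => line.split(','));
  const col = header.indexOf('canonicalId');
  const ids = new Set();
  // No canonicalId column: behave like a missing manifest (empty set).
  if (col === -1) return ids;
  for (const row of rows) {
    if (row[col]) ids.add(row[col]);
  }
  return ids;
}
```

An empty set from this read makes cleanup fall back to the conservative "delete what removalSet says" path, the same degradation the try/catch above chooses.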
@@ -426,8 +789,8 @@ class ConfigDrivenIdeSetup {
      // Always preserve bmad-os-* utility skills regardless of cleanup mode
      if (entry.startsWith('bmad-os-')) continue;

-     // Surgical removal from set, or legacy prefix matching when set is null
-     const shouldRemove = removalSet ? removalSet.has(entry) : entry.startsWith('bmad');
+     // Surgical removal from set, or fallback to manifest+prefix detection when null
+     const shouldRemove = removalSet ? removalSet.has(entry) : isBmadOwnedEntry(entry, null);

      if (shouldRemove) {
        try {
@@ -590,10 +953,9 @@ class ConfigDrivenIdeSetup {
        try {
          if (await fs.pathExists(candidatePath)) {
            const entries = await fs.readdir(candidatePath);
-           const hasBmad = entries.some(
-             (e) => typeof e === 'string' && e.toLowerCase().startsWith('bmad') && !e.toLowerCase().startsWith('bmad-os-'),
-           );
-           if (hasBmad) {
+           const ancestorBmadDir = await this._findBmadDir(current);
+           const canonicalIds = await getInstalledCanonicalIds(ancestorBmadDir);
+           if (entries.some((e) => isBmadOwnedEntry(e, canonicalIds))) {
              return candidatePath;
            }
          }
@@ -605,43 +967,6 @@ class ConfigDrivenIdeSetup {

    return null;
  }

  /**
   * Walk up ancestor directories from relativeDir toward projectDir, removing each if empty
   * Stops at projectDir boundary — never removes projectDir itself
   * @param {string} projectDir - Project root (boundary)
   * @param {string} relativeDir - Relative directory to start from
   */
  async removeEmptyParents(projectDir, relativeDir) {
    const resolvedProject = path.resolve(projectDir);
    let current = relativeDir;
    let last = null;
    while (current && current !== '.' && current !== last) {
      last = current;
      const fullPath = path.resolve(projectDir, current);
      // Boundary guard: never traverse outside projectDir
      if (!fullPath.startsWith(resolvedProject + path.sep) && fullPath !== resolvedProject) break;
      try {
        if (!(await fs.pathExists(fullPath))) {
          // Dir already gone — advance current; last is reset at top of next iteration
          current = path.dirname(current);
          continue;
        }
        const remaining = await fs.readdir(fullPath);
        if (remaining.length > 0) break;
        await fs.rmdir(fullPath);
      } catch (error) {
        // ENOTEMPTY: TOCTOU race (file added between readdir and rmdir) — skip level, continue upward
        // ENOENT: dir removed by another process between pathExists and rmdir — skip level, continue upward
        if (error.code === 'ENOTEMPTY' || error.code === 'ENOENT') {
          current = path.dirname(current);
          continue;
        }
        break; // fatal error (e.g. EACCES) — stop upward walk
      }
      current = path.dirname(current);
    }
  }
}

module.exports = { ConfigDrivenIdeSetup };
@@ -160,8 +160,18 @@ class IdeManager {
      let detail = '';
      if (handlerResult && handlerResult.results) {
        const r = handlerResult.results;
-       const count = r.skillDirectories || r.skills || 0;
-       if (count > 0) detail = `${count} skills`;
+       let count = r.skillDirectories || r.skills || 0;
+       // Dedup'd platform: report the count its peer wrote so the user sees
+       // a consistent picture across all platforms sharing the dir.
+       if (count === 0 && r.sharedTargetHandledByPeer && options.sharedSkillCount) {
+         count = options.sharedSkillCount;
+       }
+       const targetDir = handler.installerConfig?.target_dir || null;
+       if (count > 0 && targetDir) {
+         detail = `${count} skills → ${targetDir}`;
+       } else if (count > 0) {
+         detail = `${count} skills`;
+       }
      }
      // Propagate handler's success status (default true for backward compat)
      const success = handlerResult?.success !== false;
@@ -172,6 +182,57 @@ class IdeManager {
    }
  }

  /**
   * Run setup for multiple IDEs as a single batch.
   * Dedupes work when several selected platforms share the same target_dir:
   * the first platform owns the directory write, peers skip it.
   * @param {Array<string>} ideList - IDE names to set up
   * @param {string} projectDir
   * @param {string} bmadDir
   * @param {Object} [options] - Forwarded to each handler.setup
   * @returns {Promise<Array>} Per-IDE results
   */
  async setupBatch(ideList, projectDir, bmadDir, options = {}) {
    await this.ensureInitialized();
    const results = [];
    // target_dir → { firstIde, skillCount } from the platform that actually wrote it
    const claimedTargets = new Map();

    for (const ideName of ideList) {
      const handler = this.handlers.get(ideName.toLowerCase());
      if (!handler) {
        results.push(await this.setup(ideName, projectDir, bmadDir, options));
        continue;
      }

      const target = handler.installerConfig?.target_dir || null;
      const claim = target ? claimedTargets.get(target) : null;
      const skipTarget = !!claim;

      const result = await this.setup(ideName, projectDir, bmadDir, {
        ...options,
        skipTarget,
        sharedWith: claim?.firstIde || null,
        sharedTarget: target,
        sharedSkillCount: claim?.skillCount || 0,
      });

      if (target && !claim) {
        const writtenCount = result.handlerResult?.results?.skillDirectories || result.handlerResult?.results?.skills || 0;
        // Only claim the target when the install actually succeeded and wrote skills.
        // If the first platform fails (ancestor conflict, exception, etc.), leave the
        // dir unclaimed so the next peer becomes the new first writer instead of
        // silently skipping into a broken/empty target_dir.
        if (result.success && writtenCount > 0) {
          claimedTargets.set(target, { firstIde: ideName, skillCount: writtenCount });
        }
      }
      results.push(result);
    }

    return results;
  }

  /**
   * Cleanup IDE configurations
   * @param {string} projectDir - Project directory
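The first-writer-claims-the-dir ordering in setupBatch reduces to a small pure function. A hypothetical `planBatch` sketch (it omits the success-and-wrote-skills check the real loop performs before claiming, and takes plain `{ name, targetDir }` records instead of handlers):

```javascript
// Hypothetical pure sketch of setupBatch's dedup: the first platform seen for
// a target_dir writes it; later peers are told to skip and who owns the dir.
function planBatch(ides) {
  const claimed = new Map(); // target_dir -> first ide planned to write it
  return ides.map(({ name, targetDir }) => {
    const owner = targetDir ? claimed.get(targetDir) : undefined;
    if (targetDir && !owner) claimed.set(targetDir, name);
    return { name, skipTarget: Boolean(owner), sharedWith: owner || null };
  });
}
```

With two platforms sharing `.agents/skills`, only the first gets `skipTarget: false`; the peer is routed through the shared-dir path with `sharedWith` naming the owner.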
@@ -198,6 +259,8 @@ class IdeManager {
   * @param {string} projectDir - Project directory
   * @param {Array<string>} ideList - List of IDE names to clean up
   * @param {Object} [options] - Cleanup options passed through to handlers
+  *   options.remainingIdes - IDE names still installed after this cleanup; used
+  *   to skip target_dir wipe when a co-installed platform shares the dir.
   * @returns {Array} Results array
   */
  async cleanupByList(projectDir, ideList, options = {}) {
@@ -211,13 +274,27 @@ class IdeManager {
    // Build lowercase lookup for case-insensitive matching
    const lowercaseHandlers = new Map([...this.handlers.entries()].map(([k, v]) => [k.toLowerCase(), v]));

+   // Resolve target_dirs for IDEs that will remain installed after this cleanup
+   const remainingTargets = new Set();
+   if (Array.isArray(options.remainingIdes)) {
+     for (const remaining of options.remainingIdes) {
+       const h = lowercaseHandlers.get(String(remaining).toLowerCase());
+       const t = h?.installerConfig?.target_dir;
+       if (t) remainingTargets.add(t);
+     }
+   }
+
    for (const ideName of ideList) {
      const handler = lowercaseHandlers.get(ideName.toLowerCase());
      if (!handler) continue;

+     const target = handler.installerConfig?.target_dir || null;
+     const skipTarget = target && remainingTargets.has(target);
+     const cleanupOptions = skipTarget ? { ...options, skipTarget: true } : options;
+
      try {
-       await handler.cleanup(projectDir, options);
-       results.push({ ide: ideName, success: true });
+       await handler.cleanup(projectDir, cleanupOptions);
+       results.push({ ide: ideName, success: true, skippedTarget: !!skipTarget });
      } catch (error) {
        results.push({ ide: ideName, success: false, error: error.message });
      }
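The remaining-IDE resolution in cleanupByList is a small set computation; a sketch with a hypothetical plain-object `configs` map (lowercase id to installer config) standing in for the real handler registry:

```javascript
// Sketch: which target_dirs must survive this cleanup because a platform that
// is staying installed still reads from them. `configs` is a hypothetical
// lowercase-id -> installer-config map; lookup is case-insensitive like the
// real lowercaseHandlers map.
function targetsToKeep(remainingIdes, configs) {
  const keep = new Set();
  for (const ide of remainingIdes) {
    const target = configs[String(ide).toLowerCase()]?.target_dir;
    if (target) keep.add(target);
  }
  return keep;
}
```

Any IDE being removed whose target_dir lands in this set gets `skipTarget: true`, so a shared `.agents/skills` is never wiped out from under a co-installed peer.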
@@ -31,7 +31,50 @@ function clearCache() {
  _cachedPlatformCodes = null;
}

/**
 * Format the installable platform list for human-readable output (used by --list-tools).
 * Sourced from IdeManager so this view matches what --tools accepts at install time
 * (suspended platforms excluded).
 * @returns {Promise<string>} Formatted multi-line string with id, name, target_dir, preferred flag.
 */
async function formatPlatformList() {
  const { IdeManager } = require('./manager');
  const ideManager = new IdeManager();
  await ideManager.ensureInitialized();

  const entries = ideManager.getAvailableIdes().map((ide) => {
    const handler = ideManager.handlers.get(ide.value);
    return {
      id: ide.value,
      name: ide.name,
      targetDir: handler?.installerConfig?.target_dir || '',
      preferred: ide.preferred,
    };
  });

  const idWidth = Math.max(...entries.map((e) => e.id.length), 'ID'.length);
  const nameWidth = Math.max(...entries.map((e) => e.name.length), 'Name'.length);

  const pad = (s, w) => s + ' '.repeat(Math.max(0, w - s.length));
  const lines = [
    `Supported tool IDs (pass via --tools <id>[,<id>...]):`,
    '',
    `  ${pad('ID', idWidth)}  ${pad('Name', nameWidth)}  Target dir`,
    `  ${pad('-'.repeat(idWidth), idWidth)}  ${pad('-'.repeat(nameWidth), nameWidth)}  ${'-'.repeat(10)}`,
  ];

  for (const e of entries) {
    const star = e.preferred ? ' *' : '  ';
    lines.push(`${star}${pad(e.id, idWidth)}  ${pad(e.name, nameWidth)}  ${e.targetDir}`);
  }

  lines.push('', '* = recommended / preferred', '', 'Example: bmad-method install --modules bmm --tools claude-code');

  return lines.join('\n');
}

module.exports = {
  loadPlatformCodes,
  clearCache,
  formatPlatformList,
};
@@ -5,128 +5,218 @@
# preferred: Whether shown as a recommended option on install
# suspended: (optional) Message explaining why install is blocked
# installer:
-#   target_dir: Directory where skill directories are installed
-#   legacy_targets: (optional) Old target dirs to clean up on reinstall
+#   target_dir: Directory where skill directories are installed (project/workspace)
+#   global_target_dir: (optional) User-home directory for global install
+#   ancestor_conflict_check: (optional) Refuse install when ancestor dir has BMAD files
+#
+# Multiple platforms may share the same target_dir or global_target_dir — many tools
+# read from the shared `.agents/skills/` and `~/.agents/skills/` cross-tool standard.
+# Paths verified against each tool's primary docs as of 2026-04-25.

platforms:
  adal:
    name: "AdaL"
    preferred: false
    installer:
      target_dir: .adal/skills
      global_target_dir: ~/.adal/skills

  amp:
    name: "Sourcegraph Amp"
    preferred: false
    installer:
      target_dir: .agents/skills
      global_target_dir: ~/.config/agents/skills

  antigravity:
    name: "Google Antigravity"
    preferred: false
    installer:
      legacy_targets:
        - .agent/workflows
      target_dir: .agent/skills
      global_target_dir: ~/.gemini/antigravity/skills

  auggie:
    name: "Auggie"
    preferred: false
    installer:
      legacy_targets:
        - .augment/commands
-     target_dir: .augment/skills
+     target_dir: .agents/skills
      global_target_dir: ~/.agents/skills

  bob:
    name: "IBM Bob"
    preferred: false
    installer:
      target_dir: .bob/skills
      global_target_dir: ~/.bob/skills

  claude-code:
    name: "Claude Code"
    preferred: true
    installer:
      legacy_targets:
        - .claude/commands
      target_dir: .claude/skills
      global_target_dir: ~/.claude/skills

  cline:
    name: "Cline"
    preferred: false
    installer:
      legacy_targets:
        - .clinerules/workflows
      target_dir: .cline/skills
      global_target_dir: ~/.cline/skills

  codex:
    name: "Codex"
-   preferred: false
+   preferred: true
    installer:
      legacy_targets:
        - .codex/prompts
        - ~/.codex/prompts
      target_dir: .agents/skills
      global_target_dir: ~/.codex/skills

  codebuddy:
    name: "CodeBuddy"
    preferred: false
    installer:
      legacy_targets:
        - .codebuddy/commands
      target_dir: .codebuddy/skills
      global_target_dir: ~/.codebuddy/skills

  command-code:
    name: "Command Code"
    preferred: false
    installer:
      target_dir: .agents/skills
      global_target_dir: ~/.agents/skills

  cortex:
    name: "Snowflake Cortex Code"
    preferred: false
    installer:
      target_dir: .cortex/skills
      global_target_dir: ~/.snowflake/cortex/skills

  crush:
    name: "Crush"
    preferred: false
    installer:
      legacy_targets:
        - .crush/commands
-     target_dir: .crush/skills
+     target_dir: .agents/skills
      global_target_dir: ~/.config/agents/skills

  cursor:
    name: "Cursor"
    preferred: true
    installer:
      legacy_targets:
        - .cursor/commands
-     target_dir: .cursor/skills
+     target_dir: .agents/skills
      global_target_dir: ~/.agents/skills

  droid:
    name: "Factory Droid"
    preferred: false
    installer:
      target_dir: .factory/skills
      global_target_dir: ~/.factory/skills

  firebender:
    name: "Firebender"
    preferred: false
    installer:
      target_dir: .firebender/skills
      global_target_dir: ~/.agents/skills

  gemini:
    name: "Gemini CLI"
    preferred: false
    installer:
      legacy_targets:
        - .gemini/commands
-     target_dir: .gemini/skills
+     target_dir: .agents/skills
      global_target_dir: ~/.agents/skills

  github-copilot:
    name: "GitHub Copilot"
    preferred: true
    installer:
      target_dir: .agents/skills
      global_target_dir: ~/.agents/skills
      commands_target_dir: .github/agents
      commands_extension: .agent.md
      commands_body_template: "LOAD the FULL {project-root}/{target_dir}/{canonicalId}/SKILL.md, READ its entire contents and follow its directions exactly!"
      # The Custom Agents picker should only show persona agents (not
      # workflows/tools). Detected by reading each skill's source
      # `customize.toml` and checking for an `[agent]` section — that's
      # the actual configuration source of truth: every BMAD persona is
      # configured under `[agent]`, every workflow under `[workflow]`,
      # every standalone skill has no customize.toml. This signal is
      # naming-independent, so personas like `bmad-tea` (which doesn't
      # follow the `-agent-` convention) are still included, and
      # meta-skills like `bmad-agent-builder` (which contains `-agent-`
      # but is a skill-builder workflow, not a persona) are correctly
      # excluded.
      commands_filter: agents-only

  goose:
    name: "Block Goose"
    preferred: false
    installer:
      legacy_targets:
        - .github/agents
        - .github/prompts
-     target_dir: .github/skills
+     target_dir: .agents/skills
      global_target_dir: ~/.config/agents/skills

  iflow:
    name: "iFlow"
    preferred: false
    installer:
      legacy_targets:
        - .iflow/commands
      target_dir: .iflow/skills
      global_target_dir: ~/.iflow/skills

  junie:
    name: "Junie"
    preferred: false
    installer:
-     target_dir: .agents/skills
+     target_dir: .junie/skills
      global_target_dir: ~/.junie/skills

  kilo:
    name: "KiloCoder"
    preferred: false
    installer:
      legacy_targets:
        - .kilocode/workflows
-     target_dir: .kilocode/skills
+     target_dir: .agents/skills
      global_target_dir: ~/.kilocode/skills

  kimi-code:
    name: "Kimi Code"
    preferred: false
    installer:
-     target_dir: .kimi/skills
+     target_dir: .agents/skills
      global_target_dir: ~/.agents/skills

  kiro:
    name: "Kiro"
    preferred: false
    installer:
      legacy_targets:
        - .kiro/steering
      target_dir: .kiro/skills
      global_target_dir: ~/.kiro/skills

  kode:
    name: "Kode"
    preferred: false
    installer:
      target_dir: .kode/skills
      global_target_dir: ~/.kode/skills

  mistral-vibe:
    name: "Mistral Vibe"
    preferred: false
    installer:
      target_dir: .agents/skills
      global_target_dir: ~/.vibe/skills

  mux:
    name: "Mux"
    preferred: false
    installer:
      target_dir: .agents/skills
      global_target_dir: ~/.agents/skills

  neovate:
    name: "Neovate"
    preferred: false
    installer:
      target_dir: .neovate/skills
      global_target_dir: ~/.neovate/skills

  ona:
    name: "Ona"
@@ -134,65 +224,99 @@ platforms:
    installer:
      target_dir: .ona/skills

  openclaw:
    name: "OpenClaw"
    preferred: false
    installer:
      target_dir: .agents/skills
      global_target_dir: ~/.agents/skills

  opencode:
    name: "OpenCode"
    preferred: false
    installer:
      legacy_targets:
        - .opencode/agents
        - .opencode/commands
        - .opencode/agent
        - .opencode/command
-     target_dir: .opencode/skills
+     target_dir: .agents/skills
      global_target_dir: ~/.agents/skills
      commands_target_dir: .opencode/commands

  openhands:
    name: "OpenHands"
    preferred: false
    installer:
      target_dir: .agents/skills
      global_target_dir: ~/.agents/skills

  pi:
    name: "Pi"
    preferred: false
    installer:
-     target_dir: .pi/skills
+     target_dir: .agents/skills
      global_target_dir: ~/.agents/skills

  pochi:
    name: "Pochi"
    preferred: false
    installer:
      target_dir: .agents/skills
      global_target_dir: ~/.agents/skills

  qoder:
    name: "Qoder"
    preferred: false
    installer:
      target_dir: .qoder/skills
      global_target_dir: ~/.qoder/skills

  qwen:
    name: "QwenCoder"
    preferred: false
    installer:
      legacy_targets:
        - .qwen/commands
      target_dir: .qwen/skills
      global_target_dir: ~/.qwen/skills

  replit:
    name: "Replit Agent"
    preferred: false
    installer:
      target_dir: .agents/skills

  roo:
    name: "Roo Code"
    preferred: false
    installer:
      legacy_targets:
        - .roo/commands
-     target_dir: .roo/skills
+     target_dir: .agents/skills
      global_target_dir: ~/.agents/skills

  rovo-dev:
    name: "Rovo Dev"
    preferred: false
    installer:
      legacy_targets:
        - .rovodev/workflows
-     target_dir: .rovodev/skills
+     target_dir: .agents/skills
      global_target_dir: ~/.agents/skills

  trae:
    name: "Trae"
    preferred: false
    installer:
      legacy_targets:
        - .trae/rules
      target_dir: .trae/skills

  warp:
    name: "Warp"
    preferred: false
    installer:
      target_dir: .agents/skills
      global_target_dir: ~/.agents/skills

  windsurf:
    name: "Windsurf"
    preferred: false
    installer:
      legacy_targets:
        - .windsurf/workflows
-     target_dir: .windsurf/skills
+     target_dir: .agents/skills
      global_target_dir: ~/.agents/skills

  zencoder:
    name: "Zencoder"
    preferred: false
    installer:
      target_dir: .zencoder/skills
      global_target_dir: ~/.zencoder/skills
@@ -0,0 +1,50 @@
const path = require('node:path');
const fs = require('../../fs-native');
const csv = require('csv-parse/sync');

/**
 * Read the global skill-manifest.csv and return the set of canonicalIds.
 * These define which directory entries in a target_dir are BMAD-owned, regardless
 * of whether they happen to start with "bmad-" (custom modules can ship skills
 * with any prefix, e.g. "fred-cool-skill").
 *
 * @param {string} bmadDir - Path to the _bmad install directory
 * @returns {Promise<Set<string>>} Set of canonicalIds, or empty set if manifest missing
 */
async function getInstalledCanonicalIds(bmadDir) {
  const ids = new Set();
  if (!bmadDir) return ids;

  const csvPath = path.join(bmadDir, '_config', 'skill-manifest.csv');
  if (!(await fs.pathExists(csvPath))) return ids;

  try {
    const content = await fs.readFile(csvPath, 'utf8');
    const records = csv.parse(content, { columns: true, skip_empty_lines: true });
    for (const record of records) {
      if (record.canonicalId) ids.add(record.canonicalId);
    }
  } catch {
    // Unreadable/invalid manifest — treat as no info
  }

  return ids;
}

/**
 * Test whether a directory entry is BMAD-owned.
 * Prefers the manifest's canonicalIds; falls back to the legacy "bmad" prefix
 * when no manifest is available (early install, ancestor lookup with no bmad dir).
 *
 * @param {string} entry - Directory entry name
 * @param {Set<string>|null} canonicalIds - From getInstalledCanonicalIds, or null
 * @returns {boolean}
 */
function isBmadOwnedEntry(entry, canonicalIds) {
  if (!entry || typeof entry !== 'string') return false;
  if (entry.toLowerCase().startsWith('bmad-os-')) return false;
  if (canonicalIds && canonicalIds.size > 0) return canonicalIds.has(entry);
  return entry.toLowerCase().startsWith('bmad');
}

module.exports = { getInstalledCanonicalIds, isBmadOwnedEntry };
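The ownership rules in isBmadOwnedEntry can be exercised standalone. The function below is copied from the new helper module above; the example entries and manifest contents are illustrative:

```javascript
// Copied from the helper module; entries below are illustrative only.
function isBmadOwnedEntry(entry, canonicalIds) {
  if (!entry || typeof entry !== 'string') return false;
  if (entry.toLowerCase().startsWith('bmad-os-')) return false;
  if (canonicalIds && canonicalIds.size > 0) return canonicalIds.has(entry);
  return entry.toLowerCase().startsWith('bmad');
}

// Manifest present: ownership is manifest-driven, so non-"bmad" prefixes work.
const manifest = new Set(['bmad-pm', 'fred-cool-skill']);
console.log(isBmadOwnedEntry('fred-cool-skill', manifest)); // true
console.log(isBmadOwnedEntry('bmad-stale-thing', manifest)); // false — not in manifest
// No manifest yet: fall back to the legacy prefix heuristic.
console.log(isBmadOwnedEntry('bmad-pm', null)); // true
// bmad-os-* utility skills are never treated as removable.
console.log(isBmadOwnedEntry('bmad-os-helper', manifest)); // false
```

Note the ordering: the `bmad-os-` guard runs before the manifest check, so a utility skill is protected even if a manifest happens to list it.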
@@ -0,0 +1,210 @@
const path = require('node:path');
const fs = require('./fs-native');
const yaml = require('yaml');
const { getProjectRoot, getModulePath, getExternalModuleCachePath } = require('./project-root');

/**
 * Read a module.yaml and return its declared `code:` field, or null if missing/unparseable.
 */
async function readModuleCode(yamlPath) {
  try {
    const parsed = yaml.parse(await fs.readFile(yamlPath, 'utf8'));
    if (parsed && typeof parsed === 'object' && typeof parsed.code === 'string') {
      return parsed.code;
    }
  } catch {
    // fall through
  }
  return null;
}

/**
 * Discover module.yaml files for officials we can read locally:
 * - core, bmm: bundled in src/ (always present)
 * - external officials: only if previously cloned to ~/.bmad/cache/external-modules/
 *
 * Each result's `code` is the `code:` field from the module.yaml when present;
 * that's the value `--set <module>.<key>=<value>` matches against.
 *
 * Community/custom modules are not enumerated; users reference their own
 * module.yaml directly per the design (see issue #1663).
 *
 * @returns {Promise<Array<{code: string, yamlPath: string, source: string}>>}
 */
async function discoverOfficialModuleYamls() {
  const found = [];
  // Dedupe is case-insensitive because module caches occasionally retain a
  // legacy UPPERCASE-named directory alongside the canonical lowercase one
  // (same module, different cache key from an older schema). We pick whichever
  // entry we see first and skip the alternate-case duplicate. NOTE: `--set`
  // matching itself is case-sensitive (it keys on `moduleName` from the install
  // flow's selected list, which is always lowercase short codes), so the
  // surfaced `code` here is what users should type. Don't change to
  // case-sensitive dedupe without revisiting that contract.
  const seenCodes = new Set();

  const addFound = async (yamlPath, source, fallbackCode) => {
    const declaredCode = await readModuleCode(yamlPath);
    const code = declaredCode || fallbackCode;
    if (!code) return;
    const lower = code.toLowerCase();
    if (seenCodes.has(lower)) return;
    seenCodes.add(lower);
    found.push({ code, yamlPath, source });
  };

  // Built-ins.
  for (const code of ['core', 'bmm']) {
    const yamlPath = path.join(getModulePath(code), 'module.yaml');
    if (await fs.pathExists(yamlPath)) {
      // Built-ins use their well-known short codes regardless of what the
      // module.yaml `code:` says, since the install flow keys on these.
      seenCodes.add(code.toLowerCase());
      found.push({ code, yamlPath, source: 'built-in' });
    }
  }

  // Bundled in src/modules/<code>/module.yaml (rare, but supported by getModulePath).
  const srcModulesDir = path.join(getProjectRoot(), 'src', 'modules');
  if (await fs.pathExists(srcModulesDir)) {
    const entries = await fs.readdir(srcModulesDir, { withFileTypes: true });
    for (const entry of entries) {
      if (!entry.isDirectory()) continue;
      const yamlPath = path.join(srcModulesDir, entry.name, 'module.yaml');
      if (await fs.pathExists(yamlPath)) {
        await addFound(yamlPath, 'bundled', entry.name);
      }
    }
  }

  // External cache (~/.bmad/cache/external-modules/<code>/...).
  const cacheRoot = getExternalModuleCachePath('').replace(/\/$/, '');
  if (await fs.pathExists(cacheRoot)) {
    const rawEntries = await fs.readdir(cacheRoot, { withFileTypes: true });
    for (const entry of rawEntries) {
      if (!entry.isDirectory()) continue;
      const candidates = [
        path.join(cacheRoot, entry.name, 'module.yaml'),
        path.join(cacheRoot, entry.name, 'src', 'module.yaml'),
        path.join(cacheRoot, entry.name, 'skills', 'module.yaml'),
      ];
      for (const candidate of candidates) {
        if (await fs.pathExists(candidate)) {
          await addFound(candidate, 'cached', entry.name);
          break;
        }
      }
    }
  }

  return found;
}

function formatPromptText(item) {
  if (Array.isArray(item.prompt)) return item.prompt.join(' ');
  return String(item.prompt || '').trim();
}

function inferType(item) {
  if (item['single-select']) return 'single-select';
  if (item['multi-select']) return 'multi-select';
  if (typeof item.default === 'boolean') return 'boolean';
  if (typeof item.default === 'number') return 'number';
  return 'string';
}

function formatModuleOptions(code, parsed, source) {
  const lines = [];
  const header = source === 'built-in' ? code : `${code} (${source})`;
  lines.push(header + ':');

  let count = 0;
  for (const [key, item] of Object.entries(parsed)) {
    if (!item || typeof item !== 'object' || !('prompt' in item)) continue;
    count++;
    const type = inferType(item);
    const scope = item.scope === 'user' ? ' [user-scope]' : '';
    const defaultStr = item.default === undefined || item.default === null ? '(none)' : String(item.default);
    lines.push(`  ${code}.${key} (${type}${scope}) default: ${defaultStr}`);
    const promptText = formatPromptText(item);
    if (promptText) lines.push(`    ${promptText}`);
    if (Array.isArray(item['single-select'])) {
      const values = item['single-select'].map((v) => (typeof v === 'object' ? v.value : v)).filter((v) => v !== undefined);
|
||||
if (values.length > 0) lines.push(` values: ${values.join(' | ')}`);
|
||||
}
|
||||
lines.push('');
|
||||
}
|
||||
|
||||
if (count === 0) {
|
||||
lines.push(' (no configurable options)', '');
|
||||
}
|
||||
return lines.join('\n');
|
||||
}
|
||||
|
||||
/**
|
||||
* Render `--list-options` output.
|
||||
*
|
||||
* Returns `{ text, ok }` so callers can surface a non-zero exit code on
|
||||
* a typo'd module-code lookup. Discovery dedupes case-insensitively, so
|
||||
* the lookup is also case-insensitive — typing `--list-options BMM` and
|
||||
* `--list-options bmm` both find the bmm built-in.
|
||||
*
|
||||
* @param {string|null} moduleCode - if non-null, restrict to this module
|
||||
* @returns {Promise<{text: string, ok: boolean}>}
|
||||
*/
|
||||
async function formatOptionsList(moduleCode) {
|
||||
const discovered = await discoverOfficialModuleYamls();
|
||||
const needle = moduleCode ? moduleCode.toLowerCase() : null;
|
||||
const filtered = needle ? discovered.filter((d) => d.code.toLowerCase() === needle) : discovered;
|
||||
|
||||
if (filtered.length === 0) {
|
||||
if (moduleCode) {
|
||||
const text = [
|
||||
`No locally-known module.yaml for '${moduleCode}'.`,
|
||||
'',
|
||||
'Built-in modules (core, bmm) are always available. External officials',
|
||||
'appear here after they have been installed at least once on this machine',
|
||||
'(they are cached under ~/.bmad/cache/external-modules/).',
|
||||
'',
|
||||
'For community or custom modules, read the module.yaml file in that',
|
||||
"module's source repository directly.",
|
||||
].join('\n');
|
||||
return { text, ok: false };
|
||||
}
|
||||
return { text: 'No modules found.', ok: false };
|
||||
}
|
||||
|
||||
const sections = [];
|
||||
// Track when a module-scoped lookup couldn't actually be rendered (yaml
|
||||
// unparseable or empty after parse). The full `--list-options` output is
|
||||
// tolerant of one bad entry, but `--list-options <module>` against a single
|
||||
// unreadable module should still fail tooling so a CI script catches it.
|
||||
let moduleScopedFailure = false;
|
||||
sections.push('Available --set keys', 'Format: --set <module>.<key>=<value> (repeatable)', '');
|
||||
for (const { code, yamlPath, source } of filtered) {
|
||||
let parsed;
|
||||
try {
|
||||
parsed = yaml.parse(await fs.readFile(yamlPath, 'utf8'));
|
||||
} catch {
|
||||
sections.push(`${code} (${source}): could not parse module.yaml`, '');
|
||||
if (moduleCode) moduleScopedFailure = true;
|
||||
continue;
|
||||
}
|
||||
if (!parsed || typeof parsed !== 'object' || Array.isArray(parsed)) {
|
||||
sections.push(`${code} (${source}): module.yaml is not a valid object (got ${Array.isArray(parsed) ? 'array' : typeof parsed})`, '');
|
||||
if (moduleCode) moduleScopedFailure = true;
|
||||
continue;
|
||||
}
|
||||
sections.push(formatModuleOptions(code, parsed, source));
|
||||
}
|
||||
|
||||
if (!moduleCode) {
|
||||
sections.push(
|
||||
'Community and custom modules are not listed here — read their module.yaml directly. Unknown keys still persist with a warning.',
|
||||
);
|
||||
}
|
||||
|
||||
return { text: sections.join('\n'), ok: !moduleScopedFailure };
|
||||
}
|
||||
|
||||
module.exports = { formatOptionsList, discoverOfficialModuleYamls };
|
||||
|
|
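The first-seen-casing dedupe contract described above can be shown as a standalone toy function (this is a sketch mirroring the pattern, not the installer's actual discovery code; the entry shapes are simplified):

```javascript
// Minimal sketch of the case-insensitive dedupe above: the first-seen
// casing wins, alternate-case duplicates are skipped, and the surfaced
// `code` keeps its original casing.
function dedupeByCode(entries) {
  const seen = new Set();
  const found = [];
  for (const { code, source } of entries) {
    const lower = code.toLowerCase();
    if (seen.has(lower)) continue; // alternate-case duplicate → skip
    seen.add(lower);
    found.push({ code, source });
  }
  return found;
}

const result = dedupeByCode([
  { code: 'bmm', source: 'built-in' },
  { code: 'BMM', source: 'cached' }, // skipped: duplicate of 'bmm'
  { code: 'cis', source: 'bundled' },
]);
console.log(result.map((e) => e.code).join(',')); // prints "bmm,cis"
```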
@@ -29,6 +29,11 @@ class CommunityModuleManager {
   // Shared across all instances; the manifest writer often uses a fresh instance.
   static _resolutions = new Map();
 
+  // moduleCode → ResolvedModule (from PluginResolver) when the cloned repo ships
+  // a `.claude-plugin/marketplace.json`. Lets community installs reuse the same
+  // skill-level install pipeline as custom-source installs (installFromResolution).
+  static _pluginResolutions = new Map();
+
   constructor() {
     this._client = new RegistryClient();
     this._cachedIndex = null;
@@ -40,6 +45,11 @@ class CommunityModuleManager {
     return CommunityModuleManager._resolutions.get(moduleCode) || null;
   }
 
+  /** Get the marketplace.json-derived plugin resolution for a community module, if any. */
+  getPluginResolution(moduleCode) {
+    return CommunityModuleManager._pluginResolutions.get(moduleCode) || null;
+  }
+
   // ─── Data Loading ──────────────────────────────────────────────────────────
 
   /**
@@ -371,6 +381,18 @@ class CommunityModuleManager {
       planSource: planEntry.source,
     });
 
+    // If the repo ships a marketplace.json, route through PluginResolver so the
+    // skill-level install pipeline (installFromResolution) handles the copy.
+    // Repos without marketplace.json fall through to the legacy findModuleSource
+    // path unchanged.
+    await this._tryResolveMarketplacePlugin(moduleCacheDir, moduleInfo, {
+      channel: planEntry.channel,
+      version: recordedVersion,
+      sha: installedSha,
+      approvedTag,
+      approvedSha,
+    });
+
     // Install dependencies if needed
     const packageJsonPath = path.join(moduleCacheDir, 'package.json');
     if ((needsDependencyInstall || wasNewClone) && (await fs.pathExists(packageJsonPath))) {
@@ -392,6 +414,204 @@ class CommunityModuleManager {
     return moduleCacheDir;
   }
 
+  // ─── Marketplace.json Resolution ──────────────────────────────────────────
+
+  /**
+   * Detect `.claude-plugin/marketplace.json` in a cloned community repo and
+   * route through PluginResolver. When successful, caches the resolution so
+   * OfficialModulesManager.install() can route the copy through
+   * installFromResolution() — the same path used by custom-source installs.
+   *
+   * Silent no-op when marketplace.json is absent or the resolver returns no
+   * matches; the legacy findModuleSource path then handles the install.
+   *
+   * @param {string} repoPath - Absolute path to the cloned repo
+   * @param {Object} moduleInfo - Normalized community module info
+   * @param {Object} resolution - Resolution metadata from cloneModule
+   * @param {string} resolution.channel - Channel ('stable' | 'next' | 'pinned')
+   * @param {string} resolution.version - Recorded version string
+   * @param {string} resolution.sha - Resolved git SHA
+   * @param {string|null} resolution.approvedTag - Registry approved tag
+   * @param {string|null} resolution.approvedSha - Registry approved SHA
+   */
+  async _tryResolveMarketplacePlugin(repoPath, moduleInfo, resolution) {
+    const marketplacePath = path.join(repoPath, '.claude-plugin', 'marketplace.json');
+    if (!(await fs.pathExists(marketplacePath))) return;
+
+    let marketplaceData;
+    try {
+      marketplaceData = JSON.parse(await fs.readFile(marketplacePath, 'utf8'));
+    } catch {
+      // Malformed marketplace.json — fall through to legacy path.
+      return;
+    }
+
+    const plugins = Array.isArray(marketplaceData?.plugins) ? marketplaceData.plugins : [];
+    if (plugins.length === 0) return;
+
+    const selection = this._selectPluginForModule(plugins, moduleInfo);
+    if (!selection) {
+      await this._safeWarn(
+        `Community module '${moduleInfo.code}' ships marketplace.json but no plugin entry matches the registry code. ` +
+          `Falling back to legacy install path.`,
+      );
+      return;
+    }
+
+    if (selection.source === 'single-fallback') {
+      // Single-entry marketplace.json whose plugin name doesn't match the registry
+      // code or the module_definition hint. Most likely correct, but worth surfacing
+      // in case marketplace.json is misconfigured and we'd install the wrong plugin.
+      await this._safeWarn(
+        `Community module '${moduleInfo.code}' picked the only plugin in marketplace.json ('${selection.plugin?.name}') ` +
+          `because no name or module_definition match was found. Verify marketplace.json if the install looks wrong.`,
+      );
+    }
+
+    const { PluginResolver } = require('./plugin-resolver');
+    const resolver = new PluginResolver();
+    let resolved;
+    try {
+      resolved = await resolver.resolve(repoPath, selection.plugin);
+    } catch (error) {
+      // PluginResolver threw (malformed plugin entry, missing files, etc.).
+      // Honor the silent-fallthrough contract — warn and let the legacy
+      // findModuleSource path handle the install.
+      await this._safeWarn(
+        `PluginResolver failed for community module '${moduleInfo.code}': ${error.message}. ` + `Falling back to legacy install path.`,
+      );
+      return;
+    }
+    if (!resolved || resolved.length === 0) return;
+
+    // The registry registers a single code per module. If the resolver returns
+    // multiple modules (Strategy 4: multiple standalone skills), accept only
+    // the entry whose code matches the registry. Other entries are ignored —
+    // they belong to plugins not registered in the community catalog.
+    const matched = resolved.find((mod) => mod.code === moduleInfo.code) || (resolved.length === 1 ? resolved[0] : null);
+    if (!matched) return;
+
+    // Shallow-clone before stamping provenance — the resolver may cache or reuse
+    // its return objects, and we don't want install-specific fields leaking back.
+    const stamped = {
+      ...matched,
+      code: moduleInfo.code,
+      repoUrl: moduleInfo.url,
+      cloneRef: resolution.channel === 'pinned' ? resolution.version : resolution.approvedTag || null,
+      cloneSha: resolution.sha,
+      communitySource: true,
+      communityChannel: resolution.channel,
+      communityVersion: resolution.version,
+      registryApprovedTag: resolution.approvedTag,
+      registryApprovedSha: resolution.approvedSha,
+    };
+
+    CommunityModuleManager._pluginResolutions.set(moduleInfo.code, stamped);
+  }
+
+  /**
+   * Lazy fallback: resolve marketplace.json straight from the on-disk cache
+   * when `_pluginResolutions` is empty (e.g. callers that reach `install()`
+   * without `cloneModule` having populated the cache earlier in this process).
+   *
+   * Reuses an existing channel resolution if present; otherwise synthesizes a
+   * minimal stable-channel stub from the registry entry + the cached repo's
+   * current HEAD. Returns the cached plugin resolution if one is produced,
+   * otherwise null (caller falls back to the legacy path).
+   *
+   * @param {string} moduleCode
+   * @returns {Promise<Object|null>}
+   */
+  async resolveFromCache(moduleCode) {
+    const existing = this.getPluginResolution(moduleCode);
+    if (existing) return existing;
+
+    const cacheRepoDir = path.join(this.getCacheDir(), moduleCode);
+    const marketplacePath = path.join(cacheRepoDir, '.claude-plugin', 'marketplace.json');
+    if (!(await fs.pathExists(marketplacePath))) return null;
+
+    let moduleInfo;
+    try {
+      moduleInfo = await this.getModuleByCode(moduleCode);
+    } catch {
+      return null;
+    }
+    if (!moduleInfo) return null;
+
+    let channelResolution = this.getResolution(moduleCode);
+    if (!channelResolution) {
+      let sha = '';
+      try {
+        sha = execSync('git rev-parse HEAD', { cwd: cacheRepoDir, stdio: 'pipe' }).toString().trim();
+      } catch {
+        // Not a git repo or unreadable — give up and let the legacy path run.
+        return null;
+      }
+      channelResolution = {
+        channel: 'stable',
+        version: moduleInfo.approvedTag || sha.slice(0, 7),
+        sha,
+        registryApprovedTag: moduleInfo.approvedTag || null,
+        registryApprovedSha: moduleInfo.approvedSha || null,
+      };
+    }
+
+    await this._tryResolveMarketplacePlugin(cacheRepoDir, moduleInfo, {
+      channel: channelResolution.channel,
+      version: channelResolution.version,
+      sha: channelResolution.sha,
+      approvedTag: channelResolution.registryApprovedTag,
+      approvedSha: channelResolution.registryApprovedSha,
+    });
+
+    return this.getPluginResolution(moduleCode);
+  }
+
+  /**
+   * Best-effort warning emitter. `prompts.log.warn` may be undefined in some
+   * harnesses and may return a rejected promise — swallow both cases so a
+   * fallthrough warning can never crash the install.
+   */
+  async _safeWarn(message) {
+    try {
+      const result = prompts.log?.warn?.(message);
+      if (result && typeof result.then === 'function') await result;
+    } catch {
+      /* ignore */
+    }
+  }
+
+  /**
+   * Pick which plugin entry from marketplace.json represents this community module.
+   * Precedence:
+   *   1. Exact match on `plugin.name === moduleInfo.code`
+   *   2. Trailing directory of `module_definition` matches `plugin.name`
+   *   3. Single plugin in marketplace.json — accepted with a warning so a
+   *      mismatched-but-uniquely-named plugin doesn't install silently.
+   * Otherwise null (caller falls back to legacy path).
+   *
+   * @returns {{plugin: Object, source: 'name'|'hint'|'single-fallback'}|null}
+   */
+  _selectPluginForModule(plugins, moduleInfo) {
+    const byCode = plugins.find((p) => p && p.name === moduleInfo.code);
+    if (byCode) return { plugin: byCode, source: 'name' };
+
+    if (moduleInfo.moduleDefinition) {
+      // module_definition like "src/skills/suno-setup/assets/module.yaml" →
+      // hint segment "suno-setup". Match that against plugin names.
+      const segments = moduleInfo.moduleDefinition.split('/').filter(Boolean);
+      const setupIdx = segments.findIndex((s) => s.endsWith('-setup'));
+      if (setupIdx !== -1) {
+        const hint = segments[setupIdx];
+        const byHint = plugins.find((p) => p && p.name === hint);
+        if (byHint) return { plugin: byHint, source: 'hint' };
+      }
+    }
+
+    if (plugins.length === 1) return { plugin: plugins[0], source: 'single-fallback' };
+    return null;
+  }
+
   // ─── Source Finding ───────────────────────────────────────────────────────
 
   /**
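The three-step selection precedence described in the hunk above (exact name match, then a `-setup` segment hint from `module_definition`, then a warned single-entry fallback) can be sketched as a standalone function. This is a toy stand-in for `CommunityModuleManager._selectPluginForModule`, with simplified shapes for `plugins` and `moduleInfo`:

```javascript
// Toy sketch of the marketplace.json plugin-selection precedence:
// 1) plugin.name equals the registry code,
// 2) a `-setup` path segment from module_definition matches a plugin name,
// 3) single-entry fallback (the real method pairs this with a warning).
function selectPlugin(plugins, moduleInfo) {
  const byCode = plugins.find((p) => p && p.name === moduleInfo.code);
  if (byCode) return { plugin: byCode, source: 'name' };

  if (moduleInfo.moduleDefinition) {
    // e.g. "src/skills/suno-setup/assets/module.yaml" → hint "suno-setup"
    const segments = moduleInfo.moduleDefinition.split('/').filter(Boolean);
    const hint = segments.find((s) => s.endsWith('-setup'));
    const byHint = hint && plugins.find((p) => p && p.name === hint);
    if (byHint) return { plugin: byHint, source: 'hint' };
  }

  if (plugins.length === 1) return { plugin: plugins[0], source: 'single-fallback' };
  return null; // caller falls back to the legacy install path
}

const pick = selectPlugin([{ name: 'suno-setup' }, { name: 'other' }], {
  code: 'suno',
  moduleDefinition: 'src/skills/suno-setup/assets/module.yaml',
});
console.log(pick.source); // prints "hint"
```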
@@ -24,8 +24,9 @@ class CustomModuleManager {
 
   /**
    * Parse a user-provided source input into a structured descriptor.
-   * Accepts local file paths, HTTPS Git URLs, and SSH Git URLs.
-   * For HTTPS URLs with deep paths (e.g., /tree/main/subdir), extracts the subdir.
+   * Accepts local file paths, HTTPS Git URLs, HTTP Git URLs, and SSH Git URLs.
+   * For HTTPS/HTTP URLs with deep paths (e.g., /tree/main/subdir), extracts the subdir.
+   * The original protocol (http or https) is preserved in the returned cloneUrl.
    *
    * @param {string} input - URL or local file path
    * @returns {Object} Parsed source descriptor:
@@ -127,58 +128,102 @@ class CustomModuleManager {
       };
     }
 
-    // HTTPS URL: https://host/owner/repo[/tree/branch/subdir][.git]
-    const httpsMatch = trimmed.match(/^https?:\/\/([^/]+)\/([^/]+)\/([^/.]+?)(?:\.git)?(\/.*)?$/);
-    if (httpsMatch) {
-      const [, host, owner, repo, remainder] = httpsMatch;
-      const cloneUrl = `https://${host}/${owner}/${repo}`;
-      let subdir = null;
-      let urlRef = null; // branch/tag extracted from /tree/<ref>/subdir
+    // HTTPS/HTTP URL: generic handling for any Git host.
+    // We avoid host-specific parsing — `git clone` will accept whatever URL the
+    // user provides. We only need to (a) separate an optional browser-style
+    // subdir suffix from the clone URL, (b) extract any embedded ref
+    // (branch/tag) from deep-path URLs, and (c) derive a cache key / display
+    // name from the path. The original protocol (http or https) is preserved.
+    if (/^https?:\/\//i.test(trimmed)) {
+      let url;
+      try {
+        url = new URL(trimmed);
+      } catch {
+        url = null;
+      }
 
-      if (remainder) {
-        // Extract subdir from deep path patterns used by various Git hosts
+      if (url && url.host) {
+        const host = url.host;
+        let repoPath = url.pathname.replace(/^\/+/, '').replace(/\/+$/, '');
+        let subdir = null;
+        let urlRef = null; // branch/tag/commit extracted from deep-path URLs
+
+        // Detect browser-style deep-path patterns that embed a ref
+        // (branch/tag/commit) and optional subdirectory. These appear
+        // across many hosts:
+        //   GitHub  /<repo>/tree|blob/<ref>[/<subdir>]
+        //   GitLab  /<repo>/-/tree|blob/<ref>[/<subdir>]
+        //   Gitea   /<repo>/src/<ref>[/<subdir>]
+        //   Gitea   /<repo>/src/(branch|commit|tag)/<ref>[/<subdir>]
+        // Group 1 = repo path prefix, Group 2 = ref, Group 3 = subdir (optional).
         const deepPathPatterns = [
-          { regex: /^\/(?:-\/)?tree\/([^/]+)\/(.+)$/, refIdx: 1, pathIdx: 2 }, // GitHub, GitLab
-          { regex: /^\/(?:-\/)?blob\/([^/]+)\/(.+)$/, refIdx: 1, pathIdx: 2 },
-          { regex: /^\/src\/([^/]+)\/(.+)$/, refIdx: 1, pathIdx: 2 }, // Gitea/Forgejo
+          /^(.+?)\/(?:-\/)?(?:tree|blob)\/([^/]+)(?:\/(.+))?$/,
+          /^(.+?)\/src\/(?:branch\/|commit\/|tag\/)?([^/]+)(?:\/(.+))?$/,
         ];
-        // Also match `/tree/<ref>` with no subdir
-        const refOnlyPatterns = [/^\/(?:-\/)?tree\/([^/]+?)\/?$/, /^\/(?:-\/)?blob\/([^/]+?)\/?$/, /^\/src\/([^/]+?)\/?$/];
 
-        for (const p of deepPathPatterns) {
-          const match = remainder.match(p.regex);
+        for (const pattern of deepPathPatterns) {
+          const match = repoPath.match(pattern);
           if (match) {
-            urlRef = match[p.refIdx];
-            subdir = match[p.pathIdx].replace(/\/$/, '');
+            repoPath = match[1];
+            if (match[2]) urlRef = match[2];
+            if (match[3]) {
+              const cleaned = match[3].replace(/\/+$/, '');
+              if (cleaned) subdir = cleaned;
+            }
             break;
           }
         }
 
+        // Some hosts use ?path=/subdir on browse links to point at a file or
+        // directory. Honor it when no deep-path marker matched above.
         if (!subdir) {
-          for (const r of refOnlyPatterns) {
-            const match = remainder.match(r);
-            if (match) {
-              urlRef = match[1];
-              break;
-            }
+          const pathParam = url.searchParams.get('path');
+          if (pathParam) {
+            const cleaned = pathParam.replace(/^\/+/, '').replace(/\/+$/, '');
+            if (cleaned) subdir = cleaned;
           }
         }
 
+        // Strip a single trailing .git for a stable cacheKey/displayName.
+        const repoPathClean = repoPath.replace(/\.git$/i, '');
+        if (!repoPathClean) {
+          return {
+            type: null,
+            cloneUrl: null,
+            subdir: null,
+            localPath: null,
+            cacheKey: null,
+            displayName: null,
+            isValid: false,
+            error: 'Not a valid Git URL or local path',
+          };
+        }
+
+        const cloneUrl = `${url.protocol}//${host}/${repoPathClean}`;
+        const cacheKey = `${host}/${repoPathClean}`;
+
+        // Display name: prefer "<owner>/<repo>" using the last two meaningful
+        // path segments.
+        const segments = repoPathClean.split('/').filter(Boolean);
+        const repoSeg = segments.at(-1);
+        const ownerSeg = segments.at(-2);
+        const displayName = ownerSeg ? `${ownerSeg}/${repoSeg}` : repoSeg;
+
+        // Precedence: explicit @version suffix > URL /tree/<ref> path segment.
+        const version = versionSuffix || urlRef || null;
+
+        return {
+          type: 'url',
+          cloneUrl,
+          subdir,
+          localPath: null,
+          version,
+          rawInput: trimmedRaw,
+          cacheKey,
+          displayName,
+          isValid: true,
+          error: null,
+        };
+      }
-
-      // Precedence: explicit @version suffix > URL /tree/<ref> path segment.
-      const version = versionSuffix || urlRef || null;
-
-      return {
-        type: 'url',
-        cloneUrl,
-        subdir,
-        localPath: null,
-        version,
-        rawInput: trimmedRaw,
-        cacheKey: `${host}/${owner}/${repo}`,
-        displayName: `${owner}/${repo}`,
-        isValid: true,
-        error: null,
-      };
     }
 
     return {
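The deep-path handling in the hunk above can be exercised in isolation. The sketch below reuses the two regexes verbatim in a small helper (the helper itself, `splitDeepPath`, is hypothetical; the real parser also handles refs with no subdir, `?path=` params, and `.git` stripping):

```javascript
// Worked example of the deep-path parsing above: split a browser-style
// URL pathname into repo path, embedded ref, and optional subdirectory.
const deepPathPatterns = [
  /^(.+?)\/(?:-\/)?(?:tree|blob)\/([^/]+)(?:\/(.+))?$/, // GitHub, GitLab
  /^(.+?)\/src\/(?:branch\/|commit\/|tag\/)?([^/]+)(?:\/(.+))?$/, // Gitea/Forgejo
];

function splitDeepPath(pathname) {
  const repoPath = pathname.replace(/^\/+/, '').replace(/\/+$/, '');
  for (const pattern of deepPathPatterns) {
    const match = repoPath.match(pattern);
    if (match) {
      return { repoPath: match[1], ref: match[2], subdir: match[3] || null };
    }
  }
  return { repoPath, ref: null, subdir: null };
}

console.log(splitDeepPath('/group/repo/-/tree/main/modules/foo'));
// → { repoPath: 'group/repo', ref: 'main', subdir: 'modules/foo' }
```

Because the repo-path prefix is captured lazily, `group/repo` stops at the first `/-/tree/` (or `/src/`) marker, which is exactly the behavior the generic parser relies on.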
@@ -311,7 +356,7 @@ class CustomModuleManager {
   /**
    * Clone a custom module repository to cache.
    * Supports any Git host (GitHub, GitLab, Bitbucket, self-hosted, etc.).
-   * @param {string} sourceInput - Git URL (HTTPS or SSH)
+   * @param {string} sourceInput - Git URL (HTTPS, HTTP, or SSH)
    * @param {Object} [options] - Clone options
    * @param {boolean} [options.silent] - Suppress spinner output
    * @param {boolean} [options.skipInstall] - Skip npm install (for browsing before user confirms)
@@ -0,0 +1,13 @@
+/**
+ * Canonical schema for per-module `module-help.csv` files.
+ *
+ * Both the merger (`Installer.mergeModuleHelpCatalogs`) and the synthesizer
+ * (`PluginResolver._buildSynthesizedHelpCsv`) emit this exact header. The
+ * merger compares each per-module file's header against this string and
+ * warns on drift, so any rename here must be matched in external module
+ * authors' CSVs (or accepted as a positional fall-through with a warning).
+ */
+const MODULE_HELP_CSV_HEADER =
+  'module,skill,display-name,menu-code,description,action,args,phase,preceded-by,followed-by,required,output-location,outputs';
+
+module.exports = { MODULE_HELP_CSV_HEADER };
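The drift check described in the docstring above can be sketched as a small standalone function. The header string is copied from the new file; the checker itself (`checkHelpCsvHeader`) is a hypothetical stand-in for the comparison performed in `Installer.mergeModuleHelpCatalogs`:

```javascript
// Sketch of the header-drift check: compare a per-module CSV's first line
// against the canonical header before merging. A mismatch is tolerated
// positionally, but surfaced as a warning.
const MODULE_HELP_CSV_HEADER =
  'module,skill,display-name,menu-code,description,action,args,phase,preceded-by,followed-by,required,output-location,outputs';

function checkHelpCsvHeader(csvContent) {
  const firstLine = csvContent.split('\n', 1)[0].trim();
  if (firstLine === MODULE_HELP_CSV_HEADER) return { ok: true, warning: null };
  // Positional fall-through: rows still merge by column order, with a warning.
  return { ok: false, warning: `header drift: got '${firstLine}'` };
}

console.log(checkHelpCsvHeader(MODULE_HELP_CSV_HEADER + '\nbmm,plan,Plan').ok); // prints true
```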
@@ -269,6 +269,21 @@ class OfficialModules {
       return this.installFromResolution(resolved, bmadDir, fileTrackingCallback, options);
     }
 
+    // Community modules whose cloned repo ships marketplace.json get the same
+    // skill-level install treatment as custom-source installs. If the in-process
+    // cache wasn't populated (e.g. caller skipped the pre-clone phase), fall
+    // back to resolving directly from `~/.bmad/cache/community-modules/<name>/`
+    // so we don't silently regress to the legacy half-install path.
+    const { CommunityModuleManager } = require('./community-manager');
+    const communityMgr = new CommunityModuleManager();
+    let communityResolved = communityMgr.getPluginResolution(moduleName);
+    if (!communityResolved) {
+      communityResolved = await communityMgr.resolveFromCache(moduleName);
+    }
+    if (communityResolved) {
+      return this.installFromResolution(communityResolved, bmadDir, fileTrackingCallback, options);
+    }
+
     const sourcePath = await this.findModuleSource(moduleName, {
       silent: options.silent,
       channelOptions: options.channelOptions,
@@ -360,21 +375,27 @@ class OfficialModules {
       await this.createModuleDirectories(resolved.code, bmadDir, options);
     }
 
-    // Update manifest. For custom modules, derive channel from the git ref:
-    // cloneRef present → pinned at that ref
-    // cloneRef absent → next (main HEAD)
-    // local path → no channel concept
+    // Update manifest. For community installs we honor the channel resolved by
+    // CommunityModuleManager (stable/next/pinned) and propagate the registry's
+    // approved tag/sha. For custom-source installs we derive channel from the
+    // cloneRef (present → pinned, absent → next; local paths have no channel).
     const { Manifest } = require('../core/manifest');
     const manifestObj = new Manifest();
 
     const hasGitClone = !!resolved.repoUrl;
+    const isCommunity = resolved.communitySource === true;
     const manifestEntry = {
-      version: resolved.cloneRef || (hasGitClone ? 'main' : resolved.version || null),
-      source: 'custom',
+      version: resolved.communityVersion || resolved.cloneRef || (hasGitClone ? 'main' : resolved.version || null),
+      source: isCommunity ? 'community' : 'custom',
       npmPackage: null,
       repoUrl: resolved.repoUrl || null,
     };
-    if (hasGitClone) {
+    if (isCommunity) {
+      if (resolved.communityChannel) manifestEntry.channel = resolved.communityChannel;
+      if (resolved.cloneSha) manifestEntry.sha = resolved.cloneSha;
+      if (resolved.registryApprovedTag) manifestEntry.registryApprovedTag = resolved.registryApprovedTag;
+      if (resolved.registryApprovedSha) manifestEntry.registryApprovedSha = resolved.registryApprovedSha;
+    } else if (hasGitClone) {
       manifestEntry.channel = resolved.cloneRef ? 'pinned' : 'next';
       if (resolved.cloneSha) manifestEntry.sha = resolved.cloneSha;
       if (resolved.rawInput) manifestEntry.rawSource = resolved.rawInput;
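The version/source/channel precedence in the hunk above is easiest to see as a pure function over the resolution object. This is a reduced sketch (it drops sha, tag propagation, and the npmPackage/repoUrl fields, and `deriveManifestEntry` is a hypothetical name, not the installer's API):

```javascript
// Reduced sketch of the manifest derivation above: community installs keep
// the registry-resolved channel and communityVersion; custom git installs
// derive pinned/next from cloneRef; local paths get no channel.
function deriveManifestEntry(resolved) {
  const hasGitClone = !!resolved.repoUrl;
  const isCommunity = resolved.communitySource === true;
  const entry = {
    version: resolved.communityVersion || resolved.cloneRef || (hasGitClone ? 'main' : resolved.version || null),
    source: isCommunity ? 'community' : 'custom',
  };
  if (isCommunity) {
    if (resolved.communityChannel) entry.channel = resolved.communityChannel;
  } else if (hasGitClone) {
    entry.channel = resolved.cloneRef ? 'pinned' : 'next';
  }
  return entry;
}

console.log(deriveManifestEntry({ repoUrl: 'https://example.com/a/b', cloneRef: 'v1.2.0' }));
// → { version: 'v1.2.0', source: 'custom', channel: 'pinned' }
console.log(deriveManifestEntry({
  repoUrl: 'https://example.com/a/b',
  communitySource: true,
  communityVersion: 'v2.0.0',
  communityChannel: 'stable',
}));
// → { version: 'v2.0.0', source: 'community', channel: 'stable' }
```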
@@ -386,10 +407,13 @@ class OfficialModules {
       success: true,
       module: resolved.code,
       path: targetPath,
-      // Match the manifestEntry.version expression above so downstream summary
-      // lines show the cloned ref (tag or 'main') instead of the on-disk
-      // package.json version for git-backed custom installs.
-      versionInfo: { version: resolved.cloneRef || (hasGitClone ? 'main' : resolved.version || '') },
+      // Mirror the manifestEntry.version precedence above so downstream summary
+      // lines show the same string we just wrote to disk (community installs
+      // use the registry-approved tag via `communityVersion`; custom git-backed
+      // installs show the cloned ref or 'main').
+      versionInfo: {
+        version: resolved.communityVersion || resolved.cloneRef || (hasGitClone ? 'main' : resolved.version || ''),
+      },
     };
   }
 
@@ -879,7 +903,10 @@ class OfficialModules {
       try {
         const content = await fs.readFile(moduleConfigPath, 'utf8');
         const moduleConfig = yaml.parse(content);
-        if (moduleConfig) {
+        // Only keep plain object parses. A corrupt config.yaml that parses
+        // to a scalar or array would crash later code that does `key in cfg`
+        // / `Object.keys(cfg)`; treat it the same as a parse error.
+        if (moduleConfig && typeof moduleConfig === 'object' && !Array.isArray(moduleConfig)) {
           this._existingConfig[entry.name] = moduleConfig;
           foundAny = true;
         }
@@ -890,9 +917,58 @@ class OfficialModules {
       }
     }
 
+    if (foundAny) {
+      await this._hoistCoreKeysFromLegacyModuleConfigs();
+    }
+
     return foundAny;
   }
 
+  /**
+   * Migrate prior answers when a key has moved from a non-core module to core
+   * (e.g. project_name moving from bmm to core in #2279). Without this, the
+   * partition logic in writeCentralConfig drops the value from the bmm bucket
+   * (because it's now a core key) without re-homing it under [core], so the
+   * user's prior answer silently disappears on the next install/quick-update.
+   */
+  async _hoistCoreKeysFromLegacyModuleConfigs() {
+    const coreSchemaPath = path.join(getSourcePath(), 'core-skills', 'module.yaml');
+    if (!(await fs.pathExists(coreSchemaPath))) return;
+
+    let coreSchema;
+    try {
+      coreSchema = yaml.parse(await fs.readFile(coreSchemaPath, 'utf8'));
+    } catch {
+      return;
+    }
+    if (!coreSchema || typeof coreSchema !== 'object') return;
+
+    const coreKeys = new Set(
+      Object.entries(coreSchema)
+        .filter(([, v]) => v && typeof v === 'object' && 'prompt' in v)
+        .map(([k]) => k),
+    );
+    if (coreKeys.size === 0) return;
+
+    // Belt-and-suspenders: loadExistingConfig already filters non-object parses,
+    // but anyone calling _hoistCoreKeysFromLegacyModuleConfigs in isolation (or
+    // future code paths populating _existingConfig directly) shouldn't be able
+    // to crash this with a scalar / array.
+    const existingCore = this._existingConfig.core;
+    this._existingConfig.core = existingCore && typeof existingCore === 'object' && !Array.isArray(existingCore) ? existingCore : {};
+
+    for (const [moduleName, cfg] of Object.entries(this._existingConfig)) {
+      if (moduleName === 'core' || !cfg || typeof cfg !== 'object' || Array.isArray(cfg)) continue;
+      for (const key of Object.keys(cfg)) {
+        if (!coreKeys.has(key)) continue;
+        if (!(key in this._existingConfig.core)) {
+          this._existingConfig.core[key] = cfg[key];
+        }
+        delete cfg[key];
+      }
+    }
+  }
+
   /**
    * Pre-scan module schemas to gather metadata for the configuration gateway prompt.
    * Returns info about which modules have configurable options.
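The hoist migration above is straightforward to demonstrate on plain objects. This toy version takes the config map and the set of core keys as arguments instead of reading them from disk (`hoistCoreKeys` is a hypothetical standalone name, not the class method):

```javascript
// Toy version of the core-key hoist: move keys that now belong to core out
// of legacy per-module buckets. An existing core value wins over a migrated
// one; the key is always deleted from the legacy bucket.
function hoistCoreKeys(existingConfig, coreKeys) {
  const existingCore = existingConfig.core;
  const core = existingCore && typeof existingCore === 'object' && !Array.isArray(existingCore) ? existingCore : {};
  existingConfig.core = core;
  for (const [moduleName, cfg] of Object.entries(existingConfig)) {
    if (moduleName === 'core' || !cfg || typeof cfg !== 'object' || Array.isArray(cfg)) continue;
    for (const key of Object.keys(cfg)) {
      if (!coreKeys.has(key)) continue;
      if (!(key in core)) core[key] = cfg[key]; // don't clobber an existing core answer
      delete cfg[key];
    }
  }
  return existingConfig;
}

const cfg = hoistCoreKeys({ bmm: { project_name: 'acme', depth: 2 } }, new Set(['project_name']));
console.log(cfg.core.project_name); // prints "acme"
console.log('project_name' in cfg.bmm); // prints false
```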
@@ -1,6 +1,7 @@
 const fs = require('../fs-native');
 const path = require('node:path');
 const yaml = require('yaml');
+const { MODULE_HELP_CSV_HEADER } = require('./module-help-schema');
 
 /**
  * Resolves how to install a plugin from marketplace.json by analyzing
@@ -338,8 +339,7 @@ class PluginResolver {
    * @returns {string} CSV content
    */
   _buildSynthesizedHelpCsv(moduleName, skillInfos) {
-    const header = 'module,skill,display-name,menu-code,description,action,args,phase,after,before,required,output-location,outputs';
-    const rows = [header];
+    const rows = [MODULE_HELP_CSV_HEADER];
 
     for (const info of skillInfos) {
       const displayName = this._formatDisplayName(info.name || info.dirName);
@@ -1,5 +1,6 @@
 const path = require('node:path');
+const os = require('node:os');
 const yaml = require('yaml');
 const fs = require('./fs-native');
 
 /**
@@ -86,6 +87,11 @@ function getExternalModuleCachePath(moduleName, ...segments) {
  * Built-in modules (core, bmm) live under <src>. External official modules are
  * cloned into ~/.bmad/cache/external-modules/<name>/ with varying internal
  * layouts (some at src/module.yaml, some at skills/module.yaml, some nested).
+ * Url-source custom modules are cloned into ~/.bmad/cache/custom-modules/<host>/<owner>/<repo>/
+ * and are resolved by walking the cache and matching `code` or `name` from the
+ * discovered module.yaml. Local custom-source modules are not cached; their
+ * path is read from the CustomModuleManager resolution cache set during the
+ * same install run.
  * This mirrors the candidate-path search in
  * ExternalModuleManager.findExternalModuleSource but performs no git/network
  * work, which keeps it safe to call during manifest writing.
@@ -97,26 +103,113 @@ async function resolveInstalledModuleYaml(moduleName) {
   const builtIn = path.join(getModulePath(moduleName), 'module.yaml');
   if (await fs.pathExists(builtIn)) return builtIn;
 
-  const cacheRoot = getExternalModuleCachePath(moduleName);
-  if (!(await fs.pathExists(cacheRoot))) return null;
-
-  for (const dir of ['skills', 'src']) {
-    const direct = path.join(cacheRoot, dir, 'module.yaml');
-    if (await fs.pathExists(direct)) return direct;
-
-    const dirPath = path.join(cacheRoot, dir);
-    if (await fs.pathExists(dirPath)) {
-      const entries = await fs.readdir(dirPath, { withFileTypes: true });
-      for (const entry of entries) {
-        if (!entry.isDirectory()) continue;
-        const nested = path.join(dirPath, entry.name, 'module.yaml');
-        if (await fs.pathExists(nested)) return nested;
-      }
-    }
-  }
-
-  const atRoot = path.join(cacheRoot, 'module.yaml');
-  if (await fs.pathExists(atRoot)) return atRoot;
+  // Collect every module.yaml under a root using the standard candidate paths.
+  // Url-source repos can host multiple plugins (discovery mode), so we need all
+  // matches, not just the first. Returned in priority order.
+  async function searchRootAll(root) {
+    const results = [];
+    for (const dir of ['skills', 'src']) {
+      const direct = path.join(root, dir, 'module.yaml');
+      if (await fs.pathExists(direct)) results.push(direct);
+
+      const dirPath = path.join(root, dir);
+      if (await fs.pathExists(dirPath)) {
+        const entries = await fs.readdir(dirPath, { withFileTypes: true });
+        for (const entry of entries) {
+          if (!entry.isDirectory()) continue;
+          const nested = path.join(dirPath, entry.name, 'module.yaml');
+          if (await fs.pathExists(nested)) results.push(nested);
+        }
+      }
+    }
+
+    // BMB standard: {setup-skill}/assets/module.yaml (setup skill is any *-setup directory).
+    // Check at the repo root, and also under src/skills/ and skills/ since
+    // marketplace plugins commonly nest skills under src/skills/<name>/.
+    const setupSearchRoots = [root, path.join(root, 'src', 'skills'), path.join(root, 'skills')];
+    for (const setupRoot of setupSearchRoots) {
+      if (!(await fs.pathExists(setupRoot))) continue;
+      const entries = await fs.readdir(setupRoot, { withFileTypes: true });
+      for (const entry of entries) {
+        if (!entry.isDirectory() || !entry.name.endsWith('-setup')) continue;
+        const setupAssets = path.join(setupRoot, entry.name, 'assets', 'module.yaml');
+        if (await fs.pathExists(setupAssets)) results.push(setupAssets);
+      }
+    }
+
+    const atRoot = path.join(root, 'module.yaml');
+    if (await fs.pathExists(atRoot)) results.push(atRoot);
+    return results;
+  }
+
+  // Backwards-compatible single-result variant for the existing external-cache
+  // and resolution-cache fallbacks (one module per root by construction).
+  async function searchRoot(root) {
+    const all = await searchRootAll(root);
+    return all.length > 0 ? all[0] : null;
+  }
+
+  const cacheRoot = getExternalModuleCachePath(moduleName);
+  if (await fs.pathExists(cacheRoot)) {
+    const found = await searchRoot(cacheRoot);
+    if (found) return found;
+  }
+
+  // Community modules are cloned to ~/.bmad/cache/community-modules/<name>/
+  // (parallel to the external-modules cache used above). Search there too so
+  // collectAgentsFromModuleYaml and writeCentralConfig can locate community
+  // module.yaml files regardless of how nested the layout is.
+  const communityCacheRoot = path.join(os.homedir(), '.bmad', 'cache', 'community-modules', moduleName);
+  if (await fs.pathExists(communityCacheRoot)) {
+    const found = await searchRoot(communityCacheRoot);
+    if (found) return found;
+  }
+
+  // Fallback: local custom-source modules store their source path in the
+  // CustomModuleManager resolution cache populated during the same install run.
+  // Match by code OR name since callers may use either form.
+  try {
+    const { CustomModuleManager } = require('./modules/custom-module-manager');
+    for (const [, mod] of CustomModuleManager._resolutionCache) {
+      if ((mod.code === moduleName || mod.name === moduleName) && mod.localPath) {
+        const found = await searchRoot(mod.localPath);
+        if (found) return found;
+      }
+    }
+  } catch {
+    // Resolution cache unavailable — continue
+  }
+
+  // Fallback: url-source custom modules cloned to ~/.bmad/cache/custom-modules/.
+  // Walk every cached repo, enumerate ALL module.yaml files via searchRootAll
+  // (a single repo can host multiple plugins in discovery mode), and match by
+  // the yaml's `code` or `name` field. This works on re-install runs where
+  // _resolutionCache is empty and covers both discovery-mode (with marketplace.json)
+  // and direct-mode modules, since we identify repo roots by .bmad-source.json
+  // (written by cloneRepo) or .claude-plugin/ rather than by marketplace.json.
+  try {
+    const customCacheDir = path.join(os.homedir(), '.bmad', 'cache', 'custom-modules');
+    if (await fs.pathExists(customCacheDir)) {
+      const { CustomModuleManager } = require('./modules/custom-module-manager');
+      const customMgr = new CustomModuleManager();
+      const repoRoots = await customMgr._findCacheRepoRoots(customCacheDir);
+      for (const { repoPath } of repoRoots) {
+        const candidates = await searchRootAll(repoPath);
+        for (const candidate of candidates) {
+          try {
+            const parsed = yaml.parse(await fs.readFile(candidate, 'utf8'));
+            if (parsed && (parsed.code === moduleName || parsed.name === moduleName)) {
+              return candidate;
+            }
+          } catch {
+            // Malformed yaml — skip
+          }
+        }
+      }
+    }
+  } catch {
+    // Custom-modules cache walk failed — continue
+  }
 
   return null;
 }
@@ -0,0 +1,330 @@
// `--set <module>.<key>=<value>` is a post-install patch. The installer runs
// its normal flow and writes `_bmad/config.toml`, `_bmad/config.user.toml`,
// and `_bmad/<module>/config.yaml`; afterwards `applySetOverrides` upserts
// each override into those files.
//
// This is intentionally NOT integrated with the prompt/template/schema
// system. Tradeoffs:
// - No `result:` template rendering: `--set bmm.project_knowledge=research`
//   writes "research" verbatim. Pass `--set bmm.project_knowledge='{project-root}/research'`
//   if you want the rendered form.
// - Carry-forward across installs is best-effort: declared schema keys
//   persist via the existingValue path on the next interactive run; values
//   for keys outside any module's schema may need to be re-passed on each
//   install (or edited directly in `_bmad/config.toml`).
// - No "key not in schema" validation: whatever you assert, we write.
//
// Names that, when used as object keys, can mutate `Object.prototype` and
// cascade into every plain-object lookup in the process. The `--set` pipeline
// assigns into plain `{}` maps keyed by user input, so `--set __proto__.x=1`
// would otherwise reach `overrides.__proto__[x] = 1` and pollute every plain
// object. We reject the names at parse time and harden the maps in
// `parseSetEntries` with `Object.create(null)` for defense-in-depth.
const PROTOTYPE_POLLUTING_NAMES = new Set(['__proto__', 'prototype', 'constructor']);

const path = require('node:path');
const fs = require('./fs-native');
const yaml = require('yaml');

/**
 * Parse a single `--set <module>.<key>=<value>` entry.
 * @param {string} entry - raw flag value
 * @returns {{module: string, key: string, value: string}}
 * @throws {Error} on malformed input
 */
function parseSetEntry(entry) {
  if (typeof entry !== 'string' || entry.length === 0) {
    throw new Error('--set: empty entry. Expected <module>.<key>=<value>');
  }
  const eq = entry.indexOf('=');
  if (eq === -1) {
    throw new Error(`--set "${entry}": missing '='. Expected <module>.<key>=<value>`);
  }
  const lhs = entry.slice(0, eq);
  // Note: only the LHS is trimmed. Values may legitimately contain leading
  // or trailing whitespace (paths with spaces, quoted strings); module / key
  // names cannot, so it's safe to be strict on the left.
  const value = entry.slice(eq + 1);
  const dot = lhs.indexOf('.');
  if (dot === -1) {
    throw new Error(`--set "${entry}": missing '.'. Expected <module>.<key>=<value>`);
  }
  const moduleCode = lhs.slice(0, dot).trim();
  const key = lhs.slice(dot + 1).trim();
  if (!moduleCode || !key) {
    throw new Error(`--set "${entry}": empty module or key. Expected <module>.<key>=<value>`);
  }
  if (PROTOTYPE_POLLUTING_NAMES.has(moduleCode) || PROTOTYPE_POLLUTING_NAMES.has(key)) {
    throw new Error(
      `--set "${entry}": '__proto__', 'prototype', and 'constructor' are reserved and cannot be used as a module or key name.`,
    );
  }
  return { module: moduleCode, key, value };
}

/**
 * Parse repeated `--set` entries into a `{ module: { key: value } }` map.
 * Later entries overwrite earlier ones for the same key. Both the outer
 * map and the per-module inner maps are `Object.create(null)` so callers
 * that bypass `parseSetEntry`'s name check still can't pollute prototypes.
 *
 * @param {string[]} entries
 * @returns {Object<string, Object<string, string>>}
 */
function parseSetEntries(entries) {
  const overrides = Object.create(null);
  if (!Array.isArray(entries)) return overrides;
  for (const entry of entries) {
    const { module: moduleCode, key, value } = parseSetEntry(entry);
    if (!overrides[moduleCode]) overrides[moduleCode] = Object.create(null);
    overrides[moduleCode][key] = value;
  }
  return overrides;
}

/**
 * Encode a JS string as a TOML basic string (double-quoted with escapes).
 * @param {string} value
 */
function tomlString(value) {
  const s = String(value);
  // Per the TOML spec, basic strings escape `\`, `"`, and control characters.
  return (
    '"' +
    s
      .replaceAll('\\', '\\\\')
      .replaceAll('"', String.raw`\"`)
      .replaceAll('\b', String.raw`\b`)
      .replaceAll('\f', String.raw`\f`)
      .replaceAll('\n', String.raw`\n`)
      .replaceAll('\r', String.raw`\r`)
      .replaceAll('\t', String.raw`\t`) +
    '"'
  );
}

/**
 * Section header for a given module code.
 * - `core` → `[core]`
 * - `<other>` → `[modules.<other>]`
 *
 * Mirrors the layout `manifest-generator.writeCentralConfig` produces.
 */
function sectionHeader(moduleCode) {
  return moduleCode === 'core' ? '[core]' : `[modules.${moduleCode}]`;
}

/**
 * Insert or update `key = value` inside a TOML section, returning the new
 * file content. The format produced by the installer is regular and small
 * enough that a line scanner is more reliable than pulling in a TOML
 * round-tripper that would normalize the file's existing whitespace and
 * comment structure.
 *
 * - If `[section]` exists and contains `key`, replace the value on that
 *   line (preserving any inline comment after the value).
 * - If `[section]` exists but `key` doesn't, append `key = value` at the
 *   end of the section (before the next `[...]` header or EOF, skipping
 *   trailing blank lines so the section stays tidy).
 * - If `[section]` doesn't exist, append a new section block at EOF.
 *
 * @param {string} content existing file content (may be empty)
 * @param {string} section exact `[section]` header to target
 * @param {string} key
 * @param {string} valueToml already TOML-encoded value (e.g. `"foo"`)
 * @returns {string} new content
 */
function upsertTomlKey(content, section, key, valueToml) {
  const lines = content.split('\n');
  // Track whether the file already ended with a newline so we can preserve
  // that. `split('\n')` on `"a\n"` yields `['a', '']`, which gives us the
  // marker we need.
  const hadTrailingNewline = lines.length > 0 && lines.at(-1) === '';
  if (hadTrailingNewline) lines.pop();

  // Locate the target section.
  const sectionStart = lines.findIndex((line) => line.trim() === section);
  if (sectionStart === -1) {
    // Section doesn't exist — append a new block. Pad with a blank line if
    // the file is non-empty so sections stay visually separated.
    if (lines.length > 0 && lines.at(-1).trim() !== '') lines.push('');
    lines.push(section, `${key} = ${valueToml}`);
    return lines.join('\n') + (hadTrailingNewline ? '\n' : '');
  }

  // Find the section's end (next `[...]` header or EOF).
  let sectionEnd = lines.length;
  for (let i = sectionStart + 1; i < lines.length; i++) {
    if (/^\s*\[/.test(lines[i])) {
      sectionEnd = i;
      break;
    }
  }

  // Look for the key inside the section. Match `<key> = ...` allowing
  // optional leading whitespace; preserve the comment tail (`# ...`) if any.
  const keyPattern = new RegExp(`^(\\s*)${escapeRegExp(key)}\\s*=\\s*(.*)$`);
  for (let i = sectionStart + 1; i < sectionEnd; i++) {
    const match = lines[i].match(keyPattern);
    if (match) {
      const indent = match[1];
      // Preserve trailing comment if present. We split on the first `#` that
      // is preceded by whitespace — TOML strings can't contain unescaped `#`
      // in basic-string form so this is safe for the values we emit.
      const tail = match[2];
      const commentIdx = tail.search(/\s+#/);
      const commentSuffix = commentIdx === -1 ? '' : tail.slice(commentIdx);
      lines[i] = `${indent}${key} = ${valueToml}${commentSuffix}`;
      return lines.join('\n') + (hadTrailingNewline ? '\n' : '');
    }
  }

  // Section exists but key doesn't. Insert before the next section header,
  // skipping trailing blank lines inside the current section so the new
  // entry sits with its siblings.
  let insertAt = sectionEnd;
  while (insertAt > sectionStart + 1 && lines[insertAt - 1].trim() === '') {
    insertAt--;
  }
  lines.splice(insertAt, 0, `${key} = ${valueToml}`);
  return lines.join('\n') + (hadTrailingNewline ? '\n' : '');
}

function escapeRegExp(s) {
  return s.replaceAll(/[.*+?^${}()|[\]\\]/g, String.raw`\$&`);
}

/**
 * Look up `[section] key` in a TOML file. Returns true if the file exists,
 * the section is present, and `key` is set within it. Used by
 * `applySetOverrides` to route an override to the file that already owns
 * the key (so user-scope keys land in `config.user.toml`, team-scope keys
 * land in `config.toml`).
 */
async function tomlHasKey(filePath, section, key) {
  if (!(await fs.pathExists(filePath))) return false;
  const content = await fs.readFile(filePath, 'utf8');
  const lines = content.split('\n');
  const sectionStart = lines.findIndex((line) => line.trim() === section);
  if (sectionStart === -1) return false;
  const keyPattern = new RegExp(`^\\s*${escapeRegExp(key)}\\s*=`);
  for (let i = sectionStart + 1; i < lines.length; i++) {
    if (/^\s*\[/.test(lines[i])) return false;
    if (keyPattern.test(lines[i])) return true;
  }
  return false;
}

/**
 * Apply parsed `--set` overrides to the central TOML files written by the
 * installer. Called at the end of an install / quick-update.
 *
 * Routing per (module, key):
 * 1. If `_bmad/config.user.toml` already has `[section] key`, update there
 *    (user-scope key like `core.user_name`, `bmm.user_skill_level`).
 * 2. Otherwise update `_bmad/config.toml` (team scope, the default).
 *
 * The schema-correct user/team partition lives in `manifest-generator`. We
 * intentionally don't re-read module schemas here — the only goal is to
 * match the file the installer just wrote the key to. For brand-new keys
 * (not in either file yet), team scope is the safe default.
 *
 * @param {Object<string, Object<string, string>>} overrides
 * @param {string} bmadDir absolute path to `_bmad/`
 * @returns {Promise<Array<{module:string,key:string,scope:'team'|'user',file:string}>>}
 *   a list of applied entries (for caller logging)
 */
async function applySetOverrides(overrides, bmadDir) {
  const applied = [];
  if (!overrides || typeof overrides !== 'object') return applied;

  const teamPath = path.join(bmadDir, 'config.toml');
  const userPath = path.join(bmadDir, 'config.user.toml');

  for (const moduleCode of Object.keys(overrides)) {
    // Skip overrides for modules not actually installed. The installer writes
    // `_bmad/<module>/config.yaml` for every installed module (including core),
    // so its presence is a reliable "is this module here?" signal that works
    // for both fresh installs and quick-updates without coupling to caller-
    // supplied module lists.
    const moduleConfigYaml = path.join(bmadDir, moduleCode, 'config.yaml');
    if (!(await fs.pathExists(moduleConfigYaml))) {
      continue;
    }

    const section = sectionHeader(moduleCode);
    const moduleOverrides = overrides[moduleCode] || {};
    for (const key of Object.keys(moduleOverrides)) {
      const value = moduleOverrides[key];
      const valueToml = tomlString(value);

      const userOwnsIt = await tomlHasKey(userPath, section, key);
      const targetPath = userOwnsIt ? userPath : teamPath;

      // The team file always exists post-install; the user file only exists
      // if the install wrote at least one user-scope key. If we're routing to
      // it but it doesn't exist yet, create it with a minimal header so it
      // has the same shape as installer-written user toml.
      let content = '';
      if (await fs.pathExists(targetPath)) {
        content = await fs.readFile(targetPath, 'utf8');
      } else {
        content = '# Personal overrides for _bmad/config.toml.\n';
      }

      const next = upsertTomlKey(content, section, key, valueToml);
      await fs.writeFile(targetPath, next, 'utf8');
      applied.push({
        module: moduleCode,
        key,
        scope: userOwnsIt ? 'user' : 'team',
        file: path.basename(targetPath),
      });
    }

    // Also patch the per-module yaml (`_bmad/<module>/config.yaml`). The
    // installer reads this file as `_existingConfig` on subsequent runs and
    // surfaces declared values as prompt defaults — under `--yes` those
    // defaults are accepted, so patching here gives `--set` natural
    // carry-forward for declared keys without needing schema-strict
    // partition exemptions in the manifest writer. For undeclared keys the
    // value lives in the per-module yaml but won't be re-emitted into
    // config.toml on the next install (the schema-strict partition drops
    // it); re-pass `--set` if you need it sticky.
    const moduleYamlPath = path.join(bmadDir, moduleCode, 'config.yaml');
    if (await fs.pathExists(moduleYamlPath)) {
      try {
        const text = await fs.readFile(moduleYamlPath, 'utf8');
        const parsed = yaml.parse(text);
        if (parsed && typeof parsed === 'object' && !Array.isArray(parsed)) {
          // Preserve the installer's banner header (everything up to the
          // first non-comment line) so `_bmad/<module>/config.yaml` keeps
          // its provenance comments after we round-trip it.
          const headerLines = [];
          for (const line of text.split('\n')) {
            if (line.startsWith('#') || line.trim() === '') {
              headerLines.push(line);
            } else {
              break;
            }
          }
          for (const key of Object.keys(moduleOverrides)) {
            parsed[key] = moduleOverrides[key];
          }
          const body = yaml.stringify(parsed, { indent: 2, lineWidth: 0, minContentWidth: 0 });
          const header = headerLines.length > 0 ? headerLines.join('\n') + '\n' : '';
          await fs.writeFile(moduleYamlPath, header + body, 'utf8');
        }
      } catch {
        // Per-module yaml unparseable — skip silently. The central toml was
        // already patched above, which is the user-visible state for the
        // current install. Carry-forward will fail next install but the
        // current install reflects the override.
      }
    }
  }

  return applied;
}

module.exports = { parseSetEntry, parseSetEntries, applySetOverrides, upsertTomlKey, tomlString };
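A hypothetical trace of the parse rules in `parseSetEntry` — split on the first `=`, then the first `.`, trimming only the left-hand side. This is a simplified standalone restatement without the validation and reserved-name checks, not the module's exported function:

```javascript
// Minimal restatement of the --set parse rules (no validation).
function parseSetEntrySketch(entry) {
  const eq = entry.indexOf('=');       // first '=' separates lhs from value
  const lhs = entry.slice(0, eq);
  const value = entry.slice(eq + 1);   // value is kept verbatim, untrimmed
  const dot = lhs.indexOf('.');        // first '.' separates module from key
  return { module: lhs.slice(0, dot).trim(), key: lhs.slice(dot + 1).trim(), value };
}

const a = parseSetEntrySketch('bmm.project_knowledge={project-root}/research');
// value arrives verbatim — no `result:` template rendering happens
const b = parseSetEntrySketch('core.user_name= Ada Lovelace ');
// value keeps its surrounding whitespace; module and key are trimmed
```

Splitting on the *first* `=` is what lets values themselves contain `=` (e.g. query strings) without extra quoting.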
@@ -2,6 +2,7 @@ const path = require('node:path');
 const os = require('node:os');
+const semver = require('semver');
 const fs = require('./fs-native');
 const installerPackageJson = require('../../package.json');
 const { CLIUtils } = require('./cli-utils');
 const { ExternalModuleManager } = require('./modules/external-manager');
 const { resolveModuleVersion } = require('./modules/version-resolver');
@@ -15,6 +16,7 @@ const {
 } = require('./modules/channel-plan');
 const channelResolver = require('./modules/channel-resolver');
 const prompts = require('./prompts');
+const { parseSetEntries } = require('./set-overrides');
 
 const manifest = new Manifest();
 
@@ -128,6 +130,24 @@ class UI {
       await prompts.log.warn(warning);
     }
 
+    // When the user launched the installer from a prerelease (npx bmad-method@next),
+    // mirror that intent for external modules: seed the global channel to 'next' so
+    // the module picker's version labels resolve from main HEAD (matching what
+    // actually gets installed) and the interactive channel gate skips — the user
+    // already declared "next" intent by typing @next. Explicit channel flags
+    // override this seed.
+    if (
+      semver.prerelease(installerPackageJson.version) !== null &&
+      !channelOptions.global &&
+      channelOptions.nextSet.size === 0 &&
+      channelOptions.pins.size === 0
+    ) {
+      channelOptions.global = 'next';
+      await prompts.log.info(
+        'Launched from a prerelease — installing all external modules from main HEAD (next channel). Pass --all-stable or --pin to override.',
+      );
+    }
+
     // Get directory from options or prompt
     let confirmedDirectory;
     if (options.directory) {
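The seed condition above, in miniature. The real code calls `semver.prerelease`; this standalone sketch approximates it with a regex (anything after a `-` following the patch number is a prerelease), and the `channelOptions` shape here is an assumption for illustration:

```javascript
// Fire only when the running installer is a prerelease AND the user passed
// no explicit channel flags (global channel, per-module next set, pins).
function shouldSeedNextChannel(installerVersion, channelOptions) {
  // Approximation of semver.prerelease(v) !== null: '6.4.1-next.0' matches,
  // plain '6.4.1' does not.
  const isPrerelease = /^\d+\.\d+\.\d+-/.test(installerVersion);
  return isPrerelease && !channelOptions.global && channelOptions.nextSet.size === 0 && channelOptions.pins.size === 0;
}

const seeded = shouldSeedNextChannel('6.4.1-next.0', { global: null, nextSet: new Set(), pins: new Map() });
const untouched = shouldSeedNextChannel('6.4.1', { global: null, nextSet: new Set(), pins: new Map() });
// seeded === true; untouched === false — only a prerelease launch with no
// explicit channel flags flips the global channel to 'next'.
```

Because any explicit flag suppresses the seed, `--all-stable` or `--pin` from a prerelease launch still behaves exactly as typed.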
@@ -181,12 +201,15 @@ class UI {
       actionType = options.action;
       await prompts.log.info(`Using action from command-line: ${actionType}`);
     } else if (options.yes) {
-      // Default to quick-update if available, otherwise first available choice
+      // Default to quick-update if available, unless flags that require the
+      // full update path are present (e.g. --custom-source which re-clones
+      // modules at a new version — quick-update skips that entirely).
       if (choices.length === 0) {
         throw new Error('No valid actions available for this installation');
       }
       const hasQuickUpdate = choices.some((c) => c.value === 'quick-update');
-      actionType = hasQuickUpdate ? 'quick-update' : choices[0].value;
+      const needsFullUpdate = !!options.customSource;
+      actionType = hasQuickUpdate && !needsFullUpdate ? 'quick-update' : (choices.find((c) => c.value === 'update') || choices[0]).value;
       await prompts.log.info(`Non-interactive mode (--yes): defaulting to ${actionType}`);
     } else {
       actionType = await prompts.select({
@@ -222,8 +245,11 @@ class UI {
         .map((m) => m.trim())
         .filter(Boolean);
       await prompts.log.info(`Using modules from command-line: ${selectedModules.join(', ')}`);
-    } else if (options.customSource) {
-      // Custom source without --modules: start with empty list (core added below)
+    } else if (options.customSource && !options.yes) {
+      // Custom source without --modules or --yes: start with empty list
+      // (only custom source modules + core will be installed).
+      // When --yes is also set, fall through to the --yes branch so all
+      // installed modules are included alongside the custom source modules.
       selectedModules = [];
     } else if (options.yes) {
       selectedModules = await this.getDefaultModules(installedModuleIds);
@@ -262,7 +288,7 @@ class UI {
     // Get tool selection
     const toolSelection = await this.promptToolSelection(confirmedDirectory, options);
 
-    const moduleConfigs = await this.collectModuleConfigs(confirmedDirectory, selectedModules, {
+    const { moduleConfigs, setOverrides } = await this.collectModuleConfigs(confirmedDirectory, selectedModules, {
       ...options,
       channelOptions,
     });
@@ -288,6 +314,7 @@ class UI {
       skipIde: toolSelection.skipIde,
       coreConfig: moduleConfigs.core || {},
       moduleConfigs: moduleConfigs,
+      setOverrides,
       skipPrompts: options.yes || false,
       channelOptions,
     };
@@ -332,12 +359,14 @@ class UI {
 
     // Interactive channel gate: "Ready to install (all stable)? [Y/n]"
     // Only shown for fresh installs with no channel flags and an external module
-    // selected. Non-interactive installs skip this and fall through to the
-    // registry default (stable) or whatever flags were supplied.
+    // selected. Skipped for prerelease launches because channelOptions.global
+    // was already seeded to 'next' upstream. Non-interactive installs skip this
+    // and fall through to the registry default (stable) or whatever flags were
+    // supplied.
     await this._interactiveChannelGate({ options, channelOptions, selectedModules });
 
     let toolSelection = await this.promptToolSelection(confirmedDirectory, options);
-    const moduleConfigs = await this.collectModuleConfigs(confirmedDirectory, selectedModules, {
+    const { moduleConfigs, setOverrides } = await this.collectModuleConfigs(confirmedDirectory, selectedModules, {
       ...options,
       channelOptions,
     });
@@ -363,6 +392,7 @@ class UI {
       skipIde: toolSelection.skipIde,
       coreConfig: moduleConfigs.core || {},
       moduleConfigs: moduleConfigs,
+      setOverrides,
       skipPrompts: options.yes || false,
       channelOptions,
     };
@@ -377,6 +407,37 @@ class UI {
    * @param {Object} options - Command-line options
    * @returns {Object} Tool configuration
    */
+  _parseToolsFlag(toolsArg, allKnownValues) {
+    const selectedIdes = toolsArg
+      .split(',')
+      .map((t) => t.trim())
+      .filter(Boolean);
+
+    if (selectedIdes.length === 0) {
+      const err = new Error(
+        '--tools was passed empty. Provide at least one tool ID (e.g. --tools claude-code) or run with --list-tools to see valid IDs.',
+      );
+      err.expected = true;
+      throw err;
+    }
+
+    const unknown = selectedIdes.filter((id) => !allKnownValues.has(id));
+    if (unknown.length > 0) {
+      const err = new Error(
+        [
+          `Unknown tool ID${unknown.length === 1 ? '' : 's'}: ${unknown.join(', ')}`,
+          '',
+          'Run with --list-tools to see all valid IDs.',
+          'Common: claude-code, cursor, copilot, windsurf, cline',
+        ].join('\n'),
+      );
+      err.expected = true;
+      throw err;
+    }
+
+    return selectedIdes;
+  }
+
   async promptToolSelection(projectDir, options = {}) {
     const { ExistingInstall } = require('./core/existing-install');
     const { Installer } = require('./core/installer');
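`_parseToolsFlag` in miniature — split on commas, drop empties, reject unknown IDs. The known-ID set below is a small hypothetical stand-in for the installer's full tool list:

```javascript
// Simplified sketch of the --tools flag parser (no err.expected marker).
function parseToolsFlagSketch(toolsArg, allKnownValues) {
  const selected = toolsArg.split(',').map((t) => t.trim()).filter(Boolean);
  if (selected.length === 0) throw new Error('--tools was passed empty');
  const unknown = selected.filter((id) => !allKnownValues.has(id));
  if (unknown.length > 0) throw new Error(`Unknown tool ID(s): ${unknown.join(', ')}`);
  return selected;
}

const known = new Set(['claude-code', 'cursor', 'copilot']);
const picked = parseToolsFlagSketch('claude-code, cursor', known);
// picked → ['claude-code', 'cursor']; whitespace around IDs is tolerated,
// while an all-blank value or an unknown ID throws.
```

Centralizing this in one helper is what lets both call sites below treat `--tools ""` as a hard error instead of silently ignoring it.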
@@ -411,15 +472,10 @@ class UI {
     const allTools = [...preferredIdes, ...otherIdes];
 
     // Non-interactive: handle --tools and --yes flags before interactive prompt
-    if (options.tools) {
-      if (options.tools.toLowerCase() === 'none') {
-        await prompts.log.info('Skipping tool configuration (--tools none)');
-        return { ides: [], skipIde: true };
-      }
-      const selectedIdes = options.tools
-        .split(',')
-        .map((t) => t.trim())
-        .filter(Boolean);
+    // Use !== undefined so an explicit --tools "" falls through to _parseToolsFlag and
+    // gets a specific "passed empty" error instead of being silently ignored.
+    if (options.tools !== undefined) {
+      const selectedIdes = this._parseToolsFlag(options.tools, allKnownValues);
       await prompts.log.info(`Using tools from command-line: ${selectedIdes.join(', ')}`);
       await this.displaySelectedTools(selectedIdes, preferredIdes, allTools);
       return { ides: selectedIdes, skipIde: false };
@ -495,21 +551,13 @@ class UI {
|
|||
|
||||
let selectedIdes = [];
|
||||
|
||||
// Check if tools are provided via command-line
|
||||
if (options.tools) {
|
||||
// Check for explicit "none" value to skip tool installation
|
||||
if (options.tools.toLowerCase() === 'none') {
|
||||
await prompts.log.info('Skipping tool configuration (--tools none)');
|
||||
return { ides: [], skipIde: true };
|
||||
} else {
|
||||
selectedIdes = options.tools
|
||||
.split(',')
|
||||
.map((t) => t.trim())
|
||||
.filter(Boolean);
|
||||
await prompts.log.info(`Using tools from command-line: ${selectedIdes.join(', ')}`);
|
||||
await this.displaySelectedTools(selectedIdes, preferredIdes, allTools);
|
||||
return { ides: selectedIdes, skipIde: false };
|
||||
}
|
||||
// Check if tools are provided via command-line.
|
||||
// Use !== undefined so an explicit --tools "" still hits _parseToolsFlag's empty-value error.
|
||||
if (options.tools !== undefined) {
|
||||
selectedIdes = this._parseToolsFlag(options.tools, allKnownValues);
|
||||
await prompts.log.info(`Using tools from command-line: ${selectedIdes.join(', ')}`);
|
||||
await this.displaySelectedTools(selectedIdes, preferredIdes, allTools);
|
||||
return { ides: selectedIdes, skipIde: false };
|
||||
} else if (options.yes) {
|
||||
// If --yes flag is set, skip tool prompt and use previously configured tools or empty
|
||||
if (configuredIdes.length > 0) {
@@ -517,8 +565,18 @@ class UI {
         await this.displaySelectedTools(configuredIdes, preferredIdes, allTools);
         return { ides: configuredIdes, skipIde: false };
       } else {
-        await prompts.log.info('Skipping tool configuration (--yes flag, no previous tools)');
-        return { ides: [], skipIde: true };
+        const err = new Error(
+          [
+            '--tools is required for non-interactive install (--yes / -y) when no tools are previously configured.',
+            '',
+            'Common: claude-code, cursor, copilot, windsurf, cline',
+            'See all supported tools: bmad-method install --list-tools',
+            '',
+            'Example: bmad-method install --modules bmm --tools claude-code -y',
+          ].join('\n'),
+        );
+        err.expected = true;
+        throw err;
       }
     }
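The `err.expected = true` flag in the hunk above suggests a convention where user-facing errors are printed without a stack trace while genuine bugs keep theirs. The installer's actual top-level handler is not shown here; this is one plausible sketch of that pattern:

```javascript
// Hypothetical top-level CLI error handler honoring an `expected` flag
// (assumed pattern; the installer's real handler may differ).
function reportCliError(err) {
  if (err.expected) {
    console.error(err.message); // clean, user-facing message only
    return 1;                   // exit code for a user error
  }
  console.error(err.stack || String(err)); // programming error: keep the trace
  return 2;                                // exit code for an unexpected crash
}
```

The benefit is that a missing `--tools` flag reads like guidance, while an actual crash still surfaces enough detail to file a bug.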
@@ -654,6 +712,33 @@ class UI {
    */
   async collectModuleConfigs(directory, modules, options = {}) {
     const { OfficialModules } = require('./modules/official-modules');

+    // Parse --set up front purely to surface user-error before the install
+    // burns time on the network / filesystem. The actual application happens
+    // in installer.install() as a post-write TOML patch — see
+    // `tools/installer/set-overrides.js`. We also warn about overrides
+    // targeting modules the user didn't include, since those will silently
+    // miss the file the patch step looks for.
+    let setOverrides = {};
+    try {
+      setOverrides = parseSetEntries(options.set || []);
+    } catch (error) {
+      // install.js validated already; rethrow as-is for the user.
+      throw error;
+    }
+    // Drop overrides for modules that aren't in the install set so the
+    // post-install patch step doesn't create orphan sections in config.toml
+    // for modules that were never installed.
+    const selectedModuleSet = new Set(['core', ...modules]);
+    for (const moduleCode of Object.keys(setOverrides)) {
+      if (!selectedModuleSet.has(moduleCode)) {
+        await prompts.log.warn(
+          `--set ${moduleCode}.* — module '${moduleCode}' is not in the install set; values will be ignored. Add it to --modules to apply.`,
+        );
+        delete setOverrides[moduleCode];
+      }
+    }
+
     const configCollector = new OfficialModules({ channelOptions: options.channelOptions });

     // Seed core config from CLI options if provided
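The filtering loop above iterates `setOverrides` keyed by module code. The real parser lives in the installer (per the comment, `tools/installer/set-overrides.js`); this sketch only illustrates the `module.key=value` shape that loop assumes, with hypothetical error wording:

```javascript
// Plausible sketch of what parseSetEntries produces (assumed, not the
// installer's actual implementation):
// ['bmm.user_name=Ada', 'core.output_folder=out']
//   -> { bmm: { user_name: 'Ada' }, core: { output_folder: 'out' } }
function parseSetEntries(entries) {
  const overrides = {};
  for (const entry of entries) {
    const dot = entry.indexOf('.');
    const eq = entry.indexOf('=');
    // Reject entries missing the module prefix or the key, e.g. '=v',
    // 'mod.=v', or 'a=b.c' (where '=' precedes '.').
    if (dot <= 0 || eq <= dot + 1) {
      throw new Error(`--set expects module.key=value, got: ${entry}`);
    }
    const moduleCode = entry.slice(0, dot);
    const key = entry.slice(dot + 1, eq);
    overrides[moduleCode] = overrides[moduleCode] || {};
    overrides[moduleCode][key] = entry.slice(eq + 1);
  }
  return overrides;
}
```

Parsing eagerly like this is what lets the hunk above fail fast on a typo before any network or filesystem work happens.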
@@ -703,6 +788,9 @@ class UI {
       const defaultUsername = safeUsername.charAt(0).toUpperCase() + safeUsername.slice(1);
       configCollector.collectedConfig.core = {
         user_name: defaultUsername,
+        // {directory_name} default per src/core-skills/module.yaml — matches what the
+        // interactive flow resolves via buildQuestion()'s {directory_name} placeholder.
+        project_name: path.basename(directory),
         communication_language: 'English',
         document_output_language: 'English',
         output_folder: '_bmad-output',
@@ -716,7 +804,7 @@ class UI {
       skipPrompts: options.yes || false,
     });

-    return configCollector.collectedConfig;
+    return { moduleConfigs: configCollector.collectedConfig, setOverrides };
   }

   /**
@@ -1783,7 +1871,9 @@ class UI {
    *
    * Skipped when:
    * - running non-interactively (--yes)
-   * - the user already passed channel flags (--channel / --pin / --next)
+   * - the user already passed channel flags (--channel / --pin / --next), OR
+   *   the installer was launched from a prerelease (which seeds
+   *   channelOptions.global = 'next' upstream in promptInstall)
    * - no externals/community modules are selected
    *
    * Mutates channelOptions.pins and channelOptions.nextSet to reflect picker choices.
@@ -1,175 +0,0 @@
-# BMAD Platform Codes Configuration
-# Central configuration for all platform/IDE codes used in the BMAD system
-#
-# This file defines the standardized platform codes that are used throughout
-# the installation system to identify different platforms (IDEs, tools, etc.)
-#
-# Format:
-#   code: Platform identifier used internally
-#   name: Display name shown to users
-#   preferred: Whether this platform is shown as a recommended option on install
-#   category: Type of platform (ide, tool, service, etc.)
-
-platforms:
-  # Recommended Platforms
-  claude-code:
-    name: "Claude Code"
-    preferred: true
-    category: cli
-    description: "Anthropic's official CLI for Claude"
-
-  cursor:
-    name: "Cursor"
-    preferred: true
-    category: ide
-    description: "AI-first code editor"
-
-  # Other IDEs and Tools
-  cline:
-    name: "Cline"
-    preferred: false
-    category: ide
-    description: "AI coding assistant"
-
-  opencode:
-    name: "OpenCode"
-    preferred: false
-    category: ide
-    description: "OpenCode terminal coding assistant"
-
-  codebuddy:
-    name: "CodeBuddy"
-    preferred: false
-    category: ide
-    description: "Tencent Cloud Code Assistant - AI-powered coding companion"
-
-  auggie:
-    name: "Auggie"
-    preferred: false
-    category: cli
-    description: "AI development tool"
-
-  roo:
-    name: "Roo Code"
-    preferred: false
-    category: ide
-    description: "Enhanced Cline fork"
-
-  rovo-dev:
-    name: "Rovo Dev"
-    preferred: false
-    category: ide
-    description: "Atlassian's Rovo development environment"
-
-  kiro:
-    name: "Kiro"
-    preferred: false
-    category: ide
-    description: "Amazon's AI-powered IDE"
-
-  github-copilot:
-    name: "GitHub Copilot"
-    preferred: false
-    category: ide
-    description: "GitHub's AI pair programmer"
-
-  codex:
-    name: "Codex"
-    preferred: false
-    category: cli
-    description: "OpenAI Codex integration"
-
-  qwen:
-    name: "QwenCoder"
-    preferred: false
-    category: ide
-    description: "Qwen AI coding assistant"
-
-  gemini:
-    name: "Gemini CLI"
-    preferred: false
-    category: cli
-    description: "Google's CLI for Gemini"
-
-  iflow:
-    name: "iFlow"
-    preferred: false
-    category: ide
-    description: "AI workflow automation"
-
-  kilo:
-    name: "KiloCoder"
-    preferred: false
-    category: ide
-    description: "AI coding platform"
-
-  kimi-code:
-    name: "Kimi Code"
-    preferred: false
-    category: cli
-    description: "Moonshot AI's Kimi Code CLI"
-
-  crush:
-    name: "Crush"
-    preferred: false
-    category: ide
-    description: "AI development assistant"
-
-  antigravity:
-    name: "Google Antigravity"
-    preferred: false
-    category: ide
-    description: "Google's AI development environment"
-
-  trae:
-    name: "Trae"
-    preferred: false
-    category: ide
-    description: "AI coding tool"
-
-  windsurf:
-    name: "Windsurf"
-    preferred: false
-    category: ide
-    description: "AI-powered IDE with cascade flows"
-
-  junie:
-    name: "Junie"
-    preferred: false
-    category: cli
-    description: "AI coding agent by JetBrains"
-
-  ona:
-    name: "Ona"
-    preferred: false
-    category: ide
-    description: "Ona AI development environment"
-
-# Platform categories
-categories:
-  ide:
-    name: "Integrated Development Environment"
-    description: "Full-featured code editors with AI assistance"
-
-  cli:
-    name: "Command Line Interface"
-    description: "Terminal-based tools"
-
-  tool:
-    name: "Development Tool"
-    description: "Standalone development utilities"
-
-  service:
-    name: "Cloud Service"
-    description: "Cloud-based development platforms"
-
-  extension:
-    name: "Editor Extension"
-    description: "Plugins for existing editors"
-
-# Naming conventions and rules
-conventions:
-  code_format: "lowercase-kebab-case"
-  name_format: "Title Case"
-  max_code_length: 20
-  allowed_characters: "a-z0-9-"
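The `conventions` block in the deleted file above declares machine-checkable rules (kebab-case codes, a 20-character cap, `a-z0-9-` only). Whether the repo ever enforced them is not shown; a hypothetical validator for those rules might look like:

```javascript
// Illustrative check of a platform code against the conventions block above.
// The function name and return shape are assumptions, not repo code.
function validatePlatformCode(code, conventions = { max_code_length: 20 }) {
  const errors = [];
  if (code.length > conventions.max_code_length) {
    errors.push(`'${code}' exceeds ${conventions.max_code_length} characters`);
  }
  // lowercase-kebab-case: lowercase alphanumeric segments joined by single hyphens
  if (!/^[a-z0-9]+(-[a-z0-9]+)*$/.test(code)) {
    errors.push(`'${code}' is not lowercase-kebab-case (a-z0-9-)`);
  }
  return errors; // empty array means the code conforms
}
```

For example, `claude-code` and `github-copilot` pass, while `Claude_Code` fails the kebab-case rule.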
@@ -129,13 +129,45 @@ export default defineConfig({
           // TEA docs moved to standalone module site; keep BMM sidebar focused.
           {
             label: 'BMad Ecosystem',
             translations: { 'vi-VN': 'Hệ sinh thái BMad', 'zh-CN': 'BMad 生态系统', 'fr-FR': 'Écosystème BMad', 'cs-CZ': 'Ekosystém BMad' },
             collapsed: false,
             items: [
-              { label: 'BMad Builder', link: 'https://bmad-builder-docs.bmad-method.org/', attrs: { target: '_blank' } },
-              { label: 'Creative Intelligence Suite', link: 'https://cis-docs.bmad-method.org/', attrs: { target: '_blank' } },
-              { label: 'Game Dev Studio', link: 'https://game-dev-studio-docs.bmad-method.org/', attrs: { target: '_blank' } },
+              {
+                label: 'BMad Builder',
+                translations: { 'vi-VN': 'BMad Builder', 'zh-CN': 'BMad 构建器', 'fr-FR': 'BMad Builder', 'cs-CZ': 'BMad Builder' },
+                link: 'https://bmad-builder-docs.bmad-method.org/',
+                attrs: { target: '_blank' },
+              },
+              {
+                label: 'Creative Intelligence Suite',
+                translations: {
+                  'vi-VN': 'Bộ công cụ Trí tuệ Sáng tạo',
+                  'zh-CN': '创意智能套件',
+                  'fr-FR': "Suite d'Intelligence Créative",
+                  'cs-CZ': 'Sada kreativní inteligence',
+                },
+                link: 'https://cis-docs.bmad-method.org/',
+                attrs: { target: '_blank' },
+              },
+              {
+                label: 'Game Dev Studio',
+                translations: {
+                  'vi-VN': 'Xưởng phát triển Game',
+                  'zh-CN': '游戏开发工作室',
+                  'fr-FR': 'Studio de Développement de Jeux',
+                  'cs-CZ': 'Herní vývojové studio',
+                },
+                link: 'https://game-dev-studio-docs.bmad-method.org/',
+                attrs: { target: '_blank' },
+              },
+              {
+                label: 'Test Architect (TEA)',
+                translations: {
+                  'vi-VN': 'Kiến trúc sư Kiểm thử (TEA)',
+                  'zh-CN': '测试架构师 (TEA)',
+                  'fr-FR': 'Architecte de Tests (TEA)',
+                  'cs-CZ': 'Testovací architekt (TEA)',
+                },
+                link: 'https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/',
+                attrs: { target: '_blank' },
+              },
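The expanded sidebar entries above repeat the same four-field shape (`label`, `translations`, `link`, `attrs`). A small factory could cut that repetition; this is a suggestion only, and `externalDocsLink` is a hypothetical helper, not part of the repo or the Starlight API:

```javascript
// Hypothetical helper producing the external-link item shape used above.
function externalDocsLink(label, link, translations) {
  return { label, translations, link, attrs: { target: '_blank' } };
}

// Usage matching the 'BMad Builder' entry in the diff:
const builder = externalDocsLink(
  'BMad Builder',
  'https://bmad-builder-docs.bmad-method.org/',
  { 'vi-VN': 'BMad Builder', 'zh-CN': 'BMad 构建器', 'fr-FR': 'BMad Builder', 'cs-CZ': 'BMad Builder' },
);
```

Inlining the objects, as the diff does, keeps each entry greppable; the helper trades that for brevity, so either choice is defensible.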