Compare commits

18 Commits: 9c40c1949b...5d3334a2a6

| SHA1 |
|---|
| 5d3334a2a6 |
| e36f219c81 |
| 9debc165aa |
| 65b810a11f |
| e6cdc93b79 |
| e174bebc60 |
| fcf20f1c7b |
| e011192525 |
| 91a57499e9 |
| 48a7ec8bff |
| 3da984a491 |
| 815600e4ca |
| 7ee5fa313b |
| 3e89b30b3c |
| b4d73b7daf |
| 6ff74ba662 |
| 1ad1f91e38 |
| 350688df67 |
@@ -13,7 +13,7 @@
"name": "bmad-pro-skills",
"source": "./",
"description": "Next level skills for power users — advanced prompting techniques, agent management, and more.",
-"version": "6.3.0",
+"version": "6.6.0",
"author": {
"name": "Brian (BMad) Madison"
},

@@ -35,7 +35,7 @@
"name": "bmad-method-lifecycle",
"source": "./",
"description": "Full-lifecycle AI development framework — agents and workflows for product analysis, planning, architecture, and implementation.",
-"version": "6.3.0",
+"version": "6.6.0",
"author": {
"name": "Brian (BMad) Madison"
},

CHANGELOG.md (+26 lines)

@@ -1,5 +1,31 @@
# Changelog

## v6.6.0 - 2026-04-28

### 💥 Breaking Changes

* `--tools none` is no longer accepted; fresh `--yes` installs now require an explicit `--tools <id>`. Existing-install flows are unchanged. Run `npx bmad-method --list-tools` to see supported IDs (#2346)
* `project_name` has moved from `[modules.bmm]` to `[core]` in `config.toml`. Existing installs are auto-migrated on next install/update — no manual action required (#2348)
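The `project_name` move amounts to relocating one key in `config.toml` (a sketch; the value shown is illustrative):

```toml
# Before v6.6.0
[modules.bmm]
project_name = "my-project"   # illustrative value

# After v6.6.0 (auto-migrated on the next install/update)
[core]
project_name = "my-project"
```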

### 🎁 Features

* **Non-interactive config for CI/Docker** — new `--set <module>.<key>=<value>` (repeatable) and `--list-options [module]` flags allow installer configuration without prompts. Routes values to the correct config file with prototype-pollution defenses (#2354)
* **Brownfield epic scoping** — Create Epics and Stories workflow now detects file-overlap between epics and applies an Implementation Efficiency principle plus a design completeness gate, reducing unnecessary file churn (#1826)

### 🐛 Fixes

* **Custom module installer** — Azure DevOps URLs now parse correctly with multi-segment paths and `_git` prefixes (#2269); HTTP (non-HTTPS) Git URLs are preserved for self-hosted servers (#2344); community installs route through `PluginResolver` so marketplace plugins with nested `module.yaml` install all skills (#2331); URL-source modules resolve from disk cache on re-install instead of warning (#2323); local `--custom-content` modules resolve correctly and `[modules.<code>]` TOML keys use the module code rather than display name (#2316); `--yes` with `--custom-source` now runs the full update path so version tags are respected (#2336)
* **Installer safety** — `--list-tools` flag added; empty or mistyped tool IDs are rejected with specific errors (#2346)
* **Channel and dist-tag handling** — installer launched from a prerelease (e.g. `@next`) now defaults external module channels to `next` instead of silently downgrading to stable (#2321); stable publishes advance the `@next` dist-tag so prerelease users no longer leapfrog or miss update notifications (#2320)
* **Architecture validation gate** — step-07 validation template no longer ships pre-checked; the status field is now templated against actual checklist completion (#2347)
* **bmad-help data integrity** — `bmad-help.csv` is no longer transformed at merge time and is emitted in its documented schema; 31 misaligned rows in core/bmm `module-help.csv` repaired (#2349)
* **Config robustness** — malformed `module.yaml` (scalars, arrays) is now rejected before it can crash the installer (#2348)
* **Legacy cleanup** — pre-v6.2.0 wrapper skills (`bmad-bmm-*`, `bmad-agent-bmm-*`) are removed automatically on upgrade so they no longer error with missing-file warnings (#2315)

### 📚 Docs

* Complete Chinese (zh-CN) translations for `named-agents.md` and `expand-bmad-for-your-org.md`; localized BMad Ecosystem sidebar (CIS, BMB, TEA, WDS) across zh-cn, vi-vn, fr-fr, cs-cz (#2355)

## v6.5.0 - 2026-04-26

### 🎁 Features

@@ -52,6 +52,15 @@ Follow the installer prompts, then open your AI IDE (Claude Code, Cursor, etc.)
npx bmad-method install --directory /path/to/project --modules bmm --tools claude-code --yes
```

Override any module config option with `--set <module>.<key>=<value>` (repeatable). Run `--list-options [module]` to see locally-known official keys (built-in modules plus any external officials cached on this machine):

```bash
npx bmad-method install --yes \
  --modules bmm --tools claude-code \
  --set bmm.project_knowledge=research \
  --set bmm.user_skill_level=expert
```

[See all installation options](https://docs.bmad-method.org/how-to/non-interactive-installation/)

> **Not sure what to do?** Ask `bmad-help` — it tells you exactly what's next and what's optional. You can also ask questions like `bmad-help I just finished the architecture, what do I do next?`

@@ -18,7 +18,7 @@ Use `npx bmad-method install` to set up BMad in your project. One command handle

- **Node.js** 20+ (the installer requires it)
- **Git** (for cloning external modules)
-- **An AI tool** such as Claude Code or Cursor — or install without one using `--tools none`
+- **An AI tool** such as Claude Code or Cursor (run `npx bmad-method install --list-tools` to see all supported tools)

:::

@@ -118,11 +118,12 @@ Under `--yes`, patch and minor upgrades apply automatically. Majors stay frozen
### Flag reference

| Flag | Purpose |
| --- | --- |
| `--yes`, `-y` | Skip all prompts; accept flag values + defaults |
| `--directory <path>` | Install into this directory (default: current working dir) |
| `--modules <a,b,c>` | Exact module set. Core is auto-added. Not a delta — list everything you want kept. |
-| `--tools <a,b>` or `--tools none` | IDE/tool selection. `none` skips tool config entirely. |
+| `--tools <a,b>` | IDE/tool selection. Required for fresh `--yes` installs. Run `--list-tools` for valid IDs. |
+| `--list-tools` | Print all supported tool/IDE IDs (with target directories) and exit. |
| `--action <type>` | `install`, `update`, or `quick-update`. Defaults based on existing install state. |
| `--custom-source <urls>` | Install custom modules from Git URLs or local paths |
| `--channel <stable\|next>` | Apply to all externals (aliased as `--all-stable` / `--all-next`) |

@@ -130,7 +131,9 @@ Under `--yes`, patch and minor upgrades apply automatically. Majors stay frozen
| `--all-next` | Alias for `--channel=next` |
| `--next=<code>` | Put one module on next. Repeatable. |
| `--pin <code>=<tag>` | Pin one module to a specific tag. Repeatable. |
-| `--user-name`, `--communication-language`, `--document-output-language`, `--output-folder` | Override per-user config defaults |
+| `--set <module>.<key>=<value>` | Set any module config option non-interactively (preferred — see [Module config overrides](#module-config-overrides)). Repeatable. |
+| `--list-options [module]` | Print every `--set` key for built-in and locally-cached official modules, then exit. Pass a module code to scope to one module. |
+| `--user-name`, `--communication-language`, `--document-output-language`, `--output-folder` | Legacy shortcuts equivalent to `--set core.<key>=<value>` (still supported) |

Precedence when flags overlap: `--pin` beats `--next=` beats `--channel` / `--all-*` beats the registry default (`stable`).
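That precedence chain reads naturally as a first-match lookup. A sketch of the rule (an illustration of the documented order, not the installer's actual code):

```python
def resolve_ref(code, pins=None, nexts=None, channel=None):
    """Resolve a module's install ref: --pin > --next= > --channel/--all-* > default."""
    pins = pins or {}
    nexts = nexts or set()
    if code in pins:       # --pin <code>=<tag> wins outright
        return pins[code]
    if code in nexts:      # --next=<code> puts this one module on next
        return "next"
    if channel:            # --channel / --all-stable / --all-next applies to all externals
        return channel
    return "stable"        # registry default

print(resolve_ref("bmb", pins={"bmb": "v1.7.0"}, nexts={"bmb"}, channel="next"))  # v1.7.0
print(resolve_ref("cis", nexts={"cis"}, channel="stable"))                        # next
print(resolve_ref("gds"))                                                         # stable
```

A pinned module stays pinned even when `--all-next` is also passed, which is what makes `--pin` safe for reproducible CI installs.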
@@ -165,19 +168,56 @@ npx bmad-method install --yes --modules bmm,bmb --all-next --tools claude-code

```bash
npx bmad-method install --yes --action update \
-  --modules bmm,bmb,gds \
-  --tools none
+  --modules bmm,bmb,gds
```

`--tools` is omitted intentionally — `--action update` reuses the tools configured during the first install.

**Mix channels — bmb on next, gds on stable:**

```bash
npx bmad-method install --yes --action update \
  --modules bmm,bmb,cis,gds \
-  --next=bmb \
-  --tools none
+  --next=bmb
```

### Module config overrides

`--set <module>.<key>=<value>` lets you set any module config option non-interactively. It's repeatable and scales to every module — present and future. The flag is applied as a post-install patch: the installer runs its normal flow first, then `--set` upserts each value into `_bmad/config.toml` (team scope) or `_bmad/config.user.toml` (user scope), and into `_bmad/<module>/config.yaml` so declared values carry forward to the next install.

**Example — install bmm with explicit project knowledge and skill level:**

```bash
npx bmad-method install --yes \
  --modules bmm \
  --tools claude-code \
  --set bmm.project_knowledge=research \
  --set bmm.user_skill_level=expert
```

**Discover available keys for a module:**

```bash
npx bmad-method install --list-options bmm
```

`--list-options` (no argument) lists every key the installer can find locally — built-in modules (`core`, `bmm`) plus any currently cached official modules. The cache is per-machine and can be cleared, so previously installed officials won't appear on a fresh checkout or an ephemeral CI worker until they're installed again. Community and custom modules aren't enumerated here; read the module's `module.yaml` directly to see what keys it declares.

**How it works:**

- **Routing.** The patch step looks for `[modules.<module>] <key>` (or `[core] <key>`) in `config.user.toml` first; if found there, it updates that file. Otherwise it writes to the team-scope `config.toml`. So user-scope keys (e.g. `core.user_name`, `bmm.user_skill_level`) end up in `config.user.toml` and team-scope keys end up in `config.toml`, matching the partition the installer uses.
- **Verbatim values.** The value is written exactly as you provided it — no `result:` template rendering. To get the rendered form (e.g. `{project-root}/research`), pass it explicitly: `--set bmm.project_knowledge='{project-root}/research'`.
- **Carry-forward, declared keys.** Values for keys declared in `module.yaml` survive subsequent installs because they're also written to `_bmad/<module>/config.yaml`, which the installer reads as the prompt default on the next run.
- **Carry-forward, undeclared keys.** A value for a key the module's schema doesn't declare lands in `config.toml` for the current install but won't be re-emitted on the next install (the manifest writer's schema-strict partition drops unknown keys). Re-pass `--set` if you need it sticky, or edit `_bmad/config.toml` directly.
- **No validation.** `single-select` values aren't checked against the allowed choices, and unknown keys aren't rejected — whatever you assert is written.
- **Modules not in `--modules`.** Setting a value for a module you didn't include prints a warning and the value is dropped (no file gets created for an uninstalled module).

The legacy core shortcuts (`--user-name`, `--output-folder`, etc.) still work and remain documented for backward compatibility, but `--set core.user_name=...` is equivalent.

:::note[Works with quick-update]
`--set` is a post-install patch, so it applies the same way regardless of action type. Under `bmad install --action quick-update` (or `--yes` against an existing install, where quick-update is the default), `--set` patches the central config files at the end just like a regular install.
:::

:::caution[Rate limit on shared IPs]
Anonymous GitHub API calls are capped at 60/hour per IP. A single install hits the API once per external module to resolve the stable tag. Offices behind NAT, CI runner pools, and VPNs can collectively exhaust this.
:::
@@ -204,7 +244,7 @@ For cross-machine reproducibility, don't rely on rerunning the same `--modules`

```bash
npx bmad-method install --yes --modules bmb,cis \
-  --pin bmb=v1.7.0 --pin cis=v0.4.2 --tools none
+  --pin bmb=v1.7.0 --pin cis=v0.4.2 --tools claude-code
```

## Troubleshooting

@@ -68,6 +68,7 @@ Select **Yes**, then provide a source:
| Input Type | Example |
| --------------------- | ------------------------------------------------- |
| HTTPS URL (any host) | `https://github.com/org/repo` |
+| HTTP URL (any host) | `http://host/org/repo` |
| HTTPS URL with subdir | `https://github.com/org/repo/tree/main/my-module` |
| SSH URL | `git@github.com:org/repo.git` |
| Local path | `/Users/me/projects/my-module` |

@@ -68,6 +68,7 @@ Select **Yes**, then enter a source:
| Input Type | Example |
| --------------------- | ------------------------------------------------- |
| HTTPS URL on any host | `https://github.com/org/repo` |
+| HTTP URL on any host | `http://host/org/repo` |
| HTTPS URL pointing to a subdirectory | `https://github.com/org/repo/tree/main/my-module` |
| SSH URL | `git@github.com:org/repo.git` |
| Local path | `/Users/me/projects/my-module` |

@@ -0,0 +1,94 @@
---
title: "Named Agents"
description: Why BMad's agents have names, personas, and customization — and what that unlocks compared to menu-driven or prompt-only approaches
sidebar:
  order: 1
---

You say "Hey Mary, let's brainstorm," and Mary activates. She greets you in your configured language, in her distinct persona, and reminds you that `bmad-help` is always available. Then she skips the menu and goes straight into brainstorming — your intent was already clear enough.

This page explains what happens behind the scenes, and why BMad is designed this way.

## The three-legged stool

BMad's agent model rests on three composable primitives:

| Primitive | What it provides | Where it lives |
|---|---|---|
| **Skill** | Capability — one concrete thing an agent can do (brainstorming, writing a PRD, implementing a story) | `.claude/skills/{skill-name}/SKILL.md` (or the equivalent location for your IDE) |
| **Named Agent** | Persona continuity — a recognizable identity that wraps a set of related skills in a consistent voice, principles, and visual identity | Skills whose directory name starts with `bmad-agent-*` |
| **Customization** | Making it yours — override options that reshape agent behavior, add MCP integrations, swap templates, layer on org conventions | `_bmad/custom/{skill-name}.toml` (team-committed overrides) and `.user.toml` (personal, gitignored) |

Remove any leg and the experience collapses:

- Skills without agents → users hunt through a capability list by name or number
- Agents without skills → persona with no capability
- No customization → everyone gets identical out-of-the-box defaults, and any org-specific need means a fork

## What the named agents bring

BMad ships six named agents, each mapped to a phase of the BMad Method:

| Agent | Phase | Workflows |
|---|---|---|
| 📊 **Mary**, Business Analyst | Analysis | Market research, brainstorming, product briefs, PRFAQs |
| 📚 **Paige**, Technical Writer | Analysis | Project documentation, diagrams, doc validation |
| 📋 **John**, Product Manager | Planning | PRD creation, epic/story breakdown, implementation-readiness review |
| 🎨 **Sally**, UX Designer | Planning | UX design specs |
| 🏗️ **Winston**, System Architect | Solutioning | Technical architecture, consistency checks |
| 💻 **Amelia**, Senior Engineer | Implementation | Story execution, quick dev, code review, sprint planning |

Each agent has a hard-coded identity (name, title, domain of expertise) and a customizable layer (role, principles, communication style, icon, menu). You can rewrite Mary's principles or add menu items, but you can't change her name — deliberately. Brand recognition survives customization, so "Hey Mary" always activates the analyst no matter how a team shapes her behavior.

## The activation flow

When a named agent is invoked, eight steps run in order:

1. **Resolve agent config** — a Python resolver (stdlib `tomllib`) merges the built-in `customize.toml` with team and personal overrides
2. **Run pre-steps** — any team-configured pre-greeting behavior
3. **Adopt the persona** — hard-coded identity plus the customized role, communication style, and principles
4. **Load persistent facts** — org rules, compliance notes; files can be pulled in with a `file:` prefix (e.g. `file:{project-root}/docs/project-context.md`)
5. **Load config** — user name, communication language, output language, artifact paths
6. **Greet** — personalized, in the configured language, prefixed with the agent's emoji so you can tell at a glance who's speaking
7. **Run post-steps** — any team-configured post-greeting setup
8. **Dispatch or show the menu** — if your opening message maps to a menu item, it executes directly; otherwise the menu is shown and input awaited

Step 8 is where intent meets capability. "Hey Mary, let's brainstorm" skips menu rendering because `bmad-brainstorming` is an obvious match for the `BP` item on Mary's menu. If you're vaguer, she asks one short question instead of running a confirmation ritual. If nothing matches, she simply continues the conversation.

## Why not just menus?

Menus force the user to accommodate the tool. You have to remember that brainstorming lives under the analyst agent's `BP` code rather than on the PM agent, and know which persona owns which feature. That's cognitive load the tool imposes on you.

Named agents invert the relationship. You say what you want, in whatever way is natural, to someone. The agent knows who she is and what she can do. When your intent is clear enough, she just starts.

The menu still exists as a fallback — shown when you're exploring, skipped when you're certain.

## Why not a blank prompt?

A blank prompt assumes you know the magic words. "Help me brainstorm" might work while "help me riff on my SaaS idea" might not, and the outcome depends on phrasing. You become a prompt engineer.

Named agents add structure without sacrificing freedom. The persona stays consistent, capabilities stay discoverable, and `bmad-help` is always one command away. You don't have to guess what an agent can do, or read a manual before using it.

## Customization is a first-class citizen

The customization model is what scales this approach from a single developer to a whole organization.

Every agent ships a `customize.toml` with sensible defaults. Teams commit overrides in `_bmad/custom/bmad-agent-{role}.toml`. Individuals layer personal preferences in `.user.toml` (gitignored). The resolver merges all three layers at activation with predictable, structured rules.

Most users never hand-write these files. The `bmad-customize` skill walks you through choosing a target, distinguishing agent vs. workflow scope, authoring the override, and validating the merged result — putting customization within reach of anyone who understands their own intent, not just the TOML-fluent.

An example: a team commits one file telling Amelia to always use the Context7 MCP tool for library-docs lookups and to fall back to Linear when a story isn't found in the local epics list. Every dev workflow Amelia dispatches (dev-story, quick-dev, create-story, code-review) inherits those behaviors — no source edits, no per-workflow duplication.

There is also a second customization surface for **cross-cutting concerns**: the central configs `_bmad/config.toml` and `_bmad/config.user.toml` (installer-maintained, rebuilt from each module's `module.yaml`) plus `_bmad/custom/config.toml` (team-committed) and `_bmad/custom/config.user.toml` (personal, gitignored) as overrides. This is where the **agent roster** lives — lightweight descriptors that roster consumers such as `bmad-party-mode`, `bmad-retrospective`, and `bmad-advanced-elicitation` read to learn which agents exist and how to roleplay them. Use a team override to redefine an agent org-wide; use a `.user.toml` override to add fictional characters (Kirk, Spock, domain experts) as a personal experiment — without touching any skill directory. Per-skill customize files shape how Mary behaves **when she activates**; central config shapes the Mary that other skills see **when they consult the roster**.

For the full customization docs with hands-on examples, see:

- [How to customize BMad](../how-to/customize-bmad.md) — reference for what's customizable and how merging works
- [How to expand BMad for your org](../how-to/expand-bmad-for-your-org.md) — five hands-on recipes covering agent-wide rules, workflow conventions, external publishing, template swaps, and roster management
- The `bmad-customize` skill — a guided authoring assistant that turns your intent into correctly placed, validated override files

## The bigger idea

Most AI assistants today are either menus or prompt boxes, and both push cognitive load onto the user. Named agents plus customizable skills let you talk to a teammate who knows the project — and let your organization shape that teammate without forking.

Next time you type "Hey Mary, let's brainstorm" and she just gets to work, notice what **doesn't** happen. No slash command, no menu to navigate, no awkward feature tour. That absence is the design.

@@ -0,0 +1,258 @@
---
title: "How to Expand BMad for Your Org"
description: Five customization recipes that reshape BMad without forking — agent-wide rules, workflow conventions, external publishing, template swaps, and roster changes
sidebar:
  order: 9
---

BMad's customization machinery lets an organization reshape behavior without editing installed files or forking skills. This guide covers five recipes that address most enterprise needs.

:::note[Prerequisites]

- BMad installed in a project (see [How to install BMad](./install-bmad.md))
- Familiarity with the customization model (see [How to customize BMad](./customize-bmad.md))
- Python 3.11+ on PATH (the resolver uses only the standard library; no `pip install` needed)
:::

:::tip[How to apply these recipes]
The **per-skill recipes** below (Recipes 1–4) can be applied by running the `bmad-customize` skill and describing your intent — it picks the right customization surface, generates the override file, and validates the merged result. Recipe 5 (roster overrides in central config) is beyond the v1 skill's scope and still needs manual authoring. This document is the authoritative reference for **what** to override; `bmad-customize` handles the **how** (for the agent/workflow surfaces).
:::

## The three-layer mental model

Before picking a recipe, understand which layer your override lands on:

| Layer | Override file location | Scope |
|---|---|---|
| **Agent** (e.g. Amelia, Mary, John) | `[agent]` section in `_bmad/custom/bmad-agent-{role}.toml` | Travels with the persona into **every workflow that agent dispatches** |
| **Workflow** (e.g. product-brief, create-prd) | `[workflow]` section in `_bmad/custom/{workflow-name}.toml` | A single run of that one workflow |
| **Central config** | `[agents.*]`, `[core]`, `[modules.*]` in `_bmad/custom/config.toml` | The roster (who's available to party-mode, retrospective, elicitation) and org-wide install settings |

Rule of thumb: if the rule should hold whenever an engineer does any dev work, customize the **dev agent**. If it only applies while writing a product brief, customize the **product-brief workflow**. If you're changing who's in the room (renaming agents, adding custom characters, standardizing artifact paths), edit the **central config**.

## Recipe 1: Make an agent's rules follow every workflow it dispatches

**Scenario:** Standardize tool usage and external-system integration so every workflow the agent dispatches inherits the behavior. This is the highest-leverage pattern.

**Example: Amelia (the dev agent) always uses Context7 for library-docs lookups and falls back to Linear when a story isn't in the local epics list.**

```toml
# _bmad/custom/bmad-agent-dev.toml

[agent]

# Loaded at every activation. Carries into dev-story, quick-dev,
# create-story, code-review — every skill Amelia dispatches.
persistent_facts = [
  "For any library documentation lookup (React, TypeScript, Zod, Prisma, etc.), call the context7 MCP tool (`mcp__context7__resolve_library_id` then `mcp__context7__get_library_docs`) before relying on training-data knowledge. Up-to-date docs trump memorized APIs.",
  "When a story reference isn't found in {planning_artifacts}/epics-and-stories.md, search Linear via `mcp__linear__search_issues` using the story ID or title before asking the user to clarify. If Linear returns a match, treat it as the authoritative story source.",
]
```

**Why it works:** Two sentences reshape every dev workflow in the org — no per-workflow duplication, no source edits. Every new engineer inherits the conventions the moment they clone the repo.

**Team file vs. personal file:**

- `bmad-agent-dev.toml`: committed to git, applies to the whole team
- `bmad-agent-dev.user.toml`: gitignored, personal preferences layered on top

## Recipe 2: Enforce org conventions in a specific workflow

**Scenario:** Shape *what* a workflow outputs so it satisfies compliance, audit, or downstream consumers.

**Example: every product brief must carry compliance fields, and the agent knows the org's publishing conventions.**

```toml
# _bmad/custom/bmad-product-brief.toml

[workflow]

persistent_facts = [
  "Every brief must include an 'Owner' field, a 'Target Release' field, and a 'Security Review Status' field.",
  "Non-commercial briefs (internal tools, research projects) must still include a user-value section, but can omit market differentiation.",
  "file:{project-root}/docs/enterprise/brief-publishing-conventions.md",
]
```

**Effect:** These facts load at step 3 of workflow activation. By the time the agent drafts the brief, it already knows the required fields and the enterprise conventions doc. The built-in default (`file:{project-root}/**/project-context.md`) still loads, because this is an append operation.

## Recipe 3: Publish finished output to external systems

**Scenario:** After a workflow produces its output, automatically publish it to the enterprise system of record (Confluence, Notion, SharePoint) and create follow-up work items (Jira, Linear, Asana).

**Example: the brief auto-publishes to Confluence, with optional Jira epic creation.**

```toml
# _bmad/custom/bmad-product-brief.toml

[workflow]

# Terminal hook. A scalar override replaces the empty default outright.
on_complete = """
Publish and offer follow-up:

1. Read the finalized brief file path from the prior step.
2. Call `mcp__atlassian__confluence_create_page` with:
   - space: "PRODUCT"
   - parent: "Product Briefs"
   - title: the brief's title
   - body: the brief's markdown contents
   Capture the returned page URL.
3. Tell the user: "Brief published to Confluence: <url>".
4. Ask: "Want me to open a Jira epic for this brief now?"
5. If yes, call `mcp__atlassian__jira_create_issue` with:
   - type: "Epic"
   - project: "PROD"
   - summary: the brief's title
   - description: a short summary plus a link back to the Confluence page.
   Report the epic key and URL.
6. If no, exit cleanly.

If either MCP tool fails, report the failure, print the brief path,
and ask the user to publish manually.
"""
```

**Why `on_complete` rather than `activation_steps_append`:** `on_complete` runs exactly once, at the terminal stage, after the workflow's main output is written. That's the right moment to publish an artifact. `activation_steps_append` runs at every activation, before the workflow even starts.
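For contrast, a pre-flight override on the other hook might look like this (a sketch — the list-of-strings shape for `activation_steps_append` is an assumption; this guide only names the key):

```toml
# _bmad/custom/bmad-product-brief.toml — hypothetical sketch
# activation_steps_append runs at every activation, BEFORE the workflow starts:
# the wrong place for publishing, the right place for pre-flight reminders.

[workflow]
activation_steps_append = [
  "Before drafting, restate the compliance fields this brief must include.",
]
```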

**Trade-offs:**

- **The Confluence publish is non-destructive**, so it always runs on completion
- **Jira epic creation is visible to the whole team** and triggers sprint-planning signals, so it asks for confirmation
- **Graceful degradation:** if an MCP tool fails, hand off to the user rather than silently dropping the output

## Recipe 4: Swap in your own output template

**Scenario:** The default output structure doesn't match the format your org expects, or different teams in one repo need different templates.

**Example: point the product-brief workflow at an enterprise template.**

```toml
# _bmad/custom/bmad-product-brief.toml

[workflow]
brief_template = "{project-root}/docs/enterprise/brief-template.md"
```

**How it works:** The workflow's own `customize.toml` sets `brief_template = "resources/brief-template.md"` (a bare path, resolved from the skill root). Your override points at a file under `{project-root}`, so at step 4 the agent reads your template instead of the built-in one.

**Template authoring tips:**

- Keep templates under `{project-root}/docs/` or `{project-root}/_bmad/custom/templates/` so they're versioned alongside the override files
- Follow the built-in template's structural conventions (section headings, frontmatter); the agent adapts the actual content
- In multi-team repos, use `.user.toml` so each team points at its own template without touching the committed team file

## Recipe 5: Customize the roster

**Scenario:** Change *who's in the room* in roster-driven skills like `bmad-party-mode`, `bmad-retrospective`, and `bmad-advanced-elicitation` — without editing source or forking. Three common variations follow.

### 5a. Reshape a BMad agent org-wide

Every real agent has a descriptor the installer synthesizes from `module.yaml`. Overriding it changes tone and positioning across all roster consumers:

```toml
# _bmad/custom/config.toml (committed to git — applies to every developer)

[agents.bmad-agent-analyst]
description = "Mary the Regulatory-Aware Business Analyst — channels Porter and Minto, but lives and breathes FDA audit trails. Speaks like a forensic investigator presenting a case file."
```

Party-mode now spawns Mary with the new description. The analyst activation flow itself is unaffected, because Mary's behavior is governed by her per-skill `customize.toml`. This override changes **how external skills perceive and introduce her**, not how she works internally.

### 5b. Add fictional or custom agents

A complete descriptor is all the roster features need to recognize an agent — no skill directory required. Great for adding personality diversity to party mode or brainstorming:

```toml
# _bmad/custom/config.user.toml (personal — gitignored)

[agents.spock]
team = "startrek"
name = "Commander Spock"
title = "Science Officer"
icon = "🖖"
description = "Logic first, emotion suppressed. Begins observations with 'Fascinating.' Never rounds up. Counterpoint to any argument that relies on gut instinct."

[agents.mccoy]
team = "startrek"
name = "Dr. Leonard McCoy"
title = "Chief Medical Officer"
icon = "⚕️"
description = "Country doctor's warmth, short fuse. 'Dammit Jim, I'm a doctor not a ___.' Ethics-driven counterweight to Spock."
```

Ask party-mode to "invite the Enterprise crew" and it filters on `team = "startrek"` and spawns Spock and McCoy. Real BMad agents (Mary, Amelia) can sit at the same table.

### 5c. Lock in team install settings

The installer prompts every developer for values like the `planning_artifacts` path. When the org needs one uniform answer, lock it in central config — whatever a developer answered at their local prompt is overridden at resolution time:

```toml
# _bmad/custom/config.toml

[modules.bmm]
planning_artifacts = "{project-root}/shared/planning"
implementation_artifacts = "{project-root}/shared/implementation"

[core]
document_output_language = "English"
```

Personal settings such as `user_name`, `communication_language`, or `user_skill_level` stay in each developer's own `_bmad/config.user.toml`. The team file should leave them alone.

**Why central config rather than a per-agent customize.toml:** A per-agent file shapes how *one* agent behaves when it activates. Central config shapes what roster consumers see when they look at the *whole picture*: which agents exist, what they're called, which team they belong to, and the install settings the entire repo agrees on. Two surfaces, two jobs.

## Reinforcing global rules in IDE session files

BMad customizations load when a skill activates. Many IDE tools also load a global instruction file at **the start of every session**, before any skill runs (`CLAUDE.md`, `AGENTS.md`, `.cursor/rules/`, `.github/copilot-instructions.md`, and so on). For rules that should hold even outside BMad skills, declare them in the global instructions too.

**When to declare twice:**

- The rule matters enough to hold in ordinary conversation (no skill active)
- You want belt-and-suspenders, because the model's training-data defaults may pull the other way
- The rule is terse enough that repeating it won't bloat the session file

**Example: reinforce Recipe 1's dev-agent rule in the repo's `CLAUDE.md`.**

```markdown
<!-- Any file-read of library docs goes through the context7 MCP tool
(`mcp__context7__resolve_library_id` then `mcp__context7__get_library_docs`)
before relying on training-data knowledge. -->
```

One sentence, loaded every session. It pairs with the `bmad-agent-dev.toml` customization so the rule holds both inside Amelia's workflows and in ad-hoc conversation with the assistant. Each layer covers its own scope:

| Layer | Scope | Use for |
|---|---|---|
| IDE session file (`CLAUDE.md` / `AGENTS.md`) | Every session, before any skill activates | Short universal rules that should hold outside BMad too |
| BMad agent customization | Every workflow that agent dispatches | Behavior tied to the agent's persona |
| BMad workflow customization | A single workflow run | Workflow-specific output formats, publishing hooks, templates |
| BMad central config | Roster + shared install settings | Who's in the room, shared paths the team agrees on |

Keep IDE session files **lean**. A dozen well-chosen lines beat a long essay — the model reads the file every turn, and noise drowns signal.

## Combining the recipes

The five recipes compose freely. A typical enterprise override for `bmad-product-brief` might set `persistent_facts` (Recipe 2), `on_complete` (Recipe 3), and `brief_template` (Recipe 4) at once. Agent-level rules (Recipe 1) live in a separate file named for the agent, and central config (Recipe 5) locks the shared roster and team settings — all four in effect side by side.

```toml
# _bmad/custom/bmad-product-brief.toml (workflow level)

[workflow]
persistent_facts = ["..."]
brief_template = "{project-root}/docs/enterprise/brief-template.md"
on_complete = """ ... """
```

```toml
# _bmad/custom/bmad-agent-analyst.toml (agent level — Mary dispatches product-brief)

[agent]
persistent_facts = ["Always include a 'Regulatory Review' section when the domain involves healthcare, finance, or children's data."]
```

The effect: Mary loads the regulatory-review rule when her persona activates. When the user picks the product-brief menu item, the workflow loads its own conventions, writes into the enterprise template, and publishes to Confluence on completion. Every layer contributes, and none of it required editing BMad source.

## Troubleshooting

**Override not taking effect?** Check that the file is under `_bmad/custom/` and uses the exact skill directory name (e.g. `bmad-agent-dev.toml`, not `bmad-dev.toml`). See [How to customize BMad](./customize-bmad.md).

**Unsure of MCP tool names?** Use the exact names your MCP server exposes in the current session. If in doubt, ask Claude Code to list the available MCP tools. Names hard-coded in `persistent_facts` or `on_complete` do nothing when the MCP server isn't connected.

**Recipe doesn't fit your case?** The recipes above are illustrative. The underlying mechanisms (three-layer merge, structured rules, agent rules traveling into workflows) support many more patterns; combine them as needed.

@@ -68,6 +68,7 @@ Would you like to install from a custom source (Git URL or local path)?
| Input Type | Example |
| -------- | ---- |
| HTTPS URL (any host) | `https://github.com/org/repo` |
+| HTTP URL (any host) | `http://host/org/repo` |
| HTTPS URL with subdir | `https://github.com/org/repo/tree/main/my-module` |
| SSH URL | `git@github.com:org/repo.git` |
| Local path | `/Users/me/projects/my-module` |

@@ -1,12 +1,12 @@
{
"name": "bmad-method",
-"version": "6.5.0",
+"version": "6.6.0",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "bmad-method",
-"version": "6.5.0",
+"version": "6.6.0",
"license": "MIT",
"dependencies": {
"@clack/core": "^1.0.0",

@@ -1,7 +1,7 @@
{
"$schema": "https://json.schemastore.org/package.json",
"name": "bmad-method",
-"version": "6.5.0",
+"version": "6.6.0",
"description": "Breakthrough Method of Agile AI-driven Development",
"keywords": [
"agile",

@@ -39,12 +39,13 @@
"lint:fix": "eslint . --ext .js,.cjs,.mjs,.yaml --fix",
"lint:md": "markdownlint-cli2 \"**/*.md\"",
"prepare": "command -v husky >/dev/null 2>&1 && husky || exit 0",
-"quality": "npm run format:check && npm run lint && npm run lint:md && npm run docs:build && npm run test:install && npm run validate:refs && npm run validate:skills",
+"quality": "npm run format:check && npm run lint && npm run lint:md && npm run docs:build && npm run test:install && npm run test:urls && npm run validate:refs && npm run validate:skills",
"rebundle": "node tools/installer/bundlers/bundle-web.js rebundle",
-"test": "npm run test:refs && npm run test:install && npm run test:channels && npm run lint && npm run lint:md && npm run format:check",
+"test": "npm run test:refs && npm run test:install && npm run test:urls && npm run test:channels && npm run lint && npm run lint:md && npm run format:check",
"test:channels": "node test/test-installer-channels.js",
"test:install": "node test/test-installation-components.js",
"test:refs": "node test/test-file-refs-csv.js",
+"test:urls": "node test/test-parse-source-urls.js",
"validate:refs": "node tools/validate-file-refs.js --strict",
"validate:skills": "node tools/validate-skills.js --strict"
},

@@ -7,8 +7,8 @@
"description": "Produces battle-tested PRFAQ document and optional LLM distillate for PRD input.",
"supports-headless": true,
"phase-name": "1-analysis",
-"after": ["brainstorming", "perform-research"],
-"before": ["create-prd"],
+"preceded-by": ["brainstorming", "perform-research"],
+"followed-by": ["create-prd"],
"is-required": false,
"output-location": "{planning_artifacts}"
}

@@ -8,8 +8,8 @@
"description": "Produces executive product brief and optional LLM distillate for PRD input.",
"supports-headless": true,
"phase-name": "1-analysis",
-"after": ["brainstorming", "perform-research"],
-"before": ["create-prd"],
+"preceded-by": ["brainstorming", "perform-research"],
+"followed-by": ["create-prd"],
"is-required": true,
"output-location": "{planning_artifacts}"
}

```diff
@@ -227,37 +227,39 @@ Prepare the content to append to the document:

 ### Architecture Completeness Checklist

-**✅ Requirements Analysis**
+Mark each item `[x]` only if validation confirms it; leave `[ ]` if it is missing, partial, or unverified. Any unchecked item must be reflected in the Gap Analysis above and in the Overall Status below.

-- [x] Project context thoroughly analyzed
-- [x] Scale and complexity assessed
-- [x] Technical constraints identified
-- [x] Cross-cutting concerns mapped
+**Requirements Analysis**

-**✅ Architectural Decisions**
+- [ ] Project context thoroughly analyzed
+- [ ] Scale and complexity assessed
+- [ ] Technical constraints identified
+- [ ] Cross-cutting concerns mapped

-- [x] Critical decisions documented with versions
-- [x] Technology stack fully specified
-- [x] Integration patterns defined
-- [x] Performance considerations addressed
+**Architectural Decisions**

-**✅ Implementation Patterns**
+- [ ] Critical decisions documented with versions
+- [ ] Technology stack fully specified
+- [ ] Integration patterns defined
+- [ ] Performance considerations addressed

-- [x] Naming conventions established
-- [x] Structure patterns defined
-- [x] Communication patterns specified
-- [x] Process patterns documented
+**Implementation Patterns**

-**✅ Project Structure**
+- [ ] Naming conventions established
+- [ ] Structure patterns defined
+- [ ] Communication patterns specified
+- [ ] Process patterns documented

-- [x] Complete directory structure defined
-- [x] Component boundaries established
-- [x] Integration points mapped
-- [x] Requirements to structure mapping complete
+**Project Structure**
+
+- [ ] Complete directory structure defined
+- [ ] Component boundaries established
+- [ ] Integration points mapped
+- [ ] Requirements to structure mapping complete

 ### Architecture Readiness Assessment

-**Overall Status:** READY FOR IMPLEMENTATION
+**Overall Status:** {{READY FOR IMPLEMENTATION | READY WITH MINOR GAPS | NOT READY}} (choose READY FOR IMPLEMENTATION only when all 16 checklist items are `[x]` and no Critical Gaps remain; choose NOT READY when any Critical Gap is open or any Requirements Analysis or Architectural Decisions item is unchecked; otherwise READY WITH MINOR GAPS)

 **Confidence Level:** {{high/medium/low}} based on validation results
```
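The status rule in the new template is mechanical enough to sanity-check in code. A minimal sketch, assuming per-section counts of unchecked items and a critical-gap flag (these names are hypothetical, not part of the workflow):

```javascript
// Hedged illustration of the Overall Status rule above; not part of the
// workflow itself. `unchecked` holds unchecked-item counts per section.
function overallStatus({ criticalGapsOpen, unchecked }) {
  const totalUnchecked = Object.values(unchecked).reduce((sum, n) => sum + n, 0);
  // NOT READY: any open Critical Gap, or any unchecked item in the
  // Requirements Analysis or Architectural Decisions sections.
  if (criticalGapsOpen || unchecked.requirementsAnalysis > 0 || unchecked.architecturalDecisions > 0) {
    return 'NOT READY';
  }
  // READY FOR IMPLEMENTATION: all 16 items checked, no Critical Gaps.
  if (totalUnchecked === 0) return 'READY FOR IMPLEMENTATION';
  return 'READY WITH MINOR GAPS';
}
```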
```diff
@@ -55,7 +55,8 @@ Load {planning_artifacts}/epics.md and review:
 2. **Requirements Grouping**: Group related FRs that deliver cohesive user outcomes
 3. **Incremental Delivery**: Each epic should deliver value independently
 4. **Logical Flow**: Natural progression from user's perspective
-5. **🔗 Dependency-Free Within Epic**: Stories within an epic must NOT depend on future stories
+5. **Dependency-Free Within Epic**: Stories within an epic must NOT depend on future stories
+6. **Implementation Efficiency**: Consider consolidating epics that all modify the same core files into fewer epics

 **⚠️ CRITICAL PRINCIPLE:**
 Organize by USER VALUE, not technical layers:
```
```diff
@@ -74,6 +75,18 @@ Organize by USER VALUE, not technical layers:
 - Epic 3: Frontend Components (creates reusable components) - **No user value**
 - Epic 4: Deployment Pipeline (CI/CD setup) - **No user value**

+**❌ WRONG Epic Examples (File Churn on Same Component):**
+
+- Epic 1: File Upload (modifies model, controller, web form, web API)
+- Epic 2: File Status (modifies model, controller, web form, web API)
+- Epic 3: File Access permissions (modifies model, controller, web form, web API)
+- All three epics touch the same files — consolidate into one epic with ordered stories
+
+**✅ CORRECT Alternative:**
+
+- Epic 1: File Management Enhancement (upload, status, permissions as stories within one epic)
+- Rationale: Single component, fully pre-designed, no feedback loop between epics
+
 **🔗 DEPENDENCY RULES:**

 - Each epic must deliver COMPLETE functionality for its domain
```
```diff
@@ -82,21 +95,38 @@ Organize by USER VALUE, not technical layers:

 ### 3. Design Epic Structure Collaboratively

-**Step A: Identify User Value Themes**
+**Step A: Assess Context and Identify Themes**
+
+First, assess how much of the solution design is already validated (Architecture, UX, Test Design).
+When the outcome is certain and direction changes between epics are unlikely, prefer fewer but larger epics.
+Split into multiple epics when there is a genuine risk boundary or when early feedback could change direction
+of following epics.
+
+Then, identify user value themes:

 - Look for natural groupings in the FRs
 - Identify user journeys or workflows
 - Consider user types and their goals

 **Step B: Propose Epic Structure**
-For each proposed epic:
+
+For each proposed epic (considering whether epics share the same core files):

 1. **Epic Title**: User-centric, value-focused
 2. **User Outcome**: What users can accomplish after this epic
 3. **FR Coverage**: Which FR numbers this epic addresses
 4. **Implementation Notes**: Any technical or UX considerations

-**Step C: Create the epics_list**
+**Step C: Review for File Overlap**
+
+Assess whether multiple proposed epics repeatedly target the same core files. If overlap is significant:
+
+- Distinguish meaningful overlap (same component end-to-end) from incidental sharing
+- Ask whether to consolidate into one epic with ordered stories
+- If confirmed, merge the epic FRs into a single epic, preserving dependency flow: each story must still fit within
+  a single dev agent's context
+
+**Step D: Create the epics_list**

 Format the epics_list as:
```
```diff
@@ -90,6 +90,12 @@ Review the complete epic and story breakdown to ensure EVERY FR is covered:
 - Dependencies flow naturally
 - Foundation stories only setup what's needed
 - No big upfront technical work
+- **File Churn Check:** Do multiple epics repeatedly modify the same core files?
+  - Assess whether the overlap pattern suggests unnecessary churn or is incidental
+  - If overlap is significant: Validate that splitting provides genuine value (risk mitigation, feedback loops, context size limits)
+  - If no justification for the split: Recommend consolidation into fewer epics
+  - ❌ WRONG: Multiple epics each modify the same core files with no feedback loop between them
+  - ✅ RIGHT: Epics target distinct files/components, OR consolidation was explicitly considered and rejected with rationale

 ### 5. Dependency Validation (CRITICAL)
```
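The File Churn Check above boils down to a pairwise file-set comparison. A sketch under the assumption that each epic lists the files it expects to touch (the `files` field and function name are illustrative, not part of the workflow tooling):

```javascript
// Illustrative only: flag epic pairs whose planned file sets overlap, so a
// reviewer can decide whether the overlap is meaningful or incidental.
function findFileOverlap(epics) {
  const overlaps = [];
  for (let i = 0; i < epics.length; i++) {
    for (let j = i + 1; j < epics.length; j++) {
      // Files planned by both epics in this pair.
      const shared = epics[i].files.filter((f) => epics[j].files.includes(f));
      if (shared.length > 0) {
        overlaps.push({ epics: [epics[i].name, epics[j].name], shared });
      }
    }
  }
  return overlaps;
}
```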
```diff
@@ -1,33 +1,33 @@
-module,skill,display-name,menu-code,description,action,args,phase,after,before,required,output-location,outputs
+module,skill,display-name,menu-code,description,action,args,phase,preceded-by,followed-by,required,output-location,outputs
 BMad Method,_meta,,,,,,,,,false,https://docs.bmad-method.org/llms.txt,
-BMad Method,bmad-document-project,Document Project,DP,Analyze an existing project to produce useful documentation.,,anytime,,,false,project-knowledge,*
-BMad Method,bmad-generate-project-context,Generate Project Context,GPC,Scan existing codebase to generate a lean LLM-optimized project-context.md. Essential for brownfield projects.,,anytime,,,false,output_folder,project context
-BMad Method,bmad-quick-dev,Quick Dev,QQ,Unified intent-in code-out workflow: clarify plan implement review and present.,,anytime,,,false,implementation_artifacts,spec and project implementation
-BMad Method,bmad-correct-course,Correct Course,CC,Navigate significant changes. May recommend start over update PRD redo architecture sprint planning or correct epics and stories.,,anytime,,,false,planning_artifacts,change proposal
+BMad Method,bmad-document-project,Document Project,DP,Analyze an existing project to produce useful documentation.,,,anytime,,,false,project-knowledge,*
+BMad Method,bmad-generate-project-context,Generate Project Context,GPC,Scan existing codebase to generate a lean LLM-optimized project-context.md. Essential for brownfield projects.,,,anytime,,,false,output_folder,project context
+BMad Method,bmad-quick-dev,Quick Dev,QQ,Unified intent-in code-out workflow: clarify plan implement review and present.,,,anytime,,,false,implementation_artifacts,spec and project implementation
+BMad Method,bmad-correct-course,Correct Course,CC,Navigate significant changes. May recommend start over update PRD redo architecture sprint planning or correct epics and stories.,,,anytime,,,false,planning_artifacts,change proposal
 BMad Method,bmad-agent-tech-writer,Write Document,WD,"Describe in detail what you want, and the agent will follow documentation best practices. Multi-turn conversation with subprocess for research/review.",write,,anytime,,,false,project-knowledge,document
 BMad Method,bmad-agent-tech-writer,Update Standards,US,Update agent memory documentation-standards.md with your specific preferences if you discover missing document conventions.,update-standards,,anytime,,,false,_bmad/_memory/tech-writer-sidecar,standards
 BMad Method,bmad-agent-tech-writer,Mermaid Generate,MG,Create a Mermaid diagram based on user description. Will suggest diagram types if not specified.,mermaid,,anytime,,,false,planning_artifacts,mermaid diagram
 BMad Method,bmad-agent-tech-writer,Validate Document,VD,Review the specified document against documentation standards and best practices. Returns specific actionable improvement suggestions organized by priority.,validate,[path],anytime,,,false,planning_artifacts,validation report
 BMad Method,bmad-agent-tech-writer,Explain Concept,EC,Create clear technical explanations with examples and diagrams for complex concepts.,explain,[topic],anytime,,,false,project_knowledge,explanation
-BMad Method,bmad-brainstorming,Brainstorm Project,BP,Expert guided facilitation through a single or multiple techniques.,,1-analysis,,,false,planning_artifacts,brainstorming session
-BMad Method,bmad-market-research,Market Research,MR,"Market analysis competitive landscape customer needs and trends.",,1-analysis,,,false,"planning_artifacts|project-knowledge",research documents
-BMad Method,bmad-domain-research,Domain Research,DR,Industry domain deep dive subject matter expertise and terminology.,,1-analysis,,,false,"planning_artifacts|project_knowledge",research documents
-BMad Method,bmad-technical-research,Technical Research,TR,Technical feasibility architecture options and implementation approaches.,,1-analysis,,,false,"planning_artifacts|project_knowledge",research documents
+BMad Method,bmad-brainstorming,Brainstorm Project,BP,Expert guided facilitation through a single or multiple techniques.,,,1-analysis,,,false,planning_artifacts,brainstorming session
+BMad Method,bmad-market-research,Market Research,MR,Market analysis competitive landscape customer needs and trends.,,,1-analysis,,,false,planning_artifacts|project-knowledge,research documents
+BMad Method,bmad-domain-research,Domain Research,DR,Industry domain deep dive subject matter expertise and terminology.,,,1-analysis,,,false,planning_artifacts|project_knowledge,research documents
+BMad Method,bmad-technical-research,Technical Research,TR,Technical feasibility architecture options and implementation approaches.,,,1-analysis,,,false,planning_artifacts|project_knowledge,research documents
 BMad Method,bmad-product-brief,Create Brief,CB,An expert guided experience to nail down your product idea in a brief. a gentler approach than PRFAQ when you are already sure of your concept and nothing will sway you.,,-A,1-analysis,,,false,planning_artifacts,product brief
 BMad Method,bmad-prfaq,PRFAQ Challenge,WB,Working Backwards guided experience to forge and stress-test your product concept to ensure you have a great product that users will love and need through the PRFAQ gauntlet to determine feasibility and alignment with user needs. alternative to product brief.,,-H,1-analysis,,,false,planning_artifacts,prfaq document
-BMad Method,bmad-create-prd,Create PRD,CP,Expert led facilitation to produce your Product Requirements Document.,,2-planning,,,true,planning_artifacts,prd
+BMad Method,bmad-create-prd,Create PRD,CP,Expert led facilitation to produce your Product Requirements Document.,,,2-planning,,,true,planning_artifacts,prd
 BMad Method,bmad-validate-prd,Validate PRD,VP,,,[path],2-planning,bmad-create-prd,,false,planning_artifacts,prd validation report
 BMad Method,bmad-edit-prd,Edit PRD,EP,,,[path],2-planning,bmad-validate-prd,,false,planning_artifacts,updated prd
-BMad Method,bmad-create-ux-design,Create UX,CU,"Guidance through realizing the plan for your UX, strongly recommended if a UI is a primary piece of the proposed project.",,2-planning,bmad-create-prd,,false,planning_artifacts,ux design
-BMad Method,bmad-create-architecture,Create Architecture,CA,Guided workflow to document technical decisions.,,3-solutioning,,,true,planning_artifacts,architecture
-BMad Method,bmad-create-epics-and-stories,Create Epics and Stories,CE,,,3-solutioning,bmad-create-architecture,,true,planning_artifacts,epics and stories
-BMad Method,bmad-check-implementation-readiness,Check Implementation Readiness,IR,Ensure PRD UX Architecture and Epics Stories are aligned.,,3-solutioning,bmad-create-epics-and-stories,,true,planning_artifacts,readiness report
-BMad Method,bmad-sprint-planning,Sprint Planning,SP,Kicks off implementation by producing a plan the implementation agents will follow in sequence for every story.,,4-implementation,,,true,implementation_artifacts,sprint status
-BMad Method,bmad-sprint-status,Sprint Status,SS,Anytime: Summarize sprint status and route to next workflow.,,4-implementation,bmad-sprint-planning,,false,,
-BMad Method,bmad-create-story,Create Story,CS,"Story cycle start: Prepare first found story in the sprint plan that is next or a specific epic/story designation.",create,,4-implementation,bmad-sprint-planning,bmad-create-story:validate,true,implementation_artifacts,story
+BMad Method,bmad-create-ux-design,Create UX,CU,"Guidance through realizing the plan for your UX, strongly recommended if a UI is a primary piece of the proposed project.",,,2-planning,bmad-create-prd,,false,planning_artifacts,ux design
+BMad Method,bmad-create-architecture,Create Architecture,CA,Guided workflow to document technical decisions.,,,3-solutioning,,,true,planning_artifacts,architecture
+BMad Method,bmad-create-epics-and-stories,Create Epics and Stories,CE,,,,3-solutioning,bmad-create-architecture,,true,planning_artifacts,epics and stories
+BMad Method,bmad-check-implementation-readiness,Check Implementation Readiness,IR,Ensure PRD UX Architecture and Epics Stories are aligned.,,,3-solutioning,bmad-create-epics-and-stories,,true,planning_artifacts,readiness report
+BMad Method,bmad-sprint-planning,Sprint Planning,SP,Kicks off implementation by producing a plan the implementation agents will follow in sequence for every story.,,,4-implementation,,,true,implementation_artifacts,sprint status
+BMad Method,bmad-sprint-status,Sprint Status,SS,Anytime: Summarize sprint status and route to next workflow.,,,4-implementation,bmad-sprint-planning,,false,,
+BMad Method,bmad-create-story,Create Story,CS,Story cycle start: Prepare first found story in the sprint plan that is next or a specific epic/story designation.,create,,4-implementation,bmad-sprint-planning,bmad-create-story:validate,true,implementation_artifacts,story
 BMad Method,bmad-create-story,Validate Story,VS,Validates story readiness and completeness before development work begins.,validate,,4-implementation,bmad-create-story:create,bmad-dev-story,false,implementation_artifacts,story validation report
-BMad Method,bmad-dev-story,Dev Story,DS,Story cycle: Execute story implementation tasks and tests then CR then back to DS if fixes needed.,,4-implementation,bmad-create-story:validate,,true,,
-BMad Method,bmad-code-review,Code Review,CR,Story cycle: If issues back to DS if approved then next CS or ER if epic complete.,,4-implementation,bmad-dev-story,,false,,
-BMad Method,bmad-checkpoint-preview,Checkpoint,CK,Guided walkthrough of a change from purpose and context into details. Use for human review of commits branches or PRs.,,4-implementation,,,false,,
-BMad Method,bmad-qa-generate-e2e-tests,QA Automation Test,QA,Generate automated API and E2E tests for implemented code. NOT for code review or story validation — use CR for that.,,4-implementation,bmad-dev-story,,false,implementation_artifacts,test suite
-BMad Method,bmad-retrospective,Retrospective,ER,Optional at epic end: Review completed work lessons learned and next epic or if major issues consider CC.,,4-implementation,bmad-code-review,,false,implementation_artifacts,retrospective
+BMad Method,bmad-dev-story,Dev Story,DS,Story cycle: Execute story implementation tasks and tests then CR then back to DS if fixes needed.,,,4-implementation,bmad-create-story:validate,,true,,
+BMad Method,bmad-code-review,Code Review,CR,Story cycle: If issues back to DS if approved then next CS or ER if epic complete.,,,4-implementation,bmad-dev-story,,false,,
+BMad Method,bmad-checkpoint-preview,Checkpoint,CK,Guided walkthrough of a change from purpose and context into details. Use for human review of commits branches or PRs.,,,4-implementation,,,false,,
+BMad Method,bmad-qa-generate-e2e-tests,QA Automation Test,QA,Generate automated API and E2E tests for implemented code. NOT for code review or story validation — use CR for that.,,,4-implementation,bmad-dev-story,,false,implementation_artifacts,test suite
+BMad Method,bmad-retrospective,Retrospective,ER,Optional at epic end: Review completed work lessons learned and next epic or if major issues consider CC.,,,4-implementation,bmad-code-review,,false,implementation_artifacts,retrospective
```

Can't render this file because it has a wrong number of fields in line 3.
```diff
@@ -5,15 +5,11 @@ default_selected: true # This module will be selected by default for new install

 # Variables from Core Config inserted:
 ## user_name
+## project_name
 ## communication_language
 ## document_output_language
 ## output_folder

-project_name:
-  prompt: "What is your project called?"
-  default: "{directory_name}"
-  result: "{value}"
-
 user_skill_level:
   prompt:
     - "What is your development experience level?"
```
```diff
@@ -139,7 +139,7 @@ parts: 1

 ## Solution Architecture
 - Plugins: skill bundles with Anthropic plugin standard as base format + bmad-manifest.json extending for BMAD-specific metadata (installer options, capabilities, help integration, phase ordering, dependencies)
-- Existing manifest example: `{"module-code":"bmm","replaces-skill":"bmad-create-product-brief","capabilities":[{"name":"create-brief","menu-code":"CB","supports-headless":true,"phase-name":"1-analysis","after":["brainstorming"],"before":["create-prd"],"is-required":true}]}`
+- Existing manifest example: `{"module-code":"bmm","replaces-skill":"bmad-create-product-brief","capabilities":[{"name":"create-brief","menu-code":"CB","supports-headless":true,"phase-name":"1-analysis","preceded-by":["brainstorming"],"followed-by":["create-prd"],"is-required":true}]}`
 - Vercel skills CLI handles platform translation; integration pattern (wrap/fork/call) is PRD decision
 - bmad-setup: global skill scanning installed bmad-manifest.json files, registering capabilities, configuring project settings; always included as base skill in every bundle (solves bootstrapping)
 - bmad-update: plugin update path without full reinstall; technical approach (diff/replace/preserve customizations) is PRD decision
```
````diff
@@ -33,16 +33,16 @@ When this skill completes, the user should:
 The catalog uses this format:

 ```
-module,skill,display-name,menu-code,description,action,args,phase,after,before,required,output-location,outputs
+module,skill,display-name,menu-code,description,action,args,phase,preceded-by,followed-by,required,output-location,outputs
 ```

 **Phases** determine the high-level flow:
 - `anytime` — available regardless of workflow state
 - Numbered phases (`1-analysis`, `2-planning`, etc.) flow in order; naming varies by module

-**Dependencies** determine ordering within and across phases:
-- `after` — skills that should ideally complete before this one
-- `before` — skills that should run after this one
+**Sequencing** determines recommended ordering within and across phases (these are soft suggestions, not hard gates — see `required` for gating):
+- `preceded-by` — skills that should ideally complete before this one
+- `followed-by` — skills that should ideally run after this one
 - Format: `skill-name` for single-action skills, `skill-name:action` for multi-action skills

 **Required gates**:
````
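The `skill-name:action` convention described above splits on the first colon. A minimal sketch (the function name is an assumption, not part of the catalog tooling):

```javascript
// Parse one sequencing entry; bare skill names have no action part.
function parseSequencingEntry(entry) {
  const idx = entry.indexOf(':');
  if (idx === -1) return { skill: entry, action: null };
  return { skill: entry.slice(0, idx), action: entry.slice(idx + 1) };
}
```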
```diff
@@ -1,13 +1,13 @@
-module,skill,display-name,menu-code,description,action,args,phase,after,before,required,output-location,outputs
+module,skill,display-name,menu-code,description,action,args,phase,preceded-by,followed-by,required,output-location,outputs
 Core,_meta,,,,,,,,,false,https://docs.bmad-method.org/llms.txt,
-Core,bmad-brainstorming,Brainstorming,BSP,Use early in ideation or when stuck generating ideas.,,anytime,,,false,{output_folder}/brainstorming,brainstorming session
-Core,bmad-party-mode,Party Mode,PM,Orchestrate multi-agent discussions when you need multiple perspectives or want agents to collaborate.,,anytime,,,false,,
-Core,bmad-help,BMad Help,BH,,,anytime,,,false,,
-Core,bmad-index-docs,Index Docs,ID,Use when LLM needs to understand available docs without loading everything.,,anytime,,,false,,
-Core,bmad-shard-doc,Shard Document,SD,Use when doc becomes too large (>500 lines) to manage effectively.,[path],anytime,,,false,,
-Core,bmad-editorial-review-prose,Editorial Review - Prose,EP,Use after drafting to polish written content.,[path],anytime,,,false,report located with target document,three-column markdown table with suggested fixes
-Core,bmad-editorial-review-structure,Editorial Review - Structure,ES,Use when doc produced from multiple subprocesses or needs structural improvement.,[path],anytime,,,false,report located with target document,
-Core,bmad-review-adversarial-general,Adversarial Review,AR,"Use for quality assurance or before finalizing deliverables. Code Review in other modules runs this automatically, but also useful for document reviews.",[path],anytime,,,false,,
-Core,bmad-review-edge-case-hunter,Edge Case Hunter Review,ECH,Use alongside adversarial review for orthogonal coverage — method-driven not attitude-driven.,[path],anytime,,,false,,
-Core,bmad-distillator,Distillator,DG,Use when you need token-efficient distillates that preserve all information for downstream LLM consumption.,[path],anytime,,,false,adjacent to source document or specified output_path,distillate markdown file(s)
-Core,bmad-customize,BMad Customize,BC,"Use when you want to change how an agent or workflow behaves — add persistent facts, swap templates, insert activation hooks, or customize menus. Scans what's customizable, picks the right scope (agent vs workflow), writes the override to _bmad/custom/, and verifies the merge. No TOML hand-authoring required.",,anytime,,,false,{project-root}/_bmad/custom,TOML override files
+Core,bmad-brainstorming,Brainstorming,BSP,Use early in ideation or when stuck generating ideas.,,,anytime,,,false,{output_folder}/brainstorming,brainstorming session
+Core,bmad-party-mode,Party Mode,PM,Orchestrate multi-agent discussions when you need multiple perspectives or want agents to collaborate.,,,anytime,,,false,,
+Core,bmad-help,BMad Help,BH,,,,anytime,,,false,,
+Core,bmad-index-docs,Index Docs,ID,Use when LLM needs to understand available docs without loading everything.,,,anytime,,,false,,
+Core,bmad-shard-doc,Shard Document,SD,Use when doc becomes too large (>500 lines) to manage effectively.,,[path],anytime,,,false,,
+Core,bmad-editorial-review-prose,Editorial Review - Prose,EP,Use after drafting to polish written content.,,[path],anytime,,,false,report located with target document,three-column markdown table with suggested fixes
+Core,bmad-editorial-review-structure,Editorial Review - Structure,ES,Use when doc produced from multiple subprocesses or needs structural improvement.,,[path],anytime,,,false,report located with target document,
+Core,bmad-review-adversarial-general,Adversarial Review,AR,"Use for quality assurance or before finalizing deliverables. Code Review in other modules runs this automatically, but also useful for document reviews.",,[path],anytime,,,false,,
+Core,bmad-review-edge-case-hunter,Edge Case Hunter Review,ECH,Use alongside adversarial review for orthogonal coverage — method-driven not attitude-driven.,,[path],anytime,,,false,,
+Core,bmad-distillator,Distillator,DG,Use when you need token-efficient distillates that preserve all information for downstream LLM consumption.,,[path],anytime,,,false,adjacent to source document or specified output_path,distillate markdown file(s)
+Core,bmad-customize,BMad Customize,BC,"Use when you want to change how an agent or workflow behaves — add persistent facts, swap templates, insert activation hooks, or customize menus. Scans what's customizable, picks the right scope (agent vs workflow), writes the override to _bmad/custom/, and verifies the merge. No TOML hand-authoring required.",,,anytime,,,false,{project-root}/_bmad/custom,TOML override files
```

Can't render this file because it has a wrong number of fields in line 3.
```diff
@@ -11,6 +11,11 @@ user_name:
   default: "BMad"
   result: "{value}"

+project_name:
+  prompt: "What is your project called?"
+  default: "{directory_name}"
+  result: "{value}"
+
 communication_language:
   prompt: "What language should agents use when chatting with you?"
   scope: user
```
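This hunk is the config-side half of moving `project_name` from the module config to core. The auto-migration the changelog describes could look roughly like this sketch (the in-memory config object shape is an assumption):

```javascript
// Hedged sketch of the project_name migration: move the key from the
// bmm module section to the core section if it has not moved yet.
function migrateProjectName(config) {
  const bmm = config.modules && config.modules.bmm;
  if (bmm && 'project_name' in bmm) {
    config.core = config.core || {};
    // Do not clobber a core-level value that already exists.
    if (!('project_name' in config.core)) {
      config.core.project_name = bmm.project_name;
    }
    delete bmm.project_name;
  }
  return config;
}
```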
```diff
@@ -285,6 +285,10 @@ async function runTests() {
    const opencodeInstaller = platformCodes.platforms.opencode?.installer;

    assert(opencodeInstaller?.target_dir === '.agents/skills', 'OpenCode target_dir uses native skills path');
+    assert(
+      opencodeInstaller?.commands_target_dir === '.opencode/commands',
+      'OpenCode commands_target_dir is configured for /<skill> slash commands',
+    );

    const tempProjectDir = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-opencode-test-'));
    const installedBmadDir = await createTestBmadFixture();
@@ -301,6 +305,55 @@ async function runTests() {
    const skillFile = path.join(tempProjectDir, '.agents', 'skills', 'bmad-master', 'SKILL.md');
    assert(await fs.pathExists(skillFile), 'OpenCode install writes SKILL.md directory output');

+    // Command pointer assertions: a /<canonicalId> slash command should exist
+    // for each installed skill so users can invoke skills directly without
+    // going through the /skills menu.
+    const commandFile = path.join(tempProjectDir, '.opencode', 'commands', 'bmad-master.md');
+    assert(await fs.pathExists(commandFile), 'OpenCode install writes per-skill command pointer file');
+
+    const commandContent = await fs.readFile(commandFile, 'utf8');
+    assert(commandContent.includes('@skills/bmad-master'), 'Command pointer body references the skill via @skills/<canonicalId>');
+    assert(commandContent.includes('description:'), 'Command pointer carries a description in YAML frontmatter');
+
+    // Idempotency: re-running install must not duplicate or rewrite pointers.
+    const result2 = await ideManager.setup('opencode', tempProjectDir, installedBmadDir, {
+      silent: true,
+      selectedModules: ['bmm'],
+    });
+    assert(result2.success === true, 'Second OpenCode install succeeds (idempotent)');
+    assert(await fs.pathExists(commandFile), 'Command pointer survives a second install pass');
+
+    // Description-update propagation: when the manifest description changes
+    // and the on-disk pointer still matches the generator pattern, refresh
+    // the file so users see the updated description.
+    const csvPath = path.join(installedBmadDir, '_config', 'skill-manifest.csv');
+    const updatedCsv =
+      'canonicalId,name,description,module,path\n' +
+      '"bmad-master","bmad-master","UPDATED description for the test agent","core","_bmad/core/bmad-master/SKILL.md"\n';
+    await fs.writeFile(csvPath, updatedCsv);
+    const result3 = await ideManager.setup('opencode', tempProjectDir, installedBmadDir, {
+      silent: true,
+      selectedModules: ['bmm'],
+    });
+    assert(result3.success === true, 'Third OpenCode install succeeds after description update');
+    const refreshed = await fs.readFile(commandFile, 'utf8');
+    assert(refreshed.includes('UPDATED description'), 'Generator-shaped pointer is refreshed when manifest description changes');
+
+    // Hand-edit preservation across the production install flow. The
+    // installer passes previousSkillIds — without the cleanup-side spare,
+    // hand edits would be wiped here.
+    const SENTINEL = 'HAND_EDITED_BY_USER_SHOULD_SURVIVE';
+    const handEditedBody = `---\ndescription: my custom description\n---\n\n${SENTINEL}\n`;
+    await fs.writeFile(commandFile, handEditedBody);
+    const result4 = await ideManager.setup('opencode', tempProjectDir, installedBmadDir, {
+      silent: true,
+      selectedModules: ['bmm'],
+      previousSkillIds: new Set(['bmad-master']),
+    });
+    assert(result4.success === true, 'Fourth OpenCode install succeeds with hand-edited pointer present');
+    const afterReinstall = await fs.readFile(commandFile, 'utf8');
+    assert(afterReinstall.includes(SENTINEL), 'Hand-edited pointer survives a routine reinstall (cleanup spares active-manifest IDs)');
+
    await fs.remove(tempProjectDir);
    await fs.remove(path.dirname(installedBmadDir));
  } catch (error) {
```
@@ -504,10 +557,83 @@ async function runTests() {
const copilotInstaller = platformCodes17.platforms['github-copilot']?.installer;

assert(copilotInstaller?.target_dir === '.agents/skills', 'GitHub Copilot target_dir uses native skills path');
assert(
  copilotInstaller?.commands_target_dir === '.github/agents',
  'GitHub Copilot commands_target_dir is configured for the Custom Agents picker',
);
assert(copilotInstaller?.commands_extension === '.agent.md', 'GitHub Copilot uses .agent.md extension for Custom Agents files');
assert(
  typeof copilotInstaller?.commands_body_template === 'string' && copilotInstaller.commands_body_template.includes('{canonicalId}'),
  'GitHub Copilot defines a commands_body_template with {canonicalId} placeholder',
);
assert(
  copilotInstaller?.commands_filter === 'agents-only',
  'GitHub Copilot filters Custom Agents picker to persona agents only (agents-only)',
);

const tempProjectDir17 = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-copilot-test-'));
const installedBmadDir17 = await createTestBmadFixture();

// Extend the fixture to exercise the agents-only filter, which detects
// persona agents by the `[agent]` section in each skill's source
// customize.toml. Five skill types covered:
//
// 1. Persona agent — has customize.toml with [agent] → INCLUDED
// 2. Persona with non-conventional id — also has [agent] → INCLUDED
//    (verifies the filter doesn't depend on `-agent-` naming)
// 3. Meta-skill whose id contains `-agent-` but isn't a
//    persona — has customize.toml with [workflow] → EXCLUDED
//    (mirrors `bmad-agent-builder` in the real manifest)
// 4. Workflow skill — no customize.toml at all → EXCLUDED
// 5. `bmad-help` — meta-help skill with no customize.toml;
//    every persona agent's activation already advertises it,
//    so it's correctly excluded from the picker as redundant → EXCLUDED
const fixtureCsvPath17 = path.join(installedBmadDir17, '_config', 'skill-manifest.csv');
await fs.writeFile(
  fixtureCsvPath17,
  [
    'canonicalId,name,description,module,path',
    '"bmad-master","bmad-master","Workflow with no customize.toml — should NOT appear in Copilot agents picker","core","_bmad/core/bmad-master/SKILL.md"',
    '"bmad-agent-fixture","bmad-agent-fixture","Persona agent — customize.toml has [agent], SHOULD appear","core","_bmad/core/bmad-agent-fixture/SKILL.md"',
    '"bmad-tea","bmad-tea","Non-conventional id but [agent] in customize.toml — SHOULD appear","core","_bmad/core/bmad-tea/SKILL.md"',
    '"bmad-agent-builder","bmad-agent-builder","Skill-builder workflow — id contains -agent- but customize.toml has [workflow] — should NOT appear","core","_bmad/core/bmad-agent-builder/SKILL.md"',
    '"bmad-help","bmad-help","Meta-help skill — no customize.toml; SHOULD NOT appear in agents picker (toml-driven filter)","core","_bmad/core/bmad-help/SKILL.md"',
    '',
  ].join('\n'),
);

// Materialise the source skill directories so the agents-only filter
// can read their customize.toml. The bmad-master and bmad-agent-builder
// SKILL.md files were already populated by createTestBmadFixture (they
// share the bmad-master target_dir layout); only the customize.toml
// and the new agent fixtures need to be created here.
for (const id of ['bmad-agent-fixture', 'bmad-tea', 'bmad-agent-builder', 'bmad-help']) {
  const dir17 = path.join(installedBmadDir17, 'core', id);
  await fs.ensureDir(dir17);
  await fs.writeFile(
    path.join(dir17, 'SKILL.md'),
    ['---', `name: ${id}`, `description: fixture for ${id}`, '---', '', `Body of ${id}.`].join('\n'),
  );
}
// Note: bmad-help intentionally has NO customize.toml — it exercises
// the toml-driven filter's exclusion path (a skill with no
// customize.toml is correctly kept out of the Copilot agents picker).
// [agent] customize.toml for the two persona fixtures.
await fs.writeFile(
  path.join(installedBmadDir17, 'core', 'bmad-agent-fixture', 'customize.toml'),
  ['[agent]', 'name = "Fixture Agent"', 'title = "Test Persona"', ''].join('\n'),
);
await fs.writeFile(
  path.join(installedBmadDir17, 'core', 'bmad-tea', 'customize.toml'),
  ['[agent]', 'name = "Murat"', 'title = "Test Architect"', ''].join('\n'),
);
// [workflow] customize.toml for the meta-skill — its id contains `-agent-`
// but it is NOT a persona (mirrors bmad-agent-builder in production).
await fs.writeFile(
  path.join(installedBmadDir17, 'core', 'bmad-agent-builder', 'customize.toml'),
  ['[workflow]', '', '# Meta-skill that builds agents but is not itself a persona.', ''].join('\n'),
);

const copilotInstructionsPath17 = path.join(tempProjectDir17, '.github', 'copilot-instructions.md');
await fs.ensureDir(path.dirname(copilotInstructionsPath17));
await fs.writeFile(
@@ -543,6 +669,56 @@ async function runTests() {
  'GitHub Copilot setup preserves user content in copilot-instructions.md',
);

// Custom Agents picker integration: persona agents (those with [agent]
// in their source customize.toml) get .agent.md files in
// .github/agents/. Workflows and meta-skills with [workflow] (or no
// customize.toml at all) do NOT — the agents-only filter keeps the
// picker uncluttered and the signal is naming-independent.
const agentsDir17 = path.join(tempProjectDir17, '.github', 'agents');
const agentFileForPersona17 = path.join(agentsDir17, 'bmad-agent-fixture.agent.md');
const agentFileForTea17 = path.join(agentsDir17, 'bmad-tea.agent.md');
const agentFileForWorkflow17 = path.join(agentsDir17, 'bmad-master.agent.md');
const agentFileForMetaSkill17 = path.join(agentsDir17, 'bmad-agent-builder.agent.md');
const agentFileForBmadHelp17 = path.join(agentsDir17, 'bmad-help.agent.md');

assert(
  await fs.pathExists(agentFileForPersona17),
  'Persona agent ([agent] in customize.toml) gets a .agent.md file in .github/agents/',
);
assert(await fs.pathExists(agentFileForTea17), 'Non-conventional id with [agent] in customize.toml is included (no allowlist needed)');
assert(!(await fs.pathExists(agentFileForWorkflow17)), 'Workflow skill (no customize.toml) is FILTERED OUT of .github/agents/');
assert(
  !(await fs.pathExists(agentFileForBmadHelp17)),
  'bmad-help is excluded from Copilot agents picker (no customize.toml; allowlist removed per maintainer feedback)',
);
assert(
  !(await fs.pathExists(agentFileForMetaSkill17)),
  'Meta-skill with -agent- in id but [workflow] in customize.toml is FILTERED OUT (signal is behavior, not naming)',
);

// Body content of the persona agent file: frontmatter description +
// LOAD pattern referencing the skill's SKILL.md path under target_dir.
const personaAgentContent17 = await fs.readFile(agentFileForPersona17, 'utf8');
assert(
  personaAgentContent17.includes('description:'),
  'Copilot agent pointer carries a description in YAML frontmatter (drives the agents picker label)',
);
assert(
  personaAgentContent17.includes('{project-root}/.agents/skills/bmad-agent-fixture/SKILL.md'),
  'Copilot agent pointer body resolves to the skill via LOAD {project-root}/<target_dir>/<id>/SKILL.md',
);

// Idempotency: re-running setup must not duplicate or rewrite the agent
// pointer when the source manifest is unchanged, AND must not start
// emitting workflow-skill agent files.
const result17b = await ideManager17.setup('github-copilot', tempProjectDir17, installedBmadDir17, {
  silent: true,
  selectedModules: ['bmm'],
});
assert(result17b.success === true, 'Second GitHub Copilot install succeeds (idempotent)');
assert(await fs.pathExists(agentFileForPersona17), 'Persona agent pointer survives a second install pass');
assert(!(await fs.pathExists(agentFileForWorkflow17)), 'Workflow skill remains filtered out of agents picker on second install');

await fs.remove(tempProjectDir17);
await fs.remove(path.dirname(installedBmadDir17));
} catch (error) {
@@ -1813,12 +1989,12 @@ async function runTests() {
const moduleConfigs = {
  core: {
    user_name: 'TestUser',
    project_name: 'demo-project',
    communication_language: 'Spanish',
    document_output_language: 'English',
    output_folder: '_bmad-output',
  },
  bmm: {
    project_name: 'demo-project',
    user_skill_level: 'expert',
    planning_artifacts: '{project-root}/_bmad-output/planning-artifacts',
    implementation_artifacts: '{project-root}/_bmad-output/implementation-artifacts',
@@ -1826,7 +2002,10 @@ async function runTests() {
    // Spread-from-core pollution: legacy per-module config.yaml merges
    // core values into every module; writeCentralConfig must strip these
    // from [modules.bmm] so core values only live in [core].
    // project_name is now a core key (#2279), so it joins user_name etc.
    // as a spread-from-core key that must be stripped.
    user_name: 'TestUser',
    project_name: 'stale-bmm-copy',
    communication_language: 'Spanish',
    document_output_language: 'English',
    output_folder: '_bmad-output',
@@ -1874,6 +2053,7 @@ async function runTests() {
assert(teamContent.includes('[core]'), 'config.toml has [core] section');
assert(teamContent.includes('document_output_language = "English"'), 'Team-scope core key lands in config.toml');
assert(teamContent.includes('output_folder = "_bmad-output"'), 'Team-scope output_folder lands in config.toml');
assert(teamContent.includes('project_name = "demo-project"'), 'project_name lands in [core] (core key as of #2279)');
assert(!teamContent.includes('user_name'), 'user_name (scope: user) is absent from config.toml');
assert(!teamContent.includes('communication_language'), 'communication_language (scope: user) is absent from config.toml');
@@ -1888,7 +2068,9 @@ async function runTests() {
assert(bmmTeamMatch !== null, 'config.toml has [modules.bmm] section');
if (bmmTeamMatch) {
  const bmmTeamBlock = bmmTeamMatch[0];
  assert(bmmTeamBlock.includes('project_name = "demo-project"'), 'bmm team-scope key lands under [modules.bmm]');
  assert(bmmTeamBlock.includes('planning_artifacts'), 'bmm-owned team-scope key (planning_artifacts) lands under [modules.bmm]');
  assert(!bmmTeamBlock.includes('project_name'), 'project_name stripped from [modules.bmm] (now a core key, #2279)');
  assert(!bmmTeamBlock.includes('stale-bmm-copy'), 'stale bmm-copy of project_name not leaked into config.toml');
  assert(!bmmTeamBlock.includes('user_name'), 'user_name stripped from [modules.bmm] (core-key pollution)');
  assert(!bmmTeamBlock.includes('communication_language'), 'communication_language stripped from [modules.bmm]');
  assert(!bmmTeamBlock.includes('user_skill_level'), 'user_skill_level (scope: user) absent from [modules.bmm] in config.toml');
@@ -2731,6 +2913,113 @@ async function runTests() {

console.log('');

// ============================================================
// Test Suite 40c: OpenCode command pointers in multi-IDE batches
// ============================================================
// Regression: when OpenCode is the *peer* in a setupBatch sharing
// .agents/skills (e.g. with openhands), the skill write is dedup-skipped
// but the per-IDE .opencode/commands/ pointers must still be generated.
// Symmetrically, partial uninstall while a peer remains must still clean
// up OpenCode's own command pointers.
console.log(`${colors.yellow}Test Suite 40c: OpenCode command pointers in shared-target batches${colors.reset}\n`);

try {
  clearCache();
  const platformCodes40c = await loadPlatformCodes();
  const opencodeTarget40c = platformCodes40c.platforms.opencode?.installer?.target_dir;
  const openhandsTarget40c = platformCodes40c.platforms.openhands?.installer?.target_dir;
  assert(
    opencodeTarget40c === '.agents/skills' && openhandsTarget40c === '.agents/skills',
    'OpenCode and OpenHands share .agents/skills target_dir',
  );

  // Order A: opencode first → opencode is the writer.
  const projA = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-opencode-batch-a-'));
  const bmadA = await createTestBmadFixture();
  const mgrA = new IdeManager();
  await mgrA.ensureInitialized();
  const resultsA = await mgrA.setupBatch(['opencode', 'openhands'], projA, bmadA, {
    silent: true,
    selectedModules: ['core'],
  });
  const cmdA = path.join(projA, '.opencode', 'commands', 'bmad-master.md');
  assert(
    resultsA.every((r) => r.success === true),
    'opencode-first batch: all platforms succeed',
  );
  assert(await fs.pathExists(cmdA), 'opencode-first batch: command pointer is created');

  // Order B: openhands first → opencode is the peer (skipTarget=true).
  // Without the fix, the early-return would bypass installCommandPointers.
  const projB = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-opencode-batch-b-'));
  const bmadB = await createTestBmadFixture();
  const mgrB = new IdeManager();
  await mgrB.ensureInitialized();
  const resultsB = await mgrB.setupBatch(['openhands', 'opencode'], projB, bmadB, {
    silent: true,
    selectedModules: ['core'],
  });
  const cmdB = path.join(projB, '.opencode', 'commands', 'bmad-master.md');
  const opencodeResultB = resultsB.find((r) => r.ide === 'opencode');
  assert(
    resultsB.every((r) => r.success === true),
    'openhands-first batch: all platforms succeed',
  );
  assert(
    opencodeResultB?.handlerResult?.results?.sharedTargetHandledByPeer === true,
    'openhands-first batch: opencode is marked sharedTargetHandledByPeer (skill write deduped)',
  );
  assert(await fs.pathExists(cmdB), 'openhands-first batch: command pointer is generated even when skill write is deduped');

  // Cleanup symmetry: uninstall opencode while openhands remains.
  // Uses an in-project bmadDir so the cleanup path can compute removalSet
  // from the manifest (the production layout). The cross-temp-dir fixture
  // above can't exercise this — same constraint Test Suite 40 documents.
  const projC = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-opencode-batch-c-'));
  const bmadC = path.join(projC, '_bmad');
  await fs.ensureDir(path.join(bmadC, '_config'));
  await fs.writeFile(
    path.join(bmadC, '_config', 'skill-manifest.csv'),
    'canonicalId,name,description,module,path\n' +
      '"bmad-master","bmad-master","Minimal test agent fixture","core","_bmad/core/bmad-master/SKILL.md"\n',
  );
  const skillC = path.join(bmadC, 'core', 'bmad-master');
  await fs.ensureDir(skillC);
  await fs.writeFile(
    path.join(skillC, 'SKILL.md'),
    ['---', 'name: bmad-master', 'description: Minimal test agent fixture', '---', '', 'You are a test agent.'].join('\n'),
  );

  const mgrC = new IdeManager();
  await mgrC.ensureInitialized();
  await mgrC.setupBatch(['openhands', 'opencode'], projC, bmadC, {
    silent: true,
    selectedModules: ['core'],
  });
  const cmdC = path.join(projC, '.opencode', 'commands', 'bmad-master.md');
  assert(await fs.pathExists(cmdC), 'in-project fixture: pointer is generated for opencode peer');

  const cleanupResultsC = await mgrC.cleanupByList(projC, ['opencode'], {
    silent: true,
    remainingIdes: ['openhands'],
  });
  assert(cleanupResultsC[0].success !== false, 'opencode partial-uninstall reports success');
  const sharedSurvivesC = await fs.pathExists(path.join(projC, '.agents', 'skills', 'bmad-master', 'SKILL.md'));
  assert(sharedSurvivesC, 'shared .agents/skills/ survives partial uninstall (peer still uses it)');
  assert(!(await fs.pathExists(cmdC)), 'opencode command pointer is removed on partial uninstall even when peer remains');

  await fs.remove(projA).catch(() => {});
  await fs.remove(path.dirname(bmadA)).catch(() => {});
  await fs.remove(projB).catch(() => {});
  await fs.remove(path.dirname(bmadB)).catch(() => {});
  await fs.remove(projC).catch(() => {});
} catch (error) {
  console.log(`${colors.red}Test Suite 40c setup failed: ${error.message}${colors.reset}`);
  failed++;
}

console.log('');

// ============================================================
// Test Suite 41: Custom-module skill ownership (non-bmad prefix)
// ============================================================
@@ -2773,6 +3062,464 @@ async function runTests() {

console.log('');

// ============================================================
// Test Suite 42: --tools flag parsing & validation (#2326)
// ============================================================
console.log(`${colors.yellow}Test Suite 42: --tools flag parsing & validation${colors.reset}\n`);
try {
  const { UI } = require('../tools/installer/ui');
  const ui = new UI();
  const known = new Set(['claude-code', 'cursor', 'windsurf']);

  assert(
    JSON.stringify(ui._parseToolsFlag('claude-code', known)) === JSON.stringify(['claude-code']),
    'parseToolsFlag returns single ID',
  );

  assert(
    JSON.stringify(ui._parseToolsFlag('claude-code,cursor', known)) === JSON.stringify(['claude-code', 'cursor']),
    'parseToolsFlag returns multiple IDs',
  );

  assert(
    JSON.stringify(ui._parseToolsFlag(' claude-code , cursor ', known)) === JSON.stringify(['claude-code', 'cursor']),
    'parseToolsFlag trims whitespace',
  );

  let emptyErr;
  try {
    ui._parseToolsFlag('', known);
  } catch (error) {
    emptyErr = error;
  }
  assert(
    emptyErr && emptyErr.expected === true && /empty/i.test(emptyErr.message),
    'parseToolsFlag rejects empty string with expected=true',
  );

  let commasOnlyErr;
  try {
    ui._parseToolsFlag(' , , ', known);
  } catch (error) {
    commasOnlyErr = error;
  }
  assert(commasOnlyErr && commasOnlyErr.expected === true, 'parseToolsFlag rejects whitespace/comma-only input');

  let noneErr;
  try {
    ui._parseToolsFlag('none', known);
  } catch (error) {
    noneErr = error;
  }
  assert(noneErr && noneErr.expected === true && /Unknown tool ID/.test(noneErr.message), 'parseToolsFlag rejects "none" as unknown ID');

  let typoErr;
  try {
    ui._parseToolsFlag('claude-code,claude-cdoe', known);
  } catch (error) {
    typoErr = error;
  }
  const typoHeader = typoErr ? typoErr.message.split('\n')[0] : '';
  assert(
    typoErr && typoErr.expected === true && /claude-cdoe/.test(typoHeader) && !/claude-code/.test(typoHeader),
    'parseToolsFlag reports only the unknown ID in error header (valid ones not listed as unknown)',
  );
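The contract these assertions pin down can be summarized in a short sketch. This is an assumption about shape, not the real `_parseToolsFlag` in `tools/installer/ui.js` (which may differ in error wording and edge cases); what is taken from the assertions is: comma-split, trim, drop empty segments, reject an empty result and unknown IDs, and mark expected failures with `expected = true` so the CLI can print them without a stack trace.

```javascript
// Illustrative sketch of the --tools validation contract, not the real code.
function parseToolsFlagSketch(raw, knownIds) {
  const ids = String(raw)
    .split(',')
    .map((s) => s.trim())
    .filter(Boolean); // drops '' segments, so ' , , ' collapses to nothing
  if (ids.length === 0) {
    const err = new Error('--tools was given an empty value');
    err.expected = true; // "expected" errors print without a stack trace
    throw err;
  }
  const unknown = ids.filter((id) => !knownIds.has(id));
  if (unknown.length > 0) {
    // Only the unknown IDs appear in the header line, never the valid ones.
    const err = new Error(`Unknown tool ID(s): ${unknown.join(', ')}`);
    err.expected = true;
    throw err;
  }
  return ids;
}
```

Note that `none` is not special-cased: per the v6.6.0 breaking change it simply fails the known-ID check like any other typo.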

  // --list-tools and --tools validation must agree on what counts as a valid ID.
  const { formatPlatformList } = require('../tools/installer/ide/platform-codes');
  const { IdeManager } = require('../tools/installer/ide/manager');
  const ideManager42 = new IdeManager();
  await ideManager42.ensureInitialized();
  const validIds = new Set(ideManager42.getAvailableIdes().map((i) => i.value));
  const listed = await formatPlatformList();
  // Each entry line starts with ' *' (preferred) or ' ' (other), followed by the ID, then padding.
  const entryLines = listed.split('\n').filter((l) => /^( \*| {2})[a-z]/.test(l));
  const listedIds = entryLines.map((l) => l.trim().replace(/^\*/, '').split(/\s+/)[0]);
  const missingFromList = [...validIds].filter((id) => !listedIds.includes(id));
  const extraInList = listedIds.filter((id) => !validIds.has(id));
  assert(
    missingFromList.length === 0 && extraInList.length === 0,
    '--list-tools output matches the IDs that --tools accepts',
    `Missing from list: ${missingFromList.join(',') || '(none)'}; Extra in list: ${extraInList.join(',') || '(none)'}`,
  );
} catch (error) {
  console.log(`${colors.red}Test Suite 42 setup failed: ${error.message}${colors.reset}`);
  console.log(error.stack);
  failed++;
}

console.log('');

// ============================================================
// Test Suite 43: project_name promoted to core + hoist migration (#2279)
// ============================================================
console.log(`${colors.yellow}Test Suite 43: project_name in core + hoist migration${colors.reset}\n`);
try {
  const yamlLib = require('yaml');
  const coreSchemaPath = path.join(__dirname, '..', 'src', 'core-skills', 'module.yaml');
  const bmmSchemaPath = path.join(__dirname, '..', 'src', 'bmm-skills', 'module.yaml');
  const coreSchema = yamlLib.parse(await fs.readFile(coreSchemaPath, 'utf8'));
  const bmmSchema = yamlLib.parse(await fs.readFile(bmmSchemaPath, 'utf8'));

  assert(
    coreSchema.project_name && coreSchema.project_name.prompt && coreSchema.project_name.default === '{directory_name}',
    'core/module.yaml declares project_name with {directory_name} default',
  );

  assert(coreSchema.project_name.scope === undefined, 'project_name has no user scope (project-scoped, not user-scoped)');

  assert(bmmSchema.project_name === undefined, 'bmm/module.yaml no longer declares project_name (now inherited from core)');

  // Set up a mock existing install: bmm directory has project_name (legacy),
  // core has user_name but not project_name. After hoist, project_name should
  // move to core, leaving bmm with only its own keys.
  const fixtureRoot43 = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-fixture-43-'));
  const bmadDir43 = path.join(fixtureRoot43, '_bmad');
  await fs.ensureDir(path.join(bmadDir43, '_config'));
  await fs.writeFile(path.join(bmadDir43, '_config', 'manifest.yaml'), 'modules: []\n', 'utf8');
  await fs.ensureDir(path.join(bmadDir43, 'core'));
  await fs.ensureDir(path.join(bmadDir43, 'bmm'));
  await fs.writeFile(path.join(bmadDir43, 'core', 'config.yaml'), 'user_name: alice\n', 'utf8');
  await fs.writeFile(
    path.join(bmadDir43, 'bmm', 'config.yaml'),
    'project_name: legacy-from-bmm\nuser_skill_level: intermediate\n',
    'utf8',
  );

  const officialModules43 = new OfficialModules();
  await officialModules43.loadExistingConfig(fixtureRoot43);

  assert(
    officialModules43.existingConfig.core?.project_name === 'legacy-from-bmm',
    'loadExistingConfig hoists bmm.project_name to core on existing-install upgrade',
  );

  assert(
    !('project_name' in (officialModules43.existingConfig.bmm || {})),
    'loadExistingConfig removes project_name from bmm after hoisting',
  );

  assert(
    officialModules43.existingConfig.bmm?.user_skill_level === 'intermediate',
    'loadExistingConfig leaves non-core bmm keys (user_skill_level) untouched',
  );

  assert(officialModules43.existingConfig.core?.user_name === 'alice', 'loadExistingConfig preserves pre-existing core values');

  // Precedence: if core already has the key, hoist must NOT overwrite it.
  const fixtureRoot43b = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-fixture-43b-'));
  const bmadDir43b = path.join(fixtureRoot43b, '_bmad');
  await fs.ensureDir(path.join(bmadDir43b, '_config'));
  await fs.writeFile(path.join(bmadDir43b, '_config', 'manifest.yaml'), 'modules: []\n', 'utf8');
  await fs.ensureDir(path.join(bmadDir43b, 'core'));
  await fs.ensureDir(path.join(bmadDir43b, 'bmm'));
  await fs.writeFile(path.join(bmadDir43b, 'core', 'config.yaml'), 'project_name: from-core\n', 'utf8');
  await fs.writeFile(path.join(bmadDir43b, 'bmm', 'config.yaml'), 'project_name: stale-from-bmm\n', 'utf8');

  const officialModules43b = new OfficialModules();
  await officialModules43b.loadExistingConfig(fixtureRoot43b);

  assert(officialModules43b.existingConfig.core?.project_name === 'from-core', 'hoist does not overwrite an existing core value');

  assert(
    !('project_name' in (officialModules43b.existingConfig.bmm || {})),
    'hoist still strips the duplicate from bmm so writeCentralConfig partition stays clean',
  );
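The rule the fixtures above establish can be stated compactly: a legacy module-level copy of a core key moves into core unless core already defines it, and the module-level duplicate is always removed. The sketch below is an illustration under that assumption; `hoistCoreKeys` and `CORE_KEYS` are invented names, not the real `_hoistCoreKeysFromLegacyModuleConfigs`.

```javascript
// Illustrative sketch of the #2279/#2348 hoist pass, not the installer's code.
const CORE_KEYS = ['project_name']; // keys promoted from [modules.*] to [core]

function hoistCoreKeys(config) {
  // Tolerate a malformed core config.yaml that parsed to a scalar (e.g. 42):
  // treat it as "no config" rather than crashing the 'in' checks below.
  if (typeof config.core !== 'object' || config.core === null) config.core = {};
  for (const [moduleName, moduleConfig] of Object.entries(config)) {
    if (moduleName === 'core' || typeof moduleConfig !== 'object' || moduleConfig === null) continue;
    for (const key of CORE_KEYS) {
      if (!(key in moduleConfig)) continue;
      if (!(key in config.core)) config.core[key] = moduleConfig[key]; // core wins on conflict
      delete moduleConfig[key]; // always strip the legacy duplicate
    }
  }
  return config;
}
```

The unconditional `delete` is what keeps the later writeCentralConfig partition clean even in the precedence case, where the hoisted value itself is discarded.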

  // Malformed config.yaml (parses to a scalar) must not crash loadExistingConfig
  // or the hoist pass — they should treat it as "no config for that module"
  // and continue. Regression for augment review on PR #2348.
  const fixtureRoot43c = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-fixture-43c-'));
  const bmadDir43c = path.join(fixtureRoot43c, '_bmad');
  await fs.ensureDir(path.join(bmadDir43c, '_config'));
  await fs.writeFile(path.join(bmadDir43c, '_config', 'manifest.yaml'), 'modules: []\n', 'utf8');
  await fs.ensureDir(path.join(bmadDir43c, 'core'));
  await fs.ensureDir(path.join(bmadDir43c, 'bmm'));
  // Scalar YAML — yaml.parse returns the literal 42 (truthy non-object).
  // Pre-fix this crashed _hoistCoreKeysFromLegacyModuleConfigs with
  // "Cannot use 'in' operator to search for 'project_name' in 42".
  await fs.writeFile(path.join(bmadDir43c, 'core', 'config.yaml'), '42\n', 'utf8');
  await fs.writeFile(path.join(bmadDir43c, 'bmm', 'config.yaml'), 'project_name: rescued\n', 'utf8');

  const officialModules43c = new OfficialModules();
  let crashErr;
  try {
    await officialModules43c.loadExistingConfig(fixtureRoot43c);
  } catch (error) {
    crashErr = error;
  }
  assert(!crashErr, 'loadExistingConfig does not crash on a scalar core/config.yaml', crashErr?.stack);

  assert(
    officialModules43c.existingConfig.core?.project_name === 'rescued',
    'scalar core gets replaced with {} and bmm.project_name still hoists in',
  );

  await fs.remove(fixtureRoot43).catch(() => {});
  await fs.remove(fixtureRoot43b).catch(() => {});
  await fs.remove(fixtureRoot43c).catch(() => {});
} catch (error) {
  console.log(`${colors.red}Test Suite 43 setup failed: ${error.message}${colors.reset}`);
  console.log(error.stack);
  failed++;
}

console.log('');

// ============================================================
// Test Suite 44: --set <module>.<key>=<value> CLI overrides (#1663)
// ============================================================
console.log(`${colors.yellow}Test Suite 44: --set CLI overrides${colors.reset}\n`);
try {
  const { parseSetEntry, parseSetEntries, applySetOverrides, upsertTomlKey, tomlString } = require('../tools/installer/set-overrides');
  const { discoverOfficialModuleYamls, formatOptionsList } = require('../tools/installer/list-options');

  // ---- Parser ----------------------------------------------------------
  const ok = parseSetEntry('bmm.project_knowledge=research');
  assert(
    ok.module === 'bmm' && ok.key === 'project_knowledge' && ok.value === 'research',
    'parseSetEntry splits <module>.<key>=<value> correctly',
  );
  assert(parseSetEntry('bmm.weird=a=b=c').value === 'a=b=c', 'parseSetEntry preserves additional "=" inside the value');
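The grammar those two assertions rely on can be sketched by splitting at the first `=` and then at the first `.` of the left-hand side. This is an illustrative re-implementation, not the code in `tools/installer/set-overrides.js`; how the real parser treats extra dots in the key is an open detail not covered by the assertions.

```javascript
// Illustrative sketch of the --set <module>.<key>=<value> grammar.
function parseSetEntrySketch(entry) {
  // Split at the FIRST '=' so the value may itself contain '='.
  const eq = entry.indexOf('=');
  if (eq === -1) throw new Error(`--set entry must look like <module>.<key>=<value>: ${entry}`);
  const lhs = entry.slice(0, eq);
  const value = entry.slice(eq + 1);
  // Split the left-hand side at the FIRST '.'.
  const dot = lhs.indexOf('.');
  if (dot === -1) throw new Error(`--set entry is missing the <module>. prefix: ${entry}`);
  const module = lhs.slice(0, dot);
  const key = lhs.slice(dot + 1);
  if (!module || !key) throw new Error(`--set entry has an empty module or key: ${entry}`);
  return { module, key, value };
}
```

Under this shape every malformed input exercised below ('no-equals', 'no-dot=value', '=value', '.=value', 'foo.=value', '.bar=value', '') falls into one of the three throws.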

  const badInputs = ['no-equals', 'no-dot=value', '=value', '.=value', 'foo.=value', '.bar=value', ''];
  let allBadThrow = true;
  for (const bad of badInputs) {
    try {
      parseSetEntry(bad);
      allBadThrow = false;
    } catch {
      /* expected */
    }
  }
  assert(allBadThrow, `parseSetEntry rejects malformed inputs (${badInputs.length} cases)`);

  const multi = parseSetEntries(['bmm.project_knowledge=research', 'bmm.user_skill_level=expert', 'core.user_name=Brian']);
  assert(
    multi.bmm.project_knowledge === 'research' && multi.bmm.user_skill_level === 'expert' && multi.core.user_name === 'Brian',
    'parseSetEntries groups by module',
  );
  assert(parseSetEntries(['bmm.x=first', 'bmm.x=second']).bmm.x === 'second', 'parseSetEntries: later --set entry overrides earlier');
  const empty = parseSetEntries();
  assert(empty && Object.keys(empty).length === 0, 'parseSetEntries() returns empty object when called without args');

  // Prototype-pollution guard. `--set __proto__.x=1` would otherwise reach
  // `overrides.__proto__[x] = 1` and pollute every plain object.
  const polluteProbe = {};
  let pollutionThrown = false;
  try {
    parseSetEntries(['__proto__.polluted=1']);
  } catch {
    pollutionThrown = true;
  }
  assert(pollutionThrown, 'parseSetEntries rejects __proto__ as a module name');
  assert(polluteProbe.polluted === undefined, 'Object.prototype is not polluted by __proto__ in --set entries');
  let constructorThrown = false;
  try {
    parseSetEntries(['bmm.constructor=evil']);
  } catch {
    constructorThrown = true;
  }
  assert(constructorThrown, 'parseSetEntries rejects "constructor" as a key name');
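The defense those assertions verify can be sketched with two complementary layers: an explicit denylist on identifiers, plus null-prototype accumulators so that even a missed check cannot reach `Object.prototype`. The names below (`assertSafeIdentifier`, `groupOverrides`, `FORBIDDEN_KEYS`) are illustrative; the real parser may combine the checks differently.

```javascript
// Illustrative prototype-pollution guard, not the real set-overrides code.
const FORBIDDEN_KEYS = new Set(['__proto__', 'prototype', 'constructor']);

function assertSafeIdentifier(name, role) {
  if (FORBIDDEN_KEYS.has(name)) {
    throw new Error(`Refusing to use "${name}" as a ${role} (prototype-pollution guard)`);
  }
  return name;
}

function groupOverrides(entries) {
  // Object.create(null) has no inherited __proto__ setter, so assigning
  // to grouped['__proto__'] could never mutate Object.prototype anyway.
  const grouped = Object.create(null);
  for (const { module, key, value } of entries) {
    assertSafeIdentifier(module, 'module name');
    assertSafeIdentifier(key, 'key name');
    grouped[module] = grouped[module] || Object.create(null);
    grouped[module][key] = value;
  }
  return grouped;
}
```

The explicit throw is still worth having on top of the null-prototype objects: it turns a silently ignored `--set __proto__.x=1` into a loud CLI error.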

  // ---- tomlString ------------------------------------------------------
  assert(tomlString('hello') === '"hello"', 'tomlString quotes a plain string');
  assert(tomlString('with "quotes"') === String.raw`"with \"quotes\""`, 'tomlString escapes embedded double-quotes');
  assert(tomlString(String.raw`back\slash`) === String.raw`"back\\slash"`, 'tomlString escapes backslashes');
  assert(tomlString('line1\nline2') === String.raw`"line1\nline2"`, 'tomlString escapes newlines');
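Escaping behaviour matching the four assertions above can be reproduced with a minimal TOML basic-string escaper. This is a sketch only; the real `tomlString` may cover additional escapes (TOML basic strings also require escaping control characters, for instance).

```javascript
// Minimal TOML basic-string escaper, consistent with the asserted cases.
function tomlStringSketch(value) {
  const escaped = String(value)
    .replace(/\\/g, '\\\\') // backslashes first, so escapes added below aren't re-escaped
    .replace(/"/g, '\\"')
    .replace(/\n/g, '\\n')
    .replace(/\r/g, '\\r')
    .replace(/\t/g, '\\t');
  return `"${escaped}"`;
}
```

The ordering is the one subtle point: escaping `\` after `\n` would double the backslash that the newline escape just introduced.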
|
||||
|
// ---- upsertTomlKey: insert into existing section ---------------------
{
  const before = `[core]\nuser_name = "Brian"\n\n[modules.bmm]\nproject_knowledge = "{project-root}/docs"\n`;
  const after = upsertTomlKey(before, '[modules.bmm]', 'future_thing', '"persists"');
  assert(after.includes('future_thing = "persists"'), 'upsertTomlKey inserts a new key into an existing section');
  assert(/project_knowledge = "{project-root}\/docs"/.test(after), 'upsertTomlKey preserves existing keys');
}

// ---- upsertTomlKey: replace existing key, keep comment tail ----------
{
  const before = `[core]\nuser_name = "old" # set on first install\n`;
  const after = upsertTomlKey(before, '[core]', 'user_name', '"Brian"');
  assert(/user_name = "Brian"\s+# set on first install/.test(after), 'upsertTomlKey preserves trailing comments');
  assert(!after.includes('"old"'), 'upsertTomlKey replaces the prior value');
}

// ---- upsertTomlKey: section missing → append new section -------------
{
  const before = `[core]\nuser_name = "Brian"\n`;
  const after = upsertTomlKey(before, '[modules.bmm]', 'project_knowledge', '"research"');
  assert(after.includes('[modules.bmm]'), 'upsertTomlKey appends a new section when missing');
  assert(after.includes('project_knowledge = "research"'), 'upsertTomlKey appends the key under the new section');
  // Existing section remains untouched
  assert(after.indexOf('[core]') < after.indexOf('[modules.bmm]'), 'upsertTomlKey adds the new section AFTER existing content');
}

// ---- upsertTomlKey: empty file ---------------------------------------
{
  const after = upsertTomlKey('', '[core]', 'user_name', '"Brian"');
  assert(after.startsWith('[core]'), 'upsertTomlKey on an empty string emits the section header');
  assert(after.includes('user_name = "Brian"'), 'upsertTomlKey on an empty string writes the key');
}

// ---- upsertTomlKey: trailing newline preserved -----------------------
{
  const withTrailing = upsertTomlKey('[core]\nuser_name = "old"\n', '[core]', 'user_name', '"new"');
  assert(withTrailing.endsWith('\n'), 'upsertTomlKey preserves trailing newline');
  const withoutTrailing = upsertTomlKey('[core]\nuser_name = "old"', '[core]', 'user_name', '"new"');
  assert(!withoutTrailing.endsWith('\n'), 'upsertTomlKey preserves absence of trailing newline');
}

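Taken together, the blocks above pin down an upsert contract. `upsertTomlKeySketch` below is a hedged minimal reading of that contract, not the shipped `upsertTomlKey`; it is deliberately naive about `#` inside quoted values:

```javascript
// Hypothetical minimal upsert: replace the key in place when the section
// already has it (keeping any trailing comment), insert at the section end
// otherwise, and append a new section when the header is absent.
function upsertTomlKeySketch(text, sectionHeader, key, tomlValue) {
  const lines = text.split('\n');
  const start = lines.indexOf(sectionHeader);
  if (start === -1) {
    const prefix = text === '' ? '' : text.endsWith('\n') ? `${text}\n` : `${text}\n\n`;
    return `${prefix}${sectionHeader}\n${key} = ${tomlValue}\n`;
  }
  // The section body runs until the next [header] or end of file.
  let end = lines.length;
  for (let i = start + 1; i < lines.length; i++) {
    if (lines[i].startsWith('[')) {
      end = i;
      break;
    }
  }
  const keyRe = new RegExp(`^(\\s*)${key}\\s*=\\s*[^#]*(#.*)?$`);
  for (let i = start + 1; i < end; i++) {
    const m = keyRe.exec(lines[i]);
    if (m) {
      lines[i] = `${m[1]}${key} = ${tomlValue}${m[2] ? ` ${m[2]}` : ''}`;
      return lines.join('\n');
    }
  }
  // Insert before any blank lines that pad the end of the section.
  while (end > start + 1 && lines[end - 1].trim() === '') end--;
  lines.splice(end, 0, `${key} = ${tomlValue}`);
  return lines.join('\n');
}
```

Because it only ever splices or rewrites single lines of the original string, trailing-newline presence or absence falls out for free, which is exactly what the last test block demands.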
// ---- applySetOverrides happy path ------------------------------------
{
  const tmp = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-applyset-'));
  const bmadDir = path.join(tmp, '_bmad');
  await fs.ensureDir(bmadDir);
  // Seed a realistic post-install state: team config has bmm.project_knowledge,
  // user config has core.user_name. The applySetOverrides router should
  // route bmm.user_skill_level → user.toml (already there), core.user_name
  // update → user.toml (already there), and a brand-new key → team.toml.
  await fs.writeFile(
    path.join(bmadDir, 'config.toml'),
    '[core]\nproject_name = "demo"\n\n[modules.bmm]\nproject_knowledge = "{project-root}/docs"\n',
    'utf8',
  );
  await fs.writeFile(
    path.join(bmadDir, 'config.user.toml'),
    '[core]\nuser_name = "OldName"\n\n[modules.bmm]\nuser_skill_level = "intermediate"\n',
    'utf8',
  );
  // Per-module config.yaml stubs are the "is this module installed?"
  // signal applySetOverrides uses to skip uninstalled-module overrides.
  await fs.ensureDir(path.join(bmadDir, 'core'));
  await fs.writeFile(path.join(bmadDir, 'core', 'config.yaml'), 'project_name: demo\n', 'utf8');
  await fs.ensureDir(path.join(bmadDir, 'bmm'));
  await fs.writeFile(
    path.join(bmadDir, 'bmm', 'config.yaml'),
    'project_knowledge: "{project-root}/docs"\nuser_skill_level: intermediate\n',
    'utf8',
  );

  const overrides = {
    core: { user_name: 'Brian' },
    bmm: { user_skill_level: 'expert', future_thing: 'persists' },
  };
  const applied = await applySetOverrides(overrides, bmadDir);

  const team = await fs.readFile(path.join(bmadDir, 'config.toml'), 'utf8');
  const user = await fs.readFile(path.join(bmadDir, 'config.user.toml'), 'utf8');

  assert(user.includes('user_name = "Brian"'), 'applySetOverrides updates user-scope key in config.user.toml');
  assert(user.includes('user_skill_level = "expert"'), 'applySetOverrides updates pre-existing user-scope key in config.user.toml');
  assert(team.includes('future_thing = "persists"'), 'applySetOverrides routes brand-new key to team config.toml');
  assert(team.includes('project_knowledge = "{project-root}/docs"'), 'applySetOverrides leaves untouched team keys alone');
  assert(!team.includes('user_name = "Brian"'), 'applySetOverrides does NOT duplicate user-scope key into team file');

  const summary = applied
    .map((a) => `${a.module}.${a.key}->${a.scope}`)
    .sort()
    .join(',');
  assert(
    summary === 'bmm.future_thing->team,bmm.user_skill_level->user,core.user_name->user',
    `applySetOverrides reports correct routing decisions (got: ${summary})`,
  );

  await fs.remove(tmp).catch(() => {});
}

// ---- applySetOverrides: team-only override leaves user.toml absent ---
{
  const tmp = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-applyset-nouser-'));
  const bmadDir = path.join(tmp, '_bmad');
  await fs.ensureDir(bmadDir);
  await fs.writeFile(path.join(bmadDir, 'config.toml'), '[core]\nuser_name = "Brian"\n', 'utf8');
  await fs.ensureDir(path.join(bmadDir, 'core'));
  await fs.writeFile(path.join(bmadDir, 'core', 'config.yaml'), 'user_name: Brian\n', 'utf8');
  // Override targets a key only in team config; routes to team. user.toml
  // never gets created in this case (correct — no user-scope writes).
  await applySetOverrides({ core: { user_name: 'Updated' } }, bmadDir);
  const team = await fs.readFile(path.join(bmadDir, 'config.toml'), 'utf8');
  assert(team.includes('user_name = "Updated"'), 'applySetOverrides updates team key when user.toml is absent');
  assert(
    !(await fs.pathExists(path.join(bmadDir, 'config.user.toml'))),
    'applySetOverrides does not create config.user.toml unnecessarily',
  );
  await fs.remove(tmp).catch(() => {});
}

// ---- applySetOverrides skips modules without per-module config.yaml --
{
  const tmp = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-applyset-skip-'));
  const bmadDir = path.join(tmp, '_bmad');
  await fs.ensureDir(bmadDir);
  await fs.writeFile(path.join(bmadDir, 'config.toml'), '[core]\nuser_name = "Brian"\n', 'utf8');
  await fs.ensureDir(path.join(bmadDir, 'core'));
  await fs.writeFile(path.join(bmadDir, 'core', 'config.yaml'), 'user_name: Brian\n', 'utf8');
  // bmm is not installed (no `_bmad/bmm/config.yaml`). The override for
  // bmm should be silently skipped, no `[modules.bmm]` section created.
  const applied = await applySetOverrides({ bmm: { foo: 'bar' }, core: { user_name: 'Updated' } }, bmadDir);
  const team = await fs.readFile(path.join(bmadDir, 'config.toml'), 'utf8');
  assert(!team.includes('[modules.bmm]'), 'applySetOverrides does NOT create section for uninstalled module');
  assert(team.includes('user_name = "Updated"'), 'applySetOverrides still applies overrides for installed modules');
  assert(applied.length === 1 && applied[0].module === 'core', 'applySetOverrides reports only the installed-module entries');
  await fs.remove(tmp).catch(() => {});
}

// ---- applySetOverrides: empty/missing input is a no-op ---------------
{
  const tmp = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-applyset-empty-'));
  const bmadDir = path.join(tmp, '_bmad');
  await fs.ensureDir(bmadDir);
  const empty1 = await applySetOverrides({}, bmadDir);
  const empty2 = await applySetOverrides(null, bmadDir);
  const empty3 = await applySetOverrides(undefined, bmadDir);
  assert(
    empty1.length === 0 && empty2.length === 0 && empty3.length === 0,
    'applySetOverrides is a no-op for empty/null/undefined input',
  );
  await fs.remove(tmp).catch(() => {});
}

// ---- discoverOfficialModuleYamls + formatOptionsList -----------------
// These read the on-disk external-module cache. Point that env at a temp
// dir so test results don't depend on whatever the developer / CI runner
// has cached.
const priorCacheEnv44 = process.env.BMAD_EXTERNAL_MODULES_CACHE;
const tempCacheDir44 = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-list-options-cache-'));
process.env.BMAD_EXTERNAL_MODULES_CACHE = tempCacheDir44;
try {
  const discovered = await discoverOfficialModuleYamls();
  const codes = new Set(discovered.map((d) => d.code));
  assert(codes.has('core') && codes.has('bmm'), 'discoverOfficialModuleYamls finds core and bmm built-ins');

  const bmmListing = await formatOptionsList('bmm');
  assert(bmmListing.ok === true, '--list-options bmm reports ok: true');
  assert(bmmListing.text.includes('bmm.project_knowledge'), '--list-options bmm renders bmm.project_knowledge');
  assert(bmmListing.text.includes('bmm.user_skill_level'), '--list-options bmm renders bmm.user_skill_level');

  // Case-insensitive filter.
  const bmmUpper = await formatOptionsList('BMM');
  assert(bmmUpper.ok === true && bmmUpper.text.includes('bmm.project_knowledge'), '--list-options is case-insensitive');

  // Unknown module → non-zero exit signal.
  const unknown = await formatOptionsList('definitely-not-a-module');
  assert(unknown.ok === false, '--list-options <unknown> reports ok: false');
  assert(unknown.text.includes('No locally-known module.yaml'), '--list-options unknown explains the miss');
} finally {
  if (priorCacheEnv44 === undefined) {
    delete process.env.BMAD_EXTERNAL_MODULES_CACHE;
  } else {
    process.env.BMAD_EXTERNAL_MODULES_CACHE = priorCacheEnv44;
  }
  await fs.remove(tempCacheDir44).catch(() => {});
}
} catch (error) {
  console.log(`${colors.red}Test Suite 44 setup failed: ${error.message}${colors.reset}`);
  console.log(error.stack);
  failed++;
}

console.log('');

// ============================================================
// Summary
// ============================================================

@ -0,0 +1,294 @@
/**
 * parseSource() URL parsing tests
 *
 * Verifies that CustomModuleManager.parseSource() correctly handles Git URLs
 * across arbitrary hosts and path shapes (deep paths, nested groups, browse
 * links, repo names containing dots, etc.) using host-agnostic rules.
 *
 * Usage: node test/test-parse-source-urls.js
 */

const { CustomModuleManager } = require('../tools/installer/modules/custom-module-manager');

// ANSI colors
const colors = {
  reset: '\u001B[0m',
  green: '\u001B[32m',
  red: '\u001B[31m',
  cyan: '\u001B[36m',
  dim: '\u001B[2m',
};

let passed = 0;
let failed = 0;

function assert(condition, testName, errorMessage = '') {
  if (condition) {
    console.log(`${colors.green}✓${colors.reset} ${testName}`);
    passed++;
  } else {
    console.log(`${colors.red}✗${colors.reset} ${testName}`);
    if (errorMessage) {
      console.log(`  ${colors.dim}${errorMessage}${colors.reset}`);
    }
    failed++;
  }
}

const manager = new CustomModuleManager();

// ─── Deep path shapes (4+ segments) ─────────────────────────────────────────

console.log(`\n${colors.cyan}Deep path shapes${colors.reset}\n`);

{
  // Hosts that expose the repo at a nested path like /<org>/<project>/<marker>/<repo>.
  // The parser must preserve the full path (no stripping of intermediate segments).
  const result = manager.parseSource('https://git.example.com/myorg/MyProject/_git/my-module');
  assert(result.isValid === true, 'nested-path URL is valid');
  assert(result.type === 'url', 'nested-path type is url');
  assert(
    result.cloneUrl === 'https://git.example.com/myorg/MyProject/_git/my-module',
    'nested-path cloneUrl preserves full path',
    `Got: ${result.cloneUrl}`,
  );
  assert(result.subdir === null, 'nested-path URL has no subdir');
  assert(
    result.cacheKey === 'git.example.com/myorg/MyProject/_git/my-module',
    'nested-path cacheKey includes full repo path',
    `Got: ${result.cacheKey}`,
  );
  assert(result.displayName === '_git/my-module', 'nested-path displayName uses last two segments', `Got: ${result.displayName}`);
}

{
  const result = manager.parseSource('https://git.example.com/myorg/MyProject/_git/my-module.git');
  assert(result.isValid === true, 'nested-path URL with .git suffix is valid');
  assert(
    result.cloneUrl === 'https://git.example.com/myorg/MyProject/_git/my-module',
    'nested-path .git suffix stripped from cloneUrl',
    `Got: ${result.cloneUrl}`,
  );
}

{
  // Browse links that use ?path=/... to point at a subdirectory.
  const result = manager.parseSource('https://git.example.com/myorg/MyProject/_git/my-module?path=/path/to/subdir');
  assert(result.isValid === true, 'URL with ?path= is valid');
  assert(
    result.cloneUrl === 'https://git.example.com/myorg/MyProject/_git/my-module',
    '?path= cloneUrl excludes subdir',
    `Got: ${result.cloneUrl}`,
  );
  assert(result.subdir === 'path/to/subdir', '?path= subdir correctly extracted', `Got: ${result.subdir}`);
}
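The deep-path rules these blocks rely on reduce to: keep the whole pathname, strip a trailing `.git`, and read `?path=` as a subdir pointer. A hedged sketch of just that slice using the WHATWG `URL` API follows; the real `parseSource` also handles `/tree/<ref>`-style browse links, SSH forms, and validation, none of which appear here:

```javascript
// Illustrative splitter for https browse URLs only. Assumes the repo lives
// at the full pathname and that ?path=/x/y points inside it.
function splitBrowseUrlSketch(rawUrl) {
  const url = new URL(rawUrl);
  const pathname = url.pathname.replace(/\.git$/, '');
  const pathParam = url.searchParams.get('path');
  return {
    cloneUrl: `${url.origin}${pathname}`,       // scheme + host + full repo path
    cacheKey: `${url.host}${pathname}`,         // host-qualified, scheme-free
    subdir: pathParam ? pathParam.replace(/^\/+/, '') : null,
  };
}
```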
|
||||
|
||||
// ─── Azure DevOps URLs (Issue #2268) ────────────────────────────────────────
|
||||
|
||||
console.log(`\n${colors.cyan}Azure DevOps URLs (Issue #2268)${colors.reset}\n`);
|
||||
|
||||
{
|
||||
// Modern dev.azure.com format — the exact URL from the bug report.
|
||||
const result = manager.parseSource('https://dev.azure.com/myorg/MyProject/_git/my-module');
|
||||
assert(result.isValid === true, 'ADO modern URL is valid');
|
||||
assert(result.type === 'url', 'ADO modern type is url');
|
||||
assert(
|
||||
result.cloneUrl === 'https://dev.azure.com/myorg/MyProject/_git/my-module',
|
||||
'ADO modern cloneUrl preserves full _git path',
|
||||
`Got: ${result.cloneUrl}`,
|
||||
);
|
||||
assert(
|
||||
result.cacheKey === 'dev.azure.com/myorg/MyProject/_git/my-module',
|
||||
'ADO modern cacheKey includes full path',
|
||||
`Got: ${result.cacheKey}`,
|
||||
);
|
||||
assert(result.subdir === null, 'ADO modern URL has no subdir');
|
||||
}
|
||||
|
||||
{
|
||||
// Modern format with .git suffix
|
||||
const result = manager.parseSource('https://dev.azure.com/myorg/MyProject/_git/my-module.git');
|
||||
assert(result.isValid === true, 'ADO modern .git suffix is valid');
|
||||
assert(
|
||||
result.cloneUrl === 'https://dev.azure.com/myorg/MyProject/_git/my-module',
|
||||
'ADO modern .git suffix stripped from cloneUrl',
|
||||
`Got: ${result.cloneUrl}`,
|
||||
);
|
||||
}
|
||||
|
||||
{
|
||||
// Modern format with ?path= subdir (browse link)
|
||||
const result = manager.parseSource('https://dev.azure.com/myorg/MyProject/_git/my-module?path=/src/skills');
|
||||
assert(result.isValid === true, 'ADO modern ?path= is valid');
|
||||
assert(
|
||||
result.cloneUrl === 'https://dev.azure.com/myorg/MyProject/_git/my-module',
|
||||
'ADO modern ?path= cloneUrl excludes subdir',
|
||||
`Got: ${result.cloneUrl}`,
|
||||
);
|
||||
assert(result.subdir === 'src/skills', 'ADO modern ?path= subdir extracted', `Got: ${result.subdir}`);
|
||||
}
|
||||
|
||||
{
|
||||
// Legacy visualstudio.com format
|
||||
const result = manager.parseSource('https://myorg.visualstudio.com/MyProject/_git/my-module');
|
||||
assert(result.isValid === true, 'ADO legacy URL is valid');
|
||||
assert(
|
||||
result.cloneUrl === 'https://myorg.visualstudio.com/MyProject/_git/my-module',
|
||||
'ADO legacy cloneUrl preserves full path',
|
||||
`Got: ${result.cloneUrl}`,
|
||||
);
|
||||
assert(
|
||||
result.cacheKey === 'myorg.visualstudio.com/MyProject/_git/my-module',
|
||||
'ADO legacy cacheKey includes full path',
|
||||
`Got: ${result.cacheKey}`,
|
||||
);
|
||||
}
|
||||
|
||||
{
|
||||
// Legacy format with .git suffix
|
||||
const result = manager.parseSource('https://myorg.visualstudio.com/MyProject/_git/my-module.git');
|
||||
assert(result.isValid === true, 'ADO legacy .git suffix is valid');
|
||||
assert(
|
||||
result.cloneUrl === 'https://myorg.visualstudio.com/MyProject/_git/my-module',
|
||||
'ADO legacy .git suffix stripped from cloneUrl',
|
||||
`Got: ${result.cloneUrl}`,
|
||||
);
|
||||
}
|
||||
|
||||
{
|
||||
// Legacy format with ?path= subdir
|
||||
const result = manager.parseSource('https://myorg.visualstudio.com/MyProject/_git/my-module?path=/src');
|
||||
assert(result.isValid === true, 'ADO legacy ?path= is valid');
|
||||
assert(
|
||||
result.cloneUrl === 'https://myorg.visualstudio.com/MyProject/_git/my-module',
|
||||
'ADO legacy ?path= cloneUrl excludes subdir',
|
||||
`Got: ${result.cloneUrl}`,
|
||||
);
|
||||
assert(result.subdir === 'src', 'ADO legacy ?path= subdir extracted', `Got: ${result.subdir}`);
|
||||
}
|
||||
|
||||
// ─── Subdomain hosts ────────────────────────────────────────────────────────
|
||||
|
||||
console.log(`\n${colors.cyan}Subdomain hosts${colors.reset}\n`);
|
||||
|
||||
{
|
||||
const result = manager.parseSource('https://myorg.example.com/MyProject/_git/my-module');
|
||||
assert(result.isValid === true, 'subdomain URL is valid');
|
||||
assert(result.type === 'url', 'subdomain type is url');
|
||||
assert(
|
||||
result.cloneUrl === 'https://myorg.example.com/MyProject/_git/my-module',
|
||||
'subdomain cloneUrl preserves full path',
|
||||
`Got: ${result.cloneUrl}`,
|
||||
);
|
||||
assert(result.subdir === null, 'subdomain URL has no subdir');
|
||||
assert(
|
||||
result.cacheKey === 'myorg.example.com/MyProject/_git/my-module',
|
||||
'subdomain cacheKey includes full repo path',
|
||||
`Got: ${result.cacheKey}`,
|
||||
);
|
||||
}
|
||||
|
||||
// ─── Simple owner/repo URLs (regression) ────────────────────────────────────
|
||||
|
||||
console.log(`\n${colors.cyan}Simple owner/repo URLs (regression check)${colors.reset}\n`);
|
||||
|
||||
{
|
||||
const result = manager.parseSource('https://github.com/owner/repo');
|
||||
assert(result.isValid === true, 'GitHub basic URL still valid');
|
||||
assert(result.cloneUrl === 'https://github.com/owner/repo', 'GitHub cloneUrl unchanged', `Got: ${result.cloneUrl}`);
|
||||
assert(result.cacheKey === 'github.com/owner/repo', 'GitHub cacheKey unchanged', `Got: ${result.cacheKey}`);
|
||||
}
|
||||
|
||||
{
|
||||
const result = manager.parseSource('https://github.com/owner/repo/tree/main/subdir');
|
||||
assert(result.isValid === true, 'GitHub URL with tree path still valid');
|
||||
assert(result.cloneUrl === 'https://github.com/owner/repo', 'GitHub tree URL cloneUrl correct', `Got: ${result.cloneUrl}`);
|
||||
assert(result.subdir === 'subdir', 'GitHub tree subdir still extracted', `Got: ${result.subdir}`);
|
||||
}
|
||||
|
||||
{
|
||||
const result = manager.parseSource('git@github.com:owner/repo.git');
|
||||
assert(result.isValid === true, 'SSH URL still valid');
|
||||
assert(result.cloneUrl === 'git@github.com:owner/repo.git', 'SSH cloneUrl unchanged', `Got: ${result.cloneUrl}`);
|
||||
}
|
||||
|
||||
// ─── Generic URL handling (any host, any path depth) ────────────────────────
|
||||
|
||||
console.log(`\n${colors.cyan}Generic URL handling${colors.reset}\n`);
|
||||
|
||||
{
|
||||
// GitLab nested groups — the old 2-segment regex would have failed this.
|
||||
const result = manager.parseSource('https://gitlab.com/group/subgroup/repo');
|
||||
assert(result.isValid === true, 'GitLab nested-group URL is valid');
|
||||
assert(
|
||||
result.cloneUrl === 'https://gitlab.com/group/subgroup/repo',
|
||||
'GitLab nested-group cloneUrl preserves full path',
|
||||
`Got: ${result.cloneUrl}`,
|
||||
);
|
||||
assert(
|
||||
result.cacheKey === 'gitlab.com/group/subgroup/repo',
|
||||
'GitLab nested-group cacheKey includes full path',
|
||||
`Got: ${result.cacheKey}`,
|
||||
);
|
||||
assert(result.displayName === 'subgroup/repo', 'GitLab nested-group displayName uses last two segments', `Got: ${result.displayName}`);
|
||||
}
|
||||
|
||||
{
|
||||
const result = manager.parseSource('https://gitlab.com/group/subgroup/repo/-/tree/main/src/module');
|
||||
assert(result.isValid === true, 'GitLab nested-group tree URL is valid');
|
||||
assert(
|
||||
result.cloneUrl === 'https://gitlab.com/group/subgroup/repo',
|
||||
'GitLab nested-group tree cloneUrl excludes subdir',
|
||||
`Got: ${result.cloneUrl}`,
|
||||
);
|
||||
assert(result.subdir === 'src/module', 'GitLab nested-group tree subdir extracted', `Got: ${result.subdir}`);
|
||||
}
|
||||
|
||||
{
|
||||
// Self-hosted host with a repo name containing dots — the old regex
|
||||
// explicitly excluded dots from the repo segment.
|
||||
const result = manager.parseSource('https://git.example.com/owner/my.repo.name');
|
||||
assert(result.isValid === true, 'repo name with dots is valid');
|
||||
assert(
|
||||
result.cloneUrl === 'https://git.example.com/owner/my.repo.name',
|
||||
'repo name with dots preserved in cloneUrl',
|
||||
`Got: ${result.cloneUrl}`,
|
||||
);
|
||||
assert(result.displayName === 'owner/my.repo.name', 'repo name with dots preserved in displayName', `Got: ${result.displayName}`);
|
||||
}
|
||||
|
||||
{
|
||||
// Browser URL pointing at a ref with NO trailing subdir must still strip
|
||||
// the /tree/<ref> segment from the clone URL.
|
||||
const result = manager.parseSource('https://github.com/owner/repo/tree/main');
|
||||
assert(result.isValid === true, 'tree URL without subdir is valid');
|
||||
assert(
|
||||
result.cloneUrl === 'https://github.com/owner/repo',
|
||||
'tree URL without subdir strips ref from cloneUrl',
|
||||
`Got: ${result.cloneUrl}`,
|
||||
);
|
||||
assert(result.subdir === null, 'tree URL without subdir yields null subdir', `Got: ${result.subdir}`);
|
||||
assert(result.displayName === 'owner/repo', 'tree URL without subdir displayName is owner/repo', `Got: ${result.displayName}`);
|
||||
}
|
||||
|
||||
{
|
||||
// Same shape for GitLab's /-/tree form and Gitea's /src/branch form.
|
||||
const gitlab = manager.parseSource('https://gitlab.com/group/repo/-/tree/main');
|
||||
assert(
|
||||
gitlab.cloneUrl === 'https://gitlab.com/group/repo' && gitlab.subdir === null,
|
||||
'GitLab /-/tree/<ref> without subdir strips ref',
|
||||
`Got: ${gitlab.cloneUrl} subdir=${gitlab.subdir}`,
|
||||
);
|
||||
|
||||
const gitea = manager.parseSource('https://gitea.example.com/owner/repo/src/branch/main');
|
||||
assert(
|
||||
gitea.cloneUrl === 'https://gitea.example.com/owner/repo' && gitea.subdir === null,
|
||||
'Gitea /src/branch/<ref> without subdir strips ref',
|
||||
`Got: ${gitea.cloneUrl} subdir=${gitea.subdir}`,
|
||||
);
|
||||
}
|
||||
|
||||
// ─── Summary ────────────────────────────────────────────────────────────────
|
||||
|
||||
console.log(`\n${colors.cyan}Results: ${passed} passed, ${failed} failed${colors.reset}\n`);
|
||||
process.exit(failed > 0 ? 1 : 0);
|
||||
|
|
@ -15,7 +15,18 @@ module.exports = {
    ['--modules <modules>', 'Comma-separated list of module IDs to install (e.g., "bmm,bmb")'],
    [
      '--tools <tools>',
      'Comma-separated list of tool/IDE IDs to configure (e.g., "claude-code,cursor"). Use "none" to skip tool configuration.',
      'Comma-separated list of tool/IDE IDs to configure (e.g., "claude-code,cursor"). Required for fresh non-interactive (--yes) installs. Run with --list-tools to see all valid IDs.',
    ],
    ['--list-tools', 'Print all supported tool/IDE IDs (with target directories) and exit.'],
    [
      '--set <spec>',
      'Set a module config option non-interactively. Spec format: <module>.<key>=<value> (e.g. bmm.project_knowledge=research). Repeatable. Run --list-options to see available keys.',
      (value, prev) => [...(prev || []), value],
      [],
    ],
    [
      '--list-options [module]',
      'List available --set keys for all locally-known official modules, or for a single module by code, then exit.',
    ],
    ['--action <type>', 'Action type for existing installations: install, update, or quick-update'],
    ['--user-name <name>', 'Name for agents to use (default: system username)'],

@ -40,12 +51,49 @@ module.exports = {
  ],
  action: async (options) => {
    try {
      if (options.listTools) {
        const { formatPlatformList } = require('../ide/platform-codes');
        process.stdout.write((await formatPlatformList()) + '\n');
        process.exit(0);
      }

      if (options.listOptions !== undefined) {
        const { formatOptionsList } = require('../list-options');
        const moduleArg = options.listOptions === true ? null : options.listOptions;
        const { text, ok } = await formatOptionsList(moduleArg);
        const stream = ok ? process.stdout : process.stderr;
        // process.exit() forces immediate termination and can truncate the
        // buffered write when stdout/stderr is piped or captured by CI. Wait
        // for the write to flush, then set process.exitCode and return so the
        // event loop drains naturally. Non-zero exit when a single-module
        // lookup misses so a CI typo like `--list-options bmn` doesn't look
        // successful in scripts.
        await new Promise((resolve, reject) => {
          stream.write(text + '\n', (error) => (error ? reject(error) : resolve()));
        });
        process.exitCode = ok ? 0 : 1;
        return;
      }

      // Set debug flag as environment variable for all components
      if (options.debug) {
        process.env.BMAD_DEBUG_MANIFEST = 'true';
        await prompts.log.info('Debug mode enabled');
      }

      // Validate --set syntax up-front so malformed entries fail fast,
      // before we touch the network or filesystem. Parsed entries are
      // re-derived inside ui.js where overrides are seeded.
      if (options.set && options.set.length > 0) {
        const { parseSetEntries } = require('../set-overrides');
        try {
          parseSetEntries(options.set);
        } catch (error) {
          await prompts.log.error(error.message);
          process.exit(1);
        }
      }

      const config = await ui.promptInstall(options);

      // Handle cancel

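The flush-before-exit comment in the `--list-options` branch above describes a reusable Node pattern. This standalone sketch (names are illustrative, not from the installer) shows its shape:

```javascript
// Write through the stream's completion callback, then record the exit
// status via process.exitCode instead of process.exit(), so piped or
// CI-captured output is never truncated mid-buffer.
function flushThenExit(stream, text, ok) {
  return new Promise((resolve, reject) => {
    stream.write(`${text}\n`, (error) => {
      if (error) return reject(error);
      process.exitCode = ok ? 0 : 1;
      resolve();
    });
  });
}
```

Setting `process.exitCode` lets the event loop drain naturally and the process exits with that code once nothing is pending, which is exactly the behavior the branch above relies on.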
@ -54,8 +102,13 @@ module.exports = {
        process.exit(0);
      }

      // Handle quick update separately
      // Handle quick update separately. --set is a post-install TOML patch so
      // it works the same way for quick-update as for a regular install — the
      // installer runs, then `applySetOverrides` patches the central config
      // files. Pass the parsed overrides through.
      if (config.actionType === 'quick-update') {
        const { parseSetEntries } = require('../set-overrides');
        config.setOverrides = parseSetEntries(options.set || []);
        const result = await installer.quickUpdate(config);
        await prompts.log.success('Quick update complete!');
        await prompts.log.info(`Updated ${result.moduleCount} modules with preserved settings (${result.modules.join(', ')})`);

@ -81,7 +134,7 @@ module.exports = {
      } else {
        await prompts.log.error(`Installation failed: ${error.message}`);
      }
      if (error.stack) {
      if (error.stack && !error.expected) {
        await prompts.log.message(error.stack);
      }
    } catch {

@ -3,7 +3,19 @@
 * User input comes from either UI answers or headless CLI flags.
 */
class Config {
  constructor({ directory, modules, ides, skipPrompts, verbose, actionType, coreConfig, moduleConfigs, quickUpdate, channelOptions }) {
  constructor({
    directory,
    modules,
    ides,
    skipPrompts,
    verbose,
    actionType,
    coreConfig,
    moduleConfigs,
    quickUpdate,
    channelOptions,
    setOverrides,
  }) {
    this.directory = directory;
    this.modules = Object.freeze([...modules]);
    this.ides = Object.freeze([...ides]);

@ -15,6 +27,11 @@ class Config {
    this._quickUpdate = quickUpdate;
    // channelOptions carry a Map + Set; don't deep-freeze.
    this.channelOptions = channelOptions || null;
    // Parsed `--set <module>.<key>=<value>` overrides, applied as a TOML
    // patch AFTER the install finishes. Shape: { moduleCode: { key: value } }.
    // Intentionally NOT integrated with the prompt/template/schema flow; see
    // `tools/installer/set-overrides.js` for the rationale and tradeoffs.
    this.setOverrides = setOverrides || {};
    Object.freeze(this);
  }

@ -40,6 +57,7 @@ class Config {
      moduleConfigs: userInput.moduleConfigs || null,
      quickUpdate: userInput._quickUpdate || false,
      channelOptions: userInput.channelOptions || null,
      setOverrides: userInput.setOverrides || {},
    });
  }

@ -12,6 +12,7 @@ const { BMAD_FOLDER_NAME } = require('../ide/shared/path-utils');
const { InstallPaths } = require('./install-paths');
const { ExternalModuleManager } = require('../modules/external-manager');
const { resolveModuleVersion } = require('../modules/version-resolver');
const { MODULE_HELP_CSV_HEADER } = require('../modules/module-help-schema');

const { ExistingInstall } = require('./existing-install');
const { warnPreNativeSkillsLegacy } = require('./legacy-warnings');

@ -310,6 +311,19 @@ class Installer {
      moduleConfigs,
    });

    // Apply post-install --set TOML patches. Runs after writeCentralConfig
    // (inside generateManifests above) so the patch operates on the
    // freshly written `_bmad/config.toml` / `_bmad/config.user.toml`.
    // See `tools/installer/set-overrides.js` for routing rules.
    if (config.setOverrides && Object.keys(config.setOverrides).length > 0) {
      const { applySetOverrides } = require('../set-overrides');
      const applied = await applySetOverrides(config.setOverrides, paths.bmadDir);
      if (applied.length > 0) {
        const summary = applied.map((a) => `${a.module}.${a.key} → ${a.file}`).join(', ');
        await prompts.log.info(`Applied --set overrides: ${summary}`);
      }
    }

    message('Generating help catalog...');
    await this.mergeModuleHelpCatalogs(paths.bmadDir, manifestGen.agents);
    addResult('Help catalog', 'ok');

@ -923,29 +937,15 @@ class Installer {
  /**
   * Merge all module-help.csv files into a single bmad-help.csv.
   * Scans all installed modules for module-help.csv and merges them.
   * Enriches agent info from the in-memory agent list produced by ManifestGenerator.
   * Output is written to _bmad/_config/bmad-help.csv.
   * Output preserves the source schema verbatim — see schema below.
   * @param {string} bmadDir - BMAD installation directory
   * @param {Array<Object>} agentEntries - Agents collected from module.yaml (code, name, title, icon, module, ...)
   * @param {Array<Object>} _agentEntries - Unused; retained for call-site compatibility
   */
  async mergeModuleHelpCatalogs(bmadDir, agentEntries = []) {
  async mergeModuleHelpCatalogs(bmadDir, _agentEntries = []) {
    const allRows = [];
    const headerRow =
      'module,phase,name,code,sequence,workflow-file,command,required,agent-name,agent-command,agent-display-name,agent-title,options,description,output-location,outputs';

    // Build agent lookup from the in-memory list (agent code → command + display fields).
    const agentInfo = new Map();
    for (const agent of agentEntries) {
      if (!agent || !agent.code) continue;
      const agentCommand = agent.module ? `bmad:${agent.module}:agent:${agent.code}` : `bmad:agent:${agent.code}`;
      const displayName = agent.name || agent.code;
      const titleCombined = agent.icon && agent.title ? `${agent.icon} ${agent.title}` : agent.title || agent.code;
      agentInfo.set(agent.code, {
        command: agentCommand,
        displayName,
        title: titleCombined,
      });
    }
    const headerRow = MODULE_HELP_CSV_HEADER;
    const COLUMN_COUNT = 13;
    const PHASE_INDEX = 7;

    // Get all installed module directories
    const entries = await fs.readdir(bmadDir, { withFileTypes: true });

@@ -976,72 +976,37 @@ class Installer {
      const content = await fs.readFile(helpFilePath, 'utf8');
      const lines = content.split('\n').filter((line) => line.trim() && !line.startsWith('#'));

      let headerWarned = false;
      for (const line of lines) {
        // Skip header row
        // Header row: warn on drift from canonical schema, then skip.
        // Data rows are loaded positionally regardless, so the warning
        // is advisory — the maintainer should rename their columns.
        if (line.startsWith('module,')) {
          if (!headerWarned && line.trim() !== headerRow) {
            await prompts.log.warn(
              ` ${moduleName}/module-help.csv header does not match canonical schema. ` +
                `Expected: ${headerRow} | Found: ${line.trim()} | Data loaded positionally.`,
            );
            headerWarned = true;
          }
          continue;
        }

        // Parse the line - handle quoted fields with commas
        const columns = this.parseCSVLine(line);
        if (columns.length >= 12) {
          // Map old schema to new schema
          // Old: module,phase,name,code,sequence,workflow-file,command,required,agent,options,description,output-location,outputs
          // New: module,phase,name,code,sequence,workflow-file,command,required,agent-name,agent-command,agent-display-name,agent-title,options,description,output-location,outputs
        if (columns.length < COLUMN_COUNT - 1) continue;

          const [
            module,
            phase,
            name,
            code,
            sequence,
            workflowFile,
            command,
            required,
            agentName,
            options,
            description,
            outputLocation,
            outputs,
          ] = columns;
        // Pad short rows; truncate over-long rows
        const padded = columns.slice(0, COLUMN_COUNT);
        while (padded.length < COLUMN_COUNT) padded.push('');

          // Pass through _meta rows as-is (module metadata, not a skill)
          if (phase === '_meta') {
            const finalModule = (!module || module.trim() === '') && moduleName !== 'core' ? moduleName : module || '';
            const metaRow = [finalModule, '_meta', '', '', '', '', '', 'false', '', '', '', '', '', '', outputLocation || '', ''];
            allRows.push(metaRow.map((c) => this.escapeCSVField(c)).join(','));
            continue;
        // If module column is empty, fill with this module's name
        // (core stays empty so its rows render as universal tools)
        if ((!padded[0] || padded[0].trim() === '') && moduleName !== 'core') {
          padded[0] = moduleName;
        }

          // If module column is empty, set it to this module's name (except for core which stays empty for universal tools)
          const finalModule = (!module || module.trim() === '') && moduleName !== 'core' ? moduleName : module || '';

          // Lookup agent info
          const cleanAgentName = agentName ? agentName.trim() : '';
          const agentData = agentInfo.get(cleanAgentName) || { command: '', displayName: '', title: '' };

          // Build new row with agent info
          const newRow = [
            finalModule,
            phase || '',
            name || '',
            code || '',
            sequence || '',
            workflowFile || '',
            command || '',
            required || 'false',
            cleanAgentName,
            agentData.command,
            agentData.displayName,
            agentData.title,
            options || '',
            description || '',
            outputLocation || '',
            outputs || '',
          ];

          allRows.push(newRow.map((c) => this.escapeCSVField(c)).join(','));
        }
        allRows.push(padded.map((c) => this.escapeCSVField(c)).join(','));
      }

      if (process.env.BMAD_VERBOSE_INSTALL === 'true') {
@@ -1053,44 +1018,34 @@ class Installer {
      }
    }

    // Sort by module, then phase, then sequence
    allRows.sort((a, b) => {
      const colsA = this.parseCSVLine(a);
      const colsB = this.parseCSVLine(b);
    // Sort by module, then phase. Stable sort preserves authored order within a phase.
    const decorated = allRows.map((row, index) => ({ row, index, cols: this.parseCSVLine(row) }));
    decorated.sort((a, b) => {
      const moduleA = (a.cols[0] || '').toLowerCase();
      const moduleB = (b.cols[0] || '').toLowerCase();
      if (moduleA !== moduleB) return moduleA.localeCompare(moduleB);

      // Module comparison (empty module/universal tools come first)
      const moduleA = (colsA[0] || '').toLowerCase();
      const moduleB = (colsB[0] || '').toLowerCase();
      if (moduleA !== moduleB) {
        return moduleA.localeCompare(moduleB);
      }
      const phaseA = a.cols[PHASE_INDEX] || '';
      const phaseB = b.cols[PHASE_INDEX] || '';
      if (phaseA !== phaseB) return phaseA.localeCompare(phaseB);

      // Phase comparison
      const phaseA = colsA[1] || '';
      const phaseB = colsB[1] || '';
      if (phaseA !== phaseB) {
        return phaseA.localeCompare(phaseB);
      }

      // Sequence comparison
      const seqA = parseInt(colsA[4] || '0', 10);
      const seqB = parseInt(colsB[4] || '0', 10);
      return seqA - seqB;
      return a.index - b.index;
    });
    const sortedRows = decorated.map((d) => d.row);

    // Write merged catalog
    const outputDir = path.join(bmadDir, '_config');
    await fs.ensureDir(outputDir);
    const outputPath = path.join(outputDir, 'bmad-help.csv');

    const mergedContent = [headerRow, ...allRows].join('\n');
    const mergedContent = [headerRow, ...sortedRows].join('\n');
    await fs.writeFile(outputPath, mergedContent, 'utf8');

    // Track the installed file
    this.installedFiles.add(outputPath);

    if (process.env.BMAD_VERBOSE_INSTALL === 'true') {
      await prompts.log.message(` Generated bmad-help.csv: ${allRows.length} workflows`);
      await prompts.log.message(` Generated bmad-help.csv: ${sortedRows.length} workflows`);
    }
  }
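The catalog-sort hunk replaces a `parseInt`-on-sequence comparator with a decorate–sort–undecorate pass that tie-breaks on original index. A minimal standalone sketch of that pattern (the rows and the two-column layout here are illustrative, not the installer's real 16-column schema):

```javascript
// Decorate each row with its original index so ties keep authored order,
// independent of the engine's sort-stability guarantees.
function sortRows(rows, parseLine) {
  const decorated = rows.map((row, index) => ({ row, index, cols: parseLine(row) }));
  decorated.sort((a, b) => {
    const moduleA = (a.cols[0] || '').toLowerCase();
    const moduleB = (b.cols[0] || '').toLowerCase();
    if (moduleA !== moduleB) return moduleA.localeCompare(moduleB);
    const phaseA = a.cols[1] || '';
    const phaseB = b.cols[1] || '';
    if (phaseA !== phaseB) return phaseA.localeCompare(phaseB);
    return a.index - b.index; // tie-break: preserve authored order
  });
  return decorated.map((d) => d.row);
}

const parse = (line) => line.split(',');
console.log(sortRows(['bmm,2,b', 'bmm,1,a', 'bmm,1,z'], parse));
// → ['bmm,1,a', 'bmm,1,z', 'bmm,2,b'] — the two phase-1 rows keep their input order
```

The index tie-break is what lets the new code drop the per-row `parseInt` on the sequence column entirely: rows are already authored in sequence order within a phase.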
@@ -1352,6 +1307,10 @@ class Installer {
      ides: configuredIdes,
      coreConfig: quickModules.collectedConfig.core,
      moduleConfigs: quickModules.collectedConfig,
      // Forward `--set` overrides so the post-install patch step
      // (`applySetOverrides`) runs at the end of quick-update too. The
      // installer.install path applies them after writeCentralConfig.
      setOverrides: config.setOverrides || {},
      actionType: 'install',
      _quickUpdate: true,
      _preserveModules: skippedModules,
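The `setOverrides` map forwarded here originates from repeated `--set <module>.<key>=<value>` flags. The flag parser itself is not shown in this diff, so the following is a hypothetical sketch of the shape such a parser could take; the guard keys mirror the prototype-pollution defenses the changelog mentions, and the function name and module/key examples are assumptions:

```javascript
// Reject keys that would pollute Object.prototype when used as property names.
const FORBIDDEN_KEYS = new Set(['__proto__', 'prototype', 'constructor']);

// Parse repeated --set values ("module.key=value") into { module: { key: value } }.
function parseSetOverrides(pairs) {
  const overrides = Object.create(null); // null prototype: nothing to pollute
  for (const pair of pairs) {
    const eq = pair.indexOf('=');
    if (eq === -1) throw new Error(`--set expects <module>.<key>=<value>, got "${pair}"`);
    const lhs = pair.slice(0, eq);
    const value = pair.slice(eq + 1);
    const dot = lhs.indexOf('.');
    if (dot === -1) throw new Error(`--set key must be <module>.<key>, got "${lhs}"`);
    const module = lhs.slice(0, dot);
    const key = lhs.slice(dot + 1);
    if (FORBIDDEN_KEYS.has(module) || FORBIDDEN_KEYS.has(key)) {
      throw new Error(`--set rejects reserved key "${lhs}"`);
    }
    (overrides[module] ??= Object.create(null))[key] = value;
  }
  return overrides;
}

console.log(parseSetOverrides(['core.project_name=demo', 'bmm.output_dir=docs']));
```

Splitting on the first `=` only is deliberate: values may themselves contain `=` (e.g. connection strings).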
@@ -6,6 +6,125 @@ const csv = require('csv-parse/sync');
const { BMAD_FOLDER_NAME } = require('./shared/path-utils');
const { getInstalledCanonicalIds, isBmadOwnedEntry } = require('./shared/installed-skills');

// Reserved OpenCode slash commands. A skill whose canonicalId collides with
// one of these is skipped during command-pointer generation so it doesn't
// shadow a built-in.
const RESERVED_OPENCODE_COMMANDS = new Set([
  'review',
  'commit',
  'init',
  'help',
  'skills',
  'fast',
  'compact',
  'clear',
  'undo',
  'redo',
  'edit',
  'editor',
  'exit',
  'quit',
  'theme',
  'config',
  'model',
  'session',
]);

// Wrap a description for safe insertion into single-line YAML frontmatter.
// Leaves plain values untouched; double-quotes (and escapes) anything that
// could break YAML parsing or span multiple lines.
function yamlSafeSingleLine(value) {
  const collapsed = String(value)
    .replaceAll(/[\r\n]+/g, ' ')
    .trim();
  const needsQuoting = /[:#'"\\]/.test(collapsed) || /^[!&*?|>%@`[{]/.test(collapsed);
  if (!needsQuoting) return collapsed;
  const escaped = collapsed.replaceAll('\\', '\\\\').replaceAll('"', String.raw`\"`);
  return `"${escaped}"`;
}

// Validate that a canonicalId is a safe basename — no path separators, no
// parent-dir traversal, no leading dots, only the character set we expect.
// Defense-in-depth: the manifest is trusted today, but the value flows
// directly into a file path and a malformed entry should not write outside
// the commands directory.
function isSafeCanonicalId(value) {
  return typeof value === 'string' && /^[a-zA-Z0-9][a-zA-Z0-9_.-]*$/.test(value) && !value.includes('..');
}

// Default body template for command pointer files. Used when a platform's
// installer config doesn't override `commands_body_template`. Matches
// OpenCode's native `@skills/<id>` skill-reference syntax.
const DEFAULT_COMMANDS_BODY_TEMPLATE = '@skills/{canonicalId}';

// Is this skill a persona agent (vs. a workflow/tool/standalone skill)?
// Used by platforms that surface only persona agents (e.g. Copilot's Custom
// Agents picker). Signal: the skill's source `customize.toml` has an
// `[agent]` section. This is the actual configuration source of truth —
// every BMAD persona is configured via [agent] in its customize.toml,
// every workflow uses [workflow], every standalone skill has no
// customize.toml at all. Verified against the full installed manifest:
// catches exactly the 20 description-confirmed personas across BMM, CIS,
// GDS, WDS, TEA, and correctly excludes meta-skills like
// `bmad-agent-builder` (a skill-builder workflow whose canonical id
// contains `-agent-` but which has no [agent] section because it isn't a
// persona itself).
//
// Reading the source toml — at install time the source skill directory
// (resolved from manifest record.path) still exists; cleanup runs later
// in the install flow.
async function isAgentSkill(record, bmadDir) {
  if (!record?.path || !bmadDir) return false;
  const bmadFolderName = path.basename(bmadDir);
  const bmadPrefix = bmadFolderName + '/';
  const relativePath = record.path.startsWith(bmadPrefix) ? record.path.slice(bmadPrefix.length) : record.path;
  const tomlPath = path.join(bmadDir, path.dirname(relativePath), 'customize.toml');
  if (!(await fs.pathExists(tomlPath))) return false;
  try {
    const content = await fs.readFile(tomlPath, 'utf8');
    return /^\[agent\]/m.test(content);
  } catch {
    return false;
  }
}

// Resolve placeholders in a body template. Supported placeholders:
//   {canonicalId} — the skill's canonical id
//   {target_dir} — the platform's skill install directory (e.g. .agents/skills)
//   {project-root} — left as a literal placeholder for the model/tool to expand
//     at runtime; consistent with PR #1769's templates.
function expandBodyTemplate(template, { canonicalId, targetDir }) {
  return template.replaceAll('{canonicalId}', canonicalId).replaceAll('{target_dir}', targetDir);
}

// The exact body the installer would generate for a given description and
// canonicalId, given the platform's body template. Centralised so both the
// write and the freshness-check paths agree on the canonical form.
function buildCommandPointerBody(description, canonicalId, { template, targetDir }) {
  const bodyText = expandBodyTemplate(template, { canonicalId, targetDir });
  return `---\ndescription: ${yamlSafeSingleLine(description)}\n---\n\n${bodyText}\n`;
}

// Heuristic: does an existing pointer file look like our generator's output
// (and therefore safe to refresh) versus a user-modified file (which we
// preserve)? We check the body shape rather than full equality so that
// description-only edits in the manifest can propagate without trampling
// hand edits to the body.
function looksLikeGeneratorOutput(content, canonicalId, { template, targetDir }) {
  if (typeof content !== 'string') return false;
  const trimmed = content.trim();
  const expectedTail = expandBodyTemplate(template, { canonicalId, targetDir }).trim();
  // Must end with the exact body our generator writes (post-expansion).
  if (!trimmed.endsWith(expectedTail)) return false;
  // Must start with frontmatter containing exactly one description: line.
  const fmMatch = trimmed.match(/^---\n([\S\s]*?)\n---\n/);
  if (!fmMatch) return false;
  const fmLines = fmMatch[1].split('\n').filter((l) => l.length > 0);
  if (fmLines.length !== 1) return false;
  if (!fmLines[0].startsWith('description:')) return false;
  return true;
}

/**
 * Config-driven IDE setup handler
 *
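The pure helpers in this hunk combine into a predictable pointer-file shape. A self-contained sketch, lightly condensed from the two functions above, showing what a generated pointer looks like under OpenCode's default `@skills/{canonicalId}` template (the skill name used here is illustrative):

```javascript
// Collapse newlines and quote anything that could break single-line YAML.
function yamlSafeSingleLine(value) {
  const collapsed = String(value).replaceAll(/[\r\n]+/g, ' ').trim();
  const needsQuoting = /[:#'"\\]/.test(collapsed) || /^[!&*?|>%@`[{]/.test(collapsed);
  if (!needsQuoting) return collapsed;
  const escaped = collapsed.replaceAll('\\', '\\\\').replaceAll('"', '\\"');
  return `"${escaped}"`;
}

// Frontmatter with a single description line, then the expanded body template.
function buildCommandPointerBody(description, canonicalId, { template, targetDir }) {
  const bodyText = template.replaceAll('{canonicalId}', canonicalId).replaceAll('{target_dir}', targetDir);
  return `---\ndescription: ${yamlSafeSingleLine(description)}\n---\n\n${bodyText}\n`;
}

const body = buildCommandPointerBody('Party Mode: bring all agents', 'bmad-party-mode', {
  template: '@skills/{canonicalId}',
  targetDir: '.agents/skills',
});
console.log(body);
// ---
// description: "Party Mode: bring all agents"
// ---
//
// @skills/bmad-party-mode
```

The colon in the description trips `needsQuoting`, so it is emitted double-quoted — exactly the case the `looksLikeGeneratorOutput` heuristic later relies on when deciding whether a pointer file is still generator-shaped.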
@@ -97,9 +216,15 @@ class ConfigDrivenIdeSetup {
    }

    // When a peer platform in the same install batch owns this target_dir,
    // skip the skill write — the peer has already populated it.
    // skip the skill write — the peer has already populated it. Command
    // pointers, however, write to a separate per-IDE directory and must
    // still be generated for this IDE; they are not deduped across peers.
    if (options.skipTarget) {
      return { success: true, results: { skills: 0, sharedTargetHandledByPeer: true } };
      const results = { skills: 0, sharedTargetHandledByPeer: true };
      if (this.installerConfig.commands_target_dir) {
        results.commands = await this.installCommandPointers(projectDir, bmadDir, this.installerConfig, options);
      }
      return { success: true, results };
    }

    if (this.installerConfig.target_dir) {
@@ -128,11 +253,157 @@ class ConfigDrivenIdeSetup {
    results.skills = await this.installVerbatimSkills(projectDir, bmadDir, targetPath, config);
    results.skillDirectories = this.skillWriteTracker.size;

    if (config.commands_target_dir) {
      results.commands = await this.installCommandPointers(projectDir, bmadDir, config, options);
    }

    await this.printSummary(results, target_dir, options);
    this.skillWriteTracker = null;
    return { success: true, results };
  }

  /**
   * Generate per-skill command pointer files for IDEs that surface commands
   * separately from skills (e.g. OpenCode's `.opencode/commands/<name>.md`).
   *
   * Each pointer is a tiny markdown file whose body is `@skills/<canonicalId>`
   * so invoking `/<canonicalId>` routes the user straight to the skill instead
   * of forcing them through a `/skills` menu.
   *
   * Skips:
   * - Names that collide with reserved built-in slash commands.
   * - canonicalIds that aren't safe basename-only identifiers (defense
   *   against path traversal even though the manifest is currently trusted).
   * - Existing files whose body looks user-modified (preserves hand edits);
   *   pointer files matching the generator pattern get overwritten so that
   *   description changes in skill-manifest.csv propagate on re-install.
   *
   * Per-file write failures are recorded and reported but do not abort the
   * rest of the install — pointer files are a non-essential adjunct to the
   * skill copy that already succeeded.
   *
   * @param {string} projectDir
   * @param {string} bmadDir
   * @param {Object} config - Installer config; reads commands_target_dir.
   * @param {Object} options - Setup options. forceCommands overwrites existing
   *   files unconditionally (including hand-modified ones).
   * @returns {Promise<Object>} { created, updated, skippedExisting, skippedCollision, skippedInvalidId, writeFailures, fallbackDescription }
   */
  async installCommandPointers(projectDir, bmadDir, config, options = {}) {
    const result = {
      created: 0,
      updated: 0,
      skippedExisting: 0,
      skippedCollision: 0,
      skippedInvalidId: 0,
      skippedFiltered: 0,
      writeFailures: 0,
      fallbackDescription: 0,
    };

    const csvPath = path.join(bmadDir, '_config', 'skill-manifest.csv');
    if (!(await fs.pathExists(csvPath))) return result;

    const commandsPath = path.join(projectDir, config.commands_target_dir);
    await fs.ensureDir(commandsPath);

    // Per-platform pointer-file shape, all overrideable in platform-codes.yaml.
    const extension = config.commands_extension || '.md';
    const template = config.commands_body_template || DEFAULT_COMMANDS_BODY_TEMPLATE;
    const targetDir = config.target_dir;
    const filter = config.commands_filter || null;

    const csvContent = await fs.readFile(csvPath, 'utf8');
    const records = csv.parse(csvContent, { columns: true, skip_empty_lines: true });

    for (const record of records) {
      const canonicalId = record.canonicalId;
      if (!canonicalId) continue;

      // Defensive basename validation. canonicalId comes from a trusted
      // manifest today, but the value flows directly into a file path —
      // reject anything that could escape commands_target_dir.
      if (!isSafeCanonicalId(canonicalId)) {
        result.skippedInvalidId++;
        continue;
      }

      // Optional per-platform filter: surfaces that should only show
      // persona agents (e.g. Copilot's Custom Agents picker) skip
      // workflow/tool skills here so the picker isn't cluttered with
      // 90+ unrelated entries.
      if (filter === 'agents-only' && !(await isAgentSkill(record, bmadDir))) {
        result.skippedFiltered++;
        continue;
      }

      // Reserved-name guard is OpenCode-specific. Other adapters that opt
      // into commands_target_dir later should declare their own reserved
      // set rather than inheriting OpenCode's.
      if (this.name === 'opencode' && RESERVED_OPENCODE_COMMANDS.has(canonicalId)) {
        result.skippedCollision++;
        continue;
      }

      let description = (record.description || '').trim();
      if (!description) {
        description = `Run the ${canonicalId} skill`;
        result.fallbackDescription++;
      }

      const body = buildCommandPointerBody(description, canonicalId, { template, targetDir });
      const commandFile = path.join(commandsPath, `${canonicalId}${extension}`);

      // If a pointer file already exists, decide whether to overwrite based
      // on whether it looks like generator output (description-only diff) or
      // a user-modified file. forceCommands overrides this protection.
      if (!options.forceCommands && (await fs.pathExists(commandFile))) {
        let existing;
        try {
          existing = await fs.readFile(commandFile, 'utf8');
        } catch {
          // Treat unreadable as user-owned and skip — safer than overwriting.
          result.skippedExisting++;
          continue;
        }

        if (existing === body) {
          // No-op idempotent re-run.
          result.skippedExisting++;
          continue;
        }
        if (looksLikeGeneratorOutput(existing, canonicalId, { template, targetDir })) {
          // Description (or other generated bit) has changed; refresh in place.
          try {
            await fs.writeFile(commandFile, body, 'utf8');
            result.updated++;
          } catch (error) {
            result.writeFailures++;
            if (!options.silent) {
              await prompts.log.warn(`Failed to update command pointer ${canonicalId}${extension}: ${error.message}`);
            }
          }
          continue;
        }
        // Hand-modified pointer — preserve it.
        result.skippedExisting++;
        continue;
      }

      try {
        await fs.writeFile(commandFile, body, 'utf8');
        result.created++;
      } catch (error) {
        result.writeFailures++;
        if (!options.silent) {
          await prompts.log.warn(`Failed to write command pointer ${canonicalId}${extension}: ${error.message}`);
        }
      }
    }

    return result;
  }

  /**
   * Install verbatim native SKILL.md directories from skill-manifest.csv.
   * Copies the entire source directory as-is into the IDE skill directory.
@@ -207,6 +478,18 @@ class ConfigDrivenIdeSetup {
    if (count > 0) {
      await prompts.log.success(`${this.name} configured: ${count} skills → ${targetDir}`);
    }
    const cmd = results.commands;
    if (cmd && (cmd.created > 0 || cmd.updated > 0) && this.installerConfig?.commands_target_dir) {
      const total = cmd.created + cmd.updated;
      const detail = cmd.updated > 0 ? `${cmd.created} new, ${cmd.updated} refreshed` : `${total}`;
      await prompts.log.success(`${this.name} commands: ${detail} → ${this.installerConfig.commands_target_dir}`);
      if (cmd.skippedCollision > 0) {
        await prompts.log.message(` (${cmd.skippedCollision} skipped — name collides with reserved slash command)`);
      }
      if (cmd.writeFailures > 0) {
        await prompts.log.warn(` (${cmd.writeFailures} pointer writes failed — see warnings above)`);
      }
    }
  }

  /**
@@ -247,6 +530,36 @@ class ConfigDrivenIdeSetup {
      await this.cleanupRovoDevPrompts(projectDir, options);
    }

    // Clean generated command pointer files in commands_target_dir.
    // Mirrors target_dir cleanup so uninstalls and skill removals don't
    // leave dangling /<canonicalId> commands pointing at missing skills.
    // Runs regardless of skipTarget — command pointers live in a per-IDE
    // directory and are not deduped across peers, so a peer-owned shared
    // skills directory does not protect this IDE's command pointers from
    // cleanup. The "currently active" set is passed so install-flow cleanup
    // (where removalSet contains skills that will be re-added moments later)
    // doesn't trample hand-edited pointers; install-flow cleanup will only
    // delete pointers for skills that are not in the new manifest.
    if (this.installerConfig?.commands_target_dir) {
      // In the install/update flow (signal: previousSkillIds was passed),
      // spare pointers whose canonicalId is still in the manifest so hand
      // edits survive a routine reinstall. In the uninstall flow (no
      // previousSkillIds — full uninstall or per-IDE removal via
      // cleanupByList), don't spare anything; the IDE itself is going away,
      // so its pointers should go with it.
      const isInstallFlow = options.previousSkillIds && options.previousSkillIds.size > 0;
      const activeSkillIds = isInstallFlow ? await this._readActiveSkillIds(resolvedBmadDir) : new Set();
      const extension = this.installerConfig.commands_extension || '.md';
      await this.cleanupCommandPointers(
        projectDir,
        this.installerConfig.commands_target_dir,
        options,
        removalSet,
        activeSkillIds,
        extension,
      );
    }

    // Skip target_dir cleanup when a peer platform owns this directory
    // (set during dedup'd install or when uninstalling one of several
    // platforms that share the same target_dir).
@@ -346,6 +659,97 @@ class ConfigDrivenIdeSetup {
    }
  }

  /**
   * Cleanup generated command pointer files for entries in removalSet.
   * Symmetric counterpart to installCommandPointers — removes
   * `<canonicalId><extension>` files whose canonicalId is in the set. Removes
   * the commands directory entirely if it ends up empty.
   * @param {string} projectDir
   * @param {string} commandsTargetDir - Relative dir (e.g. .opencode/commands)
   * @param {Object} options
   * @param {Set<string>} removalSet - canonicalIds whose pointer files to remove
   * @param {Set<string>} [activeSkillIds] - canonicalIds present in the
   *   current manifest. Pointers for IDs in this set are spared so an
   *   install-flow cleanup (where removalSet === previousSkillIds and the
   *   same skills are about to be re-installed) doesn't wipe hand-edited
   *   pointer files. Pass an empty set or omit to delete every match in
   *   removalSet (uninstall flow).
   * @param {string} [extension] - Pointer file extension (default '.md');
   *   matches the platform's commands_extension config value so cleanup
   *   correctly identifies pointer files for IDEs whose convention isn't .md
   *   (e.g. Copilot's `.agent.md`).
   */
  async cleanupCommandPointers(
    projectDir,
    commandsTargetDir,
    options = {},
    removalSet = new Set(),
    activeSkillIds = new Set(),
    extension = '.md',
  ) {
    if (!removalSet || removalSet.size === 0) return;

    const commandsPath = path.join(projectDir, commandsTargetDir);
    if (!(await fs.pathExists(commandsPath))) return;

    let entries;
    try {
      entries = await fs.readdir(commandsPath);
    } catch {
      return;
    }

    for (const entry of entries) {
      if (!entry.endsWith(extension)) continue;
      const canonicalId = entry.slice(0, -extension.length);
      if (!removalSet.has(canonicalId)) continue;
      // Spare pointers for skills that are still in the manifest; the
      // install pass will refresh them in place if their content has gone
      // stale, while preserving hand edits.
      if (activeSkillIds.has(canonicalId)) continue;
      try {
        await fs.remove(path.join(commandsPath, entry));
      } catch {
        // Skip files we can't remove.
      }
    }

    // Remove the commands directory if we emptied it.
    try {
      const remaining = await fs.readdir(commandsPath);
      if (remaining.length === 0) {
        await fs.remove(commandsPath);
      }
    } catch {
      // Directory may already be gone.
    }
  }

  /**
   * Read the canonicalIds currently present in the skill-manifest.csv.
   * Used by cleanup to distinguish "re-install of an existing skill"
   * (preserve pointer) from "skill truly being removed" (delete pointer).
   * @param {string|null} bmadDir
   * @returns {Promise<Set<string>>}
   */
  async _readActiveSkillIds(bmadDir) {
    const ids = new Set();
    if (!bmadDir) return ids;
    const csvPath = path.join(bmadDir, '_config', 'skill-manifest.csv');
    if (!(await fs.pathExists(csvPath))) return ids;
    try {
      const content = await fs.readFile(csvPath, 'utf8');
      const records = csv.parse(content, { columns: true, skip_empty_lines: true });
      for (const record of records) {
        if (record.canonicalId) ids.add(record.canonicalId);
      }
    } catch {
      // Manifest unreadable — return an empty set so cleanup falls back to
      // the conservative "delete what removalSet says" behavior.
    }
    return ids;
  }

  /**
   * Cleanup a specific target directory.
   * When removalSet is provided, only removes entries in that set.
@@ -31,7 +31,50 @@ function clearCache() {
  _cachedPlatformCodes = null;
}

/**
 * Format the installable platform list for human-readable output (used by --list-tools).
 * Sourced from IdeManager so this view matches what --tools accepts at install time
 * (suspended platforms excluded).
 * @returns {Promise<string>} Formatted multi-line string with id, name, target_dir, preferred flag.
 */
async function formatPlatformList() {
  const { IdeManager } = require('./manager');
  const ideManager = new IdeManager();
  await ideManager.ensureInitialized();

  const entries = ideManager.getAvailableIdes().map((ide) => {
    const handler = ideManager.handlers.get(ide.value);
    return {
      id: ide.value,
      name: ide.name,
      targetDir: handler?.installerConfig?.target_dir || '',
      preferred: ide.preferred,
    };
  });

  const idWidth = Math.max(...entries.map((e) => e.id.length), 'ID'.length);
  const nameWidth = Math.max(...entries.map((e) => e.name.length), 'Name'.length);

  const pad = (s, w) => s + ' '.repeat(Math.max(0, w - s.length));
  const lines = [
    `Supported tool IDs (pass via --tools <id>[,<id>...]):`,
    '',
    ` ${pad('ID', idWidth)} ${pad('Name', nameWidth)} Target dir`,
    ` ${pad('-'.repeat(idWidth), idWidth)} ${pad('-'.repeat(nameWidth), nameWidth)} ${'-'.repeat(10)}`,
  ];

  for (const e of entries) {
    const star = e.preferred ? ' *' : ' ';
    lines.push(`${star}${pad(e.id, idWidth)} ${pad(e.name, nameWidth)} ${e.targetDir}`);
  }

  lines.push('', '* = recommended / preferred', '', 'Example: bmad-method install --modules bmm --tools claude-code');

  return lines.join('\n');
}

module.exports = {
  loadPlatformCodes,
  clearCache,
  formatPlatformList,
};
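The `--list-tools` view above aligns its columns with a small manual padding helper rather than a table library. The same pattern in isolation (the sample platform entries here are illustrative, not the real registry):

```javascript
// Right-pad a string to a fixed width with spaces.
const pad = (s, w) => s + ' '.repeat(Math.max(0, w - s.length));

// Width of each column is the longest cell in it (header included),
// so every row's target-dir column starts at the same offset.
function renderTable(entries) {
  const idWidth = Math.max(...entries.map((e) => e.id.length), 'ID'.length);
  const nameWidth = Math.max(...entries.map((e) => e.name.length), 'Name'.length);
  const lines = [`  ${pad('ID', idWidth)}  ${pad('Name', nameWidth)}  Target dir`];
  for (const e of entries) {
    const star = e.preferred ? '* ' : '  '; // same width either way
    lines.push(`${star}${pad(e.id, idWidth)}  ${pad(e.name, nameWidth)}  ${e.targetDir}`);
  }
  return lines.join('\n');
}

console.log(
  renderTable([
    { id: 'claude-code', name: 'Claude Code', targetDir: '.claude/skills', preferred: true },
    { id: 'opencode', name: 'OpenCode', targetDir: '.agents/skills', preferred: false },
  ]),
);
```

Computing widths from the data first, then padding, is what keeps the function correct for arbitrarily long platform ids without hard-coded column widths.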
@@ -132,6 +132,21 @@ platforms:
    installer:
      target_dir: .agents/skills
      global_target_dir: ~/.agents/skills
      commands_target_dir: .github/agents
      commands_extension: .agent.md
      commands_body_template: "LOAD the FULL {project-root}/{target_dir}/{canonicalId}/SKILL.md, READ its entire contents and follow its directions exactly!"
      # The Custom Agents picker should only show persona agents (not
      # workflows/tools). Detected by reading each skill's source
      # `customize.toml` and checking for an `[agent]` section — that's
      # the actual configuration source of truth: every BMAD persona is
      # configured under `[agent]`, every workflow under `[workflow]`,
      # every standalone skill has no customize.toml. This signal is
      # naming-independent, so personas like `bmad-tea` (which doesn't
      # follow the `-agent-` convention) are still included, and
      # meta-skills like `bmad-agent-builder` (which contains `-agent-`
      # but is a skill-builder workflow, not a persona) are correctly
      # excluded.
      commands_filter: agents-only

  goose:
    name: "Block Goose"
@@ -222,6 +237,7 @@ platforms:
    installer:
      target_dir: .agents/skills
      global_target_dir: ~/.agents/skills
      commands_target_dir: .opencode/commands

  openhands:
    name: "OpenHands"
@@ -0,0 +1,210 @@
const path = require('node:path');
const fs = require('./fs-native');
const yaml = require('yaml');
const { getProjectRoot, getModulePath, getExternalModuleCachePath } = require('./project-root');

/**
 * Read a module.yaml and return its declared `code:` field, or null if missing/unparseable.
 */
async function readModuleCode(yamlPath) {
  try {
    const parsed = yaml.parse(await fs.readFile(yamlPath, 'utf8'));
    if (parsed && typeof parsed === 'object' && typeof parsed.code === 'string') {
      return parsed.code;
    }
  } catch {
    // fall through
  }
  return null;
}

/**
 * Discover module.yaml files for officials we can read locally:
 * - core, bmm: bundled in src/ (always present)
 * - external officials: only if previously cloned to ~/.bmad/cache/external-modules/
 *
 * Each result's `code` is the `code:` field from the module.yaml when present;
 * that's the value `--set <module>.<key>=<value>` matches against.
 *
 * Community/custom modules are not enumerated; users reference their own
 * module.yaml directly per the design (see issue #1663).
 *
 * @returns {Promise<Array<{code: string, yamlPath: string, source: string}>>}
 */
async function discoverOfficialModuleYamls() {
  const found = [];
  // Dedupe is case-insensitive because module caches occasionally retain a
  // legacy UPPERCASE-named directory alongside the canonical lowercase one
  // (same module, different cache key from an older schema). We pick whichever
  // entry we see first and skip the alternate-case duplicate. NOTE: `--set`
  // matching itself is case-sensitive (it keys on `moduleName` from the install
  // flow's selected list, which is always lowercase short codes), so the
  // surfaced `code` here is what users should type. Don't change to
  // case-sensitive dedupe without revisiting that contract.
  const seenCodes = new Set();

  const addFound = async (yamlPath, source, fallbackCode) => {
    const declaredCode = await readModuleCode(yamlPath);
    const code = declaredCode || fallbackCode;
    if (!code) return;
    const lower = code.toLowerCase();
    if (seenCodes.has(lower)) return;
    seenCodes.add(lower);
    found.push({ code, yamlPath, source });
  };

  // Built-ins.
  for (const code of ['core', 'bmm']) {
    const yamlPath = path.join(getModulePath(code), 'module.yaml');
    if (await fs.pathExists(yamlPath)) {
      // Built-ins use their well-known short codes regardless of what the
      // module.yaml `code:` says, since the install flow keys on these.
      seenCodes.add(code.toLowerCase());
      found.push({ code, yamlPath, source: 'built-in' });
    }
  }

  // Bundled in src/modules/<code>/module.yaml (rare, but supported by getModulePath).
  const srcModulesDir = path.join(getProjectRoot(), 'src', 'modules');
  if (await fs.pathExists(srcModulesDir)) {
    const entries = await fs.readdir(srcModulesDir, { withFileTypes: true });
    for (const entry of entries) {
      if (!entry.isDirectory()) continue;
      const yamlPath = path.join(srcModulesDir, entry.name, 'module.yaml');
      if (await fs.pathExists(yamlPath)) {
        await addFound(yamlPath, 'bundled', entry.name);
      }
    }
  }

  // External cache (~/.bmad/cache/external-modules/<code>/...).
  const cacheRoot = getExternalModuleCachePath('').replace(/\/$/, '');
  if (await fs.pathExists(cacheRoot)) {
    const rawEntries = await fs.readdir(cacheRoot, { withFileTypes: true });
    for (const entry of rawEntries) {
      if (!entry.isDirectory()) continue;
      const candidates = [
        path.join(cacheRoot, entry.name, 'module.yaml'),
        path.join(cacheRoot, entry.name, 'src', 'module.yaml'),
        path.join(cacheRoot, entry.name, 'skills', 'module.yaml'),
      ];
      for (const candidate of candidates) {
        if (await fs.pathExists(candidate)) {
          await addFound(candidate, 'cached', entry.name);
          break;
        }
      }
    }
  }

  return found;
}

function formatPromptText(item) {
  if (Array.isArray(item.prompt)) return item.prompt.join(' ');
  return String(item.prompt || '').trim();
}

function inferType(item) {
  if (item['single-select']) return 'single-select';
  if (item['multi-select']) return 'multi-select';
  if (typeof item.default === 'boolean') return 'boolean';
  if (typeof item.default === 'number') return 'number';
  return 'string';
}

function formatModuleOptions(code, parsed, source) {
  const lines = [];
  const header = source === 'built-in' ? code : `${code} (${source})`;
  lines.push(header + ':');

  let count = 0;
  for (const [key, item] of Object.entries(parsed)) {
    if (!item || typeof item !== 'object' || !('prompt' in item)) continue;
    count++;
    const type = inferType(item);
    const scope = item.scope === 'user' ? ' [user-scope]' : '';
    const defaultStr = item.default === undefined || item.default === null ? '(none)' : String(item.default);
    lines.push(` ${code}.${key} (${type}${scope}) default: ${defaultStr}`);
    const promptText = formatPromptText(item);
    if (promptText) lines.push(` ${promptText}`);
    if (Array.isArray(item['single-select'])) {
      const values = item['single-select'].map((v) => (typeof v === 'object' ? v.value : v)).filter((v) => v !== undefined);
      if (values.length > 0) lines.push(` values: ${values.join(' | ')}`);
    }
    lines.push('');
  }

  if (count === 0) {
    lines.push(' (no configurable options)', '');
  }
  return lines.join('\n');
}

/**
 * Render `--list-options` output.
 *
 * Returns `{ text, ok }` so callers can surface a non-zero exit code on
 * a typo'd module-code lookup. Discovery dedupes case-insensitively, so
 * the lookup is also case-insensitive — typing `--list-options BMM` and
 * `--list-options bmm` both find the bmm built-in.
 *
 * @param {string|null} moduleCode - if non-null, restrict to this module
 * @returns {Promise<{text: string, ok: boolean}>}
 */
async function formatOptionsList(moduleCode) {
  const discovered = await discoverOfficialModuleYamls();
  const needle = moduleCode ? moduleCode.toLowerCase() : null;
  const filtered = needle ? discovered.filter((d) => d.code.toLowerCase() === needle) : discovered;

  if (filtered.length === 0) {
    if (moduleCode) {
      const text = [
        `No locally-known module.yaml for '${moduleCode}'.`,
        '',
        'Built-in modules (core, bmm) are always available. External officials',
        'appear here after they have been installed at least once on this machine',
        '(they are cached under ~/.bmad/cache/external-modules/).',
        '',
        'For community or custom modules, read the module.yaml file in that',
        "module's source repository directly.",
      ].join('\n');
      return { text, ok: false };
    }
    return { text: 'No modules found.', ok: false };
  }

  const sections = [];
  // Track when a module-scoped lookup couldn't actually be rendered (yaml
  // unparseable or empty after parse). The full `--list-options` output is
  // tolerant of one bad entry, but `--list-options <module>` against a single
  // unreadable module should still fail tooling so a CI script catches it.
  let moduleScopedFailure = false;
  sections.push('Available --set keys', 'Format: --set <module>.<key>=<value> (repeatable)', '');
  for (const { code, yamlPath, source } of filtered) {
    let parsed;
    try {
      parsed = yaml.parse(await fs.readFile(yamlPath, 'utf8'));
    } catch {
      sections.push(`${code} (${source}): could not parse module.yaml`, '');
      if (moduleCode) moduleScopedFailure = true;
      continue;
    }
    if (!parsed || typeof parsed !== 'object' || Array.isArray(parsed)) {
      sections.push(`${code} (${source}): module.yaml is not a valid object (got ${Array.isArray(parsed) ? 'array' : typeof parsed})`, '');
      if (moduleCode) moduleScopedFailure = true;
      continue;
    }
    sections.push(formatModuleOptions(code, parsed, source));
  }

  if (!moduleCode) {
    sections.push(
      'Community and custom modules are not listed here — read their module.yaml directly. Unknown keys still persist with a warning.',
    );
  }

  return { text: sections.join('\n'), ok: !moduleScopedFailure };
}

module.exports = { formatOptionsList, discoverOfficialModuleYamls };
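The `inferType` rules above classify each option from its module.yaml shape. A natural companion step, not shown in this patch, is coercing a `--set` string value once the type is known. The function below is an illustrative sketch under that assumption (`coerceSetValue` is a hypothetical name, not part of the installer):

```javascript
// Sketch: coerce a raw --set <module>.<key>=<value> string once inferType
// has classified the key. Rules here are assumptions for illustration.
function coerceSetValue(type, raw) {
  if (type === 'boolean') return raw === 'true';
  if (type === 'number') return Number(raw);
  // string and select values pass through unchanged
  return raw;
}
```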
@@ -29,6 +29,11 @@ class CommunityModuleManager {
  // Shared across all instances; the manifest writer often uses a fresh instance.
  static _resolutions = new Map();

  // moduleCode → ResolvedModule (from PluginResolver) when the cloned repo ships
  // a `.claude-plugin/marketplace.json`. Lets community installs reuse the same
  // skill-level install pipeline as custom-source installs (installFromResolution).
  static _pluginResolutions = new Map();

  constructor() {
    this._client = new RegistryClient();
    this._cachedIndex = null;

@@ -40,6 +45,11 @@ class CommunityModuleManager {
    return CommunityModuleManager._resolutions.get(moduleCode) || null;
  }

  /** Get the marketplace.json-derived plugin resolution for a community module, if any. */
  getPluginResolution(moduleCode) {
    return CommunityModuleManager._pluginResolutions.get(moduleCode) || null;
  }

  // ─── Data Loading ──────────────────────────────────────────────────────────

  /**

@@ -371,6 +381,18 @@ class CommunityModuleManager {
      planSource: planEntry.source,
    });

    // If the repo ships a marketplace.json, route through PluginResolver so the
    // skill-level install pipeline (installFromResolution) handles the copy.
    // Repos without marketplace.json fall through to the legacy findModuleSource
    // path unchanged.
    await this._tryResolveMarketplacePlugin(moduleCacheDir, moduleInfo, {
      channel: planEntry.channel,
      version: recordedVersion,
      sha: installedSha,
      approvedTag,
      approvedSha,
    });

    // Install dependencies if needed
    const packageJsonPath = path.join(moduleCacheDir, 'package.json');
    if ((needsDependencyInstall || wasNewClone) && (await fs.pathExists(packageJsonPath))) {

@@ -392,6 +414,204 @@ class CommunityModuleManager {
    return moduleCacheDir;
  }

  // ─── Marketplace.json Resolution ──────────────────────────────────────────

  /**
   * Detect `.claude-plugin/marketplace.json` in a cloned community repo and
   * route through PluginResolver. When successful, caches the resolution so
   * OfficialModulesManager.install() can route the copy through
   * installFromResolution() — the same path used by custom-source installs.
   *
   * Silent no-op when marketplace.json is absent or the resolver returns no
   * matches; the legacy findModuleSource path then handles the install.
   *
   * @param {string} repoPath - Absolute path to the cloned repo
   * @param {Object} moduleInfo - Normalized community module info
   * @param {Object} resolution - Resolution metadata from cloneModule
   * @param {string} resolution.channel - Channel ('stable' | 'next' | 'pinned')
   * @param {string} resolution.version - Recorded version string
   * @param {string} resolution.sha - Resolved git SHA
   * @param {string|null} resolution.approvedTag - Registry approved tag
   * @param {string|null} resolution.approvedSha - Registry approved SHA
   */
  async _tryResolveMarketplacePlugin(repoPath, moduleInfo, resolution) {
    const marketplacePath = path.join(repoPath, '.claude-plugin', 'marketplace.json');
    if (!(await fs.pathExists(marketplacePath))) return;

    let marketplaceData;
    try {
      marketplaceData = JSON.parse(await fs.readFile(marketplacePath, 'utf8'));
    } catch {
      // Malformed marketplace.json — fall through to legacy path.
      return;
    }

    const plugins = Array.isArray(marketplaceData?.plugins) ? marketplaceData.plugins : [];
    if (plugins.length === 0) return;

    const selection = this._selectPluginForModule(plugins, moduleInfo);
    if (!selection) {
      await this._safeWarn(
        `Community module '${moduleInfo.code}' ships marketplace.json but no plugin entry matches the registry code. ` +
          `Falling back to legacy install path.`,
      );
      return;
    }

    if (selection.source === 'single-fallback') {
      // Single-entry marketplace.json whose plugin name doesn't match the registry
      // code or the module_definition hint. Most likely correct, but worth surfacing
      // in case marketplace.json is misconfigured and we'd install the wrong plugin.
      await this._safeWarn(
        `Community module '${moduleInfo.code}' picked the only plugin in marketplace.json ('${selection.plugin?.name}') ` +
          `because no name or module_definition match was found. Verify marketplace.json if the install looks wrong.`,
      );
    }

    const { PluginResolver } = require('./plugin-resolver');
    const resolver = new PluginResolver();
    let resolved;
    try {
      resolved = await resolver.resolve(repoPath, selection.plugin);
    } catch (error) {
      // PluginResolver threw (malformed plugin entry, missing files, etc.).
      // Honor the silent-fallthrough contract — warn and let the legacy
      // findModuleSource path handle the install.
      await this._safeWarn(
        `PluginResolver failed for community module '${moduleInfo.code}': ${error.message}. ` + `Falling back to legacy install path.`,
      );
      return;
    }
    if (!resolved || resolved.length === 0) return;

    // The registry registers a single code per module. If the resolver returns
    // multiple modules (Strategy 4: multiple standalone skills), accept only
    // the entry whose code matches the registry. Other entries are ignored —
    // they belong to plugins not registered in the community catalog.
    const matched = resolved.find((mod) => mod.code === moduleInfo.code) || (resolved.length === 1 ? resolved[0] : null);
    if (!matched) return;

    // Shallow-clone before stamping provenance — the resolver may cache or reuse
    // its return objects, and we don't want install-specific fields leaking back.
    const stamped = {
      ...matched,
      code: moduleInfo.code,
      repoUrl: moduleInfo.url,
      cloneRef: resolution.channel === 'pinned' ? resolution.version : resolution.approvedTag || null,
      cloneSha: resolution.sha,
      communitySource: true,
      communityChannel: resolution.channel,
      communityVersion: resolution.version,
      registryApprovedTag: resolution.approvedTag,
      registryApprovedSha: resolution.approvedSha,
    };

    CommunityModuleManager._pluginResolutions.set(moduleInfo.code, stamped);
  }

  /**
   * Lazy fallback: resolve marketplace.json straight from the on-disk cache
   * when `_pluginResolutions` is empty (e.g. callers that reach `install()`
   * without `cloneModule` having populated the cache earlier in this process).
   *
   * Reuses an existing channel resolution if present; otherwise synthesizes a
   * minimal stable-channel stub from the registry entry + the cached repo's
   * current HEAD. Returns the cached plugin resolution if one is produced,
   * otherwise null (caller falls back to the legacy path).
   *
   * @param {string} moduleCode
   * @returns {Promise<Object|null>}
   */
  async resolveFromCache(moduleCode) {
    const existing = this.getPluginResolution(moduleCode);
    if (existing) return existing;

    const cacheRepoDir = path.join(this.getCacheDir(), moduleCode);
    const marketplacePath = path.join(cacheRepoDir, '.claude-plugin', 'marketplace.json');
    if (!(await fs.pathExists(marketplacePath))) return null;

    let moduleInfo;
    try {
      moduleInfo = await this.getModuleByCode(moduleCode);
    } catch {
      return null;
    }
    if (!moduleInfo) return null;

    let channelResolution = this.getResolution(moduleCode);
    if (!channelResolution) {
      let sha = '';
      try {
        sha = execSync('git rev-parse HEAD', { cwd: cacheRepoDir, stdio: 'pipe' }).toString().trim();
      } catch {
        // Not a git repo or unreadable — give up and let the legacy path run.
        return null;
      }
      channelResolution = {
        channel: 'stable',
        version: moduleInfo.approvedTag || sha.slice(0, 7),
        sha,
        registryApprovedTag: moduleInfo.approvedTag || null,
        registryApprovedSha: moduleInfo.approvedSha || null,
      };
    }

    await this._tryResolveMarketplacePlugin(cacheRepoDir, moduleInfo, {
      channel: channelResolution.channel,
      version: channelResolution.version,
      sha: channelResolution.sha,
      approvedTag: channelResolution.registryApprovedTag,
      approvedSha: channelResolution.registryApprovedSha,
    });

    return this.getPluginResolution(moduleCode);
  }

  /**
   * Best-effort warning emitter. `prompts.log.warn` may be undefined in some
   * harnesses and may return a rejected promise — swallow both cases so a
   * fallthrough warning can never crash the install.
   */
  async _safeWarn(message) {
    try {
      const result = prompts.log?.warn?.(message);
      if (result && typeof result.then === 'function') await result;
    } catch {
      /* ignore */
    }
  }

  /**
   * Pick which plugin entry from marketplace.json represents this community module.
   * Precedence:
   *   1. Exact match on `plugin.name === moduleInfo.code`
   *   2. Trailing directory of `module_definition` matches `plugin.name`
   *   3. Single plugin in marketplace.json — accepted with a warning so a
   *      mismatched-but-uniquely-named plugin doesn't install silently.
   * Otherwise null (caller falls back to legacy path).
   *
   * @returns {{plugin: Object, source: 'name'|'hint'|'single-fallback'}|null}
   */
  _selectPluginForModule(plugins, moduleInfo) {
    const byCode = plugins.find((p) => p && p.name === moduleInfo.code);
    if (byCode) return { plugin: byCode, source: 'name' };

    if (moduleInfo.moduleDefinition) {
      // module_definition like "src/skills/suno-setup/assets/module.yaml" →
      // hint segment "suno-setup". Match that against plugin names.
      const segments = moduleInfo.moduleDefinition.split('/').filter(Boolean);
      const setupIdx = segments.findIndex((s) => s.endsWith('-setup'));
      if (setupIdx !== -1) {
        const hint = segments[setupIdx];
        const byHint = plugins.find((p) => p && p.name === hint);
        if (byHint) return { plugin: byHint, source: 'hint' };
      }
    }

    if (plugins.length === 1) return { plugin: plugins[0], source: 'single-fallback' };
    return null;
  }

  // ─── Source Finding ───────────────────────────────────────────────────────

  /**
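The selection precedence in `_selectPluginForModule` (exact name match, then the `-setup` hint from `module_definition`, then a warned single-entry fallback) can be exercised standalone. The sketch below strips it to plain data so it runs outside the installer; the simplified `selectPlugin` signature is an assumption for illustration:

```javascript
// Standalone replica of the marketplace.json plugin-selection precedence:
// 1) plugin.name === registry code, 2) a '-setup' segment of the
// module_definition path matches a plugin name, 3) single-entry fallback.
function selectPlugin(plugins, code, moduleDefinition) {
  const byCode = plugins.find((p) => p && p.name === code);
  if (byCode) return { plugin: byCode, source: 'name' };

  if (moduleDefinition) {
    const segments = moduleDefinition.split('/').filter(Boolean);
    const hint = segments.find((s) => s.endsWith('-setup'));
    const byHint = hint && plugins.find((p) => p && p.name === hint);
    if (byHint) return { plugin: byHint, source: 'hint' };
  }

  if (plugins.length === 1) return { plugin: plugins[0], source: 'single-fallback' };
  return null;
}
```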
@@ -24,8 +24,9 @@ class CustomModuleManager {

/**
* Parse a user-provided source input into a structured descriptor.
* Accepts local file paths, HTTPS Git URLs, and SSH Git URLs.
* For HTTPS URLs with deep paths (e.g., /tree/main/subdir), extracts the subdir.
* Accepts local file paths, HTTPS Git URLs, HTTP Git URLs, and SSH Git URLs.
* For HTTPS/HTTP URLs with deep paths (e.g., /tree/main/subdir), extracts the subdir.
* The original protocol (http or https) is preserved in the returned cloneUrl.
*
* @param {string} input - URL or local file path
* @returns {Object} Parsed source descriptor:

@@ -127,43 +128,86 @@ class CustomModuleManager {
};
}

// HTTPS URL: https://host/owner/repo[/tree/branch/subdir][.git]
const httpsMatch = trimmed.match(/^https?:\/\/([^/]+)\/([^/]+)\/([^/.]+?)(?:\.git)?(\/.*)?$/);
if (httpsMatch) {
const [, host, owner, repo, remainder] = httpsMatch;
const cloneUrl = `https://${host}/${owner}/${repo}`;
// HTTPS/HTTP URL: generic handling for any Git host.
// We avoid host-specific parsing — `git clone` will accept whatever URL the
// user provides. We only need to (a) separate an optional browser-style
// subdir suffix from the clone URL, (b) extract any embedded ref
// (branch/tag) from deep-path URLs, and (c) derive a cache key / display
// name from the path. The original protocol (http or https) is preserved.
if (/^https?:\/\//i.test(trimmed)) {
let url;
try {
url = new URL(trimmed);
} catch {
url = null;
}

if (url && url.host) {
const host = url.host;
let repoPath = url.pathname.replace(/^\/+/, '').replace(/\/+$/, '');
let subdir = null;
let urlRef = null; // branch/tag extracted from /tree/<ref>/subdir
let urlRef = null; // branch/tag/commit extracted from deep-path URLs

if (remainder) {
// Extract subdir from deep path patterns used by various Git hosts
// Detect browser-style deep-path patterns that embed a ref
// (branch/tag/commit) and optional subdirectory. These appear
// across many hosts:
// GitHub /<repo>/tree|blob/<ref>[/<subdir>]
// GitLab /<repo>/-/tree|blob/<ref>[/<subdir>]
// Gitea /<repo>/src/<ref>[/<subdir>]
// Gitea /<repo>/src/(branch|commit|tag)/<ref>[/<subdir>]
// Group 1 = repo path prefix, Group 2 = ref, Group 3 = subdir (optional).
const deepPathPatterns = [
{ regex: /^\/(?:-\/)?tree\/([^/]+)\/(.+)$/, refIdx: 1, pathIdx: 2 }, // GitHub, GitLab
{ regex: /^\/(?:-\/)?blob\/([^/]+)\/(.+)$/, refIdx: 1, pathIdx: 2 },
{ regex: /^\/src\/([^/]+)\/(.+)$/, refIdx: 1, pathIdx: 2 }, // Gitea/Forgejo
/^(.+?)\/(?:-\/)?(?:tree|blob)\/([^/]+)(?:\/(.+))?$/,
/^(.+?)\/src\/(?:branch\/|commit\/|tag\/)?([^/]+)(?:\/(.+))?$/,
];
// Also match `/tree/<ref>` with no subdir
const refOnlyPatterns = [/^\/(?:-\/)?tree\/([^/]+?)\/?$/, /^\/(?:-\/)?blob\/([^/]+?)\/?$/, /^\/src\/([^/]+?)\/?$/];
for (const pattern of deepPathPatterns) {
const match = repoPath.match(pattern);
if (match) {
repoPath = match[1];
if (match[2]) urlRef = match[2];
if (match[3]) {
const cleaned = match[3].replace(/\/+$/, '');
if (cleaned) subdir = cleaned;
}
break;
}
}

for (const p of deepPathPatterns) {
const match = remainder.match(p.regex);
if (match) {
urlRef = match[p.refIdx];
subdir = match[p.pathIdx].replace(/\/$/, '');
break;
}
}
// Some hosts use ?path=/subdir on browse links to point at a file or
// directory. Honor it when no deep-path marker matched above.
if (!subdir) {
for (const r of refOnlyPatterns) {
const match = remainder.match(r);
if (match) {
urlRef = match[1];
break;
}
const pathParam = url.searchParams.get('path');
if (pathParam) {
const cleaned = pathParam.replace(/^\/+/, '').replace(/\/+$/, '');
if (cleaned) subdir = cleaned;
}
}

// Strip a single trailing .git for a stable cacheKey/displayName.
const repoPathClean = repoPath.replace(/\.git$/i, '');
if (!repoPathClean) {
return {
type: null,
cloneUrl: null,
subdir: null,
localPath: null,
cacheKey: null,
displayName: null,
isValid: false,
error: 'Not a valid Git URL or local path',
};
}

const cloneUrl = `${url.protocol}//${host}/${repoPathClean}`;
const cacheKey = `${host}/${repoPathClean}`;

// Display name: prefer "<owner>/<repo>" using the last two meaningful
// path segments.
const segments = repoPathClean.split('/').filter(Boolean);
const repoSeg = segments.at(-1);
const ownerSeg = segments.at(-2);
const displayName = ownerSeg ? `${ownerSeg}/${repoSeg}` : repoSeg;

// Precedence: explicit @version suffix > URL /tree/<ref> path segment.
const version = versionSuffix || urlRef || null;

@@ -174,12 +218,13 @@ class CustomModuleManager {
localPath: null,
version,
rawInput: trimmedRaw,
cacheKey: `${host}/${owner}/${repo}`,
displayName: `${owner}/${repo}`,
cacheKey,
displayName,
isValid: true,
error: null,
};
}
}

return {
type: null,

@@ -311,7 +356,7 @@ class CustomModuleManager {
/**
* Clone a custom module repository to cache.
* Supports any Git host (GitHub, GitLab, Bitbucket, self-hosted, etc.).
* @param {string} sourceInput - Git URL (HTTPS or SSH)
* @param {string} sourceInput - Git URL (HTTPS, HTTP, or SSH)
* @param {Object} [options] - Clone options
* @param {boolean} [options.silent] - Suppress spinner output
* @param {boolean} [options.skipInstall] - Skip npm install (for browsing before user confirms)
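The generic deep-path handling above replaces host-specific regexes with patterns over the whole repo path. The sketch below isolates one of the new patterns (the GitHub/GitLab `tree|blob` form, copied from the diff) in a hypothetical helper to show how a browser URL path splits into repo path, ref, and subdir:

```javascript
// One of the new deep-path patterns from the diff: lazy group 1 captures the
// repo path prefix, group 2 the ref, optional group 3 the subdirectory.
const TREE_BLOB = /^(.+?)\/(?:-\/)?(?:tree|blob)\/([^/]+)(?:\/(.+))?$/;

// Hypothetical helper for illustration (not a function in the patch).
function splitDeepPath(repoPath) {
  const m = repoPath.match(TREE_BLOB);
  if (!m) return { repoPath, ref: null, subdir: null };
  return { repoPath: m[1], ref: m[2], subdir: m[3] ? m[3].replace(/\/+$/, '') : null };
}
```

Because the pattern anchors on the `tree`/`blob` marker rather than assuming a two-segment `owner/repo` prefix, GitLab group/subgroup paths split correctly too.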
@@ -0,0 +1,13 @@
/**
 * Canonical schema for per-module `module-help.csv` files.
 *
 * Both the merger (`Installer.mergeModuleHelpCatalogs`) and the synthesizer
 * (`PluginResolver._buildSynthesizedHelpCsv`) emit this exact header. The
 * merger compares each per-module file's header against this string and
 * warns on drift, so any rename here must be matched in external module
 * authors' CSVs (or accepted as a positional fall-through with a warning).
 */
const MODULE_HELP_CSV_HEADER =
  'module,skill,display-name,menu-code,description,action,args,phase,preceded-by,followed-by,required,output-location,outputs';

module.exports = { MODULE_HELP_CSV_HEADER };
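The drift check the comment describes can be sketched in a few lines: compare a per-module CSV's first line against the canonical header and flag any mismatch. The `headerMatches` helper below is an illustration, not the merger's actual code:

```javascript
// Canonical header, copied from module-help-schema above.
const MODULE_HELP_CSV_HEADER =
  'module,skill,display-name,menu-code,description,action,args,phase,preceded-by,followed-by,required,output-location,outputs';

// Sketch of the merger's drift check: a mismatched header is a warning,
// after which columns are consumed positionally.
function headerMatches(csvText) {
  const firstLine = csvText.split('\n', 1)[0].trim();
  return firstLine === MODULE_HELP_CSV_HEADER;
}
```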
@@ -269,6 +269,21 @@ class OfficialModules {
return this.installFromResolution(resolved, bmadDir, fileTrackingCallback, options);
}

// Community modules whose cloned repo ships marketplace.json get the same
// skill-level install treatment as custom-source installs. If the in-process
// cache wasn't populated (e.g. caller skipped the pre-clone phase), fall
// back to resolving directly from `~/.bmad/cache/community-modules/<name>/`
// so we don't silently regress to the legacy half-install path.
const { CommunityModuleManager } = require('./community-manager');
const communityMgr = new CommunityModuleManager();
let communityResolved = communityMgr.getPluginResolution(moduleName);
if (!communityResolved) {
communityResolved = await communityMgr.resolveFromCache(moduleName);
}
if (communityResolved) {
return this.installFromResolution(communityResolved, bmadDir, fileTrackingCallback, options);
}

const sourcePath = await this.findModuleSource(moduleName, {
silent: options.silent,
channelOptions: options.channelOptions,

@@ -360,21 +375,27 @@ class OfficialModules {
await this.createModuleDirectories(resolved.code, bmadDir, options);
}

// Update manifest. For custom modules, derive channel from the git ref:
// cloneRef present → pinned at that ref
// cloneRef absent → next (main HEAD)
// local path → no channel concept
// Update manifest. For community installs we honor the channel resolved by
// CommunityModuleManager (stable/next/pinned) and propagate the registry's
// approved tag/sha. For custom-source installs we derive channel from the
// cloneRef (present → pinned, absent → next; local paths have no channel).
const { Manifest } = require('../core/manifest');
const manifestObj = new Manifest();

const hasGitClone = !!resolved.repoUrl;
const isCommunity = resolved.communitySource === true;
const manifestEntry = {
version: resolved.cloneRef || (hasGitClone ? 'main' : resolved.version || null),
source: 'custom',
version: resolved.communityVersion || resolved.cloneRef || (hasGitClone ? 'main' : resolved.version || null),
source: isCommunity ? 'community' : 'custom',
npmPackage: null,
repoUrl: resolved.repoUrl || null,
};
if (hasGitClone) {
if (isCommunity) {
if (resolved.communityChannel) manifestEntry.channel = resolved.communityChannel;
if (resolved.cloneSha) manifestEntry.sha = resolved.cloneSha;
if (resolved.registryApprovedTag) manifestEntry.registryApprovedTag = resolved.registryApprovedTag;
if (resolved.registryApprovedSha) manifestEntry.registryApprovedSha = resolved.registryApprovedSha;
} else if (hasGitClone) {
manifestEntry.channel = resolved.cloneRef ? 'pinned' : 'next';
if (resolved.cloneSha) manifestEntry.sha = resolved.cloneSha;
if (resolved.rawInput) manifestEntry.rawSource = resolved.rawInput;

@@ -386,10 +407,13 @@ class OfficialModules {
success: true,
module: resolved.code,
path: targetPath,
// Match the manifestEntry.version expression above so downstream summary
// lines show the cloned ref (tag or 'main') instead of the on-disk
// package.json version for git-backed custom installs.
versionInfo: { version: resolved.cloneRef || (hasGitClone ? 'main' : resolved.version || '') },
// Mirror the manifestEntry.version precedence above so downstream summary
// lines show the same string we just wrote to disk (community installs
// use the registry-approved tag via `communityVersion`; custom git-backed
// installs show the cloned ref or 'main').
versionInfo: {
version: resolved.communityVersion || resolved.cloneRef || (hasGitClone ? 'main' : resolved.version || ''),
},
};
}

@@ -879,7 +903,10 @@ class OfficialModules {
try {
const content = await fs.readFile(moduleConfigPath, 'utf8');
const moduleConfig = yaml.parse(content);
if (moduleConfig) {
// Only keep plain object parses. A corrupt config.yaml that parses
// to a scalar or array would crash later code that does `key in cfg`
// / `Object.keys(cfg)`; treat it the same as a parse error.
if (moduleConfig && typeof moduleConfig === 'object' && !Array.isArray(moduleConfig)) {
this._existingConfig[entry.name] = moduleConfig;
foundAny = true;
}

@@ -890,9 +917,58 @@ class OfficialModules {
}
}

if (foundAny) {
await this._hoistCoreKeysFromLegacyModuleConfigs();
}

return foundAny;
}

/**
* Migrate prior answers when a key has moved from a non-core module to core
* (e.g. project_name moving from bmm to core in #2279). Without this, the
* partition logic in writeCentralConfig drops the value from the bmm bucket
* (because it's now a core key) without re-homing it under [core], so the
* user's prior answer silently disappears on the next install/quick-update.
*/
async _hoistCoreKeysFromLegacyModuleConfigs() {
const coreSchemaPath = path.join(getSourcePath(), 'core-skills', 'module.yaml');
if (!(await fs.pathExists(coreSchemaPath))) return;

let coreSchema;
try {
coreSchema = yaml.parse(await fs.readFile(coreSchemaPath, 'utf8'));
} catch {
return;
}
if (!coreSchema || typeof coreSchema !== 'object') return;

const coreKeys = new Set(
Object.entries(coreSchema)
.filter(([, v]) => v && typeof v === 'object' && 'prompt' in v)
.map(([k]) => k),
);
if (coreKeys.size === 0) return;

// Belt-and-suspenders: loadExistingConfig already filters non-object parses,
// but anyone calling _hoistCoreKeysFromLegacyModuleConfigs in isolation (or
// future code paths populating _existingConfig directly) shouldn't be able
// to crash this with a scalar / array.
const existingCore = this._existingConfig.core;
this._existingConfig.core = existingCore && typeof existingCore === 'object' && !Array.isArray(existingCore) ? existingCore : {};

for (const [moduleName, cfg] of Object.entries(this._existingConfig)) {
if (moduleName === 'core' || !cfg || typeof cfg !== 'object' || Array.isArray(cfg)) continue;
for (const key of Object.keys(cfg)) {
if (!coreKeys.has(key)) continue;
if (!(key in this._existingConfig.core)) {
this._existingConfig.core[key] = cfg[key];
}
delete cfg[key];
}
}
}

/**
* Pre-scan module schemas to gather metadata for the configuration gateway prompt.
* Returns info about which modules have configurable options.
@@ -1,6 +1,7 @@
const fs = require('../fs-native');
const path = require('node:path');
const yaml = require('yaml');
const { MODULE_HELP_CSV_HEADER } = require('./module-help-schema');

/**
* Resolves how to install a plugin from marketplace.json by analyzing

@@ -338,8 +339,7 @@ class PluginResolver {
* @returns {string} CSV content
*/
_buildSynthesizedHelpCsv(moduleName, skillInfos) {
const header = 'module,skill,display-name,menu-code,description,action,args,phase,after,before,required,output-location,outputs';
const rows = [header];
const rows = [MODULE_HELP_CSV_HEADER];

for (const info of skillInfos) {
const displayName = this._formatDisplayName(info.name || info.dirName);

@@ -1,5 +1,6 @@
const path = require('node:path');
const os = require('node:os');
const yaml = require('yaml');
const fs = require('./fs-native');

/**

@@ -86,8 +87,11 @@ function getExternalModuleCachePath(moduleName, ...segments) {
* Built-in modules (core, bmm) live under <src>. External official modules are
* cloned into ~/.bmad/cache/external-modules/<name>/ with varying internal
* layouts (some at src/module.yaml, some at skills/module.yaml, some nested).
* Local custom-source modules are not cached; their path is read from the
* CustomModuleManager resolution cache set during the same install run.
* Url-source custom modules are cloned into ~/.bmad/cache/custom-modules/<host>/<owner>/<repo>/
* and are resolved by walking the cache and matching `code` or `name` from the
* discovered module.yaml. Local custom-source modules are not cached; their
* path is read from the CustomModuleManager resolution cache set during the
* same install run.
* This mirrors the candidate-path search in
* ExternalModuleManager.findExternalModuleSource but performs no git/network
* work, which keeps it safe to call during manifest writing.

@@ -99,11 +103,14 @@ async function resolveInstalledModuleYaml(moduleName) {
const builtIn = path.join(getModulePath(moduleName), 'module.yaml');
if (await fs.pathExists(builtIn)) return builtIn;

// Search a resolved root directory using the same candidate-path pattern.
async function searchRoot(root) {
// Collect every module.yaml under a root using the standard candidate paths.
// Url-source repos can host multiple plugins (discovery mode), so we need all
// matches, not just the first. Returned in priority order.
async function searchRootAll(root) {
const results = [];
for (const dir of ['skills', 'src']) {
const direct = path.join(root, dir, 'module.yaml');
if (await fs.pathExists(direct)) return direct;
if (await fs.pathExists(direct)) results.push(direct);

const dirPath = path.join(root, dir);
if (await fs.pathExists(dirPath)) {

@@ -111,22 +118,35 @@ async function resolveInstalledModuleYaml(moduleName) {
for (const entry of entries) {
if (!entry.isDirectory()) continue;
const nested = path.join(dirPath, entry.name, 'module.yaml');
if (await fs.pathExists(nested)) return nested;
if (await fs.pathExists(nested)) results.push(nested);
}
}
}

// BMB standard: {setup-skill}/assets/module.yaml (setup skill is any *-setup directory)
const rootEntries = await fs.readdir(root, { withFileTypes: true });
for (const entry of rootEntries) {
// BMB standard: {setup-skill}/assets/module.yaml (setup skill is any *-setup directory).
// Check at the repo root, and also under src/skills/ and skills/ since
// marketplace plugins commonly nest skills under src/skills/<name>/.
const setupSearchRoots = [root, path.join(root, 'src', 'skills'), path.join(root, 'skills')];
for (const setupRoot of setupSearchRoots) {
if (!(await fs.pathExists(setupRoot))) continue;
const entries = await fs.readdir(setupRoot, { withFileTypes: true });
for (const entry of entries) {
if (!entry.isDirectory() || !entry.name.endsWith('-setup')) continue;
const setupAssets = path.join(root, entry.name, 'assets', 'module.yaml');
if (await fs.pathExists(setupAssets)) return setupAssets;
const setupAssets = path.join(setupRoot, entry.name, 'assets', 'module.yaml');
if (await fs.pathExists(setupAssets)) results.push(setupAssets);
}
}

const atRoot = path.join(root, 'module.yaml');
if (await fs.pathExists(atRoot)) return atRoot;
return null;
if (await fs.pathExists(atRoot)) results.push(atRoot);
return results;
}

// Backwards-compatible single-result variant for the existing external-cache
// and resolution-cache fallbacks (one module per root by construction).
async function searchRoot(root) {
const all = await searchRootAll(root);
return all.length > 0 ? all[0] : null;
}

const cacheRoot = getExternalModuleCachePath(moduleName);

@@ -135,6 +155,16 @@ async function resolveInstalledModuleYaml(moduleName) {
if (found) return found;
}

// Community modules are cloned to ~/.bmad/cache/community-modules/<name>/
// (parallel to the external-modules cache used above). Search there too so
// collectAgentsFromModuleYaml and writeCentralConfig can locate community
// module.yaml files regardless of how nested the layout is.
const communityCacheRoot = path.join(os.homedir(), '.bmad', 'cache', 'community-modules', moduleName);
if (await fs.pathExists(communityCacheRoot)) {
const found = await searchRoot(communityCacheRoot);
if (found) return found;
}

// Fallback: local custom-source modules store their source path in the
// CustomModuleManager resolution cache populated during the same install run.
// Match by code OR name since callers may use either form.

@@ -150,6 +180,37 @@ async function resolveInstalledModuleYaml(moduleName) {
// Resolution cache unavailable — continue
}

// Fallback: url-source custom modules cloned to ~/.bmad/cache/custom-modules/.
// Walk every cached repo, enumerate ALL module.yaml files via searchRootAll
// (a single repo can host multiple plugins in discovery mode), and match by
// the yaml's `code` or `name` field. This works on re-install runs where
// _resolutionCache is empty and covers both discovery-mode (with marketplace.json)
// and direct-mode modules, since we identify repo roots by .bmad-source.json
// (written by cloneRepo) or .claude-plugin/ rather than by marketplace.json.
try {
const customCacheDir = path.join(os.homedir(), '.bmad', 'cache', 'custom-modules');
if (await fs.pathExists(customCacheDir)) {
const { CustomModuleManager } = require('./modules/custom-module-manager');
const customMgr = new CustomModuleManager();
const repoRoots = await customMgr._findCacheRepoRoots(customCacheDir);
for (const { repoPath } of repoRoots) {
const candidates = await searchRootAll(repoPath);
for (const candidate of candidates) {
try {
const parsed = yaml.parse(await fs.readFile(candidate, 'utf8'));
if (parsed && (parsed.code === moduleName || parsed.name === moduleName)) {
return candidate;
}
} catch {
// Malformed yaml — skip
}
}
}
}
} catch {
// Custom-modules cache walk failed — continue
}

return null;
}
@@ -0,0 +1,330 @@
// `--set <module>.<key>=<value>` is a post-install patch. The installer runs
// its normal flow and writes `_bmad/config.toml`, `_bmad/config.user.toml`,
// and `_bmad/<module>/config.yaml`; afterwards `applySetOverrides` upserts
// each override into those files.
//
// This is intentionally NOT integrated with the prompt/template/schema
// system. Tradeoffs:
// - No `result:` template rendering: `--set bmm.project_knowledge=research`
//   writes "research" verbatim. Pass `--set bmm.project_knowledge='{project-root}/research'`
//   if you want the rendered form.
// - Carry-forward across installs is best-effort: declared schema keys
//   persist via the existingValue path on the next interactive run; values
//   for keys outside any module's schema may need to be re-passed on each
//   install (or edited directly in `_bmad/config.toml`).
// - No "key not in schema" validation: whatever you assert, we write.
//
// Names that, when used as object keys, can mutate `Object.prototype` and
// cascade into every plain-object lookup in the process. The `--set` pipeline
// assigns into plain `{}` maps keyed by user input, so `--set __proto__.x=1`
// would otherwise reach `overrides.__proto__[x] = 1` and pollute every plain
// object. We reject the names at parse time and harden the maps in
// `parseSetEntries` with `Object.create(null)` for defense-in-depth.
const PROTOTYPE_POLLUTING_NAMES = new Set(['__proto__', 'prototype', 'constructor']);

const path = require('node:path');
const fs = require('./fs-native');
const yaml = require('yaml');

/**
* Parse a single `--set <module>.<key>=<value>` entry.
* @param {string} entry - raw flag value
* @returns {{module: string, key: string, value: string}}
* @throws {Error} on malformed input
*/
function parseSetEntry(entry) {
if (typeof entry !== 'string' || entry.length === 0) {
throw new Error('--set: empty entry. Expected <module>.<key>=<value>');
}
const eq = entry.indexOf('=');
if (eq === -1) {
throw new Error(`--set "${entry}": missing '='. Expected <module>.<key>=<value>`);
}
const lhs = entry.slice(0, eq);
// Note: only the LHS is trimmed. Values may legitimately contain leading
// or trailing whitespace (paths with spaces, quoted strings); module / key
// names cannot, so it's safe to be strict on the left.
const value = entry.slice(eq + 1);
const dot = lhs.indexOf('.');
if (dot === -1) {
throw new Error(`--set "${entry}": missing '.'. Expected <module>.<key>=<value>`);
}
const moduleCode = lhs.slice(0, dot).trim();
const key = lhs.slice(dot + 1).trim();
if (!moduleCode || !key) {
throw new Error(`--set "${entry}": empty module or key. Expected <module>.<key>=<value>`);
}
if (PROTOTYPE_POLLUTING_NAMES.has(moduleCode) || PROTOTYPE_POLLUTING_NAMES.has(key)) {
throw new Error(
`--set "${entry}": '__proto__', 'prototype', and 'constructor' are reserved and cannot be used as a module or key name.`,
);
}
return { module: moduleCode, key, value };
}

/**
* Parse repeated `--set` entries into a `{ module: { key: value } }` map.
* Later entries overwrite earlier ones for the same key. Both the outer
* map and the per-module inner maps are `Object.create(null)` so callers
* that bypass `parseSetEntry`'s name check still can't pollute prototypes.
*
* @param {string[]} entries
* @returns {Object<string, Object<string, string>>}
*/
function parseSetEntries(entries) {
const overrides = Object.create(null);
if (!Array.isArray(entries)) return overrides;
for (const entry of entries) {
const { module: moduleCode, key, value } = parseSetEntry(entry);
if (!overrides[moduleCode]) overrides[moduleCode] = Object.create(null);
overrides[moduleCode][key] = value;
}
return overrides;
}

/**
* Encode a JS string as a TOML basic string (double-quoted with escapes).
* @param {string} value
*/
function tomlString(value) {
const s = String(value);
// Per the TOML spec, basic strings escape `\`, `"`, and control characters.
return (
'"' +
s
.replaceAll('\\', '\\\\')
.replaceAll('"', String.raw`\"`)
.replaceAll('\b', String.raw`\b`)
.replaceAll('\f', String.raw`\f`)
.replaceAll('\n', String.raw`\n`)
.replaceAll('\r', String.raw`\r`)
.replaceAll('\t', String.raw`\t`) +
'"'
);
}

/**
* Section header for a given module code.
* - `core` → `[core]`
* - `<other>` → `[modules.<other>]`
*
* Mirrors the layout `manifest-generator.writeCentralConfig` produces.
*/
function sectionHeader(moduleCode) {
return moduleCode === 'core' ? '[core]' : `[modules.${moduleCode}]`;
}

/**
* Insert or update `key = value` inside a TOML section, returning the new
* file content. The format produced by the installer is regular and small
* enough that a line scanner is more reliable than pulling in a TOML
* round-tripper that would normalize the file's existing whitespace and
* comment structure.
*
* - If `[section]` exists and contains `key`, replace the value on that
*   line (preserving any inline comment after the value).
* - If `[section]` exists but `key` doesn't, append `key = value` at the
*   end of the section (before the next `[...]` header or EOF, skipping
*   trailing blank lines so the section stays tidy).
* - If `[section]` doesn't exist, append a new section block at EOF.
*
* @param {string} content existing file content (may be empty)
* @param {string} section exact `[section]` header to target
* @param {string} key
* @param {string} valueToml already TOML-encoded value (e.g. `"foo"`)
* @returns {string} new content
*/
function upsertTomlKey(content, section, key, valueToml) {
const lines = content.split('\n');
// Track whether the file already ended with a newline so we can preserve
// that. `split('\n')` on `"a\n"` yields `['a', '']`, which gives us the
// marker we need.
const hadTrailingNewline = lines.length > 0 && lines.at(-1) === '';
if (hadTrailingNewline) lines.pop();

// Locate the target section.
const sectionStart = lines.findIndex((line) => line.trim() === section);
if (sectionStart === -1) {
// Section doesn't exist — append a new block. Pad with a blank line if
// the file is non-empty so sections stay visually separated.
if (lines.length > 0 && lines.at(-1).trim() !== '') lines.push('');
lines.push(section, `${key} = ${valueToml}`);
return lines.join('\n') + (hadTrailingNewline ? '\n' : '');
}

// Find the section's end (next `[...]` header or EOF).
let sectionEnd = lines.length;
for (let i = sectionStart + 1; i < lines.length; i++) {
if (/^\s*\[/.test(lines[i])) {
sectionEnd = i;
break;
}
}

// Look for the key inside the section. Match `<key> = ...` allowing
// optional leading whitespace; preserve the comment tail (`# ...`) if any.
const keyPattern = new RegExp(`^(\\s*)${escapeRegExp(key)}\\s*=\\s*(.*)$`);
for (let i = sectionStart + 1; i < sectionEnd; i++) {
const match = lines[i].match(keyPattern);
if (match) {
const indent = match[1];
// Preserve trailing comment if present. We split on the first `#` that
// is preceded by whitespace — TOML strings can't contain unescaped `#`
// in basic-string form so this is safe for the values we emit.
const tail = match[2];
const commentIdx = tail.search(/\s+#/);
const commentSuffix = commentIdx === -1 ? '' : tail.slice(commentIdx);
lines[i] = `${indent}${key} = ${valueToml}${commentSuffix}`;
return lines.join('\n') + (hadTrailingNewline ? '\n' : '');
}
}

// Section exists but key doesn't. Insert before the next section header,
// skipping trailing blank lines inside the current section so the new
// entry sits with its siblings.
let insertAt = sectionEnd;
while (insertAt > sectionStart + 1 && lines[insertAt - 1].trim() === '') {
insertAt--;
}
lines.splice(insertAt, 0, `${key} = ${valueToml}`);
return lines.join('\n') + (hadTrailingNewline ? '\n' : '');
}

function escapeRegExp(s) {
return s.replaceAll(/[.*+?^${}()|[\]\\]/g, String.raw`\$&`);
}

/**
* Look up `[section] key` in a TOML file. Returns true if the file exists,
* the section is present, and `key` is set within it. Used by
* `applySetOverrides` to route an override to the file that already owns
* the key (so user-scope keys land in `config.user.toml`, team-scope keys
* land in `config.toml`).
*/
async function tomlHasKey(filePath, section, key) {
if (!(await fs.pathExists(filePath))) return false;
const content = await fs.readFile(filePath, 'utf8');
const lines = content.split('\n');
const sectionStart = lines.findIndex((line) => line.trim() === section);
if (sectionStart === -1) return false;
const keyPattern = new RegExp(`^\\s*${escapeRegExp(key)}\\s*=`);
for (let i = sectionStart + 1; i < lines.length; i++) {
if (/^\s*\[/.test(lines[i])) return false;
if (keyPattern.test(lines[i])) return true;
}
return false;
}

/**
* Apply parsed `--set` overrides to the central TOML files written by the
* installer. Called at the end of an install / quick-update.
*
* Routing per (module, key):
* 1. If `_bmad/config.user.toml` already has `[section] key`, update there
*    (user-scope key like `core.user_name`, `bmm.user_skill_level`).
* 2. Otherwise update `_bmad/config.toml` (team scope, the default).
*
* The schema-correct user/team partition lives in `manifest-generator`. We
* intentionally don't re-read module schemas here — the only goal is to
* match the file the installer just wrote the key to. For brand-new keys
* (not in either file yet), team scope is the safe default.
*
* @param {Object<string, Object<string, string>>} overrides
* @param {string} bmadDir absolute path to `_bmad/`
* @returns {Promise<Array<{module:string,key:string,scope:'team'|'user',file:string}>>}
*   a list of applied entries (for caller logging)
*/
async function applySetOverrides(overrides, bmadDir) {
const applied = [];
if (!overrides || typeof overrides !== 'object') return applied;

const teamPath = path.join(bmadDir, 'config.toml');
const userPath = path.join(bmadDir, 'config.user.toml');

for (const moduleCode of Object.keys(overrides)) {
// Skip overrides for modules not actually installed. The installer writes
// `_bmad/<module>/config.yaml` for every installed module (including core),
// so its presence is a reliable "is this module here?" signal that works
// for both fresh installs and quick-updates without coupling to caller-
// supplied module lists.
const moduleConfigYaml = path.join(bmadDir, moduleCode, 'config.yaml');
if (!(await fs.pathExists(moduleConfigYaml))) {
continue;
}

const section = sectionHeader(moduleCode);
const moduleOverrides = overrides[moduleCode] || {};
for (const key of Object.keys(moduleOverrides)) {
const value = moduleOverrides[key];
const valueToml = tomlString(value);

const userOwnsIt = await tomlHasKey(userPath, section, key);
const targetPath = userOwnsIt ? userPath : teamPath;

// The team file always exists post-install; the user file only exists
// if the install wrote at least one user-scope key. If we're routing to
// it but it doesn't exist yet, create it with a minimal header so it
// has the same shape as installer-written user toml.
let content = '';
if (await fs.pathExists(targetPath)) {
content = await fs.readFile(targetPath, 'utf8');
} else {
content = '# Personal overrides for _bmad/config.toml.\n';
}

const next = upsertTomlKey(content, section, key, valueToml);
await fs.writeFile(targetPath, next, 'utf8');
applied.push({
module: moduleCode,
key,
scope: userOwnsIt ? 'user' : 'team',
file: path.basename(targetPath),
});
}

// Also patch the per-module yaml (`_bmad/<module>/config.yaml`). The
// installer reads this file as `_existingConfig` on subsequent runs and
// surfaces declared values as prompt defaults — under `--yes` those
// defaults are accepted, so patching here gives `--set` natural
// carry-forward for declared keys without needing schema-strict
// partition exemptions in the manifest writer. For undeclared keys the
// value lives in the per-module yaml but won't be re-emitted into
// config.toml on the next install (the schema-strict partition drops
// it); re-pass `--set` if you need it sticky.
const moduleYamlPath = path.join(bmadDir, moduleCode, 'config.yaml');
if (await fs.pathExists(moduleYamlPath)) {
try {
const text = await fs.readFile(moduleYamlPath, 'utf8');
const parsed = yaml.parse(text);
if (parsed && typeof parsed === 'object' && !Array.isArray(parsed)) {
// Preserve the installer's banner header (everything up to the
// first non-comment line) so `_bmad/<module>/config.yaml` keeps
// its provenance comments after we round-trip it.
const headerLines = [];
for (const line of text.split('\n')) {
if (line.startsWith('#') || line.trim() === '') {
headerLines.push(line);
} else {
break;
}
}
for (const key of Object.keys(moduleOverrides)) {
parsed[key] = moduleOverrides[key];
}
const body = yaml.stringify(parsed, { indent: 2, lineWidth: 0, minContentWidth: 0 });
const header = headerLines.length > 0 ? headerLines.join('\n') + '\n' : '';
await fs.writeFile(moduleYamlPath, header + body, 'utf8');
}
} catch {
// Per-module yaml unparseable — skip silently. The central toml was
// already patched above, which is the user-visible state for the
// current install. Carry-forward will fail next install but the
// current install reflects the override.
}
}
}

return applied;
}

module.exports = { parseSetEntry, parseSetEntries, applySetOverrides, upsertTomlKey, tomlString };
|
||||
|
|
@ -16,6 +16,7 @@ const {
|
|||
} = require('./modules/channel-plan');
|
||||
const channelResolver = require('./modules/channel-resolver');
|
||||
const prompts = require('./prompts');
|
||||
const { parseSetEntries } = require('./set-overrides');
|
||||
|
||||
const manifest = new Manifest();
|
||||
|
||||
|
|
@ -200,12 +201,15 @@ class UI {
|
|||
actionType = options.action;
|
||||
await prompts.log.info(`Using action from command-line: ${actionType}`);
|
||||
} else if (options.yes) {
|
||||
// Default to quick-update if available, otherwise first available choice
|
||||
// Default to quick-update if available, unless flags that require the
|
||||
// full update path are present (e.g. --custom-source which re-clones
|
||||
// modules at a new version — quick-update skips that entirely).
|
||||
if (choices.length === 0) {
|
||||
throw new Error('No valid actions available for this installation');
|
||||
}
|
||||
const hasQuickUpdate = choices.some((c) => c.value === 'quick-update');
|
||||
actionType = hasQuickUpdate ? 'quick-update' : choices[0].value;
|
||||
const needsFullUpdate = !!options.customSource;
|
||||
actionType = hasQuickUpdate && !needsFullUpdate ? 'quick-update' : (choices.find((c) => c.value === 'update') || choices[0]).value;
|
||||
await prompts.log.info(`Non-interactive mode (--yes): defaulting to ${actionType}`);
|
||||
} else {
|
||||
actionType = await prompts.select({
|
||||
|
|
@ -241,8 +245,11 @@ class UI {
|
|||
.map((m) => m.trim())
|
||||
.filter(Boolean);
|
||||
await prompts.log.info(`Using modules from command-line: ${selectedModules.join(', ')}`);
|
||||
} else if (options.customSource) {
|
||||
// Custom source without --modules: start with empty list (core added below)
|
||||
} else if (options.customSource && !options.yes) {
|
||||
// Custom source without --modules or --yes: start with empty list
|
||||
// (only custom source modules + core will be installed).
|
||||
// When --yes is also set, fall through to the --yes branch so all
|
||||
// installed modules are included alongside the custom source modules.
|
||||
selectedModules = [];
|
||||
} else if (options.yes) {
|
||||
selectedModules = await this.getDefaultModules(installedModuleIds);
|
||||
|
|
@ -281,7 +288,7 @@ class UI {
|
|||
// Get tool selection
|
||||
const toolSelection = await this.promptToolSelection(confirmedDirectory, options);
|
||||
|
||||
const moduleConfigs = await this.collectModuleConfigs(confirmedDirectory, selectedModules, {
|
||||
const { moduleConfigs, setOverrides } = await this.collectModuleConfigs(confirmedDirectory, selectedModules, {
|
||||
...options,
|
||||
channelOptions,
|
||||
});
|
||||
|
|
@ -307,6 +314,7 @@ class UI {
|
|||
skipIde: toolSelection.skipIde,
|
||||
coreConfig: moduleConfigs.core || {},
|
||||
moduleConfigs: moduleConfigs,
|
||||
setOverrides,
|
||||
skipPrompts: options.yes || false,
|
||||
channelOptions,
|
||||
};
|
||||
|
|
@ -358,7 +366,7 @@ class UI {
|
|||
await this._interactiveChannelGate({ options, channelOptions, selectedModules });
|
||||
|
||||
let toolSelection = await this.promptToolSelection(confirmedDirectory, options);
|
||||
const moduleConfigs = await this.collectModuleConfigs(confirmedDirectory, selectedModules, {
|
||||
const { moduleConfigs, setOverrides } = await this.collectModuleConfigs(confirmedDirectory, selectedModules, {
|
||||
...options,
|
||||
channelOptions,
|
||||
});
|
||||
|
|
@ -384,6 +392,7 @@ class UI {
|
|||
skipIde: toolSelection.skipIde,
|
||||
coreConfig: moduleConfigs.core || {},
|
||||
moduleConfigs: moduleConfigs,
|
||||
setOverrides,
|
||||
skipPrompts: options.yes || false,
|
||||
channelOptions,
|
||||
};
|
||||
|
|
@ -398,6 +407,37 @@ class UI {
|
|||
* @param {Object} options - Command-line options
|
||||
* @returns {Object} Tool configuration
|
||||
*/
|
||||
_parseToolsFlag(toolsArg, allKnownValues) {
|
||||
const selectedIdes = toolsArg
|
||||
.split(',')
|
||||
.map((t) => t.trim())
|
||||
.filter(Boolean);
|
||||
|
||||
if (selectedIdes.length === 0) {
|
||||
const err = new Error(
|
||||
'--tools was passed empty. Provide at least one tool ID (e.g. --tools claude-code) or run with --list-tools to see valid IDs.',
|
||||
);
|
||||
err.expected = true;
|
||||
throw err;
|
||||
}
|
||||
|
||||
const unknown = selectedIdes.filter((id) => !allKnownValues.has(id));
|
||||
if (unknown.length > 0) {
|
||||
const err = new Error(
|
||||
[
|
||||
`Unknown tool ID${unknown.length === 1 ? '' : 's'}: ${unknown.join(', ')}`,
|
||||
'',
|
||||
'Run with --list-tools to see all valid IDs.',
|
||||
'Common: claude-code, cursor, copilot, windsurf, cline',
|
||||
].join('\n'),
|
||||
);
|
||||
err.expected = true;
|
||||
throw err;
|
||||
}
|
||||
|
||||
return selectedIdes;
|
||||
}
|
||||
|
||||
async promptToolSelection(projectDir, options = {}) {
|
||||
const { ExistingInstall } = require('./core/existing-install');
|
||||
const { Installer } = require('./core/installer');
|
||||
|
|
@ -432,15 +472,10 @@ class UI {
|
|||
const allTools = [...preferredIdes, ...otherIdes];
|
||||
|
||||
// Non-interactive: handle --tools and --yes flags before interactive prompt
|
||||
if (options.tools) {
|
||||
if (options.tools.toLowerCase() === 'none') {
|
||||
await prompts.log.info('Skipping tool configuration (--tools none)');
|
||||
return { ides: [], skipIde: true };
|
||||
}
|
||||
const selectedIdes = options.tools
|
||||
.split(',')
|
||||
.map((t) => t.trim())
|
||||
.filter(Boolean);
|
||||
// Use !== undefined so an explicit --tools "" falls through to _parseToolsFlag and
|
||||
// gets a specific "passed empty" error instead of being silently ignored.
|
||||
if (options.tools !== undefined) {
|
||||
const selectedIdes = this._parseToolsFlag(options.tools, allKnownValues);
|
||||
await prompts.log.info(`Using tools from command-line: ${selectedIdes.join(', ')}`);
|
||||
await this.displaySelectedTools(selectedIdes, preferredIdes, allTools);
|
||||
return { ides: selectedIdes, skipIde: false };
|
||||
|
|
@ -516,21 +551,13 @@ class UI {
|
|||
|
||||
let selectedIdes = [];
|
||||
|
||||
// Check if tools are provided via command-line
|
||||
if (options.tools) {
|
||||
// Check for explicit "none" value to skip tool installation
|
||||
if (options.tools.toLowerCase() === 'none') {
|
||||
await prompts.log.info('Skipping tool configuration (--tools none)');
|
||||
return { ides: [], skipIde: true };
|
||||
} else {
|
||||
selectedIdes = options.tools
|
||||
.split(',')
|
||||
.map((t) => t.trim())
|
||||
.filter(Boolean);
|
||||
// Check if tools are provided via command-line.
|
||||
// Use !== undefined so an explicit --tools "" still hits _parseToolsFlag's empty-value error.
|
||||
if (options.tools !== undefined) {
|
||||
selectedIdes = this._parseToolsFlag(options.tools, allKnownValues);
|
||||
await prompts.log.info(`Using tools from command-line: ${selectedIdes.join(', ')}`);
|
||||
await this.displaySelectedTools(selectedIdes, preferredIdes, allTools);
|
||||
return { ides: selectedIdes, skipIde: false };
|
||||
}
|
||||
} else if (options.yes) {
|
||||
// If --yes flag is set, skip tool prompt and use previously configured tools or empty
|
||||
if (configuredIdes.length > 0) {
|
||||
|
|
@@ -538,8 +565,18 @@ class UI {
         await this.displaySelectedTools(configuredIdes, preferredIdes, allTools);
         return { ides: configuredIdes, skipIde: false };
       } else {
-        await prompts.log.info('Skipping tool configuration (--yes flag, no previous tools)');
-        return { ides: [], skipIde: true };
+        const err = new Error(
+          [
+            '--tools is required for non-interactive install (--yes / -y) when no tools are previously configured.',
+            '',
+            'Common: claude-code, cursor, copilot, windsurf, cline',
+            'See all supported tools: bmad-method install --list-tools',
+            '',
+            'Example: bmad-method install --modules bmm --tools claude-code -y',
+          ].join('\n'),
+        );
+        err.expected = true;
+        throw err;
       }
     }
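The thrown error is flagged with `err.expected = true`. A hedged sketch of how a CLI entry point might consume such a flag (the handler below is illustrative; the installer's actual error reporting is not shown in this diff): expected errors are user-facing usage mistakes and should print only their message, while anything else keeps its stack trace.

```javascript
// Illustrative only: route errors marked `expected` to a clean usage message.
function formatCliError(err) {
  if (err.expected) return err.message; // user error: message only, no stack
  return err.stack;                     // unexpected bug: full stack trace
}

const err = new Error('--tools is required for non-interactive install (--yes / -y).');
err.expected = true;
console.log(formatCliError(err));
```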
@@ -675,6 +712,33 @@ class UI {
    */
   async collectModuleConfigs(directory, modules, options = {}) {
     const { OfficialModules } = require('./modules/official-modules');
 
+    // Parse --set up front purely to surface user-error before the install
+    // burns time on the network / filesystem. The actual application happens
+    // in installer.install() as a post-write TOML patch — see
+    // `tools/installer/set-overrides.js`. We also warn about overrides
+    // targeting modules the user didn't include, since those will silently
+    // miss the file the patch step looks for.
+    let setOverrides = {};
+    try {
+      setOverrides = parseSetEntries(options.set || []);
+    } catch (error) {
+      // install.js validated already; rethrow as-is for the user.
+      throw error;
+    }
+    // Drop overrides for modules that aren't in the install set so the
+    // post-install patch step doesn't create orphan sections in config.toml
+    // for modules that were never installed.
+    const selectedModuleSet = new Set(['core', ...modules]);
+    for (const moduleCode of Object.keys(setOverrides)) {
+      if (!selectedModuleSet.has(moduleCode)) {
+        await prompts.log.warn(
+          `--set ${moduleCode}.* — module '${moduleCode}' is not in the install set; values will be ignored. Add it to --modules to apply.`,
+        );
+        delete setOverrides[moduleCode];
+      }
+    }
+
     const configCollector = new OfficialModules({ channelOptions: options.channelOptions });
 
     // Seed core config from CLI options if provided
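`parseSetEntries` itself is defined elsewhere in this changeset. A minimal sketch of what such a parser might look like, including the prototype-pollution defense the changelog mentions; the exact shape, key rules, and error wording here are assumptions:

```javascript
// Keys that could mutate Object.prototype if written through bracket access.
const BLOCKED_KEYS = new Set(['__proto__', 'prototype', 'constructor']);

// Turn repeated --set <module>.<key>=<value> entries into { module: { key: value } }.
function parseSetEntries(entries) {
  const overrides = Object.create(null); // null prototype: nothing to pollute
  for (const entry of entries) {
    const eq = entry.indexOf('=');
    if (eq === -1) throw new Error(`--set expects <module>.<key>=<value>, got: ${entry}`);
    const pathPart = entry.slice(0, eq);
    const value = entry.slice(eq + 1); // value may itself contain '='
    const dot = pathPart.indexOf('.');
    if (dot === -1) throw new Error(`--set expects <module>.<key>=<value>, got: ${entry}`);
    const moduleCode = pathPart.slice(0, dot);
    const key = pathPart.slice(dot + 1);
    if (BLOCKED_KEYS.has(moduleCode) || BLOCKED_KEYS.has(key)) {
      throw new Error(`--set refuses reserved key: ${pathPart}`);
    }
    (overrides[moduleCode] ??= Object.create(null))[key] = value;
  }
  return overrides;
}

console.log(parseSetEntries(['core.user_name=Ada']).core.user_name); // Ada
```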
@@ -724,6 +788,9 @@ class UI {
       const defaultUsername = safeUsername.charAt(0).toUpperCase() + safeUsername.slice(1);
       configCollector.collectedConfig.core = {
         user_name: defaultUsername,
+        // {directory_name} default per src/core-skills/module.yaml — matches what the
+        // interactive flow resolves via buildQuestion()'s {directory_name} placeholder.
+        project_name: path.basename(directory),
         communication_language: 'English',
         document_output_language: 'English',
         output_folder: '_bmad-output',
@@ -737,7 +804,7 @@ class UI {
       skipPrompts: options.yes || false,
     });
 
-    return configCollector.collectedConfig;
+    return { moduleConfigs: configCollector.collectedConfig, setOverrides };
   }
 
   /**
@@ -129,13 +129,45 @@ export default defineConfig({
           // TEA docs moved to standalone module site; keep BMM sidebar focused.
           {
             label: 'BMad Ecosystem',
             translations: { 'vi-VN': 'Hệ sinh thái BMad', 'zh-CN': 'BMad 生态系统', 'fr-FR': 'Écosystème BMad', 'cs-CZ': 'Ekosystém BMad' },
             collapsed: false,
             items: [
-              { label: 'BMad Builder', link: 'https://bmad-builder-docs.bmad-method.org/', attrs: { target: '_blank' } },
-              { label: 'Creative Intelligence Suite', link: 'https://cis-docs.bmad-method.org/', attrs: { target: '_blank' } },
-              { label: 'Game Dev Studio', link: 'https://game-dev-studio-docs.bmad-method.org/', attrs: { target: '_blank' } },
+              {
+                label: 'BMad Builder',
+                translations: { 'vi-VN': 'BMad Builder', 'zh-CN': 'BMad 构建器', 'fr-FR': 'BMad Builder', 'cs-CZ': 'BMad Builder' },
+                link: 'https://bmad-builder-docs.bmad-method.org/',
+                attrs: { target: '_blank' },
+              },
+              {
+                label: 'Creative Intelligence Suite',
+                translations: {
+                  'vi-VN': 'Bộ công cụ Trí tuệ Sáng tạo',
+                  'zh-CN': '创意智能套件',
+                  'fr-FR': "Suite d'Intelligence Créative",
+                  'cs-CZ': 'Sada kreativní inteligence',
+                },
+                link: 'https://cis-docs.bmad-method.org/',
+                attrs: { target: '_blank' },
+              },
+              {
+                label: 'Game Dev Studio',
+                translations: {
+                  'vi-VN': 'Xưởng phát triển Game',
+                  'zh-CN': '游戏开发工作室',
+                  'fr-FR': 'Studio de Développement de Jeux',
+                  'cs-CZ': 'Herní vývojové studio',
+                },
+                link: 'https://game-dev-studio-docs.bmad-method.org/',
+                attrs: { target: '_blank' },
+              },
+              {
+                label: 'Test Architect (TEA)',
+                translations: {
+                  'vi-VN': 'Kiến trúc sư Kiểm thử (TEA)',
+                  'zh-CN': '测试架构师 (TEA)',
+                  'fr-FR': 'Architecte de Tests (TEA)',
+                  'cs-CZ': 'Testovací architekt (TEA)',
+                },
+                link: 'https://bmad-code-org.github.io/bmad-method-test-architecture-enterprise/',
+                attrs: { target: '_blank' },
+              },