Merge branch 'main' into i18n/fr_FR
Commit: c2ed4d7397
@@ -19,7 +19,6 @@
  },
  "skills": [
    "./src/core-skills/bmad-help",
    "./src/core-skills/bmad-init",
    "./src/core-skills/bmad-brainstorming",
    "./src/core-skills/bmad-distillator",
    "./src/core-skills/bmad-party-mode",
@@ -5,7 +5,7 @@ on:
    branches: [main]
    paths:
      - "src/**"
-     - "tools/cli/**"
+     - "tools/installer/**"
      - "package.json"
  workflow_dispatch:
    inputs:
@@ -0,0 +1,70 @@
---
title: "Analysis Phase: From Idea to Foundation"
description: What brainstorming, research, product briefs, and PRFAQs are — and when to use each
sidebar:
  order: 1
---

The Analysis phase (Phase 1) helps you think clearly about your product before committing to building it. Every tool in this phase is optional, but skipping analysis entirely means your PRD is built on assumptions instead of insight.

## Why Analysis Before Planning?

A PRD answers "what should we build and why?" If you feed it vague thinking, you get a vague PRD — and every downstream document inherits that vagueness. Architecture built on a weak PRD makes wrong technical bets. Stories derived from weak architecture miss edge cases. The cost compounds.

Analysis tools exist to make your PRD sharp. They attack the problem from different angles — creative exploration, market reality, customer clarity, feasibility — so that by the time you sit down with the PM agent, you know what you're building and for whom.

## The Tools

### Brainstorming

**What it is.** A facilitated creative session using proven ideation techniques. The AI acts as coach, pulling ideas out of you through structured exercises — not generating ideas for you.

**Why it's here.** Raw ideas need space to develop before they get locked into requirements. Brainstorming creates that space. It's especially valuable when you have a problem domain but no clear solution, or when you want to explore multiple directions before committing.

**When to use it.** You have a vague sense of what you want to build but haven't crystallized the concept. Or you have a concept but want to pressure-test it against alternatives.

See [Brainstorming](./brainstorming.md) for a deeper look at how sessions work.

### Research (Market, Domain, Technical)

**What it is.** Three focused research workflows that investigate different dimensions of your idea. Market research examines competitors, trends, and user sentiment. Domain research builds subject-matter expertise and terminology. Technical research evaluates feasibility, architecture options, and implementation approaches.

**Why it's here.** Building on assumptions is the fastest way to build something nobody needs. Research grounds your concept in reality — what competitors already exist, what users actually struggle with, what's technically feasible, and what industry-specific constraints you'll face.

**When to use it.** You're entering an unfamiliar domain, you suspect competitors exist but haven't mapped them, or your concept depends on technical capabilities you haven't validated. Run one, two, or all three — each stands alone.

### Product Brief

**What it is.** A guided discovery session that produces a 1-2 page executive summary of your product concept. The AI acts as a collaborative Business Analyst, helping you articulate the vision, target audience, value proposition, and scope.

**Why it's here.** The product brief is the gentler path into planning. It captures your strategic vision in a structured format that feeds directly into PRD creation. It works best when you already have conviction about your concept — you know the customer, the problem, and roughly what you want to build. The brief organizes and sharpens that thinking.

**When to use it.** Your concept is relatively clear and you want to document it efficiently before creating a PRD. You're confident in the direction and don't need your assumptions aggressively challenged.

### PRFAQ (Working Backwards)

**What it is.** Amazon's Working Backwards methodology adapted as an interactive challenge. You write the press release announcing your finished product before a single line of code exists, then answer the hardest questions customers and stakeholders would ask. The AI acts as a relentless but constructive product coach.

**Why it's here.** The PRFAQ is the rigorous path into planning. It forces customer-first clarity by making you defend every claim. If you can't write a compelling press release, the product isn't ready. If customer FAQ answers reveal gaps, those are gaps you'd discover much later — and more expensively — during implementation. The gauntlet surfaces weak thinking early, when it's cheapest to fix.

**When to use it.** You want your concept stress-tested before committing resources. You're unsure whether users will actually care. You want to validate that you can articulate a clear, defensible value proposition. Or you simply want the discipline of Working Backwards to sharpen your thinking.

## Which Should I Use?

| Situation | Recommended tool |
| --------- | ---------------- |
| "I have a vague idea, not sure where to start" | Brainstorming |
| "I need to understand the market before deciding" | Research |
| "I know what I want to build, just need to document it" | Product Brief |
| "I want to make sure this idea is actually worth building" | PRFAQ |
| "I want to explore, then validate, then document" | Brainstorming → Research → PRFAQ or Brief |

Product Brief and PRFAQ both produce input for the PRD — choose one based on how much challenge you want. The brief is collaborative discovery. The PRFAQ is a gauntlet. Both get you to the same destination; the PRFAQ tests whether your concept deserves to get there.

:::tip[Not Sure?]
Run `bmad-help` and describe your situation. It will recommend the right starting point based on what you've already done and what you're trying to accomplish.
:::

## What Happens After Analysis?

Analysis outputs feed directly into Phase 2 (Planning). The PRD workflow accepts product briefs, PRFAQ documents, research findings, and brainstorming reports as input — it synthesizes whatever you've produced into structured requirements. The more analysis you do, the sharper your PRD.
@@ -73,7 +73,7 @@ Available tool IDs for the `--tools` option:

**Recommended:** `claude-code`, `cursor`

- Run `npx bmad-method install` interactively once to see the full current list of supported tools, or check the [platform codes configuration](https://github.com/bmad-code-org/BMAD-METHOD/blob/main/tools/cli/installers/lib/ide/platform-codes.yaml).
+ Run `npx bmad-method install` interactively once to see the full current list of supported tools, or check the [platform codes configuration](https://github.com/bmad-code-org/BMAD-METHOD/blob/main/tools/installer/ide/platform-codes.yaml).

## Installation Modes
@@ -73,7 +73,7 @@ Available tool IDs for the `--tools` flag:

**Preferred:** `claude-code`, `cursor`

- Run `npx bmad-method install` interactively once to see the full current list of supported tools, or check the [platform codes configuration](https://github.com/bmad-code-org/BMAD-METHOD/blob/main/tools/cli/installers/lib/ide/platform-codes.yaml).
+ Run `npx bmad-method install` interactively once to see the full current list of supported tools, or check the [platform codes configuration](https://github.com/bmad-code-org/BMAD-METHOD/blob/main/tools/installer/ide/platform-codes.yaml).

## Installation Modes
@@ -17,7 +17,7 @@ This page lists the default BMM (Agile suite) agents that install with BMad Meth

| Agent | Skill ID | Triggers | Primary workflows |
| --------------------------- | -------------------- | ---------------------------------- | --------------------------------------------------------------------------------------------------- |
- | Analyst (Mary) | `bmad-analyst` | `BP`, `RS`, `CB`, `DP` | Brainstorm Project, Research, Create Brief, Document Project |
+ | Analyst (Mary) | `bmad-analyst` | `BP`, `RS`, `CB`, `WB`, `DP` | Brainstorm Project, Research, Create Brief, PRFAQ Challenge, Document Project |
| Product Manager (John) | `bmad-pm` | `CP`, `VP`, `EP`, `CE`, `IR`, `CC` | Create/Validate/Edit PRD, Create Epics and Stories, Implementation Readiness, Correct Course |
| Architect (Winston) | `bmad-architect` | `CA`, `IR` | Create Architecture, Implementation Readiness |
| Scrum Master (Bob) | `bmad-sm` | `SP`, `CS`, `ER`, `CC` | Sprint Planning, Create Story, Epic Retrospective, Correct Course |
@@ -92,6 +92,8 @@ Workflow skills run a structured, multi-step process without loading an agent pe

| Example skill | Purpose |
| --- | --- |
+ | `bmad-product-brief` | Create a product brief — guided discovery when your concept is clear |
+ | `bmad-prfaq` | Working Backwards PRFAQ challenge to stress-test your product concept |
| `bmad-create-prd` | Create a Product Requirements Document |
| `bmad-create-architecture` | Design system architecture |
| `bmad-create-epics-and-stories` | Create epics and stories |
@@ -21,13 +21,14 @@ Final important note: Every workflow below can be run directly with your tool of

## Phase 1: Analysis (Optional)

- Explore the problem space and validate ideas before committing to planning.
+ Explore the problem space and validate ideas before committing to planning. [**Learn what each tool does and when to use it**](../explanation/analysis-phase.md).

| Workflow | Purpose | Produces |
| ------------------------------- | -------------------------------------------------------------------------- | ------------------------- |
| `bmad-brainstorming` | Brainstorm Project Ideas with guided facilitation of a brainstorming coach | `brainstorming-report.md` |
| `bmad-domain-research`, `bmad-market-research`, `bmad-technical-research` | Validate market, technical, or domain assumptions | Research findings |
- | `bmad-create-product-brief` | Capture strategic vision | `product-brief.md` |
+ | `bmad-product-brief` | Capture strategic vision — best when your concept is clear | `product-brief.md` |
+ | `bmad-prfaq` | Working Backwards — stress-test and forge your product concept | `prfaq-{project}.md` |

## Phase 2: Planning
@@ -68,7 +68,7 @@ BMad helps you build software through guided workflows with specialized AI agent

| Phase | Name | What Happens |
| ----- | -------------- | --------------------------------------------------- |
- | 1 | Analysis | Brainstorming, research, product brief *(optional)* |
+ | 1 | Analysis | Brainstorming, research, product brief or PRFAQ *(optional)* |
| 2 | Planning | Create requirements (PRD or spec) |
| 3 | Solutioning | Design architecture *(BMad Method/Enterprise only)* |
| 4 | Implementation | Build epic by epic, story by story |
@@ -133,10 +133,11 @@ Create it manually at `_bmad-output/project-context.md` or generate it after arc

### Phase 1: Analysis (Optional)

- All workflows in this phase are optional:
+ All workflows in this phase are optional. [**Not sure which to use?**](../explanation/analysis-phase.md)
- **brainstorming** (`bmad-brainstorming`) — Guided ideation
- **research** (`bmad-market-research` / `bmad-domain-research` / `bmad-technical-research`) — Market, domain, and technical research
- - **create-product-brief** (`bmad-create-product-brief`) — Recommended foundation document
+ - **product-brief** (`bmad-product-brief`) — Recommended foundation document when your concept is clear
+ - **prfaq** (`bmad-prfaq`) — Working Backwards challenge to stress-test and forge your product concept

### Phase 2: Planning (Required)
@@ -61,7 +61,7 @@ sidebar:

**Recommended:** `claude-code`, `cursor`

- Run `npx bmad-method install` interactively once to see the full current list of supported tools, or check the [platform codes configuration](https://github.com/bmad-code-org/BMAD-METHOD/blob/main/tools/cli/installers/lib/ide/platform-codes.yaml).
+ Run `npx bmad-method install` interactively once to see the full current list of supported tools, or check the [platform codes configuration](https://github.com/bmad-code-org/BMAD-METHOD/blob/main/tools/installer/ide/platform-codes.yaml).

## Installation Modes
package.json (15 lines changed)
@@ -18,14 +18,14 @@
  },
  "license": "MIT",
  "author": "Brian (BMad) Madison",
- "main": "tools/cli/bmad-cli.js",
+ "main": "tools/installer/bmad-cli.js",
  "bin": {
-   "bmad": "tools/bmad-npx-wrapper.js",
-   "bmad-method": "tools/bmad-npx-wrapper.js"
+   "bmad": "tools/installer/bmad-cli.js",
+   "bmad-method": "tools/installer/bmad-cli.js"
  },
  "scripts": {
-   "bmad:install": "node tools/cli/bmad-cli.js install",
-   "bmad:uninstall": "node tools/cli/bmad-cli.js uninstall",
+   "bmad:install": "node tools/installer/bmad-cli.js install",
+   "bmad:uninstall": "node tools/installer/bmad-cli.js uninstall",
    "docs:build": "node tools/build-docs.mjs",
    "docs:dev": "astro dev --root website",
    "docs:fix-links": "node tools/fix-doc-links.js",
@@ -34,13 +34,13 @@
    "format:check": "prettier --check \"**/*.{js,cjs,mjs,json,yaml}\"",
    "format:fix": "prettier --write \"**/*.{js,cjs,mjs,json,yaml}\"",
    "format:fix:staged": "prettier --write",
-   "install:bmad": "node tools/cli/bmad-cli.js install",
+   "install:bmad": "node tools/installer/bmad-cli.js install",
    "lint": "eslint . --ext .js,.cjs,.mjs,.yaml --max-warnings=0",
    "lint:fix": "eslint . --ext .js,.cjs,.mjs,.yaml --fix",
    "lint:md": "markdownlint-cli2 \"**/*.md\"",
    "prepare": "command -v husky >/dev/null 2>&1 && husky || exit 0",
    "quality": "npm run format:check && npm run lint && npm run lint:md && npm run docs:build && npm run test:install && npm run validate:refs && npm run validate:skills",
-   "rebundle": "node tools/cli/bundlers/bundle-web.js rebundle",
+   "rebundle": "node tools/installer/bundlers/bundle-web.js rebundle",
    "test": "npm run test:refs && npm run test:install && npm run lint && npm run lint:md && npm run format:check",
    "test:install": "node test/test-installation-components.js",
    "test:refs": "node test/test-file-refs-csv.js",
@@ -97,6 +97,7 @@
    "prettier": "^3.7.4",
    "prettier-plugin-packagejson": "^2.5.19",
    "sharp": "^0.33.5",
    "unist-util-visit": "^5.1.0",
    "yaml-eslint-parser": "^1.2.3",
    "yaml-lint": "^1.7.0"
  },
@@ -36,14 +36,17 @@ When you are in this persona and the user calls a skill, this persona must carry

| DR | Industry domain deep dive, subject matter expertise and terminology | bmad-domain-research |
| TR | Technical feasibility, architecture options and implementation approaches | bmad-technical-research |
| CB | Create or update product briefs through guided or autonomous discovery | bmad-product-brief-preview |
+ | WB | Working Backwards PRFAQ challenge — forge and stress-test product concepts | bmad-prfaq |
| DP | Analyze an existing project to produce documentation for human and LLM consumption | bmad-document-project |

## On Activation

- 1. **Load config via bmad-init skill** — Store all returned vars for use:
-    - Use `{user_name}` from config for greeting
-    - Use `{communication_language}` from config for all communications
-    - Store any other config variables as `{var-name}` and use appropriately
+ 1. Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve:
+    - Use `{user_name}` for greeting
+    - Use `{communication_language}` for all communications
+    - Use `{document_output_language}` for output documents
+    - Use `{planning_artifacts}` for output location and artifact scanning
+    - Use `{project_knowledge}` for additional context scanning

2. **Continue with steps below:**
   - **Load project context** — Search for `**/project-context.md`. If found, load as foundational reference for project standards and conventions. If not found, continue without it.
@@ -39,10 +39,12 @@ When you are in this persona and the user calls a skill, this persona must carry

## On Activation

- 1. **Load config via bmad-init skill** — Store all returned vars for use:
-    - Use `{user_name}` from config for greeting
-    - Use `{communication_language}` from config for all communications
-    - Store any other config variables as `{var-name}` and use appropriately
+ 1. Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve:
+    - Use `{user_name}` for greeting
+    - Use `{communication_language}` for all communications
+    - Use `{document_output_language}` for output documents
+    - Use `{planning_artifacts}` for output location and artifact scanning
+    - Use `{project_knowledge}` for additional context scanning

2. **Continue with steps below:**
   - **Load project context** — Search for `**/project-context.md`. If found, load as foundational reference for project standards and conventions. If not found, continue without it.
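A minimal sketch of the config resolution in the new activation step, assuming a flat `key: value` layout in `config.yaml`; the naive line parser stands in for a real YAML loader and is purely illustrative:

```python
# Illustrative only: key names come from the activation list above;
# the flat "key: value" parsing is an assumption, not part of the spec.
REQUIRED_VARS = [
    "user_name",
    "communication_language",
    "document_output_language",
    "planning_artifacts",
    "project_knowledge",
]

def load_bmm_config(path):
    """Resolve the documented config variables from a flat YAML file."""
    raw = {}
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or ":" not in line:
                continue  # skip blanks, comments, and non-mapping lines
            key, _, value = line.partition(":")
            raw[key.strip()] = value.strip().strip('"')
    missing = [key for key in REQUIRED_VARS if key not in raw]
    if missing:
        raise KeyError("config is missing: " + ", ".join(missing))
    return {key: raw[key] for key in REQUIRED_VARS}
```

Raising on missing keys mirrors the intent of the checklist: activation should fail loudly rather than greet the user with unresolved variables.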
@@ -9,16 +9,14 @@

## INITIALIZATION

- ### Configuration Loading
- 1. Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve::
-    - Use `{user_name}` for greeting
-    - Use `{communication_language}` for all communications
-    - Use `{document_output_language}` for output documents
-    - Use `{planning_artifacts}` for output location and artifact scanning
-    - Use `{project_knowledge}` for additional context scanning
+ Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve:
+
+ - `project_knowledge`
+ - `user_name`
+ - `communication_language`
+ - `document_output_language`
+ - `user_skill_level`
+ - `date` as system-generated current datetime

2. **Greet user** as `{user_name}`, speaking in `{communication_language}`.

---
@@ -0,0 +1,96 @@
---
name: bmad-prfaq
description: Working Backwards PRFAQ challenge to forge product concepts. Use when the user requests to 'create a PRFAQ', 'work backwards', or 'run the PRFAQ challenge'.
---

# Working Backwards: The PRFAQ Challenge

## Overview

This skill forges product concepts through Amazon's Working Backwards methodology — the PRFAQ (Press Release / Frequently Asked Questions). Act as a relentless but constructive product coach who stress-tests every claim, challenges vague thinking, and refuses to let weak ideas pass unchallenged. The user walks in with an idea. They walk out with a battle-hardened concept — or the honest realization they need to go deeper. Both are wins.

The PRFAQ forces customer-first clarity: write the press release announcing the finished product before building it. If you can't write a compelling press release, the product isn't ready. The customer FAQ validates the value proposition from the outside in. The internal FAQ addresses feasibility, risks, and hard trade-offs.

**This is hardcore mode.** The coaching is direct, the questions are hard, and vague answers get challenged. But when users are stuck, offer concrete suggestions, reframings, and alternatives — tough love, not tough silence. The goal is to strengthen the concept, not to gatekeep it.

**Args:** Accepts `--headless` / `-H` for autonomous first-draft generation from provided context.

**Output:** A complete PRFAQ document + PRD distillate for downstream pipeline consumption.

**Research-grounded.** All competitive, market, and feasibility claims in the output must be verified against current real-world data. Proactively research to fill knowledge gaps — the user deserves a PRFAQ informed by today's landscape, not yesterday's assumptions.

## On Activation

1. Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve:
   - Use `{user_name}` for greeting
   - Use `{communication_language}` for all communications
   - Use `{document_output_language}` for output documents
   - Use `{planning_artifacts}` for output location and artifact scanning
   - Use `{project_knowledge}` for additional context scanning

2. **Greet user** as `{user_name}`, speaking in `{communication_language}`. Be warm but efficient — dream builder energy.

3. **Resume detection:** Check if `{planning_artifacts}/prfaq-{project_name}.md` already exists. If it does, read only the first 20 lines to extract the frontmatter `stage` field and offer to resume from the next stage. Do not read the full document. If the user confirms, route directly to that stage's reference file.

4. **Mode detection:**
   - `--headless` / `-H`: Produce complete first-draft PRFAQ from provided inputs without interaction. Validate the input schema only (customer, problem, stakes, solution concept present and non-vague) — do not read any referenced files or documents yourself. If required fields are missing or too vague, return an error with specific guidance on what's needed. Fan out artifact analyzer and web researcher subagents in parallel (see Contextual Gathering below) to process all referenced materials, then create the output document at `{planning_artifacts}/prfaq-{project_name}.md` using `./assets/prfaq-template.md` and route to `./references/press-release.md`.
   - Default: Full interactive coaching — the gauntlet.

**Headless input schema:**

- **Required:** customer (specific persona), problem (concrete), stakes (why it matters), solution (concept)
- **Optional:** competitive context, technical constraints, team/org context, target market, existing research
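The required/optional split above can be checked mechanically before any drafting starts. A sketch, assuming the headless input arrives as a plain mapping; the field names come from the schema above, while the word-count vagueness heuristic is a placeholder for real judgment:

```python
# Field names from the headless input schema; the "fewer than 3 words"
# vagueness check is a stand-in heuristic, not part of the skill spec.
REQUIRED_FIELDS = ("customer", "problem", "stakes", "solution")

def validate_headless_input(payload):
    """Return a list of error strings; an empty list means the input passes."""
    errors = []
    for field in REQUIRED_FIELDS:
        value = (payload.get(field) or "").strip()
        if not value:
            errors.append(f"missing required field: {field}")
        elif len(value.split()) < 3:  # placeholder vagueness check
            errors.append(f"field too vague: {field}")
    return errors
```

Returning specific errors rather than a bare failure matches the requirement to "return an error with specific guidance on what's needed."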
**Set the tone immediately.** This isn't a warm, exploratory greeting. Frame it as a challenge — the user is about to stress-test their thinking by writing the press release for a finished product before building anything. Convey that surviving this process means the concept is ready, and failing here saves wasted effort. Be direct and energizing.

Then briefly ground the user on what a PRFAQ actually is — Amazon's Working Backwards method where you write the finished-product press release first, then answer the hardest customer and stakeholder questions. The point is forcing clarity before committing resources.

Then proceed to Stage 1 below.
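The resume check in activation step 3 — read only the first 20 lines and pull the `stage` field out of the frontmatter — can be sketched as follows; the line-by-line frontmatter parsing is illustrative, not prescribed:

```python
import itertools

def detect_resume_stage(path, max_lines=20):
    """Return the frontmatter `stage` value from the first lines, or None."""
    try:
        with open(path, encoding="utf-8") as fh:
            # Read at most max_lines; never load the full document.
            head = list(itertools.islice(fh, max_lines))
    except FileNotFoundError:
        return None  # no existing PRFAQ: start fresh at Stage 1
    for line in head:
        if line.startswith("stage:"):
            return line.partition(":")[2].strip().strip('"')
    return None
```

A `None` result means either no prior PRFAQ exists or its frontmatter carries no `stage`, and in both cases the gauntlet starts from Ignition.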
## Stage 1: Ignition

**Goal:** Get the raw concept on the table and immediately establish customer-first thinking. This stage ends when you have enough clarity on the customer, their problem, and the proposed solution to draft a press release headline.

**Customer-first enforcement:**

- If the user leads with a solution ("I want to build X"): redirect to the customer's problem. Don't let them skip the pain.
- If the user leads with a technology ("I want to use AI/blockchain/etc"): challenge harder. Technology is a "how", not a "why" — push them to articulate the human problem. Strip away the buzzword and ask whether anyone still cares.
- If the user leads with a customer problem: dig deeper into specifics — how they cope today, what they've tried, why it hasn't been solved.

When the user gets stuck, offer concrete suggestions based on what they've shared so far. Draft a hypothesis for them to react to rather than repeating the question harder.

**Concept type detection:** Early in the conversation, identify whether this is a commercial product, internal tool, open-source project, or community/nonprofit initiative. Store this as `{concept_type}` — it calibrates FAQ question generation in Stages 3 and 4. Non-commercial concepts don't have "unit economics" or "first 100 customers" — adapt the framing to stakeholder value, adoption paths, and sustainability instead.

**Essentials to capture before progressing:**

- Who is the customer/user? (specific persona, not "everyone")
- What is their problem? (concrete and felt, not abstract)
- Why does this matter to them? (stakes and consequences)
- What's the initial concept for a solution? (even rough)

**Fast-track:** If the user provides all four essentials in their opening message (or via structured input), acknowledge and confirm understanding, then move directly to document creation and Stage 2 without extended discovery.

**Graceful redirect:** If after 2-3 exchanges the user can't articulate a customer or problem, don't force it — suggest the idea may need more exploration first and recommend they invoke the `bmad-brainstorming` skill to develop it further.

**Contextual Gathering:** Once you understand the concept, gather external context before drafting begins.

1. **Ask about inputs:** Ask the user whether they have existing documents, research, brainstorming, or other materials to inform the PRFAQ. Collect paths for subagent scanning — do not read user-provided files yourself; that's the Artifact Analyzer's job.
2. **Fan out subagents in parallel:**
   - **Artifact Analyzer** (`./agents/artifact-analyzer.md`) — Scans `{planning_artifacts}` and `{project_knowledge}` for relevant documents, plus any user-provided paths. Receives the product intent summary so it knows what's relevant.
   - **Web Researcher** (`./agents/web-researcher.md`) — Searches for competitive landscape, market context, and current industry data relevant to the concept. Receives the product intent summary.
3. **Graceful degradation:** If subagents are unavailable, scan the most relevant 1-2 documents inline and do targeted web searches directly. Never block the workflow.
4. **Merge findings** with what the user shared. Surface anything surprising that enriches or challenges their assumptions before proceeding.
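The parallel fan-out and merge in the Contextual Gathering steps can be sketched with stand-in subagents; the two worker functions here are placeholders for the real analyzer and researcher dispatches, not their actual interfaces:

```python
from concurrent.futures import ThreadPoolExecutor

def artifact_analyzer(intent):
    # Placeholder for dispatching ./agents/artifact-analyzer.md
    return {"source": "artifacts", "findings": [f"docs scanned for: {intent}"]}

def web_researcher(intent):
    # Placeholder for dispatching ./agents/web-researcher.md
    return {"source": "web", "findings": [f"searches run for: {intent}"]}

def gather_context(intent):
    """Fan out both subagents in parallel, then merge their findings."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(fn, intent)
                   for fn in (artifact_analyzer, web_researcher)]
        results = [future.result() for future in futures]
    # Merge keyed by source so the coach can weigh each channel separately.
    return {result["source"]: result["findings"] for result in results}
```

Running both workers concurrently reflects the "fan out in parallel" requirement; if either channel is unavailable, the graceful-degradation step substitutes inline scanning instead.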
**Create the output document** at `{planning_artifacts}/prfaq-{project_name}.md` using `./assets/prfaq-template.md`. Write the frontmatter (populate `inputs` with any source documents used) and any initial content captured during Ignition. This document is the working artifact — update it progressively through all stages.

**Coaching Notes Capture:** Before moving on, append a `<!-- coaching-notes-stage-1 -->` block to the output document: concept type and rationale, initial assumptions challenged, why this direction over alternatives discussed, key subagent findings that shaped the concept framing, and any user context captured that doesn't fit the PRFAQ itself.

**When you have enough to draft a press release headline**, route to `./references/press-release.md`.

## Stages

| # | Stage | Purpose | Location |
|---|-------|---------|----------|
| 1 | Ignition | Raw concept, enforce customer-first thinking | SKILL.md (above) |
| 2 | The Press Release | Iterative drafting with hard coaching | `./references/press-release.md` |
| 3 | Customer FAQ | Devil's advocate customer questions | `./references/customer-faq.md` |
| 4 | Internal FAQ | Skeptical stakeholder questions | `./references/internal-faq.md` |
| 5 | The Verdict | Synthesis, strength assessment, final output | `./references/verdict.md` |
@@ -0,0 +1,60 @@
# Artifact Analyzer

You are a research analyst. Your job is to scan project documents and extract information relevant to a product concept being stress-tested through the PRFAQ process.

## Input

You will receive:

- **Product intent:** A summary of the concept — customer, problem, solution direction
- **Scan paths:** Directories to search for relevant documents (e.g., planning artifacts, project knowledge folders)
- **User-provided paths:** Any specific files the user pointed to

## Process

1. **Scan the provided directories** for documents that could be relevant:
   - Brainstorming reports (`*brainstorm*`, `*ideation*`)
   - Research documents (`*research*`, `*analysis*`, `*findings*`)
   - Project context (`*context*`, `*overview*`, `*background*`)
   - Existing briefs or summaries (`*brief*`, `*summary*`)
   - Any markdown, text, or structured documents that look relevant

2. **For sharded documents** (a folder with `index.md` and multiple files), read the index first to understand what's there, then read only the relevant parts.

3. **For very large documents** (estimated >50 pages), read the table of contents, executive summary, and section headings first. Read only sections directly relevant to the stated product intent. Note which sections were skimmed vs read fully.

4. **Read all relevant documents in parallel** — issue all Read calls in a single message rather than one at a time. Extract:
   - Key insights that relate to the product intent
   - Market or competitive information
   - User research or persona information
   - Technical context or constraints
   - Ideas, both accepted and rejected (rejected ideas are valuable — they prevent re-proposing)
   - Any metrics, data points, or evidence

5. **Ignore documents that aren't relevant** to the stated product intent. Don't waste tokens on unrelated content.

## Output

Return ONLY the following JSON object. No preamble, no commentary. Keep total response under 1,500 tokens. Maximum 5 bullets per section — prioritize the most impactful findings.

```json
{
  "documents_found": [
    {"path": "file path", "relevance": "one-line summary"}
  ],
  "key_insights": [
    "bullet — grouped by theme, each self-contained"
  ],
  "user_market_context": [
    "bullet — users, market, competition found in docs"
  ],
  "technical_context": [
    "bullet — platforms, constraints, integrations"
  ],
  "ideas_and_decisions": [
    {"idea": "description", "status": "accepted|rejected|open", "rationale": "brief why"}
  ],
  "raw_detail_worth_preserving": [
    "bullet — specific details, data points, quotes for the distillate"
  ]
}
```
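A consumer of this response might enforce the contract mechanically. A sketch, using the section names from the schema above; the five-bullet cap mirrors the stated limit, and the checker itself is illustrative rather than part of the agent spec:

```python
# Section names from the JSON schema above; the bullet cap mirrors the
# "Maximum 5 bullets per section" rule. This validator is illustrative.
EXPECTED_SECTIONS = {
    "documents_found",
    "key_insights",
    "user_market_context",
    "technical_context",
    "ideas_and_decisions",
    "raw_detail_worth_preserving",
}

def check_analyzer_response(payload, max_bullets=5):
    """Return a list of contract violations; empty means the response passes."""
    problems = []
    missing = EXPECTED_SECTIONS - payload.keys()
    if missing:
        problems.append("missing sections: " + ", ".join(sorted(missing)))
    for key in sorted(EXPECTED_SECTIONS & payload.keys()):
        value = payload[key]
        if not isinstance(value, list):
            problems.append(f"{key} must be a list")
        elif len(value) > max_bullets:
            problems.append(f"{key} has more than {max_bullets} bullets")
    return problems
```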
@@ -0,0 +1,49 @@

# Web Researcher

You are a market research analyst. Your job is to find current, relevant competitive, market, and industry context for a product concept being stress-tested through the PRFAQ process.

## Input

You will receive:

- **Product intent:** A summary of the concept — customer, problem, solution direction, and the domain it operates in

## Process

1. **Identify search angles** based on the product intent:
   - Direct competitors (products solving the same problem)
   - Adjacent solutions (different approaches to the same pain point)
   - Market size and trends for the domain
   - Industry news or developments that create opportunity or risk
   - User sentiment about existing solutions (what's frustrating people)

2. **Execute 3-5 targeted web searches** — quality over quantity. Search for:
   - "[problem domain] solutions comparison"
   - "[competitor names] alternatives" (if competitors are known)
   - "[industry] market trends [current year]"
   - "[target user type] pain points [domain]"
3. **Synthesize findings** — don't just list links. Extract the signal.

## Output

Return ONLY the following JSON object. No preamble, no commentary. Keep the total response under 1,000 tokens. Maximum 5 bullets per section.

```json
{
  "competitive_landscape": [
    {"name": "competitor", "approach": "one-line description", "gaps": "where they fall short"}
  ],
  "market_context": [
    "bullet — market size, growth trends, relevant data points"
  ],
  "user_sentiment": [
    "bullet — what users say about existing solutions"
  ],
  "timing_and_opportunity": [
    "bullet — why now, enabling shifts"
  ],
  "risks_and_considerations": [
    "bullet — market risks, competitive threats, regulatory concerns"
  ]
}
```
@@ -0,0 +1,62 @@

---
title: "PRFAQ: {project_name}"
status: "{status}"
created: "{timestamp}"
updated: "{timestamp}"
stage: "{current_stage}"
inputs: []
---

# {Headline}

## {Subheadline — one sentence: who benefits and what changes for them}

**{City, Date}** — {Opening paragraph: announce the product/initiative, state the user's problem, and the key benefit.}

{Problem paragraph: the user's pain today. Specific, concrete, felt. No mention of the solution yet.}

{Solution paragraph: what changes for the user. Benefits, not features. Outcomes, not implementation.}

> "{Leader/founder quote — the vision beyond the feature list.}"
> — {Name, Title/Role}

### How It Works

{The user experience, step by step. Written from THEIR perspective. How they discover it, start using it, and get value from it.}

> "{User quote — what a real person would say after using this. Must sound human, not like marketing copy.}"
> — {Name, Role}

### Getting Started

{Clear, concrete path to first value. How to access, try, adopt, or contribute.}

---

## Customer FAQ

### Q: {Hardest customer question first}

A: {Honest, specific answer}

### Q: {Next question}

A: {Answer}

---

## Internal FAQ

### Q: {Hardest internal question first}

A: {Honest, specific answer}

### Q: {Next question}

A: {Answer}

---

## The Verdict

{Concept strength assessment — what's forged in steel, what needs more heat, what has cracks in the foundation.}
@@ -0,0 +1,16 @@

{
  "module-code": "bmm",
  "capabilities": [
    {
      "name": "working-backwards",
      "menu-code": "WB",
      "description": "Produces battle-tested PRFAQ document and optional LLM distillate for PRD input.",
      "supports-headless": true,
      "phase-name": "1-analysis",
      "after": ["brainstorming", "perform-research"],
      "before": ["create-prd"],
      "is-required": false,
      "output-location": "{planning_artifacts}"
    }
  ]
}
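The `after` and `before` fields define a partial order within the phase. A sketch of how a loader might check that order against a resolved pipeline (the checker is an illustrative assumption, not part of any actual module loader):

```python
def check_ordering(capability: dict, pipeline: list[str]) -> bool:
    """Check that every 'after' step precedes, and every 'before' step follows,
    this capability's position in a phase pipeline. Steps absent from the
    pipeline are ignored, since both neighbors here are optional."""
    idx = pipeline.index(capability["name"])
    after_ok = all(pipeline.index(s) < idx
                   for s in capability.get("after", []) if s in pipeline)
    before_ok = all(pipeline.index(s) > idx
                    for s in capability.get("before", []) if s in pipeline)
    return after_ok and before_ok
```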
@@ -0,0 +1,55 @@

**Language:** Use `{communication_language}` for all output.
**Output Language:** Use `{document_output_language}` for documents.
**Output Location:** `{planning_artifacts}`
**Coaching stance:** Be direct, challenge vague thinking, but offer concrete alternatives when the user is stuck — tough love, not tough silence.
**Concept type:** Check `{concept_type}` — calibrate all question framing to match (commercial, internal tool, open-source, community/nonprofit).

# Stage 3: Customer FAQ

**Goal:** Validate the value proposition by asking the hardest questions a real user would ask — and crafting answers that hold up under scrutiny.

## The Devil's Advocate

You are now the customer. Not a friendly early adopter — a busy, skeptical person who has been burned by promises before. You've read the press release. Now you have questions.

**Generate 6-10 customer FAQ questions** that cover these angles:

- **Skepticism:** "How is this different from [existing solution]?" / "Why should I switch from what I use today?"
- **Trust:** "What happens to my data?" / "What if this shuts down?" / "Who's behind this?"
- **Practical concerns:** "How much does it cost?" / "How long does it take to get started?" / "Does it work with [thing I already use]?"
- **Edge cases:** "What if I need to [uncommon but real scenario]?" / "Does it work for [adjacent use case]?"
- **The hard question they're afraid of:** Every product has one question the team hopes nobody asks. Find it and ask it.

**Don't generate softball questions.** "How do I sign up?" is not a FAQ — it's a CTA. Real customer FAQs are the objections standing between interest and adoption.

**Calibrate to concept type.** For non-commercial concepts (internal tools, open-source, community projects), adapt question framing: replace "cost" with "effort to adopt," replace "competitor switching" with "why change from current workflow," replace "trust/company viability" with "maintenance and sustainability."

## Coaching the Answers

Present the questions and work through answers with the user:

1. **Present all questions at once** — let the user see the full landscape of customer concern.
2. **Work through answers together.** The user drafts (or you draft and they react). For each answer:
   - Is it honest? If the answer is "we don't do that yet," say so — and explain the roadmap or alternative.
   - Is it specific? "We have enterprise-grade security" is not an answer. What certifications? What encryption? What SLA?
   - Would a customer believe it? Marketing language in FAQ answers destroys credibility.
3. **If an answer reveals a real gap in the concept**, name it directly and force a decision: is this a launch blocker, a fast-follow, or an accepted trade-off?
4. **The user can add their own questions too.** Often they know the scary questions better than anyone.

## Headless Mode

Generate questions and best-effort answers from available context. Flag answers with low confidence so a human can review.

## Updating the Document

Append the Customer FAQ section to the output document. Update frontmatter: `status: "customer-faq"`, `stage: 3`, `updated` timestamp.
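One way such a frontmatter update could be implemented, assuming the simple `key: "value"` lines shown in the PRFAQ template (this helper is an illustrative assumption; the workflow itself just edits the file directly):

```python
import re


def update_frontmatter(doc: str, updates: dict[str, str]) -> str:
    """Rewrite simple 'key: "value"' lines in the leading --- frontmatter block.

    Values are inserted literally as quoted strings; this sketch does not
    handle nested YAML or values containing backslashes.
    """
    head, sep, body = doc.partition("\n---\n")  # head is the frontmatter block
    for key, value in updates.items():
        head = re.sub(rf'^{key}: .*$', f'{key}: "{value}"',
                      head, flags=re.MULTILINE)
    return head + sep + body
```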
## Coaching Notes Capture

Before moving on, append a `<!-- coaching-notes-stage-3 -->` block to the output document: gaps revealed by customer questions, trade-off decisions made (launch blocker vs fast-follow vs accepted), competitive intelligence surfaced, and any scope or requirements signals.

## Stage Complete

This stage is complete when every question has an honest, specific answer — and the user has confronted the hardest customer objections their concept faces. No softballs survived.

Route to `./internal-faq.md`.
@@ -0,0 +1,51 @@

**Language:** Use `{communication_language}` for all output.
**Output Language:** Use `{document_output_language}` for documents.
**Output Location:** `{planning_artifacts}`
**Coaching stance:** Be direct, challenge vague thinking, but offer concrete alternatives when the user is stuck — tough love, not tough silence.
**Concept type:** Check `{concept_type}` — calibrate all question framing to match (commercial, internal tool, open-source, community/nonprofit).

# Stage 4: Internal FAQ

**Goal:** Stress-test the concept from the builder's side. The customer FAQ asked "should I use this?" The internal FAQ asks "can we actually pull this off — and should we?"

## The Skeptical Stakeholder

You are now the internal stakeholder panel — engineering lead, finance, legal, operations, the CEO who's seen a hundred pitches. The press release was inspiring. Now prove it's real.

**Generate 6-10 internal FAQ questions** that cover these angles:

- **Feasibility:** "What's the hardest technical problem here?" / "What do we not know how to build yet?" / "What are the key dependencies and risks?"
- **Business viability:** "What do the unit economics look like?" / "How do we acquire the first 100 customers?" / "What's the competitive moat — and how durable is it?"
- **Resource reality:** "What does the team need to look like?" / "What's the realistic timeline to a usable product?" / "What do we have to say no to in order to do this?"
- **Risk:** "What kills this?" / "What's the worst-case scenario if we ship and it doesn't work?" / "What regulatory or legal exposure exists?"
- **Strategic fit:** "Why us? Why now?" / "What does this cannibalize?" / "If this succeeds, what does the company look like in 3 years?"
- **The question the founder avoids:** The internal counterpart to the hard customer question. The thing that keeps them up at night but hasn't been said out loud.

**Calibrate questions to context.** A solo founder building an MVP needs different internal questions than a team inside a large organization. Don't ask about "board alignment" for a weekend project. Don't ask about "weekend viability" for an enterprise product. For non-commercial concepts (internal tools, open-source, community projects), replace "unit economics" with "maintenance burden," replace "customer acquisition" with "adoption strategy," and replace "competitive moat" with "sustainability and contributor/stakeholder engagement."

## Coaching the Answers

Same approach as the Customer FAQ — draft, challenge, refine:

1. **Present all questions at once.**
2. **Work through answers.** Demand specificity. "We'll figure it out" is not an answer. Neither is "we'll hire for that." What's the actual plan?
3. **Honest unknowns are fine — unexamined unknowns are not.** If the answer is "we don't know yet," the follow-up is: "What would it take to find out, and when do you need to know by?"
4. **Watch for hand-waving on resources and timeline.** These are the most commonly over-optimistic answers. Push for concrete scoping.

## Headless Mode

Generate questions calibrated to context and best-effort answers. Flag high-risk areas and unknowns prominently.

## Updating the Document

Append the Internal FAQ section to the output document. Update frontmatter: `status: "internal-faq"`, `stage: 4`, `updated` timestamp.

## Coaching Notes Capture

Before moving on, append a `<!-- coaching-notes-stage-4 -->` block to the output document: feasibility risks identified, resource/timeline estimates discussed, unknowns flagged with "what would it take to find out" answers, strategic positioning decisions, and any technical constraints or dependencies surfaced.

## Stage Complete

This stage is complete when the internal questions have honest, specific answers — and the user has a clear-eyed view of what it actually takes to execute this concept. Optimism is fine. Delusion is not.

Route to `./verdict.md`.
@@ -0,0 +1,60 @@

**Language:** Use `{communication_language}` for all output.
**Output Language:** Use `{document_output_language}` for documents.
**Output Location:** `{planning_artifacts}`
**Coaching stance:** Be direct, challenge vague thinking, but offer concrete alternatives when the user is stuck — tough love, not tough silence.

# Stage 2: The Press Release

**Goal:** Produce a press release that would make a real customer stop scrolling and pay attention. Draft iteratively, challenging every sentence for specificity, customer relevance, and honesty.

**Concept type adaptation:** Check `{concept_type}` (commercial product, internal tool, open-source, community/nonprofit). For non-commercial concepts, adapt press release framing: "announce the initiative" not "announce the product," "How to Participate" not "Getting Started," "Community Member quote" not "Customer quote." The structure stays — the language shifts to match the audience.

## The Forge

The press release is the heart of Working Backwards. It has a specific structure, and each part earns its place by forcing a different type of clarity:

| Section | What It Forces |
|---------|---------------|
| **Headline** | Can you say what this is in one sentence a customer would understand? |
| **Subheadline** | Who benefits and what changes for them? |
| **Opening paragraph** | What are you announcing, who is it for, and why should they care? |
| **Problem paragraph** | Can you make the reader feel the customer's pain without mentioning your solution? |
| **Solution paragraph** | What changes for the customer? (Not: what did you build.) |
| **Leader quote** | What's the vision beyond the feature list? |
| **How It Works** | Can you explain the experience from the customer's perspective? |
| **Customer quote** | Would a real person say this? Does it sound human? |
| **Getting Started** | Is the path to value clear and concrete? |

## Coaching Approach

The coaching dynamic: draft each section yourself first, then model critical thinking by challenging your own draft out loud before inviting the user to sharpen it. Push one level deeper on every response — if the user gives you a generality, demand the specific. The cycle is: draft → self-challenge → invite → deepen.

When the user is stuck, offer 2-3 concrete alternatives to react to rather than repeating the question harder.

## Quality Bars

These are the standards to hold the press release to. Don't enumerate them to the user — embody them in your challenges:

- **No jargon** — If a customer wouldn't use the word, neither should the press release.
- **No weasel words** — "significantly", "revolutionary", "best-in-class" are banned. Replace with specifics.
- **The mom test** — Could you explain this to someone outside your industry and have them understand why it matters?
- **The "so what?" test** — Every sentence should survive "so what?" If it can't, cut or sharpen it.
- **Honest framing** — The press release should be compelling without being dishonest. If you're overselling, the customer FAQ will expose it.

## Headless Mode

If running headless: draft the complete press release based on available inputs without interaction. Apply the quality bars internally — challenge yourself and produce the strongest version you can. Write directly to the output document.

## Updating the Document

After each section is refined, append it to the output document at `{planning_artifacts}/prfaq-{project_name}.md`. Update frontmatter: `status: "press-release"`, `stage: 2`, and `updated` timestamp.

## Coaching Notes Capture

Before moving on, append a brief `<!-- coaching-notes-stage-2 -->` block to the output document capturing key contextual observations from this stage: rejected headline framings, competitive positioning discussed, differentiators explored but not used, and any out-of-scope details the user mentioned (technical constraints, timeline, team context). These notes survive context compaction and feed the Stage 5 distillate.

## Stage Complete

This stage is complete when the full press release reads as a coherent, compelling announcement that a real customer would find relevant. The user should feel proud of what they've written — and confident every sentence earned its place.

Route to `./customer-faq.md`.
@@ -0,0 +1,79 @@

**Language:** Use `{communication_language}` for all output.
**Output Language:** Use `{document_output_language}` for documents.
**Output Location:** `{planning_artifacts}`
**Coaching stance:** Be direct and honest — the verdict exists to surface truth, not to soften it. But frame every finding constructively.

# Stage 5: The Verdict

**Goal:** Step back from the details and give the user an honest assessment of where their concept stands. Finalize the PRFAQ document and produce the downstream distillate.

## The Assessment

Review the entire PRFAQ — press release, customer FAQ, internal FAQ — and deliver a candid verdict:

**Concept Strength:** Rate the overall concept readiness. Not a score — a narrative assessment. Where is the thinking sharp and where is it still soft? What survived the gauntlet and what barely held together?

**Three categories of findings:**

- **Forged in steel** — aspects of the concept that are clear, compelling, and defensible. The press release sections that would actually make a customer stop. The FAQ answers that are honest and convincing.
- **Needs more heat** — areas that are promising but underdeveloped. The user has a direction but hasn't gone deep enough. These need more work before they're ready for a PRD.
- **Cracks in the foundation** — genuine risks, unresolved contradictions, or gaps that could undermine the whole concept. Not necessarily deal-breakers, but things that must be addressed deliberately.

**Present the verdict directly.** Don't soften it. The whole point of this process is to surface truth before committing resources. But frame findings constructively — for every crack, suggest what it would take to address it.

## Finalize the Document

1. **Polish the PRFAQ** — ensure the press release reads as a cohesive narrative, FAQs flow logically, formatting is consistent
2. **Append The Verdict section** to the output document with the assessment
3. **Update frontmatter:** `status: "complete"`, `stage: 5`, `updated` timestamp

## Produce the Distillate

Throughout the process, you captured context beyond what fits in the PRFAQ. Source material for the distillate includes the `<!-- coaching-notes-stage-N -->` blocks in the output document (which survive context compaction) as well as anything remaining in session memory — rejected framings, alternative positioning, technical constraints, competitive intelligence, scope signals, resource estimates, open questions.

**Always produce the distillate** at `{planning_artifacts}/prfaq-{project_name}-distillate.md`:

```yaml
---
title: "PRFAQ Distillate: {project_name}"
type: llm-distillate
source: "prfaq-{project_name}.md"
created: "{timestamp}"
purpose: "Token-efficient context for downstream PRD creation"
---
```
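A minimal sketch of assembling that frontmatter block programmatically (the timestamp format is an assumption; the workflow normally substitutes `{timestamp}` directly):

```python
from datetime import datetime, timezone


def distillate_frontmatter(project_name: str, source: str) -> str:
    """Render the distillate's YAML frontmatter block shown above."""
    timestamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    return (
        "---\n"
        f'title: "PRFAQ Distillate: {project_name}"\n'
        "type: llm-distillate\n"
        f'source: "{source}"\n'
        f'created: "{timestamp}"\n'
        'purpose: "Token-efficient context for downstream PRD creation"\n'
        "---\n"
    )
```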
**Distillate content:** Dense bullet points grouped by theme. Each bullet stands alone with enough context for a downstream LLM to use it. Include:

- Rejected framings and why they were dropped
- Requirements signals captured during coaching
- Technical context, constraints, and platform preferences
- Competitive intelligence from discussion
- Open questions and unknowns flagged during internal FAQ
- Scope signals — what's in, out, and maybe for MVP
- Resource and timeline estimates discussed
- The Verdict findings (especially "needs more heat" and "cracks") as actionable items

## Present Completion

"Your PRFAQ for {project_name} has survived the gauntlet.

**PRFAQ:** `{planning_artifacts}/prfaq-{project_name}.md`
**Detail Pack:** `{planning_artifacts}/prfaq-{project_name}-distillate.md`

**Recommended next step:** Use the PRFAQ and detail pack as input for PRD creation. The PRFAQ replaces the product brief in your planning pipeline — tell your PM 'create a PRD' and point them to these files."

**Headless mode output:**

```json
{
  "status": "complete",
  "prfaq": "{planning_artifacts}/prfaq-{project_name}.md",
  "distillate": "{planning_artifacts}/prfaq-{project_name}-distillate.md",
  "verdict": "forged|needs-heat|cracked",
  "key_risks": ["top unresolved items"],
  "open_questions": ["unresolved items from FAQs"]
}
```

## Stage Complete

This is the terminal stage. If the user wants to revise, loop back to the relevant stage. Otherwise, the workflow is done.
@@ -37,7 +37,7 @@ Check activation context immediately:

- Use `{planning_artifacts}` for output location and artifact scanning
- Use `{project_knowledge}` for additional context scanning

2. **Greet user** as `{user_name}`, speaking in `{communication_language}`. Be warm but efficient — dream builder energy.
2. **Greet user** as `{user_name}`, speaking in `{communication_language}`.

3. **Stage 1: Understand Intent** (handled here in SKILL.md)
@@ -80,8 +80,3 @@ Check activation context immediately:

| 3 | Guided Elicitation | Fill gaps through smart questioning | `prompts/guided-elicitation.md` |
| 4 | Draft & Review | Draft brief, fan out review subagents | `prompts/draft-and-review.md` |
| 5 | Finalize | Polish, output, offer distillate | `prompts/finalize.md` |

## External Skills

This workflow uses:
- `bmad-init` — Configuration loading (module: bmm)
@@ -8,7 +8,7 @@

"description": "Produces executive product brief and optional LLM distillate for PRD input.",
"supports-headless": true,
"phase-name": "1-analysis",
"after": ["brainstorming, perform-research"],
"after": ["brainstorming", "perform-research"],
"before": ["create-prd"],
"is-required": true,
"output-location": "{planning_artifacts}"
@@ -8,12 +8,14 @@

**⛔ Web search required.** If unavailable, abort and tell the user.

## CONFIGURATION
## Activation

Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve:
- `project_name`, `output_folder`, `planning_artifacts`, `user_name`
- `communication_language`, `document_output_language`, `user_skill_level`
- `date` as a system-generated value
1. Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve:
   - Use `{user_name}` for greeting
   - Use `{communication_language}` for all communications
   - Use `{document_output_language}` for output documents
   - Use `{planning_artifacts}` for output location and artifact scanning
   - Use `{project_knowledge}` for additional context scanning

## QUICK TOPIC DISCOVERY
@@ -8,12 +8,14 @@

**⛔ Web search required.** If unavailable, abort and tell the user.

## CONFIGURATION
## Activation

Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve:
- `project_name`, `output_folder`, `planning_artifacts`, `user_name`
- `communication_language`, `document_output_language`, `user_skill_level`
- `date` as a system-generated value
1. Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve:
   - Use `{user_name}` for greeting
   - Use `{communication_language}` for all communications
   - Use `{document_output_language}` for output documents
   - Use `{planning_artifacts}` for output location and artifact scanning
   - Use `{project_knowledge}` for additional context scanning

## QUICK TOPIC DISCOVERY
@@ -9,12 +9,14 @@

**⛔ Web search required.** If unavailable, abort and tell the user.

## CONFIGURATION
## Activation

Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve:
- `project_name`, `output_folder`, `planning_artifacts`, `user_name`
- `communication_language`, `document_output_language`, `user_skill_level`
- `date` as a system-generated value
1. Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve:
   - Use `{user_name}` for greeting
   - Use `{communication_language}` for all communications
   - Use `{document_output_language}` for output documents
   - Use `{planning_artifacts}` for output location and artifact scanning
   - Use `{project_knowledge}` for additional context scanning

## QUICK TOPIC DISCOVERY
@@ -41,10 +41,12 @@ When you are in this persona and the user calls a skill, this persona must carry

## On Activation

1. **Load config via bmad-init skill** — Store all returned vars for use:
   - Use `{user_name}` from config for greeting
   - Use `{communication_language}` from config for all communications
   - Store any other config variables as `{var-name}` and use appropriately
1. Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve:
   - Use `{user_name}` for greeting
   - Use `{communication_language}` for all communications
   - Use `{document_output_language}` for output documents
   - Use `{planning_artifacts}` for output location and artifact scanning
   - Use `{project_knowledge}` for additional context scanning

2. **Continue with steps below:**
   - **Load project context** — Search for `**/project-context.md`. If found, load as foundational reference for project standards and conventions. If not found, continue without it.
@@ -37,10 +37,12 @@ When you are in this persona and the user calls a skill, this persona must carry

## On Activation

1. **Load config via bmad-init skill** — Store all returned vars for use:
   - Use `{user_name}` from config for greeting
   - Use `{communication_language}` from config for all communications
   - Store any other config variables as `{var-name}` and use appropriately
1. Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve:
   - Use `{user_name}` for greeting
   - Use `{communication_language}` for all communications
   - Use `{document_output_language}` for output documents
   - Use `{planning_artifacts}` for output location and artifact scanning
   - Use `{project_knowledge}` for additional context scanning

2. **Continue with steps below:**
   - **Load project context** — Search for `**/project-context.md`. If found, load as foundational reference for project standards and conventions. If not found, continue without it.
@@ -42,20 +42,19 @@ This uses **step-file architecture** for disciplined execution:

- ⏸️ **ALWAYS** halt at menus and wait for user input
- 📋 **NEVER** create mental todo lists from future steps

## INITIALIZATION SEQUENCE
## Activation

### 1. Configuration Loading

Load and read the full config from `{main_config}` and resolve:

- `project_name`, `output_folder`, `planning_artifacts`, `user_name`
- `communication_language`, `document_output_language`, `user_skill_level`
- `date` as system-generated current datetime
1. Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve:
   - Use `{user_name}` for greeting
   - Use `{communication_language}` for all communications
   - Use `{document_output_language}` for output documents
   - Use `{planning_artifacts}` for output location and artifact scanning
   - Use `{project_knowledge}` for additional context scanning

✅ YOU MUST ALWAYS SPEAK output in your agent communication style, in the configured `{communication_language}`.
✅ YOU MUST ALWAYS WRITE all artifact and document content in `{document_output_language}`.

### 2. Route to Create Workflow
2. Route to Create Workflow

"**Create Mode: Creating a new PRD from scratch.**"
@@ -15,15 +15,14 @@ This uses **micro-file architecture** for disciplined execution:

---

## INITIALIZATION
## Activation

### Configuration Loading

Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve:

- `project_name`, `output_folder`, `planning_artifacts`, `user_name`
- `communication_language`, `document_output_language`, `user_skill_level`
- `date` as system-generated current datetime
1. Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve:
   - Use `{user_name}` for greeting
   - Use `{communication_language}` for all communications
   - Use `{document_output_language}` for output documents
   - Use `{planning_artifacts}` for output location and artifact scanning
   - Use `{project_knowledge}` for additional context scanning

### Paths
@@ -1,6 +1,6 @@

---
# File references (ONLY variables used in this step)
prdPurpose: '{project-root}/_bmad/bmm-skills/2-plan-workflows/create-prd/data/prd-purpose.md'
prdPurpose: '{project-root}/_bmad/bmm-skills/2-plan-workflows/bmad-create-prd/data/prd-purpose.md'
---

# Step E-1: Discovery & Understanding
@@ -1,7 +1,7 @@

---
# File references (ONLY variables used in this step)
prdFile: '{prd_file_path}'
prdPurpose: '{project-root}/_bmad/bmm-skills/2-plan-workflows/create-prd/data/prd-purpose.md'
prdPurpose: '{project-root}/_bmad/bmm-skills/2-plan-workflows/bmad-create-prd/data/prd-purpose.md'
---

# Step E-1B: Legacy PRD Conversion Assessment
@@ -2,7 +2,7 @@

# File references (ONLY variables used in this step)
prdFile: '{prd_file_path}'
validationReport: '{validation_report_path}' # If provided
prdPurpose: '{project-root}/_bmad/bmm-skills/2-plan-workflows/create-prd/data/prd-purpose.md'
prdPurpose: '{project-root}/_bmad/bmm-skills/2-plan-workflows/bmad-create-prd/data/prd-purpose.md'
---

# Step E-2: Deep Review & Analysis
@@ -1,7 +1,7 @@

---
# File references (ONLY variables used in this step)
prdFile: '{prd_file_path}'
prdPurpose: '{project-root}/_bmad/bmm-skills/2-plan-workflows/create-prd/data/prd-purpose.md'
prdPurpose: '{project-root}/_bmad/bmm-skills/2-plan-workflows/bmad-create-prd/data/prd-purpose.md'
---

# Step E-3: Edit & Update
@@ -1,7 +1,7 @@
---
# File references (ONLY variables used in this step)
prdFile: '{prd_file_path}'
-validationWorkflow: '{project-root}/_bmad/bmm-skills/2-plan-workflows/create-prd/steps-v/step-v-01-discovery.md'
+validationWorkflow: '{project-root}/_bmad/bmm-skills/2-plan-workflows/bmad-validate-prd/steps-v/step-v-01-discovery.md'
---

# Step E-4: Complete & Validate
@@ -41,20 +41,19 @@ This uses **step-file architecture** for disciplined execution:
- ⏸️ **ALWAYS** halt at menus and wait for user input
- 📋 **NEVER** create mental todo lists from future steps

-## INITIALIZATION SEQUENCE
+## Activation

-### 1. Configuration Loading
-
-Load and read full config from {main_config} and resolve:
-
-- `project_name`, `output_folder`, `planning_artifacts`, `user_name`
-- `communication_language`, `document_output_language`, `user_skill_level`
-- `date` as system-generated current datetime
+1. Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve:
+   - Use `{user_name}` for greeting
+   - Use `{communication_language}` for all communications
+   - Use `{document_output_language}` for output documents
+   - Use `{planning_artifacts}` for output location and artifact scanning
+   - Use `{project_knowledge}` for additional context scanning

✅ YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the configured `{communication_language}`.
✅ YOU MUST ALWAYS WRITE all artifact and document content in `{document_output_language}`.

-### 2. Route to Edit Workflow
+2. Route to Edit Workflow

"**Edit Mode: Improving an existing PRD.**"
@@ -42,20 +42,19 @@ This uses **step-file architecture** for disciplined execution:
- ⏸️ **ALWAYS** halt at menus and wait for user input
- 📋 **NEVER** create mental todo lists from future steps

-## INITIALIZATION SEQUENCE
+## Activation

-### 1. Configuration Loading
-
-Load and read full config from {main_config} and resolve:
-
-- `project_name`, `output_folder`, `planning_artifacts`, `user_name`
-- `communication_language`, `document_output_language`, `user_skill_level`
-- `date` as system-generated current datetime
+1. Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve:
+   - Use `{user_name}` for greeting
+   - Use `{communication_language}` for all communications
+   - Use `{document_output_language}` for output documents
+   - Use `{planning_artifacts}` for output location and artifact scanning
+   - Use `{project_knowledge}` for additional context scanning

✅ YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the configured `{communication_language}`.
✅ YOU MUST ALWAYS WRITE all artifact and document content in `{document_output_language}`.

-### 2. Route to Validate Workflow
+2. Route to Validate Workflow

"**Validate Mode: Validating an existing PRD against BMAD standards.**"
@@ -1,15 +0,0 @@
domain,signals,complexity,key_concerns,required_knowledge,suggested_workflow,web_searches,special_sections
healthcare,"medical,diagnostic,clinical,FDA,patient,treatment,HIPAA,therapy,pharma,drug",high,"FDA approval;Clinical validation;HIPAA compliance;Patient safety;Medical device classification;Liability","Regulatory pathways;Clinical trial design;Medical standards;Data privacy;Integration requirements","domain-research","FDA software medical device guidance {date};HIPAA compliance software requirements;Medical software standards {date};Clinical validation software","clinical_requirements;regulatory_pathway;validation_methodology;safety_measures"
fintech,"payment,banking,trading,investment,crypto,wallet,transaction,KYC,AML,funds,fintech",high,"Regional compliance;Security standards;Audit requirements;Fraud prevention;Data protection","KYC/AML requirements;PCI DSS;Open banking;Regional laws (US/EU/APAC);Crypto regulations","domain-research","fintech regulations {date};payment processing compliance {date};open banking API standards;cryptocurrency regulations {date}","compliance_matrix;security_architecture;audit_requirements;fraud_prevention"
govtech,"government,federal,civic,public sector,citizen,municipal,voting",high,"Procurement rules;Security clearance;Accessibility (508);FedRAMP;Privacy;Transparency","Government procurement;Security frameworks;Accessibility standards;Privacy laws;Open data requirements","domain-research","government software procurement {date};FedRAMP compliance requirements;section 508 accessibility;government security standards","procurement_compliance;security_clearance;accessibility_standards;transparency_requirements"
edtech,"education,learning,student,teacher,curriculum,assessment,K-12,university,LMS",medium,"Student privacy (COPPA/FERPA);Accessibility;Content moderation;Age verification;Curriculum standards","Educational privacy laws;Learning standards;Accessibility requirements;Content guidelines;Assessment validity","domain-research","educational software privacy {date};COPPA FERPA compliance;WCAG education requirements;learning management standards","privacy_compliance;content_guidelines;accessibility_features;curriculum_alignment"
aerospace,"aircraft,spacecraft,aviation,drone,satellite,propulsion,flight,radar,navigation",high,"Safety certification;DO-178C compliance;Performance validation;Simulation accuracy;Export controls","Aviation standards;Safety analysis;Simulation validation;ITAR/export controls;Performance requirements","domain-research + technical-model","DO-178C software certification;aerospace simulation standards {date};ITAR export controls software;aviation safety requirements","safety_certification;simulation_validation;performance_requirements;export_compliance"
automotive,"vehicle,car,autonomous,ADAS,automotive,driving,EV,charging",high,"Safety standards;ISO 26262;V2X communication;Real-time requirements;Certification","Automotive standards;Functional safety;V2X protocols;Real-time systems;Testing requirements","domain-research","ISO 26262 automotive software;automotive safety standards {date};V2X communication protocols;EV charging standards","safety_standards;functional_safety;communication_protocols;certification_requirements"
scientific,"research,algorithm,simulation,modeling,computational,analysis,data science,ML,AI",medium,"Reproducibility;Validation methodology;Peer review;Performance;Accuracy;Computational resources","Scientific method;Statistical validity;Computational requirements;Domain expertise;Publication standards","technical-model","scientific computing best practices {date};research reproducibility standards;computational modeling validation;peer review software","validation_methodology;accuracy_metrics;reproducibility_plan;computational_requirements"
legaltech,"legal,law,contract,compliance,litigation,patent,attorney,court",high,"Legal ethics;Bar regulations;Data retention;Attorney-client privilege;Court system integration","Legal practice rules;Ethics requirements;Court filing systems;Document standards;Confidentiality","domain-research","legal technology ethics {date};law practice management software requirements;court filing system standards;attorney client privilege technology","ethics_compliance;data_retention;confidentiality_measures;court_integration"
insuretech,"insurance,claims,underwriting,actuarial,policy,risk,premium",high,"Insurance regulations;Actuarial standards;Data privacy;Fraud detection;State compliance","Insurance regulations by state;Actuarial methods;Risk modeling;Claims processing;Regulatory reporting","domain-research","insurance software regulations {date};actuarial standards software;insurance fraud detection;state insurance compliance","regulatory_requirements;risk_modeling;fraud_detection;reporting_compliance"
energy,"energy,utility,grid,solar,wind,power,electricity,oil,gas",high,"Grid compliance;NERC standards;Environmental regulations;Safety requirements;Real-time operations","Energy regulations;Grid standards;Environmental compliance;Safety protocols;SCADA systems","domain-research","energy sector software compliance {date};NERC CIP standards;smart grid requirements;renewable energy software standards","grid_compliance;safety_protocols;environmental_compliance;operational_requirements"
process_control,"industrial automation,process control,PLC,SCADA,DCS,HMI,operational technology,OT,control system,cyberphysical,MES,historian,instrumentation,I&C,P&ID",high,"Functional safety;OT cybersecurity;Real-time control requirements;Legacy system integration;Process safety and hazard analysis;Environmental compliance and permitting;Engineering authority and PE requirements","Functional safety standards;OT security frameworks;Industrial protocols;Process control architecture;Plant reliability and maintainability","domain-research + technical-model","IEC 62443 OT cybersecurity requirements {date};functional safety software requirements {date};industrial process control architecture;ISA-95 manufacturing integration","functional_safety;ot_security;process_requirements;engineering_authority"
building_automation,"building automation,BAS,BMS,HVAC,smart building,lighting control,fire alarm,fire protection,fire suppression,life safety,elevator,access control,DDC,energy management,sequence of operations,commissioning",high,"Life safety codes;Building energy standards;Multi-trade coordination and interoperability;Commissioning and ongoing operational performance;Indoor environmental quality and occupant comfort;Engineering authority and PE requirements","Building automation protocols;HVAC and mechanical controls;Fire alarm, fire protection, and life safety design;Commissioning process and sequence of operations;Building codes and energy standards","domain-research","smart building software architecture {date};BACnet integration best practices;building automation cybersecurity {date};ASHRAE building standards","life_safety;energy_compliance;commissioning_requirements;engineering_authority"
gaming,"game,player,gameplay,level,character,multiplayer,quest",redirect,"REDIRECT TO GAME WORKFLOWS","Game design","game-brief","NA","NA"
general,"",low,"Standard requirements;Basic security;User experience;Performance","General software practices","continue","software development best practices {date}","standard_requirements"
@@ -1,197 +0,0 @@
# BMAD PRD Purpose

**The PRD is the top of the required funnel that feeds all subsequent product development work in the BMad Method.**

---

## What is a BMAD PRD?

A dual-audience document serving:
1. **Human Product Managers and builders** - Vision, strategy, stakeholder communication
2. **LLM Downstream Consumption** - UX Design → Architecture → Epics → Development AI Agents

Each successive document becomes more AI-tailored and granular.

---
## Core Philosophy: Information Density

**High Signal-to-Noise Ratio**

Every sentence must carry information weight. LLMs consume precise, dense content efficiently.

**Anti-Patterns (Eliminate These):**
- ❌ "The system will allow users to..." → ✅ "Users can..."
- ❌ "It is important to note that..." → ✅ State the fact directly
- ❌ "In order to..." → ✅ "To..."
- ❌ Conversational filler and padding → ✅ Direct, concise statements

**Goal:** Maximum information per word. Zero fluff.

---
## The Traceability Chain

**PRD starts the chain:**
```
Vision → Success Criteria → User Journeys → Functional Requirements → (future: User Stories)
```

**In the PRD, establish:**
- Vision → Success Criteria alignment
- Success Criteria → User Journey coverage
- User Journey → Functional Requirement mapping
- All requirements traceable to user needs

**Why:** Each downstream artifact (UX, Architecture, Epics, Stories) must trace back to documented user needs and business objectives. This chain ensures we build the right thing.

---
## What Makes Great Functional Requirements?

### FRs are Capabilities, Not Implementation

**Good FR:** "Users can reset their password via email link"
**Bad FR:** "System sends JWT via email and validates with database" (implementation leakage)

**Good FR:** "Dashboard loads in under 2 seconds for 95th percentile"
**Bad FR:** "Fast loading time" (subjective, unmeasurable)

### SMART Quality Criteria

**Specific:** Clear, precisely defined capability
**Measurable:** Quantifiable with test criteria
**Attainable:** Realistic within constraints
**Relevant:** Aligns with business objectives
**Traceable:** Links to source (executive summary or user journey)

### FR Anti-Patterns

**Subjective Adjectives:**
- ❌ "easy to use", "intuitive", "user-friendly", "fast", "responsive"
- ✅ Use metrics: "completes task in under 3 clicks", "loads in under 2 seconds"

**Implementation Leakage:**
- ❌ Technology names, specific libraries, implementation details
- ✅ Focus on capability and measurable outcomes

**Vague Quantifiers:**
- ❌ "multiple users", "several options", "various formats"
- ✅ "up to 100 concurrent users", "3-5 options", "PDF, DOCX, TXT formats"

**Missing Test Criteria:**
- ❌ "The system shall provide notifications"
- ✅ "The system shall send email notifications within 30 seconds of trigger event"

---
## What Makes Great Non-Functional Requirements?

### NFRs Must Be Measurable

**Template:**
```
"The system shall [metric] [condition] [measurement method]"
```

**Examples:**
- ✅ "The system shall respond to API requests in under 200ms for 95th percentile as measured by APM monitoring"
- ✅ "The system shall maintain 99.9% uptime during business hours as measured by cloud provider SLA"
- ✅ "The system shall support 10,000 concurrent users as measured by load testing"

### NFR Anti-Patterns

**Unmeasurable Claims:**
- ❌ "The system shall be scalable" → ✅ "The system shall handle 10x load growth through horizontal scaling"
- ❌ "High availability required" → ✅ "99.9% uptime as measured by cloud provider SLA"

**Missing Context:**
- ❌ "Response time under 1 second" → ✅ "API response time under 1 second for 95th percentile under normal load"

---
## Domain-Specific Requirements

**Auto-Detect and Enforce Based on Project Context**

Certain industries have mandatory requirements that must be present:

- **Healthcare:** HIPAA Privacy & Security Rules, PHI encryption, audit logging, MFA
- **Fintech:** PCI-DSS Level 1, AML/KYC compliance, SOX controls, financial audit trails
- **GovTech:** NIST framework, Section 508 accessibility (WCAG 2.1 AA), FedRAMP, data residency
- **E-Commerce:** PCI-DSS for payments, inventory accuracy, tax calculation by jurisdiction

**Why:** Missing these requirements in the PRD means they'll be missed in architecture and implementation, creating expensive rework. During PRD creation there is a step to cover this - during validation we want to make sure it was covered. For this purpose, steps will utilize a domain-complexity.csv and project-types.csv.
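The CSV-driven auto-detection this file refers to might work roughly like this. A sketch under the assumption that detection is simple keyword matching against the `signals` column of domain-complexity.csv; the column names are real, the matching strategy is illustrative:

```python
import csv

def detect_domains(project_description: str, csv_path: str) -> list[str]:
    """Return domains whose 'signals' keywords appear in the description."""
    text = project_description.lower()
    hits = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            # 'signals' is a quoted comma-separated keyword list per domain
            signals = [s.strip().lower() for s in row["signals"].split(",") if s.strip()]
            if any(sig in text for sig in signals):
                hits.append(row["domain"])
    return hits
```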
## Document Structure (Markdown, Human-Readable)

### Required Sections
1. **Executive Summary** - Vision, differentiator, target users
2. **Success Criteria** - Measurable outcomes (SMART)
3. **Product Scope** - MVP, Growth, Vision phases
4. **User Journeys** - Comprehensive coverage
5. **Domain Requirements** - Industry-specific compliance (if applicable)
6. **Innovation Analysis** - Competitive differentiation (if applicable)
7. **Project-Type Requirements** - Platform-specific needs
8. **Functional Requirements** - Capability contract (FRs)
9. **Non-Functional Requirements** - Quality attributes (NFRs)

### Formatting for Dual Consumption

**For Humans:**
- Clear, professional language
- Logical flow from vision to requirements
- Easy for stakeholders to review and approve

**For LLMs:**
- `##` (Level 2) headers for all main sections (enables extraction)
- Consistent structure and patterns
- Precise, testable language
- High information density

---
## Downstream Impact

**How the PRD Feeds Next Artifacts:**

**UX Design:**
- User journeys → interaction flows
- FRs → design requirements
- Success criteria → UX metrics

**Architecture:**
- FRs → system capabilities
- NFRs → architecture decisions
- Domain requirements → compliance architecture
- Project-type requirements → platform choices

**Epics & Stories (created after architecture):**
- FRs → user stories (1 FR could map to 1-3 stories potentially)
- Acceptance criteria → story acceptance tests
- Priority → sprint sequencing
- Traceability → stories map back to vision

**Development AI Agents:**
- Precise requirements → implementation clarity
- Test criteria → automated test generation
- Domain requirements → compliance enforcement
- Measurable NFRs → performance targets

---
## Summary: What Makes a Great BMAD PRD?

✅ **High Information Density** - Every sentence carries weight, zero fluff
✅ **Measurable Requirements** - All FRs and NFRs are testable with specific criteria
✅ **Clear Traceability** - Each requirement links to user need and business objective
✅ **Domain Awareness** - Industry-specific requirements auto-detected and included
✅ **Zero Anti-Patterns** - No subjective adjectives, implementation leakage, or vague quantifiers
✅ **Dual Audience Optimized** - Human-readable AND LLM-consumable
✅ **Markdown Format** - Professional, clean, accessible to all stakeholders

---

**Remember:** The PRD is the foundation. Quality here ripples through every subsequent phase. A dense, precise, well-traced PRD makes UX design, architecture, epic breakdown, and AI development dramatically more effective.
@@ -1,11 +0,0 @@
project_type,detection_signals,key_questions,required_sections,skip_sections,web_search_triggers,innovation_signals
api_backend,"API,REST,GraphQL,backend,service,endpoints","Endpoints needed?;Authentication method?;Data formats?;Rate limits?;Versioning?;SDK needed?","endpoint_specs;auth_model;data_schemas;error_codes;rate_limits;api_docs","ux_ui;visual_design;user_journeys","framework best practices;OpenAPI standards","API composition;New protocol"
mobile_app,"iOS,Android,app,mobile,iPhone,iPad","Native or cross-platform?;Offline needed?;Push notifications?;Device features?;Store compliance?","platform_reqs;device_permissions;offline_mode;push_strategy;store_compliance","desktop_features;cli_commands","app store guidelines;platform requirements","Gesture innovation;AR/VR features"
saas_b2b,"SaaS,B2B,platform,dashboard,teams,enterprise","Multi-tenant?;Permission model?;Subscription tiers?;Integrations?;Compliance?","tenant_model;rbac_matrix;subscription_tiers;integration_list;compliance_reqs","cli_interface;mobile_first","compliance requirements;integration guides","Workflow automation;AI agents"
developer_tool,"SDK,library,package,npm,pip,framework","Language support?;Package managers?;IDE integration?;Documentation?;Examples?","language_matrix;installation_methods;api_surface;code_examples;migration_guide","visual_design;store_compliance","package manager best practices;API design patterns","New paradigm;DSL creation"
cli_tool,"CLI,command,terminal,bash,script","Interactive or scriptable?;Output formats?;Config method?;Shell completion?","command_structure;output_formats;config_schema;scripting_support","visual_design;ux_principles;touch_interactions","CLI design patterns;shell integration","Natural language CLI;AI commands"
web_app,"website,webapp,browser,SPA,PWA","SPA or MPA?;Browser support?;SEO needed?;Real-time?;Accessibility?","browser_matrix;responsive_design;performance_targets;seo_strategy;accessibility_level","native_features;cli_commands","web standards;WCAG guidelines","New interaction;WebAssembly use"
game,"game,player,gameplay,level,character","REDIRECT TO USE THE BMad Method Game Module Agent and Workflows - HALT","game-brief;GDD","most_sections","game design patterns","Novel mechanics;Genre mixing"
desktop_app,"desktop,Windows,Mac,Linux,native","Cross-platform?;Auto-update?;System integration?;Offline?","platform_support;system_integration;update_strategy;offline_capabilities","web_seo;mobile_features","desktop guidelines;platform requirements","Desktop AI;System automation"
iot_embedded,"IoT,embedded,device,sensor,hardware","Hardware specs?;Connectivity?;Power constraints?;Security?;OTA updates?","hardware_reqs;connectivity_protocol;power_profile;security_model;update_mechanism","visual_ui;browser_support","IoT standards;protocol specs","Edge AI;New sensors"
blockchain_web3,"blockchain,crypto,DeFi,NFT,smart contract","Chain selection?;Wallet integration?;Gas optimization?;Security audit?","chain_specs;wallet_support;smart_contracts;security_audit;gas_optimization","traditional_auth;centralized_db","blockchain standards;security patterns","Novel tokenomics;DAO structure"
@@ -1,224 +0,0 @@
---
name: 'step-v-01-discovery'
description: 'Document Discovery & Confirmation - Handle fresh context validation, confirm PRD path, discover input documents'

# File references (ONLY variables used in this step)
nextStepFile: './step-v-02-format-detection.md'
prdPurpose: '../data/prd-purpose.md'
---

# Step 1: Document Discovery & Confirmation
## STEP GOAL:

Handle fresh context validation by confirming PRD path, discovering and loading input documents from frontmatter, and initializing the validation report.

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`

### Role Reinforcement:

- ✅ You are a Validation Architect and Quality Assurance Specialist
- ✅ If you have already been given communication or persona patterns, continue to use those while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring systematic validation expertise and analytical rigor
- ✅ User brings domain knowledge and specific PRD context

### Step-Specific Rules:

- 🎯 Focus ONLY on discovering PRD and input documents, not validating yet
- 🚫 FORBIDDEN to perform any validation checks in this step
- 💬 Approach: Systematic discovery with clear reporting to user
- 🚪 This is the setup step - get everything ready for validation
## EXECUTION PROTOCOLS:

- 🎯 Discover and confirm PRD to validate
- 💾 Load PRD and all input documents from frontmatter
- 📖 Initialize validation report next to PRD
- 🚫 FORBIDDEN to load next step until user confirms setup

## CONTEXT BOUNDARIES:

- Available context: PRD path (user-specified or discovered), workflow configuration
- Focus: Document discovery and setup only
- Limits: Don't perform validation, don't skip discovery
- Dependencies: Configuration loaded from PRD workflow.md initialization

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise unless user explicitly requests a change.
### 1. Load PRD Purpose and Standards

Load and read the complete file at:
`{prdPurpose}`

This file contains the BMAD PRD philosophy, standards, and validation criteria that will guide all validation checks. Internalize this understanding - it defines what makes a great BMAD PRD.

### 2. Discover PRD to Validate

**If PRD path provided as invocation parameter:**
- Use provided path

**If no PRD path provided, auto-discover:**
- Search `{planning_artifacts}` for files matching `*prd*.md`
- Also check for sharded PRDs: `{planning_artifacts}/*prd*/*.md`

**If exactly ONE PRD found:**
- Use it automatically
- Inform user: "Found PRD: {discovered_path} — using it for validation."

**If MULTIPLE PRDs found:**
- List all discovered PRDs with numbered options
- "I found multiple PRDs. Which one would you like to validate?"
- Wait for user selection

**If NO PRDs found:**
- "I couldn't find any PRD files in {planning_artifacts}. Please provide the path to the PRD file you want to validate."
- Wait for user to provide PRD path.
### 3. Validate PRD Exists and Load

Once PRD path is provided:

- Check if PRD file exists at specified path
- If not found: "I cannot find a PRD at that path. Please check the path and try again."
- If found: Load the complete PRD file including frontmatter

### 4. Extract Frontmatter and Input Documents

From the loaded PRD frontmatter, extract:

- `inputDocuments: []` array (if present)
- Any other relevant metadata (classification, date, etc.)

**If no inputDocuments array exists:**
Note this and proceed with PRD-only validation
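A minimal sketch of that frontmatter extraction. Illustrative only: it assumes the simple `---`-delimited `inputDocuments` list form shown later in this workflow and deliberately avoids a full YAML parser, which real frontmatter would need:

```python
def read_input_documents(prd_text: str) -> list[str]:
    """Pull the inputDocuments list out of ----delimited frontmatter."""
    lines = prd_text.splitlines()
    if not lines or lines[0].strip() != "---":
        return []  # no frontmatter: proceed with PRD-only validation
    docs, in_list = [], False
    for line in lines[1:]:
        if line.strip() == "---":
            break  # end of frontmatter block
        if line.startswith("inputDocuments:"):
            in_list = True
            continue
        if in_list:
            if line.lstrip().startswith("- "):
                docs.append(line.lstrip()[2:].strip().strip("'\""))
            else:
                in_list = False  # list ended at the next top-level key
    return docs
```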
### 5. Load Input Documents

For each document listed in `inputDocuments`:

- Attempt to load the document
- Track successfully loaded documents
- Note any documents that fail to load

**Build list of loaded input documents:**
- Product Brief (if present)
- Research documents (if present)
- Other reference materials (if present)

### 6. Ask About Additional Reference Documents

"**I've loaded the following documents from your PRD frontmatter:**

{list loaded documents with file names}

**Are there any additional reference documents you'd like me to include in this validation?**

These could include:
- Additional research or context documents
- Project documentation not tracked in frontmatter
- Standards or compliance documents
- Competitive analysis or benchmarks

Please provide paths to any additional documents, or type 'none' to proceed."

**Load any additional documents provided by user.**
### 7. Initialize Validation Report

Create validation report at: `{validationReportPath}`

**Initialize with frontmatter:**
```yaml
---
validationTarget: '{prd_path}'
validationDate: '{current_date}'
inputDocuments: [list of all loaded documents]
validationStepsCompleted: []
validationStatus: IN_PROGRESS
---
```

**Initial content:**
```markdown
# PRD Validation Report

**PRD Being Validated:** {prd_path}
**Validation Date:** {current_date}

## Input Documents

{list all documents loaded for validation}

## Validation Findings

[Findings will be appended as validation progresses]
```
### 8. Present Discovery Summary

"**Setup Complete!**

**PRD to Validate:** {prd_path}

**Input Documents Loaded:**
- PRD: {prd_name} ✓
- Product Brief: {count} {if count > 0}✓{else}(none found){/if}
- Research: {count} {if count > 0}✓{else}(none found){/if}
- Additional References: {count} {if count > 0}✓{else}(none){/if}

**Validation Report:** {validationReportPath}

**Ready to begin validation.**"

### 9. Present MENU OPTIONS

Display: **Select an Option:** [A] Advanced Elicitation [P] Party Mode [C] Continue to Format Detection

#### EXECUTION RULES:

- ALWAYS halt and wait for user input after presenting menu
- ONLY proceed to next step when user selects 'C'
- User can ask questions or add more documents - always respond and redisplay menu

#### Menu Handling Logic:

- IF A: Invoke the `bmad-advanced-elicitation` skill, and when finished redisplay the menu
- IF P: Invoke the `bmad-party-mode` skill, and when finished redisplay the menu
- IF C: Read fully and follow: {nextStepFile} to begin format detection
- IF user provides additional document: Load it, update report, redisplay summary
- IF any other: help user, then redisplay menu
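The halt-and-wait menu contract above can be expressed as a loop whose only exit is 'C'. A hypothetical sketch: the `get_choice`, `skills`, and `load_next_step` callables are stand-ins for the agent runtime, not real APIs:

```python
def menu_loop(get_choice, skills, load_next_step):
    """Redisplay the menu until the user selects 'C'; 'C' is the only exit."""
    while True:
        choice = get_choice().strip().upper()
        if choice == "A":
            skills["bmad-advanced-elicitation"]()  # then redisplay the menu
        elif choice == "P":
            skills["bmad-party-mode"]()            # then redisplay the menu
        elif choice == "C":
            return load_next_step()                # read the next step file fully
        # anything else: help the user or load extra docs, then redisplay
```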
---

## 🚨 SYSTEM SUCCESS/FAILURE METRICS

### ✅ SUCCESS:

- PRD path discovered and confirmed
- PRD file exists and loads successfully
- All input documents from frontmatter loaded
- Additional reference documents (if any) loaded
- Validation report initialized next to PRD
- User clearly informed of setup status
- Menu presented and user input handled correctly

### ❌ SYSTEM FAILURE:

- Proceeding with non-existent PRD file
- Not loading input documents from frontmatter
- Creating validation report in wrong location
- Proceeding without user confirming setup
- Not handling missing input documents gracefully

**Master Rule:** Complete discovery and setup BEFORE validation. This step ensures everything is in place for systematic validation checks.
@@ -1,191 +0,0 @@
---
name: 'step-v-02-format-detection'
description: 'Format Detection & Structure Analysis - Classify PRD format and route appropriately'

# File references (ONLY variables used in this step)
nextStepFile: './step-v-03-density-validation.md'
altStepFile: './step-v-02b-parity-check.md'
prdFile: '{prd_file_path}'
validationReportPath: '{validation_report_path}'
---

# Step 2: Format Detection & Structure Analysis
## STEP GOAL:

Detect whether the PRD follows BMAD format and route appropriately - classify it as BMAD Standard / BMAD Variant / Non-Standard, with an optional parity check for non-standard formats.

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`

### Role Reinforcement:

- ✅ You are a Validation Architect and Quality Assurance Specialist
- ✅ If you have already been given communication or persona patterns, continue to use those while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring systematic validation expertise and pattern recognition
- ✅ User brings domain knowledge and PRD context

### Step-Specific Rules:

- 🎯 Focus ONLY on detecting format and classifying structure
- 🚫 FORBIDDEN to perform other validation checks in this step
- 💬 Approach: Analytical and systematic, clear reporting of findings
- 🚪 This is a branch step - may route to parity check for non-standard PRDs
## EXECUTION PROTOCOLS:
|
||||
|
||||
- 🎯 Analyze PRD structure systematically
|
||||
- 💾 Append format findings to validation report
|
||||
- 📖 Route appropriately based on format classification
|
||||
- 🚫 FORBIDDEN to skip format detection or proceed without classification
|
||||
|
||||
## CONTEXT BOUNDARIES:
|
||||
|
||||
- Available context: PRD file loaded in step 1, validation report initialized
|
||||
- Focus: Format detection and classification only
|
||||
- Limits: Don't perform other validation, don't skip classification
|
||||
- Dependencies: Step 1 completed - PRD loaded and report initialized
|
||||
|
||||
## MANDATORY SEQUENCE
|
||||
|
||||
**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise unless user explicitly requests a change.
|
||||
|
||||
### 1. Extract PRD Structure
|
||||
|
||||
Load the complete PRD file and extract:
|
||||
|
||||
**All Level 2 (##) headers:**
|
||||
- Scan through entire PRD document
|
||||
- Extract all ## section headers
|
||||
- List them in order
|
||||
|
||||
**PRD frontmatter:**
|
||||
- Extract classification.domain if present
|
||||
- Extract classification.projectType if present
|
||||
- Note any other relevant metadata
|
||||
|
||||
### 2. Check for BMAD PRD Core Sections
|
||||
|
||||
Check if the PRD contains the following BMAD PRD core sections:
|
||||
|
||||
1. **Executive Summary** (or variations: ## Executive Summary, ## Overview, ## Introduction)
|
||||
2. **Success Criteria** (or: ## Success Criteria, ## Goals, ## Objectives)
|
||||
3. **Product Scope** (or: ## Product Scope, ## Scope, ## In Scope, ## Out of Scope)
|
||||
4. **User Journeys** (or: ## User Journeys, ## User Stories, ## User Flows)
|
||||
5. **Functional Requirements** (or: ## Functional Requirements, ## Features, ## Capabilities)
|
||||
6. **Non-Functional Requirements** (or: ## Non-Functional Requirements, ## NFRs, ## Quality Attributes)
|
||||
|
||||
**Count matches:**
|
||||
- How many of these 6 core sections are present?
|
||||
- Which specific sections are present?
|
||||
- Which are missing?
|
||||
|
||||
### 3. Classify PRD Format
|
||||
|
||||
Based on core section count, classify:
|
||||
|
||||
**BMAD Standard:**
|
||||
- 5-6 core sections present
|
||||
- Follows BMAD PRD structure closely
|
||||
|
||||
**BMAD Variant:**
|
||||
- 3-4 core sections present
|
||||
- Generally follows BMAD patterns but may have structural differences
|
||||
- Missing some sections but recognizable as BMAD-style
|
||||
|
||||
**Non-Standard:**
|
||||
- Fewer than 3 core sections present
|
||||
- Does not follow BMAD PRD structure
|
||||
- May be completely custom format, legacy format, or from another framework
|
||||
|
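The extraction and classification logic of steps 1-3 can be sketched in Python. This is an illustration only, not part of the workflow files; the section-variation sets mirror the lists above:

```python
import re

# Accepted header variations for each BMAD core section (mirrors step 2 above)
CORE_SECTIONS = {
    "Executive Summary": {"executive summary", "overview", "introduction"},
    "Success Criteria": {"success criteria", "goals", "objectives"},
    "Product Scope": {"product scope", "scope", "in scope", "out of scope"},
    "User Journeys": {"user journeys", "user stories", "user flows"},
    "Functional Requirements": {"functional requirements", "features", "capabilities"},
    "Non-Functional Requirements": {"non-functional requirements", "nfrs", "quality attributes"},
}

def extract_level2_headers(prd_text: str) -> list[str]:
    """Step 1: collect all '## ' headers in document order ('###' does not match)."""
    return [m.group(1).strip() for m in re.finditer(r"^##\s+(.+)$", prd_text, re.MULTILINE)]

def classify_format(headers: list[str]) -> tuple[str, int]:
    """Steps 2-3: count core-section matches, then classify by count."""
    found = {h.lower().strip() for h in headers}
    count = sum(1 for variants in CORE_SECTIONS.values() if found & variants)
    if count >= 5:
        return "BMAD Standard", count
    if count >= 3:
        return "BMAD Variant", count
    return "Non-Standard", count
```

For example, a PRD whose level-2 headers are Overview, Goals, Scope, and Features matches four core sections and classifies as BMAD Variant.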

### 4. Report Format Findings to Validation Report

Append to validation report:

```markdown
## Format Detection

**PRD Structure:**
[List all ## Level 2 headers found]

**BMAD Core Sections Present:**
- Executive Summary: [Present/Missing]
- Success Criteria: [Present/Missing]
- Product Scope: [Present/Missing]
- User Journeys: [Present/Missing]
- Functional Requirements: [Present/Missing]
- Non-Functional Requirements: [Present/Missing]

**Format Classification:** [BMAD Standard / BMAD Variant / Non-Standard]
**Core Sections Present:** [count]/6
```

### 5. Route Based on Format Classification

**IF format is BMAD Standard or BMAD Variant:**

Display: "**Format Detected:** {classification}

Proceeding to systematic validation checks..."

Without delay, read fully and follow: {nextStepFile} (step-v-03-density-validation.md)

**IF format is Non-Standard (< 3 core sections):**

Display: "**Format Detected:** Non-Standard PRD

This PRD does not follow BMAD standard structure (only {count}/6 core sections present).

You have options:"

Present MENU OPTIONS below for user selection

### 6. Present MENU OPTIONS (Non-Standard PRDs Only)

**[A] Parity Check** - Analyze gaps and estimate effort to reach BMAD PRD parity
**[B] Validate As-Is** - Proceed with validation using current structure
**[C] Exit** - Exit validation and review format findings

#### EXECUTION RULES:

- ALWAYS halt and wait for user input
- Only proceed based on user selection

#### Menu Handling Logic:

- IF A (Parity Check): Read fully and follow: {altStepFile} (step-v-02b-parity-check.md)
- IF B (Validate As-Is): Display "Proceeding with validation..." then read fully and follow: {nextStepFile}
- IF C (Exit): Display format findings summary and exit validation
- IF Any other: help user respond, then redisplay menu

---

## 🚨 SYSTEM SUCCESS/FAILURE METRICS

### ✅ SUCCESS:

- All ## Level 2 headers extracted successfully
- BMAD core sections checked systematically
- Format classified correctly based on section count
- Findings reported to validation report
- BMAD Standard/Variant PRDs proceed directly to next validation step
- Non-Standard PRDs pause and present options to user
- User can choose parity check, validate as-is, or exit

### ❌ SYSTEM FAILURE:

- Not extracting all headers before classification
- Incorrect format classification
- Not reporting findings to validation report
- Not pausing for non-standard PRDs
- Proceeding without user decision for non-standard formats

**Master Rule:** Format detection determines validation path. Non-standard PRDs require user choice before proceeding.

@ -1,209 +0,0 @@
---
name: 'step-v-02b-parity-check'
description: 'Document Parity Check - Analyze non-standard PRD and identify gaps to achieve BMAD PRD parity'

# File references (ONLY variables used in this step)
nextStepFile: './step-v-03-density-validation.md'
prdFile: '{prd_file_path}'
validationReportPath: '{validation_report_path}'
---

# Step 2B: Document Parity Check

## STEP GOAL:

Analyze a non-standard PRD and identify the gaps to achieve BMAD PRD parity, presenting the user with options for how to proceed.

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ ALWAYS deliver output in your Agent communication style, using the configured `{communication_language}`

### Role Reinforcement:

- ✅ You are a Validation Architect and Quality Assurance Specialist
- ✅ If you have already been given communication or persona patterns, continue using them while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring BMAD PRD standards expertise and gap analysis
- ✅ User brings domain knowledge and PRD context

### Step-Specific Rules:

- 🎯 Focus ONLY on analyzing gaps and estimating parity effort
- 🚫 FORBIDDEN to perform other validation checks in this step
- 💬 Approach: Systematic gap analysis with clear recommendations
- 🚪 This is an optional branch step - user chooses next action

## EXECUTION PROTOCOLS:

- 🎯 Analyze each BMAD PRD section for gaps
- 💾 Append parity analysis to validation report
- 📖 Present options and await user decision
- 🚫 FORBIDDEN to proceed without user selection

## CONTEXT BOUNDARIES:

- Available context: Non-standard PRD from step 2, validation report in progress
- Focus: Parity analysis only - what's missing, what's needed
- Limits: Don't perform validation checks, don't auto-proceed
- Dependencies: Step 2 classified PRD as non-standard and user chose parity check

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise unless user explicitly requests a change.

### 1. Analyze Each BMAD PRD Section

For each of the 6 BMAD PRD core sections, analyze:

**Executive Summary:**
- Does PRD have vision/overview?
- Is problem statement clear?
- Are target users identified?
- Gap: [What's missing or incomplete]

**Success Criteria:**
- Are measurable goals defined?
- Is success clearly defined?
- Gap: [What's missing or incomplete]

**Product Scope:**
- Is scope clearly defined?
- Are in-scope items listed?
- Are out-of-scope items listed?
- Gap: [What's missing or incomplete]

**User Journeys:**
- Are user types/personas identified?
- Are user flows documented?
- Gap: [What's missing or incomplete]

**Functional Requirements:**
- Are features/capabilities listed?
- Are requirements structured?
- Gap: [What's missing or incomplete]

**Non-Functional Requirements:**
- Are quality attributes defined?
- Are performance/security/etc. requirements documented?
- Gap: [What's missing or incomplete]

### 2. Estimate Effort to Reach Parity

For each missing or incomplete section, estimate:

**Effort Level:**
- Minimal - Section exists but needs minor enhancements
- Moderate - Section missing but content exists elsewhere in PRD
- Significant - Section missing, requires new content creation

**Total Parity Effort:**
- Based on individual section estimates
- Classify overall: Quick / Moderate / Substantial effort
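One way to roll the per-section estimates up into an overall rating can be sketched in Python. The numeric weights and thresholds here are assumptions for illustration, since the step above leaves the rollup to judgment:

```python
# Hypothetical rollup: map per-section effort levels to an overall rating.
# The weights and thresholds are illustrative assumptions, not BMAD rules.
EFFORT_WEIGHT = {"Minimal": 1, "Moderate": 2, "Significant": 3}

def overall_parity_effort(section_efforts: dict[str, str]) -> str:
    """section_efforts maps section name -> 'Minimal' / 'Moderate' / 'Significant'."""
    total = sum(EFFORT_WEIGHT[level] for level in section_efforts.values())
    if total <= 4:
        return "Quick"
    if total <= 9:
        return "Moderate"
    return "Substantial"
```

The point of a rollup like this is only consistency: two PRDs with the same per-section gaps should receive the same overall assessment.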

### 3. Report Parity Analysis to Validation Report

Append to validation report:

```markdown
## Parity Analysis (Non-Standard PRD)

### Section-by-Section Gap Analysis

**Executive Summary:**
- Status: [Present/Missing/Incomplete]
- Gap: [specific gap description]
- Effort to Complete: [Minimal/Moderate/Significant]

**Success Criteria:**
- Status: [Present/Missing/Incomplete]
- Gap: [specific gap description]
- Effort to Complete: [Minimal/Moderate/Significant]

**Product Scope:**
- Status: [Present/Missing/Incomplete]
- Gap: [specific gap description]
- Effort to Complete: [Minimal/Moderate/Significant]

**User Journeys:**
- Status: [Present/Missing/Incomplete]
- Gap: [specific gap description]
- Effort to Complete: [Minimal/Moderate/Significant]

**Functional Requirements:**
- Status: [Present/Missing/Incomplete]
- Gap: [specific gap description]
- Effort to Complete: [Minimal/Moderate/Significant]

**Non-Functional Requirements:**
- Status: [Present/Missing/Incomplete]
- Gap: [specific gap description]
- Effort to Complete: [Minimal/Moderate/Significant]

### Overall Parity Assessment

**Overall Effort to Reach BMAD Standard:** [Quick/Moderate/Substantial]
**Recommendation:** [Brief recommendation based on analysis]
```

### 4. Present Parity Analysis and Options

Display:

"**Parity Analysis Complete**

Your PRD is missing {count} of 6 core BMAD PRD sections. The overall effort to reach BMAD standard is: **{effort level}**

**Quick Summary:**
[2-3 sentence summary of key gaps]

**Recommendation:**
{recommendation from analysis}

**How would you like to proceed?**"

### 5. Present MENU OPTIONS

**[C] Continue Validation** - Proceed with validation using current structure
**[E] Exit & Review** - Exit validation and review parity report
**[S] Save & Exit** - Save parity report and exit

#### EXECUTION RULES:

- ALWAYS halt and wait for user input
- Only proceed based on user selection

#### Menu Handling Logic:

- IF C (Continue): Display "Proceeding with validation..." then read fully and follow: {nextStepFile}
- IF E (Exit): Display parity summary and exit validation
- IF S (Save): Confirm saved, display summary, exit
- IF Any other: help user respond, then redisplay menu

---

## 🚨 SYSTEM SUCCESS/FAILURE METRICS

### ✅ SUCCESS:

- All 6 BMAD PRD sections analyzed for gaps
- Effort estimates provided for each gap
- Overall parity effort assessed correctly
- Parity analysis reported to validation report
- Clear summary presented to user
- User can choose to continue validation, exit, or save report

### ❌ SYSTEM FAILURE:

- Not analyzing all 6 sections systematically
- Missing effort estimates
- Not reporting parity analysis to validation report
- Auto-proceeding without user decision
- Unclear recommendations

**Master Rule:** Parity check informs user of gaps and effort, but user decides whether to proceed with validation or address gaps first.

@ -1,174 +0,0 @@
---
name: 'step-v-03-density-validation'
description: 'Information Density Check - Scan for anti-patterns that violate information density principles'

# File references (ONLY variables used in this step)
nextStepFile: './step-v-04-brief-coverage-validation.md'
prdFile: '{prd_file_path}'
validationReportPath: '{validation_report_path}'
---

# Step 3: Information Density Validation

## STEP GOAL:

Validate that the PRD meets BMAD information density standards by scanning for conversational filler, wordy phrases, and redundant expressions that violate conciseness principles.

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ ALWAYS deliver output in your Agent communication style, using the configured `{communication_language}`

### Role Reinforcement:

- ✅ You are a Validation Architect and Quality Assurance Specialist
- ✅ If you have already been given communication or persona patterns, continue using them while playing this new role
- ✅ We engage in systematic validation, not collaborative dialogue
- ✅ You bring analytical rigor and attention to detail
- ✅ This step runs autonomously - no user input needed

### Step-Specific Rules:

- 🎯 Focus ONLY on information density anti-patterns
- 🚫 FORBIDDEN to validate other aspects in this step
- 💬 Approach: Systematic scanning and categorization
- 🚪 This is a validation sequence step - auto-proceeds when complete

## EXECUTION PROTOCOLS:

- 🎯 Scan PRD for density anti-patterns systematically
- 💾 Append density findings to validation report
- 📖 Display "Proceeding to next check..." and load next step
- 🚫 FORBIDDEN to pause or request user input

## CONTEXT BOUNDARIES:

- Available context: PRD file, validation report with format findings
- Focus: Information density validation only
- Limits: Don't validate other aspects, don't pause for user input
- Dependencies: Step 2 completed - format classification done

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise unless user explicitly requests a change.

### 1. Attempt Sub-Process Validation

**Try to use Task tool to spawn a subprocess:**

"Perform information density validation on this PRD:

1. Load the PRD file
2. Scan for the following anti-patterns:
   - Conversational filler phrases (examples: 'The system will allow users to...', 'It is important to note that...', 'In order to')
   - Wordy phrases (examples: 'Due to the fact that', 'In the event of', 'For the purpose of')
   - Redundant phrases (examples: 'Future plans', 'Absolutely essential', 'Past history')
3. Count violations by category with line numbers
4. Classify severity: Critical (>10 violations), Warning (5-10), Pass (<5)

Return structured findings with counts and examples."

### 2. Graceful Degradation (if Task tool unavailable)

If Task tool unavailable, perform analysis directly:

**Scan for conversational filler patterns:**
- "The system will allow users to..."
- "It is important to note that..."
- "In order to"
- "For the purpose of"
- "With regard to"
- Count occurrences and note line numbers

**Scan for wordy phrases:**
- "Due to the fact that" (use "because")
- "In the event of" (use "if")
- "At this point in time" (use "now")
- "In a manner that" (use "how")
- Count occurrences and note line numbers

**Scan for redundant phrases:**
- "Future plans" (just "plans")
- "Past history" (just "history")
- "Absolutely essential" (just "essential")
- "Completely finish" (just "finish")
- Count occurrences and note line numbers

### 3. Classify Severity

**Calculate total violations:**
- Conversational filler count
- Wordy phrases count
- Redundant phrases count
- Total = sum of all categories

**Determine severity:**
- **Critical:** Total > 10 violations
- **Warning:** Total 5-10 violations
- **Pass:** Total < 5 violations
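The degraded, in-process scan of steps 2-3 can be sketched in Python. This is a minimal illustration: the phrase lists mirror the examples above, and matching is naive case-insensitive substring search:

```python
# Minimal sketch of the in-process density scan (illustrative, not part of
# the workflow files). Phrase lists mirror the examples above.
ANTI_PATTERNS = {
    "conversational_filler": ["it is important to note that", "in order to",
                              "for the purpose of", "with regard to"],
    "wordy_phrases": ["due to the fact that", "in the event of",
                      "at this point in time", "in a manner that"],
    "redundant_phrases": ["future plans", "past history",
                          "absolutely essential", "completely finish"],
}

def scan_density(prd_text: str) -> dict:
    """Count violations per category with 1-based line numbers, then classify."""
    findings = {cat: [] for cat in ANTI_PATTERNS}
    for lineno, line in enumerate(prd_text.splitlines(), start=1):
        lowered = line.lower()
        for cat, phrases in ANTI_PATTERNS.items():
            for phrase in phrases:
                if phrase in lowered:
                    findings[cat].append((lineno, phrase))
    total = sum(len(hits) for hits in findings.values())
    severity = "Critical" if total > 10 else "Warning" if total >= 5 else "Pass"
    return {"findings": findings, "total": total, "severity": severity}
```

The severity cutoffs match the rules above: more than 10 total violations is Critical, 5-10 is Warning, fewer than 5 is Pass.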

### 4. Report Density Findings to Validation Report

Append to validation report:

```markdown
## Information Density Validation

**Anti-Pattern Violations:**

**Conversational Filler:** {count} occurrences
[If count > 0, list examples with line numbers]

**Wordy Phrases:** {count} occurrences
[If count > 0, list examples with line numbers]

**Redundant Phrases:** {count} occurrences
[If count > 0, list examples with line numbers]

**Total Violations:** {total}

**Severity Assessment:** [Critical/Warning/Pass]

**Recommendation:**
[If Critical] "PRD requires significant revision to improve information density. Every sentence should carry weight without filler."
[If Warning] "PRD would benefit from reducing wordiness and eliminating filler phrases."
[If Pass] "PRD demonstrates good information density with minimal violations."
```

### 5. Display Progress and Auto-Proceed

Display: "**Information Density Validation Complete**

Severity: {Critical/Warning/Pass}

**Proceeding to next validation check...**"

Without delay, read fully and follow: {nextStepFile} (step-v-04-brief-coverage-validation.md)

---

## 🚨 SYSTEM SUCCESS/FAILURE METRICS

### ✅ SUCCESS:

- PRD scanned for all three anti-pattern categories
- Violations counted with line numbers
- Severity classified correctly
- Findings reported to validation report
- Auto-proceeds to next validation step
- Subprocess attempted with graceful degradation

### ❌ SYSTEM FAILURE:

- Not scanning all anti-pattern categories
- Missing severity classification
- Not reporting findings to validation report
- Pausing for user input (should auto-proceed)
- Not attempting subprocess architecture

**Master Rule:** Information density validation runs autonomously. Scan, classify, report, auto-proceed. No user interaction needed.

@ -1,214 +0,0 @@
---
name: 'step-v-04-brief-coverage-validation'
description: 'Product Brief Coverage Check - Validate PRD covers all content from Product Brief (if used as input)'

# File references (ONLY variables used in this step)
nextStepFile: './step-v-05-measurability-validation.md'
prdFile: '{prd_file_path}'
productBrief: '{product_brief_path}'
validationReportPath: '{validation_report_path}'
---

# Step 4: Product Brief Coverage Validation

## STEP GOAL:

Validate that the PRD covers all content from the Product Brief (if a brief was used as input), mapping brief content to PRD sections and identifying gaps.

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ ALWAYS deliver output in your Agent communication style, using the configured `{communication_language}`

### Role Reinforcement:

- ✅ You are a Validation Architect and Quality Assurance Specialist
- ✅ If you have already been given communication or persona patterns, continue using them while playing this new role
- ✅ We engage in systematic validation, not collaborative dialogue
- ✅ You bring analytical rigor and traceability expertise
- ✅ This step runs autonomously - no user input needed

### Step-Specific Rules:

- 🎯 Focus ONLY on Product Brief coverage (conditional on brief existence)
- 🚫 FORBIDDEN to validate other aspects in this step
- 💬 Approach: Systematic mapping and gap analysis
- 🚪 This is a validation sequence step - auto-proceeds when complete

## EXECUTION PROTOCOLS:

- 🎯 Check if Product Brief exists in input documents
- 💬 If no brief: Skip this check and report "N/A - No Product Brief"
- 🎯 If brief exists: Map brief content to PRD sections
- 💾 Append coverage findings to validation report
- 📖 Display "Proceeding to next check..." and load next step
- 🚫 FORBIDDEN to pause or request user input

## CONTEXT BOUNDARIES:

- Available context: PRD file, input documents from step 1, validation report
- Focus: Product Brief coverage only (conditional)
- Limits: Don't validate other aspects, conditional execution
- Dependencies: Step 1 completed - input documents loaded

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise unless user explicitly requests a change.

### 1. Check for Product Brief

Check if Product Brief was loaded in step 1's inputDocuments:

**IF no Product Brief found:**
Append to validation report:
```markdown
## Product Brief Coverage

**Status:** N/A - No Product Brief was provided as input
```

Display: "**Product Brief Coverage: Skipped** (No Product Brief provided)

**Proceeding to next validation check...**"

Without delay, read fully and follow: {nextStepFile}

**IF Product Brief exists:** Continue to step 2 below

### 2. Attempt Sub-Process Validation

**Try to use Task tool to spawn a subprocess:**

"Perform Product Brief coverage validation:

1. Load the Product Brief
2. Extract key content:
   - Vision statement
   - Target users/personas
   - Problem statement
   - Key features
   - Goals/objectives
   - Differentiators
   - Constraints
3. For each item, search PRD for corresponding coverage
4. Classify coverage: Fully Covered / Partially Covered / Not Found / Intentionally Excluded
5. Note any gaps with severity: Critical / Moderate / Informational

Return structured coverage map with classifications."

### 3. Graceful Degradation (if Task tool unavailable)

If Task tool unavailable, perform analysis directly:

**Extract from Product Brief:**
- Vision: What is this product?
- Users: Who is it for?
- Problem: What problem does it solve?
- Features: What are the key capabilities?
- Goals: What are the success criteria?
- Differentiators: What makes it unique?

**For each item, search PRD:**
- Scan Executive Summary for vision
- Check User Journeys or user personas
- Look for problem statement
- Review Functional Requirements for features
- Check Success Criteria section
- Search for differentiators

**Classify coverage:**
- **Fully Covered:** Content present and complete
- **Partially Covered:** Content present but incomplete
- **Not Found:** Content missing from PRD
- **Intentionally Excluded:** Content explicitly out of scope
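The degraded coverage mapping of step 3 can be sketched in Python. This is a deliberate simplification: keyword matching stands in for the semantic judgment the step actually calls for, and "Intentionally Excluded" is omitted because it requires a scoping judgment no keyword match can make:

```python
# Hypothetical sketch of the degraded coverage check. Keyword matching is a
# stand-in for the semantic comparison the step actually requires.
def classify_coverage(keywords: list[str], prd_text: str) -> str:
    """Classify one Product Brief item against the PRD text."""
    lowered = prd_text.lower()
    hits = sum(1 for kw in keywords if kw.lower() in lowered)
    if hits == len(keywords):
        return "Fully Covered"
    if hits > 0:
        return "Partially Covered"
    return "Not Found"

def coverage_map(brief_items: dict[str, list[str]], prd_text: str) -> dict[str, str]:
    """Map each brief item (vision, users, goals, ...) to a classification."""
    return {item: classify_coverage(kws, prd_text) for item, kws in brief_items.items()}
```

For example, a brief item whose keywords are "retention" and "revenue" classifies as Partially Covered when only one of the two appears anywhere in the PRD.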

### 4. Assess Coverage and Severity

**For each gap (Partially Covered or Not Found):**
- Is this Critical? (Core vision, primary users, main features)
- Is this Moderate? (Secondary features, some goals)
- Is this Informational? (Nice-to-have features, minor details)

**Note:** Some exclusions may be intentional (valid scoping decisions)

### 5. Report Coverage Findings to Validation Report

Append to validation report:

```markdown
## Product Brief Coverage

**Product Brief:** {brief_file_name}

### Coverage Map

**Vision Statement:** [Fully/Partially/Not Found/Intentionally Excluded]
[If gap: Note severity and specific missing content]

**Target Users:** [Fully/Partially/Not Found/Intentionally Excluded]
[If gap: Note severity and specific missing content]

**Problem Statement:** [Fully/Partially/Not Found/Intentionally Excluded]
[If gap: Note severity and specific missing content]

**Key Features:** [Fully/Partially/Not Found/Intentionally Excluded]
[If gap: List specific features with severity]

**Goals/Objectives:** [Fully/Partially/Not Found/Intentionally Excluded]
[If gap: Note severity and specific missing content]

**Differentiators:** [Fully/Partially/Not Found/Intentionally Excluded]
[If gap: Note severity and specific missing content]

### Coverage Summary

**Overall Coverage:** [percentage or qualitative assessment]
**Critical Gaps:** [count] [list if any]
**Moderate Gaps:** [count] [list if any]
**Informational Gaps:** [count] [list if any]

**Recommendation:**
[If critical gaps exist] "PRD should be revised to cover critical Product Brief content."
[If moderate gaps] "Consider addressing moderate gaps for complete coverage."
[If minimal gaps] "PRD provides good coverage of Product Brief content."
```

### 6. Display Progress and Auto-Proceed

Display: "**Product Brief Coverage Validation Complete**

Overall Coverage: {assessment}

**Proceeding to next validation check...**"

Without delay, read fully and follow: {nextStepFile} (step-v-05-measurability-validation.md)

---

## 🚨 SYSTEM SUCCESS/FAILURE METRICS

### ✅ SUCCESS:

- Checked for Product Brief existence correctly
- If no brief: Reported "N/A" and skipped gracefully
- If brief exists: Mapped all key brief content to PRD sections
- Coverage classified appropriately (Fully/Partially/Not Found/Intentionally Excluded)
- Severity assessed for gaps (Critical/Moderate/Informational)
- Findings reported to validation report
- Auto-proceeds to next validation step
- Subprocess attempted with graceful degradation

### ❌ SYSTEM FAILURE:

- Not checking for brief existence before attempting validation
- If brief exists: not mapping all key content areas
- Missing coverage classifications
- Not reporting findings to validation report
- Not auto-proceeding

**Master Rule:** Product Brief coverage is conditional - skip if no brief, validate thoroughly if brief exists. Always auto-proceed.

@ -1,228 +0,0 @@
|
|||
---
|
||||
name: 'step-v-05-measurability-validation'
|
||||
description: 'Measurability Validation - Validate that all requirements (FRs and NFRs) are measurable and testable'
|
||||
|
||||
# File references (ONLY variables used in this step)
|
||||
nextStepFile: './step-v-06-traceability-validation.md'
|
||||
prdFile: '{prd_file_path}'
|
||||
validationReportPath: '{validation_report_path}'
|
||||
---
|
||||
|
||||
# Step 5: Measurability Validation

## STEP GOAL:

Validate that all Functional Requirements (FRs) and Non-Functional Requirements (NFRs) are measurable, testable, and follow proper format without implementation details.

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ ALWAYS deliver output in your agent communication style, using the configured `{communication_language}`

### Role Reinforcement:

- ✅ You are a Validation Architect and Quality Assurance Specialist
- ✅ If you have already been given communication or persona patterns, continue to use them while playing this new role
- ✅ We engage in systematic validation, not collaborative dialogue
- ✅ You bring analytical rigor and requirements engineering expertise
- ✅ This step runs autonomously - no user input needed

### Step-Specific Rules:

- 🎯 Focus ONLY on FR and NFR measurability
- 🚫 FORBIDDEN to validate other aspects in this step
- 💬 Approach: Systematic requirement-by-requirement analysis
- 🚪 This is a validation sequence step - auto-proceeds when complete

## EXECUTION PROTOCOLS:

- 🎯 Extract all FRs and NFRs from PRD
- 💾 Validate each for measurability and format
- 📖 Append findings to validation report
- 📖 Display "Proceeding to next check..." and load next step
- 🚫 FORBIDDEN to pause or request user input

## CONTEXT BOUNDARIES:

- Available context: PRD file, validation report
- Focus: FR and NFR measurability only
- Limits: Don't validate other aspects, don't pause for user input
- Dependencies: Steps 2-4 completed - initial validation checks done

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise unless user explicitly requests a change.

### 1. Attempt Sub-Process Validation

**Try to use Task tool to spawn a subprocess:**

"Perform measurability validation on this PRD:

**Functional Requirements (FRs):**
1. Extract all FRs from Functional Requirements section
2. Check each FR for:
   - '[Actor] can [capability]' format compliance
   - No subjective adjectives (easy, fast, simple, intuitive, etc.)
   - No vague quantifiers (multiple, several, some, many, etc.)
   - No implementation details (technology names, library names, data structures unless capability-relevant)
3. Document violations with line numbers

**Non-Functional Requirements (NFRs):**
1. Extract all NFRs from Non-Functional Requirements section
2. Check each NFR for:
   - Specific metrics with measurement methods
   - Template compliance (criterion, metric, measurement method, context)
   - Context included (why this matters, who it affects)
3. Document violations with line numbers

Return structured findings with violation counts and examples."

### 2. Graceful Degradation (if Task tool unavailable)

If Task tool unavailable, perform analysis directly:

**Functional Requirements Analysis:**

Extract all FRs and check each for:

**Format compliance:**
- Does it follow "[Actor] can [capability]" pattern?
- Is actor clearly defined?
- Is capability actionable and testable?

**No subjective adjectives:**
- Scan for: easy, fast, simple, intuitive, user-friendly, responsive, quick, efficient (without metrics)
- Note line numbers

**No vague quantifiers:**
- Scan for: multiple, several, some, many, few, various, number of
- Note line numbers

**No implementation details:**
- Scan for: React, Vue, Angular, PostgreSQL, MongoDB, AWS, Docker, Kubernetes, Redux, etc.
- Unless capability-relevant (e.g., "API consumers can access...")
- Note line numbers

**Non-Functional Requirements Analysis:**

Extract all NFRs and check each for:

**Specific metrics:**
- Is there a measurable criterion? (e.g., "response time < 200ms", not "fast response")
- Can this be measured or tested?

**Template compliance:**
- Criterion defined?
- Metric specified?
- Measurement method included?
- Context provided?

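The word-list scans above can be sketched in Python. This is a hypothetical illustration only: the `FRn:` line format, the regex, and the word lists are assumptions, and a real check would use the PRD's actual requirement format.

```python
import re

# Hypothetical word lists mirroring the scans above; extend as needed.
SUBJECTIVE = {"easy", "fast", "simple", "intuitive", "user-friendly",
              "responsive", "quick", "efficient"}
VAGUE = {"multiple", "several", "some", "many", "few", "various"}

def check_fr(line_no: int, text: str) -> list:
    """Return violation notes for one functional requirement line."""
    violations = []
    # Format check: expect something like "FR1: [Actor] can [capability]"
    if not re.match(r"^FR\d+:\s*\w+.* can ", text):
        violations.append(f"L{line_no}: not in '[Actor] can [capability]' form")
    words = {w.strip(".,") for w in text.lower().split()}
    for w in sorted(words & SUBJECTIVE):
        violations.append(f"L{line_no}: subjective adjective '{w}'")
    for w in sorted(words & VAGUE):
        violations.append(f"L{line_no}: vague quantifier '{w}'")
    return violations

print(check_fr(12, "FR1: Registered users can export their data as an archive"))
# -> []
print(check_fr(13, "FR2: The dashboard is fast and supports multiple filters"))
# -> three violations: format, 'fast', 'multiple'
```

A keyword scan like this only flags candidates; the facilitator still judges whether a flagged word is genuinely unmeasurable in context.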
### 3. Tally Violations

**FR Violations:**

- Format violations: count
- Subjective adjectives: count
- Vague quantifiers: count
- Implementation leakage: count
- Total FR violations: sum

**NFR Violations:**

- Missing metrics: count
- Incomplete template: count
- Missing context: count
- Total NFR violations: sum

**Total violations:** FR violations + NFR violations

### 4. Report Measurability Findings to Validation Report

Append to validation report:

```markdown
## Measurability Validation

### Functional Requirements

**Total FRs Analyzed:** {count}

**Format Violations:** {count}
[If violations exist, list examples with line numbers]

**Subjective Adjectives Found:** {count}
[If found, list examples with line numbers]

**Vague Quantifiers Found:** {count}
[If found, list examples with line numbers]

**Implementation Leakage:** {count}
[If found, list examples with line numbers]

**FR Violations Total:** {total}

### Non-Functional Requirements

**Total NFRs Analyzed:** {count}

**Missing Metrics:** {count}
[If missing, list examples with line numbers]

**Incomplete Template:** {count}
[If incomplete, list examples with line numbers]

**Missing Context:** {count}
[If missing, list examples with line numbers]

**NFR Violations Total:** {total}

### Overall Assessment

**Total Requirements:** {FRs + NFRs}
**Total Violations:** {FR violations + NFR violations}

**Severity:** [Critical if >10 violations, Warning if 5-10, Pass if <5]

**Recommendation:**
[If Critical] "Many requirements are not measurable or testable. Requirements must be revised to be testable for downstream work."
[If Warning] "Some requirements need refinement for measurability. Focus on violating requirements above."
[If Pass] "Requirements demonstrate good measurability with minimal issues."
```

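The severity bands in the template above reduce to a small helper. A sketch only; the thresholds follow the bracketed rule in the report template.

```python
def severity(total_violations: int) -> str:
    """Map a violation count to the severity label used in the report."""
    if total_violations > 10:
        return "Critical"
    if 5 <= total_violations <= 10:
        return "Warning"
    return "Pass"

print(severity(12), severity(7), severity(3))  # Critical Warning Pass
```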
### 5. Display Progress and Auto-Proceed

Display: "**Measurability Validation Complete**

Total Violations: {count} ({severity})

**Proceeding to next validation check...**"

Without delay, read fully and follow: {nextStepFile} (step-v-06-traceability-validation.md)

---

## 🚨 SYSTEM SUCCESS/FAILURE METRICS

### ✅ SUCCESS:

- All FRs extracted and analyzed for measurability
- All NFRs extracted and analyzed for measurability
- Violations documented with line numbers
- Severity assessed correctly
- Findings reported to validation report
- Auto-proceeds to next validation step
- Subprocess attempted with graceful degradation

### ❌ SYSTEM FAILURE:

- Not analyzing all FRs and NFRs
- Missing line numbers for violations
- Not reporting findings to validation report
- Not assessing severity
- Not auto-proceeding

**Master Rule:** Requirements must be testable to be useful. Validate every requirement for measurability, document violations, auto-proceed.

@@ -1,217 +0,0 @@

---
name: 'step-v-06-traceability-validation'
description: 'Traceability Validation - Validate the traceability chain from vision → success → journeys → FRs is intact'

# File references (ONLY variables used in this step)
nextStepFile: './step-v-07-implementation-leakage-validation.md'
prdFile: '{prd_file_path}'
validationReportPath: '{validation_report_path}'
---

# Step 6: Traceability Validation

## STEP GOAL:

Validate the traceability chain from Executive Summary → Success Criteria → User Journeys → Functional Requirements is intact, ensuring every requirement traces back to a user need or business objective.

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ ALWAYS deliver output in your agent communication style, using the configured `{communication_language}`

### Role Reinforcement:

- ✅ You are a Validation Architect and Quality Assurance Specialist
- ✅ If you have already been given communication or persona patterns, continue to use them while playing this new role
- ✅ We engage in systematic validation, not collaborative dialogue
- ✅ You bring analytical rigor and traceability matrix expertise
- ✅ This step runs autonomously - no user input needed

### Step-Specific Rules:

- 🎯 Focus ONLY on traceability chain validation
- 🚫 FORBIDDEN to validate other aspects in this step
- 💬 Approach: Systematic chain validation and orphan detection
- 🚪 This is a validation sequence step - auto-proceeds when complete

## EXECUTION PROTOCOLS:

- 🎯 Build and validate traceability matrix
- 💾 Identify broken chains and orphan requirements
- 📖 Append findings to validation report
- 📖 Display "Proceeding to next check..." and load next step
- 🚫 FORBIDDEN to pause or request user input

## CONTEXT BOUNDARIES:

- Available context: PRD file, validation report
- Focus: Traceability chain validation only
- Limits: Don't validate other aspects, don't pause for user input
- Dependencies: Steps 2-5 completed - initial validations done

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise unless user explicitly requests a change.

### 1. Attempt Sub-Process Validation

**Try to use Task tool to spawn a subprocess:**

"Perform traceability validation on this PRD:

1. Extract content from Executive Summary (vision, goals)
2. Extract Success Criteria
3. Extract User Journeys (user types, flows, outcomes)
4. Extract Functional Requirements (FRs)
5. Extract Product Scope (in-scope items)

**Validate chains:**
- Executive Summary → Success Criteria: Does vision align with defined success?
- Success Criteria → User Journeys: Are success criteria supported by user journeys?
- User Journeys → Functional Requirements: Does each FR trace back to a user journey?
- Scope → FRs: Do MVP scope FRs align with in-scope items?

**Identify orphans:**
- FRs not traceable to any user journey or business objective
- Success criteria not supported by user journeys
- User journeys without supporting FRs

Build traceability matrix and identify broken chains and orphan FRs.

Return structured findings with chain status and orphan list."

### 2. Graceful Degradation (if Task tool unavailable)

If Task tool unavailable, perform analysis directly:

**Step 1: Extract key elements**
- Executive Summary: Note vision, goals, objectives
- Success Criteria: List all criteria
- User Journeys: List user types and their flows
- Functional Requirements: List all FRs
- Product Scope: List in-scope items

**Step 2: Validate Executive Summary → Success Criteria**
- Does Executive Summary mention the success dimensions?
- Are Success Criteria aligned with vision?
- Note any misalignment

**Step 3: Validate Success Criteria → User Journeys**
- For each success criterion, is there a user journey that achieves it?
- Note success criteria without supporting journeys

**Step 4: Validate User Journeys → FRs**
- For each user journey/flow, are there FRs that enable it?
- List FRs with no clear user journey origin
- Note orphan FRs (requirements without traceable source)

**Step 5: Validate Scope → FR Alignment**
- Does MVP scope align with essential FRs?
- Are in-scope items supported by FRs?
- Note misalignments

**Step 6: Build traceability matrix**
- Map each FR to its source (journey or business objective)
- Note orphan FRs
- Identify broken chains

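The matrix-and-orphan logic above can be sketched with plain set operations. The journey and FR identifiers here are hypothetical, not from any real PRD.

```python
# Hypothetical shape of a traceability matrix: each FR maps to the journeys
# (or business objectives) it traces to; an empty set marks an orphan.
journeys = {"J1": "Shopper checks out", "J2": "Admin reviews orders"}
fr_sources = {
    "FR1": {"J1"},          # enabled by the checkout journey
    "FR2": {"J1", "J2"},    # shared by both journeys
    "FR3": set(),           # orphan: no traceable source
}

orphan_frs = sorted(fr for fr, src in fr_sources.items() if not src)
covered = {j for src in fr_sources.values() for j in src}
journeys_without_frs = sorted(j for j in journeys if j not in covered)

print("Orphan FRs:", orphan_frs)                      # ['FR3']
print("Journeys without FRs:", journeys_without_frs)  # []
```

The same two-way check (FR without a journey, journey without an FR) surfaces both halves of a broken chain.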
### 3. Tally Traceability Issues

**Broken chains:**

- Executive Summary → Success Criteria gaps: count
- Success Criteria → User Journeys gaps: count
- User Journeys → FRs gaps: count
- Scope → FR misalignments: count

**Orphan elements:**

- Orphan FRs (no traceable source): count
- Unsupported success criteria: count
- User journeys without FRs: count

**Total issues:** Sum of all broken chains and orphans

### 4. Report Traceability Findings to Validation Report

Append to validation report:

```markdown
## Traceability Validation

### Chain Validation

**Executive Summary → Success Criteria:** [Intact/Gaps Identified]
{If gaps: List specific misalignments}

**Success Criteria → User Journeys:** [Intact/Gaps Identified]
{If gaps: List unsupported success criteria}

**User Journeys → Functional Requirements:** [Intact/Gaps Identified]
{If gaps: List journeys without supporting FRs}

**Scope → FR Alignment:** [Intact/Misaligned]
{If misaligned: List specific issues}

### Orphan Elements

**Orphan Functional Requirements:** {count}
{List orphan FRs with numbers}

**Unsupported Success Criteria:** {count}
{List unsupported criteria}

**User Journeys Without FRs:** {count}
{List journeys without FRs}

### Traceability Matrix

{Summary table showing traceability coverage}

**Total Traceability Issues:** {total}

**Severity:** [Critical if orphan FRs exist, Warning if gaps, Pass if intact]

**Recommendation:**
[If Critical] "Orphan requirements exist - every FR must trace back to a user need or business objective."
[If Warning] "Traceability gaps identified - strengthen chains to ensure all requirements are justified."
[If Pass] "Traceability chain is intact - all requirements trace to user needs or business objectives."
```

### 5. Display Progress and Auto-Proceed

Display: "**Traceability Validation Complete**

Total Issues: {count} ({severity})

**Proceeding to next validation check...**"

Without delay, read fully and follow: {nextStepFile} (step-v-07-implementation-leakage-validation.md)

---

## 🚨 SYSTEM SUCCESS/FAILURE METRICS

### ✅ SUCCESS:

- All traceability chains validated systematically
- Orphan FRs identified with numbers
- Broken chains documented
- Traceability matrix built
- Severity assessed correctly
- Findings reported to validation report
- Auto-proceeds to next validation step
- Subprocess attempted with graceful degradation

### ❌ SYSTEM FAILURE:

- Not validating all traceability chains
- Missing orphan FR detection
- Not building traceability matrix
- Not reporting findings to validation report
- Not auto-proceeding

**Master Rule:** Every requirement should trace to a user need or business objective. Orphan FRs indicate broken traceability that must be fixed.

@@ -1,205 +0,0 @@

---
name: 'step-v-07-implementation-leakage-validation'
description: "Implementation Leakage Check - Ensure FRs and NFRs don't include implementation details"

# File references (ONLY variables used in this step)
nextStepFile: './step-v-08-domain-compliance-validation.md'
prdFile: '{prd_file_path}'
validationReportPath: '{validation_report_path}'
---

# Step 7: Implementation Leakage Validation

## STEP GOAL:

Ensure Functional Requirements and Non-Functional Requirements don't include implementation details - they should specify WHAT, not HOW.

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ ALWAYS deliver output in your agent communication style, using the configured `{communication_language}`

### Role Reinforcement:

- ✅ You are a Validation Architect and Quality Assurance Specialist
- ✅ If you have already been given communication or persona patterns, continue to use them while playing this new role
- ✅ We engage in systematic validation, not collaborative dialogue
- ✅ You bring analytical rigor and separation of concerns expertise
- ✅ This step runs autonomously - no user input needed

### Step-Specific Rules:

- 🎯 Focus ONLY on implementation leakage detection
- 🚫 FORBIDDEN to validate other aspects in this step
- 💬 Approach: Systematic scanning for technology and implementation terms
- 🚪 This is a validation sequence step - auto-proceeds when complete

## EXECUTION PROTOCOLS:

- 🎯 Scan FRs and NFRs for implementation terms
- 💾 Distinguish capability-relevant vs leakage
- 📖 Append findings to validation report
- 📖 Display "Proceeding to next check..." and load next step
- 🚫 FORBIDDEN to pause or request user input

## CONTEXT BOUNDARIES:

- Available context: PRD file, validation report
- Focus: Implementation leakage detection only
- Limits: Don't validate other aspects, don't pause for user input
- Dependencies: Steps 2-6 completed - initial validations done

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise unless user explicitly requests a change.

### 1. Attempt Sub-Process Validation

**Try to use Task tool to spawn a subprocess:**

"Perform implementation leakage validation on this PRD:

**Scan for:**
1. Technology names (React, Vue, Angular, PostgreSQL, MongoDB, AWS, GCP, Azure, Docker, Kubernetes, etc.)
2. Library names (Redux, axios, lodash, Express, Django, Rails, Spring, etc.)
3. Data structures (JSON, XML, CSV) unless relevant to capability
4. Architecture patterns (MVC, microservices, serverless) unless business requirement
5. Protocol names (HTTP, REST, GraphQL, WebSockets) - check if capability-relevant

**For each term found:**
- Is this capability-relevant? (e.g., 'API consumers can access...' - API is capability)
- Or is this implementation detail? (e.g., 'React component for...' - implementation)

Document violations with line numbers and explanation.

Return structured findings with leakage counts and examples."

### 2. Graceful Degradation (if Task tool unavailable)

If Task tool unavailable, perform analysis directly:

**Implementation leakage terms to scan for:**

**Frontend Frameworks:**
React, Vue, Angular, Svelte, Solid, Next.js, Nuxt, etc.

**Backend Frameworks:**
Express, Django, Rails, Spring, Laravel, FastAPI, etc.

**Databases:**
PostgreSQL, MySQL, MongoDB, Redis, DynamoDB, Cassandra, etc.

**Cloud Platforms:**
AWS, GCP, Azure, Cloudflare, Vercel, Netlify, etc.

**Infrastructure:**
Docker, Kubernetes, Terraform, Ansible, etc.

**Libraries:**
Redux, Zustand, axios, fetch, lodash, jQuery, etc.

**Data Formats:**
JSON, XML, YAML, CSV (unless capability-relevant)

**For each term found in FRs/NFRs:**
- Determine if it's capability-relevant or implementation leakage
- Example: "API consumers can access data via REST endpoints" - API/REST is capability
- Example: "React components fetch data using Redux" - implementation leakage

**Count violations and note line numbers**

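The category scan above can be sketched as a lookup table. The term list here is a condensed assumption, not exhaustive; capability-relevant terms like "API" are deliberately left out of the table and reserved for human judgment.

```python
# Hypothetical term table condensed from the categories above.
TECH_TERMS = {
    "react": "frontend", "vue": "frontend", "angular": "frontend",
    "express": "backend", "django": "backend", "rails": "backend",
    "postgresql": "database", "mongodb": "database", "redis": "database",
    "aws": "cloud", "gcp": "cloud", "docker": "infrastructure",
    "redux": "library", "lodash": "library",
}

def scan_requirement(line_no: int, text: str) -> list:
    """Return (line, term, category) for each likely implementation leak."""
    hits = []
    for word in text.lower().replace(",", " ").split():
        if word in TECH_TERMS:
            hits.append((line_no, word, TECH_TERMS[word]))
    return hits

print(scan_requirement(42, "FR7: React components fetch data using Redux"))
# -> [(42, 'react', 'frontend'), (42, 'redux', 'library')]
print(scan_requirement(43, "FR8: API consumers can access order data"))
# -> []
```

Grouping hits by their category value also gives the per-category tallies the next step asks for.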
### 3. Tally Implementation Leakage

**By category:**

- Frontend framework leakage: count
- Backend framework leakage: count
- Database leakage: count
- Cloud platform leakage: count
- Infrastructure leakage: count
- Library leakage: count
- Other implementation details: count

**Total implementation leakage violations:** sum

### 4. Report Implementation Leakage Findings to Validation Report

Append to validation report:

```markdown
## Implementation Leakage Validation

### Leakage by Category

**Frontend Frameworks:** {count} violations
{If violations, list examples with line numbers}

**Backend Frameworks:** {count} violations
{If violations, list examples with line numbers}

**Databases:** {count} violations
{If violations, list examples with line numbers}

**Cloud Platforms:** {count} violations
{If violations, list examples with line numbers}

**Infrastructure:** {count} violations
{If violations, list examples with line numbers}

**Libraries:** {count} violations
{If violations, list examples with line numbers}

**Other Implementation Details:** {count} violations
{If violations, list examples with line numbers}

### Summary

**Total Implementation Leakage Violations:** {total}

**Severity:** [Critical if >5 violations, Warning if 2-5, Pass if <2]

**Recommendation:**
[If Critical] "Extensive implementation leakage found. Requirements specify HOW instead of WHAT. Remove all implementation details - these belong in architecture, not PRD."
[If Warning] "Some implementation leakage detected. Review violations and remove implementation details from requirements."
[If Pass] "No significant implementation leakage found. Requirements properly specify WHAT without HOW."

**Note:** API consumers, GraphQL (when required), and other capability-relevant terms are acceptable when they describe WHAT the system must do, not HOW to build it.
```

### 5. Display Progress and Auto-Proceed

Display: "**Implementation Leakage Validation Complete**

Total Violations: {count} ({severity})

**Proceeding to next validation check...**"

Without delay, read fully and follow: {nextStepFile} (step-v-08-domain-compliance-validation.md)

---

## 🚨 SYSTEM SUCCESS/FAILURE METRICS

### ✅ SUCCESS:

- Scanned FRs and NFRs for all implementation term categories
- Distinguished capability-relevant from implementation leakage
- Violations documented with line numbers and explanations
- Severity assessed correctly
- Findings reported to validation report
- Auto-proceeds to next validation step
- Subprocess attempted with graceful degradation

### ❌ SYSTEM FAILURE:

- Not scanning all implementation term categories
- Not distinguishing capability-relevant from leakage
- Missing line numbers for violations
- Not reporting findings to validation report
- Not auto-proceeding

**Master Rule:** Requirements specify WHAT, not HOW. Implementation details belong in architecture documents, not PRDs.

@@ -1,243 +0,0 @@

---
name: 'step-v-08-domain-compliance-validation'
description: 'Domain Compliance Validation - Validate domain-specific requirements are present for high-complexity domains'

# File references (ONLY variables used in this step)
nextStepFile: './step-v-09-project-type-validation.md'
prdFile: '{prd_file_path}'
prdFrontmatter: '{prd_frontmatter}'
validationReportPath: '{validation_report_path}'
domainComplexityData: '../data/domain-complexity.csv'
---

# Step 8: Domain Compliance Validation

## STEP GOAL:

Validate domain-specific requirements are present for high-complexity domains (Healthcare, Fintech, GovTech, etc.), ensuring regulatory and compliance requirements are properly documented.

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ ALWAYS deliver output in your agent communication style, using the configured `{communication_language}`

### Role Reinforcement:

- ✅ You are a Validation Architect and Quality Assurance Specialist
- ✅ If you have already been given communication or persona patterns, continue to use them while playing this new role
- ✅ We engage in systematic validation, not collaborative dialogue
- ✅ You bring domain expertise and compliance knowledge
- ✅ This step runs autonomously - no user input needed

### Step-Specific Rules:

- 🎯 Focus ONLY on domain-specific compliance requirements
- 🚫 FORBIDDEN to validate other aspects in this step
- 💬 Approach: Conditional validation based on domain classification
- 🚪 This is a validation sequence step - auto-proceeds when complete

## EXECUTION PROTOCOLS:

- 🎯 Check classification.domain from PRD frontmatter
- 💬 If low complexity (general): Skip detailed checks
- 🎯 If high complexity: Validate required special sections
- 💾 Append compliance findings to validation report
- 📖 Display "Proceeding to next check..." and load next step
- 🚫 FORBIDDEN to pause or request user input

## CONTEXT BOUNDARIES:

- Available context: PRD file with frontmatter classification, validation report
- Focus: Domain compliance only (conditional on domain complexity)
- Limits: Don't validate other aspects, conditional execution
- Dependencies: Steps 2-7 completed - format and requirements validation done

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise unless user explicitly requests a change.

### 1. Load Domain Complexity Data

Load and read the complete file at:
`{domainComplexityData}` (../data/domain-complexity.csv)

This CSV contains:

- Domain classifications and complexity levels (high/medium/low)
- Required special sections for each domain
- Key concerns and requirements for regulated industries

Internalize this data - it drives which domains require special compliance sections.

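A sketch of loading that CSV follows. The column names here are assumptions, since the real domain-complexity.csv layout isn't shown in this step.

```python
import csv
import io

# Hypothetical layout for domain-complexity.csv; the real file may differ.
SAMPLE = """domain,complexity,required_sections
healthcare,high,Clinical Requirements;Regulatory Pathway;HIPAA Compliance
fintech,high,Compliance Matrix;Security Architecture;Audit Requirements
general,low,
"""

def load_domain_rules(fh) -> dict:
    """Parse domain rows into {domain: {complexity, required_sections}}."""
    rules = {}
    for row in csv.DictReader(fh):
        sections = [s for s in row["required_sections"].split(";") if s]
        rules[row["domain"]] = {"complexity": row["complexity"],
                                "required_sections": sections}
    return rules

rules = load_domain_rules(io.StringIO(SAMPLE))
print(rules["healthcare"]["complexity"])      # high
print(rules["general"]["required_sections"])  # []
```

In practice the file handle would come from opening `{domainComplexityData}` rather than an in-memory sample.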
### 2. Extract Domain Classification

From PRD frontmatter, extract:

- `classification.domain` - what domain is this PRD for?

**If no domain classification found:**
Treat as "general" (low complexity) and proceed to step 5 (skip detailed checks)

### 3. Determine Domain Complexity

**Low complexity domains (skip detailed checks):**

- General
- Consumer apps (standard e-commerce, social, productivity)
- Content websites
- Business tools (standard)

**High complexity domains (require special sections):**

- Healthcare / Healthtech
- Fintech / Financial services
- GovTech / Public sector
- EdTech (educational records, accredited courses)
- Legal tech
- Other regulated domains

### 4. For High-Complexity Domains: Validate Required Special Sections

**Attempt subprocess validation:**

"Perform domain compliance validation for {domain}:

Based on {domain} requirements, check PRD for:

**Healthcare:**
- Clinical Requirements section
- Regulatory Pathway (FDA, HIPAA, etc.)
- Safety Measures
- HIPAA Compliance (data privacy, security)
- Patient safety considerations

**Fintech:**
- Compliance Matrix (SOC2, PCI-DSS, GDPR, etc.)
- Security Architecture
- Audit Requirements
- Fraud Prevention measures
- Financial transaction handling

**GovTech:**
- Accessibility Standards (WCAG 2.1 AA, Section 508)
- Procurement Compliance
- Security Clearance requirements
- Data residency requirements

**Other regulated domains:**
- Check for domain-specific regulatory sections
- Compliance requirements
- Special considerations

For each required section:
- Is it present in PRD?
- Is it adequately documented?
- Note any gaps

Return compliance matrix with presence/adequacy assessment."

**Graceful degradation (if no Task tool):**
- Manually check for required sections based on domain
- List present sections and missing sections
- Assess adequacy of documentation

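The presence check for required sections reduces to simple list filtering. The section names here are illustrative, taken from the Healthcare list above; real headings would be parsed from the PRD.

```python
# Sections required for the domain (from the loaded complexity data).
required = ["Clinical Requirements", "Regulatory Pathway", "HIPAA Compliance"]
# Headings actually found in the PRD (hypothetical example).
prd_headings = ["Executive Summary", "Functional Requirements",
                "Clinical Requirements", "HIPAA Compliance"]

present = [s for s in required if s in prd_headings]
missing = [s for s in required if s not in prd_headings]

print(f"Required sections present: {len(present)}/{len(required)}")  # 2/3
print("Missing:", missing)  # ['Regulatory Pathway']
```

Presence alone doesn't prove adequacy; a present-but-thin section still needs the adequacy judgment described above.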
### 5. For Low-Complexity Domains: Skip Detailed Checks

Append to validation report:

```markdown
## Domain Compliance Validation

**Domain:** {domain}
**Complexity:** Low (general/standard)
**Assessment:** N/A - No special domain compliance requirements

**Note:** This PRD is for a standard domain without regulatory compliance requirements.
```

Display: "**Domain Compliance Validation Skipped**

Domain: {domain} (low complexity)

**Proceeding to next validation check...**"

Without delay, read fully and follow: {nextStepFile}

### 6. Report Compliance Findings (High-Complexity Domains)
|
||||
|
||||
Append to validation report:
|
||||
|
||||
```markdown
|
||||
## Domain Compliance Validation
|
||||
|
||||
**Domain:** {domain}
|
||||
**Complexity:** High (regulated)
|
||||
|
||||
### Required Special Sections
|
||||
|
||||
**{Section 1 Name}:** [Present/Missing/Adequate]
|
||||
{If missing or inadequate: Note specific gaps}
|
||||
|
||||
**{Section 2 Name}:** [Present/Missing/Adequate]
|
||||
{If missing or inadequate: Note specific gaps}
|
||||
|
||||
[Continue for all required sections]
|
||||
|
||||
### Compliance Matrix
|
||||
|
||||
| Requirement | Status | Notes |
|
||||
|-------------|--------|-------|
|
||||
| {Requirement 1} | [Met/Partial/Missing] | {Notes} |
|
||||
| {Requirement 2} | [Met/Partial/Missing] | {Notes} |
|
||||
[... continue for all requirements]
|
||||
|
||||
### Summary
|
||||
|
||||
**Required Sections Present:** {count}/{total}
|
||||
**Compliance Gaps:** {count}
|
||||
|
||||
**Severity:** [Critical if missing regulatory sections, Warning if incomplete, Pass if complete]
|
||||
|
||||
**Recommendation:**
|
||||
[If Critical] "PRD is missing required domain-specific compliance sections. These are essential for {domain} products."
|
||||
[If Warning] "Some domain compliance sections are incomplete. Strengthen documentation for full compliance."
|
||||
[If Pass] "All required domain compliance sections are present and adequately documented."
|
||||
```
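
The severity rubric in the template can be expressed as one small helper. A minimal sketch, assuming the counts from the compliance matrix have already been tallied:

```python
def domain_severity(required_total: int, present: int, adequate: int) -> str:
    """Map section counts to the report's severity levels.

    Mirrors the rubric above: any missing regulatory section is Critical,
    incomplete documentation is Warning, otherwise Pass.
    """
    if present < required_total:
        return "Critical"   # at least one required section is missing
    if adequate < present:
        return "Warning"    # everything present, but some sections are thin
    return "Pass"
```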

### 7. Display Progress and Auto-Proceed

Display: "**Domain Compliance Validation Complete**

Domain: {domain} ({complexity})
Compliance Status: {status}

**Proceeding to next validation check...**"

Without delay, read fully and follow: {nextStepFile} (step-v-09-project-type-validation.md)

---

## 🚨 SYSTEM SUCCESS/FAILURE METRICS

### ✅ SUCCESS:

- Domain classification extracted correctly
- Complexity assessed appropriately
- Low complexity domains: Skipped with clear "N/A" documentation
- High complexity domains: All required sections checked
- Compliance matrix built with status for each requirement
- Severity assessed correctly
- Findings reported to validation report
- Auto-proceeds to next validation step
- Subprocess attempted with graceful degradation

### ❌ SYSTEM FAILURE:

- Not checking domain classification before proceeding
- Performing detailed checks on low complexity domains
- For high complexity: missing required section checks
- Not building compliance matrix
- Not reporting findings to validation report
- Not auto-proceeding

**Master Rule:** Domain compliance is conditional. High-complexity domains require special sections - low complexity domains skip these checks.

@@ -1,263 +0,0 @@
---
name: 'step-v-09-project-type-validation'
description: 'Project-Type Compliance Validation - Validate project-type specific requirements are properly documented'

# File references (ONLY variables used in this step)
nextStepFile: './step-v-10-smart-validation.md'
prdFile: '{prd_file_path}'
prdFrontmatter: '{prd_frontmatter}'
validationReportPath: '{validation_report_path}'
projectTypesData: '../data/project-types.csv'
---

# Step 9: Project-Type Compliance Validation

## STEP GOAL:

Validate project-type specific requirements are properly documented - different project types (api_backend, web_app, mobile_app, etc.) have different required and excluded sections.

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`

### Role Reinforcement:

- ✅ You are a Validation Architect and Quality Assurance Specialist
- ✅ If you already have been given communication or persona patterns, continue to use those while playing this new role
- ✅ We engage in systematic validation, not collaborative dialogue
- ✅ You bring project type expertise and architectural knowledge
- ✅ This step runs autonomously - no user input needed

### Step-Specific Rules:

- 🎯 Focus ONLY on project-type compliance
- 🚫 FORBIDDEN to validate other aspects in this step
- 💬 Approach: Validate required sections present, excluded sections absent
- 🚪 This is a validation sequence step - auto-proceeds when complete

## EXECUTION PROTOCOLS:

- 🎯 Check classification.projectType from PRD frontmatter
- 🎯 Validate required sections for that project type are present
- 🎯 Validate excluded sections for that project type are absent
- 💾 Append compliance findings to validation report
- 📖 Display "Proceeding to next check..." and load next step
- 🚫 FORBIDDEN to pause or request user input

## CONTEXT BOUNDARIES:

- Available context: PRD file with frontmatter classification, validation report
- Focus: Project-type compliance only
- Limits: Don't validate other aspects, don't pause for user input
- Dependencies: Steps 2-8 completed - domain and requirements validation done

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise unless user explicitly requests a change.

### 1. Load Project Types Data

Load and read the complete file at:
`{projectTypesData}` (../data/project-types.csv)

This CSV contains:
- Detection signals for each project type
- Required sections for each project type
- Skip/excluded sections for each project type
- Innovation signals

Internalize this data - it drives what sections must be present or absent for each project type.
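
The CSV load can be sketched as below; the exact column names (`project_type`, `required_sections`, `skip_sections`) and the semicolon delimiter inside cells are assumptions about the file layout, not confirmed by the spec:

```python
import csv

def load_project_types(path: str) -> dict[str, dict[str, list[str]]]:
    """Parse project-types.csv into {project_type: {"required": [...], "skip": [...]}}.

    Column names and the ';' delimiter inside cells are illustrative
    assumptions; adjust to the real CSV schema.
    """
    types = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            types[row["project_type"]] = {
                "required": [s.strip() for s in row["required_sections"].split(";") if s.strip()],
                "skip": [s.strip() for s in row["skip_sections"].split(";") if s.strip()],
            }
    return types
```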

### 2. Extract Project Type Classification

From PRD frontmatter, extract:
- `classification.projectType` - what type of project is this?

**Common project types:**
- api_backend
- web_app
- mobile_app
- desktop_app
- data_pipeline
- ml_system
- library_sdk
- infrastructure
- other

**If no projectType classification found:**
Assume "web_app" (most common) and note in findings

### 3. Determine Required and Excluded Sections from CSV Data

**From loaded project-types.csv data, for this project type:**

**Required sections:** (from required_sections column)
These MUST be present in the PRD

**Skip sections:** (from skip_sections column)
These MUST NOT be present in the PRD

**Example mappings from CSV:**
- api_backend: Required=[endpoint_specs, auth_model, data_schemas], Skip=[ux_ui, visual_design]
- mobile_app: Required=[platform_reqs, device_permissions, offline_mode], Skip=[desktop_features, cli_commands]
- cli_tool: Required=[command_structure, output_formats, config_schema], Skip=[visual_design, ux_principles, touch_interactions]
- etc.

#### Reference: Section Requirements by Project Type

**Based on project type, determine:**

**api_backend:**
- Required: Endpoint Specs, Auth Model, Data Schemas, API Versioning
- Excluded: UX/UI sections, mobile-specific sections

**web_app:**
- Required: User Journeys, UX/UI Requirements, Responsive Design
- Excluded: None typically

**mobile_app:**
- Required: Mobile UX, Platform specifics (iOS/Android), Offline mode
- Excluded: Desktop-specific sections

**desktop_app:**
- Required: Desktop UX, Platform specifics (Windows/Mac/Linux)
- Excluded: Mobile-specific sections

**data_pipeline:**
- Required: Data Sources, Data Transformation, Data Sinks, Error Handling
- Excluded: UX/UI sections

**ml_system:**
- Required: Model Requirements, Training Data, Inference Requirements, Model Performance
- Excluded: UX/UI sections (unless ML UI)

**library_sdk:**
- Required: API Surface, Usage Examples, Integration Guide
- Excluded: UX/UI sections, deployment sections

**infrastructure:**
- Required: Infrastructure Components, Deployment, Monitoring, Scaling
- Excluded: Feature requirements (this is infrastructure, not product)

### 4. Attempt Sub-Process Validation

**Try to use Task tool to spawn a subprocess:**

"Perform project-type compliance validation for {projectType}:

**Check that required sections are present:**
{List required sections for this project type}
For each: Is it present in PRD? Is it adequately documented?

**Check that excluded sections are absent:**
{List excluded sections for this project type}
For each: Is it absent from PRD? (Should not be present)

Build compliance table showing:
- Required sections: [Present/Missing/Incomplete]
- Excluded sections: [Absent/Present] (Present = violation)

Return compliance table with findings."

**Graceful degradation (if no Task tool):**
- Manually check PRD for required sections
- Manually check PRD for excluded sections
- Build compliance table

### 5. Build Compliance Table

**Required sections check:**
- For each required section: Present / Missing / Incomplete
- Count: Required sections present vs total required

**Excluded sections check:**
- For each excluded section: Absent / Present (violation)
- Count: Excluded sections present (violations)

**Total compliance score:**
- Required: {present}/{total}
- Excluded violations: {count}
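
The table-building logic above can be sketched as a set comparison; the score formula (share of passed checks across required and excluded sections) is an illustrative assumption:

```python
def build_compliance_table(prd_sections: set[str],
                           required: list[str],
                           excluded: list[str]) -> dict:
    """Compare PRD section names against required/excluded lists.

    The percentage formula is an assumption for illustration: each
    required-present and excluded-absent check counts as one pass.
    """
    present = [s for s in required if s in prd_sections]
    missing = [s for s in required if s not in prd_sections]
    violations = [s for s in excluded if s in prd_sections]
    checked = len(required) + len(excluded)
    passed = len(present) + (len(excluded) - len(violations))
    return {
        "required_present": f"{len(present)}/{len(required)}",
        "missing": missing,
        "violations": violations,
        "compliance_score": round(100 * passed / checked) if checked else 100,
    }
```

Using the api_backend example mapping, a PRD containing `ux_ui` would show it under `violations` since that section is excluded for API backends.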

### 6. Report Project-Type Compliance Findings to Validation Report

Append to validation report:

```markdown
## Project-Type Compliance Validation

**Project Type:** {projectType}

### Required Sections

**{Section 1}:** [Present/Missing/Incomplete]
{If missing or incomplete: Note specific gaps}

**{Section 2}:** [Present/Missing/Incomplete]
{If missing or incomplete: Note specific gaps}

[Continue for all required sections]

### Excluded Sections (Should Not Be Present)

**{Section 1}:** [Absent/Present] ✓
{If present: This section should not be present for {projectType}}

**{Section 2}:** [Absent/Present] ✓
{If present: This section should not be present for {projectType}}

[Continue for all excluded sections]

### Compliance Summary

**Required Sections:** {present}/{total} present
**Excluded Sections Present:** {violations} (should be 0)
**Compliance Score:** {percentage}%

**Severity:** [Critical if required sections missing, Warning if incomplete, Pass if complete]

**Recommendation:**
[If Critical] "PRD is missing required sections for {projectType}. Add missing sections to properly specify this type of project."
[If Warning] "Some required sections for {projectType} are incomplete. Strengthen documentation."
[If Pass] "All required sections for {projectType} are present. No excluded sections found."
```

### 7. Display Progress and Auto-Proceed

Display: "**Project-Type Compliance Validation Complete**

Project Type: {projectType}
Compliance: {score}%

**Proceeding to next validation check...**"

Without delay, read fully and follow: {nextStepFile} (step-v-10-smart-validation.md)

---

## 🚨 SYSTEM SUCCESS/FAILURE METRICS

### ✅ SUCCESS:

- Project type extracted correctly (or default assumed)
- Required sections validated for presence and completeness
- Excluded sections validated for absence
- Compliance table built with status for all sections
- Severity assessed correctly
- Findings reported to validation report
- Auto-proceeds to next validation step
- Subprocess attempted with graceful degradation

### ❌ SYSTEM FAILURE:

- Not checking project type before proceeding
- Missing required section checks
- Missing excluded section checks
- Not building compliance table
- Not reporting findings to validation report
- Not auto-proceeding

**Master Rule:** Different project types have different requirements. API PRDs don't need UX sections - validate accordingly.

@@ -1,209 +0,0 @@
---
name: 'step-v-10-smart-validation'
description: 'SMART Requirements Validation - Validate Functional Requirements meet SMART quality criteria'

# File references (ONLY variables used in this step)
nextStepFile: './step-v-11-holistic-quality-validation.md'
prdFile: '{prd_file_path}'
validationReportPath: '{validation_report_path}'
---

# Step 10: SMART Requirements Validation

## STEP GOAL:

Validate Functional Requirements meet SMART quality criteria (Specific, Measurable, Attainable, Relevant, Traceable), ensuring high-quality requirements.

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
- ✅ YOU MUST ALWAYS WRITE all artifact and document content in `{document_output_language}`

### Role Reinforcement:

- ✅ You are a Validation Architect and Quality Assurance Specialist
- ✅ If you already have been given communication or persona patterns, continue to use those while playing this new role
- ✅ We engage in systematic validation, not collaborative dialogue
- ✅ You bring requirements engineering expertise and quality assessment
- ✅ This step runs autonomously - no user input needed

### Step-Specific Rules:

- 🎯 Focus ONLY on FR quality assessment using SMART framework
- 🚫 FORBIDDEN to validate other aspects in this step
- 💬 Approach: Score each FR on SMART criteria (1-5 scale)
- 🚪 This is a validation sequence step - auto-proceeds when complete

## EXECUTION PROTOCOLS:

- 🎯 Extract all FRs from PRD
- 🎯 Score each FR on SMART criteria (Specific, Measurable, Attainable, Relevant, Traceable)
- 💾 Flag FRs with score < 3 in any category
- 📖 Append scoring table and suggestions to validation report
- 📖 Display "Proceeding to next check..." and load next step
- 🚫 FORBIDDEN to pause or request user input

## CONTEXT BOUNDARIES:

- Available context: PRD file, validation report
- Focus: FR quality assessment only using SMART framework
- Limits: Don't validate NFRs or other aspects, don't pause for user input
- Dependencies: Steps 2-9 completed - comprehensive validation checks done

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise unless user explicitly requests a change.

### 1. Extract All Functional Requirements

From the PRD's Functional Requirements section, extract:
- All FRs with their FR numbers (FR-001, FR-002, etc.)
- Count total FRs

### 2. Attempt Sub-Process Validation

**Try to use Task tool to spawn a subprocess:**

"Perform SMART requirements validation on these Functional Requirements:

{List all FRs}

**For each FR, score on SMART criteria (1-5 scale):**

**Specific (1-5):**
- 5: Clear, unambiguous, well-defined
- 3: Somewhat clear but could be more specific
- 1: Vague, ambiguous, unclear

**Measurable (1-5):**
- 5: Quantifiable metrics, testable
- 3: Partially measurable
- 1: Not measurable, subjective

**Attainable (1-5):**
- 5: Realistic, achievable with constraints
- 3: Probably achievable but uncertain
- 1: Unrealistic, technically infeasible

**Relevant (1-5):**
- 5: Clearly aligned with user needs and business objectives
- 3: Somewhat relevant but connection unclear
- 1: Not relevant, doesn't align with goals

**Traceable (1-5):**
- 5: Clearly traces to user journey or business objective
- 3: Partially traceable
- 1: Orphan requirement, no clear source

**For each FR with score < 3 in any category:**
- Provide specific improvement suggestions

Return scoring table with all FR scores and improvement suggestions for low-scoring FRs."

**Graceful degradation (if no Task tool):**
- Manually score each FR on SMART criteria
- Note FRs with low scores
- Provide improvement suggestions

### 3. Build Scoring Table

For each FR:
- FR number
- Specific score (1-5)
- Measurable score (1-5)
- Attainable score (1-5)
- Relevant score (1-5)
- Traceable score (1-5)
- Average score
- Flag if any category < 3

**Calculate overall FR quality:**
- Percentage of FRs with all scores ≥ 3
- Percentage of FRs with all scores ≥ 4
- Average score across all FRs and categories
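
The aggregation above can be sketched as follows; the input shape (FR id mapped to its five criterion scores) is an assumption for illustration:

```python
def summarize_smart(scores: dict[str, dict[str, int]]) -> dict:
    """Aggregate per-FR SMART scores into the summary metrics above.

    `scores` maps an FR id to its criterion scores (1-5); this structure
    is an illustrative assumption.
    """
    total = len(scores)
    flagged = [fr for fr, s in scores.items() if min(s.values()) < 3]
    all_ge3 = sum(1 for s in scores.values() if min(s.values()) >= 3)
    all_ge4 = sum(1 for s in scores.values() if min(s.values()) >= 4)
    n_scores = sum(len(s) for s in scores.values())
    average = round(sum(v for s in scores.values() for v in s.values()) / n_scores, 2)
    pct_flagged = 100 * len(flagged) / total
    # Thresholds mirror the report template: Critical >30%, Warning 10-30%, Pass <10%
    severity = "Critical" if pct_flagged > 30 else "Warning" if pct_flagged >= 10 else "Pass"
    return {
        "flagged": flagged,
        "pct_all_ge3": 100 * all_ge3 / total,
        "pct_all_ge4": 100 * all_ge4 / total,
        "average": average,
        "severity": severity,
    }
```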

### 4. Report SMART Findings to Validation Report

Append to validation report:

```markdown
## SMART Requirements Validation

**Total Functional Requirements:** {count}

### Scoring Summary

**All scores ≥ 3:** {percentage}% ({count}/{total})
**All scores ≥ 4:** {percentage}% ({count}/{total})
**Overall Average Score:** {average}/5.0

### Scoring Table

| FR # | Specific | Measurable | Attainable | Relevant | Traceable | Average | Flag |
|------|----------|------------|------------|----------|-----------|---------|------|
| FR-001 | {s1} | {m1} | {a1} | {r1} | {t1} | {avg1} | {X if any <3} |
| FR-002 | {s2} | {m2} | {a2} | {r2} | {t2} | {avg2} | {X if any <3} |
[Continue for all FRs]

**Legend:** 1=Poor, 3=Acceptable, 5=Excellent
**Flag:** X = Score < 3 in one or more categories

### Improvement Suggestions

**Low-Scoring FRs:**

**FR-{number}:** {specific suggestion for improvement}
[For each FR with score < 3 in any category]

### Overall Assessment

**Severity:** [Critical if >30% flagged FRs, Warning if 10-30%, Pass if <10%]

**Recommendation:**
[If Critical] "Many FRs have quality issues. Revise flagged FRs using SMART framework to improve clarity and testability."
[If Warning] "Some FRs would benefit from SMART refinement. Focus on flagged requirements above."
[If Pass] "Functional Requirements demonstrate good SMART quality overall."
```

### 5. Display Progress and Auto-Proceed

Display: "**SMART Requirements Validation Complete**

FR Quality: {percentage}% with acceptable scores ({severity})

**Proceeding to next validation check...**"

Without delay, read fully and follow: {nextStepFile} (step-v-11-holistic-quality-validation.md)

---

## 🚨 SYSTEM SUCCESS/FAILURE METRICS

### ✅ SUCCESS:

- All FRs extracted from PRD
- Each FR scored on all 5 SMART criteria (1-5 scale)
- FRs with scores < 3 flagged for improvement
- Improvement suggestions provided for low-scoring FRs
- Scoring table built with all FR scores
- Overall quality assessment calculated
- Findings reported to validation report
- Auto-proceeds to next validation step
- Subprocess attempted with graceful degradation

### ❌ SYSTEM FAILURE:

- Not scoring all FRs on all SMART criteria
- Missing improvement suggestions for low-scoring FRs
- Not building scoring table
- Not calculating overall quality metrics
- Not reporting findings to validation report
- Not auto-proceeding

**Master Rule:** FRs should be high-quality, not just present. SMART framework provides objective quality measure.

@@ -1,264 +0,0 @@
---
name: 'step-v-11-holistic-quality-validation'
description: 'Holistic Quality Assessment - Assess PRD as cohesive, compelling document - is it a good PRD?'

# File references (ONLY variables used in this step)
nextStepFile: './step-v-12-completeness-validation.md'
prdFile: '{prd_file_path}'
validationReportPath: '{validation_report_path}'
---

# Step 11: Holistic Quality Assessment

## STEP GOAL:

Assess the PRD as a cohesive, compelling document - evaluating document flow, dual audience effectiveness (humans and LLMs), BMAD PRD principles compliance, and overall quality rating.

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
- ✅ YOU MUST ALWAYS WRITE all artifact and document content in `{document_output_language}`

### Role Reinforcement:

- ✅ You are a Validation Architect and Quality Assurance Specialist
- ✅ If you already have been given communication or persona patterns, continue to use those while playing this new role
- ✅ We engage in systematic validation, not collaborative dialogue
- ✅ You bring analytical rigor and document quality expertise
- ✅ This step runs autonomously - no user input needed
- ✅ Uses Advanced Elicitation for multi-perspective evaluation

### Step-Specific Rules:

- 🎯 Focus ONLY on holistic document quality assessment
- 🚫 FORBIDDEN to validate individual components (done in previous steps)
- 💬 Approach: Multi-perspective evaluation using Advanced Elicitation
- 🚪 This is a validation sequence step - auto-proceeds when complete

## EXECUTION PROTOCOLS:

- 🎯 Use Advanced Elicitation for multi-perspective assessment
- 🎯 Evaluate document flow, dual audience, BMAD principles
- 💾 Append comprehensive assessment to validation report
- 📖 Display "Proceeding to next check..." and load next step
- 🚫 FORBIDDEN to pause or request user input

## CONTEXT BOUNDARIES:

- Available context: Complete PRD file, validation report with findings from steps 1-10
- Focus: Holistic quality - the WHOLE document
- Limits: Don't re-validate individual components, don't pause for user input
- Dependencies: Steps 1-10 completed - all systematic checks done

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise unless user explicitly requests a change.

### 1. Attempt Sub-Process with Advanced Elicitation

**Try to use Task tool to spawn a subprocess using Advanced Elicitation:**

"Perform holistic quality assessment on this PRD using multi-perspective evaluation:

**Advanced Elicitation workflow:**
Invoke the `bmad-advanced-elicitation` skill

**Evaluate the PRD from these perspectives:**

**1. Document Flow & Coherence:**
- Read entire PRD
- Evaluate narrative flow - does it tell a cohesive story?
- Check transitions between sections
- Assess consistency - is it coherent throughout?
- Evaluate readability - is it clear and well-organized?

**2. Dual Audience Effectiveness:**

**For Humans:**
- Executive-friendly: Can executives understand vision and goals quickly?
- Developer clarity: Do developers have clear requirements to build from?
- Designer clarity: Do designers understand user needs and flows?
- Stakeholder decision-making: Can stakeholders make informed decisions?

**For LLMs:**
- Machine-readable structure: Is the PRD structured for LLM consumption?
- UX readiness: Can an LLM generate UX designs from this?
- Architecture readiness: Can an LLM generate architecture from this?
- Epic/Story readiness: Can an LLM break down into epics and stories?

**3. BMAD PRD Principles Compliance:**
- Information density: Every sentence carries weight?
- Measurability: Requirements testable?
- Traceability: Requirements trace to sources?
- Domain awareness: Domain-specific considerations included?
- Zero anti-patterns: No filler or wordiness?
- Dual audience: Works for both humans and LLMs?
- Markdown format: Proper structure and formatting?

**4. Overall Quality Rating:**
Rate the PRD on 5-point scale:
- Excellent (5/5): Exemplary, ready for production use
- Good (4/5): Strong with minor improvements needed
- Adequate (3/5): Acceptable but needs refinement
- Needs Work (2/5): Significant gaps or issues
- Problematic (1/5): Major flaws, needs substantial revision

**5. Top 3 Improvements:**
Identify the 3 most impactful improvements to make this a great PRD

Return comprehensive assessment with all perspectives, rating, and top 3 improvements."

**Graceful degradation (if no Task tool or Advanced Elicitation unavailable):**
- Perform holistic assessment directly in current context
- Read complete PRD
- Evaluate document flow, coherence, transitions
- Assess dual audience effectiveness
- Check BMAD principles compliance
- Assign overall quality rating
- Identify top 3 improvements

### 2. Synthesize Assessment

**Compile findings from multi-perspective evaluation:**

**Document Flow & Coherence:**
- Overall assessment: [Excellent/Good/Adequate/Needs Work/Problematic]
- Key strengths: [list]
- Key weaknesses: [list]

**Dual Audience Effectiveness:**
- For Humans: [assessment]
- For LLMs: [assessment]
- Overall dual audience score: [1-5]

**BMAD Principles Compliance:**
- Principles met: [count]/7
- Principles with issues: [list]

**Overall Quality Rating:** [1-5 with label]

**Top 3 Improvements:**
1. [Improvement 1]
2. [Improvement 2]
3. [Improvement 3]

### 3. Report Holistic Quality Findings to Validation Report

Append to validation report:

```markdown
## Holistic Quality Assessment

### Document Flow & Coherence

**Assessment:** [Excellent/Good/Adequate/Needs Work/Problematic]

**Strengths:**
{List key strengths}

**Areas for Improvement:**
{List key weaknesses}

### Dual Audience Effectiveness

**For Humans:**
- Executive-friendly: [assessment]
- Developer clarity: [assessment]
- Designer clarity: [assessment]
- Stakeholder decision-making: [assessment]

**For LLMs:**
- Machine-readable structure: [assessment]
- UX readiness: [assessment]
- Architecture readiness: [assessment]
- Epic/Story readiness: [assessment]

**Dual Audience Score:** {score}/5

### BMAD PRD Principles Compliance

| Principle | Status | Notes |
|-----------|--------|-------|
| Information Density | [Met/Partial/Not Met] | {notes} |
| Measurability | [Met/Partial/Not Met] | {notes} |
| Traceability | [Met/Partial/Not Met] | {notes} |
| Domain Awareness | [Met/Partial/Not Met] | {notes} |
| Zero Anti-Patterns | [Met/Partial/Not Met] | {notes} |
| Dual Audience | [Met/Partial/Not Met] | {notes} |
| Markdown Format | [Met/Partial/Not Met] | {notes} |

**Principles Met:** {count}/7

### Overall Quality Rating

**Rating:** {rating}/5 - {label}

**Scale:**
- 5/5 - Excellent: Exemplary, ready for production use
- 4/5 - Good: Strong with minor improvements needed
- 3/5 - Adequate: Acceptable but needs refinement
- 2/5 - Needs Work: Significant gaps or issues
- 1/5 - Problematic: Major flaws, needs substantial revision

### Top 3 Improvements

1. **{Improvement 1}**
{Brief explanation of why and how}

2. **{Improvement 2}**
{Brief explanation of why and how}

3. **{Improvement 3}**
{Brief explanation of why and how}

### Summary

**This PRD is:** {one-sentence overall assessment}

**To make it great:** Focus on the top 3 improvements above.
```
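
The rating labels and principles tally used in this template can be expressed as two small helpers; the dict shape for the compliance statuses is an assumption for illustration:

```python
RATING_LABELS = {5: "Excellent", 4: "Good", 3: "Adequate",
                 2: "Needs Work", 1: "Problematic"}

def principles_met(compliance: dict[str, str]) -> str:
    """Count principles with status 'Met' out of all principles checked.

    `compliance` maps principle name -> 'Met'/'Partial'/'Not Met'; this
    shape is an illustrative assumption.
    """
    met = sum(1 for status in compliance.values() if status == "Met")
    return f"{met}/{len(compliance)}"
```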

### 4. Display Progress and Auto-Proceed

Display: "**Holistic Quality Assessment Complete**

Overall Rating: {rating}/5 - {label}

**Proceeding to final validation checks...**"

Without delay, read fully and follow: {nextStepFile} (step-v-12-completeness-validation.md)

---

## 🚨 SYSTEM SUCCESS/FAILURE METRICS

### ✅ SUCCESS:

- Advanced Elicitation used for multi-perspective evaluation (or graceful degradation)
- Document flow & coherence assessed
- Dual audience effectiveness evaluated (humans and LLMs)
- BMAD PRD principles compliance checked
- Overall quality rating assigned (1-5 scale)
- Top 3 improvements identified
- Comprehensive assessment reported to validation report
- Auto-proceeds to next validation step
- Subprocess attempted with graceful degradation

### ❌ SYSTEM FAILURE:

- Not using Advanced Elicitation for multi-perspective evaluation
- Missing document flow assessment
- Missing dual audience evaluation
- Not checking all BMAD principles
- Not assigning overall quality rating
- Missing top 3 improvements
- Not reporting comprehensive assessment to validation report
- Not auto-proceeding

**Master Rule:** This evaluates the WHOLE document, not just components. Answers "Is this a good PRD?" and "What would make it great?"

@@ -1,242 +0,0 @@
---
name: 'step-v-12-completeness-validation'
description: 'Completeness Check - Final comprehensive completeness check before report generation'

# File references (ONLY variables used in this step)
nextStepFile: './step-v-13-report-complete.md'
prdFile: '{prd_file_path}'
prdFrontmatter: '{prd_frontmatter}'
validationReportPath: '{validation_report_path}'
---

# Step 12: Completeness Validation

## STEP GOAL:

Final comprehensive completeness check: validate that no template variables remain, that each section has its required content, that section-specific completeness criteria are met, and that the frontmatter is fully populated.

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading the next step with 'C', ensure the entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS SPEAK in your agent communication style using the configured `{communication_language}`

### Role Reinforcement:

- ✅ You are a Validation Architect and Quality Assurance Specialist
- ✅ If you have already been given communication or persona patterns, continue to use them while playing this new role
- ✅ We engage in systematic validation, not collaborative dialogue
- ✅ You bring attention to detail and completeness verification
- ✅ This step runs autonomously - no user input needed

### Step-Specific Rules:

- 🎯 Focus ONLY on completeness verification
- 🚫 FORBIDDEN to validate quality (done in step 11) or other aspects
- 💬 Approach: Systematic checklist-style verification
- 🚪 This is a validation sequence step - auto-proceeds when complete

## EXECUTION PROTOCOLS:

- 🎯 Check template completeness (no variables remaining)
- 🎯 Validate content completeness (each section has required content)
- 🎯 Validate section-specific completeness
- 🎯 Validate frontmatter completeness
- 💾 Append completeness matrix to validation report
- 📖 Display "Proceeding to final step..." and load next step
- 🚫 FORBIDDEN to pause or request user input

## CONTEXT BOUNDARIES:

- Available context: Complete PRD file, frontmatter, validation report
- Focus: Completeness verification only (final gate)
- Limits: Don't assess quality, don't pause for user input
- Dependencies: Steps 1-11 completed - all validation checks done

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise unless the user explicitly requests a change.

### 1. Attempt Sub-Process Validation

**Try to use the Task tool to spawn a subprocess:**

"Perform completeness validation on this PRD - final gate check:

**1. Template Completeness:**
- Scan the PRD for any remaining template variables
- Look for: {variable}, {{variable}}, {placeholder}, [placeholder], etc.
- List any found with line numbers

**2. Content Completeness:**
- Executive Summary: Has vision statement? ({key content})
- Success Criteria: All criteria measurable? ({metrics present})
- Product Scope: In-scope and out-of-scope defined? ({both present})
- User Journeys: User types identified? ({users listed})
- Functional Requirements: FRs listed with proper format? ({FRs present})
- Non-Functional Requirements: NFRs with metrics? ({NFRs present})

For each section: Is required content present? (Yes/No/Partial)

**3. Section-Specific Completeness:**
- Success Criteria: Does each have a specific measurement method?
- User Journeys: Do they cover all user types?
- Functional Requirements: Do they cover the MVP scope?
- Non-Functional Requirements: Does each have specific criteria?

**4. Frontmatter Completeness:**
- stepsCompleted: Populated?
- classification: Present (domain, projectType)?
- inputDocuments: Tracked?
- date: Present?

Return a completeness matrix with status for each check."

**Graceful degradation (if no Task tool):**
- Manually scan for template variables
- Manually check each section for required content
- Manually verify frontmatter fields
- Build the completeness matrix
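For illustration, the template-variable scan in check 1 could be automated with a short script. This is a hedged sketch rather than part of the workflow: the regex patterns, the function name, and the decision to flag only single-word `[placeholder]` brackets are assumptions, and a real pass would also need to skip legitimate braces inside code fences.

```python
import re

# Assumed placeholder shapes from check 1: {var}, {{var}}, [placeholder].
# Brackets are matched only as single lowercase words, to avoid flagging
# ordinary markdown links or citation brackets.
BRACE_RE = re.compile(r"\{\{?[A-Za-z_][A-Za-z0-9_ ]*\}?\}")
BRACKET_RE = re.compile(r"\[[a-z_]+\]")


def find_template_vars(text: str) -> list[tuple[int, str]]:
    """Return (line_number, placeholder) pairs for leftover template variables."""
    hits: list[tuple[int, str]] = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pattern in (BRACE_RE, BRACKET_RE):
            hits.extend((lineno, m.group(0)) for m in pattern.finditer(line))
    return hits
```

An empty result corresponds to the "No template variables remaining ✓" outcome in the report template.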
### 2. Build Completeness Matrix

**Template Completeness:**
- Template variables found: count
- List if any found

**Content Completeness by Section:**
- Executive Summary: Complete / Incomplete / Missing
- Success Criteria: Complete / Incomplete / Missing
- Product Scope: Complete / Incomplete / Missing
- User Journeys: Complete / Incomplete / Missing
- Functional Requirements: Complete / Incomplete / Missing
- Non-Functional Requirements: Complete / Incomplete / Missing
- Other sections: [List completeness]

**Section-Specific Completeness:**
- Success criteria measurable: All / Some / None
- Journeys cover all users: Yes / Partial / No
- FRs cover MVP scope: Yes / Partial / No
- NFRs have specific criteria: All / Some / None

**Frontmatter Completeness:**
- stepsCompleted: Present / Missing
- classification: Present / Missing
- inputDocuments: Present / Missing
- date: Present / Missing

**Overall completeness:**
- Sections complete: X/Y
- Critical gaps: [list if any]
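As an illustration of the frontmatter checks above, the field test can be reduced to a shallow key scan. This sketch is an assumption-laden example (the function name and the flat top-level `key:` heuristic are mine), not the workflow's required implementation:

```python
REQUIRED_FIELDS = ("stepsCompleted", "classification", "inputDocuments", "date")


def frontmatter_completeness(doc: str) -> dict[str, str]:
    """Mark each required frontmatter field Present/Missing.

    Shallow check: a field counts as Present if it appears as a
    top-level `key:` line inside the leading `---` block. Field
    values are not validated.
    """
    lines = doc.splitlines()
    block: list[str] = []
    if lines and lines[0].strip() == "---":
        for line in lines[1:]:
            if line.strip() == "---":
                break
            block.append(line)
    keys = {line.split(":", 1)[0].strip()
            for line in block
            if ":" in line and not line.startswith(" ")}
    return {field: ("Present" if field in keys else "Missing")
            for field in REQUIRED_FIELDS}
```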
### 3. Report Completeness Findings to Validation Report

Append to the validation report:

```markdown
## Completeness Validation

### Template Completeness

**Template Variables Found:** {count}
{If count > 0, list variables with line numbers}
{If count = 0, note: No template variables remaining ✓}

### Content Completeness by Section

**Executive Summary:** [Complete/Incomplete/Missing]
{If incomplete or missing, note specific gaps}

**Success Criteria:** [Complete/Incomplete/Missing]
{If incomplete or missing, note specific gaps}

**Product Scope:** [Complete/Incomplete/Missing]
{If incomplete or missing, note specific gaps}

**User Journeys:** [Complete/Incomplete/Missing]
{If incomplete or missing, note specific gaps}

**Functional Requirements:** [Complete/Incomplete/Missing]
{If incomplete or missing, note specific gaps}

**Non-Functional Requirements:** [Complete/Incomplete/Missing]
{If incomplete or missing, note specific gaps}

### Section-Specific Completeness

**Success Criteria Measurability:** [All/Some/None] measurable
{If Some or None, note which criteria lack metrics}

**User Journeys Coverage:** [Yes/Partial/No] - covers all user types
{If Partial or No, note missing user types}

**FRs Cover MVP Scope:** [Yes/Partial/No]
{If Partial or No, note scope gaps}

**NFRs Have Specific Criteria:** [All/Some/None]
{If Some or None, note which NFRs lack specificity}

### Frontmatter Completeness

**stepsCompleted:** [Present/Missing]
**classification:** [Present/Missing]
**inputDocuments:** [Present/Missing]
**date:** [Present/Missing]

**Frontmatter Completeness:** {complete_fields}/4

### Completeness Summary

**Overall Completeness:** {percentage}% ({complete_sections}/{total_sections})

**Critical Gaps:** [count] [list if any]
**Minor Gaps:** [count] [list if any]

**Severity:** [Critical if template variables exist or critical sections missing; Warning if minor gaps; Pass if complete]

**Recommendation:**
[If Critical] "PRD has completeness gaps that must be addressed before use. Fix template variables and complete missing sections."
[If Warning] "PRD has minor completeness gaps. Address them for complete documentation."
[If Pass] "PRD is complete, with all required sections and content present."
```
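The summary template's percentage and severity rules can be sketched as a small function. This is illustrative only; treating every Missing section as critical is my assumption about what "critical sections missing" means:

```python
def completeness_summary(template_vars: int,
                         section_status: dict[str, str]) -> tuple[int, str]:
    """Compute overall completeness % and severity.

    section_status maps section name -> "Complete"/"Incomplete"/"Missing".
    Severity: Critical if template variables remain or any section is
    Missing (assumption: all tracked sections count as critical);
    Warning if any section is Incomplete; Pass otherwise.
    """
    total = len(section_status)
    complete = sum(1 for s in section_status.values() if s == "Complete")
    pct = round(100 * complete / total) if total else 0
    if template_vars > 0 or "Missing" in section_status.values():
        return pct, "Critical"
    if "Incomplete" in section_status.values():
        return pct, "Warning"
    return pct, "Pass"
```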
### 4. Display Progress and Auto-Proceed

Display: "**Completeness Validation Complete**

Overall Completeness: {percentage}% ({severity})

**Proceeding to final step...**"

Without delay, read fully and follow: {nextStepFile} (step-v-13-report-complete.md)

---

## 🚨 SYSTEM SUCCESS/FAILURE METRICS

### ✅ SUCCESS:

- Scanned for template variables systematically
- Validated each section for required content
- Validated section-specific completeness (measurability, coverage, scope)
- Validated frontmatter completeness
- Completeness matrix built with all checks
- Severity assessed correctly
- Findings reported to validation report
- Auto-proceeds to final step
- Subprocess attempted with graceful degradation

### ❌ SYSTEM FAILURE:

- Not scanning for template variables
- Missing section-specific completeness checks
- Not validating frontmatter
- Not building the completeness matrix
- Not reporting findings to validation report
- Not auto-proceeding

**Master Rule:** Final gate to ensure the document is complete before presenting findings. Template variables or critical gaps must be fixed.

@@ -1,232 +0,0 @@
---
name: 'step-v-13-report-complete'
description: 'Validation Report Complete - Finalize report, summarize findings, present to user, offer next steps'

# File references (ONLY variables used in this step)
validationReportPath: '{validation_report_path}'
prdFile: '{prd_file_path}'
---

# Step 13: Validation Report Complete

## STEP GOAL:

Finalize the validation report, summarize all findings from steps 1-12, present the summary to the user conversationally, and offer actionable next steps.

## MANDATORY EXECUTION RULES (READ FIRST):

### Universal Rules:

- 🛑 NEVER generate content without user input
- 📖 CRITICAL: Read the complete step file before taking any action
- 🔄 CRITICAL: When loading the next step with 'C', ensure the entire file is read
- 📋 YOU ARE A FACILITATOR, not a content generator
- ✅ YOU MUST ALWAYS SPEAK in your agent communication style using the configured `{communication_language}`
- ✅ YOU MUST ALWAYS WRITE all artifact and document content in `{document_output_language}`

### Role Reinforcement:

- ✅ You are a Validation Architect and Quality Assurance Specialist
- ✅ If you have already been given communication or persona patterns, continue to use them while playing this new role
- ✅ We engage in collaborative dialogue, not command-response
- ✅ You bring synthesis and summary expertise
- ✅ This is the FINAL step - requires user interaction

### Step-Specific Rules:

- 🎯 Focus ONLY on summarizing findings and presenting options
- 🚫 FORBIDDEN to perform additional validation
- 💬 Approach: Conversational summary with clear next steps
- 🚪 This is the final step - no next step after this

## EXECUTION PROTOCOLS:

- 🎯 Load the complete validation report
- 🎯 Summarize all findings from steps 1-12
- 🎯 Update report frontmatter with final status
- 💬 Present summary to user conversationally
- 💬 Offer menu options for next actions
- 🚫 FORBIDDEN to proceed without user selection

## CONTEXT BOUNDARIES:

- Available context: Complete validation report with findings from all validation steps
- Focus: Summary and presentation only (no new validation)
- Limits: Don't add new findings; just synthesize existing ones
- Dependencies: Steps 1-12 completed - all validation checks done

## MANDATORY SEQUENCE

**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise unless the user explicitly requests a change.

### 1. Load Complete Validation Report

Read the entire validation report from {validationReportPath}

Extract all findings from:
- Format Detection (Step 2)
- Parity Analysis (Step 2B, if applicable)
- Information Density (Step 3)
- Product Brief Coverage (Step 4)
- Measurability (Step 5)
- Traceability (Step 6)
- Implementation Leakage (Step 7)
- Domain Compliance (Step 8)
- Project-Type Compliance (Step 9)
- SMART Requirements (Step 10)
- Holistic Quality (Step 11)
- Completeness (Step 12)

### 2. Update Report Frontmatter with Final Status

Update the validation report frontmatter:

```yaml
---
validationTarget: '{prd_path}'
validationDate: '{current_date}'
inputDocuments: [list of documents]
validationStepsCompleted: ['step-v-01-discovery', 'step-v-02-format-detection', 'step-v-03-density-validation', 'step-v-04-brief-coverage-validation', 'step-v-05-measurability-validation', 'step-v-06-traceability-validation', 'step-v-07-implementation-leakage-validation', 'step-v-08-domain-compliance-validation', 'step-v-09-project-type-validation', 'step-v-10-smart-validation', 'step-v-11-holistic-quality-validation', 'step-v-12-completeness-validation']
validationStatus: COMPLETE
holisticQualityRating: '{rating from step 11}'
overallStatus: '{Pass/Warning/Critical based on all findings}'
---
```

### 3. Create Summary of Findings

**Overall Status:**
- Determine from all validation findings
- **Pass:** All critical checks pass; minor warnings acceptable
- **Warning:** Some issues found, but the PRD is usable
- **Critical:** Major issues that prevent the PRD from being fit for purpose
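The three-way status decision above amounts to a small precedence rule, sketched here for illustration (counting findings into `critical_issues` and `warnings` is assumed to happen upstream, when the report is read):

```python
def overall_status(critical_issues: int, warnings: int) -> str:
    """Collapse step 1-12 findings into the report's overall status.

    Precedence: any critical finding wins, then any warning, else Pass.
    """
    if critical_issues > 0:
        return "Critical"
    if warnings > 0:
        return "Warning"
    return "Pass"
```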
**Quick Results Table:**
- Format: [classification]
- Information Density: [severity]
- Measurability: [severity]
- Traceability: [severity]
- Implementation Leakage: [severity]
- Domain Compliance: [status]
- Project-Type Compliance: [compliance score]
- SMART Quality: [percentage]
- Holistic Quality: [rating/5]
- Completeness: [percentage]

**Critical Issues:** List from all validation steps
**Warnings:** List from all validation steps
**Strengths:** List positives from all validation steps

**Holistic Quality Rating:** From step 11
**Top 3 Improvements:** From step 11

**Recommendation:** Based on overall status

### 4. Present Summary to User Conversationally

Display:

"**✓ PRD Validation Complete**

**Overall Status:** {Pass/Warning/Critical}

**Quick Results:**
{Present quick results table with key findings}

**Critical Issues:** {count or "None"}
{If any, list briefly}

**Warnings:** {count or "None"}
{If any, list briefly}

**Strengths:**
{List key strengths}

**Holistic Quality:** {rating}/5 - {label}

**Top 3 Improvements:**
1. {Improvement 1}
2. {Improvement 2}
3. {Improvement 3}

**Recommendation:**
{Based on overall status:
- Pass: "PRD is in good shape. Address minor improvements to make it great."
- Warning: "PRD is usable but has issues that should be addressed. Review warnings and improve where needed."
- Critical: "PRD has significant issues that should be fixed before use. Focus on the critical issues above."}

**What would you like to do next?**"

### 5. Present MENU OPTIONS

Display:

**[R] Review Detailed Findings** - Walk through the validation report section by section
**[E] Use Edit Workflow** - Use the validation report with the Edit workflow for systematic improvements
**[F] Fix Simpler Items** - Immediate fixes for simple issues (anti-patterns, leakage, missing headers)
**[X] Exit** - Exit and suggest next steps

#### EXECUTION RULES:

- ALWAYS halt and wait for user input after presenting the menu
- Only proceed based on user selection

#### Menu Handling Logic:

- **IF R (Review Detailed Findings):**
  - Walk through the validation report section by section
  - Present findings from each validation step
  - Allow the user to ask questions
  - After review, return to the menu

- **IF E (Use Edit Workflow):**
  - Explain: "The Edit workflow (steps-e/) can use this validation report to systematically address issues. Edit mode will guide you through discovering what to edit, reviewing the PRD, and applying targeted improvements."
  - Offer: "Would you like to launch Edit mode now? It will help you fix validation findings systematically."
  - If yes: Read fully and follow: `./steps-e/step-e-01-discovery.md`
  - If no: Return to the menu

- **IF F (Fix Simpler Items):**
  - Offer immediate fixes for:
    - Template variables (fill in with appropriate content)
    - Conversational filler (remove wordy phrases)
    - Implementation leakage (remove technology names from FRs/NFRs)
    - Missing section headers (add ## headers)
  - Ask: "Which simple fixes would you like me to make?"
  - If the user specifies fixes, make them and update the validation report
  - Return to the menu

- **IF X (Exit):**
  - Display: "**Validation Report Saved:** {validationReportPath}"
  - Display: "**Summary:** {overall status} - {recommendation}"
  - PRD Validation complete. Invoke the `bmad-help` skill.

- **IF any other input:** Help the user, then redisplay the menu

---

## 🚨 SYSTEM SUCCESS/FAILURE METRICS

### ✅ SUCCESS:

- Complete validation report loaded successfully
- All findings from steps 1-12 summarized
- Report frontmatter updated with final status
- Overall status determined correctly (Pass/Warning/Critical)
- Quick results table presented
- Critical issues, warnings, and strengths listed
- Holistic quality rating included
- Top 3 improvements presented
- Clear recommendation provided
- Menu options presented with clear explanations
- User can review findings, get help, or exit

### ❌ SYSTEM FAILURE:

- Not loading the complete validation report
- Missing summary of findings
- Not updating report frontmatter
- Not determining overall status
- Missing menu options
- Unclear next steps

**Master Rule:** The user needs a clear summary and actionable next steps. The Edit workflow is best for complex issues; immediate fixes are available for simpler ones.

@@ -1,65 +0,0 @@
---
name: validate-prd
description: 'Validate a PRD against standards. Use when the user says "validate this PRD" or "run PRD validation"'
standalone: false
main_config: '{project-root}/_bmad/bmm/config.yaml'
validateWorkflow: './steps-v/step-v-01-discovery.md'
---

# PRD Validate Workflow

**Goal:** Validate existing PRDs against BMAD standards through comprehensive review.

**Your Role:** Validation Architect and Quality Assurance Specialist.

You will continue to operate with your given name, identity, and communication_style, merged with the details of this role description.

## WORKFLOW ARCHITECTURE

This uses **step-file architecture** for disciplined execution:

### Core Principles

- **Micro-file Design**: Each step is a self-contained instruction file that is part of an overall workflow and must be followed exactly
- **Just-In-Time Loading**: Only the current step file is in memory - never load future step files until told to do so
- **Sequential Enforcement**: The sequence within step files must be completed in order; no skipping or optimization allowed
- **State Tracking**: Document progress in the output file's frontmatter using the `stepsCompleted` array when a workflow produces a document
- **Append-Only Building**: Build documents by appending content as directed to the output file

### Step Processing Rules

1. **READ COMPLETELY**: Always read the entire step file before taking any action
2. **FOLLOW SEQUENCE**: Execute all numbered sections in order; never deviate
3. **WAIT FOR INPUT**: If a menu is presented, halt and wait for user selection
4. **CHECK CONTINUATION**: If the step has a menu with Continue as an option, only proceed to the next step when the user selects 'C' (Continue)
5. **SAVE STATE**: Update `stepsCompleted` in frontmatter before loading the next step
6. **LOAD NEXT**: When directed, read fully and follow the next step file

### Critical Rules (NO EXCEPTIONS)

- 🛑 **NEVER** load multiple step files simultaneously
- 📖 **ALWAYS** read the entire step file before execution
- 🚫 **NEVER** skip steps or optimize the sequence
- 💾 **ALWAYS** update the frontmatter of output files when writing the final output for a specific step
- 🎯 **ALWAYS** follow the exact instructions in the step file
- ⏸️ **ALWAYS** halt at menus and wait for user input
- 📋 **NEVER** create mental todo lists from future steps

## INITIALIZATION SEQUENCE

### 1. Configuration Loading

Load and read the full config from {main_config} and resolve:

- `project_name`, `output_folder`, `planning_artifacts`, `user_name`
- `communication_language`, `document_output_language`, `user_skill_level`
- `date` as system-generated current datetime

✅ YOU MUST ALWAYS SPEAK in your agent communication style using the configured `{communication_language}`.
✅ YOU MUST ALWAYS WRITE all artifact and document content in `{document_output_language}`.
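A minimal sketch of this configuration-loading step, assuming a flat `key: value` config.yaml (a real implementation would use a proper YAML parser; the function name and the hard failure on missing keys are my choices, not part of the workflow):

```python
REQUIRED_KEYS = (
    "project_name", "output_folder", "planning_artifacts", "user_name",
    "communication_language", "document_output_language", "user_skill_level",
)


def parse_config(text: str) -> dict[str, str]:
    """Resolve required variables from flat YAML-style `key: value` text."""
    config: dict[str, str] = {}
    for raw in text.splitlines():
        line = raw.strip()
        if line and not line.startswith("#") and ":" in line:
            key, value = line.split(":", 1)
            config[key.strip()] = value.strip().strip("'\"")
    missing = [k for k in REQUIRED_KEYS if k not in config]
    if missing:
        raise KeyError(f"config.yaml is missing required keys: {missing}")
    return config
```

(`date` is generated at runtime rather than read from the file, so it is intentionally absent from `REQUIRED_KEYS`.)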
### 2. Route to Validate Workflow

"**Validate Mode: Validating an existing PRD against BMAD standards.**"

Then read fully and follow: `{validateWorkflow}` (steps-v/step-v-01-discovery.md)
@@ -36,10 +36,12 @@ When you are in this persona and the user calls a skill, this persona must carry

## On Activation

1. **Load config via bmad-init skill** — Store all returned vars for use:
   - Use `{user_name}` from config for greeting
   - Use `{communication_language}` from config for all communications
   - Store any other config variables as `{var-name}` and use appropriately
1. Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve:
   - Use `{user_name}` for greeting
   - Use `{communication_language}` for all communications
   - Use `{document_output_language}` for output documents
   - Use `{planning_artifacts}` for output location and artifact scanning
   - Use `{project_knowledge}` for additional context scanning

2. **Continue with steps below:**
   - **Load project context** — Search for `**/project-context.md`. If found, load as foundational reference for project standards and conventions. If not found, continue without it.
@@ -33,17 +33,15 @@

- ⏸️ **ALWAYS** halt at menus and wait for user input
- 📋 **NEVER** create mental todo lists from future steps

---
## Activation

## INITIALIZATION SEQUENCE
1. Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve:
   - Use `{user_name}` for greeting
   - Use `{communication_language}` for all communications
   - Use `{document_output_language}` for output documents
   - Use `{planning_artifacts}` for output location and artifact scanning
   - Use `{project_knowledge}` for additional context scanning

### 1. Module Configuration Loading

Load and read the full config from {project-root}/_bmad/bmm/config.yaml and resolve:

- `project_name`, `output_folder`, `planning_artifacts`, `user_name`, `communication_language`, `document_output_language`
- ✅ YOU MUST ALWAYS SPEAK in your agent communication style using the configured `{communication_language}`

### 2. First Step EXECUTION
2. First Step EXECUTION

Read fully and follow: `./steps/step-01-document-discovery.md` to begin the workflow.
@@ -16,22 +16,16 @@ This uses **micro-file architecture** for disciplined execution:

- Append-only document building through conversation
- You NEVER proceed to a step file if the current step file indicates the user must approve and indicate continuation.

---
## Activation

## INITIALIZATION
1. Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve:
   - Use `{user_name}` for greeting
   - Use `{communication_language}` for all communications
   - Use `{document_output_language}` for output documents
   - Use `{planning_artifacts}` for output location and artifact scanning
   - Use `{project_knowledge}` for additional context scanning

### Configuration Loading

Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve:

- `project_name`, `output_folder`, `planning_artifacts`, `user_name`
- `communication_language`, `document_output_language`, `user_skill_level`
- `date` as system-generated current datetime
- ✅ YOU MUST ALWAYS SPEAK in your agent communication style using the configured `{communication_language}`

---

## EXECUTION
2. EXECUTION

Read fully and follow: `./steps/step-01-init.md` to begin the workflow.
@@ -37,17 +37,15 @@ This uses **step-file architecture** for disciplined execution:

- ⏸️ **ALWAYS** halt at menus and wait for user input
- 📋 **NEVER** create mental todo lists from future steps

---
## Activation

## INITIALIZATION SEQUENCE

### 1. Configuration Loading

Load and read the full config from {project-root}/_bmad/bmm/config.yaml and resolve:

- `project_name`, `output_folder`, `planning_artifacts`, `user_name`, `communication_language`, `document_output_language`
- ✅ YOU MUST ALWAYS SPEAK in your agent communication style using the configured `{communication_language}`

### 2. First Step EXECUTION
1. Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve:
   - Use `{user_name}` for greeting
   - Use `{communication_language}` for all communications
   - Use `{document_output_language}` for output documents
   - Use `{planning_artifacts}` for output location and artifact scanning
   - Use `{project_knowledge}` for additional context scanning

2. First Step EXECUTION

Read fully and follow: `./steps/step-01-validate-prerequisites.md` to begin the workflow.
@@ -18,25 +18,21 @@ This uses **micro-file architecture** for disciplined execution:

---

## INITIALIZATION
## Activation

### Configuration Loading
1. Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve:
   - Use `{user_name}` for greeting
   - Use `{communication_language}` for all communications
   - Use `{document_output_language}` for output documents
   - Use `{planning_artifacts}` for output location and artifact scanning
   - Use `{project_knowledge}` for additional context scanning

Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve:

- `project_name`, `output_folder`, `user_name`
- `communication_language`, `document_output_language`, `user_skill_level`
- `date` as system-generated current datetime
- ✅ YOU MUST ALWAYS SPEAK in your agent communication style using the configured `{communication_language}`
- ✅ YOU MUST ALWAYS WRITE all artifact and document content in `{document_output_language}`

### Paths

- `output_file` = `{output_folder}/project-context.md`

---

## EXECUTION
EXECUTION

Load and execute `./steps/step-01-discover.md` to begin the workflow.
@@ -46,10 +46,12 @@ When you are in this persona and the user calls a skill, this persona must carry

## On Activation

1. **Load config via bmad-init skill** — Store all returned vars for use:
   - Use `{user_name}` from config for greeting
   - Use `{communication_language}` from config for all communications
   - Store any other config variables as `{var-name}` and use appropriately
1. Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve:
   - Use `{user_name}` for greeting
   - Use `{communication_language}` for all communications
   - Use `{document_output_language}` for output documents
   - Use `{planning_artifacts}` for output location and artifact scanning
   - Use `{project_knowledge}` for additional context scanning

2. **Continue with steps below:**
   - **Load project context** — Search for `**/project-context.md`. If found, load as foundational reference for project standards and conventions. If not found, continue without it.
@@ -43,10 +43,12 @@ When you are in this persona and the user calls a skill, this persona must carry

## On Activation

1. **Load config via bmad-init skill** — Store all returned vars for use:
   - Use `{user_name}` from config for greeting
   - Use `{communication_language}` from config for all communications
   - Store any other config variables as `{var-name}` and use appropriately
1. Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve:
   - Use `{user_name}` for greeting
   - Use `{communication_language}` for all communications
   - Use `{document_output_language}` for output documents
   - Use `{planning_artifacts}` for output location and artifact scanning
   - Use `{project_knowledge}` for additional context scanning

2. **Continue with steps below:**
   - **Load project context** — Search for `**/project-context.md`. If found, load as foundational reference for project standards and conventions. If not found, continue without it.
@@ -35,10 +35,12 @@ When you are in this persona and the user calls a skill, this persona must carry

## On Activation

1. **Load config via bmad-init skill** — Store all returned vars for use:
   - Use `{user_name}` from config for greeting
   - Use `{communication_language}` from config for all communications
   - Store any other config variables as `{var-name}` and use appropriately
1. Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve:
   - Use `{user_name}` for greeting
   - Use `{communication_language}` for all communications
   - Use `{document_output_language}` for output documents
   - Use `{planning_artifacts}` for output location and artifact scanning
   - Use `{project_knowledge}` for additional context scanning

2. **Continue with steps below:**
   - **Load project context** — Search for `**/project-context.md`. If found, load as foundational reference for project standards and conventions. If not found, continue without it.
@@ -37,10 +37,12 @@ When you are in this persona and the user calls a skill, this persona must carry

## On Activation

1. **Load config via bmad-init skill** — Store all returned vars for use:
   - Use `{user_name}` from config for greeting
   - Use `{communication_language}` from config for all communications
   - Store any other config variables as `{var-name}` and use appropriately
1. Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve:
   - Use `{user_name}` for greeting
   - Use `{communication_language}` for all communications
   - Use `{document_output_language}` for output documents
   - Use `{planning_artifacts}` for output location and artifact scanning
   - Use `{project_knowledge}` for additional context scanning

2. **Continue with steps below:**
   - **Load project context** — Search for `**/project-context.md`. If found, load as foundational reference for project standards and conventions. If not found, continue without it.

@@ -12,7 +12,8 @@ BMad Method,bmad-brainstorming,Brainstorm Project,BP,Expert guided facilitation

BMad Method,bmad-market-research,Market Research,MR,"Market analysis competitive landscape customer needs and trends.",,1-analysis,,,false,"planning_artifacts|project_knowledge",research documents
BMad Method,bmad-domain-research,Domain Research,DR,Industry domain deep dive subject matter expertise and terminology.,,1-analysis,,,false,"planning_artifacts|project_knowledge",research documents
BMad Method,bmad-technical-research,Technical Research,TR,Technical feasibility architecture options and implementation approaches.,,1-analysis,,,false,"planning_artifacts|project_knowledge",research documents
BMad Method,bmad-product-brief,Create Brief,CB,A guided experience to nail down your product idea.,,1-analysis,,,false,planning_artifacts,product brief
BMad Method,bmad-product-brief,Create Brief,CB,An expert-guided experience to nail down your product idea in a brief. A gentler approach than PRFAQ when you are already sure of your concept and nothing will sway you.,,-A,1-analysis,,,false,planning_artifacts,product brief
BMad Method,bmad-prfaq,PRFAQ Challenge,WB,Working Backwards guided experience to forge and stress-test your product concept through the PRFAQ gauntlet to ensure you have a great product that users will love and need and to determine feasibility and alignment with user needs. Alternative to product brief.,,-H,1-analysis,,,false,planning_artifacts,prfaq document
BMad Method,bmad-create-prd,Create PRD,CP,Expert-led facilitation to produce your Product Requirements Document.,,2-planning,,,true,planning_artifacts,prd
BMad Method,bmad-validate-prd,Validate PRD,VP,,,[path],2-planning,bmad-create-prd,,false,planning_artifacts,prd validation report
BMad Method,bmad-edit-prd,Edit PRD,EP,,,[path],2-planning,bmad-validate-prd,,false,planning_artifacts,updated prd

@@ -1,7 +1,6 @@

---
name: bmad-advanced-elicitation
description: 'Push the LLM to reconsider, refine, and improve its recent output. Use when user asks for deeper critique or mentions a known deeper critique method, e.g. socratic, first principles, pre-mortem, red team.'
agent_party: '{project-root}/_bmad/_config/agent-manifest.csv'
---

# Advanced Elicitation

@@ -36,7 +35,7 @@ When invoked from another prompt or process:

### Step 1: Method Registry Loading

**Action:** Load and read `./methods.csv` and `{agent_party}`
**Action:** Load and read `./methods.csv` and `{project-root}/_bmad/_config/agent-manifest.csv`

#### CSV Structure

@@ -1,7 +1,6 @@

---
name: bmad-distillator
description: Lossless LLM-optimized compression of source documents. Use when the user requests to 'distill documents' or 'create a distillate'.
argument-hint: "[to create provide input paths] [--validate distillate-path to confirm distillate is lossless and optimized]"
---

# Distillator: A Document Distillation Engine

@@ -81,18 +81,18 @@ When the same fact appears in both a brief and discovery notes:

**Brief says:**
```
bmad-init must always be included as a base skill in every bundle
bmad-help must always be included as a base skill in every bundle
```

**Discovery notes say:**
```
bmad-init must always be included as a base skill in every bundle/install
(solves bootstrapping problem)
bmad-help must always be included as a base skill in every bundle/install
(solves discoverability problem)
```

**Distillate keeps the more contextual version:**
```
- bmad-init: always included as base skill in every bundle (solves bootstrapping)
- bmad-help: always included as base skill in every bundle (solves discoverability)
```

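The selection rule above can be sketched as a simple heuristic. This is illustrative only: the skill reasons about which variant carries more context, which is approximated here by normalized length, and `keep_most_contextual` is a hypothetical helper, not the skill's actual implementation.

```python
def keep_most_contextual(variants: list[str]) -> str:
    """Return the duplicate-fact variant carrying the most context.

    Approximation: the variant with the greatest length after
    whitespace normalization wins.
    """
    return max(variants, key=lambda v: len(" ".join(v.split())))


# The two versions of the same fact from the example above.
brief = "bmad-init must always be included as a base skill in every bundle"
notes = ("bmad-init must always be included as a base skill in every "
         "bundle/install (solves bootstrapping problem)")

merged = keep_most_contextual([brief, notes])  # the discovery-notes version survives
```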

### Decision/Rationale Compression

@@ -128,7 +128,7 @@ parts: 1

## Core Concept
- BMAD Next-Gen Installer: replaces monolithic Node.js CLI with skill-based plugin architecture for distributing BMAD methodology across 40+ AI platforms
- Three layers: self-describing plugins (bmad-manifest.json), cross-platform install via Vercel skills CLI (MIT), runtime registration via bmad-init skill
- Three layers: self-describing plugins (bmad-manifest.json), cross-platform install via Vercel skills CLI (MIT), runtime registration via bmad-setup skill
- Transforms BMAD from dev-only methodology into open platform for any domain (creative, therapeutic, educational, personal)

## Problem

@@ -141,7 +141,7 @@ parts: 1
- Plugins: skill bundles with Anthropic plugin standard as base format + bmad-manifest.json extending for BMAD-specific metadata (installer options, capabilities, help integration, phase ordering, dependencies)
- Existing manifest example: `{"module-code":"bmm","replaces-skill":"bmad-create-product-brief","capabilities":[{"name":"create-brief","menu-code":"CB","supports-headless":true,"phase-name":"1-analysis","after":["brainstorming"],"before":["create-prd"],"is-required":true}]}`
- Vercel skills CLI handles platform translation; integration pattern (wrap/fork/call) is PRD decision
- bmad-init: global skill scanning installed bmad-manifest.json files, registering capabilities, configuring project settings; always included as base skill in every bundle (solves bootstrapping)
- bmad-setup: global skill scanning installed bmad-manifest.json files, registering capabilities, configuring project settings; always included as base skill in every bundle (solves bootstrapping)
- bmad-update: plugin update path without full reinstall; technical approach (diff/replace/preserve customizations) is PRD decision
- Distribution tiers: (1) NPX installer wrapping skills CLI for technical users, (2) zip bundle + platform-specific README for non-technical users, (3) future marketplace
- Non-technical path has honest friction: "copy to right folder" requires knowing where; per-platform README instructions; improves over time as low-code space matures

@@ -161,18 +161,18 @@ parts: 1
- Zero (or near-zero) custom platform directory code; delegated to skills CLI ecosystem
- Installation verified on top platforms by volume; skills CLI handles long tail
- Non-technical install path validated with non-developer users
- bmad-init discovers/registers all plugins from manifests; clear errors for malformed manifests
- bmad-setup discovers/registers all plugins from manifests; clear errors for malformed manifests
- At least one external module author successfully publishes plugin using manifest system
- bmad-update works without full reinstall
- Existing CLI users have documented migration path

## Scope
- In: manifest spec, bmad-init, bmad-update, Vercel CLI integration, NPX installer, zip bundles, migration path
- In: manifest spec, bmad-setup, bmad-update, Vercel CLI integration, NPX installer, zip bundles, migration path
- Out: BMAD Builder, marketplace web platform, skill conversion (prerequisite, separate), one-click install for all platforms, monetization, quality certification process (gated-submission principle is architectural requirement; process defined separately)
- Deferred: CI/CD integration, telemetry for module authors, air-gapped enterprise install, zip bundle integrity verification (checksums/signing), deeper non-technical platform integrations

## Current Installer (migration context)
- Entry: `tools/cli/bmad-cli.js` (Commander.js) → `tools/cli/installers/lib/core/installer.js`
- Entry: `tools/installer/bmad-cli.js` (Commander.js) → `tools/installer/core/installer.js`
- Platforms: `platform-codes.yaml` (~20 platforms with target dirs, legacy dirs, template types, special flags)
- Manifests: CSV files (skill/workflow/agent-manifest.csv) are current source of truth, not JSON
- External modules: `external-official-modules.yaml` (CIS, GDS, TEA, WDS) from npm with semver

@@ -214,7 +214,7 @@ parts: 1

## Opportunities
- Module authors as acquisition channel: each published plugin distributes BMAD to creator's audience
- CI/CD integration: bmad-init as pipeline one-liner increases stickiness
- CI/CD integration: bmad-setup as pipeline one-liner increases stickiness
- Educational institutions: structured methodology + non-technical install → university AI curriculum
- Skill composability: mixing BMAD modules with third-party skills for custom methodology stacks

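The bmad-manifest.json example in the distillate above can be read and sanity-checked in a few lines. This is a sketch against the example as shown — the field names come from that example, not from a published spec, and the validation rules are illustrative assumptions.

```python
import json

# The bmad-manifest.json example quoted in the distillate.
manifest_json = """
{"module-code":"bmm","replaces-skill":"bmad-create-product-brief",
 "capabilities":[{"name":"create-brief","menu-code":"CB",
 "supports-headless":true,"phase-name":"1-analysis",
 "after":["brainstorming"],"before":["create-prd"],"is-required":true}]}
"""

manifest = json.loads(manifest_json)

# Minimal checks a registrar might run before registering capabilities
# ("clear errors for malformed manifests" is a stated success criterion).
assert manifest.get("module-code"), "module-code is required"
for cap in manifest.get("capabilities", []):
    assert cap.get("name"), "each capability needs a name"
    assert cap.get("phase-name"), "each capability needs a phase"

capability_names = [c["name"] for c in manifest["capabilities"]]
```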
@@ -1,100 +0,0 @@

---
name: bmad-init
description: "Initialize BMad project configuration and load config variables. Use when any skill needs module-specific configuration values, or when setting up a new BMad project."
argument-hint: "[--module=module_code] [--vars=var1:default1,var2] [--skill-path=/path/to/calling/skill]"
---

## Overview

This skill is the configuration entry point for all BMad skills. It has two modes:

- **Fast path**: Config exists for the requested module — returns vars as JSON. Done.
- **Init path**: Config is missing — walks the user through configuration, writes config files, then returns vars.

Every BMad skill should call this on activation to get its config vars. The caller never needs to know whether init happened — they just get their config back.

The script `bmad_init.py` is located in this skill's `scripts/` directory. Locate and run it using python for all commands below.

## On Activation — Fast Path

Run the `bmad_init.py` script with the `load` subcommand. Pass `--project-root` set to the project root directory.

- If a module code was provided by the calling skill, include `--module {module_code}`
- To load all vars, include `--all`
- To request specific variables with defaults, use `--vars var1:default1,var2`
- If no module was specified, omit `--module` to get core vars only

**If the script returns JSON vars** — store them as `{var-name}` and return to the calling skill. Done.

**If the script returns an error or `init_required`** — proceed to the Init Path below.

## Init Path — First-Time Setup

When the fast path fails (config missing for a module), run this init flow.

### Step 1: Check what needs setup

Run `bmad_init.py` with the `check` subcommand, passing `--module {module_code}`, `--skill-path {calling_skill_path}`, and `--project-root`.

The response tells you what's needed:

- `"status": "ready"` — Config is fine. Re-run load.
- `"status": "no_project"` — Can't find project root. Ask user to confirm the project path.
- `"status": "core_missing"` — Core config doesn't exist. Must ask core questions first.
- `"status": "module_missing"` — Core exists but module config doesn't. Ask module questions.

The response includes:
- `core_module` — Core module.yaml questions (when core setup needed)
- `target_module` — Target module.yaml questions (when module setup needed, discovered from `--skill-path` or `_bmad/{module}/`)
- `core_vars` — Existing core config values (when core exists but module doesn't)

### Step 2: Ask core questions (if `core_missing`)

The check response includes `core_module` with header, subheader, and variable definitions.

1. Show the `header` and `subheader` to the user
2. For each variable, present the `prompt` and `default`
3. For variables with `single-select`, show the options as a numbered list
4. For variables with multi-line `prompt` (array), show all lines
5. Let the user accept defaults or provide values

### Step 3: Ask module questions (if module was requested)

The check response includes `target_module` with the module's questions. Variables may reference core answers in their defaults (e.g., `{output_folder}`).

1. Resolve defaults by running `bmad_init.py` with the `resolve-defaults` subcommand, passing `--module {module_code}`, `--core-answers '{core_answers_json}'`, and `--project-root`
2. Show the module's `header` and `subheader`
3. For each variable, present the prompt with resolved default
4. For `single-select` variables, show options as a numbered list

### Step 4: Write config

Collect all answers and run `bmad_init.py` with the `write` subcommand, passing `--answers '{all_answers_json}'` and `--project-root`.

The `--answers` JSON format:

```json
{
  "core": {
    "user_name": "BMad",
    "communication_language": "English",
    "document_output_language": "English",
    "output_folder": "_bmad-output"
  },
  "bmb": {
    "bmad_builder_output_folder": "_bmad-output/skills",
    "bmad_builder_reports": "_bmad-output/reports"
  }
}
```

Note: Pass the **raw user answers** (before result template expansion). The script applies result templates and `{project-root}` expansion when writing.

The script:
- Creates `_bmad/core/config.yaml` with core values (if core answers provided)
- Creates `_bmad/{module}/config.yaml` with core values + module values (result-expanded)
- Creates any directories listed in the module.yaml `directories` array

### Step 5: Return vars

After writing, re-run `bmad_init.py` with the `load` subcommand (same as the fast path) to return resolved vars. Store returned vars as `{var-name}` and return them to the calling skill.

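The `--vars var1:default1,var2` syntax documented in the fast path above can be sketched as a standalone parser — the same behavior the deleted `bmad_init.py` implemented in its `parse_var_specs` helper, reproduced here as an illustration rather than the script itself.

```python
def parse_var_specs(vars_string: str) -> list[dict]:
    """Parse "var1:default1,var2" into a list of {'name', 'default'} specs.

    A spec without ':' gets default None, so a missing config value
    comes back as null instead of raising.
    """
    specs = []
    for spec in (vars_string or "").split(","):
        spec = spec.strip()
        if not spec:
            continue
        name, sep, default = spec.partition(":")
        specs.append({
            "name": name.strip(),
            "default": default.strip() if sep else None,
        })
    return specs


# e.g. request two vars, only the first with a fallback
specs = parse_var_specs("output_folder:_bmad-output,project_knowledge")
```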
@@ -1,25 +0,0 @@

code: core
name: "BMad Core Module"

header: "BMad Core Configuration"
subheader: "Configure the core settings for your BMad installation.\nThese settings will be used across all installed bmad skills, workflows, and agents."

user_name:
  prompt: "What should agents call you? (Use your name or a team name)"
  default: "BMad"
  result: "{value}"

communication_language:
  prompt: "What language should agents use when chatting with you?"
  default: "English"
  result: "{value}"

document_output_language:
  prompt: "Preferred document output language?"
  default: "English"
  result: "{value}"

output_folder:
  prompt: "Where should output files be saved?"
  default: "_bmad-output"
  result: "{project-root}/{value}"

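The `result` templates in the module.yaml above can be applied with a small expansion helper. This is a minimal sketch of the documented `{value}` / `{project-root}` substitution; `expand` is a hypothetical helper, and the full script also handles absolute-path answers and `..` normalization.

```python
def expand(template: str, value: str, project_root: str) -> str:
    """Apply a module.yaml result template to a raw user answer."""
    return (template
            .replace("{value}", value)
            .replace("{project-root}", project_root))


# output_folder uses result: "{project-root}/{value}" — stored as a full path
folder = expand("{project-root}/{value}", "_bmad-output", "/home/user/proj")

# user_name uses result: "{value}" — stored verbatim
name = expand("{value}", "BMad", "/home/user/proj")
```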
@ -1,624 +0,0 @@
|
|||
# /// script
|
||||
# requires-python = ">=3.10"
|
||||
# dependencies = ["pyyaml"]
|
||||
# ///
|
||||
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
BMad Init — Project configuration bootstrap and config loader.
|
||||
|
||||
Config files (flat YAML per module):
|
||||
- _bmad/core/config.yaml (core settings — user_name, language, output_folder, etc.)
|
||||
- _bmad/{module}/config.yaml (module settings + core values merged in)
|
||||
|
||||
Usage:
|
||||
# Fast path — load all vars for a module (includes core vars)
|
||||
python bmad_init.py load --module bmb --all --project-root /path
|
||||
|
||||
# Load specific vars with optional defaults
|
||||
python bmad_init.py load --module bmb --vars var1:default1,var2 --project-root /path
|
||||
|
||||
# Load core only
|
||||
python bmad_init.py load --all --project-root /path
|
||||
|
||||
# Check if init is needed
|
||||
python bmad_init.py check --project-root /path
|
||||
python bmad_init.py check --module bmb --skill-path /path/to/skill --project-root /path
|
||||
|
||||
# Resolve module defaults given core answers
|
||||
python bmad_init.py resolve-defaults --module bmb --core-answers '{"output_folder":"..."}' --project-root /path
|
||||
|
||||
# Write config from answered questions
|
||||
python bmad_init.py write --answers '{"core": {...}, "bmb": {...}}' --project-root /path
|
||||
"""
|
||||
|
||||
import argparse
|
||||
import json
|
||||
import os
|
||||
import sys
|
||||
from pathlib import Path
|
||||
|
||||
import yaml
|
||||
|
||||
|
||||
# =============================================================================
|
||||
# Project Root Detection
|
||||
# =============================================================================
|
||||
|
||||
def find_project_root(llm_provided=None):
|
||||
"""
|
||||
Find project root by looking for _bmad folder.
|
||||
|
||||
Args:
|
||||
llm_provided: Path explicitly provided via --project-root.
|
||||
|
||||
Returns:
|
||||
Path to project root, or None if not found.
|
||||
"""
|
||||
if llm_provided:
|
||||
candidate = Path(llm_provided)
|
||||
if (candidate / '_bmad').exists():
|
||||
return candidate
|
||||
# First run — _bmad won't exist yet but LLM path is still valid
|
||||
if candidate.is_dir():
|
||||
return candidate
|
||||
|
||||
for start_dir in [Path.cwd(), Path(__file__).resolve().parent]:
|
||||
current_dir = start_dir
|
||||
while current_dir != current_dir.parent:
|
||||
if (current_dir / '_bmad').exists():
|
||||
return current_dir
|
||||
current_dir = current_dir.parent
|
||||
|
||||
return None
|
||||
|
||||
|
||||
# =============================================================================
|
||||
# Module YAML Loading
|
||||
# =============================================================================
|
||||
|
||||
def load_module_yaml(path):
|
||||
"""
|
||||
Load and parse a module.yaml file, separating metadata from variable definitions.
|
||||
|
||||
Returns:
|
||||
Dict with 'meta' (code, name, etc.) and 'variables' (var definitions)
|
||||
and 'directories' (list of dir templates), or None on failure.
|
||||
"""
|
||||
try:
|
||||
with open(path, 'r', encoding='utf-8') as f:
|
||||
raw = yaml.safe_load(f)
|
||||
except Exception:
|
||||
return None
|
||||
|
||||
if not raw or not isinstance(raw, dict):
|
||||
return None
|
||||
|
||||
meta_keys = {'code', 'name', 'description', 'default_selected', 'header', 'subheader'}
|
||||
meta = {}
|
||||
variables = {}
|
||||
directories = []
|
||||
|
||||
for key, value in raw.items():
|
||||
if key == 'directories':
|
||||
directories = value if isinstance(value, list) else []
|
||||
elif key in meta_keys:
|
||||
meta[key] = value
|
||||
elif isinstance(value, dict) and 'prompt' in value:
|
||||
variables[key] = value
|
||||
# Skip comment-only entries (## var_name lines become None values)
|
||||
|
||||
return {'meta': meta, 'variables': variables, 'directories': directories}
|
||||
|
||||
|
||||
def find_core_module_yaml():
|
||||
"""Find the core module.yaml bundled with this skill."""
|
||||
return Path(__file__).resolve().parent.parent / 'resources' / 'core-module.yaml'
|
||||
|
||||
|
||||
def find_target_module_yaml(module_code, project_root, skill_path=None):
|
||||
"""
|
||||
Find module.yaml for a given module code.
|
||||
|
||||
Search order:
|
||||
1. skill_path/assets/module.yaml (calling skill's assets)
|
||||
2. skill_path/module.yaml (calling skill's root)
|
||||
3. _bmad/{module_code}/module.yaml (installed module location)
|
||||
"""
|
||||
search_paths = []
|
||||
|
||||
if skill_path:
|
||||
sp = Path(skill_path)
|
||||
search_paths.append(sp / 'assets' / 'module.yaml')
|
||||
search_paths.append(sp / 'module.yaml')
|
||||
|
||||
if project_root and module_code:
|
||||
search_paths.append(Path(project_root) / '_bmad' / module_code / 'module.yaml')
|
||||
|
||||
for path in search_paths:
|
||||
if path.exists():
|
||||
return path
|
||||
|
||||
return None
|
||||
|
||||
|
||||
# =============================================================================
|
||||
# Config Loading (Flat per-module files)
|
||||
# =============================================================================
|
||||
|
||||
def load_config_file(path):
|
||||
"""Load a flat YAML config file. Returns dict or None."""
|
||||
try:
|
||||
with open(path, 'r', encoding='utf-8') as f:
|
||||
data = yaml.safe_load(f)
|
||||
return data if isinstance(data, dict) else None
|
||||
except Exception:
|
||||
return None
|
||||
|
||||
|
||||
def load_module_config(module_code, project_root):
|
||||
"""Load config for a specific module from _bmad/{module}/config.yaml."""
|
||||
config_path = Path(project_root) / '_bmad' / module_code / 'config.yaml'
|
||||
return load_config_file(config_path)
|
||||
|
||||
|
||||
def resolve_project_root_placeholder(value, project_root):
|
||||
"""Replace {project-root} placeholder with actual path."""
|
||||
if not value or not isinstance(value, str):
|
||||
return value
|
||||
if '{project-root}' not in value:
|
||||
return value
|
||||
|
||||
# Strip the {project-root} token to inspect what remains, so we can
|
||||
# correctly handle absolute paths stored as "{project-root}//absolute/path"
|
||||
# (produced by the "{project-root}/{value}" template applied to an absolute value).
|
||||
suffix = value.replace('{project-root}', '', 1)
|
||||
|
||||
# Strip the one path separator that follows the token (if any)
|
||||
if suffix.startswith('/') or suffix.startswith('\\'):
|
||||
remainder = suffix[1:]
|
||||
else:
|
||||
remainder = suffix
|
||||
|
||||
if os.path.isabs(remainder):
|
||||
# The original value was an absolute path stored with a {project-root}/ prefix.
|
||||
# Return the absolute path directly — no joining needed.
|
||||
return remainder
|
||||
|
||||
# Relative path: join with project root and normalize to resolve any .. segments.
|
||||
return os.path.normpath(os.path.join(str(project_root), remainder))
|
||||
|
||||
|
||||
def parse_var_specs(vars_string):
|
||||
"""
|
||||
Parse variable specs: var_name:default_value,var_name2:default_value2
|
||||
No default = returns null if missing.
|
||||
"""
|
||||
if not vars_string:
|
||||
return []
|
||||
specs = []
|
||||
for spec in vars_string.split(','):
|
||||
spec = spec.strip()
|
||||
if not spec:
|
||||
continue
|
||||
if ':' in spec:
|
||||
parts = spec.split(':', 1)
|
||||
specs.append({'name': parts[0].strip(), 'default': parts[1].strip()})
|
||||
else:
|
||||
specs.append({'name': spec, 'default': None})
|
||||
return specs
|
||||
|
||||
|
||||
# =============================================================================
|
||||
# Template Expansion
|
||||
# =============================================================================
|
||||
|
||||
def expand_template(value, context):
|
||||
"""
|
||||
Expand {placeholder} references in a string using context dict.
|
||||
|
||||
Supports: {project-root}, {value}, {output_folder}, {directory_name}, etc.
|
||||
"""
|
||||
if not value or not isinstance(value, str):
|
||||
return value
|
||||
result = value
|
||||
for key, val in context.items():
|
||||
placeholder = '{' + key + '}'
|
||||
if placeholder in result and val is not None:
|
||||
result = result.replace(placeholder, str(val))
|
||||
return result
|
||||
|
||||
|
||||
def apply_result_template(var_def, raw_value, context):
|
||||
"""
|
||||
Apply a variable's result template to transform the raw user answer.
|
||||
|
||||
E.g., result: "{project-root}/{value}" with value="_bmad-output"
|
||||
becomes "/Users/foo/project/_bmad-output"
|
||||
"""
|
||||
result_template = var_def.get('result')
|
||||
if not result_template:
|
||||
return raw_value
|
||||
|
||||
# If the user supplied an absolute path and the template would prefix it with
|
||||
# "{project-root}/", skip the template entirely to avoid producing a broken path
|
||||
# like "/my/project//absolute/path".
|
||||
if isinstance(raw_value, str) and os.path.isabs(raw_value):
|
||||
return raw_value
|
||||
|
||||
ctx = dict(context)
|
||||
ctx['value'] = raw_value
|
||||
result = expand_template(result_template, ctx)
|
||||
|
||||
# Normalize the resulting path to resolve any ".." segments (e.g. when the user
|
||||
# entered a relative path such as "../../outside-dir").
|
||||
if isinstance(result, str) and '{' not in result and os.path.isabs(result):
|
||||
result = os.path.normpath(result)
|
||||
|
||||
return result
|
||||
|
||||
|
||||
# =============================================================================
|
||||
# Load Command (Fast Path)
|
||||
# =============================================================================
|
||||
|
||||
def cmd_load(args):
|
||||
"""Load config vars — the fast path."""
|
||||
project_root = find_project_root(llm_provided=args.project_root)
|
||||
if not project_root:
|
||||
print(json.dumps({'error': 'Project root not found (_bmad folder not detected)'}),
|
||||
file=sys.stderr)
|
||||
sys.exit(1)
|
||||
|
||||
module_code = args.module or 'core'
|
||||
|
||||
# Load the module's config (which includes core vars)
|
||||
config = load_module_config(module_code, project_root)
|
||||
if config is None:
|
||||
print(json.dumps({
|
||||
'init_required': True,
|
||||
'missing_module': module_code,
|
||||
}), file=sys.stderr)
|
||||
sys.exit(1)
|
||||
|
||||
# Resolve {project-root} in all values
|
||||
for key in config:
|
||||
config[key] = resolve_project_root_placeholder(config[key], project_root)
|
||||
|
||||
if args.all:
|
||||
print(json.dumps(config, indent=2))
|
||||
else:
|
||||
var_specs = parse_var_specs(args.vars)
|
||||
if not var_specs:
|
||||
print(json.dumps({'error': 'Either --vars or --all must be specified'}),
|
||||
file=sys.stderr)
|
||||
sys.exit(1)
|
||||
result = {}
|
||||
for spec in var_specs:
|
||||
val = config.get(spec['name'])
|
||||
if val is not None and val != '':
|
||||
result[spec['name']] = val
|
||||
elif spec['default'] is not None:
|
||||
result[spec['name']] = spec['default']
|
||||
else:
|
||||
result[spec['name']] = None
|
||||
print(json.dumps(result, indent=2))
|
||||
|
||||
|
||||
# =============================================================================
|
||||
# Check Command
|
||||
# =============================================================================
|
||||
|
||||
def cmd_check(args):
|
||||
"""Check if config exists and return status with module.yaml questions if needed."""
|
||||
project_root = find_project_root(llm_provided=args.project_root)
|
||||
if not project_root:
|
||||
print(json.dumps({
|
||||
'status': 'no_project',
|
||||
'message': 'No project root found. Provide --project-root to bootstrap.',
|
||||
}, indent=2))
|
||||
return
|
||||
|
||||
project_root = Path(project_root)
|
||||
module_code = args.module
|
||||
|
||||
# Check core config
|
||||
core_config = load_module_config('core', project_root)
|
||||
core_exists = core_config is not None
|
||||
|
||||
# If no module requested, just check core
|
||||
if not module_code or module_code == 'core':
|
||||
if core_exists:
|
||||
print(json.dumps({'status': 'ready', 'project_root': str(project_root)}, indent=2))
|
||||
else:
|
||||
core_yaml_path = find_core_module_yaml()
|
||||
core_module = load_module_yaml(core_yaml_path) if core_yaml_path.exists() else None
|
||||
print(json.dumps({
|
||||
'status': 'core_missing',
|
||||
'project_root': str(project_root),
|
||||
'core_module': core_module,
|
||||
}, indent=2))
|
||||
return
|
||||
|
||||
# Module requested — check if its config exists
|
||||
module_config = load_module_config(module_code, project_root)
|
||||
if module_config is not None:
|
||||
print(json.dumps({'status': 'ready', 'project_root': str(project_root)}, indent=2))
|
||||
return
|
||||
|
||||
# Module config missing — find its module.yaml for questions
|
||||
target_yaml_path = find_target_module_yaml(
|
||||
module_code, project_root, skill_path=args.skill_path
|
||||
)
|
||||
target_module = load_module_yaml(target_yaml_path) if target_yaml_path else None
|
||||
|
||||
result = {
|
||||
'project_root': str(project_root),
|
||||
}
|
||||
|
||||
if not core_exists:
|
||||
result['status'] = 'core_missing'
|
||||
core_yaml_path = find_core_module_yaml()
|
||||
result['core_module'] = load_module_yaml(core_yaml_path) if core_yaml_path.exists() else None
|
||||
else:
|
||||
result['status'] = 'module_missing'
|
||||
result['core_vars'] = core_config
|
||||
|
||||
        result['target_module'] = target_module
        if target_yaml_path:
            result['target_module_yaml_path'] = str(target_yaml_path)

    print(json.dumps(result, indent=2))


# =============================================================================
# Resolve Defaults Command
# =============================================================================


def cmd_resolve_defaults(args):
    """Given core answers, resolve a module's variable defaults."""
    project_root = find_project_root(llm_provided=args.project_root)
    if not project_root:
        print(json.dumps({'error': 'Project root not found'}), file=sys.stderr)
        sys.exit(1)

    try:
        core_answers = json.loads(args.core_answers)
    except json.JSONDecodeError as e:
        print(json.dumps({'error': f'Invalid JSON in --core-answers: {e}'}),
              file=sys.stderr)
        sys.exit(1)

    # Build context for template expansion
    context = {
        'project-root': str(project_root),
        'directory_name': Path(project_root).name,
    }
    context.update(core_answers)

    # Find and load the module's module.yaml
    module_code = args.module
    target_yaml_path = find_target_module_yaml(
        module_code, project_root, skill_path=args.skill_path
    )
    if not target_yaml_path:
        print(json.dumps({'error': f'No module.yaml found for module: {module_code}'}),
              file=sys.stderr)
        sys.exit(1)

    module_def = load_module_yaml(target_yaml_path)
    if not module_def:
        print(json.dumps({'error': f'Failed to parse module.yaml at: {target_yaml_path}'}),
              file=sys.stderr)
        sys.exit(1)

    # Resolve defaults in each variable
    resolved_vars = {}
    for var_name, var_def in module_def['variables'].items():
        default = var_def.get('default', '')
        resolved_default = expand_template(str(default), context)
        resolved_vars[var_name] = dict(var_def)
        resolved_vars[var_name]['default'] = resolved_default

    result = {
        'module_code': module_code,
        'meta': module_def['meta'],
        'variables': resolved_vars,
        'directories': module_def['directories'],
    }
    print(json.dumps(result, indent=2))

# =============================================================================
# Write Command
# =============================================================================


def cmd_write(args):
    """Write config files from answered questions."""
    project_root = find_project_root(llm_provided=args.project_root)
    if not project_root:
        if args.project_root:
            project_root = Path(args.project_root)
        else:
            print(json.dumps({'error': 'Project root not found and --project-root not provided'}),
                  file=sys.stderr)
            sys.exit(1)

    project_root = Path(project_root)

    try:
        answers = json.loads(args.answers)
    except json.JSONDecodeError as e:
        print(json.dumps({'error': f'Invalid JSON in --answers: {e}'}),
              file=sys.stderr)
        sys.exit(1)

    context = {
        'project-root': str(project_root),
        'directory_name': project_root.name,
    }

    # Load module.yaml definitions to get result templates
    core_yaml_path = find_core_module_yaml()
    core_def = load_module_yaml(core_yaml_path) if core_yaml_path.exists() else None

    files_written = []
    dirs_created = []

    # Process core answers first (needed for module config expansion)
    core_answers_raw = answers.get('core', {})
    core_config = {}

    if core_answers_raw and core_def:
        for var_name, raw_value in core_answers_raw.items():
            var_def = core_def['variables'].get(var_name, {})
            expanded = apply_result_template(var_def, raw_value, context)
            core_config[var_name] = expanded

        # Write core config
        core_dir = project_root / '_bmad' / 'core'
        core_dir.mkdir(parents=True, exist_ok=True)
        core_config_path = core_dir / 'config.yaml'

        # Merge with existing if present
        existing = load_config_file(core_config_path) or {}
        existing.update(core_config)

        _write_config_file(core_config_path, existing, 'CORE')
        files_written.append(str(core_config_path))
    elif core_answers_raw:
        # No core_def available — write raw values
        core_config = dict(core_answers_raw)
        core_dir = project_root / '_bmad' / 'core'
        core_dir.mkdir(parents=True, exist_ok=True)
        core_config_path = core_dir / 'config.yaml'
        existing = load_config_file(core_config_path) or {}
        existing.update(core_config)
        _write_config_file(core_config_path, existing, 'CORE')
        files_written.append(str(core_config_path))

    # Update context with resolved core values for module expansion
    context.update(core_config)

    # Process module answers
    for module_code, module_answers_raw in answers.items():
        if module_code == 'core':
            continue

        # Find module.yaml for result templates
        target_yaml_path = find_target_module_yaml(
            module_code, project_root, skill_path=args.skill_path
        )
        module_def = load_module_yaml(target_yaml_path) if target_yaml_path else None

        # Build module config: start with core values, then add module values.
        # Re-read core config to get the latest (may have been updated above).
        latest_core = load_module_config('core', project_root) or core_config
        module_config = dict(latest_core)

        for var_name, raw_value in module_answers_raw.items():
            if module_def:
                var_def = module_def['variables'].get(var_name, {})
                expanded = apply_result_template(var_def, raw_value, context)
            else:
                expanded = raw_value
            module_config[var_name] = expanded
            context[var_name] = expanded  # Available for subsequent template expansion

        # Write module config
        module_dir = project_root / '_bmad' / module_code
        module_dir.mkdir(parents=True, exist_ok=True)
        module_config_path = module_dir / 'config.yaml'

        existing = load_config_file(module_config_path) or {}
        existing.update(module_config)

        module_name = module_def['meta'].get('name', module_code.upper()) if module_def else module_code.upper()
        _write_config_file(module_config_path, existing, module_name)
        files_written.append(str(module_config_path))

        # Create directories declared in module.yaml
        if module_def and module_def.get('directories'):
            for dir_template in module_def['directories']:
                dir_path = expand_template(dir_template, context)
                if dir_path:
                    Path(dir_path).mkdir(parents=True, exist_ok=True)
                    dirs_created.append(dir_path)

    result = {
        'status': 'written',
        'files_written': files_written,
        'dirs_created': dirs_created,
    }
    print(json.dumps(result, indent=2))

def _write_config_file(path, data, module_label):
    """Write a config YAML file with a header comment."""
    from datetime import datetime, timezone
    with open(path, 'w', encoding='utf-8') as f:
        f.write(f'# {module_label} Module Configuration\n')
        f.write('# Generated by bmad-init\n')
        f.write(f'# Date: {datetime.now(timezone.utc).isoformat()}\n\n')
        yaml.safe_dump(data, f, default_flow_style=False, allow_unicode=True, sort_keys=False)

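For reference, a config file produced by this helper looks like the following sketch (the variable names and date are illustrative, not a fixed schema):

```yaml
# CORE Module Configuration
# Generated by bmad-init
# Date: 2025-01-01T00:00:00+00:00

user_name: Ada
communication_language: English
```
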
# =============================================================================
# CLI Entry Point
# =============================================================================


def main():
    parser = argparse.ArgumentParser(
        description='BMad Init — Project configuration bootstrap and config loader.'
    )
    subparsers = parser.add_subparsers(dest='command')

    # --- load ---
    load_parser = subparsers.add_parser('load', help='Load config vars (fast path)')
    load_parser.add_argument('--module', help='Module code (omit for core only)')
    load_parser.add_argument('--vars', help='Comma-separated vars with optional defaults')
    load_parser.add_argument('--all', action='store_true', help='Return all config vars')
    load_parser.add_argument('--project-root', help='Project root path')

    # --- check ---
    check_parser = subparsers.add_parser('check', help='Check if init is needed')
    check_parser.add_argument('--module', help='Module code to check (optional)')
    check_parser.add_argument('--skill-path', help='Path to the calling skill folder')
    check_parser.add_argument('--project-root', help='Project root path')

    # --- resolve-defaults ---
    resolve_parser = subparsers.add_parser('resolve-defaults',
                                           help='Resolve module defaults given core answers')
    resolve_parser.add_argument('--module', required=True, help='Module code')
    resolve_parser.add_argument('--core-answers', required=True, help='JSON string of core answers')
    resolve_parser.add_argument('--skill-path', help='Path to calling skill folder')
    resolve_parser.add_argument('--project-root', help='Project root path')

    # --- write ---
    write_parser = subparsers.add_parser('write', help='Write config files')
    write_parser.add_argument('--answers', required=True, help='JSON string of all answers')
    write_parser.add_argument('--skill-path', help='Path to calling skill (for module.yaml lookup)')
    write_parser.add_argument('--project-root', help='Project root path')

    args = parser.parse_args()
    if args.command is None:
        parser.print_help()
        sys.exit(1)

    commands = {
        'load': cmd_load,
        'check': cmd_check,
        'resolve-defaults': cmd_resolve_defaults,
        'write': cmd_write,
    }

    handler = commands.get(args.command)
    if handler:
        handler(args)
    else:
        parser.print_help()
        sys.exit(1)


if __name__ == '__main__':
    main()
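The `--answers` payload consumed by `cmd_write` groups raw answers by module code, with `core` processed first. A minimal sketch of its shape (the module codes and variable names here are illustrative, not a fixed schema):

```python
import json

# Illustrative --answers payload: top-level keys are module codes,
# values map variable names to the user's raw (un-templated) answers.
answers = {
    'core': {
        'user_name': 'Ada',
        'output_folder': '_bmad-output',
    },
    'bmm': {
        'project_name': 'my-project',
    },
}

# Serialized form, as it would be passed via `write --answers '<json>'`.
payload = json.dumps(answers)
print(payload)
```

`cmd_write` then applies each variable's `result` template to these raw values before persisting them.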
@@ -1,393 +0,0 @@
#!/usr/bin/env python3
# /// script
# requires-python = ">=3.10"
# dependencies = ["pyyaml"]
# ///
"""Unit tests for bmad_init.py"""

import json
import os
import shutil
import sys
import tempfile
import unittest
from pathlib import Path

sys.path.insert(0, str(Path(__file__).parent.parent))

from bmad_init import (
    find_project_root,
    parse_var_specs,
    resolve_project_root_placeholder,
    expand_template,
    apply_result_template,
    load_module_yaml,
    find_core_module_yaml,
    find_target_module_yaml,
    load_config_file,
    load_module_config,
)


class TestFindProjectRoot(unittest.TestCase):

    def test_finds_bmad_folder(self):
        temp_dir = tempfile.mkdtemp()
        try:
            (Path(temp_dir) / '_bmad').mkdir()
            original_cwd = os.getcwd()
            try:
                os.chdir(temp_dir)
                result = find_project_root()
                self.assertEqual(result.resolve(), Path(temp_dir).resolve())
            finally:
                os.chdir(original_cwd)
        finally:
            shutil.rmtree(temp_dir)

    def test_llm_provided_with_bmad(self):
        temp_dir = tempfile.mkdtemp()
        try:
            (Path(temp_dir) / '_bmad').mkdir()
            result = find_project_root(llm_provided=temp_dir)
            self.assertEqual(result.resolve(), Path(temp_dir).resolve())
        finally:
            shutil.rmtree(temp_dir)

    def test_llm_provided_without_bmad_still_returns_dir(self):
        """First-run case: LLM provides path but _bmad doesn't exist yet."""
        temp_dir = tempfile.mkdtemp()
        try:
            result = find_project_root(llm_provided=temp_dir)
            self.assertEqual(result.resolve(), Path(temp_dir).resolve())
        finally:
            shutil.rmtree(temp_dir)


class TestParseVarSpecs(unittest.TestCase):

    def test_vars_with_defaults(self):
        specs = parse_var_specs('var1:value1,var2:value2')
        self.assertEqual(len(specs), 2)
        self.assertEqual(specs[0]['name'], 'var1')
        self.assertEqual(specs[0]['default'], 'value1')

    def test_vars_without_defaults(self):
        specs = parse_var_specs('var1,var2')
        self.assertEqual(len(specs), 2)
        self.assertIsNone(specs[0]['default'])

    def test_mixed_vars(self):
        specs = parse_var_specs('required_var,var2:default2')
        self.assertIsNone(specs[0]['default'])
        self.assertEqual(specs[1]['default'], 'default2')

    def test_colon_in_default(self):
        specs = parse_var_specs('path:{project-root}/some/path')
        self.assertEqual(specs[0]['default'], '{project-root}/some/path')

    def test_empty_string(self):
        self.assertEqual(parse_var_specs(''), [])

    def test_none(self):
        self.assertEqual(parse_var_specs(None), [])


class TestResolveProjectRootPlaceholder(unittest.TestCase):

    def test_resolve_placeholder(self):
        result = resolve_project_root_placeholder('{project-root}/output', Path('/test'))
        self.assertEqual(result, '/test/output')

    def test_no_placeholder(self):
        result = resolve_project_root_placeholder('/absolute/path', Path('/test'))
        self.assertEqual(result, '/absolute/path')

    def test_none(self):
        self.assertIsNone(resolve_project_root_placeholder(None, Path('/test')))

    def test_non_string(self):
        self.assertEqual(resolve_project_root_placeholder(42, Path('/test')), 42)

    def test_absolute_path_stored_with_prefix(self):
        """Absolute output_folder entered by user is stored as '{project-root}//abs/path'
        by the '{project-root}/{value}' template. It must resolve to '/abs/path', not
        '/project//abs/path'."""
        result = resolve_project_root_placeholder(
            '{project-root}//Users/me/outside', Path('/Users/me/myproject')
        )
        self.assertEqual(result, '/Users/me/outside')

    def test_relative_path_with_traversal_is_normalized(self):
        """A relative path like '../../sibling' produces '{project-root}/../../sibling'
        after the template. It must resolve to the normalized absolute path, not the
        un-normalized string '/project/../../sibling'."""
        result = resolve_project_root_placeholder(
            '{project-root}/../../sibling', Path('/Users/me/myproject')
        )
        self.assertEqual(result, '/Users/sibling')

    def test_relative_path_one_level_up(self):
        result = resolve_project_root_placeholder(
            '{project-root}/../outside-outputs', Path('/project/root')
        )
        self.assertEqual(result, '/project/outside-outputs')

    def test_standard_relative_path_unchanged(self):
        """Normal in-project relative paths continue to work correctly."""
        result = resolve_project_root_placeholder(
            '{project-root}/_bmad-output', Path('/project/root')
        )
        self.assertEqual(result, '/project/root/_bmad-output')


class TestExpandTemplate(unittest.TestCase):

    def test_basic_expansion(self):
        result = expand_template('{project-root}/output', {'project-root': '/test'})
        self.assertEqual(result, '/test/output')

    def test_multiple_placeholders(self):
        result = expand_template(
            '{output_folder}/planning',
            {'output_folder': '_bmad-output', 'project-root': '/test'}
        )
        self.assertEqual(result, '_bmad-output/planning')

    def test_none_value(self):
        self.assertIsNone(expand_template(None, {}))

    def test_non_string(self):
        self.assertEqual(expand_template(42, {}), 42)


class TestApplyResultTemplate(unittest.TestCase):

    def test_with_result_template(self):
        var_def = {'result': '{project-root}/{value}'}
        result = apply_result_template(var_def, '_bmad-output', {'project-root': '/test'})
        self.assertEqual(result, '/test/_bmad-output')

    def test_without_result_template(self):
        result = apply_result_template({}, 'raw_value', {})
        self.assertEqual(result, 'raw_value')

    def test_value_only_template(self):
        var_def = {'result': '{value}'}
        result = apply_result_template(var_def, 'English', {})
        self.assertEqual(result, 'English')

    def test_absolute_value_skips_project_root_template(self):
        """When the user enters an absolute path, the '{project-root}/{value}' template
        must not be applied — doing so would produce '/project//absolute/path'."""
        var_def = {'result': '{project-root}/{value}'}
        result = apply_result_template(
            var_def, '/Users/me/shared-outputs', {'project-root': '/Users/me/myproject'}
        )
        self.assertEqual(result, '/Users/me/shared-outputs')

    def test_relative_traversal_value_is_normalized(self):
        """A relative path like '../../outside' combined with the project-root template
        must produce a clean normalized absolute path, not '/project/../../outside'."""
        var_def = {'result': '{project-root}/{value}'}
        result = apply_result_template(
            var_def, '../../outside-dir', {'project-root': '/Users/me/myproject'}
        )
        self.assertEqual(result, '/Users/outside-dir')

    def test_relative_one_level_up_is_normalized(self):
        var_def = {'result': '{project-root}/{value}'}
        result = apply_result_template(
            var_def, '../sibling-outputs', {'project-root': '/project/root'}
        )
        self.assertEqual(result, '/project/sibling-outputs')

    def test_normal_relative_value_unchanged(self):
        """Standard in-project relative paths still produce the expected joined path."""
        var_def = {'result': '{project-root}/{value}'}
        result = apply_result_template(
            var_def, '_bmad-output', {'project-root': '/project/root'}
        )
        self.assertEqual(result, '/project/root/_bmad-output')


class TestLoadModuleYaml(unittest.TestCase):

    def setUp(self):
        self.temp_dir = tempfile.mkdtemp()

    def tearDown(self):
        shutil.rmtree(self.temp_dir)

    def test_loads_core_module_yaml(self):
        path = Path(self.temp_dir) / 'module.yaml'
        path.write_text(
            'code: core\n'
            'name: "BMad Core Module"\n'
            'header: "Core Config"\n'
            'user_name:\n'
            '  prompt: "What should agents call you?"\n'
            '  default: "BMad"\n'
            '  result: "{value}"\n'
        )
        result = load_module_yaml(path)
        self.assertIsNotNone(result)
        self.assertEqual(result['meta']['code'], 'core')
        self.assertEqual(result['meta']['name'], 'BMad Core Module')
        self.assertIn('user_name', result['variables'])
        self.assertEqual(result['variables']['user_name']['prompt'], 'What should agents call you?')

    def test_loads_module_with_directories(self):
        path = Path(self.temp_dir) / 'module.yaml'
        path.write_text(
            'code: bmm\n'
            'name: "BMad Method"\n'
            'project_name:\n'
            '  prompt: "Project name?"\n'
            '  default: "{directory_name}"\n'
            '  result: "{value}"\n'
            'directories:\n'
            '  - "{planning_artifacts}"\n'
        )
        result = load_module_yaml(path)
        self.assertEqual(result['directories'], ['{planning_artifacts}'])

    def test_returns_none_for_missing(self):
        result = load_module_yaml(Path(self.temp_dir) / 'nonexistent.yaml')
        self.assertIsNone(result)

    def test_returns_none_for_empty(self):
        path = Path(self.temp_dir) / 'empty.yaml'
        path.write_text('')
        result = load_module_yaml(path)
        self.assertIsNone(result)


class TestFindCoreModuleYaml(unittest.TestCase):

    def test_returns_path_to_resources(self):
        path = find_core_module_yaml()
        self.assertTrue(str(path).endswith('resources/core-module.yaml'))


class TestFindTargetModuleYaml(unittest.TestCase):

    def setUp(self):
        self.temp_dir = tempfile.mkdtemp()
        self.project_root = Path(self.temp_dir)

    def tearDown(self):
        shutil.rmtree(self.temp_dir)

    def test_finds_in_skill_assets(self):
        skill_path = self.project_root / 'skills' / 'test-skill'
        assets = skill_path / 'assets'
        assets.mkdir(parents=True)
        (assets / 'module.yaml').write_text('code: test\n')

        result = find_target_module_yaml('test', self.project_root, str(skill_path))
        self.assertIsNotNone(result)
        self.assertTrue(str(result).endswith('assets/module.yaml'))

    def test_finds_in_skill_root(self):
        skill_path = self.project_root / 'skills' / 'test-skill'
        skill_path.mkdir(parents=True)
        (skill_path / 'module.yaml').write_text('code: test\n')

        result = find_target_module_yaml('test', self.project_root, str(skill_path))
        self.assertIsNotNone(result)

    def test_finds_in_bmad_module_dir(self):
        module_dir = self.project_root / '_bmad' / 'mymod'
        module_dir.mkdir(parents=True)
        (module_dir / 'module.yaml').write_text('code: mymod\n')

        result = find_target_module_yaml('mymod', self.project_root)
        self.assertIsNotNone(result)

    def test_returns_none_when_not_found(self):
        result = find_target_module_yaml('missing', self.project_root)
        self.assertIsNone(result)

    def test_skill_path_takes_priority(self):
        """Skill assets module.yaml takes priority over _bmad/{module}/."""
        skill_path = self.project_root / 'skills' / 'test-skill'
        assets = skill_path / 'assets'
        assets.mkdir(parents=True)
        (assets / 'module.yaml').write_text('code: test\nname: from-skill\n')

        module_dir = self.project_root / '_bmad' / 'test'
        module_dir.mkdir(parents=True)
        (module_dir / 'module.yaml').write_text('code: test\nname: from-bmad\n')

        result = find_target_module_yaml('test', self.project_root, str(skill_path))
        self.assertTrue('assets' in str(result))


class TestLoadConfigFile(unittest.TestCase):

    def setUp(self):
        self.temp_dir = tempfile.mkdtemp()

    def tearDown(self):
        shutil.rmtree(self.temp_dir)

    def test_loads_flat_yaml(self):
        path = Path(self.temp_dir) / 'config.yaml'
        path.write_text('user_name: Test\ncommunication_language: English\n')
        result = load_config_file(path)
        self.assertEqual(result['user_name'], 'Test')

    def test_returns_none_for_missing(self):
        result = load_config_file(Path(self.temp_dir) / 'missing.yaml')
        self.assertIsNone(result)


class TestLoadModuleConfig(unittest.TestCase):

    def setUp(self):
        self.temp_dir = tempfile.mkdtemp()
        self.project_root = Path(self.temp_dir)
        bmad_core = self.project_root / '_bmad' / 'core'
        bmad_core.mkdir(parents=True)
        (bmad_core / 'config.yaml').write_text(
            'user_name: TestUser\n'
            'communication_language: English\n'
            'document_output_language: English\n'
            'output_folder: "{project-root}/_bmad-output"\n'
        )
        bmad_bmb = self.project_root / '_bmad' / 'bmb'
        bmad_bmb.mkdir(parents=True)
        (bmad_bmb / 'config.yaml').write_text(
            'user_name: TestUser\n'
            'communication_language: English\n'
            'document_output_language: English\n'
            'output_folder: "{project-root}/_bmad-output"\n'
            'bmad_builder_output_folder: "{project-root}/_bmad-output/skills"\n'
            'bmad_builder_reports: "{project-root}/_bmad-output/reports"\n'
        )

    def tearDown(self):
        shutil.rmtree(self.temp_dir)

    def test_load_core(self):
        result = load_module_config('core', self.project_root)
        self.assertIsNotNone(result)
        self.assertEqual(result['user_name'], 'TestUser')

    def test_load_module_includes_core_vars(self):
        result = load_module_config('bmb', self.project_root)
        self.assertIsNotNone(result)
        # Module-specific var
        self.assertIn('bmad_builder_output_folder', result)
        # Core vars also present
        self.assertEqual(result['user_name'], 'TestUser')

    def test_missing_module(self):
        result = load_module_config('nonexistent', self.project_root)
        self.assertIsNone(result)


if __name__ == '__main__':
    unittest.main()
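The path-handling tests above pin down a precise contract: placeholder expansion, absolute user paths winning over the `{project-root}` prefix, and `..` traversal being normalized. A compact, self-contained sketch that satisfies those assertions (an illustration of the contract, not the shipped implementation):

```python
import os.path

def resolve_project_root_placeholder(value, project_root):
    # Sketch only: mirrors the behavior the unit tests assert.
    if not isinstance(value, str):
        return value  # None and non-strings pass through untouched
    expanded = value.replace('{project-root}', str(project_root))
    if '//' in expanded:
        # '{project-root}//abs/path' means the user typed an absolute path;
        # the absolute part wins over the project-root prefix.
        expanded = '/' + expanded.split('//', 1)[1]
    # Collapse any '..' traversal into a clean absolute path.
    return os.path.normpath(expanded)

print(resolve_project_root_placeholder(
    '{project-root}/../../sibling', '/Users/me/myproject'))  # → /Users/sibling
```

The `//` heuristic works because the `'{project-root}/{value}'` result template joins the root and the user's value with a slash, so an absolute value always produces a double slash at the seam.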
@@ -1,6 +1,125 @@
---
|
||||
name: bmad-party-mode
|
||||
description: 'Orchestrates group discussions between all installed BMAD agents, enabling natural multi-agent conversations. Use when user requests party mode.'
|
||||
description: 'Orchestrates group discussions between installed BMAD agents, enabling natural multi-agent conversations where each agent is a real subagent with independent thinking. Use when user requests party mode, wants multiple agent perspectives, group discussion, roundtable, or multi-agent conversation about their project.'
|
||||
---
|
||||
|
||||
Follow the instructions in ./workflow.md.
|
||||
# Party Mode
|
||||
|
||||
Facilitate roundtable discussions where BMAD agents participate as **real subagents** — each spawned independently via the Agent tool so they think for themselves. You are the orchestrator: you pick voices, build context, spawn agents, and present their responses. In the default subagent mode, never generate agent responses yourself — that's the whole point. In `--solo` mode, you roleplay all agents directly.
|
||||
|
||||
## Why This Matters
|
||||
|
||||
The whole point of party mode is that each agent produces a genuinely independent perspective. When one LLM roleplays multiple characters, the "opinions" tend to converge and feel performative. By spawning each agent as its own subagent process, you get real diversity of thought — agents that actually disagree, catch things the others miss, and bring their authentic expertise to bear.
|
||||
|
||||
## Arguments
|
||||
|
||||
Party mode accepts optional arguments when invoked:
|
||||
|
||||
- `--model <model>` — Force all subagents to use a specific model (e.g. `--model haiku`, `--model opus`). When omitted, choose the model that fits the round: use a faster model (like `haiku`) for brief or reactive responses, and the default model for deep or complex topics. Match model weight to the depth of thinking the round requires.
|
||||
- `--solo` — Run without subagents. Instead of spawning independent agents, roleplay all selected agents yourself in a single response. This is useful when subagents aren't available, when speed matters more than independence, or when the user just prefers it. Announce solo mode on activation so the user knows responses come from one LLM.
|
||||
|
||||
## On Activation
|
||||
|
||||
1. **Parse arguments** — check for `--model` and `--solo` flags from the user's invocation.
|
||||
|
||||
2. Load config from `{project-root}/_bmad/core/config.yaml` and resolve:
|
||||
- Use `{user_name}` for greeting
|
||||
- Use `{communication_language}` for all communications
|
||||
|
||||
3. **Read the agent manifest** at `{project-root}/_bmad/_config/agent-manifest.csv`. Build an internal roster of available agents with their displayName, title, icon, role, identity, communicationStyle, and principles.
|
||||
|
||||
4. **Load project context** — search for `**/project-context.md`. If found, hold it as background context that gets passed to agents when relevant.
|
||||
|
||||
5. **Welcome the user** — briefly introduce party mode (mention if solo mode is active). Show the full agent roster (icon + name + one-line role) so the user knows who's available. Ask what they'd like to discuss.
|
||||
|
||||
## The Core Loop
|
||||
|
||||
For each user message:
|
||||
|
||||
### 1. Pick the Right Voices
|
||||
|
||||
Choose 2-4 agents whose expertise is most relevant to what the user is asking. Use your judgment — you know each agent's role and identity from the manifest. Some guidelines:
|
||||
|
||||
- **Simple question**: 2 agents with the most relevant expertise
|
||||
- **Complex or cross-cutting topic**: 3-4 agents from different domains
|
||||
- **User names specific agents**: Always include those, plus 1-2 complementary voices
|
||||
- **User asks an agent to respond to another**: Spawn just that agent with the other's response as context
|
||||
- **Rotate over time** — avoid the same 2 agents dominating every round
|
||||
|
||||
### 2. Build Context and Spawn
|
||||
|
||||
For each selected agent, spawn a subagent using the Agent tool. Each subagent gets:
|
||||
|
||||
**The agent prompt** (built from the manifest data):
|
||||
```
|
||||
You are {displayName} ({title}), a BMAD agent in a collaborative roundtable discussion.
|
||||
|
||||
## Your Persona
|
||||
- Icon: {icon}
|
||||
- Communication Style: {communicationStyle}
|
||||
- Principles: {principles}
|
||||
- Identity: {identity}
|
||||
|
||||
## Discussion Context
|
||||
{summary of the conversation so far — keep under 400 words}
|
||||
|
||||
{project context if relevant}
|
||||
|
||||
## What Other Agents Said This Round
|
||||
{if this is a cross-talk or reaction request, include the responses being reacted to — otherwise omit this section}
|
||||
|
||||
## The User's Message
|
||||
{the user's actual message}
|
||||
|
||||
## Guidelines
|
||||
- Respond authentically as {displayName}. Your perspective should reflect your genuine expertise.
|
||||
- Start your response with: {icon} **{displayName}:**
|
||||
- Speak in {communication_language}.
|
||||
- Scale your response to the substance — don't pad. If you have a brief point, make it briefly.
|
||||
- Disagree with other agents when your expertise tells you to. Don't hedge or be polite about it.
|
||||
- If you have nothing substantive to add, say so in one sentence rather than manufacturing an opinion.
|
||||
- You may ask the user direct questions if something needs clarification.
|
||||
- Do NOT use tools. Just respond with your perspective.
|
||||
```
|
||||
|
||||
**Spawn all agents in parallel** — put all Agent tool calls in a single response so they run concurrently. If `--model` was specified, use that model for all subagents. Otherwise, pick the model that matches the round — faster/cheaper models for brief takes, the default for substantive analysis.
|
||||
|
||||
**Solo mode** — if `--solo` is active, skip spawning. Instead, generate all agent responses yourself in a single message, staying faithful to each agent's persona. Keep responses clearly separated with each agent's icon and name header.
|
||||
|
||||
### 3. Present Responses
|
||||
|
||||
Present each agent's full response to the user — distinct, complete, and in their own voice. The user is here to hear the agents speak, not to read your synthesis of what they think. Whether the responses came from subagents or you generated them in solo mode, the rule is the same: each agent's perspective gets its own unabridged section. Never blend, paraphrase, or condense agent responses into a summary.
|
||||
|
||||
The format is simple: each agent's response one after another, separated by a blank line. No introductions, no "here's what they said", no framing — just the responses themselves.
|
||||
|
||||
After all agent responses are presented in full, you may optionally add a brief **Orchestrator Note** — flagging a disagreement worth exploring, or suggesting an agent to bring in next round. Keep this short and clearly labeled so it's not confused with agent speech.
|
||||
|
||||
### 4. Handle Follow-ups
|
||||
|
||||
The user drives what happens next. Common patterns:
|
||||
|
||||
| User says... | You do... |
|
||||
|---|---|
|
||||
| Continues the general discussion | Pick fresh agents, repeat the loop |
|
||||
| "Winston, what do you think about what Sally said?" | Spawn just Winston with Sally's response as context |
|
||||
| "Bring in Quinn on this" | Spawn Quinn with a summary of the discussion so far |
|
||||
| "I agree with John, let's go deeper on that" | Spawn John + 1-2 others to expand on John's point |
|
||||
| "What would Mary and Bob think about Winston's approach?" | Spawn Mary and Bob with Winston's response as context |
|
||||
| Asks a question directed at everyone | Back to step 1 with all agents |
|
||||
|
||||
The key insight: you can spawn any combination at any time. One agent, two agents reacting to a third, the whole roster — whatever serves the conversation. Each spawn is cheap and independent.
|
||||
|
||||
## Keeping Context Manageable
|
||||
|
||||
As the conversation grows, you'll need to summarize prior rounds rather than passing the full transcript to each subagent. Aim to keep the "Discussion Context" section under 400 words — a tight summary of what's been discussed, what positions agents have taken, and what the user seems to be driving toward. Update this summary every 2-3 rounds or when the topic shifts significantly.
## When Things Go Sideways

- **Agents are all saying the same thing**: Bring in a contrarian voice, or ask a specific agent to play devil's advocate by framing the prompt that way.
- **Discussion is going in circles**: Summarize the impasse and ask the user what angle they want to explore next.
- **User seems disengaged**: Ask directly — continue, change topic, or wrap up?
- **Agent gives a weak response**: Don't retry. Present it and let the user decide if they want more from that agent.

## Exit

When the user says they're done (any natural phrasing — "thanks", "that's all", "end party mode", etc.), give a brief wrap-up of the key takeaways from the discussion and return to normal mode. Don't force exit triggers — just read the room.
@@ -1,138 +0,0 @@

# Step 1: Agent Loading and Party Mode Initialization

## MANDATORY EXECUTION RULES (READ FIRST):

- ✅ YOU ARE A PARTY MODE FACILITATOR, not just a workflow executor
- 🎯 CREATE ENGAGING ATMOSPHERE for multi-agent collaboration
- 📋 LOAD COMPLETE AGENT ROSTER from manifest with merged personalities
- 🔍 PARSE AGENT DATA for conversation orchestration
- 💬 INTRODUCE DIVERSE AGENT SAMPLE to kick off discussion
- ✅ ALWAYS SPEAK OUTPUT in your agent communication style, using the configured `{communication_language}`

## EXECUTION PROTOCOLS:

- 🎯 Show agent loading process before presenting party activation
- ⚠️ Present [C] continue option after agent roster is loaded
- 💾 ONLY save when user chooses C (Continue)
- 📖 Update frontmatter `stepsCompleted: [1]` before loading next step
- 🚫 FORBIDDEN to start conversation until C is selected
## CONTEXT BOUNDARIES:

- Agent manifest CSV is available at `{project-root}/_bmad/_config/agent-manifest.csv`
- User configuration from config.yaml is loaded and resolved
- Party mode is a standalone interactive workflow
- All agent data is available for conversation orchestration

## YOUR TASK:

Load the complete agent roster from the manifest and initialize party mode with an engaging introduction.

## AGENT LOADING SEQUENCE:

### 1. Load Agent Manifest

Begin the agent loading process:

"Now initializing **Party Mode** with our complete BMAD agent roster! Let me load up all our talented agents and get them ready for an amazing collaborative discussion.

**Agent Manifest Loading:**"

Load and parse the agent manifest CSV from `{project-root}/_bmad/_config/agent-manifest.csv`
### 2. Extract Agent Data

Parse CSV to extract complete agent information for each entry:

**Agent Data Points:**

- **name** (agent identifier for system calls)
- **displayName** (agent's persona name for conversations)
- **title** (formal position and role description)
- **icon** (visual identifier emoji)
- **role** (capabilities and expertise summary)
- **identity** (background and specialization details)
- **communicationStyle** (how they communicate and express themselves)
- **principles** (decision-making philosophy and values)
- **module** (source module organization)
- **path** (file location reference)
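The extraction step can be sketched as a small helper. This is illustrative only: the column names come from the data points listed above, and the naive comma split assumes no quoted commas in fields (a real parser should handle CSV quoting):

```javascript
// Parse the agent manifest CSV into an array of roster objects,
// keyed by the header row's column names.
function parseAgentManifest(csvText) {
  const [headerLine, ...rows] = csvText.trim().split('\n');
  const headers = headerLine.split(',').map((h) => h.trim());
  return rows
    .filter((row) => row.trim().length > 0)
    .map((row) => {
      const values = row.split(',');
      const agent = {};
      headers.forEach((h, i) => {
        agent[h] = (values[i] ?? '').trim();
      });
      return agent;
    });
}
```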
### 3. Build Agent Roster

Create complete agent roster with merged personalities:

**Roster Building Process:**

- Combine manifest data with agent file configurations
- Merge personality traits, capabilities, and communication styles
- Validate agent availability and configuration completeness
- Organize agents by expertise domains for intelligent selection

### 4. Party Mode Activation

Generate enthusiastic party mode introduction:

"🎉 PARTY MODE ACTIVATED! 🎉

Welcome {{user_name}}! I'm excited to facilitate an incredible multi-agent discussion with our complete BMAD team. All our specialized agents are online and ready to collaborate, bringing their unique expertise and perspectives to whatever you'd like to explore.

**Our Collaborating Agents Include:**

[Display 3-4 diverse agents to showcase variety]:

- [Icon Emoji] **[Agent Name]** ([Title]): [Brief role description]
- [Icon Emoji] **[Agent Name]** ([Title]): [Brief role description]
- [Icon Emoji] **[Agent Name]** ([Title]): [Brief role description]

**[Total Count] agents** are ready to contribute their expertise!

**What would you like to discuss with the team today?**"
### 5. Present Continue Option

After agent loading and introduction:

"**Agent roster loaded successfully!** All our BMAD experts are excited to collaborate with you.

**Ready to start the discussion?**

[C] Continue - Begin multi-agent conversation"

### 6. Handle Continue Selection

#### If 'C' (Continue):

- Update frontmatter: `stepsCompleted: [1]`
- Set `agents_loaded: true` and `party_active: true`
- Load: `./step-02-discussion-orchestration.md`
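The frontmatter update on continue can be sketched as a string transform. This is a simplification for illustration (a real implementation would likely use a YAML library rather than regex replacement; the function name is hypothetical):

```javascript
// Mark step 1 complete in the tracking document's YAML frontmatter,
// flipping the flags named in the protocol above.
function markStepOneComplete(frontmatter) {
  return frontmatter
    .replace(/stepsCompleted: \[\]/, 'stepsCompleted: [1]')
    .replace(/agents_loaded: false/, 'agents_loaded: true')
    .replace(/party_active: false/, 'party_active: true');
}
```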
## SUCCESS METRICS:

✅ Agent manifest successfully loaded and parsed
✅ Complete agent roster built with merged personalities
✅ Engaging party mode introduction created
✅ Diverse agent sample showcased for user
✅ [C] continue option presented and handled correctly
✅ Frontmatter updated with agent loading status
✅ Proper routing to discussion orchestration step

## FAILURE MODES:

❌ Failed to load or parse agent manifest CSV
❌ Incomplete agent data extraction or roster building
❌ Generic or unengaging party mode introduction
❌ Not showcasing diverse agent capabilities
❌ Not presenting [C] continue option after loading
❌ Starting conversation without user selection
## AGENT LOADING PROTOCOLS:

- Validate CSV format and required columns
- Handle missing or incomplete agent entries gracefully
- Cross-reference manifest with actual agent files
- Prepare agent selection logic for intelligent conversation routing

## NEXT STEP:

After user selects 'C', load `./step-02-discussion-orchestration.md` to begin the interactive multi-agent conversation with intelligent agent selection and natural conversation flow.

Remember: Create an engaging, party-like atmosphere while maintaining professional expertise and intelligent conversation orchestration!
@@ -1,187 +0,0 @@

# Step 2: Discussion Orchestration and Multi-Agent Conversation

## MANDATORY EXECUTION RULES (READ FIRST):

- ✅ YOU ARE A CONVERSATION ORCHESTRATOR, not just a response generator
- 🎯 SELECT RELEVANT AGENTS based on topic analysis and expertise matching
- 📋 MAINTAIN CHARACTER CONSISTENCY using merged agent personalities
- 🔍 ENABLE NATURAL CROSS-TALK between agents for dynamic conversation
- ✅ ALWAYS SPEAK OUTPUT in your agent communication style, using the configured `{communication_language}`

## EXECUTION PROTOCOLS:

- 🎯 Analyze user input for intelligent agent selection before responding
- ⚠️ Present [E] exit option after each agent response round
- 💾 Continue conversation until user selects E (Exit)
- 📖 Maintain conversation state and context throughout session
- 🚫 FORBIDDEN to exit until E is selected or exit trigger detected
## CONTEXT BOUNDARIES:

- Complete agent roster with merged personalities is available
- User topic and conversation history guide agent selection
- Exit triggers: `*exit`, `goodbye`, `end party`, `quit`
## YOUR TASK:

Orchestrate dynamic multi-agent conversations with intelligent agent selection, natural cross-talk, and authentic character portrayal.

## DISCUSSION ORCHESTRATION SEQUENCE:

### 1. User Input Analysis

For each user message or topic:

**Input Analysis Process:**
"Analyzing your message for the perfect agent collaboration..."

**Analysis Criteria:**

- Domain expertise requirements (technical, business, creative, etc.)
- Complexity level and depth needed
- Conversation context and previous agent contributions
- User's specific agent mentions or requests
### 2. Intelligent Agent Selection

Select 2-3 most relevant agents based on analysis:

**Selection Logic:**

- **Primary Agent**: Best expertise match for core topic
- **Secondary Agent**: Complementary perspective or alternative approach
- **Tertiary Agent**: Cross-domain insight or devil's advocate (if beneficial)

**Priority Rules:**

- If user names specific agent → Prioritize that agent + 1-2 complementary agents
- Rotate agent participation over time to ensure inclusive discussion
- Balance expertise domains for comprehensive perspectives
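The selection logic can be pictured as a scoring pass over the roster. This is an illustrative sketch only: the keyword match against `role`/`identity` text is an assumption for demonstration, whereas the real selection is a judgment call by the orchestrator, not a mechanical match:

```javascript
// Score each agent against the topic by counting topic words that
// appear in its role/identity text, then take the top few.
function selectAgents(agents, topic, count = 3) {
  const topicWords = topic.toLowerCase().split(/\s+/);
  return agents
    .map((agent) => {
      const haystack = `${agent.role} ${agent.identity}`.toLowerCase();
      const score = topicWords.filter((w) => haystack.includes(w)).length;
      return { agent, score };
    })
    .sort((a, b) => b.score - a.score)
    .slice(0, count)
    .map((entry) => entry.agent);
}
```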
### 3. In-Character Response Generation

Generate authentic responses for each selected agent:

**Character Consistency:**

- Apply agent's exact communication style from merged data
- Reflect their principles and values in reasoning
- Draw from their identity and role for authentic expertise
- Maintain their unique voice and personality traits

**Response Structure:**

[For each selected agent]:

"[Icon Emoji] **[Agent Name]**: [Authentic in-character response]

[Bash: .claude/hooks/bmad-speak.sh \"[Agent Name]\" \"[Their response]\"]"
### 4. Natural Cross-Talk Integration

Enable dynamic agent-to-agent interactions:

**Cross-Talk Patterns:**

- Agents can reference each other by name: "As [Another Agent] mentioned..."
- Building on previous points: "[Another Agent] makes a great point about..."
- Respectful disagreements: "I see it differently than [Another Agent]..."
- Follow-up questions between agents: "How would you handle [specific aspect]?"

**Conversation Flow:**

- Allow natural conversational progression
- Enable agents to ask each other questions
- Maintain professional yet engaging discourse
- Include personality-driven humor and quirks when appropriate
### 5. Question Handling Protocol

Manage different types of questions appropriately:

**Direct Questions to User:**
When an agent asks the user a specific question:

- End that response round immediately after the question
- Clearly highlight: **[Agent Name] asks: [Their question]**
- Display: _[Awaiting user response...]_
- WAIT for user input before continuing

**Rhetorical Questions:**
Agents can ask thinking-aloud questions without pausing conversation flow.

**Inter-Agent Questions:**
Allow natural back-and-forth within the same response round for dynamic interaction.
### 6. Response Round Completion

After generating all agent responses for the round, let the user know they can speak naturally with the agents, and then show this menu option:

`[E] Exit Party Mode - End the collaborative session`
### 7. Exit Condition Checking

Check for exit conditions before continuing:

**Automatic Triggers:**

- User message contains: `*exit`, `goodbye`, `end party`, `quit`
- Immediate agent farewells and workflow termination

**Natural Conclusion:**

- Conversation seems to be naturally concluding
- Confirm whether the user wants to exit party mode and return to where they were, or continue chatting. Do this conversationally, through an agent in the party.

### 8. Handle Exit Selection

#### If 'E' (Exit Party Mode):

- Read fully and follow: `./step-03-graceful-exit.md`
## SUCCESS METRICS:

✅ Intelligent agent selection based on topic analysis
✅ Authentic in-character responses maintained consistently
✅ Natural cross-talk and agent interactions enabled
✅ Question handling protocol followed correctly
✅ [E] exit option presented after each response round
✅ Conversation context and state maintained throughout
✅ Graceful conversation flow without abrupt interruptions

## FAILURE MODES:

❌ Generic responses without character consistency
❌ Poor agent selection not matching topic expertise
❌ Ignoring user questions or exit triggers
❌ Not enabling natural agent cross-talk and interactions
❌ Continuing conversation without user input when questions asked
## CONVERSATION ORCHESTRATION PROTOCOLS:

- Maintain conversation memory and context across rounds
- Rotate agent participation for inclusive discussions
- Handle topic drift while maintaining productivity
- Balance fun and professional collaboration
- Enable learning and knowledge sharing between agents

## MODERATION GUIDELINES:

**Quality Control:**

- If discussion becomes circular, have bmad-master summarize and redirect
- Ensure all agents stay true to their merged personalities
- Handle disagreements constructively and professionally
- Maintain respectful and inclusive conversation environment

**Flow Management:**

- Guide conversation toward productive outcomes
- Encourage diverse perspectives and creative thinking
- Balance depth with breadth of discussion
- Adapt conversation pace to user engagement level
## NEXT STEP:

When user selects 'E' or exit conditions are met, load `./step-03-graceful-exit.md` to provide satisfying agent farewells and conclude the party mode session.

Remember: Orchestrate engaging, intelligent conversations while maintaining authentic agent personalities and natural interaction patterns!
@@ -1,167 +0,0 @@

# Step 3: Graceful Exit and Party Mode Conclusion

## MANDATORY EXECUTION RULES (READ FIRST):

- ✅ YOU ARE A PARTY MODE COORDINATOR concluding an engaging session
- 🎯 PROVIDE SATISFYING AGENT FAREWELLS in authentic character voices
- 📋 EXPRESS GRATITUDE to user for collaborative participation
- 🔍 ACKNOWLEDGE SESSION HIGHLIGHTS and key insights gained
- 💬 MAINTAIN POSITIVE ATMOSPHERE until the very end
- ✅ ALWAYS SPEAK OUTPUT in your agent communication style, using the configured `{communication_language}`

## EXECUTION PROTOCOLS:

- 🎯 Generate characteristic agent goodbyes that reflect their personalities
- ⚠️ Complete workflow exit after farewell sequence
- 💾 Update frontmatter with final workflow completion
- 📖 Clean up any active party mode state or temporary data
- 🚫 FORBIDDEN abrupt exits without proper agent farewells
## CONTEXT BOUNDARIES:

- Party mode session is concluding naturally or via user request
- Complete agent roster and conversation history are available
- User has participated in collaborative multi-agent discussion
- Final workflow completion and state cleanup required

## YOUR TASK:

Provide satisfying agent farewells and conclude the party mode session with gratitude and positive closure.

## GRACEFUL EXIT SEQUENCE:

### 1. Acknowledge Session Conclusion

Begin exit process with warm acknowledgment:

"What an incredible collaborative session! Thank you {{user_name}} for engaging with our BMAD agent team in this dynamic discussion. Your questions and insights brought out the best in our agents and led to some truly valuable perspectives.

**Before we wrap up, let a few of our agents say goodbye...**"
### 2. Generate Agent Farewells

Select 2-3 agents who were most engaged or representative of the discussion:

**Farewell Selection Criteria:**

- Agents who made significant contributions to the discussion
- Agents with distinct personalities that provide memorable goodbyes
- Mix of expertise domains to showcase collaborative diversity
- Agents who can reference session highlights meaningfully

**Agent Farewell Format:**

For each selected agent:

"[Icon Emoji] **[Agent Name]**: [Characteristic farewell reflecting their personality, communication style, and role. May reference session highlights, express gratitude, or offer final insights related to their expertise domain.]

[Bash: .claude/hooks/bmad-speak.sh \"[Agent Name]\" \"[Their farewell message]\"]"

**Example Farewells:**

- **Architect/Winston**: "It's been a pleasure architecting solutions with you today! Remember to build on solid foundations and always consider scalability. Until next time! 🏗️"
- **Innovator/Creative Agent**: "What an inspiring creative journey! Don't let those innovative ideas fade - nurture them and watch them grow. Keep thinking outside the box! 🎨"
- **Strategist/Business Agent**: "Excellent strategic collaboration today! The insights we've developed will serve you well. Keep analyzing, keep optimizing, and keep winning! 📈"
### 3. Session Highlight Summary

Briefly acknowledge key discussion outcomes:

**Session Recognition:**
"**Session Highlights:** Today we explored [main topic] through [number] different perspectives, generating valuable insights on [key outcomes]. The collaboration between our [relevant expertise domains] agents created a comprehensive understanding that wouldn't have been possible with any single viewpoint."

### 4. Final Party Mode Conclusion

End with enthusiastic and appreciative closure:

"🎊 **Party Mode Session Complete!** 🎊

Thank you for bringing our BMAD agents together in this unique collaborative experience. The diverse perspectives, expert insights, and dynamic interactions we've shared demonstrate the power of multi-agent thinking.

**Our agents learned from each other and from you** - that's what makes these collaborative sessions so valuable!

**Ready for your next challenge**? Whether you need more focused discussions with specific agents or want to bring the whole team together again, we're always here to help you tackle complex problems through collaborative intelligence.

**Until next time - keep collaborating, keep innovating, and keep enjoying the power of multi-agent teamwork!** 🚀"
### 5. Complete Workflow Exit

Final workflow completion steps:

**Frontmatter Update:**

```yaml
---
stepsCompleted: [1, 2, 3]
user_name: '{{user_name}}'
date: '{{date}}'
agents_loaded: true
party_active: false
workflow_completed: true
---
```

**State Cleanup:**

- Clear any active conversation state
- Reset agent selection cache
- Mark party mode workflow as completed
### 6. Exit Workflow

Execute final workflow termination:

"[PARTY MODE WORKFLOW COMPLETE]

Thank you for using BMAD Party Mode for collaborative multi-agent discussions!"
## SUCCESS METRICS:

✅ Satisfying agent farewells generated in authentic character voices
✅ Session highlights and contributions acknowledged meaningfully
✅ Positive and appreciative closure atmosphere maintained
✅ Frontmatter properly updated with workflow completion
✅ All workflow state cleaned up appropriately
✅ User left with positive impression of collaborative experience

## FAILURE MODES:

❌ Generic or impersonal agent farewells without character consistency
❌ Missing acknowledgment of session contributions or insights
❌ Abrupt exit without proper closure or appreciation
❌ Not updating workflow completion status in frontmatter
❌ Leaving party mode state active after conclusion
❌ Negative or dismissive tone during exit process
## EXIT PROTOCOLS:

- Ensure all agents have opportunity to say goodbye appropriately
- Maintain the positive, collaborative atmosphere established during session
- Reference specific discussion highlights when possible for personalization
- Express genuine appreciation for user's participation and engagement
- Leave user with encouragement for future collaborative sessions

## RETURN PROTOCOL:

If this workflow was invoked from within a parent workflow:

1. Identify the parent workflow step or instructions file that invoked you
2. Re-read that file now to restore context
3. Resume from where the parent workflow directed you to invoke this sub-workflow
4. Present any menus or options the parent workflow requires after sub-workflow completion

Do not continue conversationally - explicitly return to parent workflow control flow.
## WORKFLOW COMPLETION:

After farewell sequence and final closure:

- All party mode workflow steps completed successfully
- Agent roster and conversation state properly finalized
- User expressed gratitude and positive session conclusion
- Multi-agent collaboration demonstrated value and effectiveness
- Workflow ready for next party mode session activation

Congratulations on facilitating a successful multi-agent collaborative discussion through BMAD Party Mode! 🎉

The user has experienced the power of bringing diverse expert perspectives together to tackle complex topics through intelligent conversation orchestration and authentic agent interactions.
@@ -1,190 +0,0 @@

---
---

# Party Mode Workflow

**Goal:** Orchestrates group discussions between all installed BMAD agents, enabling natural multi-agent conversations

**Your Role:** You are a party mode facilitator and multi-agent conversation orchestrator. You bring together diverse BMAD agents for collaborative discussions, managing the flow of conversation while maintaining each agent's unique personality and expertise - while still utilizing the configured {communication_language}.

---
## WORKFLOW ARCHITECTURE

This uses **micro-file architecture** with **sequential conversation orchestration**:

- Step 01 loads agent manifest and initializes party mode
- Step 02 orchestrates the ongoing multi-agent discussion
- Step 03 handles graceful party mode exit
- Conversation state tracked in frontmatter
- Agent personalities maintained through merged manifest data

---
## INITIALIZATION

### Configuration Loading

Load config from `{project-root}/_bmad/core/config.yaml` and resolve:

- `project_name`, `output_folder`, `user_name`
- `communication_language`, `document_output_language`, `user_skill_level`
- `date` as a system-generated value
- Agent manifest path: `{project-root}/_bmad/_config/agent-manifest.csv`

### Paths

- `agent_manifest_path` = `{project-root}/_bmad/_config/agent-manifest.csv`
- `standalone_mode` = `true` (party mode is an interactive workflow)

---
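The resolution step above can be sketched minimally. This is illustrative only: it handles just flat `key: value` lines (a real loader would use a YAML library for nesting and types), and per the list above, `date` is generated by the system rather than read from the file:

```javascript
// Resolve a flat config object from simple `key: value` YAML lines,
// stripping surrounding quotes, and inject a system-generated date.
function resolveConfig(yamlText) {
  const config = {};
  for (const line of yamlText.split('\n')) {
    const match = line.match(/^(\w+):\s*(.+)$/);
    if (match) config[match[1]] = match[2].trim().replace(/^['"]|['"]$/g, '');
  }
  config.date = new Date().toISOString().slice(0, 10); // YYYY-MM-DD
  return config;
}
```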
## AGENT MANIFEST PROCESSING

### Agent Data Extraction

Parse CSV manifest to extract agent entries with complete information:

- **name** (agent identifier)
- **displayName** (agent's persona name)
- **title** (formal position)
- **icon** (visual identifier emoji)
- **role** (capabilities summary)
- **identity** (background/expertise)
- **communicationStyle** (how they communicate)
- **principles** (decision-making philosophy)
- **module** (source module)
- **path** (file location)

### Agent Roster Building

Build complete agent roster with merged personalities for conversation orchestration.

---
## EXECUTION

Execute party mode activation and conversation orchestration:

### Party Mode Activation

**Your Role:** You are a party mode facilitator creating an engaging multi-agent conversation environment.

**Welcome Activation:**

"🎉 PARTY MODE ACTIVATED! 🎉

Welcome {{user_name}}! All BMAD agents are here and ready for a dynamic group discussion. I've brought together our complete team of experts, each bringing their unique perspectives and capabilities.

**Let me introduce our collaborating agents:**

[Load agent roster and display 2-3 most diverse agents as examples]

**What would you like to discuss with the team today?**"
### Agent Selection Intelligence

For each user message or topic:

**Relevance Analysis:**

- Analyze the user's message/question for domain and expertise requirements
- Identify which agents would naturally contribute based on their role, capabilities, and principles
- Consider conversation context and previous agent contributions
- Select 2-3 most relevant agents for balanced perspective

**Priority Handling:**

- If user addresses specific agent by name, prioritize that agent + 1-2 complementary agents
- Rotate agent selection to ensure diverse participation over time
- Enable natural cross-talk and agent-to-agent interactions

### Conversation Orchestration

Load step: `./steps/step-02-discussion-orchestration.md`

---
## WORKFLOW STATES

### Frontmatter Tracking

```yaml
---
stepsCompleted: [1]
user_name: '{{user_name}}'
date: '{{date}}'
agents_loaded: true
party_active: true
exit_triggers: ['*exit', 'goodbye', 'end party', 'quit']
---
```

---
## ROLE-PLAYING GUIDELINES

### Character Consistency

- Maintain strict in-character responses based on merged personality data
- Use each agent's documented communication style consistently
- Reference agent memories and context when relevant
- Allow natural disagreements and different perspectives
- Include personality-driven quirks and occasional humor

### Conversation Flow

- Enable agents to reference each other naturally by name or role
- Maintain professional discourse while being engaging
- Respect each agent's expertise boundaries
- Allow cross-talk and building on previous points

---
## QUESTION HANDLING PROTOCOL

### Direct Questions to User

When an agent asks the user a specific question:

- End that response round immediately after the question
- Clearly highlight the questioning agent and their question
- Wait for user response before any agent continues

### Inter-Agent Questions

Agents can question each other and respond naturally within the same round for dynamic conversation.

---
## EXIT CONDITIONS

### Automatic Triggers

Exit party mode when user message contains any exit triggers:

- `*exit`, `goodbye`, `end party`, `quit`

### Graceful Conclusion

If conversation naturally concludes:

- Ask user if they'd like to continue or end party mode
- Exit gracefully when user indicates completion

---
## MODERATION NOTES

**Quality Control:**

- If discussion becomes circular, have bmad-master summarize and redirect
- Balance fun and productivity based on conversation tone
- Ensure all agents stay true to their merged personalities
- Exit gracefully when user indicates completion

**Conversation Management:**

- Rotate agent participation to ensure inclusive discussion
- Handle topic drift while maintaining productive conversation
- Facilitate cross-agent collaboration and knowledge sharing
@@ -15,7 +15,7 @@

const path = require('node:path');
const os = require('node:os');
const fs = require('fs-extra');
const { loadSkillManifest, getInstallToBmad } = require('../tools/cli/installers/lib/ide/shared/skill-manifest');
const { loadSkillManifest, getInstallToBmad } = require('../tools/installer/ide/shared/skill-manifest');

// ANSI colors
const colors = {
@@ -14,10 +14,9 @@

const path = require('node:path');
const os = require('node:os');
const fs = require('fs-extra');
const { ConfigCollector } = require('../tools/cli/installers/lib/core/config-collector');
const { ManifestGenerator } = require('../tools/cli/installers/lib/core/manifest-generator');
const { IdeManager } = require('../tools/cli/installers/lib/ide/manager');
const { clearCache, loadPlatformCodes } = require('../tools/cli/installers/lib/ide/platform-codes');
const { ManifestGenerator } = require('../tools/installer/core/manifest-generator');
const { IdeManager } = require('../tools/installer/ide/manager');
const { clearCache, loadPlatformCodes } = require('../tools/installer/ide/platform-codes');

// ANSI colors
const colors = {
@@ -149,8 +148,6 @@ async function runTests() {

assert(windsurfInstaller?.target_dir === '.windsurf/skills', 'Windsurf target_dir uses native skills path');

assert(windsurfInstaller?.skill_format === true, 'Windsurf installer enables native skill output');

assert(
  Array.isArray(windsurfInstaller?.legacy_targets) && windsurfInstaller.legacy_targets.includes('.windsurf/workflows'),
  'Windsurf installer cleans legacy workflow output',
@@ -197,8 +194,6 @@ async function runTests() {

assert(kiroInstaller?.target_dir === '.kiro/skills', 'Kiro target_dir uses native skills path');

assert(kiroInstaller?.skill_format === true, 'Kiro installer enables native skill output');

assert(
  Array.isArray(kiroInstaller?.legacy_targets) && kiroInstaller.legacy_targets.includes('.kiro/steering'),
  'Kiro installer cleans legacy steering output',
@@ -245,8 +240,6 @@ async function runTests() {

assert(antigravityInstaller?.target_dir === '.agent/skills', 'Antigravity target_dir uses native skills path');

assert(antigravityInstaller?.skill_format === true, 'Antigravity installer enables native skill output');

assert(
  Array.isArray(antigravityInstaller?.legacy_targets) && antigravityInstaller.legacy_targets.includes('.agent/workflows'),
  'Antigravity installer cleans legacy workflow output',
|
@ -293,8 +286,6 @@ async function runTests() {
|
|||
|
||||
assert(auggieInstaller?.target_dir === '.augment/skills', 'Auggie target_dir uses native skills path');
|
||||
|
||||
assert(auggieInstaller?.skill_format === true, 'Auggie installer enables native skill output');
|
||||
|
||||
assert(
|
||||
Array.isArray(auggieInstaller?.legacy_targets) && auggieInstaller.legacy_targets.includes('.augment/commands'),
|
||||
'Auggie installer cleans legacy command output',
|
||||
|
|
@ -346,10 +337,6 @@ async function runTests() {
|
|||
|
||||
assert(opencodeInstaller?.target_dir === '.opencode/skills', 'OpenCode target_dir uses native skills path');
|
||||
|
||||
assert(opencodeInstaller?.skill_format === true, 'OpenCode installer enables native skill output');
|
||||
|
||||
assert(opencodeInstaller?.ancestor_conflict_check === true, 'OpenCode installer enables ancestor conflict checks');
|
||||
|
||||
assert(
|
||||
Array.isArray(opencodeInstaller?.legacy_targets) &&
|
||||
['.opencode/agents', '.opencode/commands', '.opencode/agent', '.opencode/command'].every((legacyTarget) =>
|
||||
|
|
@ -412,10 +399,6 @@ async function runTests() {
|
|||
|
||||
assert(claudeInstaller?.target_dir === '.claude/skills', 'Claude Code target_dir uses native skills path');
|
||||
|
||||
assert(claudeInstaller?.skill_format === true, 'Claude Code installer enables native skill output');
|
||||
|
||||
assert(claudeInstaller?.ancestor_conflict_check === true, 'Claude Code installer enables ancestor conflict checks');
|
||||
|
||||
assert(
|
||||
Array.isArray(claudeInstaller?.legacy_targets) && claudeInstaller.legacy_targets.includes('.claude/commands'),
|
||||
'Claude Code installer cleans legacy command output',
|
||||
|
|
@ -454,44 +437,7 @@ async function runTests() {
console.log('');

// ============================================================
// Test 10: Claude Code Ancestor Conflict
// ============================================================
console.log(`${colors.yellow}Test Suite 10: Claude Code Ancestor Conflict${colors.reset}\n`);

try {
const tempRoot10 = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-claude-code-ancestor-test-'));
const parentProjectDir10 = path.join(tempRoot10, 'parent');
const childProjectDir10 = path.join(parentProjectDir10, 'child');
const installedBmadDir10 = await createTestBmadFixture();

await fs.ensureDir(path.join(parentProjectDir10, '.git'));
await fs.ensureDir(path.join(parentProjectDir10, '.claude', 'skills', 'bmad-existing'));
await fs.ensureDir(childProjectDir10);
await fs.writeFile(path.join(parentProjectDir10, '.claude', 'skills', 'bmad-existing', 'SKILL.md'), 'legacy\n');

const ideManager10 = new IdeManager();
await ideManager10.ensureInitialized();
const result10 = await ideManager10.setup('claude-code', childProjectDir10, installedBmadDir10, {
silent: true,
selectedModules: ['bmm'],
});
const expectedConflictDir10 = await fs.realpath(path.join(parentProjectDir10, '.claude', 'skills'));

assert(result10.success === false, 'Claude Code setup refuses install when ancestor skills already exist');
assert(result10.handlerResult?.reason === 'ancestor-conflict', 'Claude Code ancestor rejection reports ancestor-conflict reason');
assert(
result10.handlerResult?.conflictDir === expectedConflictDir10,
'Claude Code ancestor rejection points at ancestor .claude/skills dir',
);

await fs.remove(tempRoot10);
await fs.remove(path.dirname(installedBmadDir10));
} catch (error) {
assert(false, 'Claude Code ancestor conflict protection test succeeds', error.message);
}

console.log('');
// Test 10: Removed — ancestor conflict check no longer applies (no IDE inherits skills from parent dirs)

// ============================================================
// Test 11: Codex Native Skills Install
@ -505,10 +451,6 @@ async function runTests() {
assert(codexInstaller?.target_dir === '.agents/skills', 'Codex target_dir uses native skills path');

assert(codexInstaller?.skill_format === true, 'Codex installer enables native skill output');

assert(codexInstaller?.ancestor_conflict_check === true, 'Codex installer enables ancestor conflict checks');

assert(
Array.isArray(codexInstaller?.legacy_targets) && codexInstaller.legacy_targets.includes('.codex/prompts'),
'Codex installer cleans legacy prompt output',

@ -547,41 +489,7 @@ async function runTests() {
console.log('');

// ============================================================
// Test 12: Codex Ancestor Conflict
// ============================================================
console.log(`${colors.yellow}Test Suite 12: Codex Ancestor Conflict${colors.reset}\n`);

try {
const tempRoot12 = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-codex-ancestor-test-'));
const parentProjectDir12 = path.join(tempRoot12, 'parent');
const childProjectDir12 = path.join(parentProjectDir12, 'child');
const installedBmadDir12 = await createTestBmadFixture();

await fs.ensureDir(path.join(parentProjectDir12, '.git'));
await fs.ensureDir(path.join(parentProjectDir12, '.agents', 'skills', 'bmad-existing'));
await fs.ensureDir(childProjectDir12);
await fs.writeFile(path.join(parentProjectDir12, '.agents', 'skills', 'bmad-existing', 'SKILL.md'), 'legacy\n');

const ideManager12 = new IdeManager();
await ideManager12.ensureInitialized();
const result12 = await ideManager12.setup('codex', childProjectDir12, installedBmadDir12, {
silent: true,
selectedModules: ['bmm'],
});
const expectedConflictDir12 = await fs.realpath(path.join(parentProjectDir12, '.agents', 'skills'));

assert(result12.success === false, 'Codex setup refuses install when ancestor skills already exist');
assert(result12.handlerResult?.reason === 'ancestor-conflict', 'Codex ancestor rejection reports ancestor-conflict reason');
assert(result12.handlerResult?.conflictDir === expectedConflictDir12, 'Codex ancestor rejection points at ancestor .agents/skills dir');

await fs.remove(tempRoot12);
await fs.remove(path.dirname(installedBmadDir12));
} catch (error) {
assert(false, 'Codex ancestor conflict protection test succeeds', error.message);
}

console.log('');
// Test 12: Removed — ancestor conflict check no longer applies (no IDE inherits skills from parent dirs)

// ============================================================
// Test 13: Cursor Native Skills Install
@ -595,8 +503,6 @@ async function runTests() {
assert(cursorInstaller?.target_dir === '.cursor/skills', 'Cursor target_dir uses native skills path');

assert(cursorInstaller?.skill_format === true, 'Cursor installer enables native skill output');

assert(
Array.isArray(cursorInstaller?.legacy_targets) && cursorInstaller.legacy_targets.includes('.cursor/commands'),
'Cursor installer cleans legacy command output',

@ -649,8 +555,6 @@ async function runTests() {
assert(rooInstaller?.target_dir === '.roo/skills', 'Roo target_dir uses native skills path');

assert(rooInstaller?.skill_format === true, 'Roo installer enables native skill output');

assert(
Array.isArray(rooInstaller?.legacy_targets) && rooInstaller.legacy_targets.includes('.roo/commands'),
'Roo installer cleans legacy command output',
@ -702,44 +606,7 @@ async function runTests() {
console.log('');

// ============================================================
// Test 15: OpenCode Ancestor Conflict
// ============================================================
console.log(`${colors.yellow}Test Suite 15: OpenCode Ancestor Conflict${colors.reset}\n`);

try {
const tempRoot = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-opencode-ancestor-test-'));
const parentProjectDir = path.join(tempRoot, 'parent');
const childProjectDir = path.join(parentProjectDir, 'child');
const installedBmadDir = await createTestBmadFixture();

await fs.ensureDir(path.join(parentProjectDir, '.git'));
await fs.ensureDir(path.join(parentProjectDir, '.opencode', 'skills', 'bmad-existing'));
await fs.ensureDir(childProjectDir);
await fs.writeFile(path.join(parentProjectDir, '.opencode', 'skills', 'bmad-existing', 'SKILL.md'), 'legacy\n');

const ideManager = new IdeManager();
await ideManager.ensureInitialized();
const result = await ideManager.setup('opencode', childProjectDir, installedBmadDir, {
silent: true,
selectedModules: ['bmm'],
});
const expectedConflictDir = await fs.realpath(path.join(parentProjectDir, '.opencode', 'skills'));

assert(result.success === false, 'OpenCode setup refuses install when ancestor skills already exist');
assert(result.handlerResult?.reason === 'ancestor-conflict', 'OpenCode ancestor rejection reports ancestor-conflict reason');
assert(
result.handlerResult?.conflictDir === expectedConflictDir,
'OpenCode ancestor rejection points at ancestor .opencode/skills dir',
);

await fs.remove(tempRoot);
await fs.remove(path.dirname(installedBmadDir));
} catch (error) {
assert(false, 'OpenCode ancestor conflict protection test succeeds', error.message);
}

console.log('');
// Test 15: Removed — ancestor conflict check no longer applies (no IDE inherits skills from parent dirs)

// Test 16: Removed — old YAML→XML QA agent compilation no longer applies (agents now use SKILL.md format)
@ -757,8 +624,6 @@ async function runTests() {
assert(copilotInstaller?.target_dir === '.github/skills', 'GitHub Copilot target_dir uses native skills path');

assert(copilotInstaller?.skill_format === true, 'GitHub Copilot installer enables native skill output');

assert(
Array.isArray(copilotInstaller?.legacy_targets) && copilotInstaller.legacy_targets.includes('.github/agents'),
'GitHub Copilot installer cleans legacy agents output',

@ -839,8 +704,6 @@ async function runTests() {
assert(clineInstaller?.target_dir === '.cline/skills', 'Cline target_dir uses native skills path');

assert(clineInstaller?.skill_format === true, 'Cline installer enables native skill output');

assert(
Array.isArray(clineInstaller?.legacy_targets) && clineInstaller.legacy_targets.includes('.clinerules/workflows'),
'Cline installer cleans legacy workflow output',

@ -901,8 +764,6 @@ async function runTests() {
assert(codebuddyInstaller?.target_dir === '.codebuddy/skills', 'CodeBuddy target_dir uses native skills path');

assert(codebuddyInstaller?.skill_format === true, 'CodeBuddy installer enables native skill output');

assert(
Array.isArray(codebuddyInstaller?.legacy_targets) && codebuddyInstaller.legacy_targets.includes('.codebuddy/commands'),
'CodeBuddy installer cleans legacy command output',

@ -961,8 +822,6 @@ async function runTests() {
assert(crushInstaller?.target_dir === '.crush/skills', 'Crush target_dir uses native skills path');

assert(crushInstaller?.skill_format === true, 'Crush installer enables native skill output');

assert(
Array.isArray(crushInstaller?.legacy_targets) && crushInstaller.legacy_targets.includes('.crush/commands'),
'Crush installer cleans legacy command output',

@ -1021,8 +880,6 @@ async function runTests() {
assert(traeInstaller?.target_dir === '.trae/skills', 'Trae target_dir uses native skills path');

assert(traeInstaller?.skill_format === true, 'Trae installer enables native skill output');

assert(
Array.isArray(traeInstaller?.legacy_targets) && traeInstaller.legacy_targets.includes('.trae/rules'),
'Trae installer cleans legacy rules output',
@ -1069,27 +926,34 @@ async function runTests() {
console.log('');

// ============================================================
// Suite 22: KiloCoder Suspended
// Suite 22: KiloCoder Native Skills
// ============================================================
console.log(`${colors.yellow}Test Suite 22: KiloCoder Suspended${colors.reset}\n`);
console.log(`${colors.yellow}Test Suite 22: KiloCoder Native Skills${colors.reset}\n`);

try {
clearCache();
const platformCodes22 = await loadPlatformCodes();
const kiloConfig22 = platformCodes22.platforms.kilo;

assert(typeof kiloConfig22?.suspended === 'string', 'KiloCoder has a suspended message in platform config');
assert(!kiloConfig22?.suspended, 'KiloCoder is not suspended');

assert(kiloConfig22?.installer?.target_dir === '.kilocode/skills', 'KiloCoder retains target_dir config for future use');
assert(kiloConfig22?.installer?.target_dir === '.kilocode/skills', 'KiloCoder target_dir uses native skills path');

assert(
Array.isArray(kiloConfig22?.installer?.legacy_targets) && kiloConfig22.installer.legacy_targets.includes('.kilocode/workflows'),
'KiloCoder installer cleans legacy workflows output',
);

const ideManager22 = new IdeManager();
await ideManager22.ensureInitialized();

// Should not appear in available IDEs
// Should appear in available IDEs
const availableIdes22 = ideManager22.getAvailableIdes();
assert(!availableIdes22.some((ide) => ide.value === 'kilo'), 'KiloCoder is hidden from IDE selection');
assert(
availableIdes22.some((ide) => ide.value === 'kilo'),
'KiloCoder appears in IDE selection',
);

// Setup should be blocked but legacy files should be cleaned up
const tempProjectDir22 = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-kilo-test-'));
const installedBmadDir22 = await createTestBmadFixture();

@ -1103,25 +967,29 @@ async function runTests() {
selectedModules: ['bmm'],
});

assert(result22.success === false, 'KiloCoder setup is blocked when suspended');
assert(result22.error === 'suspended', 'KiloCoder setup returns suspended error');
assert(result22.success === true, 'KiloCoder setup succeeds against temp project');

// Should not write new skill files
assert(
!(await fs.pathExists(path.join(tempProjectDir22, '.kilocode', 'skills'))),
'KiloCoder does not create skills directory when suspended',
);
const skillFile22 = path.join(tempProjectDir22, '.kilocode', 'skills', 'bmad-master', 'SKILL.md');
assert(await fs.pathExists(skillFile22), 'KiloCoder install writes SKILL.md directory output');

// Legacy files should be cleaned up
assert(
!(await fs.pathExists(path.join(tempProjectDir22, '.kilocode', 'workflows'))),
'KiloCoder legacy workflows are cleaned up even when suspended',
);
const skillContent22 = await fs.readFile(skillFile22, 'utf8');
const nameMatch22 = skillContent22.match(/^name:\s*(.+)$/m);
assert(nameMatch22 && nameMatch22[1].trim() === 'bmad-master', 'KiloCoder skill name frontmatter matches directory name exactly');

assert(!(await fs.pathExists(path.join(tempProjectDir22, '.kilocode', 'workflows'))), 'KiloCoder setup removes legacy workflows dir');

const result22b = await ideManager22.setup('kilo', tempProjectDir22, installedBmadDir22, {
silent: true,
selectedModules: ['bmm'],
});

assert(result22b.success === true, 'KiloCoder reinstall/upgrade succeeds over existing skills');
assert(await fs.pathExists(skillFile22), 'KiloCoder reinstall preserves SKILL.md output');

await fs.remove(tempProjectDir22);
await fs.remove(path.dirname(installedBmadDir22));
} catch (error) {
assert(false, 'KiloCoder suspended test succeeds', error.message);
assert(false, 'KiloCoder native skills test succeeds', error.message);
}

console.log('');
@ -1138,8 +1006,6 @@ async function runTests() {
assert(geminiInstaller?.target_dir === '.gemini/skills', 'Gemini target_dir uses native skills path');

assert(geminiInstaller?.skill_format === true, 'Gemini installer enables native skill output');

assert(
Array.isArray(geminiInstaller?.legacy_targets) && geminiInstaller.legacy_targets.includes('.gemini/commands'),
'Gemini installer cleans legacy commands output',

@ -1196,7 +1062,6 @@ async function runTests() {
const iflowInstaller = platformCodes24.platforms.iflow?.installer;

assert(iflowInstaller?.target_dir === '.iflow/skills', 'iFlow target_dir uses native skills path');
assert(iflowInstaller?.skill_format === true, 'iFlow installer enables native skill output');
assert(
Array.isArray(iflowInstaller?.legacy_targets) && iflowInstaller.legacy_targets.includes('.iflow/commands'),
'iFlow installer cleans legacy commands output',

@ -1246,7 +1111,6 @@ async function runTests() {
const qwenInstaller = platformCodes25.platforms.qwen?.installer;

assert(qwenInstaller?.target_dir === '.qwen/skills', 'QwenCoder target_dir uses native skills path');
assert(qwenInstaller?.skill_format === true, 'QwenCoder installer enables native skill output');
assert(
Array.isArray(qwenInstaller?.legacy_targets) && qwenInstaller.legacy_targets.includes('.qwen/commands'),
'QwenCoder installer cleans legacy commands output',

@ -1296,7 +1160,6 @@ async function runTests() {
const rovoInstaller = platformCodes26.platforms['rovo-dev']?.installer;

assert(rovoInstaller?.target_dir === '.rovodev/skills', 'Rovo Dev target_dir uses native skills path');
assert(rovoInstaller?.skill_format === true, 'Rovo Dev installer enables native skill output');
assert(
Array.isArray(rovoInstaller?.legacy_targets) && rovoInstaller.legacy_targets.includes('.rovodev/workflows'),
'Rovo Dev installer cleans legacy workflows output',

@ -1432,8 +1295,6 @@ async function runTests() {
const piInstaller = platformCodes28.platforms.pi?.installer;

assert(piInstaller?.target_dir === '.pi/skills', 'Pi target_dir uses native skills path');
assert(piInstaller?.skill_format === true, 'Pi installer enables native skill output');
assert(piInstaller?.template_type === 'default', 'Pi installer uses default skill template');

tempProjectDir28 = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-pi-test-'));
installedBmadDir28 = await createTestBmadFixture();
@ -1648,93 +1509,6 @@ async function runTests() {
// skill-manifest.csv should include the native agent entrypoint
const skillManifestCsv29 = await fs.readFile(path.join(tempFixture29, '_config', 'skill-manifest.csv'), 'utf8');
assert(skillManifestCsv29.includes('bmad-tea'), 'skill-manifest.csv includes native type:agent SKILL.md entrypoint');

// --- Agents at non-agents/ paths (regression test for BMM/CIS layouts) ---
// Create a second fixture with agents at paths like bmm/1-analysis/bmad-agent-analyst/
const tempFixture29b = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-agent-paths-'));
await fs.ensureDir(path.join(tempFixture29b, '_config'));

// Agent at bmm-style path: bmm/1-analysis/bmad-agent-analyst/
const bmmAgentDir = path.join(tempFixture29b, 'bmm', '1-analysis', 'bmad-agent-analyst');
await fs.ensureDir(bmmAgentDir);
await fs.writeFile(
path.join(bmmAgentDir, 'bmad-skill-manifest.yaml'),
[
'type: agent',
'name: bmad-agent-analyst',
'displayName: Mary',
'title: Business Analyst',
'role: Strategic Business Analyst',
'module: bmm',
].join('\n') + '\n',
);
await fs.writeFile(
path.join(bmmAgentDir, 'SKILL.md'),
'---\nname: bmad-agent-analyst\ndescription: Business Analyst agent\n---\n\nAnalyst agent.\n',
);

// Agent at cis-style path: cis/skills/bmad-cis-agent-brainstorming-coach/
const cisAgentDir = path.join(tempFixture29b, 'cis', 'skills', 'bmad-cis-agent-brainstorming-coach');
await fs.ensureDir(cisAgentDir);
await fs.writeFile(
path.join(cisAgentDir, 'bmad-skill-manifest.yaml'),
[
'type: agent',
'name: bmad-cis-agent-brainstorming-coach',
'displayName: Carson',
'title: Brainstorming Specialist',
'role: Master Facilitator',
'module: cis',
].join('\n') + '\n',
);
await fs.writeFile(
path.join(cisAgentDir, 'SKILL.md'),
'---\nname: bmad-cis-agent-brainstorming-coach\ndescription: Brainstorming coach\n---\n\nCoach.\n',
);

// Agent at standard agents/ path (GDS-style): gds/agents/gds-agent-game-dev/
const gdsAgentDir = path.join(tempFixture29b, 'gds', 'agents', 'gds-agent-game-dev');
await fs.ensureDir(gdsAgentDir);
await fs.writeFile(
path.join(gdsAgentDir, 'bmad-skill-manifest.yaml'),
[
'type: agent',
'name: gds-agent-game-dev',
'displayName: Link',
'title: Game Developer',
'role: Senior Game Dev',
'module: gds',
].join('\n') + '\n',
);
await fs.writeFile(
path.join(gdsAgentDir, 'SKILL.md'),
'---\nname: gds-agent-game-dev\ndescription: Game developer agent\n---\n\nGame dev.\n',
);

const generator29b = new ManifestGenerator();
await generator29b.generateManifests(tempFixture29b, ['bmm', 'cis', 'gds'], [], { ides: [] });

// All three agents should appear in agents[] regardless of directory layout
const bmmAgent = generator29b.agents.find((a) => a.name === 'bmad-agent-analyst');
assert(bmmAgent !== undefined, 'Agent at bmm/1-analysis/ path appears in agents[]');
assert(bmmAgent && bmmAgent.module === 'bmm', 'BMM agent module field comes from manifest file');
assert(bmmAgent && bmmAgent.path.includes('bmm/1-analysis/bmad-agent-analyst'), 'BMM agent path reflects actual directory layout');

const cisAgent = generator29b.agents.find((a) => a.name === 'bmad-cis-agent-brainstorming-coach');
assert(cisAgent !== undefined, 'Agent at cis/skills/ path appears in agents[]');
assert(cisAgent && cisAgent.module === 'cis', 'CIS agent module field comes from manifest file');

const gdsAgent = generator29b.agents.find((a) => a.name === 'gds-agent-game-dev');
assert(gdsAgent !== undefined, 'Agent at gds/agents/ path appears in agents[]');
assert(gdsAgent && gdsAgent.module === 'gds', 'GDS agent module field comes from manifest file');

// agent-manifest.csv should contain all three
const agentCsv29b = await fs.readFile(path.join(tempFixture29b, '_config', 'agent-manifest.csv'), 'utf8');
assert(agentCsv29b.includes('bmad-agent-analyst'), 'agent-manifest.csv includes BMM-layout agent');
assert(agentCsv29b.includes('bmad-cis-agent-brainstorming-coach'), 'agent-manifest.csv includes CIS-layout agent');
assert(agentCsv29b.includes('gds-agent-game-dev'), 'agent-manifest.csv includes GDS-layout agent');

await fs.remove(tempFixture29b).catch(() => {});
} catch (error) {
assert(false, 'Unified skill scanner test succeeds', error.message);
} finally {
@ -1861,8 +1635,6 @@ async function runTests() {
const onaInstaller = platformCodes32.platforms.ona?.installer;

assert(onaInstaller?.target_dir === '.ona/skills', 'Ona target_dir uses native skills path');
assert(onaInstaller?.skill_format === true, 'Ona installer enables native skill output');
assert(onaInstaller?.template_type === 'default', 'Ona installer uses default skill template');

tempProjectDir32 = await fs.mkdtemp(path.join(os.tmpdir(), 'bmad-ona-test-'));
installedBmadDir32 = await createTestBmadFixture();
@ -1941,93 +1713,6 @@ async function runTests() {
console.log('');

// ============================================================
// Test Suite 33: ConfigCollector Prompt Normalization
// ============================================================
console.log(`${colors.yellow}Test Suite 33: ConfigCollector Prompt Normalization${colors.reset}\n`);

try {
const teaModuleConfig33 = {
test_artifacts: {
default: '_bmad-output/test-artifacts',
},
test_design_output: {
prompt: 'Where should test design documents be stored?',
default: 'test-design',
result: '{test_artifacts}/{value}',
},
test_review_output: {
prompt: 'Where should test review reports be stored?',
default: 'test-reviews',
result: '{test_artifacts}/{value}',
},
trace_output: {
prompt: 'Where should traceability reports be stored?',
default: 'traceability',
result: '{test_artifacts}/{value}',
},
};

const collector33 = new ConfigCollector();
collector33.currentProjectDir = path.join(os.tmpdir(), 'bmad-config-normalization');
collector33.allAnswers = {};
collector33.collectedConfig = {
tea: {
test_artifacts: '_bmad-output/test-artifacts',
},
};
collector33.existingConfig = {
tea: {
test_artifacts: '_bmad-output/test-artifacts',
test_design_output: '_bmad-output/test-artifacts/test-design',
test_review_output: '_bmad-output/test-artifacts/test-reviews',
trace_output: '_bmad-output/test-artifacts/traceability',
},
};

const testDesignQuestion33 = await collector33.buildQuestion(
'tea',
'test_design_output',
teaModuleConfig33.test_design_output,
teaModuleConfig33,
);
const testReviewQuestion33 = await collector33.buildQuestion(
'tea',
'test_review_output',
teaModuleConfig33.test_review_output,
teaModuleConfig33,
);
const traceQuestion33 = await collector33.buildQuestion('tea', 'trace_output', teaModuleConfig33.trace_output, teaModuleConfig33);

assert(testDesignQuestion33.default === 'test-design', 'ConfigCollector normalizes existing test_design_output prompt default');
assert(testReviewQuestion33.default === 'test-reviews', 'ConfigCollector normalizes existing test_review_output prompt default');
assert(traceQuestion33.default === 'traceability', 'ConfigCollector normalizes existing trace_output prompt default');

collector33.allAnswers = {
tea_test_artifacts: '_bmad-output/test-artifacts',
};

assert(
collector33.processResultTemplate(teaModuleConfig33.test_design_output.result, testDesignQuestion33.default) ===
'_bmad-output/test-artifacts/test-design',
'ConfigCollector re-applies test_design_output template without duplicating prefix',
);
assert(
collector33.processResultTemplate(teaModuleConfig33.test_review_output.result, testReviewQuestion33.default) ===
'_bmad-output/test-artifacts/test-reviews',
'ConfigCollector re-applies test_review_output template without duplicating prefix',
);
assert(
collector33.processResultTemplate(teaModuleConfig33.trace_output.result, traceQuestion33.default) ===
'_bmad-output/test-artifacts/traceability',
'ConfigCollector re-applies trace_output template without duplicating prefix',
);
} catch (error) {
assert(false, 'ConfigCollector prompt normalization test succeeds', error.message);
}

console.log('');

// ============================================================
// Summary
// ============================================================
@ -34,7 +34,7 @@ function assert(condition, testName, errorMessage = '') {
// ---------------------------------------------------------------------------
// These regexes are extracted from ModuleManager.vendorWorkflowDependencies()
// in tools/cli/installers/lib/modules/manager.js
// in tools/installer/modules/manager.js
// ---------------------------------------------------------------------------

// Source regex (line ~1081) — uses non-capturing group for _bmad
@ -1,38 +0,0 @@
#!/usr/bin/env node

/**
 * BMad Method CLI - Direct execution wrapper for npx
 * This file ensures proper execution when run via npx from GitHub or npm registry
 */

const { execFileSync } = require('node:child_process');
const path = require('node:path');
const fs = require('node:fs');

// Check if we're running in an npx temporary directory
const isNpxExecution = __dirname.includes('_npx') || __dirname.includes('.npm');

if (isNpxExecution) {
  // Running via npx - spawn child process to preserve user's working directory
  const args = process.argv.slice(2);
  const bmadCliPath = path.join(__dirname, 'cli', 'bmad-cli.js');

  if (!fs.existsSync(bmadCliPath)) {
    console.error('Error: Could not find bmad-cli.js at', bmadCliPath);
    console.error('Current directory:', __dirname);
    process.exit(1);
  }

  try {
    // Execute CLI from user's working directory (process.cwd()), not npm cache
    execFileSync('node', [bmadCliPath, ...args], {
      stdio: 'inherit',
      cwd: process.cwd(), // This preserves the user's working directory
    });
  } catch (error) {
    process.exit(error.status || 1);
  }
} else {
  // Local execution - use require
  require('./cli/bmad-cli.js');
}
@ -1,743 +0,0 @@
|
|||
const fs = require('fs-extra');
const path = require('node:path');
const glob = require('glob');
const yaml = require('yaml');
const prompts = require('../../../lib/prompts');

/**
 * Dependency Resolver for BMAD modules
 * Handles cross-module dependencies and ensures all required files are included
 */
class DependencyResolver {
  constructor() {
    this.dependencies = new Map();
    this.resolvedFiles = new Set();
    this.missingDependencies = new Set();
  }

  /**
   * Resolve all dependencies for selected modules
   * @param {string} bmadDir - BMAD installation directory
   * @param {Array} selectedModules - Modules explicitly selected by user
   * @param {Object} options - Resolution options
   * @returns {Object} Resolution results with all required files
   */
  async resolve(bmadDir, selectedModules = [], options = {}) {
    if (options.verbose) {
      await prompts.log.info('Resolving module dependencies...');
    }

    // Always include core as base
    const modulesToProcess = new Set(['core', ...selectedModules]);

    // First pass: collect all explicitly selected files
    const primaryFiles = await this.collectPrimaryFiles(bmadDir, modulesToProcess, options);

    // Second pass: parse and resolve dependencies
    const allDependencies = await this.parseDependencies(primaryFiles);

    // Third pass: resolve dependency paths and collect files
    const resolvedDeps = await this.resolveDependencyPaths(bmadDir, allDependencies);

    // Fourth pass: check for transitive dependencies
    const transitiveDeps = await this.resolveTransitiveDependencies(bmadDir, resolvedDeps);

    // Combine all files
    const allFiles = new Set([...primaryFiles.map((f) => f.path), ...resolvedDeps, ...transitiveDeps]);

    // Organize by module
    const organizedFiles = this.organizeByModule(bmadDir, allFiles);

    // Report results (only in verbose mode)
    if (options.verbose) {
      await this.reportResults(organizedFiles, selectedModules);
    }

    return {
      primaryFiles,
      dependencies: resolvedDeps,
      transitiveDependencies: transitiveDeps,
      allFiles: [...allFiles],
      byModule: organizedFiles,
      missing: [...this.missingDependencies],
    };
  }

  /**
   * Collect primary files from selected modules
   */
  async collectPrimaryFiles(bmadDir, modules, options = {}) {
    const files = [];
    const { moduleManager } = options;

    for (const module of modules) {
      // Skip external modules - they're installed from cache, not from source
      if (moduleManager && (await moduleManager.isExternalModule(module))) {
        continue;
      }

      // Handle both source (src/) and installed (bmad/) directory structures
      let moduleDir;

      // Check if this is a source directory (has 'src' subdirectory)
      const srcDir = path.join(bmadDir, 'src');
      if (await fs.pathExists(srcDir)) {
        // Source directory structure: src/core-skills or src/bmm-skills
        if (module === 'core') {
          moduleDir = path.join(srcDir, 'core-skills');
        } else if (module === 'bmm') {
          moduleDir = path.join(srcDir, 'bmm-skills');
        }
      }

      if (!moduleDir) {
        continue;
      }

      if (!(await fs.pathExists(moduleDir))) {
        await prompts.log.warn('Module directory not found: ' + moduleDir);
        continue;
      }

      // Collect agents
      const agentsDir = path.join(moduleDir, 'agents');
      if (await fs.pathExists(agentsDir)) {
        const agentFiles = await glob.glob('*.md', { cwd: agentsDir });
        for (const file of agentFiles) {
          const agentPath = path.join(agentsDir, file);

          // Check for localskip attribute
          const content = await fs.readFile(agentPath, 'utf8');
          const hasLocalSkip = content.match(/<agent[^>]*\slocalskip="true"[^>]*>/);
          if (hasLocalSkip) {
            continue; // Skip agents marked for web-only
          }

          files.push({
            path: agentPath,
            type: 'agent',
            module,
            name: path.basename(file, '.md'),
          });
        }
      }

      // Collect tasks
      const tasksDir = path.join(moduleDir, 'tasks');
      if (await fs.pathExists(tasksDir)) {
        const taskFiles = await glob.glob('*.md', { cwd: tasksDir });
        for (const file of taskFiles) {
          files.push({
            path: path.join(tasksDir, file),
            type: 'task',
            module,
            name: path.basename(file, '.md'),
          });
        }
      }
    }

    return files;
  }

  /**
   * Parse dependencies from file content
   */
  async parseDependencies(files) {
    const allDeps = new Set();

    for (const file of files) {
      const content = await fs.readFile(file.path, 'utf8');

      // Parse YAML frontmatter for explicit dependencies
      const frontmatterMatch = content.match(/^---\r?\n([\s\S]*?)\r?\n---/);
      if (frontmatterMatch) {
        try {
          // Pre-process to handle backticks in YAML values
          let yamlContent = frontmatterMatch[1];
          // Quote values with backticks to make them valid YAML
          yamlContent = yamlContent.replaceAll(/: `([^`]+)`/g, ': "$1"');

          const frontmatter = yaml.parse(yamlContent);
          if (frontmatter.dependencies) {
            const deps = Array.isArray(frontmatter.dependencies) ? frontmatter.dependencies : [frontmatter.dependencies];

            for (const dep of deps) {
              allDeps.add({
                from: file.path,
                dependency: dep,
                type: 'explicit',
              });
            }
          }

          // Check for template dependencies
          if (frontmatter.template) {
            const templates = Array.isArray(frontmatter.template) ? frontmatter.template : [frontmatter.template];
            for (const template of templates) {
              allDeps.add({
                from: file.path,
                dependency: template,
                type: 'template',
              });
            }
          }
        } catch (error) {
          await prompts.log.warn('Failed to parse frontmatter in ' + file.name + ': ' + error.message);
        }
      }

      // Parse content for command references (cross-module dependencies)
      const commandRefs = this.parseCommandReferences(content);
      for (const ref of commandRefs) {
        allDeps.add({
          from: file.path,
          dependency: ref,
          type: 'command',
        });
      }

      // Parse for file path references
      const fileRefs = this.parseFileReferences(content);
      for (const ref of fileRefs) {
        // Determine type based on path format
        // Paths starting with bmad/ are absolute references to the bmad installation
        const depType = ref.startsWith('bmad/') ? 'bmad-path' : 'file';
        allDeps.add({
          from: file.path,
          dependency: ref,
          type: depType,
        });
      }
    }

    return allDeps;
  }

  /**
   * Parse command references from content
   */
  parseCommandReferences(content) {
    const refs = new Set();

    // Match @task-{name} or @agent-{name} or @{module}-{type}-{name}
    const commandPattern = /@(task-|agent-|bmad-)([a-z0-9-]+)/g;
    let match;

    while ((match = commandPattern.exec(content)) !== null) {
      refs.add(match[0]);
    }

    // Match file paths like bmad/core/agents/analyst
    const pathPattern = /bmad\/(core|bmm|cis)\/(agents|tasks)\/([a-z0-9-]+)/g;

    while ((match = pathPattern.exec(content)) !== null) {
      refs.add(match[0]);
    }

    return [...refs];
  }

  /**
   * Parse file path references from content
   */
  parseFileReferences(content) {
    const refs = new Set();

    // Match relative paths like ../templates/file.yaml or ./data/file.md
    const relativePattern = /['"](\.\.?\/[^'"]+\.(md|yaml|yml|xml|json|txt|csv))['"]/g;
    let match;

    while ((match = relativePattern.exec(content)) !== null) {
      refs.add(match[1]);
    }

    // Parse exec attributes in command tags
    const execPattern = /exec="([^"]+)"/g;
    while ((match = execPattern.exec(content)) !== null) {
      let execPath = match[1];
      if (execPath && execPath !== '*') {
        // Remove {project-root} prefix to get the actual path
        // Usage is like {project-root}/bmad/core/tasks/foo.md
        if (execPath.includes('{project-root}')) {
          execPath = execPath.replace('{project-root}', '');
        }
        refs.add(execPath);
      }
    }

    // Parse tmpl attributes in command tags
    const tmplPattern = /tmpl="([^"]+)"/g;
    while ((match = tmplPattern.exec(content)) !== null) {
      let tmplPath = match[1];
      if (tmplPath && tmplPath !== '*') {
        // Remove {project-root} prefix to get the actual path
        // Usage is like {project-root}/bmad/core/tasks/foo.md
        if (tmplPath.includes('{project-root}')) {
          tmplPath = tmplPath.replace('{project-root}', '');
        }
        refs.add(tmplPath);
      }
    }

    return [...refs];
  }

  /**
   * Resolve dependency paths to actual files
   */
  async resolveDependencyPaths(bmadDir, dependencies) {
    const resolved = new Set();

    for (const dep of dependencies) {
      const resolvedPaths = await this.resolveSingleDependency(bmadDir, dep);
      for (const path of resolvedPaths) {
        resolved.add(path);
      }
    }

    return resolved;
  }

  /**
   * Resolve a single dependency to file paths
   */
  async resolveSingleDependency(bmadDir, dep) {
    const paths = [];

    switch (dep.type) {
      case 'explicit':
      case 'file': {
        let depPath = dep.dependency;

        // Handle {project-root} prefix if present
        if (depPath.includes('{project-root}')) {
          // Remove {project-root} and resolve as bmad path
          depPath = depPath.replace('{project-root}', '');

          if (depPath.startsWith('bmad/')) {
            const bmadPath = depPath.replace(/^bmad\//, '');

            // Handle glob patterns
            if (depPath.includes('*')) {
              // Extract the base path and pattern
              const pathParts = bmadPath.split('/');
              const module = pathParts[0];
              const filePattern = pathParts.at(-1);
              const middlePath = pathParts.slice(1, -1).join('/');

              let basePath;
              if (module === 'core') {
                basePath = path.join(bmadDir, 'core', middlePath);
              } else {
                basePath = path.join(bmadDir, 'modules', module, middlePath);
              }

              if (await fs.pathExists(basePath)) {
                const files = await glob.glob(filePattern, { cwd: basePath });
                for (const file of files) {
                  paths.push(path.join(basePath, file));
                }
              }
            } else {
              // Direct path
              if (bmadPath.startsWith('core/')) {
                const corePath = path.join(bmadDir, bmadPath);
                if (await fs.pathExists(corePath)) {
                  paths.push(corePath);
                }
              } else {
                const parts = bmadPath.split('/');
                const module = parts[0];
                const rest = parts.slice(1).join('/');
                const modulePath = path.join(bmadDir, 'modules', module, rest);

                if (await fs.pathExists(modulePath)) {
                  paths.push(modulePath);
                }
              }
            }
          }
        } else {
          // Regular relative path handling
          const sourceDir = path.dirname(dep.from);

          // Handle glob patterns
          if (depPath.includes('*')) {
            const basePath = path.resolve(sourceDir, path.dirname(depPath));
            const pattern = path.basename(depPath);

            if (await fs.pathExists(basePath)) {
              const files = await glob.glob(pattern, { cwd: basePath });
              for (const file of files) {
                paths.push(path.join(basePath, file));
              }
            }
          } else {
            // Direct file reference
            const fullPath = path.resolve(sourceDir, depPath);
            if (await fs.pathExists(fullPath)) {
              paths.push(fullPath);
            } else {
              this.missingDependencies.add(`${depPath} (referenced by ${path.basename(dep.from)})`);
            }
          }
        }

        break;
      }
      case 'command': {
        // Resolve command references to actual files
        const commandPath = await this.resolveCommandToPath(bmadDir, dep.dependency);
        if (commandPath) {
          paths.push(commandPath);
        }

        break;
      }
      case 'bmad-path': {
        // Resolve bmad/ paths (from {project-root}/bmad/... references)
        // These are paths relative to the src directory structure
        const bmadPath = dep.dependency.replace(/^bmad\//, '');

        // Try to resolve as if it's in src structure
        // bmad/core/tasks/foo.md -> src/core-skills/tasks/foo.md
        // bmad/bmm/tasks/bar.md -> src/bmm-skills/tasks/bar.md (bmm is directly under src/)
        // bmad/cis/agents/bar.md -> src/modules/cis/agents/bar.md

        if (bmadPath.startsWith('core/')) {
          const corePath = path.join(bmadDir, bmadPath);
          if (await fs.pathExists(corePath)) {
            paths.push(corePath);
          } else {
            // Not found, but don't report as missing since it might be installed later
          }
        } else {
          // It's a module path like bmm/tasks/foo.md or cis/agents/bar.md
          const parts = bmadPath.split('/');
          const module = parts[0];
          const rest = parts.slice(1).join('/');
          let modulePath;
          if (module === 'bmm') {
            // bmm is directly under src/
            modulePath = path.join(bmadDir, module, rest);
          } else {
            // Other modules are under modules/
            modulePath = path.join(bmadDir, 'modules', module, rest);
          }

          if (await fs.pathExists(modulePath)) {
            paths.push(modulePath);
          } else {
            // Not found, but don't report as missing since it might be installed later
          }
        }

        break;
      }
      case 'template': {
        // Resolve template references
        let templateDep = dep.dependency;

        // Handle {project-root} prefix if present
        if (templateDep.includes('{project-root}')) {
          // Remove {project-root} and treat as bmad-path
          templateDep = templateDep.replace('{project-root}', '');

          // Now resolve as a bmad path
          if (templateDep.startsWith('bmad/')) {
            const bmadPath = templateDep.replace(/^bmad\//, '');

            if (bmadPath.startsWith('core/')) {
              const corePath = path.join(bmadDir, bmadPath);
              if (await fs.pathExists(corePath)) {
                paths.push(corePath);
              }
            } else {
              // Module path like cis/templates/brainstorm.md
              const parts = bmadPath.split('/');
              const module = parts[0];
              const rest = parts.slice(1).join('/');
              const modulePath = path.join(bmadDir, 'modules', module, rest);

              if (await fs.pathExists(modulePath)) {
                paths.push(modulePath);
              }
            }
          }
        } else {
          // Regular relative template path
          const sourceDir = path.dirname(dep.from);
          const templatePath = path.resolve(sourceDir, templateDep);

          if (await fs.pathExists(templatePath)) {
            paths.push(templatePath);
          } else {
            this.missingDependencies.add(`Template: ${dep.dependency}`);
          }
        }

        break;
      }
      // No default
    }

    return paths;
  }

  /**
   * Resolve command reference to file path
   */
  async resolveCommandToPath(bmadDir, command) {
    // Parse command format: @task-name or @agent-name or bmad/module/type/name

    if (command.startsWith('@task-')) {
      const taskName = command.slice(6);
      // Search all modules for this task
      for (const module of ['core', 'bmm', 'cis']) {
        const taskPath =
          module === 'core'
            ? path.join(bmadDir, 'core', 'tasks', `${taskName}.md`)
            : path.join(bmadDir, 'modules', module, 'tasks', `${taskName}.md`);
        if (await fs.pathExists(taskPath)) {
          return taskPath;
        }
      }
    } else if (command.startsWith('@agent-')) {
      const agentName = command.slice(7);
      // Search all modules for this agent
      for (const module of ['core', 'bmm', 'cis']) {
        const agentPath =
          module === 'core'
            ? path.join(bmadDir, 'core', 'agents', `${agentName}.md`)
            : path.join(bmadDir, 'modules', module, 'agents', `${agentName}.md`);
        if (await fs.pathExists(agentPath)) {
          return agentPath;
        }
      }
    } else if (command.startsWith('bmad/')) {
      // Direct path reference
      const parts = command.split('/');
      if (parts.length >= 4) {
        const [, module, type, ...nameParts] = parts;
        const name = nameParts.join('/'); // Handle nested paths

        // Check if name already has extension
        const fileName = name.endsWith('.md') ? name : `${name}.md`;

        const filePath =
          module === 'core' ? path.join(bmadDir, 'core', type, fileName) : path.join(bmadDir, 'modules', module, type, fileName);
        if (await fs.pathExists(filePath)) {
          return filePath;
        }
      }
    }

    // Don't report as missing if it's a self-reference within the module being installed
    if (!command.includes('cis') || command.includes('brain')) {
      // Only report missing if it's a true external dependency
      // this.missingDependencies.add(`Command: ${command}`);
    }
    return null;
  }

  /**
   * Resolve transitive dependencies (dependencies of dependencies)
   */
  async resolveTransitiveDependencies(bmadDir, directDeps) {
    const transitive = new Set();
    const processed = new Set();

    // Process each direct dependency
    for (const depPath of directDeps) {
      if (processed.has(depPath)) continue;
      processed.add(depPath);

      // Only process markdown and YAML files for transitive deps
      if ((depPath.endsWith('.md') || depPath.endsWith('.yaml') || depPath.endsWith('.yml')) && (await fs.pathExists(depPath))) {
        const content = await fs.readFile(depPath, 'utf8');
        const subDeps = await this.parseDependencies([
          {
            path: depPath,
            type: 'dependency',
            module: this.getModuleFromPath(bmadDir, depPath),
            name: path.basename(depPath),
          },
        ]);

        const resolvedSubDeps = await this.resolveDependencyPaths(bmadDir, subDeps);
        for (const subDep of resolvedSubDeps) {
          if (!directDeps.has(subDep)) {
            transitive.add(subDep);
          }
        }
      }
    }

    return transitive;
  }

  /**
   * Get module name from file path
   */
  getModuleFromPath(bmadDir, filePath) {
    const relative = path.relative(bmadDir, filePath);
    const parts = relative.split(path.sep);

    // Handle source directory structure (src/core-skills, src/bmm-skills, or src/modules/xxx)
    if (parts[0] === 'src') {
      if (parts[1] === 'core-skills') {
        return 'core';
      } else if (parts[1] === 'bmm-skills') {
        return 'bmm';
      } else if (parts[1] === 'modules' && parts.length > 2) {
        return parts[2];
      }
    }

    // Check if it's in modules directory (installed structure)
    if (parts[0] === 'modules' && parts.length > 1) {
      return parts[1];
    }

    // Otherwise return the first part (core, etc.)
    // But don't return 'src' as a module name
    if (parts[0] === 'src') {
      return 'unknown';
    }
    return parts[0] || 'unknown';
  }

  /**
   * Organize files by module
   */
  organizeByModule(bmadDir, files) {
    const organized = {};

    for (const file of files) {
      const module = this.getModuleFromPath(bmadDir, file);
      if (!organized[module]) {
        organized[module] = {
          agents: [],
          tasks: [],
          tools: [],
          templates: [],
          data: [],
          other: [],
        };
      }

      // Get relative path correctly based on module structure
      let moduleBase;

      // Check if file is in source directory structure
      if (file.includes('/src/core-skills/') || file.includes('/src/bmm-skills/')) {
        if (module === 'core') {
          moduleBase = path.join(bmadDir, 'src', 'core-skills');
        } else if (module === 'bmm') {
          moduleBase = path.join(bmadDir, 'src', 'bmm-skills');
        }
      } else {
        moduleBase = module === 'core' ? path.join(bmadDir, 'core') : path.join(bmadDir, 'modules', module);
      }

      const relative = path.relative(moduleBase, file);

      if (relative.startsWith('agents/') || file.includes('/agents/')) {
        organized[module].agents.push(file);
      } else if (relative.startsWith('tasks/') || file.includes('/tasks/')) {
        organized[module].tasks.push(file);
      } else if (relative.startsWith('tools/') || file.includes('/tools/')) {
        organized[module].tools.push(file);
      } else if (relative.includes('data/')) {
        organized[module].data.push(file);
      } else {
        organized[module].other.push(file);
      }
    }

    return organized;
  }

  /**
   * Report resolution results
   */
  async reportResults(organized, selectedModules) {
    await prompts.log.success('Dependency resolution complete');

    for (const [module, files] of Object.entries(organized)) {
      const isSelected = selectedModules.includes(module) || module === 'core';
      const totalFiles =
        files.agents.length + files.tasks.length + files.tools.length + files.templates.length + files.data.length + files.other.length;

      if (totalFiles > 0) {
        await prompts.log.info(` ${module.toUpperCase()} module:`);
        await prompts.log.message(` Status: ${isSelected ? 'Selected' : 'Dependencies only'}`);

        if (files.agents.length > 0) {
          await prompts.log.message(` Agents: ${files.agents.length}`);
        }
        if (files.tasks.length > 0) {
          await prompts.log.message(` Tasks: ${files.tasks.length}`);
        }
        if (files.templates.length > 0) {
          await prompts.log.message(` Templates: ${files.templates.length}`);
        }
        if (files.data.length > 0) {
          await prompts.log.message(` Data files: ${files.data.length}`);
        }
        if (files.other.length > 0) {
          await prompts.log.message(` Other files: ${files.other.length}`);
        }
      }
    }

    if (this.missingDependencies.size > 0) {
      await prompts.log.warn('Missing dependencies:');
      for (const missing of this.missingDependencies) {
        await prompts.log.warn(` - ${missing}`);
      }
    }
  }

  /**
   * Create a bundle for web deployment
   * @param {Object} resolution - Resolution results from resolve()
   * @returns {Object} Bundle data ready for web
   */
  async createWebBundle(resolution) {
    const bundle = {
      metadata: {
        created: new Date().toISOString(),
        modules: Object.keys(resolution.byModule),
        totalFiles: resolution.allFiles.length,
      },
      agents: {},
      tasks: {},
      templates: {},
      data: {},
    };

    // Bundle all files by type
    for (const filePath of resolution.allFiles) {
      if (!(await fs.pathExists(filePath))) continue;

      const content = await fs.readFile(filePath, 'utf8');
      const relative = path.relative(path.dirname(resolution.primaryFiles[0]?.path || '.'), filePath);

      if (filePath.includes('/agents/')) {
        bundle.agents[relative] = content;
      } else if (filePath.includes('/tasks/')) {
        bundle.tasks[relative] = content;
      } else if (filePath.includes('template')) {
        bundle.templates[relative] = content;
      } else {
        bundle.data[relative] = content;
      }
    }

    return bundle;
  }
}

module.exports = { DependencyResolver };

@@ -1,223 +0,0 @@
const path = require('node:path');
const fs = require('fs-extra');
const yaml = require('yaml');
const { Manifest } = require('./manifest');

class Detector {
  /**
   * Detect existing BMAD installation
   * @param {string} bmadDir - Path to bmad directory
   * @returns {Object} Installation status and details
   */
  async detect(bmadDir) {
    const result = {
      installed: false,
      path: bmadDir,
      version: null,
      hasCore: false,
      modules: [],
      ides: [],
      customModules: [],
      manifest: null,
    };

    // Check if bmad directory exists
    if (!(await fs.pathExists(bmadDir))) {
      return result;
    }

    // Check for manifest using the Manifest class
    const manifest = new Manifest();
    const manifestData = await manifest.read(bmadDir);
    if (manifestData) {
      result.manifest = manifestData;
      result.version = manifestData.version;
      result.installed = true;
      // Copy custom modules if they exist
      if (manifestData.customModules) {
        result.customModules = manifestData.customModules;
      }
    }

    // Check for core
    const corePath = path.join(bmadDir, 'core');
    if (await fs.pathExists(corePath)) {
      result.hasCore = true;

      // Try to get core version from config
      const coreConfigPath = path.join(corePath, 'config.yaml');
      if (await fs.pathExists(coreConfigPath)) {
        try {
          const configContent = await fs.readFile(coreConfigPath, 'utf8');
          const config = yaml.parse(configContent);
          if (!result.version && config.version) {
            result.version = config.version;
          }
        } catch {
          // Ignore config read errors
        }
      }
    }

    // Check for modules
    // If manifest exists, use it as the source of truth for installed modules
    // Otherwise fall back to directory scanning (legacy installations)
    if (manifestData && manifestData.modules && manifestData.modules.length > 0) {
      // Use manifest module list - these are officially installed modules
      for (const moduleId of manifestData.modules) {
        const modulePath = path.join(bmadDir, moduleId);
        const moduleConfigPath = path.join(modulePath, 'config.yaml');

        const moduleInfo = {
          id: moduleId,
          path: modulePath,
          version: 'unknown',
        };

        if (await fs.pathExists(moduleConfigPath)) {
          try {
            const configContent = await fs.readFile(moduleConfigPath, 'utf8');
            const config = yaml.parse(configContent);
            moduleInfo.version = config.version || 'unknown';
            moduleInfo.name = config.name || moduleId;
            moduleInfo.description = config.description;
          } catch {
            // Ignore config read errors
          }
        }

        result.modules.push(moduleInfo);
      }
    } else {
      // Fallback: scan directory for modules (legacy installations without manifest)
      const entries = await fs.readdir(bmadDir, { withFileTypes: true });
      for (const entry of entries) {
        if (entry.isDirectory() && entry.name !== 'core' && entry.name !== '_config') {
          const modulePath = path.join(bmadDir, entry.name);
          const moduleConfigPath = path.join(modulePath, 'config.yaml');

          // Only treat it as a module if it has a config.yaml
          if (await fs.pathExists(moduleConfigPath)) {
            const moduleInfo = {
              id: entry.name,
              path: modulePath,
              version: 'unknown',
            };

            try {
              const configContent = await fs.readFile(moduleConfigPath, 'utf8');
              const config = yaml.parse(configContent);
              moduleInfo.version = config.version || 'unknown';
              moduleInfo.name = config.name || entry.name;
              moduleInfo.description = config.description;
            } catch {
              // Ignore config read errors
            }

            result.modules.push(moduleInfo);
          }
        }
      }
    }

    // Check for IDE configurations from manifest
    if (result.manifest && result.manifest.ides) {
      // Filter out any undefined/null values
      result.ides = result.manifest.ides.filter((ide) => ide && typeof ide === 'string');
    }

    // Mark as installed if we found core or modules
    if (result.hasCore || result.modules.length > 0) {
      result.installed = true;
    }

    return result;
  }

  /**
   * Detect legacy installation (_bmad-method, .bmm, .cis)
   * @param {string} projectDir - Project directory to check
   * @returns {Object} Legacy installation details
   */
  async detectLegacy(projectDir) {
    const result = {
      hasLegacy: false,
      legacyCore: false,
      legacyModules: [],
      paths: [],
    };

    // Check for legacy core (_bmad-method)
    const legacyCorePath = path.join(projectDir, '_bmad-method');
    if (await fs.pathExists(legacyCorePath)) {
      result.hasLegacy = true;
      result.legacyCore = true;
      result.paths.push(legacyCorePath);
    }

    // Check for legacy modules (directories starting with .)
    const entries = await fs.readdir(projectDir, { withFileTypes: true });
    for (const entry of entries) {
      if (
        entry.isDirectory() &&
        entry.name.startsWith('.') &&
        entry.name !== '_bmad-method' &&
        !entry.name.startsWith('.git') &&
        !entry.name.startsWith('.vscode') &&
        !entry.name.startsWith('.idea')
      ) {
        const modulePath = path.join(projectDir, entry.name);
        const moduleManifestPath = path.join(modulePath, 'install-manifest.yaml');

        // Check if it's likely a BMAD module
        if ((await fs.pathExists(moduleManifestPath)) || (await fs.pathExists(path.join(modulePath, 'config.yaml')))) {
          result.hasLegacy = true;
          result.legacyModules.push({
            name: entry.name.slice(1), // Remove leading dot
            path: modulePath,
          });
          result.paths.push(modulePath);
        }
      }
    }

    return result;
  }

  /**
   * Check if migration from legacy is needed
   * @param {string} projectDir - Project directory
   * @returns {Object} Migration requirements
   */
  async checkMigrationNeeded(projectDir) {
    const bmadDir = path.join(projectDir, 'bmad');
    const current = await this.detect(bmadDir);
    const legacy = await this.detectLegacy(projectDir);

    return {
      needed: legacy.hasLegacy && !current.installed,
      canMigrate: legacy.hasLegacy,
      legacy: legacy,
      current: current,
    };
  }

  /**
   * Detect legacy BMAD v4 .bmad-method folder
   * @param {string} projectDir - Project directory to check
   * @returns {{ hasLegacyV4: boolean, offenders: string[] }}
   */
  async detectLegacyV4(projectDir) {
    const offenders = [];

    // Check for .bmad-method folder
    const bmadMethodPath = path.join(projectDir, '.bmad-method');
    if (await fs.pathExists(bmadMethodPath)) {
      offenders.push(bmadMethodPath);
    }

    return { hasLegacyV4: offenders.length > 0, offenders };
  }
}

module.exports = { Detector };

@@ -1,157 +0,0 @@
const path = require('node:path');
const fs = require('fs-extra');
const yaml = require('yaml');
const prompts = require('../../../lib/prompts');

/**
 * Manages IDE configuration persistence
 * Saves and loads IDE-specific configurations to/from bmad/_config/ides/
 */
class IdeConfigManager {
  constructor() {}

  /**
   * Get path to IDE config directory
   * @param {string} bmadDir - BMAD installation directory
   * @returns {string} Path to IDE config directory
   */
  getIdeConfigDir(bmadDir) {
    return path.join(bmadDir, '_config', 'ides');
  }

  /**
   * Get path to specific IDE config file
   * @param {string} bmadDir - BMAD installation directory
   * @param {string} ideName - IDE name (e.g., 'claude-code')
   * @returns {string} Path to IDE config file
   */
  getIdeConfigPath(bmadDir, ideName) {
    return path.join(this.getIdeConfigDir(bmadDir), `${ideName}.yaml`);
  }

  /**
   * Save IDE configuration
   * @param {string} bmadDir - BMAD installation directory
   * @param {string} ideName - IDE name
   * @param {Object} configuration - IDE-specific configuration object
   */
  async saveIdeConfig(bmadDir, ideName, configuration) {
    const configDir = this.getIdeConfigDir(bmadDir);
    await fs.ensureDir(configDir);

    const configPath = this.getIdeConfigPath(bmadDir, ideName);
    const now = new Date().toISOString();

    // Check if config already exists to preserve configured_date
    let configuredDate = now;
    if (await fs.pathExists(configPath)) {
      try {
        const existing = await this.loadIdeConfig(bmadDir, ideName);
        if (existing && existing.configured_date) {
          configuredDate = existing.configured_date;
        }
      } catch {
        // Ignore errors reading existing config
      }
    }

    const configData = {
      ide: ideName,
      configured_date: configuredDate,
      last_updated: now,
      configuration: configuration || {},
    };

    // Clean the config to remove any non-serializable values (like functions)
    const cleanConfig = structuredClone(configData);

    const yamlContent = yaml.stringify(cleanConfig, {
      indent: 2,
      lineWidth: 0,
      sortKeys: false,
    });

    // Ensure POSIX-compliant final newline
    const content = yamlContent.endsWith('\n') ? yamlContent : yamlContent + '\n';
    await fs.writeFile(configPath, content, 'utf8');
  }

  /**
   * Load IDE configuration
   * @param {string} bmadDir - BMAD installation directory
   * @param {string} ideName - IDE name
   * @returns {Object|null} IDE configuration or null if not found
   */
  async loadIdeConfig(bmadDir, ideName) {
    const configPath = this.getIdeConfigPath(bmadDir, ideName);

    if (!(await fs.pathExists(configPath))) {
      return null;
    }

    try {
      const content = await fs.readFile(configPath, 'utf8');
      const config = yaml.parse(content);
      return config;
    } catch (error) {
      await prompts.log.warn(`Failed to load IDE config for ${ideName}: ${error.message}`);
      return null;
    }
  }

  /**
   * Load all IDE configurations
   * @param {string} bmadDir - BMAD installation directory
   * @returns {Object} Map of IDE name to configuration
   */
  async loadAllIdeConfigs(bmadDir) {
    const configDir = this.getIdeConfigDir(bmadDir);
    const configs = {};

    if (!(await fs.pathExists(configDir))) {
      return configs;
    }

    try {
      const files = await fs.readdir(configDir);
      for (const file of files) {
        if (file.endsWith('.yaml')) {
          const ideName = file.replace('.yaml', '');
          const config = await this.loadIdeConfig(bmadDir, ideName);
          if (config) {
            configs[ideName] = config.configuration;
          }
        }
      }
    } catch (error) {
      await prompts.log.warn(`Failed to load IDE configs: ${error.message}`);
    }

    return configs;
  }

  /**
   * Check if IDE has saved configuration
   * @param {string} bmadDir - BMAD installation directory
   * @param {string} ideName - IDE name
   * @returns {boolean} True if configuration exists
   */
  async hasIdeConfig(bmadDir, ideName) {
    const configPath = this.getIdeConfigPath(bmadDir, ideName);
    return await fs.pathExists(configPath);
  }

  /**
   * Delete IDE configuration
   * @param {string} bmadDir - BMAD installation directory
   * @param {string} ideName - IDE name
   */
  async deleteIdeConfig(bmadDir, ideName) {
    const configPath = this.getIdeConfigPath(bmadDir, ideName);
    if (await fs.pathExists(configPath)) {
      await fs.remove(configPath);
    }
  }
}

module.exports = { IdeConfigManager };
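The save path above preserves `configured_date` across re-saves while refreshing `last_updated`. A minimal in-memory sketch of that merge (hypothetical helper; no filesystem or YAML serialization involved):

```javascript
// Hypothetical in-memory version of the date-preservation logic in saveIdeConfig().
function buildConfigData(ideName, configuration, existing, now) {
  // Keep the original configured_date if a previous config exists.
  const configuredDate = existing && existing.configured_date ? existing.configured_date : now;
  return {
    ide: ideName,
    configured_date: configuredDate,
    last_updated: now,
    configuration: configuration || {},
  };
}

// First save: both timestamps are "now".
const first = buildConfigData('claude-code', { theme: 'dark' }, null, '2024-01-01T00:00:00Z');

// Later save: configured_date survives, last_updated moves forward.
const second = buildConfigData('claude-code', { theme: 'light' }, first, '2024-06-01T00:00:00Z');
console.log(second.configured_date); // → 2024-01-01T00:00:00Z
console.log(second.last_updated);    // → 2024-06-01T00:00:00Z
```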
File diff suppressed because it is too large
@@ -1,657 +0,0 @@
const path = require('node:path');
const fs = require('fs-extra');
const yaml = require('yaml');
const prompts = require('../../../lib/prompts');
const { getSourcePath } = require('../../../lib/project-root');
const { BMAD_FOLDER_NAME } = require('./shared/path-utils');

/**
 * Base class for IDE-specific setup
 * All IDE handlers should extend this class
 */
class BaseIdeSetup {
  constructor(name, displayName = null, preferred = false) {
    this.name = name;
    this.displayName = displayName || name; // Human-readable name for UI
    this.preferred = preferred; // Whether this IDE should be shown in preferred list
    this.configDir = null; // Override in subclasses
    this.rulesDir = null; // Override in subclasses
    this.configFile = null; // Override in subclasses when detection is file-based
    this.detectionPaths = []; // Additional paths that indicate the IDE is configured
    this.bmadFolderName = BMAD_FOLDER_NAME; // Default, can be overridden
  }

  /**
   * Set the bmad folder name for placeholder replacement
   * @param {string} bmadFolderName - The bmad folder name
   */
  setBmadFolderName(bmadFolderName) {
    this.bmadFolderName = bmadFolderName;
  }

  /**
   * Main setup method - must be implemented by subclasses
   * @param {string} projectDir - Project directory
   * @param {string} bmadDir - BMAD installation directory
   * @param {Object} options - Setup options
   */
  async setup(projectDir, bmadDir, options = {}) {
    throw new Error(`setup() must be implemented by ${this.name} handler`);
  }

  /**
   * Cleanup IDE configuration
   * @param {string} projectDir - Project directory
   */
  async cleanup(projectDir, options = {}) {
    // Default implementation - can be overridden
    if (this.configDir) {
      const configPath = path.join(projectDir, this.configDir);
      if (await fs.pathExists(configPath)) {
        const bmadRulesPath = path.join(configPath, BMAD_FOLDER_NAME);
        if (await fs.pathExists(bmadRulesPath)) {
          await fs.remove(bmadRulesPath);
          if (!options.silent) await prompts.log.message(`Removed ${this.name} BMAD configuration`);
        }
      }
    }
  }

  /**
   * Install a custom agent launcher - subclasses should override
   * @param {string} projectDir - Project directory
   * @param {string} agentName - Agent name (e.g., "fred-commit-poet")
   * @param {string} agentPath - Path to compiled agent (relative to project root)
   * @param {Object} metadata - Agent metadata
   * @returns {Object|null} Info about created command, or null if not supported
   */
  async installCustomAgentLauncher(projectDir, agentName, agentPath, metadata) {
    // Default implementation - subclasses can override
    return null;
  }

  /**
   * Detect whether this IDE already has configuration in the project
   * Subclasses can override for custom logic
   * @param {string} projectDir - Project directory
   * @returns {boolean}
   */
  async detect(projectDir) {
    const pathsToCheck = [];

    if (this.configDir) {
      pathsToCheck.push(path.join(projectDir, this.configDir));
    }

    if (this.configFile) {
      pathsToCheck.push(path.join(projectDir, this.configFile));
    }

    if (Array.isArray(this.detectionPaths)) {
      for (const candidate of this.detectionPaths) {
        if (!candidate) continue;
        const resolved = path.isAbsolute(candidate) ? candidate : path.join(projectDir, candidate);
        pathsToCheck.push(resolved);
      }
    }

    for (const candidate of pathsToCheck) {
      if (await fs.pathExists(candidate)) {
        return true;
      }
    }

    return false;
  }

  /**
   * Get list of agents from BMAD installation
   * @param {string} bmadDir - BMAD installation directory
   * @returns {Array} List of agent files
   */
  async getAgents(bmadDir) {
    const agents = [];

    // Get core agents
    const coreAgentsPath = path.join(bmadDir, 'core', 'agents');
    if (await fs.pathExists(coreAgentsPath)) {
      const coreAgents = await this.scanDirectory(coreAgentsPath, '.md');
      agents.push(
        ...coreAgents.map((a) => ({
          ...a,
          module: 'core',
        })),
      );
    }

    // Get module agents
    const entries = await fs.readdir(bmadDir, { withFileTypes: true });
    for (const entry of entries) {
      if (entry.isDirectory() && entry.name !== 'core' && entry.name !== '_config' && entry.name !== 'agents') {
        const moduleAgentsPath = path.join(bmadDir, entry.name, 'agents');
        if (await fs.pathExists(moduleAgentsPath)) {
          const moduleAgents = await this.scanDirectory(moduleAgentsPath, '.md');
          agents.push(
            ...moduleAgents.map((a) => ({
              ...a,
              module: entry.name,
            })),
          );
        }
      }
    }

    // Get standalone agents from bmad/agents/ directory
    const standaloneAgentsDir = path.join(bmadDir, 'agents');
    if (await fs.pathExists(standaloneAgentsDir)) {
      const agentDirs = await fs.readdir(standaloneAgentsDir, { withFileTypes: true });

      for (const agentDir of agentDirs) {
        if (!agentDir.isDirectory()) continue;

        const agentDirPath = path.join(standaloneAgentsDir, agentDir.name);
        const agentFiles = await fs.readdir(agentDirPath);

        for (const file of agentFiles) {
          if (!file.endsWith('.md')) continue;
          if (file.includes('.customize.')) continue;

          const filePath = path.join(agentDirPath, file);
          const content = await fs.readFile(filePath, 'utf8');

          if (content.includes('localskip="true"')) continue;

          agents.push({
            name: file.replace('.md', ''),
            path: filePath,
            relativePath: path.relative(standaloneAgentsDir, filePath),
            filename: file,
            module: 'standalone', // Mark as standalone agent
          });
        }
      }
    }

    return agents;
  }

  /**
   * Get list of tasks from BMAD installation
   * @param {string} bmadDir - BMAD installation directory
   * @param {boolean} standaloneOnly - If true, only return standalone tasks
   * @returns {Array} List of task files
   */
  async getTasks(bmadDir, standaloneOnly = false) {
    const tasks = [];

    // Get core tasks (scan for both .md and .xml)
    const coreTasksPath = path.join(bmadDir, 'core', 'tasks');
    if (await fs.pathExists(coreTasksPath)) {
      const coreTasks = await this.scanDirectoryWithStandalone(coreTasksPath, ['.md', '.xml']);
      tasks.push(
        ...coreTasks.map((t) => ({
          ...t,
          module: 'core',
        })),
      );
    }

    // Get module tasks
    const entries = await fs.readdir(bmadDir, { withFileTypes: true });
    for (const entry of entries) {
      if (entry.isDirectory() && entry.name !== 'core' && entry.name !== '_config' && entry.name !== 'agents') {
        const moduleTasksPath = path.join(bmadDir, entry.name, 'tasks');
        if (await fs.pathExists(moduleTasksPath)) {
          const moduleTasks = await this.scanDirectoryWithStandalone(moduleTasksPath, ['.md', '.xml']);
          tasks.push(
            ...moduleTasks.map((t) => ({
              ...t,
              module: entry.name,
            })),
          );
        }
      }
    }

    // Filter by standalone if requested
    if (standaloneOnly) {
      return tasks.filter((t) => t.standalone === true);
    }

    return tasks;
  }

  /**
   * Get list of tools from BMAD installation
   * @param {string} bmadDir - BMAD installation directory
   * @param {boolean} standaloneOnly - If true, only return standalone tools
   * @returns {Array} List of tool files
   */
  async getTools(bmadDir, standaloneOnly = false) {
    const tools = [];

    // Get core tools (scan for both .md and .xml)
    const coreToolsPath = path.join(bmadDir, 'core', 'tools');
    if (await fs.pathExists(coreToolsPath)) {
      const coreTools = await this.scanDirectoryWithStandalone(coreToolsPath, ['.md', '.xml']);
      tools.push(
        ...coreTools.map((t) => ({
          ...t,
          module: 'core',
        })),
      );
    }

    // Get module tools
    const entries = await fs.readdir(bmadDir, { withFileTypes: true });
    for (const entry of entries) {
      if (entry.isDirectory() && entry.name !== 'core' && entry.name !== '_config' && entry.name !== 'agents') {
        const moduleToolsPath = path.join(bmadDir, entry.name, 'tools');
        if (await fs.pathExists(moduleToolsPath)) {
          const moduleTools = await this.scanDirectoryWithStandalone(moduleToolsPath, ['.md', '.xml']);
          tools.push(
            ...moduleTools.map((t) => ({
              ...t,
              module: entry.name,
            })),
          );
        }
      }
    }

    // Filter by standalone if requested
    if (standaloneOnly) {
      return tools.filter((t) => t.standalone === true);
    }

    return tools;
  }

  /**
   * Get list of workflows from BMAD installation
   * @param {string} bmadDir - BMAD installation directory
   * @param {boolean} standaloneOnly - If true, only return standalone workflows
   * @returns {Array} List of workflow files
   */
  async getWorkflows(bmadDir, standaloneOnly = false) {
    const workflows = [];

    // Get core workflows
    const coreWorkflowsPath = path.join(bmadDir, 'core', 'workflows');
    if (await fs.pathExists(coreWorkflowsPath)) {
      const coreWorkflows = await this.findWorkflowFiles(coreWorkflowsPath);
      workflows.push(
        ...coreWorkflows.map((w) => ({
          ...w,
          module: 'core',
        })),
      );
    }

    // Get module workflows
    const entries = await fs.readdir(bmadDir, { withFileTypes: true });
    for (const entry of entries) {
      if (entry.isDirectory() && entry.name !== 'core' && entry.name !== '_config' && entry.name !== 'agents') {
        const moduleWorkflowsPath = path.join(bmadDir, entry.name, 'workflows');
        if (await fs.pathExists(moduleWorkflowsPath)) {
          const moduleWorkflows = await this.findWorkflowFiles(moduleWorkflowsPath);
          workflows.push(
            ...moduleWorkflows.map((w) => ({
              ...w,
              module: entry.name,
            })),
          );
        }
      }
    }

    // Filter by standalone if requested
    if (standaloneOnly) {
      return workflows.filter((w) => w.standalone === true);
    }

    return workflows;
  }

  /**
   * Recursively find workflow.md files
   * @param {string} dir - Directory to search
   * @param {string} [rootDir] - Original root directory (used internally for recursion)
   * @returns {Array} List of workflow file info objects
   */
  async findWorkflowFiles(dir, rootDir = null) {
    rootDir = rootDir || dir;
    const workflows = [];

    if (!(await fs.pathExists(dir))) {
      return workflows;
    }

    const entries = await fs.readdir(dir, { withFileTypes: true });

    for (const entry of entries) {
      const fullPath = path.join(dir, entry.name);

      if (entry.isDirectory()) {
        // Recursively search subdirectories
        const subWorkflows = await this.findWorkflowFiles(fullPath, rootDir);
        workflows.push(...subWorkflows);
      } else if (entry.isFile() && entry.name === 'workflow.md') {
        // Read workflow.md frontmatter to get name and standalone property
        try {
          const content = await fs.readFile(fullPath, 'utf8');
          const frontmatterMatch = content.match(/^---\r?\n([\s\S]*?)\r?\n---/);
          if (!frontmatterMatch) continue;

          const workflowData = yaml.parse(frontmatterMatch[1]);

          if (workflowData && workflowData.name) {
            // Workflows are standalone by default unless explicitly false
            const standalone = workflowData.standalone !== false && workflowData.standalone !== 'false';
            workflows.push({
              name: workflowData.name,
              path: fullPath,
              relativePath: path.relative(rootDir, fullPath),
              filename: entry.name,
              description: workflowData.description || '',
              standalone: standalone,
            });
          }
        } catch {
          // Skip invalid workflow files
        }
      }
    }

    return workflows;
  }

  /**
   * Scan a directory for files with specific extension(s)
   * @param {string} dir - Directory to scan
   * @param {string|Array<string>} ext - File extension(s) to match (e.g., '.md' or ['.md', '.xml'])
   * @param {string} [rootDir] - Original root directory (used internally for recursion)
   * @returns {Array} List of file info objects
   */
  async scanDirectory(dir, ext, rootDir = null) {
    rootDir = rootDir || dir;
    const files = [];

    if (!(await fs.pathExists(dir))) {
      return files;
    }

    // Normalize ext to array
    const extensions = Array.isArray(ext) ? ext : [ext];

    const entries = await fs.readdir(dir, { withFileTypes: true });

    for (const entry of entries) {
      const fullPath = path.join(dir, entry.name);

      if (entry.isDirectory()) {
        // Recursively scan subdirectories
        const subFiles = await this.scanDirectory(fullPath, ext, rootDir);
        files.push(...subFiles);
      } else if (entry.isFile()) {
        // Check if file matches any of the extensions
        const matchedExt = extensions.find((e) => entry.name.endsWith(e));
        if (matchedExt) {
          files.push({
            name: path.basename(entry.name, matchedExt),
            path: fullPath,
            relativePath: path.relative(rootDir, fullPath),
            filename: entry.name,
          });
        }
      }
    }

    return files;
  }

  /**
   * Scan a directory for files with specific extension(s) and check standalone attribute
   * @param {string} dir - Directory to scan
   * @param {string|Array<string>} ext - File extension(s) to match (e.g., '.md' or ['.md', '.xml'])
   * @param {string} [rootDir] - Original root directory (used internally for recursion)
   * @returns {Array} List of file info objects with standalone property
   */
  async scanDirectoryWithStandalone(dir, ext, rootDir = null) {
    rootDir = rootDir || dir;
    const files = [];

    if (!(await fs.pathExists(dir))) {
      return files;
    }

    // Normalize ext to array
    const extensions = Array.isArray(ext) ? ext : [ext];

    const entries = await fs.readdir(dir, { withFileTypes: true });

    for (const entry of entries) {
      const fullPath = path.join(dir, entry.name);

      if (entry.isDirectory()) {
        // Recursively scan subdirectories
        const subFiles = await this.scanDirectoryWithStandalone(fullPath, ext, rootDir);
        files.push(...subFiles);
      } else if (entry.isFile()) {
        // Check if file matches any of the extensions
        const matchedExt = extensions.find((e) => entry.name.endsWith(e));
        if (matchedExt) {
          // Read file content to check for standalone attribute
          // All non-internal files are considered standalone by default
          let standalone = true;
          try {
            const content = await fs.readFile(fullPath, 'utf8');

            // Skip internal/engine files (not user-facing)
            if (content.includes('internal="true"')) {
              continue;
            }

            // Check for explicit standalone: false
            if (entry.name.endsWith('.xml')) {
              // For XML files, check for standalone="false" attribute
              const tagMatch = content.match(/<(task|tool)[^>]*standalone="false"/);
              standalone = !tagMatch;
            } else if (entry.name.endsWith('.md')) {
              // For MD files, parse YAML frontmatter
              const frontmatterMatch = content.match(/^---\r?\n([\s\S]*?)\r?\n---/);
              if (frontmatterMatch) {
                try {
                  const frontmatter = yaml.parse(frontmatterMatch[1]);
                  standalone = frontmatter.standalone !== false && frontmatter.standalone !== 'false';
                } catch {
                  // If YAML parsing fails, default to standalone
                }
              }
              // No frontmatter means standalone (default)
            }
          } catch {
            // If we can't read the file, default to standalone
            standalone = true;
          }

          files.push({
            name: path.basename(entry.name, matchedExt),
            path: fullPath,
            relativePath: path.relative(rootDir, fullPath),
            filename: entry.name,
            standalone: standalone,
          });
        }
      }
    }

    return files;
  }

  /**
   * Process content for an IDE command/rule file created from an agent or task
   * @param {string} content - File content
   * @param {Object} metadata - File metadata
   * @param {string} projectDir - The actual project directory path
   * @returns {string} Processed content
   */
  processContent(content, metadata = {}, projectDir = null) {
    // Replace placeholders
    let processed = content;

    // Only replace {project-root} if a specific projectDir is provided
    // Otherwise leave the placeholder intact
    // Note: Don't add trailing slash - paths in source include leading slash
    if (projectDir) {
      processed = processed.replaceAll('{project-root}', projectDir);
    }
    processed = processed.replaceAll('{module}', metadata.module || 'core');
    processed = processed.replaceAll('{agent}', metadata.name || '');
    processed = processed.replaceAll('{task}', metadata.name || '');

    return processed;
  }

  /**
   * Ensure directory exists
   * @param {string} dirPath - Directory path
   */
  async ensureDir(dirPath) {
    await fs.ensureDir(dirPath);
  }

  /**
   * Write file with content (replaces _bmad placeholder)
   * @param {string} filePath - File path
   * @param {string} content - File content
   */
  async writeFile(filePath, content) {
    // Replace _bmad placeholder if present
    if (typeof content === 'string' && content.includes('_bmad')) {
      content = content.replaceAll('_bmad', this.bmadFolderName);
    }

    await this.ensureDir(path.dirname(filePath));
    await fs.writeFile(filePath, content, 'utf8');
  }

  /**
   * Copy file from source to destination (replaces _bmad placeholder in text files)
   * @param {string} source - Source file path
   * @param {string} dest - Destination file path
   */
  async copyFile(source, dest) {
    // List of text file extensions that should have placeholder replacement
    const textExtensions = ['.md', '.yaml', '.yml', '.txt', '.json', '.js', '.ts', '.html', '.css', '.sh', '.bat', '.csv'];
    const ext = path.extname(source).toLowerCase();

    await this.ensureDir(path.dirname(dest));

    // Check if this is a text file that might contain placeholders
    if (textExtensions.includes(ext)) {
      try {
        // Read the file content
        let content = await fs.readFile(source, 'utf8');

        // Replace _bmad placeholder with actual folder name
        if (content.includes('_bmad')) {
          content = content.replaceAll('_bmad', this.bmadFolderName);
        }

        // Write to dest with replaced content
        await fs.writeFile(dest, content, 'utf8');
      } catch {
        // If reading as text fails, fall back to regular copy
        await fs.copy(source, dest, { overwrite: true });
      }
    } else {
      // Binary file or other file type - just copy directly
      await fs.copy(source, dest, { overwrite: true });
    }
  }

  /**
   * Check if path exists
   * @param {string} pathToCheck - Path to check
   * @returns {boolean} True if path exists
   */
  async exists(pathToCheck) {
    return await fs.pathExists(pathToCheck);
  }

  /**
   * Alias for exists method
   * @param {string} pathToCheck - Path to check
   * @returns {boolean} True if path exists
   */
  async pathExists(pathToCheck) {
    return await fs.pathExists(pathToCheck);
  }

  /**
   * Read file content
   * @param {string} filePath - File path
   * @returns {string} File content
   */
  async readFile(filePath) {
    return await fs.readFile(filePath, 'utf8');
  }

  /**
   * Format name as title
   * @param {string} name - Name to format
   * @returns {string} Formatted title
   */
  formatTitle(name) {
    return name
      .split('-')
      .map((word) => word.charAt(0).toUpperCase() + word.slice(1))
      .join(' ');
  }

  /**
   * Flatten a relative path to a single filename for flat slash command naming
   * @deprecated Use toColonPath() or toDashPath() from shared/path-utils.js instead
   * Example: 'module/agents/name.md' -> 'bmad-module-agents-name.md'
   * Used by IDEs that ignore directory structure for slash commands (e.g., Antigravity, Codex)
   * @param {string} relativePath - Relative path to flatten
   * @returns {string} Flattened filename with 'bmad-' prefix
   */
  flattenFilename(relativePath) {
    const sanitized = relativePath.replaceAll(/[/\\]/g, '-');
    return `bmad-${sanitized}`;
  }

  /**
   * Create agent configuration file
   * @param {string} bmadDir - BMAD installation directory
   * @param {Object} agent - Agent information
   */
  async createAgentConfig(bmadDir, agent) {
    const agentConfigDir = path.join(bmadDir, '_config', 'agents');
    await this.ensureDir(agentConfigDir);

    // Load agent config template
    const templatePath = getSourcePath('utility', 'models', 'agent-config-template.md');
    const templateContent = await this.readFile(templatePath);

    const configContent = `# Agent Config: ${agent.name}

${templateContent}`;

    const configPath = path.join(agentConfigDir, `${agent.module}-${agent.name}.md`);
    await this.writeFile(configPath, configContent);
  }
}

module.exports = { BaseIdeSetup };
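The `processContent()` pass above is pure string substitution over a fixed set of placeholders. A small standalone sketch of the same replacements (placeholder names taken from the method; the sample path and metadata are illustrative):

```javascript
// Standalone sketch of the placeholder substitution done by processContent().
function substitute(content, metadata = {}, projectDir = null) {
  let processed = content;
  // {project-root} is only replaced when a concrete project directory is known.
  if (projectDir) {
    processed = processed.replaceAll('{project-root}', projectDir);
  }
  processed = processed.replaceAll('{module}', metadata.module || 'core');
  processed = processed.replaceAll('{agent}', metadata.name || '');
  return processed;
}

const out = substitute('{project-root}/bmad/{module}/agents/{agent}.md',
  { module: 'bmm', name: 'pm' }, '/home/user/proj');
console.log(out); // → /home/user/proj/bmad/bmm/agents/pm.md
```

Without a `projectDir`, `{project-root}` is left intact so a later pass can resolve it.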
@@ -1,100 +0,0 @@
const fs = require('fs-extra');
const path = require('node:path');
const yaml = require('yaml');

const PLATFORM_CODES_PATH = path.join(__dirname, 'platform-codes.yaml');

let _cachedPlatformCodes = null;

/**
 * Load the platform codes configuration from YAML
 * @returns {Object} Platform codes configuration
 */
async function loadPlatformCodes() {
  if (_cachedPlatformCodes) {
    return _cachedPlatformCodes;
  }

  if (!(await fs.pathExists(PLATFORM_CODES_PATH))) {
    throw new Error(`Platform codes configuration not found at: ${PLATFORM_CODES_PATH}`);
  }

  const content = await fs.readFile(PLATFORM_CODES_PATH, 'utf8');
  _cachedPlatformCodes = yaml.parse(content);
  return _cachedPlatformCodes;
}

/**
 * Get platform information by code
 * @param {string} platformCode - Platform code (e.g., 'claude-code', 'cursor')
 * @returns {Object|null} Platform info or null if not found
 */
function getPlatformInfo(platformCode) {
  if (!_cachedPlatformCodes) {
    throw new Error('Platform codes not loaded. Call loadPlatformCodes() first.');
  }

  return _cachedPlatformCodes.platforms[platformCode] || null;
}

/**
 * Get all preferred platforms
 * @returns {Promise<Array>} Array of preferred platform codes
 */
async function getPreferredPlatforms() {
  const config = await loadPlatformCodes();
  return Object.entries(config.platforms)
    .filter(([_, info]) => info.preferred)
    .map(([code, _]) => code);
}

/**
 * Get all platform codes by category
 * @param {string} category - Category to filter by (ide, cli, tool, etc.)
 * @returns {Promise<Array>} Array of platform codes in the category
 */
async function getPlatformsByCategory(category) {
  const config = await loadPlatformCodes();
  return Object.entries(config.platforms)
    .filter(([_, info]) => info.category === category)
    .map(([code, _]) => code);
}

/**
 * Get all platforms with installer config
 * @returns {Promise<Array>} Array of platform codes that have installer config
 */
async function getConfigDrivenPlatforms() {
  const config = await loadPlatformCodes();
  return Object.entries(config.platforms)
    .filter(([_, info]) => info.installer)
    .map(([code, _]) => code);
}

/**
 * Get platforms that use custom installers (no installer config)
 * @returns {Promise<Array>} Array of platform codes with custom installers
 */
async function getCustomInstallerPlatforms() {
  const config = await loadPlatformCodes();
  return Object.entries(config.platforms)
    .filter(([_, info]) => !info.installer)
    .map(([code, _]) => code);
}

/**
 * Clear the cached platform codes (useful for testing)
 */
function clearCache() {
  _cachedPlatformCodes = null;
}

module.exports = {
  loadPlatformCodes,
  getPlatformInfo,
  getPreferredPlatforms,
  getPlatformsByCategory,
  getConfigDrivenPlatforms,
  getCustomInstallerPlatforms,
  clearCache,
};
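The accessor functions above are all the same shape: filter the parsed `platforms` map by a predicate, then keep the codes. A sketch with an inline stand-in for the config (three entries taken from platform-codes.yaml, not the full file):

```javascript
// Stand-in platforms map mirroring the shape of platform-codes.yaml.
const platforms = {
  'claude-code': { name: 'Claude Code', preferred: true, category: 'cli' },
  cline: { name: 'Cline', preferred: false, category: 'ide' },
  codex: { name: 'Codex', preferred: false, category: 'cli' },
};

// Same filter logic as getPreferredPlatforms(), minus the YAML loading/caching.
const preferred = Object.entries(platforms)
  .filter(([, info]) => info.preferred)
  .map(([code]) => code);

console.log(preferred); // → [ 'claude-code' ]
```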
@@ -1,341 +0,0 @@
# BMAD Platform Codes Configuration
# Central configuration for all platform/IDE codes used in the BMAD system
#
# This file defines:
#   1. Platform metadata (name, preferred status, category, description)
#   2. Installer configuration (target directories, templates, artifact types)
#
# Format:
#   code: Platform identifier used internally
#   name: Display name shown to users
#   preferred: Whether this platform is shown as a recommended option on install
#   category: Type of platform (ide, cli, tool, service)
#   description: Brief description of the platform
#   installer: Installation configuration (optional - omit for custom installers)

platforms:
  antigravity:
    name: "Google Antigravity"
    preferred: false
    category: ide
    description: "Google's AI development environment"
    installer:
      legacy_targets:
        - .agent/workflows
      target_dir: .agent/skills
      template_type: antigravity
      skill_format: true

  auggie:
    name: "Auggie"
    preferred: false
    category: cli
    description: "AI development tool"
    installer:
      legacy_targets:
        - .augment/commands
      target_dir: .augment/skills
      template_type: default
      skill_format: true

  claude-code:
    name: "Claude Code"
    preferred: true
    category: cli
    description: "Anthropic's official CLI for Claude"
    installer:
      legacy_targets:
        - .claude/commands
      target_dir: .claude/skills
      template_type: default
      skill_format: true
      ancestor_conflict_check: true

  cline:
    name: "Cline"
    preferred: false
    category: ide
    description: "AI coding assistant"
    installer:
      legacy_targets:
        - .clinerules/workflows
      target_dir: .cline/skills
      template_type: default
      skill_format: true

  codex:
    name: "Codex"
    preferred: false
    category: cli
    description: "OpenAI Codex integration"
    installer:
      legacy_targets:
        - .codex/prompts
        - ~/.codex/prompts
      target_dir: .agents/skills
      template_type: default
      skill_format: true
      ancestor_conflict_check: true
      artifact_types: [agents, workflows, tasks]

  codebuddy:
    name: "CodeBuddy"
    preferred: false
    category: ide
    description: "Tencent Cloud Code Assistant - AI-powered coding companion"
    installer:
      legacy_targets:
        - .codebuddy/commands
      target_dir: .codebuddy/skills
      template_type: default
      skill_format: true

  crush:
    name: "Crush"
    preferred: false
    category: ide
    description: "AI development assistant"
    installer:
      legacy_targets:
        - .crush/commands
      target_dir: .crush/skills
      template_type: default
      skill_format: true

  cursor:
    name: "Cursor"
    preferred: true
    category: ide
    description: "AI-first code editor"
    installer:
      legacy_targets:
        - .cursor/commands
      target_dir: .cursor/skills
      template_type: default
      skill_format: true

  gemini:
    name: "Gemini CLI"
    preferred: false
    category: cli
    description: "Google's CLI for Gemini"
    installer:
      legacy_targets:
        - .gemini/commands
      target_dir: .gemini/skills
      template_type: default
      skill_format: true

  github-copilot:
    name: "GitHub Copilot"
    preferred: false
    category: ide
    description: "GitHub's AI pair programmer"
    installer:
      legacy_targets:
        - .github/agents
        - .github/prompts
      target_dir: .github/skills
      template_type: default
      skill_format: true

  iflow:
    name: "iFlow"
    preferred: false
    category: ide
    description: "AI workflow automation"
    installer:
      legacy_targets:
        - .iflow/commands
      target_dir: .iflow/skills
      template_type: default
      skill_format: true

  kilo:
    name: "KiloCoder"
    preferred: false
    category: ide
    description: "AI coding platform"
    suspended: "Kilo Code does not yet support the Agent Skills standard. Support is paused until they implement it. See https://github.com/kilocode/kilo-code/issues for updates."
    installer:
      legacy_targets:
        - .kilocode/workflows
      target_dir: .kilocode/skills
      template_type: default
      skill_format: true

  kiro:
    name: "Kiro"
    preferred: false
    category: ide
    description: "Amazon's AI-powered IDE"
    installer:
      legacy_targets:
        - .kiro/steering
      target_dir: .kiro/skills
      template_type: kiro
      skill_format: true

  ona:
    name: "Ona"
    preferred: false
    category: ide
    description: "Ona AI development environment"
    installer:
      target_dir: .ona/skills
      template_type: default
      skill_format: true

  opencode:
    name: "OpenCode"
    preferred: false
    category: ide
    description: "OpenCode terminal coding assistant"
    installer:
      legacy_targets:
        - .opencode/agents
        - .opencode/commands
        - .opencode/agent
        - .opencode/command
      target_dir: .opencode/skills
      template_type: opencode
      skill_format: true
      ancestor_conflict_check: true

  pi:
    name: "Pi"
    preferred: false
    category: cli
    description: "Provider-agnostic terminal-native AI coding agent"
    installer:
      target_dir: .pi/skills
      template_type: default
      skill_format: true

  qoder:
    name: "Qoder"
    preferred: false
    category: ide
    description: "Qoder AI coding assistant"
    installer:
      target_dir: .qoder/skills
      template_type: default
      skill_format: true

  qwen:
    name: "QwenCoder"
    preferred: false
    category: ide
    description: "Qwen AI coding assistant"
    installer:
      legacy_targets:
        - .qwen/commands
      target_dir: .qwen/skills
      template_type: default
      skill_format: true

  roo:
    name: "Roo Code"
    preferred: false
    category: ide
    description: "Enhanced Cline fork"
    installer:
      legacy_targets:
        - .roo/commands
      target_dir: .roo/skills
      template_type: default
      skill_format: true

  rovo-dev:
    name: "Rovo Dev"
    preferred: false
    category: ide
    description: "Atlassian's Rovo development environment"
    installer:
      legacy_targets:
        - .rovodev/workflows
      target_dir: .rovodev/skills
      template_type: default
      skill_format: true

  trae:
    name: "Trae"
    preferred: false
    category: ide
    description: "AI coding tool"
    installer:
      legacy_targets:
        - .trae/rules
      target_dir: .trae/skills
      template_type: default
      skill_format: true

  windsurf:
    name: "Windsurf"
    preferred: false
    category: ide
    description: "AI-powered IDE with cascade flows"
    installer:
      legacy_targets:
        - .windsurf/workflows
      target_dir: .windsurf/skills
      template_type: windsurf
      skill_format: true

# ============================================================================
# Installer Config Schema
# ============================================================================
#
# installer:
#   target_dir: string                          # Directory where artifacts are installed
#   template_type: string                       # Default template type to use
#   header_template: string (optional)          # Override for header/frontmatter template
#   body_template: string (optional)            # Override for body/content template
#   legacy_targets: array (optional)            # Old target dirs to clean up on reinstall (migration)
#     - string                                  # Relative path, e.g. .opencode/agent
#   targets: array (optional)                   # For multi-target installations
#     - target_dir: string
#       template_type: string
#       artifact_types: [agents, workflows, tasks, tools]
#   artifact_types: array (optional)            # Filter which artifacts to install (default: all)
#   skip_existing: boolean (optional)           # Skip files that already exist (default: false)
#   skill_format: boolean (optional)            # Use directory-per-skill output: <name>/SKILL.md
#                                               #   with clean frontmatter (name + description, unquoted)
#   ancestor_conflict_check: boolean (optional) # Refuse install when ancestor dir has BMAD files
#                                               #   in the same target_dir (for IDEs that inherit
#                                               #   skills from parent directories)

# ============================================================================
# Platform Categories
# ============================================================================

categories:
  ide:
    name: "Integrated Development Environment"
    description: "Full-featured code editors with AI assistance"

  cli:
    name: "Command Line Interface"
    description: "Terminal-based tools"

  tool:
    name: "Development Tool"
    description: "Standalone development utilities"

  service:
    name: "Cloud Service"
    description: "Cloud-based development platforms"

  extension:
    name: "Editor Extension"
    description: "Plugins for existing editors"

# ============================================================================
# Naming Conventions and Rules
# ============================================================================

conventions:
  code_format: "lowercase-kebab-case"
  name_format: "Title Case"
  max_code_length: 20
  allowed_characters: "a-z0-9-"
@@ -1 +0,0 @@
default-workflow-yaml.md
@@ -1,136 +0,0 @@
const fs = require('fs-extra');
const path = require('node:path');
const yaml = require('yaml');
const prompts = require('../../../lib/prompts');

/**
 * Manages external official modules defined in external-official-modules.yaml
 * These are modules hosted in external repositories that can be installed
 *
 * @class ExternalModuleManager
 */
class ExternalModuleManager {
  constructor() {
    this.externalModulesConfigPath = path.join(__dirname, '../../../external-official-modules.yaml');
    this.cachedModules = null;
  }

  /**
   * Load and parse the external-official-modules.yaml file
   * @returns {Object} Parsed YAML content with modules object
   */
  async loadExternalModulesConfig() {
    if (this.cachedModules) {
      return this.cachedModules;
    }

    try {
      const content = await fs.readFile(this.externalModulesConfigPath, 'utf8');
      const config = yaml.parse(content);
      this.cachedModules = config;
      return config;
    } catch (error) {
      await prompts.log.warn(`Failed to load external modules config: ${error.message}`);
      return { modules: {} };
    }
  }

  /**
   * Get list of available external modules
   * @returns {Array<Object>} Array of module info objects
   */
  async listAvailable() {
    const config = await this.loadExternalModulesConfig();
    const modules = [];

    for (const [key, moduleConfig] of Object.entries(config.modules || {})) {
      modules.push({
        key,
        url: moduleConfig.url,
        moduleDefinition: moduleConfig['module-definition'],
        code: moduleConfig.code,
        name: moduleConfig.name,
        header: moduleConfig.header,
        subheader: moduleConfig.subheader,
        description: moduleConfig.description || '',
        defaultSelected: moduleConfig.defaultSelected === true,
        type: moduleConfig.type || 'community', // bmad-org or community
        npmPackage: moduleConfig.npmPackage || null, // Include npm package name
        isExternal: true,
      });
    }

    return modules;
  }

  /**
   * Get module info by code
   * @param {string} code - The module code (e.g., 'cis')
   * @returns {Object|null} Module info or null if not found
   */
  async getModuleByCode(code) {
    const modules = await this.listAvailable();
    return modules.find((m) => m.code === code) || null;
  }

  /**
   * Get module info by key
   * @param {string} key - The module key (e.g., 'bmad-creative-intelligence-suite')
   * @returns {Object|null} Module info or null if not found
   */
  async getModuleByKey(key) {
    const config = await this.loadExternalModulesConfig();
    const moduleConfig = config.modules?.[key];

    if (!moduleConfig) {
      return null;
    }

    return {
      key,
      url: moduleConfig.url,
      moduleDefinition: moduleConfig['module-definition'],
      code: moduleConfig.code,
      name: moduleConfig.name,
      header: moduleConfig.header,
      subheader: moduleConfig.subheader,
      description: moduleConfig.description || '',
      defaultSelected: moduleConfig.defaultSelected === true,
      type: moduleConfig.type || 'community', // bmad-org or community
      npmPackage: moduleConfig.npmPackage || null, // Include npm package name
      isExternal: true,
    };
  }

  /**
   * Check if a module code exists in external modules
   * @param {string} code - The module code to check
   * @returns {boolean} True if the module exists
   */
  async hasModule(code) {
    const module = await this.getModuleByCode(code);
    return module !== null;
  }

  /**
   * Get the URL for a module by code
   * @param {string} code - The module code
   * @returns {string|null} The URL or null if not found
   */
  async getModuleUrl(code) {
    const module = await this.getModuleByCode(code);
    return module ? module.url : null;
  }

  /**
   * Get the module definition path for a module by code
   * @param {string} code - The module code
   * @returns {string|null} The module definition path or null if not found
   */
  async getModuleDefinition(code) {
    const module = await this.getModuleByCode(code);
    return module ? module.moduleDefinition : null;
  }
}

module.exports = { ExternalModuleManager };
@@ -1,928 +0,0 @@
const path = require('node:path');
|
||||
const fs = require('fs-extra');
|
||||
const yaml = require('yaml');
|
||||
const prompts = require('../../../lib/prompts');
|
||||
const { getProjectRoot, getSourcePath, getModulePath } = require('../../../lib/project-root');
|
||||
const { ExternalModuleManager } = require('./external-manager');
|
||||
const { BMAD_FOLDER_NAME } = require('../ide/shared/path-utils');
|
||||
|
||||
/**
|
||||
* Manages the installation, updating, and removal of BMAD modules.
|
||||
* Handles module discovery, dependency resolution, and configuration processing.
|
||||
*
|
||||
* @class ModuleManager
|
||||
* @requires fs-extra
|
||||
* @requires yaml
|
||||
* @requires prompts
|
||||
*
|
||||
* @example
|
||||
* const manager = new ModuleManager();
|
||||
* const modules = await manager.listAvailable();
|
||||
* await manager.install('core-module', '/path/to/bmad');
|
||||
*/
|
||||
class ModuleManager {
|
||||
constructor(options = {}) {
|
||||
this.bmadFolderName = BMAD_FOLDER_NAME; // Default, can be overridden
|
||||
this.customModulePaths = new Map(); // Initialize custom module paths
|
||||
this.externalModuleManager = new ExternalModuleManager(); // For external official modules
|
||||
}
|
||||
|
||||
/**
|
||||
* Set the bmad folder name for placeholder replacement
|
||||
* @param {string} bmadFolderName - The bmad folder name
|
||||
*/
|
||||
setBmadFolderName(bmadFolderName) {
|
||||
this.bmadFolderName = bmadFolderName;
|
||||
}
|
||||
|
||||
/**
|
||||
* Set the core configuration for access during module installation
|
||||
* @param {Object} coreConfig - Core configuration object
|
||||
*/
|
||||
setCoreConfig(coreConfig) {
|
||||
this.coreConfig = coreConfig;
|
||||
}
|
||||
|
||||
/**
|
||||
* Set custom module paths for priority lookup
|
||||
* @param {Map<string, string>} customModulePaths - Map of module ID to source path
|
||||
*/
|
||||
setCustomModulePaths(customModulePaths) {
|
||||
this.customModulePaths = customModulePaths;
|
||||
}
|
||||
|
||||
/**
|
||||
* Copy a file to the target location
|
||||
* @param {string} sourcePath - Source file path
|
||||
* @param {string} targetPath - Target file path
|
||||
* @param {boolean} overwrite - Whether to overwrite existing files (default: true)
|
||||
*/
|
||||
async copyFileWithPlaceholderReplacement(sourcePath, targetPath, overwrite = true) {
|
||||
await fs.copy(sourcePath, targetPath, { overwrite });
|
||||
}
|
||||
|
||||
/**
|
||||
* Copy a directory recursively
|
||||
* @param {string} sourceDir - Source directory path
|
||||
* @param {string} targetDir - Target directory path
|
||||
* @param {boolean} overwrite - Whether to overwrite existing files (default: true)
|
||||
*/
|
||||
async copyDirectoryWithPlaceholderReplacement(sourceDir, targetDir, overwrite = true) {
|
||||
await fs.ensureDir(targetDir);
|
||||
const entries = await fs.readdir(sourceDir, { withFileTypes: true });
|
||||
|
||||
for (const entry of entries) {
|
||||
const sourcePath = path.join(sourceDir, entry.name);
|
||||
const targetPath = path.join(targetDir, entry.name);
|
||||
|
||||
if (entry.isDirectory()) {
|
||||
await this.copyDirectoryWithPlaceholderReplacement(sourcePath, targetPath, overwrite);
|
||||
} else {
|
||||
await this.copyFileWithPlaceholderReplacement(sourcePath, targetPath, overwrite);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* List all available modules (excluding core which is always installed)
|
||||
* bmm is the only built-in module, directly under src/bmm-skills
|
||||
* All other modules come from external-official-modules.yaml
|
||||
* @returns {Object} Object with modules array and customModules array
|
||||
*/
|
||||
async listAvailable() {
|
||||
const modules = [];
|
||||
const customModules = [];
|
||||
|
||||
// Add built-in bmm module (directly under src/bmm-skills)
|
||||
const bmmPath = getSourcePath('bmm-skills');
|
||||
if (await fs.pathExists(bmmPath)) {
|
||||
const bmmInfo = await this.getModuleInfo(bmmPath, 'bmm', 'src/bmm-skills');
|
||||
if (bmmInfo) {
|
||||
modules.push(bmmInfo);
|
||||
}
|
||||
}
|
||||
|
||||
// Check for cached custom modules in _config/custom/
|
||||
if (this.bmadDir) {
|
||||
const customCacheDir = path.join(this.bmadDir, '_config', 'custom');
|
||||
if (await fs.pathExists(customCacheDir)) {
|
||||
const cacheEntries = await fs.readdir(customCacheDir, { withFileTypes: true });
|
||||
for (const entry of cacheEntries) {
|
||||
if (entry.isDirectory()) {
|
||||
const cachePath = path.join(customCacheDir, entry.name);
|
||||
const moduleInfo = await this.getModuleInfo(cachePath, entry.name, '_config/custom');
|
||||
if (moduleInfo && !modules.some((m) => m.id === moduleInfo.id) && !customModules.some((m) => m.id === moduleInfo.id)) {
|
||||
moduleInfo.isCustom = true;
|
||||
moduleInfo.fromCache = true;
|
||||
customModules.push(moduleInfo);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return { modules, customModules };
|
||||
}
|
||||
|
||||
/**
|
||||
* Get module information from a module path
|
||||
* @param {string} modulePath - Path to the module directory
|
||||
* @param {string} defaultName - Default name for the module
|
||||
* @param {string} sourceDescription - Description of where the module was found
|
||||
* @returns {Object|null} Module info or null if not a valid module
|
||||
*/
|
||||
async getModuleInfo(modulePath, defaultName, sourceDescription) {
|
||||
// Check for module structure (module.yaml OR custom.yaml)
|
||||
const moduleConfigPath = path.join(modulePath, 'module.yaml');
|
||||
const rootCustomConfigPath = path.join(modulePath, 'custom.yaml');
|
||||
let configPath = null;
|
||||
|
||||
if (await fs.pathExists(moduleConfigPath)) {
|
||||
configPath = moduleConfigPath;
|
||||
} else if (await fs.pathExists(rootCustomConfigPath)) {
|
||||
configPath = rootCustomConfigPath;
|
||||
}
|
||||
|
||||
// Skip if this doesn't look like a module
|
||||
if (!configPath) {
|
||||
return null;
|
||||
}
|
||||
|
||||
// Mark as custom if it's using custom.yaml OR if it's outside src/bmm or src/core
|
||||
const isCustomSource =
|
||||
sourceDescription !== 'src/bmm-skills' && sourceDescription !== 'src/core-skills' && sourceDescription !== 'src/modules';
|
||||
const moduleInfo = {
|
||||
id: defaultName,
|
||||
path: modulePath,
|
||||
name: defaultName
|
||||
.split('-')
|
||||
.map((word) => word.charAt(0).toUpperCase() + word.slice(1))
|
||||
.join(' '),
|
||||
description: 'BMAD Module',
|
||||
version: '5.0.0',
|
||||
source: sourceDescription,
|
||||
isCustom: configPath === rootCustomConfigPath || isCustomSource,
|
||||
};
|
||||
|
||||
// Read module config for metadata
|
||||
try {
|
||||
const configContent = await fs.readFile(configPath, 'utf8');
|
||||
const config = yaml.parse(configContent);
|
||||
|
||||
// Use the code property as the id if available
|
||||
if (config.code) {
|
||||
moduleInfo.id = config.code;
|
||||
}
|
||||
|
||||
moduleInfo.name = config.name || moduleInfo.name;
|
||||
moduleInfo.description = config.description || moduleInfo.description;
|
||||
moduleInfo.version = config.version || moduleInfo.version;
|
||||
moduleInfo.dependencies = config.dependencies || [];
|
||||
moduleInfo.defaultSelected = config.default_selected === undefined ? false : config.default_selected;
|
||||
} catch (error) {
|
||||
await prompts.log.warn(`Failed to read config for ${defaultName}: ${error.message}`);
|
||||
}
|
||||
|
||||
return moduleInfo;
|
||||
}
|
||||
|
||||
/**
|
||||
* Find the source path for a module by searching all possible locations
|
||||
* @param {string} moduleCode - Code of the module to find (from module.yaml)
|
||||
* @returns {string|null} Path to the module source or null if not found
|
||||
*/
|
||||
async findModuleSource(moduleCode, options = {}) {
|
||||
const projectRoot = getProjectRoot();
|
||||
|
||||
// First check custom module paths if they exist
|
||||
if (this.customModulePaths && this.customModulePaths.has(moduleCode)) {
|
||||
return this.customModulePaths.get(moduleCode);
|
||||
}
|
||||
|
||||
// Check for built-in bmm module (directly under src/bmm-skills)
|
||||
if (moduleCode === 'bmm') {
|
||||
const bmmPath = getSourcePath('bmm-skills');
|
||||
if (await fs.pathExists(bmmPath)) {
|
||||
return bmmPath;
|
||||
}
|
||||
}
|
||||
|
||||
// Check external official modules
|
||||
const externalSource = await this.findExternalModuleSource(moduleCode, options);
|
||||
if (externalSource) {
|
||||
return externalSource;
|
||||
}
|
||||
|
||||
return null;
|
||||
}
|
||||
|
||||
/**
|
||||
* Check if a module is an external official module
|
||||
* @param {string} moduleCode - Code of the module to check
|
||||
* @returns {boolean} True if the module is external
|
||||
*/
|
||||
async isExternalModule(moduleCode) {
|
||||
return await this.externalModuleManager.hasModule(moduleCode);
|
||||
}
|
||||
|
||||
/**
|
||||
* Get the cache directory for external modules
|
||||
* @returns {string} Path to the external modules cache directory
|
||||
*/
|
||||
getExternalCacheDir() {
|
||||
const os = require('node:os');
|
||||
const cacheDir = path.join(os.homedir(), '.bmad', 'cache', 'external-modules');
|
||||
return cacheDir;
|
||||
}
|
||||
|
||||
/**
|
||||
* Clone an external module repository to cache
|
||||
* @param {string} moduleCode - Code of the external module
|
||||
* @returns {string} Path to the cloned repository
|
||||
*/
|
||||
async cloneExternalModule(moduleCode, options = {}) {
|
||||
const { execSync } = require('node:child_process');
|
||||
const moduleInfo = await this.externalModuleManager.getModuleByCode(moduleCode);
|
||||
|
||||
if (!moduleInfo) {
|
||||
throw new Error(`External module '${moduleCode}' not found in external-official-modules.yaml`);
|
||||
}
|
||||
|
||||
const cacheDir = this.getExternalCacheDir();
|
||||
const moduleCacheDir = path.join(cacheDir, moduleCode);
|
||||
const silent = options.silent || false;
|
||||
|
||||
// Create cache directory if it doesn't exist
|
||||
await fs.ensureDir(cacheDir);
|
||||
|
||||
// Helper to create a spinner or a no-op when silent
|
||||
const createSpinner = async () => {
|
||||
if (silent) {
|
||||
return {
|
||||
start() {},
|
||||
stop() {},
|
||||
error() {},
|
||||
message() {},
|
||||
cancel() {},
|
||||
clear() {},
|
||||
get isSpinning() {
|
||||
return false;
|
||||
},
|
||||
get isCancelled() {
|
||||
return false;
|
||||
},
|
||||
};
|
||||
}
|
||||
return await prompts.spinner();
|
||||
};
|
||||
|
||||
// Track if we need to install dependencies
|
||||
let needsDependencyInstall = false;
|
||||
let wasNewClone = false;
|
||||
|
||||
// Check if already cloned
|
||||
if (await fs.pathExists(moduleCacheDir)) {
|
||||
// Try to update if it's a git repo
|
||||
const fetchSpinner = await createSpinner();
|
||||
fetchSpinner.start(`Fetching ${moduleInfo.name}...`);
|
||||
try {
|
||||
const currentRef = execSync('git rev-parse HEAD', { cwd: moduleCacheDir, stdio: 'pipe' }).toString().trim();
|
||||
// Fetch and reset to remote - works better with shallow clones than pull
|
||||
execSync('git fetch origin --depth 1', {
|
||||
cwd: moduleCacheDir,
|
||||
stdio: ['ignore', 'pipe', 'pipe'],
|
||||
env: { ...process.env, GIT_TERMINAL_PROMPT: '0' },
|
||||
});
|
||||
execSync('git reset --hard origin/HEAD', {
|
||||
cwd: moduleCacheDir,
|
||||
stdio: ['ignore', 'pipe', 'pipe'],
|
||||
env: { ...process.env, GIT_TERMINAL_PROMPT: '0' },
|
||||
});
|
||||
const newRef = execSync('git rev-parse HEAD', { cwd: moduleCacheDir, stdio: 'pipe' }).toString().trim();
|
||||
|
||||
fetchSpinner.stop(`Fetched ${moduleInfo.name}`);
|
||||
// Force dependency install if we got new code
|
||||
if (currentRef !== newRef) {
|
||||
needsDependencyInstall = true;
|
||||
}
|
||||
} catch {
|
||||
fetchSpinner.error(`Fetch failed, re-downloading ${moduleInfo.name}`);
|
||||
// If update fails, remove and re-clone
|
||||
await fs.remove(moduleCacheDir);
|
||||
wasNewClone = true;
|
||||
}
|
||||
} else {
|
||||
wasNewClone = true;
|
||||
}
|
||||
|
||||
// Clone if not exists or was removed
|
||||
if (wasNewClone) {
|
||||
const fetchSpinner = await createSpinner();
|
||||
fetchSpinner.start(`Fetching ${moduleInfo.name}...`);
|
||||
try {
|
||||
execSync(`git clone --depth 1 "${moduleInfo.url}" "${moduleCacheDir}"`, {
|
||||
stdio: ['ignore', 'pipe', 'pipe'],
|
||||
env: { ...process.env, GIT_TERMINAL_PROMPT: '0' },
|
||||
});
|
||||
fetchSpinner.stop(`Fetched ${moduleInfo.name}`);
|
||||
} catch (error) {
|
||||
fetchSpinner.error(`Failed to fetch ${moduleInfo.name}`);
|
||||
throw new Error(`Failed to clone external module '${moduleCode}': ${error.message}`);
|
||||
}
|
||||
}
|
||||
|
||||
// Install dependencies if package.json exists
|
||||
const packageJsonPath = path.join(moduleCacheDir, 'package.json');
|
||||
const nodeModulesPath = path.join(moduleCacheDir, 'node_modules');
|
||||
if (await fs.pathExists(packageJsonPath)) {
|
||||
// Install if node_modules doesn't exist, or if package.json is newer (dependencies changed)
|
||||
const nodeModulesMissing = !(await fs.pathExists(nodeModulesPath));
|
||||
|
||||
// Force install if we updated or cloned new
|
||||
if (needsDependencyInstall || wasNewClone || nodeModulesMissing) {
|
||||
const installSpinner = await createSpinner();
|
||||
installSpinner.start(`Installing dependencies for ${moduleInfo.name}...`);
|
||||
try {
|
||||
execSync('npm install --omit=dev --no-audit --no-fund --no-progress --legacy-peer-deps', {
|
||||
cwd: moduleCacheDir,
|
||||
stdio: ['ignore', 'pipe', 'pipe'],
|
||||
timeout: 120_000, // 2 minute timeout
|
||||
});
|
||||
installSpinner.stop(`Installed dependencies for ${moduleInfo.name}`);
|
||||
} catch (error) {
|
||||
installSpinner.error(`Failed to install dependencies for ${moduleInfo.name}`);
|
||||
if (!silent) await prompts.log.warn(` ${error.message}`);
|
||||
}
|
||||
} else {
|
||||
// Check if package.json is newer than node_modules
|
||||
let packageJsonNewer = false;
|
||||
try {
|
||||
const packageStats = await fs.stat(packageJsonPath);
|
||||
const nodeModulesStats = await fs.stat(nodeModulesPath);
|
||||
packageJsonNewer = packageStats.mtime > nodeModulesStats.mtime;
|
||||
} catch {
|
||||
// If stat fails, assume we need to install
|
||||
packageJsonNewer = true;
|
||||
}
|
||||
|
||||
if (packageJsonNewer) {
|
||||
const installSpinner = await createSpinner();
|
||||
installSpinner.start(`Installing dependencies for ${moduleInfo.name}...`);
|
||||
try {
|
||||
execSync('npm install --omit=dev --no-audit --no-fund --no-progress --legacy-peer-deps', {
|
||||
cwd: moduleCacheDir,
|
||||
stdio: ['ignore', 'pipe', 'pipe'],
|
||||
timeout: 120_000, // 2 minute timeout
|
||||
});
|
||||
installSpinner.stop(`Installed dependencies for ${moduleInfo.name}`);
|
||||
} catch (error) {
|
||||
installSpinner.error(`Failed to install dependencies for ${moduleInfo.name}`);
|
||||
if (!silent) await prompts.log.warn(` ${error.message}`);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return moduleCacheDir;
|
||||
}
|
||||
|
||||
/**
|
||||
* Find the source path for an external module
|
||||
* @param {string} moduleCode - Code of the external module
|
||||
* @returns {string|null} Path to the module source or null if not found
|
||||
*/
|
||||
async findExternalModuleSource(moduleCode, options = {}) {
|
||||
const moduleInfo = await this.externalModuleManager.getModuleByCode(moduleCode);
|
||||
|
||||
if (!moduleInfo) {
|
||||
return null;
|
||||
}
|
||||
|
||||
// Clone the external module repo
|
||||
const cloneDir = await this.cloneExternalModule(moduleCode, options);
|
||||
|
||||
    // The module-definition specifies the path to module.yaml relative to repo root
    // We need to return the directory containing module.yaml
    const moduleDefinitionPath = moduleInfo.moduleDefinition; // e.g., 'src/module.yaml'
    const moduleDir = path.dirname(path.join(cloneDir, moduleDefinitionPath));

    return moduleDir;
  }

  /**
   * Install a module
   * @param {string} moduleName - Code of the module to install (from module.yaml)
   * @param {string} bmadDir - Target bmad directory
   * @param {Function} fileTrackingCallback - Optional callback to track installed files
   * @param {Object} options - Additional installation options
   * @param {Array<string>} options.installedIDEs - Array of IDE codes that were installed
   * @param {Object} options.moduleConfig - Module configuration from config collector
   * @param {Object} options.logger - Logger instance for output
   */
  async install(moduleName, bmadDir, fileTrackingCallback = null, options = {}) {
    const sourcePath = await this.findModuleSource(moduleName, { silent: options.silent });
    const targetPath = path.join(bmadDir, moduleName);

    // Check if source module exists
    if (!sourcePath) {
      // Provide a more user-friendly error message
      throw new Error(
        `Source for module '${moduleName}' is not available. It will be retained but cannot be updated without its source files.`,
      );
    }

    // Check if this is a custom module and read its custom.yaml values
    let customConfig = null;
    const rootCustomConfigPath = path.join(sourcePath, 'custom.yaml');

    if (await fs.pathExists(rootCustomConfigPath)) {
      try {
        const customContent = await fs.readFile(rootCustomConfigPath, 'utf8');
        customConfig = yaml.parse(customContent);
      } catch (error) {
        await prompts.log.warn(`Failed to read custom.yaml for ${moduleName}: ${error.message}`);
      }
    }

    // If this is a custom module, merge its values into the module config
    if (customConfig) {
      options.moduleConfig = { ...options.moduleConfig, ...customConfig };
      if (options.logger) {
        await options.logger.log(`  Merged custom configuration for ${moduleName}`);
      }
    }

    // Check if already installed
    if (await fs.pathExists(targetPath)) {
      await fs.remove(targetPath);
    }

    // Copy module files with filtering
    await this.copyModuleWithFiltering(sourcePath, targetPath, fileTrackingCallback, options.moduleConfig);

    // Create directories declared in module.yaml (unless explicitly skipped)
    if (!options.skipModuleInstaller) {
      await this.createModuleDirectories(moduleName, bmadDir, options);
    }

    // Capture version info for manifest
    const { Manifest } = require('../core/manifest');
    const manifestObj = new Manifest();
    const versionInfo = await manifestObj.getModuleVersionInfo(moduleName, bmadDir, sourcePath);

    await manifestObj.addModule(bmadDir, moduleName, {
      version: versionInfo.version,
      source: versionInfo.source,
      npmPackage: versionInfo.npmPackage,
      repoUrl: versionInfo.repoUrl,
    });

    return {
      success: true,
      module: moduleName,
      path: targetPath,
      versionInfo,
    };
  }
  /**
   * Update an existing module
   * @param {string} moduleName - Name of the module to update
   * @param {string} bmadDir - Target bmad directory
   * @param {boolean} force - Force update (overwrite modifications)
   */
  async update(moduleName, bmadDir, force = false, options = {}) {
    const sourcePath = await this.findModuleSource(moduleName);
    const targetPath = path.join(bmadDir, moduleName);

    // Check if source module exists
    if (!sourcePath) {
      throw new Error(`Module '${moduleName}' not found in any source location`);
    }

    // Check if module is installed
    if (!(await fs.pathExists(targetPath))) {
      throw new Error(`Module '${moduleName}' is not installed`);
    }

    if (force) {
      // Force update - remove and reinstall
      await fs.remove(targetPath);
      return await this.install(moduleName, bmadDir, null, { installer: options.installer });
    } else {
      // Selective update - preserve user modifications
      await this.syncModule(sourcePath, targetPath);
    }

    return {
      success: true,
      module: moduleName,
      path: targetPath,
    };
  }

  /**
   * Remove a module
   * @param {string} moduleName - Name of the module to remove
   * @param {string} bmadDir - Target bmad directory
   */
  async remove(moduleName, bmadDir) {
    const targetPath = path.join(bmadDir, moduleName);

    if (!(await fs.pathExists(targetPath))) {
      throw new Error(`Module '${moduleName}' is not installed`);
    }

    await fs.remove(targetPath);

    return {
      success: true,
      module: moduleName,
    };
  }

  /**
   * Check if a module is installed
   * @param {string} moduleName - Name of the module
   * @param {string} bmadDir - Target bmad directory
   * @returns {boolean} True if module is installed
   */
  async isInstalled(moduleName, bmadDir) {
    const targetPath = path.join(bmadDir, moduleName);
    return await fs.pathExists(targetPath);
  }

  /**
   * Get installed module info
   * @param {string} moduleName - Name of the module
   * @param {string} bmadDir - Target bmad directory
   * @returns {Object|null} Module info or null if not installed
   */
  async getInstalledInfo(moduleName, bmadDir) {
    const targetPath = path.join(bmadDir, moduleName);

    if (!(await fs.pathExists(targetPath))) {
      return null;
    }

    const configPath = path.join(targetPath, 'config.yaml');
    const moduleInfo = {
      id: moduleName,
      path: targetPath,
      installed: true,
    };

    if (await fs.pathExists(configPath)) {
      try {
        const configContent = await fs.readFile(configPath, 'utf8');
        const config = yaml.parse(configContent);
        Object.assign(moduleInfo, config);
      } catch (error) {
        await prompts.log.warn(`Failed to read installed module config: ${error.message}`);
      }
    }

    return moduleInfo;
  }

  /**
   * Copy module with filtering for localskip agents and conditional content
   * @param {string} sourcePath - Source module path
   * @param {string} targetPath - Target module path
   * @param {Function} fileTrackingCallback - Optional callback to track installed files
   * @param {Object} moduleConfig - Module configuration with conditional flags
   */
  async copyModuleWithFiltering(sourcePath, targetPath, fileTrackingCallback = null, moduleConfig = {}) {
    // Get all files in source
    const sourceFiles = await this.getFileList(sourcePath);

    for (const file of sourceFiles) {
      // Skip sub-modules directory - these are IDE-specific and handled separately
      if (file.startsWith('sub-modules/')) {
        continue;
      }

      // Skip sidecar directories - these contain agent-specific assets not needed at install time
      const isInSidecarDirectory = path
        .dirname(file)
        .split('/')
        .some((dir) => dir.toLowerCase().endsWith('-sidecar'));

      if (isInSidecarDirectory) {
        continue;
      }

      // Skip module.yaml at root - it's only needed at install time
      if (file === 'module.yaml') {
        continue;
      }

      // Skip module root config.yaml only - generated by config collector with actual values
      // Workflow-level config.yaml (e.g. workflows/orchestrate-story/config.yaml) must be copied
      // for custom modules that use workflow-specific configuration
      if (file === 'config.yaml') {
        continue;
      }

      const sourceFile = path.join(sourcePath, file);
      const targetFile = path.join(targetPath, file);

      // Check if this is an agent file
      if (file.startsWith('agents/') && file.endsWith('.md')) {
        // Read the file to check for localskip
        const content = await fs.readFile(sourceFile, 'utf8');

        // Check for localskip="true" in the agent tag
        const agentMatch = content.match(/<agent[^>]*\slocalskip="true"[^>]*>/);
        if (agentMatch) {
          await prompts.log.message(`  Skipping web-only agent: ${path.basename(file)}`);
          continue; // Skip this agent
        }
      }

      // Copy the file with placeholder replacement
      await this.copyFileWithPlaceholderReplacement(sourceFile, targetFile);

      // Track the file if callback provided
      if (fileTrackingCallback) {
        fileTrackingCallback(targetFile);
      }
    }
  }
  /**
   * Find all .md agent files recursively in a directory
   * @param {string} dir - Directory to search
   * @returns {Array} List of .md agent file paths
   */
  async findAgentMdFiles(dir) {
    const agentFiles = [];

    async function searchDirectory(searchDir) {
      const entries = await fs.readdir(searchDir, { withFileTypes: true });

      for (const entry of entries) {
        const fullPath = path.join(searchDir, entry.name);

        if (entry.isFile() && entry.name.endsWith('.md')) {
          agentFiles.push(fullPath);
        } else if (entry.isDirectory()) {
          await searchDirectory(fullPath);
        }
      }
    }

    await searchDirectory(dir);
    return agentFiles;
  }
  /**
   * Create directories declared in module.yaml's `directories` key
   * This replaces the security-risky module installer pattern with declarative config
   * During updates, if a directory path changed, moves the old directory to the new path
   * @param {string} moduleName - Name of the module
   * @param {string} bmadDir - Target bmad directory
   * @param {Object} options - Installation options
   * @param {Object} options.moduleConfig - Module configuration from config collector
   * @param {Object} options.existingModuleConfig - Previous module config (for detecting path changes during updates)
   * @param {Object} options.coreConfig - Core configuration
   * @returns {Promise<{createdDirs: string[], movedDirs: string[], createdWdsFolders: string[]}>} Created directories info
   */
  async createModuleDirectories(moduleName, bmadDir, options = {}) {
    const moduleConfig = options.moduleConfig || {};
    const existingModuleConfig = options.existingModuleConfig || {};
    const projectRoot = path.dirname(bmadDir);
    const emptyResult = { createdDirs: [], movedDirs: [], createdWdsFolders: [] };

    // Special handling for core module - it's in src/core-skills not src/modules
    let sourcePath;
    if (moduleName === 'core') {
      sourcePath = getSourcePath('core-skills');
    } else {
      sourcePath = await this.findModuleSource(moduleName, { silent: true });
      if (!sourcePath) {
        return emptyResult; // No source found, skip
      }
    }

    // Read module.yaml to find the `directories` key
    const moduleYamlPath = path.join(sourcePath, 'module.yaml');
    if (!(await fs.pathExists(moduleYamlPath))) {
      return emptyResult; // No module.yaml, skip
    }

    let moduleYaml;
    try {
      const yamlContent = await fs.readFile(moduleYamlPath, 'utf8');
      moduleYaml = yaml.parse(yamlContent);
    } catch {
      return emptyResult; // Invalid YAML, skip
    }

    if (!moduleYaml || !moduleYaml.directories) {
      return emptyResult; // No directories declared, skip
    }

    const directories = moduleYaml.directories;
    const wdsFolders = moduleYaml.wds_folders || [];
    const createdDirs = [];
    const movedDirs = [];
    const createdWdsFolders = [];

    for (const dirRef of directories) {
      // Parse variable reference like "{design_artifacts}"
      const varMatch = dirRef.match(/^\{([^}]+)\}$/);
      if (!varMatch) {
        // Not a variable reference, skip
        continue;
      }

      const configKey = varMatch[1];
      const dirValue = moduleConfig[configKey];
      if (!dirValue || typeof dirValue !== 'string') {
        continue; // No value or not a string, skip
      }

      // Strip {project-root}/ prefix if present
      let dirPath = dirValue.replace(/^\{project-root\}\/?/, '');

      // Handle remaining {project-root} anywhere in the path
      dirPath = dirPath.replaceAll('{project-root}', '');

      // Resolve to absolute path
      const fullPath = path.join(projectRoot, dirPath);

      // Validate path is within project root (prevent directory traversal)
      const normalizedPath = path.normalize(fullPath);
      const normalizedRoot = path.normalize(projectRoot);
      if (!normalizedPath.startsWith(normalizedRoot + path.sep) && normalizedPath !== normalizedRoot) {
        const color = await prompts.getColor();
        await prompts.log.warn(color.yellow(`${configKey} path escapes project root, skipping: ${dirPath}`));
        continue;
      }

      // Check if directory path changed from previous config (update/modify scenario)
      const oldDirValue = existingModuleConfig[configKey];
      let oldFullPath = null;
      let oldDirPath = null;
      if (oldDirValue && typeof oldDirValue === 'string') {
        // F3: Normalize both values before comparing to avoid false negatives
        // from trailing slashes, separator differences, or prefix format variations
        let normalizedOld = oldDirValue.replace(/^\{project-root\}\/?/, '');
        normalizedOld = path.normalize(normalizedOld.replaceAll('{project-root}', ''));
        const normalizedNew = path.normalize(dirPath);

        if (normalizedOld !== normalizedNew) {
          oldDirPath = normalizedOld;
          oldFullPath = path.join(projectRoot, oldDirPath);
          const normalizedOldAbsolute = path.normalize(oldFullPath);
          if (!normalizedOldAbsolute.startsWith(normalizedRoot + path.sep) && normalizedOldAbsolute !== normalizedRoot) {
            oldFullPath = null; // Old path escapes project root, ignore it
          }

          // F13: Prevent parent/child move (e.g. docs/planning → docs/planning/v2)
          if (oldFullPath) {
            const normalizedNewAbsolute = path.normalize(fullPath);
            if (
              normalizedOldAbsolute.startsWith(normalizedNewAbsolute + path.sep) ||
              normalizedNewAbsolute.startsWith(normalizedOldAbsolute + path.sep)
            ) {
              const color = await prompts.getColor();
              await prompts.log.warn(
                color.yellow(
                  `${configKey}: cannot move between parent/child paths (${oldDirPath} / ${dirPath}), creating new directory instead`,
                ),
              );
              oldFullPath = null;
            }
          }
        }
      }

      const dirName = configKey.replaceAll('_', ' ');

      if (oldFullPath && (await fs.pathExists(oldFullPath)) && !(await fs.pathExists(fullPath))) {
        // Path changed and old dir exists → move old to new location
        // F1: Use fs.move() instead of fs.rename() for cross-device/volume support
        // F2: Wrap in try/catch — fallback to creating new dir on failure
        try {
          await fs.ensureDir(path.dirname(fullPath));
          await fs.move(oldFullPath, fullPath);
          movedDirs.push(`${dirName}: ${oldDirPath} → ${dirPath}`);
        } catch (moveError) {
          const color = await prompts.getColor();
          await prompts.log.warn(
            color.yellow(
              `Failed to move ${oldDirPath} → ${dirPath}: ${moveError.message}\n  Creating new directory instead. Please move contents from the old directory manually.`,
            ),
          );
          await fs.ensureDir(fullPath);
          createdDirs.push(`${dirName}: ${dirPath}`);
        }
      } else if (oldFullPath && (await fs.pathExists(oldFullPath)) && (await fs.pathExists(fullPath))) {
        // F5: Both old and new directories exist — warn user about potential orphaned documents
        const color = await prompts.getColor();
        await prompts.log.warn(
          color.yellow(
            `${dirName}: path changed but both directories exist:\n  Old: ${oldDirPath}\n  New: ${dirPath}\n  Old directory may contain orphaned documents — please review and merge manually.`,
          ),
        );
      } else if (!(await fs.pathExists(fullPath))) {
        // New directory doesn't exist yet → create it
        createdDirs.push(`${dirName}: ${dirPath}`);
        await fs.ensureDir(fullPath);
      }

      // Create WDS subfolders if this is the design_artifacts directory
      if (configKey === 'design_artifacts' && wdsFolders.length > 0) {
        for (const subfolder of wdsFolders) {
          const subPath = path.join(fullPath, subfolder);
          if (!(await fs.pathExists(subPath))) {
            await fs.ensureDir(subPath);
            createdWdsFolders.push(subfolder);
          }
        }
      }
    }

    return { createdDirs, movedDirs, createdWdsFolders };
  }
||||
  /**
   * Private: Process module configuration
   * @param {string} modulePath - Path to installed module
   * @param {string} moduleName - Module name
   */
  async processModuleConfig(modulePath, moduleName) {
    const configPath = path.join(modulePath, 'config.yaml');

    if (await fs.pathExists(configPath)) {
      try {
        let configContent = await fs.readFile(configPath, 'utf8');

        // Replace path placeholders
        configContent = configContent.replaceAll('{project-root}', `bmad/${moduleName}`);
        configContent = configContent.replaceAll('{module}', moduleName);

        await fs.writeFile(configPath, configContent, 'utf8');
      } catch (error) {
        await prompts.log.warn(`Failed to process module config: ${error.message}`);
      }
    }
  }

  /**
   * Private: Sync module files (preserving user modifications)
   * @param {string} sourcePath - Source module path
   * @param {string} targetPath - Target module path
   */
  async syncModule(sourcePath, targetPath) {
    // Get list of all source files
    const sourceFiles = await this.getFileList(sourcePath);

    for (const file of sourceFiles) {
      const sourceFile = path.join(sourcePath, file);
      const targetFile = path.join(targetPath, file);

      // Check if target file exists and has been modified
      if (await fs.pathExists(targetFile)) {
        const sourceStats = await fs.stat(sourceFile);
        const targetStats = await fs.stat(targetFile);

        // Skip if target is newer (user modified)
        if (targetStats.mtime > sourceStats.mtime) {
          continue;
        }
      }

      // Copy file with placeholder replacement
      await this.copyFileWithPlaceholderReplacement(sourceFile, targetFile);
    }
  }

  /**
   * Private: Get list of all files in a directory
   * @param {string} dir - Directory path
   * @param {string} baseDir - Base directory for relative paths
   * @returns {Array} List of relative file paths
   */
  async getFileList(dir, baseDir = dir) {
    const files = [];
    const entries = await fs.readdir(dir, { withFileTypes: true });

    for (const entry of entries) {
      const fullPath = path.join(dir, entry.name);

      if (entry.isDirectory()) {
        const subFiles = await this.getFileList(fullPath, baseDir);
        files.push(...subFiles);
      } else {
        files.push(path.relative(baseDir, fullPath));
      }
    }

    return files;
  }
}

module.exports = { ModuleManager };

@ -1,213 +0,0 @@
const fs = require('fs-extra');
const yaml = require('yaml');
const path = require('node:path');
const packageJson = require('../../../package.json');

/**
 * Configuration utility class
 */
class Config {
  /**
   * Load a YAML configuration file
   * @param {string} configPath - Path to config file
   * @returns {Object} Parsed configuration
   */
  async loadYaml(configPath) {
    if (!(await fs.pathExists(configPath))) {
      throw new Error(`Configuration file not found: ${configPath}`);
    }

    const content = await fs.readFile(configPath, 'utf8');
    return yaml.parse(content);
  }

  /**
   * Save configuration to YAML file
   * @param {string} configPath - Path to config file
   * @param {Object} config - Configuration object
   */
  async saveYaml(configPath, config) {
    // The `yaml` package exposes stringify(), not js-yaml's dump();
    // aliasDuplicateObjects: false keeps anchors/aliases out of the output
    const yamlContent = yaml.stringify(config, {
      indent: 2,
      lineWidth: 120,
      aliasDuplicateObjects: false,
    });

    await fs.ensureDir(path.dirname(configPath));
    // Ensure POSIX-compliant final newline
    const content = yamlContent.endsWith('\n') ? yamlContent : yamlContent + '\n';
    await fs.writeFile(configPath, content, 'utf8');
  }

  /**
   * Process configuration file (replace placeholders)
   * @param {string} configPath - Path to config file
   * @param {Object} replacements - Replacement values
   */
  async processConfig(configPath, replacements = {}) {
    let content = await fs.readFile(configPath, 'utf8');

    // Standard replacements
    const standardReplacements = {
      '{project-root}': replacements.root || '',
      '{module}': replacements.module || '',
      '{version}': replacements.version || packageJson.version,
      '{date}': new Date().toISOString().split('T')[0],
    };

    // Apply all replacements
    const allReplacements = { ...standardReplacements, ...replacements };

    for (const [placeholder, value] of Object.entries(allReplacements)) {
      if (typeof placeholder === 'string' && typeof value === 'string') {
        const regex = new RegExp(placeholder.replaceAll(/[.*+?^${}()|[\]\\]/g, String.raw`\$&`), 'g');
        content = content.replace(regex, value);
      }
    }

    await fs.writeFile(configPath, content, 'utf8');
  }

  /**
   * Merge configurations
   * @param {Object} base - Base configuration
   * @param {Object} override - Override configuration
   * @returns {Object} Merged configuration
   */
  mergeConfigs(base, override) {
    return this.deepMerge(base, override);
  }

  /**
   * Deep merge two objects
   * @param {Object} target - Target object
   * @param {Object} source - Source object
   * @returns {Object} Merged object
   */
  deepMerge(target, source) {
    const output = { ...target };

    if (this.isObject(target) && this.isObject(source)) {
      for (const key of Object.keys(source)) {
        if (this.isObject(source[key])) {
          if (key in target) {
            output[key] = this.deepMerge(target[key], source[key]);
          } else {
            output[key] = source[key];
          }
        } else {
          output[key] = source[key];
        }
      }
    }

    return output;
  }

  /**
   * Check if value is an object
   * @param {*} item - Item to check
   * @returns {boolean} True if object
   */
  isObject(item) {
    return item && typeof item === 'object' && !Array.isArray(item);
  }

  /**
   * Validate configuration against schema
   * @param {Object} config - Configuration to validate
   * @param {Object} schema - Validation schema
   * @returns {Object} Validation result
   */
  validateConfig(config, schema) {
    const errors = [];
    const warnings = [];

    // Check required fields
    if (schema.required) {
      for (const field of schema.required) {
        if (!(field in config)) {
          errors.push(`Missing required field: ${field}`);
        }
      }
    }

    // Check field types
    if (schema.properties) {
      for (const [field, spec] of Object.entries(schema.properties)) {
        if (field in config) {
          const value = config[field];
          const expectedType = spec.type;

          if (expectedType === 'array' && !Array.isArray(value)) {
            errors.push(`Field '${field}' should be an array`);
          } else if (expectedType === 'object' && !this.isObject(value)) {
            errors.push(`Field '${field}' should be an object`);
          } else if (expectedType === 'string' && typeof value !== 'string') {
            errors.push(`Field '${field}' should be a string`);
          } else if (expectedType === 'number' && typeof value !== 'number') {
            errors.push(`Field '${field}' should be a number`);
          } else if (expectedType === 'boolean' && typeof value !== 'boolean') {
            errors.push(`Field '${field}' should be a boolean`);
          }

          // Check enum values
          if (spec.enum && !spec.enum.includes(value)) {
            errors.push(`Field '${field}' must be one of: ${spec.enum.join(', ')}`);
          }
        }
      }
    }

    return {
      valid: errors.length === 0,
      errors,
      warnings,
    };
  }

  /**
   * Get configuration value with fallback
   * @param {Object} config - Configuration object
   * @param {string} path - Dot-notation path to value
   * @param {*} defaultValue - Default value if not found
   * @returns {*} Configuration value
   */
  getValue(config, path, defaultValue = null) {
    const keys = path.split('.');
    let current = config;

    for (const key of keys) {
      if (current && typeof current === 'object' && key in current) {
        current = current[key];
      } else {
        return defaultValue;
      }
    }

    return current;
  }

  /**
   * Set configuration value
   * @param {Object} config - Configuration object
   * @param {string} path - Dot-notation path to value
   * @param {*} value - Value to set
   */
  setValue(config, path, value) {
    const keys = path.split('.');
    const lastKey = keys.pop();
    let current = config;

    for (const key of keys) {
      if (!(key in current) || typeof current[key] !== 'object') {
        current[key] = {};
      }
      current = current[key];
    }

    current[lastKey] = value;
  }
}

module.exports = { Config };
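The merge semantics of `deepMerge` above can be seen in a condensed, standalone sketch (same logic, compressed for illustration): nested plain objects merge recursively, while scalars and arrays from `source` replace the corresponding `target` values outright.

```javascript
// Condensed copy of the Config.deepMerge logic, for illustration only.
function deepMerge(target, source) {
  const isObject = (x) => x && typeof x === 'object' && !Array.isArray(x);
  const output = { ...target };
  if (isObject(target) && isObject(source)) {
    for (const key of Object.keys(source)) {
      output[key] =
        isObject(source[key]) && key in target
          ? deepMerge(target[key], source[key]) // recurse into shared nested objects
          : source[key]; // scalars, arrays, and new keys: source wins
    }
  }
  return output;
}

const merged = deepMerge(
  { ide: { theme: 'dark', fontSize: 12 }, modules: ['core'] },
  { ide: { fontSize: 14 }, modules: ['core', 'bmm'] },
);
console.log(merged);
// → { ide: { theme: 'dark', fontSize: 14 }, modules: ['core', 'bmm'] }
```

Note that arrays are treated as atomic values here — `['core']` is replaced, not concatenated — which is the behavior a caller of `mergeConfigs` gets for list-valued settings.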

@ -1,116 +0,0 @@
const fs = require('fs-extra');
const path = require('node:path');
const yaml = require('yaml');
const { getProjectRoot } = require('./project-root');

/**
 * Platform Codes Manager
 * Loads and provides access to the centralized platform codes configuration
 */
class PlatformCodes {
  config = null;

  constructor() {
    this.configPath = path.join(getProjectRoot(), 'tools', 'platform-codes.yaml');
    this.loadConfig();
  }

  /**
   * Load the platform codes configuration
   */
  loadConfig() {
    try {
      if (fs.existsSync(this.configPath)) {
        const content = fs.readFileSync(this.configPath, 'utf8');
        this.config = yaml.parse(content);
      } else {
        console.warn(`Platform codes config not found at ${this.configPath}`);
        this.config = { platforms: {} };
      }
    } catch (error) {
      console.error(`Error loading platform codes: ${error.message}`);
      this.config = { platforms: {} };
    }
  }

  /**
   * Get all platform codes
   * @returns {Object} All platform configurations
   */
  getAllPlatforms() {
    return this.config.platforms || {};
  }

  /**
   * Get a specific platform configuration
   * @param {string} code - Platform code
   * @returns {Object|null} Platform configuration or null if not found
   */
  getPlatform(code) {
    return this.config.platforms[code] || null;
  }

  /**
   * Check if a platform code is valid
   * @param {string} code - Platform code to validate
   * @returns {boolean} True if valid
   */
  isValidPlatform(code) {
    return code in this.config.platforms;
  }

  /**
   * Get all preferred platforms
   * @returns {Array} Array of preferred platform codes
   */
  getPreferredPlatforms() {
    return Object.entries(this.config.platforms)
      .filter(([, config]) => config.preferred)
      .map(([code]) => code);
  }

  /**
   * Get platforms by category
   * @param {string} category - Category to filter by
   * @returns {Array} Array of platform codes in the category
   */
  getPlatformsByCategory(category) {
    return Object.entries(this.config.platforms)
      .filter(([, config]) => config.category === category)
      .map(([code]) => code);
  }

  /**
   * Get platform display name
   * @param {string} code - Platform code
   * @returns {string} Display name or code if not found
   */
  getDisplayName(code) {
    const platform = this.getPlatform(code);
    return platform ? platform.name : code;
  }

  /**
   * Validate platform code format
   * @param {string} code - Platform code to validate
   * @returns {boolean} True if format is valid
   */
  isValidFormat(code) {
    const conventions = this.config.conventions || {};
    const pattern = conventions.allowed_characters || 'a-z0-9-';
    const maxLength = conventions.max_code_length || 20;

    const regex = new RegExp(`^[${pattern}]+$`);
    return regex.test(code) && code.length <= maxLength;
  }

  /**
   * Get all platform codes as array
   * @returns {Array} Array of platform codes
   */
  getCodes() {
    return Object.keys(this.config.platforms);
  }
}

// Export singleton instance
module.exports = new PlatformCodes();
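For illustration, the `isValidFormat` check above reduces to a small free function (the standalone form and the inlined defaults `'a-z0-9-'` / `20` are taken from the fallbacks in the method; the convention string becomes a regex character class verbatim):

```javascript
// Standalone sketch of PlatformCodes.isValidFormat with its documented defaults.
function isValidFormat(code, pattern = 'a-z0-9-', maxLength = 20) {
  // The convention string is dropped straight into a character class,
  // so 'a-z0-9-' means lowercase letters, digits, and a literal hyphen.
  const regex = new RegExp(`^[${pattern}]+$`);
  return regex.test(code) && code.length <= maxLength;
}

console.log(isValidFormat('claude-code'));  // true
console.log(isValidFormat('Claude_Code')); // false: uppercase and underscore
console.log(isValidFormat('a'.repeat(21))); // false: exceeds max length
```

Because the pattern is interpolated unescaped, any custom `allowed_characters` value in platform-codes.yaml must itself be a valid character-class body.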

@ -6,7 +6,7 @@ Create a reference documentation page at `docs/reference/modules.md` that lists

## Source of Truth

Read `tools/cli/external-official-modules.yaml` — this is the authoritative registry of official external modules. Use the module names, codes, npm package names, and repository URLs from this file.
Read `tools/installer/external-official-modules.yaml` — this is the authoritative registry of official external modules. Use the module names, codes, npm package names, and repository URLs from this file.

## Research Step
@ -205,17 +205,14 @@ Support assumption: full Agent Skills support. BMAD currently uses a custom inst
- [x] Implement/extend automated tests — 11 assertions in test suite 17 including marker cleanup
- [x] Commit

## KiloCoder — SUSPENDED

**Status: Kilo Code does not support the Agent Skills standard.** The original migration assumed skills support because Kilo forked from Roo Code, but manual IDE verification confirmed Kilo has not merged that feature. BMAD support is paused until Kilo implements skills.

## KiloCoder

**Install:** VS Code extension `kilocode.kilo-code` — search "Kilo Code" in Extensions or `code --install-extension kilocode.kilo-code`

- [x] ~~Confirm KiloCoder native skills path~~ — **FALSE**: assumed from Roo Code fork, not verified. Manual testing showed no skills support in the IDE
- [x] Config and installer code retained in platform-codes.yaml with `suspended` flag — hidden from IDE picker, setup blocked with explanation
- [x] Installer fails early (before writing `_bmad/`) if Kilo is the only selected IDE, protecting existing installations
- [x] Legacy cleanup still runs for `.kilocode/workflows` and `.kilocodemodes` when users switch to a different IDE
- [x] Automated tests — 7 assertions in suite 22 (suspended config, hidden from picker, setup blocked, no files written, legacy cleanup)
- [x] Confirm KiloCoder native skills path — `.kilocode/skills`
- [x] Legacy cleanup for `.kilocode/workflows` and `.kilocodemodes`
- [x] Automated tests — suite 22 (config, IDE picker, install, skill output, legacy cleanup, reinstall)
- [x] Commit

## Gemini CLI
@ -1,9 +1,11 @@
#!/usr/bin/env node

const { program } = require('commander');
const path = require('node:path');
const fs = require('node:fs');
const { execSync } = require('node:child_process');
const semver = require('semver');
const prompts = require('./lib/prompts');
const prompts = require('./prompts');

// The installer flow uses many sequential @clack/prompts, each adding keypress
// listeners to stdin. Raise the limit to avoid spurious EventEmitter warnings.
@ -8,7 +8,7 @@ const CLIUtils = {
   */
  getVersion() {
    try {
      const packageJson = require(path.join(__dirname, '..', '..', '..', 'package.json'));
      const packageJson = require(path.join(__dirname, '..', '..', 'package.json'));
      return packageJson.version || 'Unknown';
    } catch {
      return 'Unknown';
@ -16,10 +16,9 @@ const CLIUtils = {
  },

  /**
   * Display BMAD logo using @clack intro + box
   * @param {boolean} _clearScreen - Deprecated, ignored (no longer clears screen)
   * Display BMAD logo and version using @clack intro + box
   */
  async displayLogo(_clearScreen = true) {
  async displayLogo() {
    const version = this.getVersion();
    const color = await prompts.getColor();