Merge branch 'main' into fix/version-comparison-semver

This commit is contained in:
Brian 2026-02-16 09:38:41 -06:00 committed by GitHub
commit 46cee9a731
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
66 changed files with 1485 additions and 362 deletions

View File

@ -0,0 +1,7 @@
---
name: bmad-os-diataxis-style-fix
description: Fixes documentation to comply with Diataxis framework and BMad Method style guide rules
disable-model-invocation: true
---
Read `prompts/instructions.md` and execute.

View File

@ -0,0 +1,229 @@
# Diataxis Style Fixer
Automatically fixes documentation to comply with the Diataxis framework and BMad Method style guide.
## CRITICAL RULES
- **NEVER commit or push changes** — let the user review first
- **NEVER make destructive edits** — preserve all content, only fix formatting
- **Use Edit tool** — make targeted fixes, not full file rewrites
- **Show summary** — after fixing, list all changes made
## Input
Documentation file path or directory to fix. Defaults to `docs/` if not specified.
## Step 1: Understand Diataxis Framework
**Diataxis** is a documentation framework that categorizes content into four types based on two axes:
| | **Learning** (oriented toward future) | **Doing** (oriented toward present) |
| -------------- | ----------------------------------------------------------------------------- | ---------------------------------------------------------------------------- |
| **Practical** | **Tutorials** — lessons that guide learners through achieving a specific goal | **How-to guides** — step-by-step instructions for solving a specific problem |
| **Conceptual** | **Explanation** — content that clarifies and describes underlying concepts | **Reference** — technical descriptions, organized for lookup |
**Key principles:**
- Each document type serves a distinct user need
- Don't mix types — a tutorial shouldn't explain concepts deeply
- Focus on the user's goal, not exhaustive coverage
- Structure follows purpose (tutorials are linear, reference is scannable)
## Step 2: Read the Style Guide
Read the project's style guide at `docs/_STYLE_GUIDE.md` to understand all project-specific conventions.
## Step 3: Detect Document Type
Based on file location, determine the document type:
| Location | Diataxis Type |
| -------------------- | -------------------- |
| `/docs/tutorials/` | Tutorial |
| `/docs/how-to/` | How-to guide |
| `/docs/explanation/` | Explanation |
| `/docs/reference/` | Reference |
| `/docs/glossary/` | Reference (glossary) |
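
The mapping in the table above can be sketched as a small helper. This is an illustrative sketch only, not part of the skill; the directory names are the ones listed in the table:

```python
from pathlib import PurePosixPath

# Directory name under docs/ -> Diataxis type, per the table above.
TYPE_BY_DIR = {
    "tutorials": "Tutorial",
    "how-to": "How-to guide",
    "explanation": "Explanation",
    "reference": "Reference",
    "glossary": "Reference (glossary)",
}

def detect_doc_type(path: str):
    """Return the Diataxis type for a file under docs/, or None."""
    parts = PurePosixPath(path).parts
    if "docs" not in parts:
        return None
    idx = parts.index("docs")
    # Require a subdirectory between docs/ and the file itself.
    if idx + 1 >= len(parts) - 1:
        return None
    return TYPE_BY_DIR.get(parts[idx + 1])
```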
## Step 4: Find and Fix Issues
For each markdown file, scan for issues and fix them:
### Universal Fixes (All Doc Types)
**Horizontal Rules (`---`)**
- Remove any `---` outside of YAML frontmatter
- Replace with `##` section headers or admonitions as appropriate
**`####` Headers**
- Replace with bold text: `#### Header` → `**Header**`
- Or convert to admonition if it's a warning/notice
**"Related" or "Next:" Sections**
- Remove entire section including links
- The sidebar handles navigation
**Deeply Nested Lists**
- Break into sections with `##` headers
- Flatten to max 3 levels
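As an illustration (hypothetical content), flattening promotes the top level to a header and trims depth:
```markdown
<!-- Before -->
- Setup
  - Install
    - Download the package
      - Verify the checksum

<!-- After -->
## Setup
- Install
  - Download the package
  - Verify the checksum
```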
**Code Blocks for Dialogue/Examples**
- Convert to admonitions:
```
:::note[Example]
[content]
:::
```
**Bold Paragraph Callouts**
- Convert to admonitions with appropriate type
**Too Many Admonitions**
- Limit to 1-2 per section (tutorials allow 3-4 per major section)
- Consolidate related admonitions
- Remove less critical ones if over limit
**Table Cells / List Items > 2 Sentences**
- Break into multiple rows/cells
- Or shorten to 1-2 sentences
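A hypothetical illustration of splitting an over-long cell into two rows:
```markdown
<!-- Before -->
| Option | Notes |
| ------ | ----- |
| Cache  | Speeds up rebuilds. Stored under `.cache/`. Clear it when output looks stale. |

<!-- After -->
| Option | Notes |
| ------------- | ----- |
| Cache         | Speeds up rebuilds; stored under `.cache/`. |
| Cache cleanup | Clear `.cache/` when output looks stale. |
```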
**Header Budget Exceeded**
- Merge related sections
- Convert some `##` to `###` subsections
- Goal: 8-12 `##` per doc; 2-3 `###` per section
### Type-Specific Fixes
**Tutorials** (`/docs/tutorials/`)
- Ensure hook describes outcome in 1-2 sentences
- Add "What You'll Learn" bullet section if missing
- Add `:::note[Prerequisites]` if missing
- Add `:::tip[Quick Path]` TL;DR at top if missing
- Use tables for phases, commands, agents
- Add "What You've Accomplished" section if missing
- Add Quick Reference table if missing
- Add Common Questions section if missing
- Add Getting Help section if missing
- Add `:::tip[Key Takeaways]` at end if missing
**How-To** (`/docs/how-to/`)
- Ensure hook starts with "Use the `X` workflow to..."
- Add "When to Use This" with 3-5 bullets if missing
- Add `:::note[Prerequisites]` if missing
- Ensure steps are numbered `###` with action verbs
- Add "What You Get" describing outputs if missing
**Explanation** (`/docs/explanation/`)
- Ensure hook states what document explains
- Organize content into scannable `##` sections
- Add comparison tables for 3+ options
- Link to how-to guides for procedural questions
- Limit to 2-3 admonitions per document
**Reference** (`/docs/reference/`)
- Ensure hook states what document references
- Ensure structure matches reference type
- Use consistent item structure throughout
- Use tables for structured/comparative data
- Link to explanation docs for conceptual depth
- Limit to 1-2 admonitions per document
**Glossary** (`/docs/glossary/` or glossary files)
- Ensure categories as `##` headers
- Ensure terms in tables (not individual headers)
- Definitions 1-2 sentences max
- Bold term names in cells
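A minimal sketch of the expected glossary shape (the terms and definitions here are hypothetical):
```markdown
## Core Concepts

| Term | Definition |
| ---- | ---------- |
| **Agent** | A persona that runs workflows on your behalf. |
| **Workflow** | A repeatable, guided sequence of steps. |
```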
## Step 5: Apply Fixes
For each file with issues:
1. Read the file
2. Use Edit tool for each fix
3. Track what was changed
## Step 6: Summary
After processing all files, output a summary:
```markdown
# Style Fixes Applied
**Files processed:** N
**Files modified:** N
## Changes Made
### `path/to/file.md`
- Removed horizontal rule at line 45
- Converted `####` headers to bold text
- Added `:::tip[Quick Path]` admonition
- Consolidated 3 admonitions into 2
### `path/to/other.md`
- Removed "Related:" section
- Fixed table cell length (broke into 2 rows)
## Review Required
Please review the changes. When satisfied, commit and push as needed.
```
## Common Patterns
**Converting `####` to bold:**
```markdown
#### Important Note
Some text here.
```
```markdown
**Important Note**
Some text here.
```
**Removing horizontal rule:**
```markdown
Some content above.
---
Some content below.
```
```markdown
Some content above.
## [Descriptive Section Header]
Some content below.
```
**Converting code block to admonition:**
````markdown
```
User: What should I do?
Agent: Run the workflow.
```
````
```markdown
:::note[Example]
**User:** What should I do?
**Agent:** Run the workflow.
:::
```
**Converting bold paragraph to admonition:**
```markdown
**IMPORTANT:** It is critical that you read this before proceeding.
```
```markdown
:::caution[Important]
It is critical that you read this before proceeding.
:::
```

View File

@ -73,7 +73,7 @@ After searching, use the [feature request template](https://github.com/bmad-code
### Target Branch ### Target Branch
Submit PRs to the `main` branch. Submit PRs to the `main` branch. We use [trunk-based development](https://trunkbaseddevelopment.com/branch-for-release/): `main` is the trunk where all work lands, and stable release branches receive only cherry-picked fixes.
### PR Size ### PR Size

View File

@ -75,10 +75,12 @@ Show in "What You've Accomplished" sections:
````md ````md
``` ```
your-project/ your-project/
├── _bmad/ # BMad configuration ├── _bmad/ # BMad configuration
├── _bmad-output/ ├── _bmad-output/
│ ├── PRD.md # Your requirements document │ ├── planning-artifacts/
│ └── bmm-workflow-status.yaml # Progress tracking │ │ └── PRD.md # Your requirements document
│ ├── implementation-artifacts/
│ └── project-context.md # Implementation rules (optional)
└── ... └── ...
``` ```
```` ````
@ -142,12 +144,12 @@ your-project/
### Types ### Types
| Type | Example | | Type | Example |
| ----------------- | ---------------------------- | | ----------------- | ----------------------------- |
| **Index/Landing** | `core-concepts/index.md` | | **Index/Landing** | `core-concepts/index.md` |
| **Concept** | `what-are-agents.md` | | **Concept** | `what-are-agents.md` |
| **Feature** | `quick-flow.md` | | **Feature** | `quick-flow.md` |
| **Philosophy** | `why-solutioning-matters.md` | | **Philosophy** | `why-solutioning-matters.md` |
| **FAQ** | `established-projects-faq.md` | | **FAQ** | `established-projects-faq.md` |
### General Template ### General Template

View File

@ -0,0 +1,157 @@
---
title: "Project Context"
description: How project-context.md guides AI agents with your project's rules and preferences
sidebar:
order: 7
---
The `project-context.md` file is your project's implementation guide for AI agents. Similar to a "constitution" in other development systems, it captures the rules, patterns, and preferences that ensure consistent code generation across all workflows.
## What It Does
AI agents make implementation decisions constantly — which patterns to follow, how to structure code, what conventions to use. Without clear guidance, they may:
- Follow generic best practices that don't match your codebase
- Make inconsistent decisions across different stories
- Miss project-specific requirements or constraints
The `project-context.md` file solves this by documenting what agents need to know in a concise, LLM-optimized format.
## How It Works
Every implementation workflow automatically loads `project-context.md` if it exists. The architect workflow also loads it to respect your technical preferences when designing the architecture.
**Loaded by these workflows:**
- `create-architecture` — respects technical preferences during solutioning
- `create-story` — informs story creation with project patterns
- `dev-story` — guides implementation decisions
- `code-review` — validates against project standards
- `quick-dev` — applies patterns when implementing tech-specs
- `sprint-planning`, `retrospective`, `correct-course` — provides project-wide context
## When to Create It
The `project-context.md` file is useful at any stage of a project:
| Scenario | When to Create | Purpose |
|----------|----------------|---------|
| **New project, before architecture** | Manually, before `create-architecture` | Document your technical preferences so the architect respects them |
| **New project, after architecture** | Via `generate-project-context` or manually | Capture architecture decisions for implementation agents |
| **Existing project** | Via `generate-project-context` | Discover existing patterns so agents follow established conventions |
| **Quick Flow project** | Before or during `quick-dev` | Ensure quick implementation respects your patterns |
:::tip[Recommended]
For new projects, create it manually before architecture if you have strong technical preferences. Otherwise, generate it after architecture to capture those decisions.
:::
## What Goes In It
The file has two main sections:
### Technology Stack & Versions
Documents the frameworks, languages, and tools your project uses with specific versions:
```markdown
## Technology Stack & Versions
- Node.js 20.x, TypeScript 5.3, React 18.2
- State: Zustand (not Redux)
- Testing: Vitest, Playwright, MSW
- Styling: Tailwind CSS with custom design tokens
```
### Critical Implementation Rules
Documents patterns and conventions that agents might otherwise miss:
```markdown
## Critical Implementation Rules
**TypeScript Configuration:**
- Strict mode enabled — no `any` types without explicit approval
- Use `interface` for public APIs, `type` for unions/intersections
**Code Organization:**
- Components in `/src/components/` with co-located `.test.tsx`
- Utilities in `/src/lib/` for reusable pure functions
- API calls use the `apiClient` singleton — never fetch directly
**Testing Patterns:**
- Unit tests focus on business logic, not implementation details
- Integration tests use MSW to mock API responses
- E2E tests cover critical user journeys only
**Framework-Specific:**
- All async operations use the `handleError` wrapper for consistent error handling
- Feature flags accessed via `featureFlag()` from `@/lib/flags`
- New routes follow the file-based routing pattern in `/src/app/`
```
Focus on what's **unobvious** — things agents might not infer from reading code snippets. Don't document standard practices that apply universally.
## Creating the File
You have three options:
### Manual Creation
Create the file at `_bmad-output/project-context.md` and add your rules:
```bash
# In your project root
mkdir -p _bmad-output
touch _bmad-output/project-context.md
```
Edit it with your technology stack and implementation rules. The architect and implementation workflows will automatically find and load it.
### Generate After Architecture
Run the `generate-project-context` workflow after completing your architecture:
```bash
/bmad-bmm-generate-project-context
```
This scans your architecture document and project files to generate a context file capturing the decisions made.
### Generate for Existing Projects
For existing projects, run `generate-project-context` to discover existing patterns:
```bash
/bmad-bmm-generate-project-context
```
The workflow analyzes your codebase to identify conventions, then generates a context file you can review and refine.
## Why It Matters
Without `project-context.md`, agents make assumptions that may not match your project:
| Without Context | With Context |
|----------------|--------------|
| Uses generic patterns | Follows your established conventions |
| Inconsistent style across stories | Consistent implementation |
| May miss project-specific constraints | Respects all technical requirements |
| Each agent decides independently | All agents align with same rules |
This is especially important for:
- **Quick Flow** — skips PRD and architecture, so context file fills the gap
- **Team projects** — ensures all agents follow the same standards
- **Existing projects** — prevents breaking established patterns
## Editing and Updating
The `project-context.md` file is a living document. Update it when:
- Architecture decisions change
- New conventions are established
- Patterns evolve during implementation
- You identify gaps from agent behavior
You can edit it manually at any time, or re-run `generate-project-context` to update it after significant changes.
:::note[File Location]
The default location is `_bmad-output/project-context.md`. Workflows search for it there, and also check `**/project-context.md` anywhere in your project.
:::

View File

@ -5,7 +5,7 @@ sidebar:
order: 6 order: 6
--- ---
Use BMad Method effectively when working on existing projects and legacy codebases, sometimes also referred to as brownfield projects. Use BMad Method effectively when working on existing projects and legacy codebases.
This guide covers the essential workflow for onboarding to existing projects with BMad Method. This guide covers the essential workflow for onboarding to existing projects with BMad Method.
@ -23,7 +23,30 @@ If you have completed all PRD epics and stories through the BMad process, clean
- `_bmad-output/planning-artifacts/` - `_bmad-output/planning-artifacts/`
- `_bmad-output/implementation-artifacts/` - `_bmad-output/implementation-artifacts/`
## Step 2: Maintain Quality Project Documentation ## Step 2: Create Project Context
:::tip[Recommended for Existing Projects]
Generate `project-context.md` to capture your existing codebase patterns and conventions. This ensures AI agents follow your established practices when implementing changes.
:::
Run the generate project context workflow:
```bash
/bmad-bmm-generate-project-context
```
This scans your codebase to identify:
- Technology stack and versions
- Code organization patterns
- Naming conventions
- Testing approaches
- Framework-specific patterns
You can review and refine the generated file, or create it manually at `_bmad-output/project-context.md` if you prefer.
[Learn more about project context](../explanation/project-context.md)
## Step 3: Maintain Quality Project Documentation
Your `docs/` folder should contain succinct, well-organized documentation that accurately represents your project: Your `docs/` folder should contain succinct, well-organized documentation that accurately represents your project:

View File

@ -0,0 +1,136 @@
---
title: "Manage Project Context"
description: Create and maintain project-context.md to guide AI agents
sidebar:
order: 7
---
Use the `project-context.md` file to ensure AI agents follow your project's technical preferences and implementation rules throughout all workflows.
:::note[Prerequisites]
- BMad Method installed
- Understanding of your project's technology stack and conventions
:::
## When to Use This
- You have strong technical preferences before starting architecture
- You've completed architecture and want to capture decisions for implementation
- You're working on an existing codebase with established patterns
- You notice agents making inconsistent decisions across stories
## Step 1: Choose Your Approach
**Manual creation** — Best when you know exactly what rules you want to document
**Generate after architecture** — Best for capturing decisions made during solutioning
**Generate for existing projects** — Best for discovering patterns in existing codebases
## Step 2: Create the File
### Option A: Manual Creation
Create the file at `_bmad-output/project-context.md`:
```bash
mkdir -p _bmad-output
touch _bmad-output/project-context.md
```
Add your technology stack and implementation rules:
```markdown
---
project_name: 'MyProject'
user_name: 'YourName'
date: '2026-02-15'
sections_completed: ['technology_stack', 'critical_rules']
---
# Project Context for AI Agents
## Technology Stack & Versions
- Node.js 20.x, TypeScript 5.3, React 18.2
- State: Zustand
- Testing: Vitest, Playwright
- Styling: Tailwind CSS
## Critical Implementation Rules
**TypeScript:**
- Strict mode enabled, no `any` types
- Use `interface` for public APIs, `type` for unions
**Code Organization:**
- Components in `/src/components/` with co-located tests
- API calls use `apiClient` singleton — never fetch directly
**Testing:**
- Unit tests focus on business logic
- Integration tests use MSW for API mocking
```
### Option B: Generate After Architecture
Run the workflow in a fresh chat:
```bash
/bmad-bmm-generate-project-context
```
The workflow scans your architecture document and project files to generate a context file capturing the decisions made.
### Option C: Generate for Existing Projects
For existing projects, run:
```bash
/bmad-bmm-generate-project-context
```
The workflow analyzes your codebase to identify conventions, then generates a context file you can review and refine.
## Step 3: Verify Content
Review the generated file and ensure it captures:
- Correct technology versions
- Your actual conventions (not generic best practices)
- Rules that prevent common mistakes
- Framework-specific patterns
Edit manually to add anything missing or remove inaccuracies.
## What You Get
A `project-context.md` file that:
- Ensures all agents follow the same conventions
- Prevents inconsistent decisions across stories
- Captures architecture decisions for implementation
- Serves as a reference for your project's patterns and rules
## Tips
:::tip[Focus on the Unobvious]
Document patterns agents might miss, such as "Use JSDoc-style comments on every public class, function, and variable", not universal practices like "use meaningful variable names", which LLMs already know.
:::
:::tip[Keep It Lean]
This file is loaded by every implementation workflow, so long files waste context. Do not include content that applies only to a narrow scope, such as specific stories or features.
:::
:::tip[Update as Needed]
Edit manually when patterns change, or re-generate after significant architecture changes.
:::
:::tip[Works for All Project Types]
Just as useful for Quick Flow as for full BMad Method projects.
:::
## Next Steps
- [**Project Context Explanation**](../explanation/project-context.md) — Learn more about how it works
- [**Workflow Map**](../reference/workflow-map.md) — See which workflows load project context

View File

@ -23,11 +23,11 @@ Document sharding splits large markdown files into smaller, organized files base
```text ```text
Before Sharding: Before Sharding:
docs/ _bmad-output/planning-artifacts/
└── PRD.md (large 50k token file) └── PRD.md (large 50k token file)
After Sharding: After Sharding:
docs/ _bmad-output/planning-artifacts/
└── prd/ └── prd/
├── index.md # Table of contents with descriptions ├── index.md # Table of contents with descriptions
├── overview.md # Section 1 ├── overview.md # Section 1

View File

@ -77,14 +77,46 @@ Skip phases 1-3 for small, well-understood work.
Each document becomes context for the next phase. The PRD tells the architect what constraints matter. The architecture tells the dev agent which patterns to follow. Story files give focused, complete context for implementation. Without this structure, agents make inconsistent decisions. Each document becomes context for the next phase. The PRD tells the architect what constraints matter. The architecture tells the dev agent which patterns to follow. Story files give focused, complete context for implementation. Without this structure, agents make inconsistent decisions.
For established projects, `document-project` creates or updates `project-context.md` - what exists in the codebase and the rules all implementation workflows must observe. Run it just before Phase 4, and again when something significant changes - structure, architecture, or those rules. You can also edit `project-context.md` by hand. ### Project Context
All implementation workflows load `project-context.md` if it exists. Additional context per workflow: :::tip[Recommended]
Create `project-context.md` to ensure AI agents follow your project's rules and preferences. This file works like a constitution for your project — it guides implementation decisions across all workflows.
:::
| Workflow | Also Loads | **When to create it:**
| -------------- | ---------------------------- |
| Scenario | Approach |
|----------|----------|
| Before architecture (manual) | Document technical preferences you want the architect to respect |
| After architecture | Generate it to capture decisions made during solutioning |
| Existing projects | Run `generate-project-context` to discover established patterns |
| Quick Flow | Create before `quick-dev` to ensure consistent implementation |
**How to create it:**
- **Manually** — Create `_bmad-output/project-context.md` with your technology stack and implementation rules
- **Generate it** — Run `/bmad-bmm-generate-project-context` to auto-generate from your architecture or codebase
**What workflows load it:**
| Workflow | Purpose |
|----------|---------|
| `create-architecture` | Respects technical preferences when designing |
| `create-story` | Informs story creation with project patterns |
| `dev-story` | Guides implementation decisions |
| `code-review` | Validates against project standards |
| `quick-dev` | Applies patterns when implementing |
[**Learn more about project-context.md**](../explanation/project-context.md)
### Additional Context by Workflow
Beyond `project-context.md`, each workflow loads specific documents:
| Workflow | Also Loads |
|----------|------------|
| `create-story` | epics, PRD, architecture, UX | | `create-story` | epics, PRD, architecture, UX |
| `dev-story` | story file | | `dev-story` | story file |
| `code-review` | architecture, story file | | `code-review` | architecture, story file |
| `quick-spec` | planning docs (if exist) | | `quick-spec` | planning docs (if exist) |
| `quick-dev` | tech-spec | | `quick-dev` | tech-spec |

View File

@ -79,6 +79,12 @@ Always start a fresh chat for each workflow. This prevents context limitations f
Work through phases 1-3. **Use fresh chats for each workflow.** Work through phases 1-3. **Use fresh chats for each workflow.**
:::tip[Project Context (Optional)]
Before starting, consider creating `project-context.md` to document your technical preferences and implementation rules. This ensures all AI agents follow your conventions throughout the project.
Create it manually at `_bmad-output/project-context.md` or generate it after architecture using `/bmad-bmm-generate-project-context`. [Learn more](../explanation/project-context.md).
:::
### Phase 1: Analysis (Optional) ### Phase 1: Analysis (Optional)
All workflows in this phase are optional: All workflows in this phase are optional:
@ -155,12 +161,15 @@ Your project now has:
```text ```text
your-project/ your-project/
├── _bmad/ # BMad configuration ├── _bmad/ # BMad configuration
├── _bmad-output/ ├── _bmad-output/
│ ├── PRD.md # Your requirements document │ ├── planning-artifacts/
│ ├── architecture.md # Technical decisions │ │ ├── PRD.md # Your requirements document
│ ├── epics/ # Epic and story files │ │ ├── architecture.md # Technical decisions
│ └── sprint-status.yaml # Sprint tracking │ │ └── epics/ # Epic and story files
│ ├── implementation-artifacts/
│ │ └── sprint-status.yaml # Sprint tracking
│ └── project-context.md # Implementation rules (optional)
└── ... └── ...
``` ```
@ -171,6 +180,7 @@ your-project/
| `help` | `/bmad-help` | Any | Get guidance on what to do next | | `help` | `/bmad-help` | Any | Get guidance on what to do next |
| `prd` | `/bmad-bmm-create-prd` | PM | Create Product Requirements Document | | `prd` | `/bmad-bmm-create-prd` | PM | Create Product Requirements Document |
| `create-architecture` | `/bmad-bmm-create-architecture` | Architect | Create architecture document | | `create-architecture` | `/bmad-bmm-create-architecture` | Architect | Create architecture document |
| `generate-project-context` | `/bmad-bmm-generate-project-context` | Analyst | Create project context file |
| `create-epics-and-stories` | `/bmad-bmm-create-epics-and-stories` | PM | Break down PRD into epics | | `create-epics-and-stories` | `/bmad-bmm-create-epics-and-stories` | PM | Break down PRD into epics |
| `check-implementation-readiness` | `/bmad-bmm-check-implementation-readiness` | Architect | Validate planning cohesion | | `check-implementation-readiness` | `/bmad-bmm-check-implementation-readiness` | Architect | Validate planning cohesion |
| `sprint-planning` | `/bmad-bmm-sprint-planning` | SM | Initialize sprint tracking | | `sprint-planning` | `/bmad-bmm-sprint-planning` | SM | Initialize sprint tracking |

View File

@ -25,6 +25,7 @@
}, },
"scripts": { "scripts": {
"bmad:install": "node tools/cli/bmad-cli.js install", "bmad:install": "node tools/cli/bmad-cli.js install",
"bmad:uninstall": "node tools/cli/bmad-cli.js uninstall",
"docs:build": "node tools/build-docs.mjs", "docs:build": "node tools/build-docs.mjs",
"docs:dev": "astro dev --root website", "docs:dev": "astro dev --root website",
"docs:fix-links": "node tools/fix-doc-links.js", "docs:fix-links": "node tools/fix-doc-links.js",

View File

@ -16,7 +16,7 @@ agent:
communication_style: "Patient educator who explains like teaching a friend. Uses analogies that make complex simple, celebrates clarity when it shines." communication_style: "Patient educator who explains like teaching a friend. Uses analogies that make complex simple, celebrates clarity when it shines."
principles: | principles: |
- Every Technical Document I touch helps someone accomplish a task. Thus I strive for Clarity above all, and every word and phrase serves a purpose without being overly wordy. - Every Technical Document I touch helps someone accomplish a task. Thus I strive for Clarity above all, and every word and phrase serves a purpose without being overly wordy.
- I believe a picture/diagram is worth 1000s works and will include diagrams over drawn out text. - I believe a picture/diagram is worth 1000s of words and will include diagrams over drawn out text.
- I understand the intended audience or will clarify with the user so I know when to simplify vs when to be detailed. - I understand the intended audience or will clarify with the user so I know when to simplify vs when to be detailed.
- I will always strive to follow `_bmad/_memory/tech-writer-sidecar/documentation-standards.md` best practices. - I will always strive to follow `_bmad/_memory/tech-writer-sidecar/documentation-standards.md` best practices.

View File

@ -24,4 +24,4 @@ agent:
menu: menu:
- trigger: CU or fuzzy match on ux-design - trigger: CU or fuzzy match on ux-design
exec: "{project-root}/_bmad/bmm/workflows/2-plan-workflows/create-ux-design/workflow.md" exec: "{project-root}/_bmad/bmm/workflows/2-plan-workflows/create-ux-design/workflow.md"
description: "[CU] Create UX: Guidance through realizing the plan for your UX to inform architecture and implementation. PRovides more details that what was discovered in the PRD" description: "[CU] Create UX: Guidance through realizing the plan for your UX to inform architecture and implementation. Provides more details than what was discovered in the PRD"

View File

@ -70,14 +70,22 @@ This file contains the BMAD PRD philosophy, standards, and validation criteria t
**If PRD path provided as invocation parameter:** **If PRD path provided as invocation parameter:**
- Use provided path - Use provided path
**If no PRD path provided:** **If no PRD path provided, auto-discover:**
"**PRD Validation Workflow** - Search `{planning_artifacts}` for files matching `*prd*.md`
- Also check for sharded PRDs: `{planning_artifacts}/*prd*/*.md`
Which PRD would you like to validate? **If exactly ONE PRD found:**
- Use it automatically
- Inform user: "Found PRD: {discovered_path} — using it for validation."
Please provide the path to the PRD file you want to validate." **If MULTIPLE PRDs found:**
- List all discovered PRDs with numbered options
- "I found multiple PRDs. Which one would you like to validate?"
- Wait for user selection
**Wait for user to provide PRD path.** **If NO PRDs found:**
- "I couldn't find any PRD files in {planning_artifacts}. Please provide the path to the PRD file you want to validate."
- Wait for user to provide PRD path.
### 3. Validate PRD Exists and Load ### 3. Validate PRD Exists and Load

View File

@ -60,6 +60,4 @@ Load and read full config from {main_config} and resolve:
"**Validate Mode: Validating an existing PRD against BMAD standards.**" "**Validate Mode: Validating an existing PRD against BMAD standards.**"
Prompt for PRD path: "Which PRD would you like to validate? Please provide the path to the PRD.md file."
Then read fully and follow: `{validateWorkflow}` (steps-v/step-v-01-discovery.md) Then read fully and follow: `{validateWorkflow}` (steps-v/step-v-01-discovery.md)

View File

@ -12,7 +12,6 @@ document_output_language: "{config_source}:document_output_language"
date: system-generated date: system-generated
planning_artifacts: "{config_source}:planning_artifacts" planning_artifacts: "{config_source}:planning_artifacts"
implementation_artifacts: "{config_source}:implementation_artifacts" implementation_artifacts: "{config_source}:implementation_artifacts"
output_folder: "{implementation_artifacts}"
sprint_status: "{implementation_artifacts}/sprint-status.yaml" sprint_status: "{implementation_artifacts}/sprint-status.yaml"
# Workflow components # Workflow components
@ -21,10 +20,7 @@ instructions: "{installed_path}/instructions.xml"
validation: "{installed_path}/checklist.md" validation: "{installed_path}/checklist.md"
template: false template: false
variables: project_context: "**/project-context.md"
# Project context
project_context: "**/project-context.md"
story_dir: "{implementation_artifacts}"
# Smart input file references - handles both whole docs and sharded docs # Smart input file references - handles both whole docs and sharded docs
# Priority: Whole document first, then sharded version # Priority: Whole document first, then sharded version

View File

@@ -12,8 +12,6 @@ date: system-generated
 implementation_artifacts: "{config_source}:implementation_artifacts"
 planning_artifacts: "{config_source}:planning_artifacts"
 project_knowledge: "{config_source}:project_knowledge"
-output_folder: "{implementation_artifacts}"
-sprint_status: "{implementation_artifacts}/sprint-status.yaml"
 project_context: "**/project-context.md"
 # Smart input file references - handles both whole docs and sharded docs
@@ -52,6 +50,5 @@ input_file_patterns:
 installed_path: "{project-root}/_bmad/bmm/workflows/4-implementation/correct-course"
 template: false
 instructions: "{installed_path}/instructions.md"
-validation: "{installed_path}/checklist.md"
 checklist: "{installed_path}/checklist.md"
 default_output_file: "{planning_artifacts}/sprint-change-proposal-{date}.md"

View File

@@ -49,7 +49,7 @@ This is a COMPETITION to create the **ULTIMATE story context** that makes LLM de
 ### **Required Inputs:**
 - **Story file**: The story file to review and improve
-- **Workflow variables**: From workflow.yaml (story_dir, output_folder, epics_file, etc.)
+- **Workflow variables**: From workflow.yaml (implementation_artifacts, epics_file, etc.)
 - **Source documents**: Epics, architecture, etc. (discovered or provided)
 - **Validation framework**: `validate-workflow.xml` (handles checklist execution)
@@ -65,7 +65,7 @@ You will systematically re-do the entire story creation process, but with a crit
 2. **Load the story file**: `{story_file_path}` (provided by user or discovered)
 3. **Load validation framework**: `{project-root}/_bmad/core/tasks/validate-workflow.xml`
 4. **Extract metadata**: epic_num, story_num, story_key, story_title from story file
-5. **Resolve all workflow variables**: story_dir, output_folder, epics_file, architecture_file, etc.
+5. **Resolve all workflow variables**: implementation_artifacts, epics_file, architecture_file, etc.
 6. **Understand current status**: What story implementation guidance is currently provided?
 **Note:** If running in fresh context, user should provide the story file path being reviewed. If running from create-story workflow, the validation framework will automatically discover the checklist and story file.

View File

@@ -192,7 +192,8 @@
 (As a, I want, so that) - Detailed acceptance criteria (already BDD formatted) - Technical requirements specific to this story -
 Business context and value - Success criteria <!-- Previous story analysis for context continuity -->
 <check if="story_num > 1">
-<action>Load previous story file: {{story_dir}}/{{epic_num}}-{{previous_story_num}}-*.md</action> **PREVIOUS STORY INTELLIGENCE:** -
+<action>Find {{previous_story_num}}: scan {implementation_artifacts} for the story file in epic {{epic_num}} with the highest story number less than {{story_num}}</action>
+<action>Load previous story file: {implementation_artifacts}/{{epic_num}}-{{previous_story_num}}-*.md</action> **PREVIOUS STORY INTELLIGENCE:** -
 Dev notes and learnings from previous story - Review feedback and corrections needed - Files that were created/modified and their
 patterns - Testing approaches that worked/didn't work - Problems encountered and solutions found - Code patterns established <action>Extract
 all learnings that could impact current story implementation</action>

View File

@@ -6,11 +6,11 @@ author: "BMad"
 config_source: "{project-root}/_bmad/bmm/config.yaml"
 user_name: "{config_source}:user_name"
 communication_language: "{config_source}:communication_language"
+document_output_language: "{config_source}:document_output_language"
+user_skill_level: "{config_source}:user_skill_level"
 date: system-generated
 planning_artifacts: "{config_source}:planning_artifacts"
 implementation_artifacts: "{config_source}:implementation_artifacts"
-output_folder: "{implementation_artifacts}"
-story_dir: "{implementation_artifacts}"
 # Workflow components
 installed_path: "{project-root}/_bmad/bmm/workflows/4-implementation/create-story"
@@ -19,18 +19,14 @@ instructions: "{installed_path}/instructions.xml"
 validation: "{installed_path}/checklist.md"
 # Variables and inputs
-variables:
-  sprint_status: "{implementation_artifacts}/sprint-status.yaml" # Primary source for story tracking
-  epics_file: "{planning_artifacts}/epics.md" # Enhanced epics+stories with BDD and source hints
-  prd_file: "{planning_artifacts}/prd.md" # Fallback for requirements (if not in epics file)
-  architecture_file: "{planning_artifacts}/architecture.md" # Fallback for constraints (if not in epics file)
-  ux_file: "{planning_artifacts}/*ux*.md" # Fallback for UX requirements (if not in epics file)
-  story_title: "" # Will be elicited if not derivable
-  # Project context
-  project_context: "**/project-context.md"
-  default_output_file: "{story_dir}/{{story_key}}.md"
+sprint_status: "{implementation_artifacts}/sprint-status.yaml" # Primary source for story tracking
+epics_file: "{planning_artifacts}/epics.md" # Enhanced epics+stories with BDD and source hints
+prd_file: "{planning_artifacts}/prd.md" # Fallback for requirements (if not in epics file)
+architecture_file: "{planning_artifacts}/architecture.md" # Fallback for constraints (if not in epics file)
+ux_file: "{planning_artifacts}/*ux*.md" # Fallback for UX requirements (if not in epics file)
+story_title: "" # Will be elicited if not derivable
+project_context: "**/project-context.md"
+default_output_file: "{implementation_artifacts}/{{story_key}}.md"
 # Smart input file references - Simplified for enhanced approach
 # The epics+stories file should contain everything needed with source hints

View File

@@ -78,7 +78,7 @@
 <!-- Non-sprint story discovery -->
 <check if="{{sprint_status}} file does NOT exist">
-<action>Search {story_dir} for stories directly</action>
+<action>Search {implementation_artifacts} for stories directly</action>
 <action>Find stories with "ready-for-dev" status in files</action>
 <action>Look for story files matching pattern: *-*-*.md</action>
 <action>Read each candidate story file to check Status section</action>
@@ -114,7 +114,7 @@
 </check>
 <action>Store the found story_key (e.g., "1-2-user-authentication") for later status updates</action>
-<action>Find matching story file in {story_dir} using story_key pattern: {{story_key}}.md</action>
+<action>Find matching story file in {implementation_artifacts} using story_key pattern: {{story_key}}.md</action>
 <action>Read COMPLETE story file from discovered path</action>
 <anchor id="task_check" />

View File

@@ -4,12 +4,10 @@ author: "BMad"
 # Critical variables from config
 config_source: "{project-root}/_bmad/bmm/config.yaml"
-output_folder: "{config_source}:output_folder"
 user_name: "{config_source}:user_name"
 communication_language: "{config_source}:communication_language"
 user_skill_level: "{config_source}:user_skill_level"
 document_output_language: "{config_source}:document_output_language"
-story_dir: "{config_source}:implementation_artifacts"
 date: system-generated
 # Workflow components

View File

@@ -81,7 +81,7 @@ Bob (Scrum Master): "I'm having trouble detecting the completed epic from {sprin
 <check if="{{epic_number}} still not determined">
 <action>PRIORITY 3: Fallback to stories folder</action>
-<action>Scan {story_directory} for highest numbered story files</action>
+<action>Scan {implementation_artifacts} for highest numbered story files</action>
 <action>Extract epic numbers from story filenames (pattern: epic-X-Y-story-name.md)</action>
 <action>Set {{detected_epic}} = highest epic number found</action>
@@ -171,7 +171,7 @@ Bob (Scrum Master): "Before we start the team discussion, let me review all the
 Charlie (Senior Dev): "Good idea - those dev notes always have gold in them."
 </output>
-<action>For each story in epic {{epic_number}}, read the complete story file from {story_directory}/{{epic_number}}-{{story_num}}-\*.md</action>
+<action>For each story in epic {{epic_number}}, read the complete story file from {implementation_artifacts}/{{epic_number}}-{{story_num}}-*.md</action>
 <action>Extract and analyze from each story:</action>
@@ -262,14 +262,14 @@ Bob (Scrum Master): "We'll get to all of it. But first, let me load the previous
 <action>Calculate previous epic number: {{prev_epic_num}} = {{epic_number}} - 1</action>
 <check if="{{prev_epic_num}} >= 1">
-<action>Search for previous retrospective using pattern: {retrospectives_folder}/epic-{{prev_epic_num}}-retro-*.md</action>
-<check if="previous retro found">
+<action>Search for previous retrospectives using pattern: {implementation_artifacts}/epic-{{prev_epic_num}}-retro-*.md</action>
+<check if="previous retrospectives found">
 <output>
-Bob (Scrum Master): "I found our retrospective from Epic {{prev_epic_num}}. Let me see what we committed to back then..."
+Bob (Scrum Master): "I found our retrospectives from Epic {{prev_epic_num}}. Let me see what we committed to back then..."
 </output>
-<action>Read the complete previous retrospective file</action>
+<action>Read the previous retrospectives</action>
 <action>Extract key elements:</action>
 - **Action items committed**: What did the team agree to improve?
@@ -366,7 +366,7 @@ Alice (Product Owner): "Good thinking - helps us connect what we learned to what
 <action>Attempt to load next epic using selective loading strategy:</action>
 **Try sharded first (more specific):**
-<action>Check if file exists: {planning_artifacts}/epic\*/epic-{{next_epic_num}}.md</action>
+<action>Check if file exists: {planning_artifacts}/epic*/epic-{{next_epic_num}}.md</action>
 <check if="sharded epic file found">
 <action>Load {planning_artifacts}/*epic*/epic-{{next_epic_num}}.md</action>
@@ -375,7 +375,7 @@ Alice (Product Owner): "Good thinking - helps us connect what we learned to what
 **Fallback to whole document:**
 <check if="sharded epic not found">
-<action>Check if file exists: {planning_artifacts}/epic\*.md</action>
+<action>Check if file exists: {planning_artifacts}/epic*.md</action>
 <check if="whole epic file found">
 <action>Load entire epics document</action>
@@ -1303,7 +1303,7 @@ Bob (Scrum Master): "See you all when prep work is done. Meeting adjourned!"
 <step n="11" goal="Save Retrospective and Update Sprint Status">
-<action>Ensure retrospectives folder exists: {retrospectives_folder}</action>
+<action>Ensure retrospectives folder exists: {implementation_artifacts}</action>
 <action>Create folder if it doesn't exist</action>
 <action>Generate comprehensive retrospective summary document including:</action>
@@ -1323,11 +1323,11 @@ Bob (Scrum Master): "See you all when prep work is done. Meeting adjourned!"
 - Commitments and next steps
 <action>Format retrospective document as readable markdown with clear sections</action>
-<action>Set filename: {retrospectives_folder}/epic-{{epic_number}}-retro-{date}.md</action>
+<action>Set filename: {implementation_artifacts}/epic-{{epic_number}}-retro-{date}.md</action>
 <action>Save retrospective document</action>
 <output>
-✅ Retrospective document saved: {retrospectives_folder}/epic-{{epic_number}}-retro-{date}.md
+✅ Retrospective document saved: {implementation_artifacts}/epic-{{epic_number}}-retro-{date}.md
 </output>
 <action>Update {sprint_status_file} to mark retrospective as completed</action>
@@ -1366,7 +1366,7 @@ Retrospective document was saved successfully, but {sprint_status_file} may need
 - Epic {{epic_number}}: {{epic_title}} reviewed
 - Retrospective Status: completed
-- Retrospective saved: {retrospectives_folder}/epic-{{epic_number}}-retro-{date}.md
+- Retrospective saved: {implementation_artifacts}/epic-{{epic_number}}-retro-{date}.md
 **Commitments Made:**
@@ -1376,7 +1376,7 @@ Retrospective document was saved successfully, but {sprint_status_file} may need
 **Next Steps:**
-1. **Review retrospective summary**: {retrospectives_folder}/epic-{{epic_number}}-retro-{date}.md
+1. **Review retrospective summary**: {implementation_artifacts}/epic-{{epic_number}}-retro-{date}.md
 2. **Execute preparation sprint** (Est: {{prep_days}} days)
 - Complete {{critical_count}} critical path items

View File

@@ -4,7 +4,6 @@ description: "Run after epic completion to review overall success, extract lesso
 author: "BMad"
 config_source: "{project-root}/_bmad/bmm/config.yaml"
-output_folder: "{config_source}:implementation_artifacts}"
 user_name: "{config_source}:user_name"
 communication_language: "{config_source}:communication_language"
 user_skill_level: "{config_source}:user_skill_level"
@@ -52,5 +51,3 @@ input_file_patterns:
 # Required files
 sprint_status_file: "{implementation_artifacts}/sprint-status.yaml"
-story_directory: "{implementation_artifacts}"
-retrospectives_folder: "{implementation_artifacts}"

View File

@@ -9,7 +9,6 @@ communication_language: "{config_source}:communication_language"
 date: system-generated
 implementation_artifacts: "{config_source}:implementation_artifacts"
 planning_artifacts: "{config_source}:planning_artifacts"
-output_folder: "{implementation_artifacts}"
 # Workflow components
 installed_path: "{project-root}/_bmad/bmm/workflows/4-implementation/sprint-planning"
@@ -18,24 +17,21 @@ template: "{installed_path}/sprint-status-template.yaml"
 validation: "{installed_path}/checklist.md"
 # Variables and inputs
-variables:
-  # Project context
-  project_context: "**/project-context.md"
-  # Project identification
-  project_name: "{config_source}:project_name"
+project_context: "**/project-context.md"
+project_name: "{config_source}:project_name"
 # Tracking system configuration
 tracking_system: "file-system" # Options: file-system, Future will support other options from config of mcp such as jira, linear, trello
 project_key: "NOKEY" # Placeholder for tracker integrations; file-system uses a no-op key
-story_location: "{config_source}:implementation_artifacts" # Relative path for file-system, Future will support URL for Jira/Linear/Trello
+story_location: "{implementation_artifacts}" # Relative path for file-system, Future will support URL for Jira/Linear/Trello
-story_location_absolute: "{config_source}:implementation_artifacts" # Absolute path for file operations
+story_location_absolute: "{implementation_artifacts}" # Absolute path for file operations
 # Source files (file-system only)
 epics_location: "{planning_artifacts}" # Directory containing epic*.md files
 epics_pattern: "epic*.md" # Pattern to find epic files
 # Output configuration
 status_file: "{implementation_artifacts}/sprint-status.yaml"
 # Smart input file references - handles both whole docs and sharded docs
 # Priority: Whole document first, then sharded version
@@ -43,8 +39,8 @@ variables:
 input_file_patterns:
   epics:
     description: "All epics with user stories"
-    whole: "{output_folder}/*epic*.md"
-    sharded: "{output_folder}/*epic*/*.md"
+    whole: "{planning_artifacts}/*epic*.md"
+    sharded: "{planning_artifacts}/*epic*/*.md"
     load_strategy: "FULL_LOAD"
 # Output configuration

View File

@@ -5,23 +5,17 @@ author: "BMad"
 # Critical variables from config
 config_source: "{project-root}/_bmad/bmm/config.yaml"
-output_folder: "{config_source}:output_folder"
 user_name: "{config_source}:user_name"
 communication_language: "{config_source}:communication_language"
 document_output_language: "{config_source}:document_output_language"
-date: system-generated
 implementation_artifacts: "{config_source}:implementation_artifacts"
-planning_artifacts: "{config_source}:planning_artifacts"
-project_context: "**/project-context.md"
 # Workflow components
 installed_path: "{project-root}/_bmad/bmm/workflows/4-implementation/sprint-status"
 instructions: "{installed_path}/instructions.md"
 # Inputs
-variables:
-  sprint_status_file: "{implementation_artifacts}/sprint-status.yaml"
-  tracking_system: "file-system"
+sprint_status_file: "{implementation_artifacts}/sprint-status.yaml"
 # Smart input file references
 input_file_patterns:

View File

@@ -28,7 +28,7 @@ This uses **step-file architecture** for focused execution:
 Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve:
 - `user_name`, `communication_language`, `user_skill_level`
-- `output_folder`, `planning_artifacts`, `implementation_artifacts`
+- `planning_artifacts`, `implementation_artifacts`
 - `date` as system-generated current datetime
 - ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}`

View File

@@ -76,7 +76,7 @@ a) **Before asking detailed questions, do a rapid scan to understand the landsca
 b) **Check for existing context docs:**
-- Check `{output_folder}` and `{planning_artifacts}` for planning documents (PRD, architecture, epics, research)
+- Check `{implementation_artifacts}` and `{planning_artifacts}` for planning documents (PRD, architecture, epics, research)
 - Check for `**/project-context.md` - if it exists, skim for patterns and conventions
 - Check for any existing stories or specs related to user's request

View File

@@ -68,7 +68,7 @@ This uses **step-file architecture** for disciplined execution:
 Load and read full config from `{main_config}` and resolve:
-- `project_name`, `output_folder`, `planning_artifacts`, `implementation_artifacts`, `user_name`
+- `project_name`, `planning_artifacts`, `implementation_artifacts`, `user_name`
 - `communication_language`, `document_output_language`, `user_skill_level`
 - `date` as system-generated current datetime
 - `project_context` = `**/project-context.md` (load if exists)

View File

@@ -8,56 +8,8 @@
 <critical>This router determines workflow mode and delegates to specialized sub-workflows</critical>
-<step n="1" goal="Validate workflow and get project info">
-<invoke-workflow path="{project-root}/_bmad/bmm/workflows/workflow-status">
-<param>mode: data</param>
-<param>data_request: project_config</param>
-</invoke-workflow>
-<check if="status_exists == false">
-<output>{{suggestion}}</output>
-<output>Note: Documentation workflow can run standalone. Continuing without progress tracking.</output>
-<action>Set standalone_mode = true</action>
-<action>Set status_file_found = false</action>
-</check>
-<check if="status_exists == true">
-<action>Store {{status_file_path}} for later updates</action>
-<action>Set status_file_found = true</action>
-<!-- Extract brownfield/greenfield from status data -->
-<check if="field_type == 'greenfield'">
-<output>Note: This is a greenfield project. Documentation workflow is typically for brownfield projects.</output>
-<ask>Continue anyway to document planning artifacts? (y/n)</ask>
-<check if="n">
-<action>Exit workflow</action>
-</check>
-</check>
-<!-- Now validate sequencing -->
-<invoke-workflow path="{project-root}/_bmad/bmm/workflows/workflow-status">
-<param>mode: validate</param>
-<param>calling_workflow: document-project</param>
-</invoke-workflow>
-<check if="warning != ''">
-<output>{{warning}}</output>
-<output>Note: This may be auto-invoked by prd for brownfield documentation.</output>
-<ask>Continue with documentation? (y/n)</ask>
-<check if="n">
-<output>{{suggestion}}</output>
-<action>Exit workflow</action>
-</check>
-</check>
-</check>
-</step>
-<step n="2" goal="Check for resumability and determine workflow mode">
-<critical>SMART LOADING STRATEGY: Check state file FIRST before loading any CSV files</critical>
-<action>Check for existing state file at: {output_folder}/project-scan-report.json</action>
+<step n="1" goal="Check for ability to resume and determine workflow mode">
+<action>Check for existing state file at: {project_knowledge}/project-scan-report.json</action>
 <check if="project-scan-report.json exists">
 <action>Read state file and extract: timestamps, mode, scan_level, current_step, completed_steps, project_classification</action>
@@ -66,21 +18,21 @@
 <ask>I found an in-progress workflow state from {{last_updated}}.
 **Current Progress:**
 - Mode: {{mode}}
 - Scan Level: {{scan_level}}
 - Completed Steps: {{completed_steps_count}}/{{total_steps}}
 - Last Step: {{current_step}}
 - Project Type(s): {{cached_project_types}}
 Would you like to:
 1. **Resume from where we left off** - Continue from step {{current_step}}
 2. **Start fresh** - Archive old state and begin new scan
 3. **Cancel** - Exit without changes
 Your choice [1/2/3]:
 </ask>
 <check if="user selects 1">
@@ -107,8 +59,8 @@
 </check>
 <check if="user selects 2">
-<action>Create archive directory: {output_folder}/.archive/</action>
-<action>Move old state file to: {output_folder}/.archive/project-scan-report-{{timestamp}}.json</action>
+<action>Create archive directory: {project_knowledge}/.archive/</action>
+<action>Move old state file to: {project_knowledge}/.archive/project-scan-report-{{timestamp}}.json</action>
 <action>Set resume_mode = false</action>
 <action>Continue to Step 0.5</action>
 </check>
@@ -120,7 +72,7 @@
 <check if="state file age >= 24 hours">
 <action>Display: "Found old state file (>24 hours). Starting fresh scan."</action>
-<action>Archive old state file to: {output_folder}/.archive/project-scan-report-{{timestamp}}.json</action>
+<action>Archive old state file to: {project_knowledge}/.archive/project-scan-report-{{timestamp}}.json</action>
 <action>Set resume_mode = false</action>
 <action>Continue to Step 0.5</action>
 </check>
@@ -128,7 +80,7 @@
 </step>
 <step n="3" goal="Check for existing documentation and determine workflow mode" if="resume_mode == false">
-<action>Check if {output_folder}/index.md exists</action>
+<action>Check if {project_knowledge}/index.md exists</action>
 <check if="index.md exists">
 <action>Read existing index.md to extract metadata (date, project structure, parts count)</action>
@@ -175,47 +127,4 @@
 </step>
-<step n="4" goal="Update status and complete">
-<check if="status_file_found == true">
-<invoke-workflow path="{project-root}/_bmad/bmm/workflows/workflow-status">
-<param>mode: update</param>
-<param>action: complete_workflow</param>
-<param>workflow_name: document-project</param>
-</invoke-workflow>
-<check if="success == true">
-<output>Status updated!</output>
-</check>
-</check>
-<output>**✅ Document Project Workflow Complete, {user_name}!**
-**Documentation Generated:**
-- Mode: {{workflow_mode}}
-- Scan Level: {{scan_level}}
-- Output: {output_folder}/index.md and related files
-{{#if status_file_found}}
-**Status Updated:**
-- Progress tracking updated
-**Next Steps:**
-- **Next required:** {{next_workflow}} ({{next_agent}} agent)
-Check status anytime with: `workflow-status`
-{{else}}
-**Next Steps:**
-Since no workflow is in progress:
-- Refer to the BMM workflow guide if unsure what to do next
-- Or run `workflow-init` to create a workflow path and get guided next steps
-{{/if}}
-</output>
-</step>
 </workflow>

View File

@@ -45,9 +45,9 @@
 "type": "string",
 "description": "Absolute path to project root directory"
 },
-"output_folder": {
+"project_knowledge": {
 "type": "string",
-"description": "Absolute path to output folder"
+"description": "Absolute path to project knowledge folder"
 },
 "completed_steps": {
 "type": "array",

View File

@@ -6,7 +6,7 @@ author: "BMad"
 # Critical variables
 config_source: "{project-root}/_bmad/bmm/config.yaml"
-output_folder: "{config_source}:project_knowledge"
+project_knowledge: "{config_source}:project_knowledge"
 user_name: "{config_source}:user_name"
 communication_language: "{config_source}:communication_language"
 document_output_language: "{config_source}:document_output_language"

View File

@@ -194,7 +194,7 @@ This will read EVERY file in this area. Proceed? [y/n]
 <action>Load complete deep-dive template from: {installed_path}/templates/deep-dive-template.md</action>
 <action>Fill template with all collected data from steps 13b-13d</action>
-<action>Write filled template to: {output_folder}/deep-dive-{{sanitized_target_name}}.md</action>
+<action>Write filled template to: {project_knowledge}/deep-dive-{{sanitized_target_name}}.md</action>
 <action>Validate deep-dive document completeness</action>
 <template-output>deep_dive_documentation</template-output>
@@ -241,7 +241,7 @@ Detailed exhaustive analysis of specific areas:
 ## Deep-Dive Documentation Complete! ✓
-**Generated:** {output_folder}/deep-dive-{{target_name}}.md
+**Generated:** {project_knowledge}/deep-dive-{{target_name}}.md
 **Files Analyzed:** {{file_count}}
 **Lines of Code Scanned:** {{total_loc}}
 **Time Taken:** ~{{duration}}
@@ -255,7 +255,7 @@ Detailed exhaustive analysis of specific areas:
 - Related code and reuse opportunities
 - Implementation guidance
-**Index Updated:** {output_folder}/index.md now includes link to this deep-dive
+**Index Updated:** {project_knowledge}/index.md now includes link to this deep-dive
 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 </action>
@@ -278,7 +278,7 @@ Your choice [1/2]:
 All deep-dive documentation complete!
-**Master Index:** {output_folder}/index.md
+**Master Index:** {project_knowledge}/index.md
 **Deep-Dives Generated:** {{deep_dive_count}}
 These comprehensive docs are now ready for:

View File

@@ -8,7 +8,7 @@ parent_workflow: "{project-root}/_bmad/bmm/workflows/document-project/workflow.y
 # Critical variables inherited from parent
 config_source: "{project-root}/_bmad/bmb/config.yaml"
-output_folder: "{config_source}:output_folder"
+project_knowledge: "{config_source}:project_knowledge"
 user_name: "{config_source}:user_name"
 date: system-generated

View File

@@ -43,7 +43,7 @@ This workflow uses a single comprehensive CSV file to intelligently document you
 </step>
 <step n="0.6" goal="Check for existing documentation and determine workflow mode">
-<action>Check if {output_folder}/index.md exists</action>
+<action>Check if {project_knowledge}/index.md exists</action>
 <check if="index.md exists">
 <action>Read existing index.md to extract metadata (date, project structure, parts count)</action>
@@ -127,7 +127,7 @@ Your choice [1/2/3] (default: 1):
 <action>Display: "Using Exhaustive Scan (reading all source files)"</action>
 </action>
-<action>Initialize state file: {output_folder}/project-scan-report.json</action>
+<action>Initialize state file: {project_knowledge}/project-scan-report.json</action>
 <critical>Every time you touch the state file, record: step id, human-readable summary (what you actually did), precise timestamp, and any outputs written. Vague phrases are unacceptable.</critical>
 <action>Write initial state:
 {
@@ -136,7 +136,7 @@ Your choice [1/2/3] (default: 1):
 "mode": "{{workflow_mode}}",
 "scan_level": "{{scan_level}}",
 "project_root": "{{project_root_path}}",
-"output_folder": "{{output_folder}}",
+"project_knowledge": "{{project_knowledge}}",
 "completed_steps": [],
 "current_step": "step_1",
 "findings": {},
@@ -325,7 +325,7 @@ findings.batches_completed: [
 </check>
 <action>Build API contracts catalog</action>
-<action>IMMEDIATELY write to: {output_folder}/api-contracts-{part_id}.md</action>
+<action>IMMEDIATELY write to: {project_knowledge}/api-contracts-{part_id}.md</action>
 <action>Validate document has all required sections</action>
 <action>Update state file with output generated</action>
 <action>PURGE detailed API data, keep only: "{{api_count}} endpoints documented"</action>
@@ -346,7 +346,7 @@ findings.batches_completed: [
 </check>
 <action>Build database schema documentation</action>
-<action>IMMEDIATELY write to: {output_folder}/data-models-{part_id}.md</action>
+<action>IMMEDIATELY write to: {project_knowledge}/data-models-{part_id}.md</action>
 <action>Validate document completeness</action>
 <action>Update state file with output generated</action>
 <action>PURGE detailed schema data, keep only: "{{table_count}} tables documented"</action>
@@ -805,7 +805,7 @@ When a document SHOULD be generated but wasn't (due to quick scan, missing data,
 <step n="11" goal="Validate and review generated documentation" if="workflow_mode != deep_dive">
 <action>Show summary of all generated files:
-Generated in {{output_folder}}/:
+Generated in {{project_knowledge}}/:
 {{file_list_with_sizes}}
 </action>
@@ -823,7 +823,7 @@ Generated in {{output_folder}}/:
 3. Extract document metadata from each match for user selection
 </critical>
-<action>Read {output_folder}/index.md</action>
+<action>Read {project_knowledge}/index.md</action>
 <action>Scan for incomplete documentation markers:
 Step 1: Search for exact pattern "_(To be generated)_" (case-sensitive)
@@ -1065,9 +1065,9 @@ Enter number(s) separated by commas (e.g., "1,3,5"), or type 'all':
 ## Project Documentation Complete! ✓
-**Location:** {{output_folder}}/
-**Master Index:** {{output_folder}}/index.md
+**Location:** {{project_knowledge}}/
+**Master Index:** {{project_knowledge}}/index.md
 👆 This is your primary entry point for AI-assisted development
 **Generated Documentation:**
@@ -1076,9 +1076,9 @@ Enter number(s) separated by commas (e.g., "1,3,5"), or type 'all':
 **Next Steps:**
 1. Review the index.md to familiarize yourself with the documentation structure
-2. When creating a brownfield PRD, point the PRD workflow to: {{output_folder}}/index.md
-3. For UI-only features: Reference {{output_folder}}/architecture-{{ui_part_id}}.md
-4. For API-only features: Reference {{output_folder}}/architecture-{{api_part_id}}.md
+2. When creating a brownfield PRD, point the PRD workflow to: {{project_knowledge}}/index.md
+3. For UI-only features: Reference {{project_knowledge}}/architecture-{{ui_part_id}}.md
+4. For API-only features: Reference {{project_knowledge}}/architecture-{{api_part_id}}.md
 5. For full-stack features: Reference both part architectures + integration-architecture.md
 **Verification Recap:**
@@ -1101,6 +1101,6 @@ When ready to plan new features, run the PRD workflow and provide this index as
 - Write final state file
 </action>
-<action>Display: "State file saved: {{output_folder}}/project-scan-report.json"</action>
+<action>Display: "State file saved: {{project_knowledge}}/project-scan-report.json"</action>
 </workflow>

View File

@@ -8,7 +8,7 @@ parent_workflow: "{project-root}/_bmad/bmm/workflows/document-project/workflow.y
 # Critical variables inherited from parent
 config_source: "{project-root}/_bmad/bmb/config.yaml"
-output_folder: "{config_source}:output_folder"
+project_knowledge: "{config_source}:project_knowledge"
 user_name: "{config_source}:user_name"
 date: system-generated

View File

@@ -5,7 +5,6 @@ author: "BMad"
 # Critical variables from config
 config_source: "{project-root}/_bmad/bmm/config.yaml"
-output_folder: "{config_source}:output_folder"
 implementation_artifacts: "{config_source}:implementation_artifacts"
 user_name: "{config_source}:user_name"
 communication_language: "{config_source}:communication_language"
@@ -19,10 +18,8 @@ validation: "{installed_path}/checklist.md"
 template: false
 # Variables and inputs
-variables:
-  # Directory paths
-  test_dir: "{project-root}/tests" # Root test directory
-  source_dir: "{project-root}" # Source code directory
+test_dir: "{project-root}/tests" # Root test directory
+source_dir: "{project-root}" # Source code directory
 # Output configuration
 default_output_file: "{implementation_artifacts}/tests/test-summary.md"

View File

@@ -15,7 +15,7 @@ agent:
 identity: "Master-level expert in the BMAD Core Platform and all loaded modules with comprehensive knowledge of all resources, tasks, and workflows. Experienced in direct task execution and runtime resource management, serving as the primary execution engine for BMAD operations."
 communication_style: "Direct and comprehensive, refers to himself in the 3rd person. Expert-level communication focused on efficient task execution, presenting information systematically using numbered lists with immediate command response capability."
 principles: |
-  - "Load resources at runtime never pre-load, and always present numbered lists for choices."
+  - Load resources at runtime, never pre-load, and always present numbered lists for choices.
 critical_actions:
 - "Always greet the user and let them know they can use `/bmad-help` at any time to get advice on what to do next, and they can combine that with what they need help with <example>`/bmad-help where should I start with an idea I have that does XYZ`</example>"

View File

@@ -5,3 +5,56 @@
 For external official modules to be discoverable during install, ensure an entry for the external repo is added to external-official-modules.yaml.
 For community modules - this will be handled in a different way. This file is only for registration of modules under the bmad-code-org.
## Post-Install Notes
Modules can display setup guidance to users after configuration is collected during `npx bmad-method install`. Notes are defined in the module's own `module.yaml` — no changes to the installer are needed.
### Simple Format
Always displayed after the module is configured:
```yaml
post-install-notes: |
Thank you for choosing the XYZ Cool Module
For Support about this Module call 555-1212
```
### Conditional Format
Display different messages based on a config question's answer:
```yaml
post-install-notes:
config_key_name:
value1: |
Instructions for value1...
value2: |
Instructions for value2...
```
Values without an entry (e.g., `none`) display nothing. Multiple config keys can each have their own conditional notes.
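As a hypothetical illustration (the module and key names below are invented, not from a real module), two config keys can each carry their own conditional notes:

```yaml
post-install-notes:
  database_backend:
    postgres: |
      Install the PostgreSQL client before first run.
    sqlite: |
      No extra setup needed for SQLite.
  enable_telemetry:
    "yes": |
      Telemetry can be disabled later in config.yaml.
```

A user who answers `postgres` and `no` would see only the PostgreSQL note, since `no` has no entry under `enable_telemetry`.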
### Example: TEA Module
The TEA module uses the conditional format keyed on `tea_browser_automation`:
```yaml
post-install-notes:
tea_browser_automation:
cli: |
Playwright CLI Setup:
npm install -g @playwright/cli@latest
playwright-cli install --skills
mcp: |
Playwright MCP Setup (two servers):
1. playwright — npx @playwright/mcp@latest
2. playwright-test — npx playwright run-test-mcp-server
auto: |
Playwright CLI Setup:
...
Playwright MCP Setup (two servers):
...
```
When a user selects `auto`, they see both CLI and MCP instructions. When they select `none`, nothing is shown.
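The lookup the installer performs for these notes can be sketched as follows — a minimal illustration under stated assumptions, not the actual installer code (`resolvePostInstallNotes` is an invented name; the real logic lives in the installer's config collector):

```javascript
// Hypothetical sketch of the post-install-notes lookup (function name invented).
// `notes` is the parsed `post-install-notes` value from module.yaml;
// `config` holds the user's collected answers for this module.
function resolvePostInstallNotes(notes, config) {
  // Simple format: a plain string is always displayed.
  if (typeof notes === 'string') return [notes.trim()];
  const messages = [];
  for (const [configKey, valueMessages] of Object.entries(notes)) {
    const selected = config[configKey];
    // Values without an entry (e.g. `none`) display nothing.
    if (!selected || !valueMessages[selected]) continue;
    messages.push(valueMessages[selected].trim());
  }
  return messages;
}
```

With the TEA example above, selecting `auto` resolves the `auto` entry (which bundles both CLI and MCP text), while `none` resolves nothing.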

View File

@@ -0,0 +1,167 @@
const path = require('node:path');
const fs = require('fs-extra');
const prompts = require('../lib/prompts');
const { Installer } = require('../installers/lib/core/installer');
const installer = new Installer();
module.exports = {
command: 'uninstall',
description: 'Remove BMAD installation from the current project',
options: [
['-y, --yes', 'Remove all BMAD components without prompting (preserves user artifacts)'],
['--directory <path>', 'Project directory (default: current directory)'],
],
action: async (options) => {
try {
let projectDir;
if (options.directory) {
// Explicit --directory flag takes precedence
projectDir = path.resolve(options.directory);
} else if (options.yes) {
// Non-interactive mode: use current directory
projectDir = process.cwd();
} else {
// Interactive: ask user which directory to uninstall from
// select() handles cancellation internally (exits process)
const dirChoice = await prompts.select({
message: 'Where do you want to uninstall BMAD from?',
choices: [
{ value: 'cwd', name: `Current directory (${process.cwd()})` },
{ value: 'other', name: 'Another directory...' },
],
});
if (dirChoice === 'other') {
// text() handles cancellation internally (exits process)
const customDir = await prompts.text({
message: 'Enter the project directory path:',
placeholder: process.cwd(),
validate: (value) => {
if (!value || value.trim().length === 0) return 'Directory path is required';
},
});
projectDir = path.resolve(customDir.trim());
} else {
projectDir = process.cwd();
}
}
if (!(await fs.pathExists(projectDir))) {
await prompts.log.error(`Directory does not exist: ${projectDir}`);
process.exit(1);
}
const { bmadDir } = await installer.findBmadDir(projectDir);
if (!(await fs.pathExists(bmadDir))) {
await prompts.log.warn('No BMAD installation found.');
process.exit(0);
}
const existingInstall = await installer.getStatus(projectDir);
const version = existingInstall.version || 'unknown';
const modules = (existingInstall.modules || []).map((m) => m.id || m.name).join(', ');
const ides = (existingInstall.ides || []).join(', ');
const outputFolder = await installer.getOutputFolder(projectDir);
await prompts.intro('BMAD Uninstall');
await prompts.note(`Version: ${version}\nModules: ${modules}\nIDE integrations: ${ides}`, 'Current Installation');
let removeModules = true;
let removeIdeConfigs = true;
let removeOutputFolder = false;
if (!options.yes) {
// multiselect() handles cancellation internally (exits process)
const selected = await prompts.multiselect({
message: 'Select components to remove:',
options: [
{
value: 'modules',
label: `BMAD Modules & data (${installer.bmadFolderName}/)`,
hint: 'Core installation, agents, workflows, config',
},
{ value: 'ide', label: 'IDE integrations', hint: ides || 'No IDEs configured' },
{ value: 'output', label: `User artifacts (${outputFolder}/)`, hint: 'WARNING: Contains your work products' },
],
initialValues: ['modules', 'ide'],
required: true,
});
removeModules = selected.includes('modules');
removeIdeConfigs = selected.includes('ide');
removeOutputFolder = selected.includes('output');
const red = (s) => `\u001B[31m${s}\u001B[0m`;
await prompts.note(
red('💀 This action is IRREVERSIBLE! Removed files cannot be recovered!') +
'\n' +
red('💀 IDE configurations and modules will need to be reinstalled.') +
'\n' +
red('💀 User artifacts are preserved unless explicitly selected.'),
'!! DESTRUCTIVE ACTION !!',
);
const confirmed = await prompts.confirm({
message: 'Proceed with uninstall?',
default: false,
});
if (!confirmed) {
await prompts.outro('Uninstall cancelled.');
process.exit(0);
}
}
// Phase 1: IDE integrations
if (removeIdeConfigs) {
const s = await prompts.spinner();
s.start('Removing IDE integrations...');
await installer.uninstallIdeConfigs(projectDir, existingInstall, { silent: true });
s.stop(`Removed IDE integrations (${ides || 'none'})`);
}
// Phase 2: User artifacts
if (removeOutputFolder) {
const s = await prompts.spinner();
s.start(`Removing user artifacts (${outputFolder}/)...`);
await installer.uninstallOutputFolder(projectDir, outputFolder);
s.stop('User artifacts removed');
}
// Phase 3: BMAD modules & data (last — other phases may need _bmad/)
if (removeModules) {
const s = await prompts.spinner();
s.start(`Removing BMAD modules & data (${installer.bmadFolderName}/)...`);
await installer.uninstallModules(projectDir);
s.stop('Modules & data removed');
}
const summary = [];
if (removeIdeConfigs) summary.push('IDE integrations cleaned');
if (removeModules) summary.push('Modules & data removed');
if (removeOutputFolder) summary.push('User artifacts removed');
if (!removeOutputFolder) summary.push(`User artifacts preserved in ${outputFolder}/`);
await prompts.note(summary.join('\n'), 'Summary');
await prompts.outro('To reinstall, run: npx bmad-method install');
process.exit(0);
} catch (error) {
try {
const errorMessage = error instanceof Error ? error.message : String(error);
await prompts.log.error(`Uninstall failed: ${errorMessage}`);
if (error instanceof Error && error.stack) {
await prompts.log.message(error.stack);
}
} catch {
console.error(error instanceof Error ? error.message : error);
}
process.exit(1);
}
},
};

View File

@@ -34,7 +34,7 @@ startMessage: |
 - Subscribe on YouTube: https://www.youtube.com/@BMadCode
 - Every star & sub helps us reach more developers!
-Latest updates: https://github.com/bmad-code-org/BMAD-METHOD/CHANGELOG.md
+Latest updates: https://github.com/bmad-code-org/BMAD-METHOD/blob/main/CHANGELOG.md
 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

View File

@@ -302,23 +302,30 @@ class ConfigCollector {
 const configSpinner = await prompts.spinner();
 configSpinner.start('Configuring modules...');
-for (const moduleName of defaultModules) {
-  const displayName = displayNameMap.get(moduleName) || moduleName.toUpperCase();
-  configSpinner.message(`Configuring ${displayName}...`);
-  try {
-    this._silentConfig = true;
-    await this.collectModuleConfig(moduleName, projectDir);
-  } finally {
-    this._silentConfig = false;
+try {
+  for (const moduleName of defaultModules) {
+    const displayName = displayNameMap.get(moduleName) || moduleName.toUpperCase();
+    configSpinner.message(`Configuring ${displayName}...`);
+    try {
+      this._silentConfig = true;
+      await this.collectModuleConfig(moduleName, projectDir);
+    } finally {
+      this._silentConfig = false;
+    }
   }
+} finally {
+  configSpinner.stop(customizeModules.length > 0 ? 'Module defaults applied' : 'Module configuration complete');
 }
-configSpinner.stop('Module configuration complete');
 }

 // Run customized modules individually (may show interactive prompts)
 for (const moduleName of customizeModules) {
   await this.collectModuleConfig(moduleName, projectDir);
 }
+if (customizeModules.length > 0) {
+  await prompts.log.step('Module configuration complete');
+}
 }
// Add metadata // Add metadata
@@ -550,6 +557,8 @@ class ConfigCollector {
 }
 }
+await this.displayModulePostConfigNotes(moduleName, moduleConfig);
 return newKeys.length > 0 || newStaticKeys.length > 0; // Return true if we had any new fields (interactive or static)
 }
@@ -923,6 +932,8 @@ class ConfigCollector {
 }
 }
 }
+await this.displayModulePostConfigNotes(moduleName, moduleConfig);
 }

 /**
@@ -1195,6 +1206,58 @@ class ConfigCollector {
 return question;
 }
/**
* Display post-configuration notes for a module
* Shows prerequisite guidance based on collected config values
* Reads notes from the module's `post-install-notes` section in module.yaml
* Supports two formats:
* - Simple string: always displayed
* - Object keyed by config field name, with value-specific messages
* @param {string} moduleName - Module name
* @param {Object} moduleConfig - Parsed module.yaml content
*/
async displayModulePostConfigNotes(moduleName, moduleConfig) {
if (this._silentConfig) return;
if (!moduleConfig || !moduleConfig['post-install-notes']) return;
const notes = moduleConfig['post-install-notes'];
const color = await prompts.getColor();
// Format 1: Simple string - always display
if (typeof notes === 'string') {
await prompts.log.message('');
for (const line of notes.trim().split('\n')) {
await prompts.log.message(color.dim(line));
}
return;
}
// Format 2: Conditional on config values
if (typeof notes === 'object') {
const config = this.collectedConfig[moduleName];
if (!config) return;
let hasOutput = false;
for (const [configKey, valueMessages] of Object.entries(notes)) {
const selectedValue = config[configKey];
if (!selectedValue || !valueMessages[selectedValue]) continue;
if (hasOutput) await prompts.log.message('');
hasOutput = true;
const message = valueMessages[selectedValue];
for (const line of message.trim().split('\n')) {
const trimmedLine = line.trim();
if (trimmedLine.endsWith(':') && !trimmedLine.startsWith(' ')) {
await prompts.log.info(color.bold(trimmedLine));
} else {
await prompts.log.message(color.dim(' ' + trimmedLine));
}
}
}
}
}
/**
 * Deep merge two objects
 * @param {Object} target - Target object

View File

@@ -527,28 +527,30 @@ class Installer {
 const cachedModules = await fs.readdir(cacheDir, { withFileTypes: true });
 for (const cachedModule of cachedModules) {
-  if (cachedModule.isDirectory()) {
-    const moduleId = cachedModule.name;
-    // Skip if we already have this module from manifest
-    if (customModulePaths.has(moduleId)) {
-      continue;
-    }
-    // Check if this is an external official module - skip cache for those
-    const isExternal = await this.moduleManager.isExternalModule(moduleId);
-    if (isExternal) {
-      // External modules are handled via cloneExternalModule, not from cache
-      continue;
-    }
-    const cachedPath = path.join(cacheDir, moduleId);
+  const moduleId = cachedModule.name;
+  const cachedPath = path.join(cacheDir, moduleId);
+  // Skip if path doesn't exist (broken symlink, deleted dir) - avoids lstat ENOENT
+  if (!(await fs.pathExists(cachedPath)) || !cachedModule.isDirectory()) {
+    continue;
+  }
+  // Skip if we already have this module from manifest
+  if (customModulePaths.has(moduleId)) {
+    continue;
+  }
+  // Check if this is an external official module - skip cache for those
+  const isExternal = await this.moduleManager.isExternalModule(moduleId);
+  if (isExternal) {
+    // External modules are handled via cloneExternalModule, not from cache
+    continue;
+  }
   // Check if this is actually a custom module (has module.yaml)
   const moduleYamlPath = path.join(cachedPath, 'module.yaml');
   if (await fs.pathExists(moduleYamlPath)) {
     customModulePaths.set(moduleId, cachedPath);
   }
-  }
 }
@@ -609,28 +611,30 @@ class Installer {
 const cachedModules = await fs.readdir(cacheDir, { withFileTypes: true });
 for (const cachedModule of cachedModules) {
-  if (cachedModule.isDirectory()) {
-    const moduleId = cachedModule.name;
-    // Skip if we already have this module from manifest
-    if (customModulePaths.has(moduleId)) {
-      continue;
-    }
-    // Check if this is an external official module - skip cache for those
-    const isExternal = await this.moduleManager.isExternalModule(moduleId);
-    if (isExternal) {
-      // External modules are handled via cloneExternalModule, not from cache
-      continue;
-    }
-    const cachedPath = path.join(cacheDir, moduleId);
+  const moduleId = cachedModule.name;
+  const cachedPath = path.join(cacheDir, moduleId);
+  // Skip if path doesn't exist (broken symlink, deleted dir) - avoids lstat ENOENT
+  if (!(await fs.pathExists(cachedPath)) || !cachedModule.isDirectory()) {
+    continue;
+  }
+  // Skip if we already have this module from manifest
+  if (customModulePaths.has(moduleId)) {
+    continue;
+  }
+  // Check if this is an external official module - skip cache for those
+  const isExternal = await this.moduleManager.isExternalModule(moduleId);
+  if (isExternal) {
+    // External modules are handled via cloneExternalModule, not from cache
+    continue;
+  }
   // Check if this is actually a custom module (has module.yaml)
   const moduleYamlPath = path.join(cachedPath, 'module.yaml');
   if (await fs.pathExists(moduleYamlPath)) {
     customModulePaths.set(moduleId, cachedPath);
   }
-  }
 }
@@ -949,12 +953,11 @@ class Installer {
 if (!isCustomModule && config._customModuleSources && config._customModuleSources.has(moduleName)) {
   customInfo = config._customModuleSources.get(moduleName);
   isCustomModule = true;
-  if (
-    customInfo.sourcePath &&
-    (customInfo.sourcePath.startsWith('_config') || customInfo.sourcePath.includes('_config/custom')) &&
-    !customInfo.path
-  )
-    customInfo.path = customInfo.sourcePath;
+  if (customInfo.sourcePath && !customInfo.path) {
+    customInfo.path = path.isAbsolute(customInfo.sourcePath)
+      ? customInfo.sourcePath
+      : path.join(bmadDir, customInfo.sourcePath);
+  }
 }

 // Finally check regular custom content
@@ -1528,20 +1531,157 @@ class Installer {
 }
 /**
- * Uninstall BMAD
+ * Uninstall BMAD with selective removal options
* @param {string} directory - Project directory
* @param {Object} options - Uninstall options
* @param {boolean} [options.removeModules=true] - Remove _bmad/ directory
* @param {boolean} [options.removeIdeConfigs=true] - Remove IDE configurations
* @param {boolean} [options.removeOutputFolder=false] - Remove user artifacts output folder
* @returns {Object} Result with success status and removed components
*/ */
-async uninstall(directory) {
+async uninstall(directory, options = {}) {
 const projectDir = path.resolve(directory);
 const { bmadDir } = await this.findBmadDir(projectDir);
-if (await fs.pathExists(bmadDir)) {
-  await fs.remove(bmadDir);
-}
-// Clean up IDE configurations
-await this.ideManager.cleanup(projectDir);
-return { success: true };
+if (!(await fs.pathExists(bmadDir))) {
+  return { success: false, reason: 'not-installed' };
+}
+// 1. DETECT: Read state BEFORE deleting anything
+const existingInstall = await this.detector.detect(bmadDir);
+const outputFolder = await this._readOutputFolder(bmadDir);
+const removed = { modules: false, ideConfigs: false, outputFolder: false };
// 2. IDE CLEANUP (before _bmad/ deletion so configs are accessible)
if (options.removeIdeConfigs !== false) {
await this.uninstallIdeConfigs(projectDir, existingInstall, { silent: options.silent });
removed.ideConfigs = true;
}
// 3. OUTPUT FOLDER (only if explicitly requested)
if (options.removeOutputFolder === true && outputFolder) {
removed.outputFolder = await this.uninstallOutputFolder(projectDir, outputFolder);
}
// 4. BMAD DIRECTORY (last, after everything that needs it)
if (options.removeModules !== false) {
removed.modules = await this.uninstallModules(projectDir);
}
return { success: true, removed, version: existingInstall.version };
}
/**
* Uninstall IDE configurations only
* @param {string} projectDir - Project directory
* @param {Object} existingInstall - Detection result from detector.detect()
* @param {Object} [options] - Options (e.g. { silent: true })
* @returns {Promise<Object>} Results from IDE cleanup
*/
async uninstallIdeConfigs(projectDir, existingInstall, options = {}) {
await this.ideManager.ensureInitialized();
const cleanupOptions = { isUninstall: true, silent: options.silent };
const ideList = existingInstall.ides || [];
if (ideList.length > 0) {
return this.ideManager.cleanupByList(projectDir, ideList, cleanupOptions);
}
return this.ideManager.cleanup(projectDir, cleanupOptions);
}
/**
* Remove user artifacts output folder
* @param {string} projectDir - Project directory
* @param {string} outputFolder - Output folder name (relative)
* @returns {Promise<boolean>} Whether the folder was removed
*/
async uninstallOutputFolder(projectDir, outputFolder) {
if (!outputFolder) return false;
const resolvedProject = path.resolve(projectDir);
const outputPath = path.resolve(resolvedProject, outputFolder);
if (!outputPath.startsWith(resolvedProject + path.sep)) {
return false;
}
if (await fs.pathExists(outputPath)) {
await fs.remove(outputPath);
return true;
}
return false;
}
/**
* Remove the _bmad/ directory
* @param {string} projectDir - Project directory
* @returns {Promise<boolean>} Whether the directory was removed
*/
async uninstallModules(projectDir) {
const { bmadDir } = await this.findBmadDir(projectDir);
if (await fs.pathExists(bmadDir)) {
await fs.remove(bmadDir);
return true;
}
return false;
}
/**
* Get the configured output folder name for a project
* Resolves bmadDir internally from projectDir
* @param {string} projectDir - Project directory
* @returns {string} Output folder name (relative, default: '_bmad-output')
*/
async getOutputFolder(projectDir) {
const { bmadDir } = await this.findBmadDir(projectDir);
return this._readOutputFolder(bmadDir);
}
/**
* Read the output_folder setting from module config files
* Checks bmm/config.yaml first, then other module configs
* @param {string} bmadDir - BMAD installation directory
* @returns {string} Output folder path or default
*/
async _readOutputFolder(bmadDir) {
const yaml = require('yaml');
// Check bmm/config.yaml first (most common)
const bmmConfigPath = path.join(bmadDir, 'bmm', 'config.yaml');
if (await fs.pathExists(bmmConfigPath)) {
try {
const content = await fs.readFile(bmmConfigPath, 'utf8');
const config = yaml.parse(content);
if (config && config.output_folder) {
// Strip {project-root}/ prefix if present
return config.output_folder.replace(/^\{project-root\}[/\\]/, '');
}
} catch {
// Fall through to other modules
}
}
// Scan other module config.yaml files
try {
const entries = await fs.readdir(bmadDir, { withFileTypes: true });
for (const entry of entries) {
if (!entry.isDirectory() || entry.name === 'bmm' || entry.name.startsWith('_')) continue;
const configPath = path.join(bmadDir, entry.name, 'config.yaml');
if (await fs.pathExists(configPath)) {
try {
const content = await fs.readFile(configPath, 'utf8');
const config = yaml.parse(content);
if (config && config.output_folder) {
return config.output_folder.replace(/^\{project-root\}[/\\]/, '');
}
} catch {
// Continue scanning
}
}
}
} catch {
// Directory scan failed
}
// Default fallback
return '_bmad-output';
}
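The prefix stripping `_readOutputFolder` applies to configured paths is a single regex. A sketch with a hypothetical helper name, showing the forward-slash and backslash variants it accepts:

```javascript
// Hypothetical standalone version of the replace() call in _readOutputFolder:
// drop a leading "{project-root}/" (or "{project-root}\") from a configured
// output_folder value, leaving plain relative paths untouched.
function stripProjectRootPrefix(outputFolder) {
  return outputFolder.replace(/^\{project-root\}[/\\]/, '');
}
```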
/**
@ -2236,41 +2376,58 @@ class Installer {
const configuredIdes = existingInstall.ides || [];
const projectRoot = path.dirname(bmadDir);
// Get custom module sources: first from --custom-content (re-cache from source), then from cache
const customModuleSources = new Map();
if (config.customContent?.sources?.length > 0) {
for (const source of config.customContent.sources) {
if (source.id && source.path && (await fs.pathExists(source.path))) {
customModuleSources.set(source.id, {
id: source.id,
name: source.name || source.id,
sourcePath: source.path,
cached: false, // From CLI, will be re-cached
});
}
}
}
const cacheDir = path.join(bmadDir, '_config', 'custom');
if (await fs.pathExists(cacheDir)) {
const cachedModules = await fs.readdir(cacheDir, { withFileTypes: true });
for (const cachedModule of cachedModules) {
const moduleId = cachedModule.name;
const cachedPath = path.join(cacheDir, moduleId);
// Skip if path doesn't exist (broken symlink, deleted dir) - avoids lstat ENOENT
if (!(await fs.pathExists(cachedPath))) {
continue;
}
if (!cachedModule.isDirectory()) {
continue;
}
// Skip if we already have this module from manifest
if (customModuleSources.has(moduleId)) {
continue;
}
// Check if this is an external official module - skip cache for those
const isExternal = await this.moduleManager.isExternalModule(moduleId);
if (isExternal) {
// External modules are handled via cloneExternalModule, not from cache
continue;
}
// Check if this is actually a custom module (has module.yaml)
const moduleYamlPath = path.join(cachedPath, 'module.yaml');
if (await fs.pathExists(moduleYamlPath)) {
// For quick update, we always rebuild from cache
customModuleSources.set(moduleId, {
id: moduleId,
name: moduleId, // We'll read the actual name if needed
sourcePath: cachedPath,
cached: true, // Flag to indicate this is from cache
});
}
}
}
@ -2407,6 +2564,7 @@ class Installer {
_savedIdeConfigs: savedIdeConfigs, // Pass saved IDE configs to installer
_customModuleSources: customModuleSources, // Pass custom module sources for updates
_existingModules: installedModules, // Pass all installed modules for manifest generation
customContent: config.customContent, // Pass through for re-caching from source
};
// Call the standard install method

View File

@ -456,8 +456,18 @@ LOAD and execute from: {project-root}/{{bmadFolderName}}/{{path}}
async cleanup(projectDir, options = {}) {
// Clean all target directories
if (this.installerConfig?.targets) {
const parentDirs = new Set();
for (const target of this.installerConfig.targets) {
await this.cleanupTarget(projectDir, target.target_dir, options);
// Track parent directories for empty-dir cleanup
const parentDir = path.dirname(target.target_dir);
if (parentDir && parentDir !== '.') {
parentDirs.add(parentDir);
}
}
// After all targets cleaned, remove empty parent directories (recursive up to projectDir)
for (const parentDir of parentDirs) {
await this.removeEmptyParents(projectDir, parentDir);
}
} else if (this.installerConfig?.target_dir) {
await this.cleanupTarget(projectDir, this.installerConfig.target_dir, options);
@ -509,6 +519,41 @@ LOAD and execute from: {project-root}/{{bmadFolderName}}/{{path}}
if (removedCount > 0 && !options.silent) {
await prompts.log.message(` Cleaned ${removedCount} BMAD files from ${targetDir}`);
}
// Remove empty directory after cleanup
if (removedCount > 0) {
try {
const remaining = await fs.readdir(targetPath);
if (remaining.length === 0) {
await fs.remove(targetPath);
}
} catch {
// Directory may already be gone or in use — skip
}
}
}
/**
* Recursively remove empty directories walking up from dir toward projectDir
* Stops at the projectDir boundary and never removes projectDir itself
* @param {string} projectDir - Project root (boundary)
* @param {string} relativeDir - Relative directory to start from
*/
async removeEmptyParents(projectDir, relativeDir) {
let current = relativeDir;
let last = null;
while (current && current !== '.' && current !== last) {
last = current;
const fullPath = path.join(projectDir, current);
try {
if (!(await fs.pathExists(fullPath))) break;
const remaining = await fs.readdir(fullPath);
if (remaining.length > 0) break;
await fs.rmdir(fullPath);
} catch {
break;
}
current = path.dirname(current);
}
}
}

View File

@ -1,6 +1,6 @@
const path = require('node:path');
const { BaseIdeSetup } = require('./_base-ide');
const prompts = require('../../../lib/prompts');
const { AgentCommandGenerator } = require('./shared/agent-command-generator');
const { BMAD_FOLDER_NAME, toDashPath } = require('./shared/path-utils');
const fs = require('fs-extra');
@ -31,7 +31,7 @@ class GitHubCopilotSetup extends BaseIdeSetup {
* @param {Object} options - Setup options
*/
async setup(projectDir, bmadDir, options = {}) {
if (!options.silent) await prompts.log.info(`Setting up ${this.name}...`);
// Create .github/agents and .github/prompts directories
const githubDir = path.join(projectDir, this.githubDir);
@ -66,21 +66,15 @@ class GitHubCopilotSetup extends BaseIdeSetup {
const targetPath = path.join(agentsDir, fileName);
await this.writeFile(targetPath, agentContent);
agentCount++;
}
// Generate prompt files from bmad-help.csv
const promptCount = await this.generatePromptFiles(projectDir, bmadDir, agentArtifacts, agentManifest);
// Generate copilot-instructions.md
await this.generateCopilotInstructions(projectDir, bmadDir, agentManifest, options);
if (!options.silent) await prompts.log.success(`${this.name} configured: ${agentCount} agents, ${promptCount} prompts → .github/`);
return {
success: true,
@ -406,7 +400,7 @@ tools: ${toolsStr}
* @param {string} bmadDir - BMAD installation directory
* @param {Map} agentManifest - Agent manifest data
* @param {Object} [options] - Options (e.g. { silent: true })
*/
async generateCopilotInstructions(projectDir, bmadDir, agentManifest, options = {}) {
const configVars = await this.loadModuleConfig(bmadDir);
// Build the agents table from the manifest
@ -495,19 +489,16 @@ Type \`/bmad-\` in Copilot Chat to see all available BMAD workflows and agent ac
const after = existing.slice(endIdx + markerEnd.length);
const merged = `${before}${markedContent}${after}`;
await this.writeFile(instructionsPath, merged);
} else {
// Existing file without markers — back it up before overwriting
const backupPath = `${instructionsPath}.bak`;
await fs.copy(instructionsPath, backupPath);
if (!options.silent) await prompts.log.warn(` Backed up copilot-instructions.md → .bak`);
await this.writeFile(instructionsPath, `${markedContent}\n`);
}
} else {
// No existing file — create fresh with markers
await this.writeFile(instructionsPath, `${markedContent}\n`);
}
}
@ -607,7 +598,7 @@ Type \`/bmad-\` in Copilot Chat to see all available BMAD workflows and agent ac
/**
* Cleanup GitHub Copilot configuration - surgically remove only BMAD files
*/
async cleanup(projectDir, options = {}) {
// Clean up agents directory
const agentsDir = path.join(projectDir, this.githubDir, this.agentsDir);
if (await fs.pathExists(agentsDir)) {
@ -621,8 +612,8 @@ Type \`/bmad-\` in Copilot Chat to see all available BMAD workflows and agent ac
}
}
if (removed > 0 && !options.silent) {
await prompts.log.message(` Cleaned up ${removed} existing BMAD agents`);
}
}
@ -639,16 +630,70 @@ Type \`/bmad-\` in Copilot Chat to see all available BMAD workflows and agent ac
}
}
if (removed > 0 && !options.silent) {
await prompts.log.message(` Cleaned up ${removed} existing BMAD prompts`);
}
}
// During uninstall, also strip BMAD markers from copilot-instructions.md.
// During reinstall (default), this is skipped because generateCopilotInstructions()
// handles marker-based replacement in a single read-modify-write pass,
// which correctly preserves user content outside the markers.
if (options.isUninstall) {
await this.cleanupCopilotInstructions(projectDir, options);
}
}
/**
* Strip BMAD marker section from copilot-instructions.md
* If file becomes empty after stripping, delete it.
* If a .bak backup exists and the main file was deleted, restore the backup.
* @param {string} projectDir - Project directory
* @param {Object} [options] - Options (e.g. { silent: true })
*/
async cleanupCopilotInstructions(projectDir, options = {}) {
const instructionsPath = path.join(projectDir, this.githubDir, 'copilot-instructions.md');
const backupPath = `${instructionsPath}.bak`;
if (!(await fs.pathExists(instructionsPath))) {
return;
}
const content = await fs.readFile(instructionsPath, 'utf8');
const markerStart = '<!-- BMAD:START -->';
const markerEnd = '<!-- BMAD:END -->';
const startIdx = content.indexOf(markerStart);
const endIdx = content.indexOf(markerEnd);
if (startIdx === -1 || endIdx === -1 || endIdx <= startIdx) {
return; // No valid markers found
}
// Strip the marker section (including markers)
const before = content.slice(0, startIdx);
const after = content.slice(endIdx + markerEnd.length);
const cleaned = before + after;
if (cleaned.trim().length === 0) {
// File is empty after stripping — delete it
await fs.remove(instructionsPath);
// If backup exists, restore it
if (await fs.pathExists(backupPath)) {
await fs.rename(backupPath, instructionsPath);
if (!options.silent) {
await prompts.log.message(' Restored copilot-instructions.md from backup');
}
}
} else {
// Write cleaned content back (preserve original whitespace)
await fs.writeFile(instructionsPath, cleaned, 'utf8');
// If backup exists, it's stale now — remove it
if (await fs.pathExists(backupPath)) {
await fs.remove(backupPath);
}
}
}
}
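The marker handling in `cleanupCopilotInstructions` reduces to a small string transform. A sketch of just that transform, with a hypothetical helper name and no filesystem access (the real method also deletes empty files and restores `.bak` backups):

```javascript
// Hypothetical distillation of the string handling in cleanupCopilotInstructions:
// remove the <!-- BMAD:START --> ... <!-- BMAD:END --> section, markers
// included, and report whether anything was stripped so a caller could
// decide to delete an emptied file.
function stripBmadSection(content) {
  const markerStart = '<!-- BMAD:START -->';
  const markerEnd = '<!-- BMAD:END -->';
  const startIdx = content.indexOf(markerStart);
  const endIdx = content.indexOf(markerEnd);
  // Same validity rule as the method: both markers present, in order.
  if (startIdx === -1 || endIdx === -1 || endIdx <= startIdx) {
    return { cleaned: content, stripped: false };
  }
  const cleaned = content.slice(0, startIdx) + content.slice(endIdx + markerEnd.length);
  return { cleaned, stripped: true };
}
```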

View File

@ -216,13 +216,14 @@ class IdeManager {
/**
* Cleanup IDE configurations
* @param {string} projectDir - Project directory
* @param {Object} [options] - Cleanup options passed through to handlers
*/
async cleanup(projectDir, options = {}) {
const results = [];
for (const [name, handler] of this.handlers) {
try {
await handler.cleanup(projectDir, options);
results.push({ ide: name, success: true });
} catch (error) {
results.push({ ide: name, success: false, error: error.message });
@ -232,6 +233,40 @@ class IdeManager {
return results;
}
/**
* Cleanup only the IDEs in the provided list
* Falls back to cleanup() (all handlers) if ideList is empty or undefined
* @param {string} projectDir - Project directory
* @param {Array<string>} ideList - List of IDE names to clean up
* @param {Object} [options] - Cleanup options passed through to handlers
* @returns {Array} Results array
*/
async cleanupByList(projectDir, ideList, options = {}) {
if (!ideList || ideList.length === 0) {
return this.cleanup(projectDir, options);
}
await this.ensureInitialized();
const results = [];
// Build lowercase lookup for case-insensitive matching
const lowercaseHandlers = new Map([...this.handlers.entries()].map(([k, v]) => [k.toLowerCase(), v]));
for (const ideName of ideList) {
const handler = lowercaseHandlers.get(ideName.toLowerCase());
if (!handler) continue;
try {
await handler.cleanup(projectDir, options);
results.push({ ide: ideName, success: true });
} catch (error) {
results.push({ ide: ideName, success: false, error: error.message });
}
}
return results;
}
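The case-insensitive matching in `cleanupByList` is a lowered-key view of the handler map. A sketch with hypothetical names, showing how manifest IDE names resolve regardless of casing and how unknown names are silently skipped:

```javascript
// Hypothetical mirror of cleanupByList's lookup: build a lowercase-keyed
// Map over the registered handlers, then resolve each requested IDE name
// case-insensitively, dropping names with no handler.
function matchHandlers(handlers, ideList) {
  const lowercase = new Map([...handlers.entries()].map(([k, v]) => [k.toLowerCase(), v]));
  return ideList.map((name) => lowercase.get(name.toLowerCase())).filter(Boolean);
}
```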
/**
* Get list of supported IDEs
* @returns {Array} List of supported IDE names

View File

@ -1,5 +1,5 @@
---
name: '{{name}}'
description: '{{description}}'
---

View File

@ -1,10 +1,12 @@
---
name: '{{name}}'
description: '{{description}}'
---
Execute the BMAD '{{name}}' task.
TASK INSTRUCTIONS:
1. LOAD the task file from {project-root}/{{bmadFolderName}}/{{path}}
2. READ its entire contents
3. FOLLOW every instruction precisely as specified

View File

@ -1,10 +1,12 @@
---
name: '{{name}}'
description: '{{description}}'
---
Execute the BMAD '{{name}}' tool.
TOOL INSTRUCTIONS:
1. LOAD the tool file from {project-root}/{{bmadFolderName}}/{{path}}
2. READ its entire contents
3. FOLLOW every instruction precisely as specified

View File

@ -1,4 +1,5 @@
---
name: '{{name}}'
description: '{{description}}'
---
@ -7,6 +8,7 @@ Execute the BMAD '{{name}}' workflow.
CRITICAL: You must load and follow the workflow definition exactly.
WORKFLOW INSTRUCTIONS:
1. LOAD the workflow file from {project-root}/{{bmadFolderName}}/{{path}}
2. READ its entire contents
3. FOLLOW every step precisely as specified

View File

@ -1,4 +1,5 @@
---
name: '{{name}}'
description: '{{description}}'
---
@ -7,6 +8,7 @@ Execute the BMAD '{{name}}' workflow.
CRITICAL: You must load and follow the workflow definition exactly.
WORKFLOW INSTRUCTIONS:
1. LOAD the workflow file from {project-root}/{{bmadFolderName}}/{{path}}
2. READ its entire contents
3. FOLLOW every step precisely as specified

View File

@ -734,8 +734,10 @@ class ModuleManager {
continue;
}
// Skip module root config.yaml only - generated by config collector with actual values
// Workflow-level config.yaml (e.g. workflows/orchestrate-story/config.yaml) must be copied
// for custom modules that use workflow-specific configuration
if (file === 'config.yaml') {
continue;
}
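The narrowed skip rule above changes which files the copy loop ignores. A sketch contrasting the old and new predicates (the standalone function names are hypothetical; the real code inlines the conditions):

```javascript
// Hypothetical side-by-side of the ModuleManager skip predicates:
// the old rule skipped every config.yaml anywhere in the tree; the new
// rule skips only the module-root config.yaml, so workflow-level configs
// are copied for custom modules.
const skippedBefore = (file) => file === 'config.yaml' || file.endsWith('/config.yaml');
const skippedNow = (file) => file === 'config.yaml';
```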

View File

@ -245,11 +245,48 @@ class UI {
// Handle quick update separately
if (actionType === 'quick-update') {
// Pass --custom-content through so installer can re-cache if cache is missing
let customContentForQuickUpdate = { hasCustomContent: false };
if (options.customContent) {
const paths = options.customContent
.split(',')
.map((p) => p.trim())
.filter(Boolean);
if (paths.length > 0) {
const customPaths = [];
const selectedModuleIds = [];
const sources = [];
for (const customPath of paths) {
const expandedPath = this.expandUserPath(customPath);
const validation = this.validateCustomContentPathSync(expandedPath);
if (validation) continue;
let moduleMeta;
try {
const moduleYamlPath = path.join(expandedPath, 'module.yaml');
moduleMeta = require('yaml').parse(await fs.readFile(moduleYamlPath, 'utf-8'));
} catch {
continue;
}
if (!moduleMeta?.code) continue;
customPaths.push(expandedPath);
selectedModuleIds.push(moduleMeta.code);
sources.push({ path: expandedPath, id: moduleMeta.code, name: moduleMeta.name || moduleMeta.code });
}
if (customPaths.length > 0) {
customContentForQuickUpdate = {
hasCustomContent: true,
selected: true,
sources,
selectedFiles: customPaths.map((p) => path.join(p, 'module.yaml')),
selectedModuleIds,
};
}
}
}
return {
actionType: 'quick-update',
directory: confirmedDirectory,
customContent: customContentForQuickUpdate,
skipPrompts: options.yes || false,
};
}
@ -305,6 +342,7 @@ class UI {
// Build custom content config similar to promptCustomContentSource
const customPaths = [];
const selectedModuleIds = [];
const sources = [];
for (const customPath of paths) {
const expandedPath = this.expandUserPath(customPath);
@ -326,6 +364,11 @@ class UI {
continue;
}
if (!moduleMeta) {
await prompts.log.warn(`Skipping custom content path: ${customPath} - module.yaml is empty`);
continue;
}
if (!moduleMeta.code) {
await prompts.log.warn(`Skipping custom content path: ${customPath} - module.yaml missing 'code' field`);
continue;
@ -333,6 +376,11 @@ class UI {
customPaths.push(expandedPath);
selectedModuleIds.push(moduleMeta.code);
sources.push({
path: expandedPath,
id: moduleMeta.code,
name: moduleMeta.name || moduleMeta.code,
});
}
if (customPaths.length > 0) {
@ -340,7 +388,9 @@ class UI {
selectedCustomModules: selectedModuleIds,
customContentConfig: {
hasCustomContent: true,
selected: true,
sources,
selectedFiles: customPaths.map((p) => path.join(p, 'module.yaml')),
selectedModuleIds: selectedModuleIds,
},
};
@ -446,6 +496,7 @@ class UI {
// Build custom content config similar to promptCustomContentSource
const customPaths = [];
const selectedModuleIds = [];
const sources = [];
for (const customPath of paths) {
const expandedPath = this.expandUserPath(customPath);
@ -467,6 +518,11 @@ class UI {
continue;
}
if (!moduleMeta) {
await prompts.log.warn(`Skipping custom content path: ${customPath} - module.yaml is empty`);
continue;
}
if (!moduleMeta.code) {
await prompts.log.warn(`Skipping custom content path: ${customPath} - module.yaml missing 'code' field`);
continue;
@ -474,12 +530,19 @@ class UI {
customPaths.push(expandedPath);
selectedModuleIds.push(moduleMeta.code);
sources.push({
path: expandedPath,
id: moduleMeta.code,
name: moduleMeta.name || moduleMeta.code,
});
}
if (customPaths.length > 0) {
customContentConfig = {
hasCustomContent: true,
selected: true,
sources,
selectedFiles: customPaths.map((p) => path.join(p, 'module.yaml')),
selectedModuleIds: selectedModuleIds,
};
}