Merge branch 'main' into patch-1

commit fabf7e8a0a
Brian, 2025-09-06 13:39:39 -05:00, committed by GitHub
No known key found for this signature in database. GPG Key ID: B5690EEEBB952194

31 changed files with 998 additions and 204 deletions

.github/FORK_GUIDE.md (new file, +106 lines)

@@ -0,0 +1,106 @@
# Fork Guide - CI/CD Configuration
## CI/CD in Forks
By default, CI/CD workflows are **disabled in forks** to conserve GitHub Actions resources and provide a cleaner fork experience.
### Why This Approach?
- **Resource efficiency**: Prevents unnecessary GitHub Actions usage across 1,600+ forks
- **Clean fork experience**: No failed workflow notifications in your fork
- **Full control**: Enable CI/CD only when you actually need it
- **PR validation**: Your changes are still fully tested when submitting PRs to the main repository
## Enabling CI/CD in Your Fork
If you need to run CI/CD workflows in your fork, follow these steps:
1. Navigate to your fork's **Settings** tab
2. Go to **Secrets and variables** → **Actions** → **Variables**
3. Click **New repository variable**
4. Create a new variable:
- **Name**: `ENABLE_CI_IN_FORK`
- **Value**: `true`
5. Click **Add variable**
That's it! CI/CD workflows will now run in your fork.
## Disabling CI/CD Again
To disable CI/CD workflows in your fork, you can either:
- **Delete the variable**: Remove the `ENABLE_CI_IN_FORK` variable entirely, or
- **Set to false**: Change the `ENABLE_CI_IN_FORK` value to `false`
## Alternative Testing Options
You don't always need to enable CI/CD in your fork. Here are alternatives:
### Local Testing
Run tests locally before pushing:
```bash
# Install dependencies
npm ci
# Run linting
npm run lint
# Run format check
npm run format:check
# Run validation
npm run validate
# Build the project
npm run build
```
### Pull Request CI
When you open a Pull Request to the main repository:
- All CI/CD workflows automatically run
- You get full validation of your changes
- No configuration needed
### GitHub Codespaces
Use GitHub Codespaces for a full development environment:
- All tools pre-configured
- Same environment as CI/CD
- No local setup required
## Frequently Asked Questions
### Q: Will my PR be tested even if CI is disabled in my fork?
**A:** Yes! When you open a PR to the main repository, all CI/CD workflows run automatically, regardless of your fork's settings.
### Q: Can I selectively enable specific workflows?
**A:** The `ENABLE_CI_IN_FORK` variable enables all workflows. For selective control, you'd need to modify individual workflow files.
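For instance, to let a single workflow always run in your fork you could relax its job-level condition; a minimal sketch (the workflow and job names here are illustrative, not taken from this repository):

```yaml
# .github/workflows/<some-workflow>.yaml (illustrative)
jobs:
  lint:
    runs-on: ubuntu-latest
    # Default gate added by this change; remove or relax it in the
    # individual workflows you want to run unconditionally in your fork.
    if: github.event.repository.fork != true || vars.ENABLE_CI_IN_FORK == 'true'
    steps:
      - uses: actions/checkout@v4
```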
### Q: Do I need to enable CI in my fork to contribute?
**A:** No! Most contributors never need to enable CI in their forks. Local testing and PR validation are sufficient for most contributions.
### Q: Will disabling CI affect my ability to merge PRs?
**A:** No! PR merge requirements are based on CI runs in the main repository, not your fork.
### Q: Why was this implemented?
**A:** With over 1,600 forks of BMAD-METHOD, this saves thousands of GitHub Actions minutes monthly while maintaining code quality standards.
## Need Help?
- Join our [Discord Community](https://discord.gg/gk8jAdXWmj) for support
- Check the [Contributing Guide](../README.md#contributing) for more information
- Open an issue if you encounter any problems
---
> 💡 **Pro Tip**: This fork-friendly approach is particularly valuable for projects using AI/LLM tools that create many experimental commits, as it prevents unnecessary CI runs while maintaining code quality standards.
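For reference, each workflow job is gated on this variable with a job-level condition along these lines (the `build` job name is illustrative; the condition itself is the one used in the workflow changes below):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    # Skip in forks unless the fork has opted in via the ENABLE_CI_IN_FORK variable
    if: github.event.repository.fork != true || vars.ENABLE_CI_IN_FORK == 'true'
    steps:
      - uses: actions/checkout@v4
```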

@@ -14,6 +14,7 @@ name: Discord Notification
jobs:
  notify:
    runs-on: ubuntu-latest
+   if: github.event.repository.fork != true || vars.ENABLE_CI_IN_FORK == 'true'
    steps:
      - name: Notify Discord
        uses: sarisia/actions-status-discord@v1

@@ -7,6 +7,7 @@ name: format-check
jobs:
  prettier:
    runs-on: ubuntu-latest
+   if: github.event.repository.fork != true || vars.ENABLE_CI_IN_FORK == 'true'
    steps:
      - name: Checkout
        uses: actions/checkout@v4
@@ -25,6 +26,7 @@ jobs:
  eslint:
    runs-on: ubuntu-latest
+   if: github.event.repository.fork != true || vars.ENABLE_CI_IN_FORK == 'true'
    steps:
      - name: Checkout
        uses: actions/checkout@v4

@@ -20,6 +20,7 @@ permissions:
jobs:
  release:
    runs-on: ubuntu-latest
+   if: github.event.repository.fork != true || vars.ENABLE_CI_IN_FORK == 'true'
    steps:
      - name: Checkout
        uses: actions/checkout@v4

.github/workflows/pr-validation.yaml (new file, +55 lines)

@@ -0,0 +1,55 @@
name: PR Validation

on:
  pull_request:
    branches: [main]
    types: [opened, synchronize, reopened]

jobs:
  validate:
    runs-on: ubuntu-latest
    if: github.event.repository.fork != true || vars.ENABLE_CI_IN_FORK == 'true'
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: "20"
          cache: npm

      - name: Install dependencies
        run: npm ci

      - name: Run validation
        run: npm run validate

      - name: Check formatting
        run: npm run format:check

      - name: Run linter
        run: npm run lint

      - name: Run tests (if available)
        run: npm test --if-present

      - name: Comment on PR if checks fail
        if: failure()
        uses: actions/github-script@v7
        with:
          script: |
            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: `❌ **PR Validation Failed**

            This PR has validation errors that must be fixed before merging:

            - Run \`npm run validate\` to check agent/team configs
            - Run \`npm run format:check\` to check formatting (fix with \`npm run format\`)
            - Run \`npm run lint\` to check linting issues (fix with \`npm run lint:fix\`)

            Please fix these issues and push the changes.`
            })

@@ -17,6 +17,47 @@ Also note, we use the discussions feature in GitHub to have a community to discu
By participating in this project, you agree to abide by our Code of Conduct. Please read it before participating.

## Before Submitting a PR

**IMPORTANT**: All PRs must pass validation checks before they can be merged.

### Required Checks

Before submitting your PR, run these commands locally:

```bash
# Run all validation checks
npm run pre-release

# Or run them individually:
npm run validate      # Validate agent/team configs
npm run format:check  # Check code formatting
npm run lint          # Check for linting issues
```

### Fixing Issues

If any checks fail, use these commands to fix them:

```bash
# Fix all issues automatically
npm run fix

# Or fix individually:
npm run format    # Fix formatting issues
npm run lint:fix  # Fix linting issues
```

### Setup Git Hooks (Optional but Recommended)

To catch issues before committing:

```bash
# Run this once after cloning
chmod +x tools/setup-hooks.sh
./tools/setup-hooks.sh
```

## How to Contribute

### Reporting Bugs

@@ -119,7 +119,7 @@ The BMAD-METHOD™ includes a powerful codebase flattener tool designed to prepa
### Features

- **AI-Optimized Output**: Generates clean XML format specifically designed for AI model consumption
-- **Smart Filtering**: Automatically respects `.gitignore` patterns to exclude unnecessary files
+- **Smart Filtering**: Automatically respects `.gitignore` patterns to exclude unnecessary files, plus optional project-level `.bmad-flattenignore` for additional exclusions
- **Binary File Detection**: Intelligently identifies and excludes binary files, focusing on source code
- **Progress Tracking**: Real-time progress indicators and comprehensive completion statistics
- **Flexible Output**: Customizable output file location and naming
@@ -170,6 +170,18 @@ The generated XML file contains your project's text-based source files in a stru
- File discovery and ignoring
- Uses `git ls-files` when inside a git repository for speed and correctness; otherwise falls back to a glob-based scan.
- Applies your `.gitignore` plus a curated set of default ignore patterns (e.g., `node_modules`, build outputs, caches, logs, IDE folders, lockfiles, large media/binaries, `.env*`, and previously generated XML outputs).
- Supports an optional `.bmad-flattenignore` file at the project root for additional ignore patterns (gitignore-style). If present, its rules are applied after `.gitignore` and the defaults.

##### `.bmad-flattenignore` example

Create a `.bmad-flattenignore` file in the root of your project to exclude files that must remain in git but should not be included in the flattened XML:

```text
seeds/**
scripts/private/**
**/*.snap
```

- Binary handling
- Binary files are detected and excluded from the XML content. They are counted in the final summary but not embedded in the output.
- XML format and safety
@@ -212,6 +224,26 @@ The generated XML file contains your project's text-based source files in a stru
📋 **[Read CONTRIBUTING.md](CONTRIBUTING.md)** - Complete guide to contributing, including guidelines, process, and requirements

### Working with Forks

When you fork this repository, CI/CD workflows are **disabled by default** to save resources. This is intentional and helps keep your fork clean.

#### Need CI/CD in Your Fork?

See our [Fork CI/CD Guide](.github/FORK_GUIDE.md) for instructions on enabling workflows in your fork.

#### Contributing Workflow

1. **Fork the repository** - Click the Fork button on GitHub
2. **Clone your fork** - `git clone https://github.com/YOUR-USERNAME/BMAD-METHOD.git`
3. **Create a feature branch** - `git checkout -b feature/amazing-feature`
4. **Make your changes** - Test locally with `npm test`
5. **Commit your changes** - `git commit -m 'feat: add amazing feature'`
6. **Push to your fork** - `git push origin feature/amazing-feature`
7. **Open a Pull Request** - CI/CD will run automatically on the PR

Your contributions are tested when you submit a PR - no need to enable CI in your fork!

## License

MIT License - see [LICENSE](LICENSE) for details.

@@ -11,16 +11,16 @@ CRITICAL: Read the full YAML BLOCK that FOLLOWS IN THIS FILE to understand your
```yaml
IDE-FILE-RESOLUTION:
  - FOR LATER USE ONLY - NOT FOR ACTIVATION, when executing commands that reference dependencies
-  - Dependencies map to root/type/name
+  - Dependencies map to {root}/{type}/{name}
  - type=folder (tasks|templates|checklists|data|utils|etc...), name=file-name
-  - Example: create-doc.md → root/tasks/create-doc.md
+  - Example: create-doc.md → {root}/tasks/create-doc.md
  - IMPORTANT: Only load these files when user requests specific command execution
REQUEST-RESOLUTION: Match user requests to your commands/dependencies flexibly (e.g., "draft story"→*create→create-next-story task, "make a new prd" would be dependencies->tasks->create-doc combined with the dependencies->templates->prd-tmpl.md), ALWAYS ask for clarification if no clear match.
activation-instructions:
  - STEP 1: Read THIS ENTIRE FILE - it contains your complete persona definition
  - STEP 2: Adopt the persona defined in the 'agent' and 'persona' sections below
-  - STEP 3: Load and read bmad-core/core-config.yaml (project configuration) before any greeting
+  - STEP 3: Load and read `bmad-core/core-config.yaml` (project configuration) before any greeting
-  - STEP 4: Greet user with your name/role and immediately run *help to display available commands
+  - STEP 4: Greet user with your name/role and immediately run `*help` to display available commands
  - DO NOT: Load any other agent files during activation
  - ONLY load dependency files when user selects them for execution via command or request of a task
  - The agent.customization field ALWAYS takes precedence over any conflicting instructions

@@ -49,6 +49,7 @@ persona:
  core_principles:
    - CRITICAL: Story has ALL info you will need aside from what you loaded during the startup commands. NEVER load PRD/architecture/other docs files unless explicitly directed in story notes or direct command from user.
+   - CRITICAL: ALWAYS check current folder structure before starting your story tasks, don't create new working directory if it already exists. Create new one when you're sure it's a brand new project.
    - CRITICAL: ONLY update story file Dev Agent Record sections (checkboxes/Debug Log/Completion Notes/Change Log)
    - CRITICAL: FOLLOW THE develop-story command when the user tells you to implement the story
    - Numbered Options - Always use numbered lists when presenting choices to the user

@@ -102,6 +102,7 @@ npx bmad-method install
- **Cline**: VS Code extension with AI features
- **Roo Code**: Web-based IDE with agent support
- **GitHub Copilot**: VS Code extension with AI peer programming assistant
+- **Auggie CLI (Augment Code)**: AI-powered development environment

**Note for VS Code Users**: BMAD-METHOD™ assumes when you mention "VS Code" that you're using it with an AI-powered extension like GitHub Copilot, Cline, or Roo. Standard VS Code without AI capabilities cannot run BMad agents. The installer includes built-in support for Cline and Roo.

@@ -160,7 +160,7 @@ workflow:
        - Dev Agent (New Chat): Address remaining items
        - Return to QA for final approval

-   - repeat_development_cycle:
+   - step: repeat_development_cycle
      action: continue_for_all_stories
      notes: |
        Repeat story cycle (SM → Dev → QA) for all epic stories
@@ -177,7 +177,7 @@ workflow:
        - Validate epic was completed correctly
        - Document learnings and improvements

-   - workflow_end:
+   - step: workflow_end
      action: project_complete
      notes: |
        All stories implemented and reviewed!

@@ -106,7 +106,7 @@ workflow:
        - Dev Agent (New Chat): Address remaining items
        - Return to QA for final approval

-   - repeat_development_cycle:
+   - step: repeat_development_cycle
      action: continue_for_all_stories
      notes: |
        Repeat story cycle (SM → Dev → QA) for all epic stories
@@ -123,7 +123,7 @@ workflow:
        - Validate epic was completed correctly
        - Document learnings and improvements

-   - workflow_end:
+   - step: workflow_end
      action: project_complete
      notes: |
        All stories implemented and reviewed!

@@ -113,7 +113,7 @@ workflow:
        - Dev Agent (New Chat): Address remaining items
        - Return to QA for final approval

-   - repeat_development_cycle:
+   - step: repeat_development_cycle
      action: continue_for_all_stories
      notes: |
        Repeat story cycle (SM → Dev → QA) for all epic stories
@@ -130,7 +130,7 @@ workflow:
        - Validate epic was completed correctly
        - Document learnings and improvements

-   - workflow_end:
+   - step: workflow_end
      action: project_complete
      notes: |
        All stories implemented and reviewed!

@@ -65,12 +65,12 @@ workflow:
      condition: po_checklist_issues
      notes: "If PO finds issues, return to relevant agent to fix and re-export updated documents to docs/ folder."

-   - project_setup_guidance:
+   - step: project_setup_guidance
      action: guide_project_structure
      condition: user_has_generated_ui
      notes: "If user generated UI with v0/Lovable: For polyrepo setup, place downloaded project in separate frontend repo alongside backend repo. For monorepo, place in apps/web or packages/frontend directory. Review architecture document for specific guidance."

-   - development_order_guidance:
+   - step: development_order_guidance
      action: guide_development_sequence
      notes: "Based on PRD stories: If stories are frontend-heavy, start with frontend project/directory first. If backend-heavy or API-first, start with backend. For tightly coupled features, follow story sequence in monorepo setup. Reference sharded PRD epics for development order."
@@ -138,7 +138,7 @@ workflow:
        - Dev Agent (New Chat): Address remaining items
        - Return to QA for final approval

-   - repeat_development_cycle:
+   - step: repeat_development_cycle
      action: continue_for_all_stories
      notes: |
        Repeat story cycle (SM → Dev → QA) for all epic stories
@@ -155,7 +155,7 @@ workflow:
        - Validate epic was completed correctly
        - Document learnings and improvements

-   - workflow_end:
+   - step: workflow_end
      action: project_complete
      notes: |
        All stories implemented and reviewed!

@@ -114,7 +114,7 @@ workflow:
        - Dev Agent (New Chat): Address remaining items
        - Return to QA for final approval

-   - repeat_development_cycle:
+   - step: repeat_development_cycle
      action: continue_for_all_stories
      notes: |
        Repeat story cycle (SM → Dev → QA) for all epic stories
@@ -131,7 +131,7 @@ workflow:
        - Validate epic was completed correctly
        - Document learnings and improvements

-   - workflow_end:
+   - step: workflow_end
      action: project_complete
      notes: |
        All stories implemented and reviewed!

@@ -64,7 +64,7 @@ workflow:
      condition: po_checklist_issues
      notes: "If PO finds issues, return to relevant agent to fix and re-export updated documents to docs/ folder."

-   - project_setup_guidance:
+   - step: project_setup_guidance
      action: guide_project_structure
      condition: user_has_generated_ui
      notes: "If user generated UI with v0/Lovable: For polyrepo setup, place downloaded project in separate frontend repo. For monorepo, place in apps/web or frontend/ directory. Review architecture document for specific guidance."
@@ -133,7 +133,7 @@ workflow:
        - Dev Agent (New Chat): Address remaining items
        - Return to QA for final approval

-   - repeat_development_cycle:
+   - step: repeat_development_cycle
      action: continue_for_all_stories
      notes: |
        Repeat story cycle (SM → Dev → QA) for all epic stories
@@ -150,7 +150,7 @@ workflow:
        - Validate epic was completed correctly
        - Document learnings and improvements

-   - workflow_end:
+   - step: workflow_end
      action: project_complete
      notes: |
        All stories implemented and reviewed!
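Across the workflow definition diffs above, list items keyed by a bare step name (e.g. `repeat_development_cycle:`) are replaced by an explicit `step:` field. A rough sketch of the resulting shape follows; the enclosing `workflow:`/`sequence:` keys and the exact indentation are assumptions inferred from the hunks, not copied from a full file:

```yaml
workflow:
  sequence:
    # before: - repeat_development_cycle:
    - step: repeat_development_cycle
      action: continue_for_all_stories
      notes: |
        Repeat story cycle (SM → Dev → QA) for all epic stories
    # before: - workflow_end:
    - step: workflow_end
      action: project_complete
      notes: |
        All stories implemented and reviewed!
```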

@@ -18,7 +18,7 @@ Each expansion pack provides deep, specialized knowledge without bloating the co
Anyone can create and share expansion packs, fostering an ecosystem of AI-powered solutions across all industries and interests.

-## Technical Expansion Packs
+## Technical Expansion Packs (Examples of possible expansions to come)

### Game Development Pack
@@ -191,90 +191,10 @@ Research acceleration tools:
## Creating Your Own Expansion Pack

-### Step 1: Define Your Domain
-What expertise are you capturing? What problems will it solve?
-### Step 2: Design Your Agents
-Each agent should have:
-- Clear expertise area
-- Specific personality traits
-- Defined capabilities
-- Knowledge boundaries
-### Step 3: Create Tasks
-Tasks should be:
-- Step-by-step procedures
-- Reusable across scenarios
-- Clear and actionable
-- Domain-specific
-### Step 4: Build Templates
-Templates need:
-- Structured output format
-- Embedded LLM instructions
-- Placeholders for customization
-- Professional formatting
-### Step 5: Test & Iterate
-- Use with real scenarios
-- Gather user feedback
-- Refine agent responses
-- Improve task clarity
-### Step 6: Package & Share
-- Create clear documentation
-- Include usage examples
-- Add to expansion-packs directory
-- Share with community
-## The Future of Expansion Packs
-### Marketplace Potential
-Imagine a future where:
-- Professional expansion packs are sold
-- Certified packs for regulated industries
-- Community ratings and reviews
-- Automatic updates and improvements
-### AI Agent Ecosystems
-Expansion packs could enable:
-- Cross-pack agent collaboration
-- Industry-standard agent protocols
-- Interoperable AI workflows
-- Universal agent languages
-### Democratizing Expertise
-Every expansion pack:
-- Makes expert knowledge accessible
-- Reduces barriers to entry
-- Enables solo entrepreneurs
-- Empowers small teams
-## Getting Started
-1. **Browse existing packs**: Check `expansion-packs/` directory
-2. **Install what you need**: Use the installer's expansion pack option
-3. **Create your own**: Use the expansion-creator pack
-4. **Share with others**: Submit PRs with new packs
-5. **Build the future**: Help shape AI-assisted work
+The next major release will include a new agent and expansion pack builder and a new expansion format.

## Remember

-The BMad Method is more than a development framework - it's a platform for structuring human expertise into AI-accessible formats. Every expansion pack you create makes specialized knowledge more accessible to everyone.
+The BMad Method is more than a Software Development Agile Framework! Every expansion pack makes specialized knowledge and workflows more accessible to everyone.

**What expertise will you share with the world?**

@@ -187,6 +187,32 @@ If you want to do the planning on the web with Claude (Sonnet 4 or Opus), Gemini
npx bmad-method install
```

### Codex (CLI & Web)

BMAD integrates with OpenAI Codex via `AGENTS.md` and committed core agent files.

- Two installation modes:
  - Codex (local only): keeps `.bmad-core/` ignored for local dev.
    - `npx bmad-method install -f -i codex -d .`
  - Codex Web Enabled: ensures `.bmad-core/` is tracked so you can commit it for Codex Web.
    - `npx bmad-method install -f -i codex-web -d .`
- What gets generated:
  - `AGENTS.md` at the project root with a BMAD section containing:
    - How-to-use with Codex (CLI & Web)
    - Agent Directory (Title, ID, When To Use)
    - Detailed per-agent sections with source path, when-to-use, activation phrasing, and YAML
    - Tasks with quick usage notes
  - If a `package.json` exists, helpful scripts are added: `bmad:refresh`, `bmad:list`, `bmad:validate`
- Using Codex:
  - CLI: run `codex` in the project root and prompt naturally, e.g., "As dev, implement …".
  - Web: commit `.bmad-core/` and `AGENTS.md`, then open the repo in Codex and prompt the same way.
- Refresh after changes:
  - Re-run the appropriate install mode (`codex` or `codex-web`) to update the BMAD block in `AGENTS.md`.

## Special Agents

There are two BMad agents — in the future they'll be consolidated into a single BMad-Master.

@@ -5,7 +5,13 @@
> Gemini Web's 1M+ token context window or Gemini CLI (when it's working) can analyze your ENTIRE codebase, or critical sections of it, all at once (obviously within reason):
>
> - Upload via GitHub URL or use gemini cli in the project folder
-> - If working in the web: use `npx bmad-method flatten` to flatten your project into a single file, then upload that file to your web agent.
+> - If working in the web: use `npx bmad-method flatten` to flatten your project into a single file, then upload that file to your web agent. To exclude additional files that must remain in git but shouldn't be sent to the AI, add a `.bmad-flattenignore` file at the project root (gitignore-style), e.g.:
+>
+> ```text
+> seeds/**
+> scripts/private/**
+> **/*.snap
+> ```

## What is Brownfield Development?

implement-fork-friendly-ci.sh (new executable file, +229 lines)

@@ -0,0 +1,229 @@
#!/bin/bash
# Fork-Friendly CI/CD Implementation Script
# Usage: ./implement-fork-friendly-ci.sh
#
# This script automates the implementation of fork-friendly CI/CD
# by adding fork detection conditions to all GitHub Actions workflows
set -e
echo "🚀 Implementing Fork-Friendly CI/CD..."
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# 1. Check if .github/workflows directory exists
if [ ! -d ".github/workflows" ]; then
  echo -e "${RED}✗${NC} No .github/workflows directory found"
  echo "This script must be run from the repository root"
  exit 1
fi
# 2. Backup existing workflows
echo "📦 Backing up workflows..."
backup_dir=".github/workflows.backup.$(date +%Y%m%d_%H%M%S)"
cp -r .github/workflows "$backup_dir"
echo -e "${GREEN}${NC} Workflows backed up to $backup_dir"
# 3. Count workflow files and jobs
WORKFLOW_COUNT=$(ls -1 .github/workflows/*.yml .github/workflows/*.yaml 2>/dev/null | wc -l)
echo "📊 Found ${WORKFLOW_COUNT} workflow files"
# 4. Process each workflow file
UPDATED_FILES=0
MANUAL_REVIEW_NEEDED=0
for file in .github/workflows/*.yml .github/workflows/*.yaml; do
  if [ -f "$file" ]; then
    filename=$(basename "$file")
    echo -n "Processing ${filename}... "

    # Create temporary file
    temp_file="${file}.tmp"

    # Track if file needs manual review
    needs_review=0

    # Process the file with awk
    awk '
      BEGIN {
        in_jobs = 0
        job_count = 0
        modified = 0
      }
      /^jobs:/ {
        in_jobs = 1
        print
        next
      }
      # Match job definitions (2 spaces + name + colon)
      in_jobs && /^  [a-z][a-z0-9_-]*:/ {
        job_name = $0
        print job_name
        job_count++
        # Look ahead for existing conditions
        getline next_line
        # Check if next line is already an if condition
        if (next_line ~ /^    if:/) {
          # Job already has condition - combine with fork detection
          existing_condition = next_line
          sub(/^    if: /, "", existing_condition)
          # Check if fork condition already exists
          if (existing_condition !~ /github\.event\.repository\.fork/) {
            print "    # Fork-friendly CI: Combined with existing condition"
            print "    if: (" existing_condition ") && (github.event.repository.fork != true || vars.ENABLE_CI_IN_FORK == '\''true'\'')"
            modified++
          } else {
            # Already has fork detection
            print next_line
          }
        } else if (next_line ~ /^    runs-on:/) {
          # No condition exists, add before runs-on
          print "    if: github.event.repository.fork != true || vars.ENABLE_CI_IN_FORK == '\''true'\''"
          print next_line
          modified++
        } else {
          # Some other configuration, preserve as-is
          print next_line
        }
        next
      }
      # Reset when leaving jobs section
      /^[a-z]/ && in_jobs {
        in_jobs = 0
      }
      # Print all other lines (job bodies and everything outside jobs)
      {
        print
      }
      END {
        if (modified > 0) {
          exit 0 # Success - file was modified
        } else {
          exit 1 # No modifications needed
        }
      }
    ' "$file" > "$temp_file" && awk_status=0 || awk_status=$?

    # Check if modifications were made (awk exits 0 only when it changed something);
    # capture the status via && / || so the non-zero exit does not trip `set -e`.
    if [ "$awk_status" -eq 0 ]; then
      mv "$temp_file" "$file"
      echo -e "${GREEN}✓${NC} Updated"
      UPDATED_FILES=$((UPDATED_FILES + 1))
    else
      rm -f "$temp_file"
      echo -e "${YELLOW}⚠${NC} No changes needed"
    fi

    # Check for complex conditions that might need manual review
    if grep -q "needs:" "$file" || grep -q "strategy:" "$file"; then
      echo "  ⚠️ Complex workflow detected - manual review recommended"
      MANUAL_REVIEW_NEEDED=$((MANUAL_REVIEW_NEEDED + 1))
    fi
  fi
done
echo -e "${GREEN}${NC} Updated ${UPDATED_FILES} workflow files"
# 5. Create Fork Guide if it doesn't exist
if [ ! -f ".github/FORK_GUIDE.md" ]; then
echo "📝 Creating Fork Guide documentation..."
cat > .github/FORK_GUIDE.md << 'EOF'
# Fork Guide - CI/CD Configuration
## CI/CD in Forks
By default, CI/CD workflows are **disabled in forks** to conserve GitHub Actions resources.
### Enabling CI/CD in Your Fork
If you need to run CI/CD workflows in your fork:
1. Navigate to **Settings** → **Secrets and variables** → **Actions** → **Variables**
2. Click **New repository variable**
3. Create variable:
- **Name**: `ENABLE_CI_IN_FORK`
- **Value**: `true`
4. Click **Add variable**
### Disabling CI/CD Again
Either:
- Delete the `ENABLE_CI_IN_FORK` variable, or
- Set its value to `false`
### Alternative Testing Options
- **Local testing**: Run tests locally before pushing
- **Pull Request CI**: Workflows automatically run when you open a PR
- **GitHub Codespaces**: Full development environment
EOF
echo -e "${GREEN}${NC} Fork Guide created"
else
echo " Fork Guide already exists"
fi
# 6. Validate YAML files (if yamllint is available)
if command -v yamllint &> /dev/null; then
echo "🔍 Validating YAML syntax..."
VALIDATION_ERRORS=0
for file in .github/workflows/*.yml .github/workflows/*.yaml; do
if [ -f "$file" ]; then
filename=$(basename "$file")
if yamllint -d relaxed "$file" &>/dev/null; then
  echo -e " ${GREEN}✓${NC} ${filename}"
else
  echo -e " ${RED}✗${NC} ${filename} - YAML validation failed"
  VALIDATION_ERRORS=$((VALIDATION_ERRORS + 1))
fi
fi
done
if [ $VALIDATION_ERRORS -gt 0 ]; then
echo -e "${YELLOW}${NC} ${VALIDATION_ERRORS} files have YAML errors"
fi
else
echo " yamllint not found - skipping YAML validation"
echo " Install with: pip install yamllint"
fi
# 7. Summary
echo ""
echo "═══════════════════════════════════════"
echo " Fork-Friendly CI/CD Summary"
echo "═══════════════════════════════════════"
echo " 📁 Files updated: ${UPDATED_FILES}"
echo " 📊 Total workflows: ${WORKFLOW_COUNT}"
echo " 📝 Fork Guide: .github/FORK_GUIDE.md"
if [ $MANUAL_REVIEW_NEEDED -gt 0 ]; then
echo " ⚠️ Files needing review: ${MANUAL_REVIEW_NEEDED}"
fi
echo ""
echo "Next steps:"
echo "1. Review the changes: git diff"
echo "2. Test workflows locally (if possible)"
echo "3. Commit changes: git commit -m 'feat: implement fork-friendly CI/CD'"
echo "4. Push and create PR"
echo ""
echo "Remember to update README.md with fork information!"
echo "═══════════════════════════════════════"
# Exit with appropriate code
if [ $UPDATED_FILES -gt 0 ]; then
exit 0
else
echo "No files were updated - workflows may already be fork-friendly"
exit 1
fi
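To illustrate what the awk transform above produces for a job that already carries an `if:` condition, here is a rough sketch of the output; the `deploy` job and its original `github.ref` condition are hypothetical, only the injected comment and combined expression come from the script:

```yaml
jobs:
  deploy:
    # Fork-friendly CI: Combined with existing condition
    if: (github.ref == 'refs/heads/main') && (github.event.repository.fork != true || vars.ENABLE_CI_IN_FORK == 'true')
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
```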

package-lock.json (generated)

@@ -1,12 +1,12 @@
{
  "name": "bmad-method",
-  "version": "4.40.1",
+  "version": "4.42.1",
  "lockfileVersion": 3,
  "requires": true,
  "packages": {
    "": {
      "name": "bmad-method",
-      "version": "4.40.1",
+      "version": "4.42.1",
      "license": "MIT",
      "dependencies": {
        "@kayvan/markdown-tree-parser": "^1.6.1",

package.json

@@ -1,7 +1,7 @@
{
  "$schema": "https://json.schemastore.org/package.json",
  "name": "bmad-method",
-  "version": "4.40.1",
+  "version": "4.42.1",
  "description": "Breakthrough Method of Agile AI-driven Development",
  "keywords": [
    "agile",
@@ -27,6 +27,7 @@
    "build": "node tools/cli.js build",
    "build:agents": "node tools/cli.js build --agents-only",
    "build:teams": "node tools/cli.js build --teams-only",
+   "fix": "npm run format && npm run lint:fix",
    "flatten": "node tools/flattener/main.js",
    "format": "prettier --write \"**/*.{js,cjs,mjs,json,md,yaml}\"",
    "format:check": "prettier --check \"**/*.{js,cjs,mjs,json,md,yaml}\"",
@@ -34,12 +35,14 @@
    "lint": "eslint . --ext .js,.cjs,.mjs,.yaml --max-warnings=0",
    "lint:fix": "eslint . --ext .js,.cjs,.mjs,.yaml --fix",
    "list:agents": "node tools/cli.js list:agents",
+   "pre-release": "npm run validate && npm run format:check && npm run lint",
    "prepare": "husky",
    "preview:release": "node tools/preview-release-notes.js",
    "release:major": "gh workflow run \"Manual Release\" -f version_bump=major",
    "release:minor": "gh workflow run \"Manual Release\" -f version_bump=minor",
    "release:patch": "gh workflow run \"Manual Release\" -f version_bump=patch",
    "release:watch": "gh run watch",
+   "setup:hooks": "chmod +x tools/setup-hooks.sh && ./tools/setup-hooks.sh",
    "validate": "node tools/cli.js validate",
    "version:all": "node tools/bump-all-versions.js",
    "version:all:major": "node tools/bump-all-versions.js major",

test.md (new file, +1 line)

@@ -0,0 +1 @@
# Test

@@ -154,9 +154,11 @@ async function parseGitignore(gitignorePath) {
async function loadIgnore(rootDir, extraPatterns = []) {
  const ig = ignore();
  const gitignorePath = path.join(rootDir, '.gitignore');
+ const flattenIgnorePath = path.join(rootDir, '.bmad-flattenignore');
  const patterns = [
    ...(await readIgnoreFile(gitignorePath)),
    ...DEFAULT_PATTERNS,
+   ...(await readIgnoreFile(flattenIgnorePath)),
    ...extraPatterns,
  ];
  // De-duplicate

@@ -49,7 +49,7 @@ program
  .option('-d, --directory <path>', 'Installation directory')
  .option(
    '-i, --ide <ide...>',
-   'Configure for specific IDE(s) - can specify multiple (cursor, claude-code, windsurf, trae, roo, kilo, cline, gemini, qwen-code, github-copilot, other)',
+   'Configure for specific IDE(s) - can specify multiple (cursor, claude-code, windsurf, trae, roo, kilo, cline, gemini, qwen-code, github-copilot, codex, codex-web, auggie-cli, other)',
  )
  .option(
    '-e, --expansion-packs <packs...>',
@@ -406,6 +406,9 @@ async function promptInstallation() {
        { name: 'Qwen Code', value: 'qwen-code' },
        { name: 'Crush', value: 'crush' },
        { name: 'Github Copilot', value: 'github-copilot' },
+       { name: 'Auggie CLI (Augment Code)', value: 'auggie-cli' },
+       { name: 'Codex CLI', value: 'codex' },
+       { name: 'Codex Web', value: 'codex-web' },
      ],
    },
  ]);
@@ -474,6 +477,38 @@ async function promptInstallation() {
    answers.githubCopilotConfig = { configChoice };
  }

  // Configure Auggie CLI (Augment Code) immediately if selected
  if (ides.includes('auggie-cli')) {
    console.log(chalk.cyan('\n📍 Auggie CLI Location Configuration'));
    console.log(chalk.dim('Choose where to install BMad agents for Auggie CLI access.\n'));

    const { selectedLocations } = await inquirer.prompt([
      {
        type: 'checkbox',
        name: 'selectedLocations',
        message: 'Select Auggie CLI command locations:',
        choices: [
          {
            name: 'User Commands (Global): Available across all your projects (user-wide)',
            value: 'user',
          },
          {
            name: 'Workspace Commands (Project): Stored in repository, shared with team',
            value: 'workspace',
          },
        ],
        validate: (selected) => {
          if (selected.length === 0) {
            return 'Please select at least one location';
          }
          return true;
        },
      },
    ]);

    answers.augmentCodeConfig = { selectedLocations };
  }

  // Ask for web bundles installation
  const { includeWebBundles } = await inquirer.prompt([
    {

@@ -78,15 +78,15 @@ ide-configurations:
    # 4. Rules are stored in .clinerules/ directory in your project

  gemini:
    name: Gemini CLI
-   rule-dir: .gemini/bmad-method/
-   format: single-file
-   command-suffix: .md
+   rule-dir: .gemini/commands/BMad/
+   format: multi-file
+   command-suffix: .toml
    instructions: |
      # To use BMad agents with the Gemini CLI:
-     # 1. The installer creates a .gemini/bmad-method/ directory in your project.
-     # 2. It concatenates all agent files into a single GEMINI.md file.
-     # 3. Simply mention the agent in your prompt (e.g., "As *dev, ...").
-     # 4. The Gemini CLI will automatically have the context for that agent.
+     # 1. The installer creates a `BMad` folder in `.gemini/commands`.
+     # 2. This adds custom commands for each agent and task.
+     # 3. Type /BMad:agents:<agent-name> (e.g., "/BMad:agents:dev", "/BMad:agents:pm") or /BMad:tasks:<task-name> (e.g., "/BMad:tasks:create-doc").
+     # 4. The agent will adopt that persona for the conversation or perform the task.

  github-copilot:
    name: Github Copilot
    rule-dir: .github/chatmodes/
@@ -121,3 +121,44 @@ ide-configurations:
      # 2. It concatenates all agent files into a single QWEN.md file.
      # 3. Simply mention the agent in your prompt (e.g., "As *dev, ...").
      # 4. The Qwen Code CLI will automatically have the context for that agent.

  auggie-cli:
    name: Auggie CLI (Augment Code)
    format: multi-location
    locations:
      user:
        name: User Commands (Global)
        rule-dir: ~/.augment/commands/bmad/
        description: Available across all your projects (user-wide)
      workspace:
        name: Workspace Commands (Project)
        rule-dir: ./.augment/commands/bmad/
        description: Stored in your repository and shared with your team
    command-suffix: .md
    instructions: |
      # To use BMad agents in Auggie CLI (Augment Code):
      # 1. Type /bmad:agent-name (e.g., "/bmad:dev", "/bmad:pm", "/bmad:architect")
      # 2. The agent will adopt that persona for the conversation
      # 3. Commands are available based on your selected location(s)

  codex:
    name: Codex CLI
    format: project-memory
    file: AGENTS.md
    instructions: |
      # To use BMAD agents with Codex CLI:
      # 1. The installer updates/creates AGENTS.md at your project root with BMAD agents and tasks.
      # 2. Run `codex` in your project. Codex automatically reads AGENTS.md as project memory.
      # 3. Mention agents in your prompt (e.g., "As dev, please implement ...") or reference tasks.
      # 4. You can further customize global Codex behavior via ~/.codex/config.toml.

  codex-web:
    name: Codex Web Enabled
    format: project-memory
    file: AGENTS.md
    instructions: |
      # To enable BMAD agents for Codex Web (cloud):
      # 1. The installer updates/creates AGENTS.md and ensures `.bmad-core` is NOT ignored by git.
      # 2. Commit `.bmad-core/` and `AGENTS.md` to your repository.
      # 3. Open the repo in Codex Web and reference agents naturally (e.g., "As dev, ...").
      # 4. Re-run this installer to refresh agent sections when the core changes.

@@ -74,6 +74,15 @@ class IdeSetup extends BaseIdeSetup {
      case 'qwen-code': {
        return this.setupQwenCode(installDir, selectedAgent);
      }
+     case 'auggie-cli': {
+       return this.setupAuggieCLI(installDir, selectedAgent, spinner, preConfiguredSettings);
+     }
+     case 'codex': {
+       return this.setupCodex(installDir, selectedAgent, { webEnabled: false });
+     }
+     case 'codex-web': {
+       return this.setupCodex(installDir, selectedAgent, { webEnabled: true });
+     }
      default: {
        console.log(chalk.yellow(`\nIDE ${ide} not yet supported`));
        return false;
@@ -81,6 +90,175 @@
      }
    }
  }
  async setupCodex(installDir, selectedAgent, options) {
    options = options ?? { webEnabled: false };
    // Codex reads AGENTS.md at the project root as project memory (CLI & Web).
    // Inject/update a BMAD section with guidance, directory, and details.
    const filePath = path.join(installDir, 'AGENTS.md');
    const startMarker = '<!-- BEGIN: BMAD-AGENTS -->';
    const endMarker = '<!-- END: BMAD-AGENTS -->';

    const agents = selectedAgent ? [selectedAgent] : await this.getAllAgentIds(installDir);
    const tasks = await this.getAllTaskIds(installDir);

    // Build BMAD section content
    let section = '';
    section += `${startMarker}\n`;
    section += `# BMAD-METHOD Agents and Tasks\n\n`;
    section += `This section is auto-generated by BMAD-METHOD for Codex. Codex merges this AGENTS.md into context.\n\n`;
    section += `## How To Use With Codex\n\n`;
    section += `- Codex CLI: run \`codex\` in this project. Reference an agent naturally, e.g., "As dev, implement ...".\n`;
    section += `- Codex Web: open this repo and reference roles the same way; Codex reads \`AGENTS.md\`.\n`;
    section += `- Commit \`.bmad-core\` and this \`AGENTS.md\` file to your repo so Codex (Web/CLI) can read full agent definitions.\n`;
    section += `- Refresh this section after agent updates: \`npx bmad-method install -f -i codex\`.\n\n`;
    section += `### Helpful Commands\n\n`;
    section += `- List agents: \`npx bmad-method list:agents\`\n`;
    section += `- Reinstall BMAD core and regenerate AGENTS.md: \`npx bmad-method install -f -i codex\`\n`;
    section += `- Validate configuration: \`npx bmad-method validate\`\n\n`;

    // Agents directory table
    section += `## Agents\n\n`;
    section += `### Directory\n\n`;
    section += `| Title | ID | When To Use |\n|---|---|---|\n`;
    const agentSummaries = [];
    for (const agentId of agents) {
      const agentPath = await this.findAgentPath(agentId, installDir);
      if (!agentPath) continue;
      const raw = await fileManager.readFile(agentPath);
      const yamlMatch = raw.match(/```ya?ml\r?\n([\s\S]*?)```/);
      const yamlBlock = yamlMatch ? yamlMatch[1].trim() : null;
      const title = await this.getAgentTitle(agentId, installDir);
      const whenToUse = yamlBlock?.match(/whenToUse:\s*"?([^\n"]+)"?/i)?.[1]?.trim() || '';
      agentSummaries.push({ agentId, title, whenToUse, yamlBlock, raw, path: agentPath });
      section += `| ${title} | ${agentId} | ${whenToUse || '—'} |\n`;
    }
    section += `\n`;

    // Detailed agent sections
    for (const { agentId, title, whenToUse, yamlBlock, raw, path: agentPath } of agentSummaries) {
      const relativePath = path.relative(installDir, agentPath).replaceAll('\\', '/');
      section += `### ${title} (id: ${agentId})\n`;
      section += `Source: ${relativePath}\n\n`;
      if (whenToUse) section += `- When to use: ${whenToUse}\n`;
      section += `- How to activate: Mention "As ${agentId}, ..." or "Use ${title} to ..."\n\n`;
      if (yamlBlock) {
        section += '```yaml\n' + yamlBlock + '\n```\n\n';
      } else {
        section += '```md\n' + raw.trim() + '\n```\n\n';
      }
    }

    // Tasks
    if (tasks && tasks.length > 0) {
      section += `## Tasks\n\n`;
      section += `These are reusable task briefs you can reference directly in Codex.\n\n`;
      for (const taskId of tasks) {
        const taskPath = await this.findTaskPath(taskId, installDir);
        if (!taskPath) continue;
        const raw = await fileManager.readFile(taskPath);
        const relativePath = path.relative(installDir, taskPath).replaceAll('\\', '/');
        section += `### Task: ${taskId}\n`;
        section += `Source: ${relativePath}\n`;
        section += `- How to use: "Use task ${taskId} with the appropriate agent" and paste relevant parts as needed.\n\n`;
        section += '```md\n' + raw.trim() + '\n```\n\n';
      }
    }

    section += `${endMarker}\n`;

    // Write or update AGENTS.md
    let finalContent = '';
    if (await fileManager.pathExists(filePath)) {
      const existing = await fileManager.readFile(filePath);
      if (existing.includes(startMarker) && existing.includes(endMarker)) {
        // Replace existing BMAD block
        const pattern = String.raw`${startMarker}[\s\S]*?${endMarker}`;
        const replaced = existing.replace(new RegExp(pattern, 'm'), section);
        finalContent = replaced;
      } else {
        // Append BMAD block to existing file
        finalContent = existing.trimEnd() + `\n\n` + section;
      }
    } else {
      // Create fresh AGENTS.md with a small header and BMAD block
      finalContent += '# Project Agents\n\n';
      finalContent += 'This file provides guidance and memory for Codex CLI.\n\n';
      finalContent += section;
    }

    await fileManager.writeFile(filePath, finalContent);
    console.log(chalk.green('✓ Created/updated AGENTS.md for Codex CLI integration'));
    console.log(
      chalk.dim(
        'Codex reads AGENTS.md automatically. Run `codex` in this project to use BMAD agents.',
      ),
    );

    // Optionally add helpful npm scripts if a package.json exists
    try {
      const pkgPath = path.join(installDir, 'package.json');
      if (await fileManager.pathExists(pkgPath)) {
        const pkgRaw = await fileManager.readFile(pkgPath);
        const pkg = JSON.parse(pkgRaw);
        pkg.scripts = pkg.scripts || {};
        const updated = { ...pkg.scripts };
        if (!updated['bmad:refresh']) updated['bmad:refresh'] = 'bmad-method install -f -i codex';
        if (!updated['bmad:list']) updated['bmad:list'] = 'bmad-method list:agents';
        if (!updated['bmad:validate']) updated['bmad:validate'] = 'bmad-method validate';
        const changed = JSON.stringify(updated) !== JSON.stringify(pkg.scripts);
        if (changed) {
          const newPkg = { ...pkg, scripts: updated };
          await fileManager.writeFile(pkgPath, JSON.stringify(newPkg, null, 2) + '\n');
          console.log(chalk.green('✓ Added npm scripts: bmad:refresh, bmad:list, bmad:validate'));
        }
      }
    } catch {
      console.log(
        chalk.yellow('⚠︎ Skipped adding npm scripts (package.json not writable or invalid)'),
      );
    }

    // Adjust .gitignore behavior depending on Codex mode
    try {
      const gitignorePath = path.join(installDir, '.gitignore');
      const ignoreLines = ['# BMAD (local only)', '.bmad-core/', '.bmad-*/'];
      const exists = await fileManager.pathExists(gitignorePath);
      if (options.webEnabled) {
        if (exists) {
          let gi = await fileManager.readFile(gitignorePath);
          // Remove lines that ignore BMAD dot-folders
          const updated = gi
            .split(/\r?\n/)
            .filter((l) => !/^\s*\.bmad-core\/?\s*$/.test(l) && !/^\s*\.bmad-\*\/?\s*$/.test(l))
            .join('\n');
          if (updated !== gi) {
            await fileManager.writeFile(gitignorePath, updated.trimEnd() + '\n');
            console.log(chalk.green('✓ Updated .gitignore to include .bmad-core in commits'));
          }
        }
      } else {
        // Local-only: add ignores if missing
        let base = exists ? await fileManager.readFile(gitignorePath) : '';
        const haveCore = base.includes('.bmad-core/');
        const haveStar = base.includes('.bmad-*/');
        if (!haveCore || !haveStar) {
          const sep = base.endsWith('\n') || base.length === 0 ? '' : '\n';
          const add = [!haveCore || !haveStar ? ignoreLines.join('\n') : '']
            .filter(Boolean)
            .join('\n');
          const out = base + sep + add + '\n';
          await fileManager.writeFile(gitignorePath, out);
          console.log(chalk.green('✓ Added .bmad-core/* to .gitignore for local-only Codex setup'));
        }
      }
    } catch {
      console.log(chalk.yellow('⚠︎ Could not update .gitignore (skipping)'));
    }

    return true;
  }

  async setupCursor(installDir, selectedAgent) {
    const cursorRulesDir = path.join(installDir, '.cursor', 'rules', 'bmad');
    const agents = selectedAgent ? [selectedAgent] : await this.getAllAgentIds(installDir);
@@ -512,6 +690,7 @@
  async getCoreTaskIds(installDir) {
    const allTaskIds = [];
+   const glob = require('glob');

    // Check core tasks in .bmad-core or root only
    let tasksDir = path.join(installDir, '.bmad-core', 'tasks');
@@ -520,7 +699,6 @@
    }

    if (await fileManager.pathExists(tasksDir)) {
-     const glob = require('glob');
      const taskFiles = glob.sync('*.md', { cwd: tasksDir });
      allTaskIds.push(...taskFiles.map((file) => path.basename(file, '.md')));
    }
@@ -528,6 +706,7 @@
    // Check common tasks
    const commonTasksDir = path.join(installDir, 'common', 'tasks');
    if (await fileManager.pathExists(commonTasksDir)) {
+     const glob = require('glob');
      const commonTaskFiles = glob.sync('*.md', { cwd: commonTasksDir });
      allTaskIds.push(...commonTaskFiles.map((file) => path.basename(file, '.md')));
    }
@@ -1030,97 +1209,77 @@ class IdeSetup extends BaseIdeSetup {
    return true;
  }

-  async setupGeminiCli(installDir) {
-    const geminiDir = path.join(installDir, '.gemini');
-    const bmadMethodDir = path.join(geminiDir, 'bmad-method');
-    await fileManager.ensureDirectory(bmadMethodDir);
-    // Update logic for existing settings.json
-    const settingsPath = path.join(geminiDir, 'settings.json');
-    if (await fileManager.pathExists(settingsPath)) {
-      try {
-        const settingsContent = await fileManager.readFile(settingsPath);
-        const settings = JSON.parse(settingsContent);
-        let updated = false;
-        // Handle contextFileName property
-        if (settings.contextFileName && Array.isArray(settings.contextFileName)) {
-          const originalLength = settings.contextFileName.length;
-          settings.contextFileName = settings.contextFileName.filter(
-            (fileName) => !fileName.startsWith('agents/'),
-          );
-          if (settings.contextFileName.length !== originalLength) {
-            updated = true;
-          }
-        }
-        if (updated) {
-          await fileManager.writeFile(settingsPath, JSON.stringify(settings, null, 2));
-          console.log(
-            chalk.green('✓ Updated .gemini/settings.json - removed agent file references'),
-          );
-        }
-      } catch (error) {
-        console.warn(chalk.yellow('Could not update .gemini/settings.json'), error);
-      }
-    }
-    // Remove old agents directory
-    const agentsDir = path.join(geminiDir, 'agents');
-    if (await fileManager.pathExists(agentsDir)) {
-      await fileManager.removeDirectory(agentsDir);
-      console.log(chalk.green('✓ Removed old .gemini/agents directory'));
-    }
-    // Get all available agents
-    const agents = await this.getAllAgentIds(installDir);
-    let concatenatedContent = '';
+  async setupGeminiCli(installDir, selectedAgent) {
+    const ideConfig = await configLoader.getIdeConfiguration('gemini');
+    const bmadCommandsDir = path.join(installDir, ideConfig['rule-dir']);
+    const agentCommandsDir = path.join(bmadCommandsDir, 'agents');
+    const taskCommandsDir = path.join(bmadCommandsDir, 'tasks');
+    await fileManager.ensureDirectory(agentCommandsDir);
+    await fileManager.ensureDirectory(taskCommandsDir);
+    // Process Agents
+    const agents = selectedAgent ? [selectedAgent] : await this.getAllAgentIds(installDir);

    for (const agentId of agents) {
-      // Find the source agent file
      const agentPath = await this.findAgentPath(agentId, installDir);
-      if (agentPath) {
-        const agentContent = await fileManager.readFile(agentPath);
-        // Create properly formatted agent rule content (similar to trae)
-        let agentRuleContent = `# ${agentId.toUpperCase()} Agent Rule\n\n`;
-        agentRuleContent += `This rule is triggered when the user types \`*${agentId}\` and activates the ${await this.getAgentTitle(
-          agentId,
-          installDir,
-        )} agent persona.\n\n`;
-        agentRuleContent += '## Agent Activation\n\n';
-        agentRuleContent +=
-          'CRITICAL: Read the full YAML, start activation to alter your state of being, follow startup section instructions, stay in this being until told to exit this mode:\n\n';
-        agentRuleContent += '```yaml\n';
-        // Extract just the YAML content from the agent file
-        const yamlContent = extractYamlFromAgent(agentContent);
-        if (yamlContent) {
-          agentRuleContent += yamlContent;
-        } else {
-          // If no YAML found, include the whole content minus the header
-          agentRuleContent += agentContent.replace(/^#.*$/m, '').trim();
-        }
-        agentRuleContent += '\n```\n\n';
-        agentRuleContent += '## File Reference\n\n';
-        const relativePath = path.relative(installDir, agentPath).replaceAll('\\', '/');
-        agentRuleContent += `The complete agent definition is available in [${relativePath}](${relativePath}).\n\n`;
-        agentRuleContent += '## Usage\n\n';
-        agentRuleContent += `When the user types \`*${agentId}\`, activate this ${await this.getAgentTitle(
-          agentId,
-          installDir,
-        )} persona and follow all instructions defined in the YAML configuration above.\n`;
-        // Add to concatenated content with separator
-        concatenatedContent += agentRuleContent + '\n\n---\n\n';
-        console.log(chalk.green(`✓ Added context for @${agentId}`));
+      if (!agentPath) {
+        console.log(chalk.yellow(`✗ Agent file not found for ${agentId}, skipping.`));
+        continue;
      }
+      const agentTitle = await this.getAgentTitle(agentId, installDir);
+      const commandPath = path.join(agentCommandsDir, `${agentId}.toml`);
+      // Get relative path from installDir to agent file for @{file} reference
+      const relativeAgentPath = path.relative(installDir, agentPath).replaceAll('\\', '/');
+      const tomlContent = `description = "Activates the ${agentTitle} agent from the BMad Method."
+prompt = """
+CRITICAL: You are now the BMad '${agentTitle}' agent. Adopt its persona, follow its instructions, and use its capabilities. The full agent definition is below.
+@{${relativeAgentPath}}
+"""`;
+      await fileManager.writeFile(commandPath, tomlContent);
+      console.log(chalk.green(`✓ Created agent command: /bmad:agents:${agentId}`));
    }
-    // Write the concatenated content to GEMINI.md
-    const geminiMdPath = path.join(bmadMethodDir, 'GEMINI.md');
-    await fileManager.writeFile(geminiMdPath, concatenatedContent);
-    console.log(chalk.green(`\n✓ Created GEMINI.md in ${bmadMethodDir}`));
+    // Process Tasks
+    const tasks = await this.getAllTaskIds(installDir);
+    for (const taskId of tasks) {
+      const taskPath = await this.findTaskPath(taskId, installDir);
+      if (!taskPath) {
+        console.log(chalk.yellow(`✗ Task file not found for ${taskId}, skipping.`));
+        continue;
+      }
+      const taskTitle = taskId
+        .split('-')
+        .map((word) => word.charAt(0).toUpperCase() + word.slice(1))
+        .join(' ');
+      const commandPath = path.join(taskCommandsDir, `${taskId}.toml`);
+      // Get relative path from installDir to task file for @{file} reference
+      const relativeTaskPath = path.relative(installDir, taskPath).replaceAll('\\', '/');
+      const tomlContent = `description = "Executes the BMad Task: ${taskTitle}"
+prompt = """
+CRITICAL: You are to execute the BMad Task defined below.
+@{${relativeTaskPath}}
+"""`;
+      await fileManager.writeFile(commandPath, tomlContent);
+      console.log(chalk.green(`✓ Created task command: /bmad:tasks:${taskId}`));
+    }
+    console.log(
+      chalk.green(`
+Created Gemini CLI extension in ${bmadCommandsDir}`),
+    );
+    console.log(
+      chalk.dim('You can now use commands like /bmad:agents:dev or /bmad:tasks:create-doc.'),
+    );

    return true;
  }
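For orientation, each pass through the agent loop above writes one small TOML command file. A minimal sketch of the generated output, assuming a hypothetical `dev` agent titled "Developer" whose definition lives at `.bmad-core/agents/dev.md` (the title and path are illustrative, not taken from this diff), would look roughly like:

```toml
# <rule-dir>/agents/dev.toml — illustrative output of the writeFile call above
description = "Activates the Developer agent from the BMad Method."

prompt = """
CRITICAL: You are now the BMad 'Developer' agent. Adopt its persona, follow its instructions, and use its capabilities. The full agent definition is below.
@{.bmad-core/agents/dev.md}
"""
```

The installer then reports this file as the `/bmad:agents:dev` command, matching the log lines printed in the loops above.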
@@ -1436,6 +1595,96 @@ tools: ['changes', 'codebase', 'fetch', 'findTestFiles', 'githubRepo', 'problems
    console.log(chalk.dim(''));
    console.log(chalk.dim('You can modify these settings anytime in .vscode/settings.json'));
  }

+  async setupAuggieCLI(installDir, selectedAgent, spinner = null, preConfiguredSettings = null) {
+    const os = require('node:os');
+    const inquirer = require('inquirer');
+    const agents = selectedAgent ? [selectedAgent] : await this.getAllAgentIds(installDir);
+    // Get the IDE configuration to access location options
+    const ideConfig = await configLoader.getIdeConfiguration('auggie-cli');
+    const locations = ideConfig.locations;
+    // Use pre-configured settings if provided, otherwise prompt
+    let selectedLocations;
+    if (preConfiguredSettings && preConfiguredSettings.selectedLocations) {
+      selectedLocations = preConfiguredSettings.selectedLocations;
+      console.log(
+        chalk.dim(
+          `Using pre-configured Auggie CLI (Augment Code) locations: ${selectedLocations.join(', ')}`,
+        ),
+      );
+    } else {
+      // Pause spinner during location selection to avoid UI conflicts
+      let spinnerWasActive = false;
+      if (spinner && spinner.isSpinning) {
+        spinner.stop();
+        spinnerWasActive = true;
+      }
+      // Clear any previous output and add spacing to avoid conflicts with loaders
+      console.log('\n'.repeat(2));
+      console.log(chalk.blue('📍 Auggie CLI Location Configuration'));
+      console.log(chalk.dim('Choose where to install BMad agents for Auggie CLI access.'));
+      console.log(''); // Add extra spacing
+      const response = await inquirer.prompt([
+        {
+          type: 'checkbox',
+          name: 'selectedLocations',
+          message: 'Select Auggie CLI command locations:',
+          choices: Object.entries(locations).map(([key, location]) => ({
+            name: `${location.name}: ${location.description}`,
+            value: key,
+          })),
+          validate: (selected) => {
+            if (selected.length === 0) {
+              return 'Please select at least one location';
+            }
+            return true;
+          },
+        },
+      ]);
+      selectedLocations = response.selectedLocations;
+      // Restart spinner if it was active before prompts
+      if (spinner && spinnerWasActive) {
+        spinner.start();
+      }
+    }
+    // Install to each selected location
+    for (const locationKey of selectedLocations) {
+      const location = locations[locationKey];
+      let commandsDir = location['rule-dir'];
+      // Handle tilde expansion for user directory
+      if (commandsDir.startsWith('~/')) {
+        commandsDir = path.join(os.homedir(), commandsDir.slice(2));
+      } else if (commandsDir.startsWith('./')) {
+        commandsDir = path.join(installDir, commandsDir.slice(2));
+      }
+      await fileManager.ensureDirectory(commandsDir);
+      for (const agentId of agents) {
+        // Find the agent file
+        const agentPath = await this.findAgentPath(agentId, installDir);
+        if (agentPath) {
+          const agentContent = await fileManager.readFile(agentPath);
+          const mdPath = path.join(commandsDir, `${agentId}.md`);
+          await fileManager.writeFile(mdPath, agentContent);
+          console.log(chalk.green(`✓ Created command: ${agentId}.md in ${location.name}`));
+        }
+      }
+      console.log(chalk.green(`\n✓ Created Auggie CLI commands in ${commandsDir}`));
+      console.log(chalk.dim(` Location: ${location.name} - ${location.description}`));
+    }
+    return true;
+  }
}

module.exports = new IdeSetup();
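A note on the Auggie CLI location handling above: a location's `rule-dir` beginning with `~/` is expanded against `os.homedir()`, while one beginning with `./` is resolved against the install directory. So a hypothetical global entry of `~/bmad-commands` would land in the user's home directory, and a hypothetical project entry of `./bmad-commands` would land inside the installation; in either case each selected agent is copied verbatim as `<agentId>.md` into that directory.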

View File

@@ -408,7 +408,12 @@ class Installer {
      if (ides.length > 0) {
        for (const ide of ides) {
          spinner.text = `Setting up ${ide} integration...`;
-         const preConfiguredSettings = ide === 'github-copilot' ? config.githubCopilotConfig : null;
+         let preConfiguredSettings = null;
+         if (ide === 'github-copilot') {
+           preConfiguredSettings = config.githubCopilotConfig;
+         } else if (ide === 'auggie-cli') {
+           preConfiguredSettings = config.augmentCodeConfig;
+         }
          await ideSetup.setup(ide, installDir, config.agent, spinner, preConfiguredSettings);
        }
      }

View File

@@ -1,12 +1,12 @@
 {
   "name": "bmad-method",
-  "version": "4.37.0-beta.4",
+  "version": "4.42.1",
   "lockfileVersion": 3,
   "requires": true,
   "packages": {
     "": {
       "name": "bmad-method",
-      "version": "4.37.0-beta.4",
+      "version": "4.42.1",
       "license": "MIT",
       "dependencies": {
         "chalk": "^4.1.2",

View File

@@ -1,6 +1,6 @@
 {
   "name": "bmad-method",
-  "version": "4.39.1",
+  "version": "4.42.1",
   "description": "BMad Method installer - AI-powered Agile development framework",
   "keywords": [
     "bmad",

37
tools/setup-hooks.sh Executable file
View File

@@ -0,0 +1,37 @@
#!/bin/bash

# Setup script for git hooks
echo "Setting up git hooks..."

# Install husky
npm install --save-dev husky

# Initialize husky
npx husky init

# Create pre-commit hook
cat > .husky/pre-commit << 'EOF'
#!/usr/bin/env sh
. "$(dirname -- "$0")/_/husky.sh"

# Run validation checks before commit
echo "Running pre-commit checks..."

# Chain the checks so a failure in any one of them is caught by the status test below
npm run validate && npm run format:check && npm run lint

if [ $? -ne 0 ]; then
  echo "❌ Pre-commit checks failed. Please fix the issues before committing."
  echo " Run 'npm run format' to fix formatting issues"
  echo " Run 'npm run lint:fix' to fix some lint issues"
  exit 1
fi

echo "✅ Pre-commit checks passed!"
EOF

chmod +x .husky/pre-commit

echo "✅ Git hooks setup complete!"
echo "Now commits will be validated before they're created."