Compare commits

...

52 Commits

Author SHA1 Message Date
Alex Verkhovsky b24df615d6
Merge 59ed596392 into 274dea16fa 2026-01-15 07:40:41 +03:00
Nwokoma Chukwuma U. 274dea16fa
Fix YAML indentation in kilo.js customInstructions field (#1291)
Co-authored-by: Brian <bmadcode@gmail.com>
2026-01-14 21:26:10 -06:00
Kevin Heidt dcd581c84a
Fix glob pattern to use forward slashes (#1241)
Normalize source directory path for glob pattern compatibility.

Reviewed-by: Alex Verkhovsky <alexey.verkhovsky@gmail.com>
2026-01-14 21:16:23 -06:00
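A minimal sketch of the normalization this fix implies (hypothetical helper and names; glob treats backslashes as escape characters, so Windows-style paths must be converted to forward slashes):

```js
import path from 'node:path';

// Hypothetical helper: path.join produces backslashes on Windows,
// but glob patterns require forward slashes on every platform.
function toGlobPattern(sourceDir, suffix = '**/*') {
  return path.join(sourceDir, suffix).split(path.sep).join('/');
}

// On Windows: toGlobPattern('C:\\bmad\\src') -> 'C:/bmad/src/**/*'
```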
Murat K Ozcan 6d84a60a78
docs: tea entry points and resume tip (#1246)
Co-authored-by: Brian <bmadcode@gmail.com>
2026-01-14 21:13:48 -06:00
Eduard Voiculescu 59e1b7067c
remove "Remember the users name is {user_name}"; it is already present in activation-steps.txt (#1315) 2026-01-14 21:04:43 -06:00
sjennings 1d8df63ac5
feat(bmgd): Add E2E testing methodology and scaffold workflow (#1322)
* feat(bmgd): Add E2E testing methodology and scaffold workflow

- Add comprehensive e2e-testing.md knowledge fragment
- Add e2e-scaffold workflow for infrastructure generation
- Update qa-index.csv with e2e-testing fragment reference
- Update game-qa.agent.yaml with ES trigger
- Update test-design and automate instructions with E2E guidance
- Update unity-testing.md with E2E section reference

* fix(bmgd): improve E2E testing infrastructure robustness

- Add WaitForValueApprox overloads for float/double comparisons
- Fix assembly definition to use precompiledReferences for test runners
- Fix CaptureOnFailure to yield before screenshot capture (main thread)
- Add error handling to test file cleanup with try/catch
- Fix ClickButton to use FindObjectsByType and check scene.isLoaded
- Add engine-specific output paths (Unity/Unreal/Godot) to workflow
- Fix knowledge_fragments paths to use correct relative paths

* feat(bmgd): add E2E testing support for Godot and Unreal

Godot:
- Add C# testing with xUnit/NSubstitute alongside GDScript GUT
- Add E2E infrastructure: GameE2ETestFixture, ScenarioBuilder,
  InputSimulator, AsyncAssert (all GDScript)
- Add example E2E tests and quick checklist

Unreal:
- Add E2E infrastructure extending AFunctionalTest
- Add GameE2ETestBase, ScenarioBuilder, InputSimulator classes
- Add AsyncTestHelpers with latent commands and macros
- Add example E2E tests for combat and turn cycle
- Add CLI commands for running E2E tests

---------

Co-authored-by: Scott Jennings <scott.jennings+CIGINT@cloudimperiumgames.com>
Co-authored-by: Brian <bmadcode@gmail.com>
2026-01-14 20:53:40 -06:00
VJSai 993d02b8b3
Enhance security policy documentation (#1312)
Expanded the security policy to include supported versions, reporting guidelines, response timelines, security scope, and best practices for users.

Co-authored-by: Alex Verkhovsky <alexey.verkhovsky@gmail.com>
2026-01-14 16:27:52 -06:00
Davor Racic 5cb5606ba3
fix(cli): replace inquirer with @clack/prompts for Windows compatibility (#1316)
* fix(cli): replace inquirer with @clack/prompts for Windows compatibility

- Add new prompts.js wrapper around @clack/prompts to fix Windows arrow
  key navigation issues (libuv #852)
- Fix validation logic in github-copilot.js that always returned true
- Add support for primitive choice values (string/number) in select/multiselect
- Add 'when' property support for conditional questions in prompt()
- Update all IDE installers to use new prompts module

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* fix(cli): address code review feedback for prompts migration

- Move @clack/prompts from devDependencies to dependencies (critical)
- Remove unused inquirer dependency
- Fix potential crash in multiselect when initialValues is undefined
- Add async validator detection with explicit error message
- Extract validateCustomContentPathSync method in ui.js
- Extract promptInstallLocation methods in claude-code.js and antigravity.js
- Fix moduleId -> missing.id in installer.js remove flow
- Update multiselect to support native clack API (options/initialValues)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* chore: update comments to reference @clack/prompts instead of inquirer

- Update bmad-cli.js comment about CLI prompts
- Update config-collector.js JSDoc comments
- Rename inquirer variable to choiceUtils in ui.js
- Update JSDoc returns and calls documentation

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* fix(cli): add spacing between prompts and installation progress

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* fix(cli): add multiselect usage hints for inexperienced users

Add inline navigation hints to all multiselect prompts showing
(↑/↓ navigate, SPACE select, ENTER confirm) to help users
unfamiliar with terminal multiselect controls.

Also restore detailed warning when no tools are selected,
explaining that SPACE must be pressed to select items.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* feat(cli): restore IDE grouping using groupMultiselect

Replace flat multiselect with native @clack/prompts groupMultiselect
component to restore visual grouping of IDE/tool options:
- "Previously Configured" - pre-selected IDEs from existing install
- "Recommended Tools" - starred preferred options
- "Additional Tools" - other available options

This restores the grouped UX that was lost during the Inquirer.js
to @clack/prompts migration.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 16:25:35 -06:00
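As an illustration of the grouped picker this PR describes, a hedged sketch using the @clack/prompts groupMultiselect API (group labels and option values here are illustrative, not the installer's actual data):

```js
import { groupMultiselect, isCancel, cancel } from '@clack/prompts';

// Sketch only: the installer's real option lists come from its IDE registry.
const selected = await groupMultiselect({
  message: 'Select tools (↑/↓ navigate, SPACE select, ENTER confirm)',
  options: {
    'Previously Configured': [{ value: 'claude-code', label: 'Claude Code' }],
    'Recommended Tools': [{ value: 'cursor', label: 'Cursor (recommended)' }],
    'Additional Tools': [{ value: 'windsurf', label: 'Windsurf' }],
  },
  initialValues: ['claude-code'], // pre-select previously configured IDEs
});

if (isCancel(selected)) {
  cancel('Installation cancelled.');
  process.exit(0);
}
```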
Alex Verkhovsky 59ed596392 refactor(code-review): swap phase order - adversarial first, context-aware second
Reorder dual-phase review so adversarial diff review runs before
context-aware review. This ensures fresh-eyes code quality checks
happen before story-biased validation.
2026-01-06 23:29:41 -08:00
Alex Verkhovsky a2458a5537
Merge branch 'main' into refactor/code-review-sharded-dual-phase 2026-01-06 19:29:31 -08:00
Alex Verkhovsky b73670700b docs(code-review): expand story file validation to include empty and malformed files 2026-01-05 07:39:56 -08:00
Alex Verkhovsky 5a16c3a102 fix(code-review): halt on git command failure instead of silently treating as NO_GIT 2026-01-05 07:27:12 -08:00
Alex Verkhovsky 58e0b6a634 docs(code-review): add generic error handling for git commands 2026-01-05 07:26:50 -08:00
Alex Verkhovsky 2785d382d5 docs(code-review): use {sprint_status} variable instead of expanded path 2026-01-05 07:24:11 -08:00
Alex Verkhovsky 551a2ccb53 docs(code-review): use variable reference for sprint-status path 2026-01-05 07:21:44 -08:00
Alex Verkhovsky 3fc411d9c9 docs(code-review): clarify sprint_status file definition and location 2026-01-05 07:19:31 -08:00
Alex Verkhovsky ec30b580e7 refactor(code-review): use Skip to for flow control directive in substep 4
Skip to substep 5 correctly communicates jumping past the rest of the git
discovery logic in substep 4 when git repo is not found. Proceed would
suggest normal sequential flow, but we are skipping the conditional branch.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-05 07:13:41 -08:00
Alex Verkhovsky 9e6e991b53 fix(code-review): correct flow control directive in substep 4
Changed "Skip to substep 6" (which does not exist) to "Proceed to substep 5".
Step only has 5 substeps. After setting NO_GIT flag, workflow continues to
substep 5 (Cross-Reference Story vs Git), not to a non-existent substep 6.

Fixes h2 finding from adversarial review.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-05 07:09:56 -08:00
Alex Verkhovsky dbdaae1be7 refactor(code-review): remove NEXT directive from completion checklist
The checklist validates work done DURING step execution.
The NEXT directive is OUTPUT of completion, not a validation criterion.
It happens AFTER the checklist passes, so it does not belong there.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-05 06:34:33 -08:00
Alex Verkhovsky 1636bd5a55 refactor(code-review): remove redundant 'immediately' from halt instruction
'Immediately' is implied by HALT. No timing choice exists.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-05 06:33:05 -08:00
Alex Verkhovsky 53045d35b1 refactor(code-review): move NEXT STEP DIRECTIVE after COMPLETION CHECKLIST
Logical flow: verify checklist → then declare next step
Not: declare next step → then verify checklist

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-05 06:31:19 -08:00
Alex Verkhovsky b3643af6dc refactor(code-review): remove redundancy and clarify halt instruction
- Remove redundant "Do NOT proceed to the next step" (halt already means this)
- Change "item" to "criterion" (more precise terminology)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-05 06:30:31 -08:00
Alex Verkhovsky 4ba6e19303 refactor(code-review): rename SUCCESS METRICS to COMPLETION CHECKLIST
Correct terminology:
- "Metrics" implies quantitative measurement
- These are actually pass/fail criteria for step completion
- Section is self-validation checklist, not measurement data

Reframe as checkpoint before proceeding to next step:
- Add "Before proceeding to the next step, verify ALL of the following:"
- Change "If any metric" to "If any item"
- Explicit instruction: "Do NOT proceed to the next step" if checklist fails

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-05 06:29:52 -08:00
Alex Verkhovsky 38ab12da85 refactor(code-review): remove cargo cult failure modes repetition from step-01
FAILURE MODES section was just inverted SUCCESS METRICS. Not valuable.
Replaced with single catch-all statement: failure to meet any success metric = failure.

Let actual failure modes emerge from usage patterns, not speculation.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-05 06:17:37 -08:00
Alex Verkhovsky 0ae6799cb6 refactor(code-review): remove project context loading from step-01
Step-01 focus is: load story + discover git changes. Nothing else.

Project context loading belongs in step-04 (Context-Aware Review) where it
provides audit rules, principles, and requirements for validating AC
implementation against project standards.

(See implementation-notes.md for detail)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-05 06:04:34 -08:00
Alex Verkhovsky e479b4164c refactor(code-review): add checkpoint for empty git changes and exclude ignored files
Step-01 substep 5:
- If no git changes detected: halt and ask user "Continue anyway?"
  Allows AC audit on story File List even if no code changes in git
- Exclude git-ignored files from discrepancy comparison
  Prevents false positives if story modified only ignored files

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-05 06:02:17 -08:00
Alex Verkhovsky 71a1c325f7 refactor(code-review): add rename detection to git change discovery
Step-01 substep-4:
- Use git diff -M to detect renamed/moved files
- Include deleted, renamed files in git_changed_files
- Adversarial reviewer needs to see deletions (e.g., critical code removed)
- Downstream steps will handle these appropriately (documented in implementation-notes)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-05 05:54:32 -08:00
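For context, the standard git invocation behind rename detection looks like this (a sketch; the baseline placeholder is whatever commit the workflow captured):

```bash
# -M enables rename detection; --name-status prefixes each path with
# A (added), M (modified), D (deleted), or R<score> (renamed).
git diff -M --name-status <baseline-commit>..HEAD
```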
Alex Verkhovsky 59c58b2e2c refactor(code-review): clean up step-01 substep 3 and add error handling
Substep 3 (Extract File List):
- Removed repetitive wording
- Reference {story_content} variable instead of generic "story file"
- Add error handling: if Dev Agent Record/File List not found, set story_file_list = NO_FILE_LIST
- Consistent with NO_GIT pattern used elsewhere

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-05 05:47:46 -08:00
Alex Verkhovsky 18ac3c931a refactor(code-review): audit step-01 substeps and success/failure criteria
Step 01 audit findings:
- Substep 3 was extracting items not needed by step-01 (ACs, tasks, changelog)
  Trimmed to only extract story_file_list (needed for git comparison)
- Success/failure criteria now explicitly guard story_content completeness
  since downstream steps depend on the full file content
- Removed "downstream" jargon in favor of "later steps"

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-05 05:42:04 -08:00
Alex Verkhovsky 8fc7db7b97 refactor(code-review): remove implementation notes from step-01
Implementation notes for the workflow should be collected in a dedicated
implementation-notes.md file, not embedded in step files. This keeps each
step focused and defers editorial comments to a separate tracking document.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-05 05:33:27 -08:00
Alex Verkhovsky 060d5562a4 docs(code-review): clarify fuzzy matching for story identification
- Changed priority 1 from exact to resembles: handles format variations (1 2, 1.2, one-two, one thirty two)
- Explicitly prevents false matches: 1-33 does not match 1-32
- Updated priority 3-4 to use resembles instead of contains: supports typos and TTS errors (paiment, passwd)
- Added examples for number variations and compound spoken formats
- Tested with agent validation: handles typos, format variations, misspellings correctly
2026-01-05 04:24:29 -08:00
Alex Verkhovsky 2bd6e9df1b docs(code-review): clarify step-01 story identification algorithm
- Fixed variable naming convention: backticks for names, curlies only for value substitution
- Rewrote Identify Story section with explicit two-path algorithm (file path vs sprint_status search)
- Added verification step for files not in sprint_status with user confirmation flow
- Clarified matching priority order: exact key > full ID > partial > name > description
- Made loopback instructions consistent and explicit (return to user prompt)
- Improved git_discrepancies description from vague "differences" to concrete "mismatches"
- Tested with 30+ test cases and fresh agent review - algorithm is clear and executable
2026-01-05 04:14:14 -08:00
Alex Verkhovsky 6886e3c8cd refactor(code-review): clarify step-01 description and NO_GIT handling 2026-01-05 02:54:00 -08:00
Alex Verkhovsky 1f5700ea14 refactor(code-review): remove unused thisStepFile/nextStepFile from frontmatter 2026-01-05 02:37:00 -08:00
Alex Verkhovsky 9700da9dc6 refactor(code-review): remove input_file_patterns from workflow.md to prevent context leak 2026-01-05 01:14:37 -08:00
Alex Verkhovsky 0f18c4bcba refactor(code-review): replace discover_inputs protocol with explicit file loading 2026-01-05 01:12:35 -08:00
Alex Verkhovsky ae9b83388c refactor(code-review): reorder phases - adversarial first, context-aware second
- Swap step-03 and step-04: adversarial review now runs before context-aware
- Move discover_inputs from step-01 to step-04 (JIT loading)
- Add input_file_patterns to workflow.md frontmatter
- Adversarial runs lean (just diff + code), context-aware loads planning docs
2026-01-05 01:09:38 -08:00
Alex Verkhovsky 64c32d8c8c refactor(code-review): add web_bundle: false, use "Read and follow" wording
- Add web_bundle: false to frontmatter (workflow needs file access)
- Change "Load and execute" to "Read and follow" (clearer for LLMs)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-05 00:39:17 -08:00
Alex Verkhovsky eae4ad46a1 refactor(code-review): remove unused validation path and checklist 2026-01-04 21:38:04 -08:00
Alex Verkhovsky a8758b0393 refactor(code-review): remove CRITICAL DIRECTIVES, add communication_language 2026-01-04 21:32:10 -08:00
Alex Verkhovsky ac081a27e8 docs(code-review): clarify step file loading in workflow architecture 2026-01-04 21:15:04 -08:00
Alex Verkhovsky 7c914ae8b2 refactor(code-review): inline single-use adversarial task path 2026-01-04 21:05:48 -08:00
Alex Verkhovsky dadca29b09 refactor(code-review): use installed_path variable in step files 2026-01-04 21:00:18 -08:00
Alex Verkhovsky 25f93a3b64 refactor(code-review): simplify workflow.md 2026-01-04 20:59:58 -08:00
Alex Verkhovsky 0f708d0b89 refactor(core): shorten adversarial review task name 2026-01-04 18:28:51 -08:00
Alex Verkhovsky 5fcdae02b5 refactor(code-review): defer finding IDs until consolidation 2026-01-04 05:33:21 -08:00
Alex Verkhovsky b8eeb78cff refactor(adversarial-review): simplify severity/validity classification 2026-01-04 04:13:46 -08:00
Alex Verkhovsky b628eec9fd refactor(code-review): simplify adversarial review task invocation 2026-01-04 04:07:23 -08:00
Alex Verkhovsky f5d949b922 feat(dev-story): capture baseline commit for code-review diff 2026-01-04 03:04:56 -08:00
Alex Verkhovsky 6d1d7d0e72 fix(adversarial-review): add tech-spec exclusion and read-only notes 2026-01-04 02:12:02 -08:00
Alex Verkhovsky 8b6a053d2e fix(code-review): simplify diff exclusion to implementation_artifacts only 2026-01-04 01:41:20 -08:00
Alex Verkhovsky 460c27e29a refactor(code-review): convert to sharded format with dual-phase review
Convert monolithic code-review workflow to step-file architecture:
- workflow.md: Overview and initialization
- step-01: Load story and discover git changes
- step-02: Build review attack plan
- step-03: Context-aware review (validates ACs, audits tasks)
- step-04: Adversarial review (information-asymmetric diff review)
- step-05: Consolidate findings (merge + deduplicate)
- step-06: Resolve findings and update status

Key features:
- Dual-phase review: context-aware + context-independent adversarial
- Information asymmetry: adversarial reviewer sees only diff, no story
- Uses review-adversarial-general.xml via subagent (with fallbacks)
- Findings consolidation with severity (CRITICAL/HIGH/MEDIUM/LOW)
- State variables for cross-step persistence

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-04 01:07:58 -08:00
42 changed files with 6281 additions and 1117 deletions

SECURITY.md (new file, 85 lines)

@@ -0,0 +1,85 @@
# Security Policy
## Supported Versions
We release security patches for the following versions:
| Version | Supported |
| ------- | ------------------ |
| Latest | :white_check_mark: |
| < Latest | :x: |
We recommend always using the latest version of BMad Method to ensure you have the most recent security updates.
## Reporting a Vulnerability
We take security vulnerabilities seriously. If you discover a security issue, please report it responsibly.
### How to Report
**Do NOT report security vulnerabilities through public GitHub issues.**
Instead, please report them via one of these methods:
1. **GitHub Security Advisories** (Preferred): Use [GitHub's private vulnerability reporting](https://github.com/bmad-code-org/BMAD-METHOD/security/advisories/new) to submit a confidential report.
2. **Discord**: Contact a maintainer directly via DM on our [Discord server](https://discord.gg/gk8jAdXWmj).
### What to Include
Please include as much of the following information as possible:
- Type of vulnerability (e.g., prompt injection, path traversal, etc.)
- Full paths of source file(s) related to the vulnerability
- Step-by-step instructions to reproduce the issue
- Proof-of-concept or exploit code (if available)
- Impact assessment of the vulnerability
### Response Timeline
- **Initial Response**: Within 48 hours of receiving your report
- **Status Update**: Within 7 days with our assessment
- **Resolution Target**: Critical issues within 30 days; other issues within 90 days
### What to Expect
1. We will acknowledge receipt of your report
2. We will investigate and validate the vulnerability
3. We will work on a fix and coordinate disclosure timing with you
4. We will credit you in the security advisory (unless you prefer to remain anonymous)
## Security Scope
### In Scope
- Vulnerabilities in BMad Method core framework code
- Security issues in agent definitions or workflows that could lead to unintended behavior
- Path traversal or file system access issues
- Prompt injection vulnerabilities that bypass intended agent behavior
- Supply chain vulnerabilities in dependencies
### Out of Scope
- Security issues in user-created custom agents or modules
- Vulnerabilities in third-party AI providers (Claude, GPT, etc.)
- Issues that require physical access to a user's machine
- Social engineering attacks
- Denial of service attacks that don't exploit a specific vulnerability
## Security Best Practices for Users
When using BMad Method:
1. **Review Agent Outputs**: Always review AI-generated code before executing it
2. **Limit File Access**: Configure your AI IDE to limit file system access where possible
3. **Keep Updated**: Regularly update to the latest version
4. **Validate Dependencies**: Review any dependencies added by generated code
5. **Environment Isolation**: Consider running AI-assisted development in isolated environments
## Acknowledgments
We appreciate the security research community's efforts in helping keep BMad Method secure. Contributors who report valid security issues will be acknowledged in our security advisories.
---
Thank you for helping keep BMad Method and our community safe.

package-lock.json (generated, 188 changed lines)

@@ -19,7 +19,6 @@
"fs-extra": "^11.3.0",
"glob": "^11.0.3",
"ignore": "^7.0.5",
"inquirer": "^9.3.8",
"js-yaml": "^4.1.0",
"ora": "^5.4.1",
"semver": "^7.6.3",
@@ -34,6 +33,7 @@
"devDependencies": {
"@astrojs/sitemap": "^3.6.0",
"@astrojs/starlight": "^0.37.0",
"@clack/prompts": "^0.11.0",
"@eslint/js": "^9.33.0",
"archiver": "^7.0.1",
"astro": "^5.16.0",
@@ -244,7 +244,6 @@
"integrity": "sha512-e7jT4DxYvIDLk1ZHmU/m/mB19rex9sv0c2ftBtjSBv+kVM/902eh0fINUzD7UwLLNR+jU585GxUJ8/EBfAM5fw==",
"dev": true,
"license": "MIT",
"peer": true,
"dependencies": {
"@babel/code-frame": "^7.27.1",
"@babel/generator": "^7.28.5",
@@ -756,6 +755,29 @@
"node": ">=18"
}
},
"node_modules/@clack/core": {
"version": "0.5.0",
"resolved": "https://registry.npmjs.org/@clack/core/-/core-0.5.0.tgz",
"integrity": "sha512-p3y0FIOwaYRUPRcMO7+dlmLh8PSRcrjuTndsiA0WAFbWES0mLZlrjVoBRZ9DzkPFJZG6KGkJmoEAY0ZcVWTkow==",
"dev": true,
"license": "MIT",
"dependencies": {
"picocolors": "^1.0.0",
"sisteransi": "^1.0.5"
}
},
"node_modules/@clack/prompts": {
"version": "0.11.0",
"resolved": "https://registry.npmjs.org/@clack/prompts/-/prompts-0.11.0.tgz",
"integrity": "sha512-pMN5FcrEw9hUkZA4f+zLlzivQSeQf5dRGJjSUbvVYDLvpKCdQx5OaknvKzgbtXOizhP+SJJJjqEbOe55uKKfAw==",
"dev": true,
"license": "MIT",
"dependencies": {
"@clack/core": "0.5.0",
"picocolors": "^1.0.0",
"sisteransi": "^1.0.5"
}
},
"node_modules/@colors/colors": {
"version": "1.5.0",
"resolved": "https://registry.npmjs.org/@colors/colors/-/colors-1.5.0.tgz",
@@ -1998,36 +2020,6 @@
"url": "https://opencollective.com/libvips"
}
},
"node_modules/@inquirer/external-editor": {
"version": "1.0.3",
"resolved": "https://registry.npmjs.org/@inquirer/external-editor/-/external-editor-1.0.3.tgz",
"integrity": "sha512-RWbSrDiYmO4LbejWY7ttpxczuwQyZLBUyygsA9Nsv95hpzUWwnNTVQmAq3xuh7vNwCp07UTmE5i11XAEExx4RA==",
"license": "MIT",
"dependencies": {
"chardet": "^2.1.1",
"iconv-lite": "^0.7.0"
},
"engines": {
"node": ">=18"
},
"peerDependencies": {
"@types/node": ">=18"
},
"peerDependenciesMeta": {
"@types/node": {
"optional": true
}
}
},
"node_modules/@inquirer/figures": {
"version": "1.0.15",
"resolved": "https://registry.npmjs.org/@inquirer/figures/-/figures-1.0.15.tgz",
"integrity": "sha512-t2IEY+unGHOzAaVM5Xx6DEWKeXlDDcNPeDyUpsRc6CUhBfU3VQOEl+Vssh7VNp1dR8MdUJBWhuObjXCsVpjN5g==",
"license": "MIT",
"engines": {
"node": ">=18"
}
},
"node_modules/@isaacs/balanced-match": {
"version": "4.0.1",
"resolved": "https://registry.npmjs.org/@isaacs/balanced-match/-/balanced-match-4.0.1.tgz",
@@ -3641,9 +3633,8 @@
"version": "25.0.3",
"resolved": "https://registry.npmjs.org/@types/node/-/node-25.0.3.tgz",
"integrity": "sha512-W609buLVRVmeW693xKfzHeIV6nJGGz98uCPfeXI1ELMLXVeKYZ9m15fAMSaUPBHYLGFsVRcMmSCksQOrZV9BYA==",
"devOptional": true,
"dev": true,
"license": "MIT",
"peer": true,
"dependencies": {
"undici-types": "~7.16.0"
}
@@ -3983,7 +3974,6 @@
"integrity": "sha512-NZyJarBfL7nWwIq+FDL6Zp/yHEhePMNnnJ0y3qfieCrmNvYct8uvtiV41UvlSe6apAfk0fY1FbWx+NwfmpvtTg==",
"dev": true,
"license": "MIT",
"peer": true,
"bin": {
"acorn": "bin/acorn"
},
@@ -4031,6 +4021,7 @@
"version": "4.3.2",
"resolved": "https://registry.npmjs.org/ansi-escapes/-/ansi-escapes-4.3.2.tgz",
"integrity": "sha512-gKXj5ALrKWQLsYG9jlTRmR/xKluxHV+Z9QEwNIgCfM1/uwPMCuzVVnh5mwTd+OuBZcwSIMbqssNWRm1lE51QaQ==",
"dev": true,
"license": "MIT",
"dependencies": {
"type-fest": "^0.21.3"
@@ -4046,6 +4037,7 @@
"version": "0.21.3",
"resolved": "https://registry.npmjs.org/type-fest/-/type-fest-0.21.3.tgz",
"integrity": "sha512-t0rzBq87m3fVcduHDUFhKmyyX+9eo6WQjZvf51Ea/M0Q7+T374Jp1aUiyUl0GKxp8M/OETVHSDvmkyPgvX+X2w==",
"dev": true,
"license": "(MIT OR CC0-1.0)",
"engines": {
"node": ">=10"
@@ -4290,7 +4282,6 @@
"integrity": "sha512-6mF/YrvwwRxLTu+aMEa5pwzKUNl5ZetWbTyZCs9Um0F12HUmxUiF5UHiZPy4rifzU3gtpM3xP2DfdmkNX9eZRg==",
"dev": true,
"license": "MIT",
"peer": true,
"dependencies": {
"@astrojs/compiler": "^2.13.0",
"@astrojs/internal-helpers": "0.7.5",
@@ -5358,7 +5349,6 @@
}
],
"license": "MIT",
"peer": true,
"dependencies": {
"baseline-browser-mapping": "^2.9.0",
"caniuse-lite": "^1.0.30001759",
@@ -5601,12 +5591,6 @@
"url": "https://github.com/sponsors/wooorm"
}
},
"node_modules/chardet": {
"version": "2.1.1",
"resolved": "https://registry.npmjs.org/chardet/-/chardet-2.1.1.tgz",
"integrity": "sha512-PsezH1rqdV9VvyNhxxOW32/d75r01NY7TQCmOqomRo15ZSOKbpTFVsfjghxo6JloQUCGnH4k1LGu0R4yCLlWQQ==",
"license": "MIT"
},
"node_modules/chokidar": {
"version": "4.0.3",
"resolved": "https://registry.npmjs.org/chokidar/-/chokidar-4.0.3.tgz",
@@ -5787,15 +5771,6 @@
"url": "https://github.com/chalk/strip-ansi?sponsor=1"
}
},
"node_modules/cli-width": {
"version": "4.1.0",
"resolved": "https://registry.npmjs.org/cli-width/-/cli-width-4.1.0.tgz",
"integrity": "sha512-ouuZd4/dm2Sw5Gmqy6bGyNNNe1qt9RpmxveLSO7KcgsTnU7RXfsw+/bukWGo1abgBiMAic068rclZsO4IWmmxQ==",
"license": "ISC",
"engines": {
"node": ">= 12"
}
},
"node_modules/cliui": {
"version": "8.0.1",
"resolved": "https://registry.npmjs.org/cliui/-/cliui-8.0.1.tgz",
@@ -6689,7 +6664,6 @@
"integrity": "sha512-LEyamqS7W5HB3ujJyvi0HQK/dtVINZvd5mAAp9eT5S/ujByGjiZLCzPcHVzuXbpJDJF/cxwHlfceVUDZ2lnSTw==",
"dev": true,
"license": "MIT",
"peer": true,
"dependencies": {
"@eslint-community/eslint-utils": "^4.8.0",
"@eslint-community/regexpp": "^4.12.1",
@@ -8269,22 +8243,6 @@
"@babel/runtime": "^7.23.2"
}
},
"node_modules/iconv-lite": {
"version": "0.7.1",
"resolved": "https://registry.npmjs.org/iconv-lite/-/iconv-lite-0.7.1.tgz",
"integrity": "sha512-2Tth85cXwGFHfvRgZWszZSvdo+0Xsqmw8k8ZwxScfcBneNUraK+dxRxRm24nszx80Y0TVio8kKLt5sLE7ZCLlw==",
"license": "MIT",
"dependencies": {
"safer-buffer": ">= 2.1.2 < 3.0.0"
},
"engines": {
"node": ">=0.10.0"
},
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/express"
}
},
"node_modules/ieee754": {
"version": "1.2.1",
"resolved": "https://registry.npmjs.org/ieee754/-/ieee754-1.2.1.tgz",
@@ -8420,43 +8378,6 @@
"dev": true,
"license": "MIT"
},
"node_modules/inquirer": {
"version": "9.3.8",
"resolved": "https://registry.npmjs.org/inquirer/-/inquirer-9.3.8.tgz",
"integrity": "sha512-pFGGdaHrmRKMh4WoDDSowddgjT1Vkl90atobmTeSmcPGdYiwikch/m/Ef5wRaiamHejtw0cUUMMerzDUXCci2w==",
"license": "MIT",
"dependencies": {
"@inquirer/external-editor": "^1.0.2",
"@inquirer/figures": "^1.0.3",
"ansi-escapes": "^4.3.2",
"cli-width": "^4.1.0",
"mute-stream": "1.0.0",
"ora": "^5.4.1",
"run-async": "^3.0.0",
"rxjs": "^7.8.1",
"string-width": "^4.2.3",
"strip-ansi": "^6.0.1",
"wrap-ansi": "^6.2.0",
"yoctocolors-cjs": "^2.1.2"
},
"engines": {
"node": ">=18"
}
},
"node_modules/inquirer/node_modules/wrap-ansi": {
"version": "6.2.0",
"resolved": "https://registry.npmjs.org/wrap-ansi/-/wrap-ansi-6.2.0.tgz",
"integrity": "sha512-r6lPcBGxZXlIcymEu7InxDMhdW0KDxpLgoFLcguasxCaJ/SOIZwINatK9KY/tf+ZrlywOKU0UDj3ATXUBfxJXA==",
"license": "MIT",
"dependencies": {
"ansi-styles": "^4.0.0",
"string-width": "^4.1.0",
"strip-ansi": "^6.0.0"
},
"engines": {
"node": ">=8"
}
},
"node_modules/iron-webcrypto": {
"version": "1.2.1",
"resolved": "https://registry.npmjs.org/iron-webcrypto/-/iron-webcrypto-1.2.1.tgz",
@@ -10304,7 +10225,6 @@
"integrity": "sha512-p3JTemJJbkiMjXEMiFwgm0v6ym5g8K+b2oDny+6xdl300tUKySxvilJQLSea48C6OaYNmO30kH9KxpiAg5bWJw==",
"dev": true,
"license": "MIT",
"peer": true,
"dependencies": {
"globby": "15.0.0",
"js-yaml": "4.1.1",
@@ -11576,15 +11496,6 @@
"integrity": "sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA==",
"license": "MIT"
},
"node_modules/mute-stream": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/mute-stream/-/mute-stream-1.0.0.tgz",
"integrity": "sha512-avsJQhyd+680gKXyG/sQc0nXaC6rBkPOfyHYcFb9+hdkqQkR9bdnkJ0AMZhke0oesPqIO+mFFJ+IdBc7mst4IA==",
"license": "ISC",
"engines": {
"node": "^14.17.0 || ^16.13.0 || >=18.0.0"
}
},
"node_modules/nano-spawn": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/nano-spawn/-/nano-spawn-2.0.0.tgz",
@@ -12378,7 +12289,6 @@
}
],
"license": "MIT",
"peer": true,
"dependencies": {
"nanoid": "^3.3.11",
"picocolors": "^1.1.1",
@@ -12444,7 +12354,6 @@
"integrity": "sha512-v6UNi1+3hSlVvv8fSaoUbggEM5VErKmmpGA7Pl3HF8V6uKY7rvClBOJlH6yNwQtfTueNkGVpOv/mtWL9L4bgRA==",
"dev": true,
"license": "MIT",
"peer": true,
"bin": {
"prettier": "bin/prettier.cjs"
},
@@ -13273,7 +13182,6 @@
"integrity": "sha512-3nk8Y3a9Ea8szgKhinMlGMhGMw89mqule3KWczxhIzqudyHdCIOHw8WJlj/r329fACjKLEh13ZSk7oE22kyeIw==",
"dev": true,
"license": "MIT",
"peer": true,
"dependencies": {
"@types/estree": "1.0.8"
},
@@ -13310,15 +13218,6 @@
"fsevents": "~2.3.2"
}
},
"node_modules/run-async": {
"version": "3.0.0",
"resolved": "https://registry.npmjs.org/run-async/-/run-async-3.0.0.tgz",
"integrity": "sha512-540WwVDOMxA6dN6We19EcT9sc3hkXPw5mzRNGM3FkdN/vtE9NFvj5lFAPNwUDmJjXidm3v7TC1cTE7t17Ulm1Q==",
"license": "MIT",
"engines": {
"node": ">=0.12.0"
}
},
"node_modules/run-parallel": {
"version": "1.2.0",
"resolved": "https://registry.npmjs.org/run-parallel/-/run-parallel-1.2.0.tgz",
@@ -13343,15 +13242,6 @@
"queue-microtask": "^1.2.2"
}
},
"node_modules/rxjs": {
"version": "7.8.2",
"resolved": "https://registry.npmjs.org/rxjs/-/rxjs-7.8.2.tgz",
"integrity": "sha512-dhKf903U/PQZY6boNNtAGdWbG85WAbjT/1xYoZIC7FAY0yWapOBQVsVrDl58W86//e1VpMNBtRV4MaXfdMySFA==",
"license": "Apache-2.0",
"dependencies": {
"tslib": "^2.1.0"
}
},
"node_modules/safe-buffer": {
"version": "5.2.1",
"resolved": "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.2.1.tgz",
@@ -13372,12 +13262,6 @@
],
"license": "MIT"
},
"node_modules/safer-buffer": {
"version": "2.1.2",
"resolved": "https://registry.npmjs.org/safer-buffer/-/safer-buffer-2.1.2.tgz",
"integrity": "sha512-YZo3K82SD7Riyi0E1EQPojLz7kpepnSQI9IyPbHHg1XXXevb5dJI7tpyN2ADxGcQbHG7vcyRHk0cbwqcQriUtg==",
"license": "MIT"
},
"node_modules/sax": {
"version": "1.4.3",
"resolved": "https://registry.npmjs.org/sax/-/sax-1.4.3.tgz",
@@ -14251,6 +14135,7 @@
"version": "2.8.1",
"resolved": "https://registry.npmjs.org/tslib/-/tslib-2.8.1.tgz",
"integrity": "sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w==",
"dev": true,
"license": "0BSD"
},
"node_modules/type-check": {
@@ -14335,7 +14220,7 @@
"version": "7.16.0",
"resolved": "https://registry.npmjs.org/undici-types/-/undici-types-7.16.0.tgz",
"integrity": "sha512-Zz+aZWSj8LE6zoxD+xrjh4VfkIG8Ya6LvYkZqtUQGJPZjYl53ypCaUwWqo7eI0x66KBGeRo+mlBEkMSeSZ38Nw==",
"devOptional": true,
"dev": true,
"license": "MIT"
},
"node_modules/unicode-properties": {
@@ -14837,7 +14722,6 @@
"integrity": "sha512-+Oxm7q9hDoLMyJOYfUYBuHQo+dkAloi33apOPP56pzj+vsdJDzr+j1NISE5pyaAuKL4A3UD34qd0lx5+kfKp2g==",
"dev": true,
"license": "MIT",
"peer": true,
"dependencies": {
"esbuild": "^0.25.0",
"fdir": "^6.4.4",
@@ -15111,7 +14995,6 @@
"resolved": "https://registry.npmjs.org/yaml/-/yaml-2.8.2.tgz",
"integrity": "sha512-mplynKqc1C2hTVYxd0PU2xQAc22TI1vShAYGksCCfxbn/dFwnHTNi1bvYsBTkhdUNtGIf5xNOg938rrSSYvS9A==",
"license": "ISC",
"peer": true,
"bin": {
"yaml": "bin.mjs"
},
@@ -15270,18 +15153,6 @@
"url": "https://github.com/sponsors/sindresorhus"
}
},
"node_modules/yoctocolors-cjs": {
"version": "2.1.3",
"resolved": "https://registry.npmjs.org/yoctocolors-cjs/-/yoctocolors-cjs-2.1.3.tgz",
"integrity": "sha512-U/PBtDf35ff0D8X8D0jfdzHYEPFxAI7jJlxZXwCSez5M3190m+QobIfh+sWDWSHMCWWJN2AWamkegn6vr6YBTw==",
"license": "MIT",
"engines": {
"node": ">=18"
},
"funding": {
"url": "https://github.com/sponsors/sindresorhus"
}
},
"node_modules/zip-stream": {
"version": "6.0.1",
"resolved": "https://registry.npmjs.org/zip-stream/-/zip-stream-6.0.1.tgz",
@@ -15303,7 +15174,6 @@
"integrity": "sha512-gzUt/qt81nXsFGKIFcC3YnfEAx5NkunCfnDlvuBSSFS02bcXu4Lmea0AFIUwbLWxWPx3d9p8S5QoaujKcNQxcQ==",
"dev": true,
"license": "MIT",
"peer": true,
"funding": {
"url": "https://github.com/sponsors/colinhacks"
}

package.json

@@ -67,6 +67,7 @@
]
},
"dependencies": {
"@clack/prompts": "^0.11.0",
"@kayvan/markdown-tree-parser": "^1.6.1",
"boxen": "^5.1.2",
"chalk": "^4.1.2",
@@ -77,7 +78,6 @@
"fs-extra": "^11.3.0",
"glob": "^11.0.3",
"ignore": "^7.0.5",
"inquirer": "^9.3.8",
"js-yaml": "^4.1.0",
"ora": "^5.4.1",
"semver": "^7.6.3",

(agent definition file)

@@ -18,7 +18,6 @@ agent:
critical_actions:
- "Load into memory {project-root}/_bmad/core/config.yaml and set variable project_name, output_folder, user_name, communication_language"
- "Remember the users name is {user_name}"
- "ALWAYS communicate in {communication_language}"
menu:

review-adversarial-general.xml

@@ -1,7 +1,7 @@
<!-- if possible, run this in a separate subagent or process with read access to the project,
but no context except the content to review -->
<task id="_bmad/core/tasks/review-adversarial-general.xml" name="Adversarial Review (General)">
<task id="_bmad/core/tasks/review-adversarial-general.xml" name="Adversarial Review">
<objective>Cynically review content and produce findings</objective>
<inputs>

game-qa.agent.yaml

@@ -22,6 +22,8 @@ agent:
critical_actions:
- "Consult {project-root}/_bmad/bmgd/gametest/qa-index.csv to select knowledge fragments under knowledge/ and load only the files needed for the current task"
- "For E2E testing requests, always load knowledge/e2e-testing.md first"
- "When scaffolding tests, distinguish between unit, integration, and E2E test needs"
- "Load the referenced fragment(s) from {project-root}/_bmad/bmgd/gametest/knowledge/ before giving recommendations"
- "Cross-check recommendations with the current official Unity Test Framework, Unreal Automation, or Godot GUT documentation"
- "Find if this exists, if it does, always treat it as the bible I plan and execute against: `**/project-context.md`"
@@ -43,6 +45,10 @@ agent:
workflow: "{project-root}/_bmad/bmgd/workflows/gametest/automate/workflow.yaml"
description: "[TA] Generate automated game tests"
- trigger: ES or fuzzy match on e2e-scaffold
workflow: "{project-root}/_bmad/bmgd/workflows/gametest/e2e-scaffold/workflow.yaml"
description: "[ES] Scaffold E2E testing infrastructure"
- trigger: PP or fuzzy match on playtest-plan
workflow: "{project-root}/_bmad/bmgd/workflows/gametest/playtest-plan/workflow.yaml"
description: "[PP] Create structured playtesting plan"

File diff suppressed because it is too large.

godot-testing.md

@@ -374,3 +374,502 @@ test:
| Signal not detected | Signal not watched | Call `watch_signals()` before action |
| Physics not working | Missing frames | Await `physics_frame` |
| Flaky tests | Timing issues | Use proper await/signals |
## C# Testing in Godot
Godot 4 supports C# via .NET 6+. You can use standard .NET testing frameworks alongside GUT.
### Project Setup for C#
```
project/
├── addons/
│ └── gut/
├── src/
│ ├── Player/
│ │ └── PlayerController.cs
│ └── Combat/
│ └── DamageCalculator.cs
├── tests/
│ ├── gdscript/
│ │ └── test_integration.gd
│ └── csharp/
│ ├── Tests.csproj
│ └── DamageCalculatorTests.cs
└── project.csproj
```
### C# Test Project Setup
Create a separate test project that references your game assembly:
```xml
<!-- tests/csharp/Tests.csproj -->
<Project Sdk="Godot.NET.Sdk/4.2.0">
<PropertyGroup>
<TargetFramework>net6.0</TargetFramework>
<EnableDynamicLoading>true</EnableDynamicLoading>
<IsPackable>false</IsPackable>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="Microsoft.NET.Test.Sdk" Version="17.8.0" />
<PackageReference Include="xunit" Version="2.6.2" />
<PackageReference Include="xunit.runner.visualstudio" Version="2.5.4" />
<PackageReference Include="NSubstitute" Version="5.1.0" />
</ItemGroup>
<ItemGroup>
<ProjectReference Include="../../project.csproj" />
</ItemGroup>
</Project>
```
### Basic C# Unit Tests
```csharp
// tests/csharp/DamageCalculatorTests.cs
using Xunit;
using YourGame.Combat;
public class DamageCalculatorTests
{
private readonly DamageCalculator _calculator;
public DamageCalculatorTests()
{
_calculator = new DamageCalculator();
}
[Fact]
public void Calculate_BaseDamage_ReturnsCorrectValue()
{
var result = _calculator.Calculate(100f, 1f);
Assert.Equal(100f, result);
}
[Fact]
public void Calculate_CriticalHit_DoublesDamage()
{
var result = _calculator.Calculate(100f, 2f);
Assert.Equal(200f, result);
}
[Theory]
[InlineData(100f, 0.5f, 50f)]
[InlineData(100f, 1.5f, 150f)]
[InlineData(50f, 2f, 100f)]
public void Calculate_Parameterized_ReturnsExpected(
float baseDamage, float multiplier, float expected)
{
var result = _calculator.Calculate(baseDamage, multiplier);
Assert.Equal(expected, result);
}
}
```
### Testing Godot Nodes in C#
For tests requiring Godot runtime, use a hybrid approach:
```csharp
// tests/csharp/PlayerControllerTests.cs
using System;
using System.Threading.Tasks;
using Godot;
using Xunit;
using YourGame.Player;
public class PlayerControllerTests : IDisposable
{
private readonly SceneTree _sceneTree;
private PlayerController _player;
public PlayerControllerTests()
{
// These tests must run within Godot runtime
// Use GodotXUnit or similar adapter
}
[GodotFact] // Custom attribute for Godot runtime tests
public async Task Player_Move_ChangesPosition()
{
var startPos = _player.GlobalPosition;
_player.SetInput(new Vector2(1, 0));
await ToSignal(GetTree().CreateTimer(0.5f), "timeout");
Assert.True(_player.GlobalPosition.X > startPos.X);
}
public void Dispose()
{
_player?.QueueFree();
}
}
```
### C# Mocking with NSubstitute
```csharp
using NSubstitute;
using Xunit;
public class EnemyAITests
{
[Fact]
public void Enemy_UsesPathfinding_WhenMoving()
{
var mockPathfinding = Substitute.For<IPathfinding>();
mockPathfinding.FindPath(Arg.Any<Vector2>(), Arg.Any<Vector2>())
.Returns(new[] { Vector2.Zero, new Vector2(10, 10) });
var enemy = new EnemyAI(mockPathfinding);
enemy.MoveTo(new Vector2(10, 10));
mockPathfinding.Received().FindPath(
Arg.Any<Vector2>(),
Arg.Is<Vector2>(v => v == new Vector2(10, 10)));
}
}
```
### Running C# Tests
```bash
# Run C# unit tests (no Godot runtime needed)
dotnet test tests/csharp/Tests.csproj
# Run with coverage
dotnet test tests/csharp/Tests.csproj --collect:"XPlat Code Coverage"
# Run specific test
dotnet test tests/csharp/Tests.csproj --filter "FullyQualifiedName~DamageCalculator"
```
### Hybrid Test Strategy
| Test Type | Framework | When to Use |
| ------------- | ---------------- | ---------------------------------- |
| Pure logic | xUnit/NUnit (C#) | Classes without Godot dependencies |
| Node behavior | GUT (GDScript) | MonoBehaviour-like testing |
| Integration | GUT (GDScript) | Scene and signal testing |
| E2E | GUT (GDScript) | Full gameplay flows |
## End-to-End Testing
For comprehensive E2E testing patterns, infrastructure scaffolding, and
scenario builders, see **knowledge/e2e-testing.md**.
### E2E Infrastructure for Godot
#### GameE2ETestFixture (GDScript)
```gdscript
# tests/e2e/infrastructure/game_e2e_test_fixture.gd
extends GutTest
class_name GameE2ETestFixture
var game_state: GameStateManager
var input_sim: InputSimulator
var scenario: ScenarioBuilder
var _scene_instance: Node
## Override to specify a different scene for specific test classes.
func get_scene_path() -> String:
return "res://scenes/game.tscn"
func before_each():
# Load game scene
var scene = load(get_scene_path())
_scene_instance = scene.instantiate()
add_child(_scene_instance)
# Get references
game_state = _scene_instance.get_node("GameStateManager")
assert_not_null(game_state, "GameStateManager not found in scene")
input_sim = InputSimulator.new()
scenario = ScenarioBuilder.new(game_state)
# Wait for ready
await wait_for_game_ready()
func after_each():
if _scene_instance:
_scene_instance.queue_free()
_scene_instance = null
input_sim = null
scenario = null
func wait_for_game_ready(timeout: float = 10.0):
var elapsed = 0.0
while not game_state.is_ready and elapsed < timeout:
await get_tree().process_frame
elapsed += get_process_delta_time()
assert_true(game_state.is_ready, "Game should be ready within timeout")
```
#### ScenarioBuilder (GDScript)
```gdscript
# tests/e2e/infrastructure/scenario_builder.gd
extends RefCounted
class_name ScenarioBuilder
var _game_state: GameStateManager
var _setup_actions: Array[Callable] = []
func _init(game_state: GameStateManager):
_game_state = game_state
## Load a pre-configured scenario from a save file.
func from_save_file(file_name: String) -> ScenarioBuilder:
_setup_actions.append(func(): await _load_save_file(file_name))
return self
## Configure the current turn number.
func on_turn(turn_number: int) -> ScenarioBuilder:
_setup_actions.append(func(): _set_turn(turn_number))
return self
## Spawn a unit at position.
func with_unit(faction: int, position: Vector2, movement_points: int = 6) -> ScenarioBuilder:
_setup_actions.append(func(): await _spawn_unit(faction, position, movement_points))
return self
## Execute all configured setup actions.
func build() -> void:
for action in _setup_actions:
await action.call()
_setup_actions.clear()
## Clear pending actions without executing.
func reset() -> void:
_setup_actions.clear()
# Private implementation
func _load_save_file(file_name: String) -> void:
var path = "res://tests/e2e/test_data/%s" % file_name
await _game_state.load_game(path)
func _set_turn(turn: int) -> void:
_game_state.set_turn_number(turn)
func _spawn_unit(faction: int, pos: Vector2, mp: int) -> void:
var unit = _game_state.spawn_unit(faction, pos)
unit.movement_points = mp
```
#### InputSimulator (GDScript)
```gdscript
# tests/e2e/infrastructure/input_simulator.gd
extends RefCounted
class_name InputSimulator
## Click at a world position.
func click_world_position(world_pos: Vector2) -> void:
var viewport = Engine.get_main_loop().root.get_viewport()
var camera = viewport.get_camera_2d()
var screen_pos = camera.get_screen_center_position() + (world_pos - camera.global_position)
await click_screen_position(screen_pos)
## Click at a screen position.
func click_screen_position(screen_pos: Vector2) -> void:
var press = InputEventMouseButton.new()
press.button_index = MOUSE_BUTTON_LEFT
press.pressed = true
press.position = screen_pos
var release = InputEventMouseButton.new()
release.button_index = MOUSE_BUTTON_LEFT
release.pressed = false
release.position = screen_pos
Input.parse_input_event(press)
await Engine.get_main_loop().process_frame
Input.parse_input_event(release)
await Engine.get_main_loop().process_frame
## Click a UI button by name.
func click_button(button_name: String) -> void:
var root = Engine.get_main_loop().root
var button = _find_button_recursive(root, button_name)
assert(button != null, "Button '%s' not found in scene tree" % button_name)
if not button.visible:
push_warning("[InputSimulator] Button '%s' is not visible" % button_name)
if button.disabled:
push_warning("[InputSimulator] Button '%s' is disabled" % button_name)
button.pressed.emit()
await Engine.get_main_loop().process_frame
func _find_button_recursive(node: Node, button_name: String) -> Button:
if node is Button and node.name == button_name:
return node
for child in node.get_children():
var found = _find_button_recursive(child, button_name)
if found:
return found
return null
## Press and release a key.
func press_key(keycode: Key) -> void:
var press = InputEventKey.new()
press.keycode = keycode
press.pressed = true
var release = InputEventKey.new()
release.keycode = keycode
release.pressed = false
Input.parse_input_event(press)
await Engine.get_main_loop().process_frame
Input.parse_input_event(release)
await Engine.get_main_loop().process_frame
## Simulate an input action.
func action_press(action_name: String) -> void:
Input.action_press(action_name)
await Engine.get_main_loop().process_frame
func action_release(action_name: String) -> void:
Input.action_release(action_name)
await Engine.get_main_loop().process_frame
## Reset all input state.
func reset() -> void:
Input.flush_buffered_events()
```
#### AsyncAssert (GDScript)
```gdscript
# tests/e2e/infrastructure/async_assert.gd
extends RefCounted
class_name AsyncAssert
## Wait until condition is true, or fail after timeout.
static func wait_until(
condition: Callable,
description: String,
timeout: float = 5.0
) -> void:
var elapsed := 0.0
while not condition.call() and elapsed < timeout:
await Engine.get_main_loop().process_frame
elapsed += Engine.get_main_loop().root.get_process_delta_time()
assert(condition.call(),
"Timeout after %.1fs waiting for: %s" % [timeout, description])
## Wait for a value to equal expected.
static func wait_for_value(
getter: Callable,
expected: Variant,
description: String,
timeout: float = 5.0
) -> void:
await wait_until(
func(): return getter.call() == expected,
"%s to equal '%s' (current: '%s')" % [description, expected, getter.call()],
timeout)
## Wait for a float value within tolerance.
static func wait_for_value_approx(
getter: Callable,
expected: float,
description: String,
tolerance: float = 0.0001,
timeout: float = 5.0
) -> void:
await wait_until(
func(): return absf(expected - getter.call()) < tolerance,
"%s to equal ~%s ±%s (current: %s)" % [description, expected, tolerance, getter.call()],
timeout)
## Assert that condition does NOT become true within duration.
static func assert_never_true(
condition: Callable,
description: String,
duration: float = 1.0
) -> void:
var elapsed := 0.0
while elapsed < duration:
assert(not condition.call(),
"Condition unexpectedly became true: %s" % description)
await Engine.get_main_loop().process_frame
elapsed += Engine.get_main_loop().root.get_process_delta_time()
## Wait for specified number of frames.
static func wait_frames(count: int) -> void:
for i in range(count):
await Engine.get_main_loop().process_frame
## Wait for physics to settle.
static func wait_for_physics(frames: int = 3) -> void:
for i in range(frames):
await Engine.get_main_loop().root.get_tree().physics_frame
```
### Example E2E Test (GDScript)
```gdscript
# tests/e2e/scenarios/test_combat_flow.gd
extends GameE2ETestFixture
func test_player_can_attack_enemy():
# GIVEN: Player and enemy in combat range
await scenario \
.with_unit(Faction.PLAYER, Vector2(100, 100)) \
.with_unit(Faction.ENEMY, Vector2(150, 100)) \
.build()
var enemy = game_state.get_units(Faction.ENEMY)[0]
var initial_health = enemy.health
# WHEN: Player attacks
await input_sim.click_world_position(Vector2(100, 100)) # Select player
await AsyncAssert.wait_until(
func(): return game_state.selected_unit != null,
"Unit should be selected")
await input_sim.click_world_position(Vector2(150, 100)) # Attack enemy
# THEN: Enemy takes damage
await AsyncAssert.wait_until(
func(): return enemy.health < initial_health,
"Enemy should take damage")
func test_turn_cycle_completes():
# GIVEN: Game in progress
await scenario.on_turn(1).build()
var starting_turn = game_state.turn_number
# WHEN: Player ends turn
await input_sim.click_button("EndTurnButton")
await AsyncAssert.wait_until(
func(): return game_state.current_faction == Faction.ENEMY,
"Should switch to enemy turn")
# AND: Enemy turn completes
await AsyncAssert.wait_until(
func(): return game_state.current_faction == Faction.PLAYER,
"Should return to player turn",
30.0) # AI might take a while
# THEN: Turn number incremented
assert_eq(game_state.turn_number, starting_turn + 1)
```
### Quick E2E Checklist for Godot
- [ ] Create `GameE2ETestFixture` base class extending GutTest
- [ ] Implement `ScenarioBuilder` for your game's domain
- [ ] Create `InputSimulator` wrapping Godot Input
- [ ] Add `AsyncAssert` utilities with proper await
- [ ] Organize E2E tests under `tests/e2e/scenarios/`
- [ ] Configure GUT to include E2E test directory
- [ ] Set up CI with headless Godot execution

unity-testing.md

@@ -381,3 +381,17 @@ test:
| NullReferenceException | Missing Setup | Ensure [SetUp] initializes all fields |
| Tests hang | Infinite coroutine | Add timeout or max iterations |
| Flaky physics tests | Timing dependent | Use WaitForFixedUpdate, increase tolerance |
## End-to-End Testing
For comprehensive E2E testing patterns, infrastructure scaffolding, and
scenario builders, see **knowledge/e2e-testing.md**.
### Quick E2E Checklist for Unity
- [ ] Create `GameE2ETestFixture` base class
- [ ] Implement `ScenarioBuilder` for your game's domain
- [ ] Create `InputSimulator` wrapping Input System
- [ ] Add `AsyncAssert` utilities
- [ ] Organize E2E tests under `Tests/PlayMode/E2E/`
- [ ] Configure separate CI job for E2E suite

File diff suppressed because it is too large.

qa-index.csv

@@ -15,3 +15,4 @@ localization-testing,Localization Testing,"Text, audio, and cultural validation
certification-testing,Platform Certification,"Console TRC/XR requirements and certification testing","certification,console,trc,xr",knowledge/certification-testing.md
smoke-testing,Smoke Testing,"Critical path validation for build verification","smoke-tests,bvt,ci",knowledge/smoke-testing.md
test-priorities,Test Priorities Matrix,"P0-P3 criteria, coverage targets, execution ordering for games","prioritization,risk,coverage",knowledge/test-priorities.md
e2e-testing,End-to-End Testing,"Complete player journey testing with infrastructure patterns and async utilities","e2e,integration,player-journeys,scenarios,infrastructure",knowledge/e2e-testing.md

automate/instructions.md

@@ -209,6 +209,87 @@ func test_{feature}_integration():
# Cleanup
scene.queue_free()
```
### E2E Journey Tests
**Knowledge Base Reference**: `knowledge/e2e-testing.md`
```csharp
public class {Feature}E2ETests : GameE2ETestFixture
{
[UnityTest]
public IEnumerator {JourneyName}_Succeeds()
{
// GIVEN
yield return Scenario
.{SetupMethod1}()
.{SetupMethod2}()
.Build();
// WHEN
yield return Input.{Action1}();
yield return AsyncAssert.WaitUntil(
() => {Condition1}, "{Description1}");
yield return Input.{Action2}();
// THEN
yield return AsyncAssert.WaitUntil(
() => {FinalCondition}, "{FinalDescription}");
Assert.{Assertion}({expected}, {actual});
}
}
```
## Step 3.5: Generate E2E Infrastructure
Before generating E2E tests, scaffold the required infrastructure.
### Infrastructure Checklist
1. **Test Fixture Base Class**
- Scene loading/unloading
- Game ready state waiting
- Common service access
- Cleanup guarantees
2. **Scenario Builder**
- Fluent API for game state configuration
- Domain-specific methods (e.g., `WithUnit`, `OnTurn`)
- Yields for state propagation
3. **Input Simulator**
- Click/drag abstractions
- Button press simulation
- Keyboard input queuing
4. **Async Assertions**
- `WaitUntil` with timeout and message
- `WaitForEvent` for event-driven flows
- `WaitForState` for state machine transitions
### Generation Template
```csharp
// GameE2ETestFixture.cs
public abstract class GameE2ETestFixture
{
protected {GameStateClass} GameState;
protected {InputSimulatorClass} Input;
protected {ScenarioBuilderClass} Scenario;
[UnitySetUp]
public IEnumerator BaseSetUp()
{
yield return LoadScene("{main_scene}");
GameState = Object.FindFirstObjectByType<{GameStateClass}>();
Input = new {InputSimulatorClass}();
Scenario = new {ScenarioBuilderClass}(GameState);
yield return WaitForReady();
}
// ... (fill from e2e-testing.md patterns)
}
```
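To complement the fixture template above, a hedged C# sketch of the `AsyncAssert` utility named in the infrastructure checklist; `WaitForEvent` and `WaitForState` would follow the same polling pattern (names mirror the checklist, not a shipped implementation):

```csharp
// AsyncAssert.cs -- minimal sketch; tune the default timeout per project.
using System;
using System.Collections;
using NUnit.Framework;
using UnityEngine;

public static class AsyncAssert
{
    // Poll a condition once per frame; fail with a descriptive message on timeout.
    public static IEnumerator WaitUntil(
        Func<bool> condition, string description, float timeout = 5f)
    {
        float elapsed = 0f;
        while (!condition() && elapsed < timeout)
        {
            yield return null; // wait one frame
            elapsed += Time.deltaTime;
        }
        Assert.IsTrue(condition(),
            $"Timeout after {timeout:0.0}s waiting for: {description}");
    }
}
```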
**After scaffolding infrastructure, proceed to generate actual E2E tests.**
---

e2e-scaffold/checklist.md

@@ -0,0 +1,95 @@
# E2E Infrastructure Scaffold Checklist
## Preflight Validation
- [ ] Test framework already initialized (`Tests/` directory exists with proper structure)
- [ ] Game state manager class identified
- [ ] Main gameplay scene identified and loads without errors
- [ ] No existing E2E infrastructure conflicts
## Architecture Analysis
- [ ] Game engine correctly detected
- [ ] Engine version identified
- [ ] Input system type determined (New Input System, Legacy, Custom)
- [ ] Game state manager class located
- [ ] Ready/initialized state property identified
- [ ] Key domain entities catalogued for ScenarioBuilder
## Generated Files
### Directory Structure
- [ ] `Tests/PlayMode/E2E/` directory created
- [ ] `Tests/PlayMode/E2E/Infrastructure/` directory created
- [ ] `Tests/PlayMode/E2E/Scenarios/` directory created
- [ ] `Tests/PlayMode/E2E/TestData/` directory created
### Infrastructure Files
- [ ] `E2E.asmdef` created with correct assembly references
- [ ] `GameE2ETestFixture.cs` created with correct class references
- [ ] `ScenarioBuilder.cs` created with at least placeholder methods
- [ ] `InputSimulator.cs` created matching detected input system
- [ ] `AsyncAssert.cs` created with core assertion methods
### Example and Documentation
- [ ] `ExampleE2ETest.cs` created with working infrastructure test
- [ ] `README.md` created with usage documentation
## Code Quality
### GameE2ETestFixture
- [ ] Correct namespace applied
- [ ] Correct `GameStateClass` reference
- [ ] Correct `SceneName` default
- [ ] `WaitForGameReady` uses correct ready property
- [ ] `UnitySetUp` and `UnityTearDown` properly structured
- [ ] Virtual methods for derived class customization
### ScenarioBuilder
- [ ] Fluent API pattern correctly implemented
- [ ] `Build()` executes all queued actions
- [ ] At least one domain-specific method added (or clear TODOs)
- [ ] `FromSaveFile` method scaffolded
### InputSimulator
- [ ] Matches detected input system (New vs Legacy)
- [ ] Mouse click simulation works
- [ ] Button click by name works
- [ ] Keyboard input scaffolded
- [ ] `Reset()` method cleans up state
### AsyncAssert
- [ ] `WaitUntil` includes timeout and descriptive failure
- [ ] `WaitForValue` provides current vs expected in failure
- [ ] `AssertNeverTrue` for negative assertions
- [ ] Frame/physics wait utilities included
## Assembly Definition
- [ ] References main game assembly
- [ ] References Unity.InputSystem (if applicable)
- [ ] `overrideReferences` set to true
- [ ] `precompiledReferences` includes nunit.framework.dll
- [ ] `precompiledReferences` includes UnityEngine.TestRunner.dll
- [ ] `precompiledReferences` includes UnityEditor.TestRunner.dll
- [ ] `UNITY_INCLUDE_TESTS` define constraint set
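For reference, an `E2E.asmdef` satisfying these items might look like the following sketch (assembly names such as `YourGame` are placeholders to replace with the project's own):

```json
{
  "name": "E2E",
  "references": ["YourGame", "Unity.InputSystem"],
  "overrideReferences": true,
  "precompiledReferences": [
    "nunit.framework.dll",
    "UnityEngine.TestRunner.dll",
    "UnityEditor.TestRunner.dll"
  ],
  "autoReferenced": false,
  "defineConstraints": ["UNITY_INCLUDE_TESTS"]
}
```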
## Verification
- [ ] Project compiles without errors after scaffold
- [ ] `ExampleE2ETests.Infrastructure_GameLoadsAndReachesReadyState` passes
- [ ] Test appears in Test Runner under PlayMode → E2E category
## Documentation Quality
- [ ] README explains all infrastructure components
- [ ] Quick start example is copy-pasteable
- [ ] Extension instructions are clear
- [ ] Troubleshooting table addresses common issues
## Handoff
- [ ] Summary output provided with all configuration values
- [ ] Next steps clearly listed
- [ ] Customization requirements highlighted
- [ ] Knowledge fragments referenced

File diff suppressed because it is too large

View File

@ -0,0 +1,145 @@
# E2E Test Infrastructure Scaffold Workflow
workflow:
id: e2e-scaffold
name: E2E Test Infrastructure Scaffold
version: 1.0
module: bmgd
agent: game-qa
description: |
Scaffold complete E2E testing infrastructure for an existing game project.
Creates test fixtures, scenario builders, input simulators, and async
assertion utilities tailored to the project's architecture.
triggers:
- "ES"
- "e2e-scaffold"
- "scaffold e2e"
- "e2e infrastructure"
- "setup e2e"
preflight:
- "Test framework initialized (run `test-framework` workflow first)"
- "Game has identifiable state manager"
- "Main gameplay scene exists"
# Paths are relative to this workflow file's location
knowledge_fragments:
- "../../../gametest/knowledge/e2e-testing.md"
- "../../../gametest/knowledge/unity-testing.md"
- "../../../gametest/knowledge/unreal-testing.md"
- "../../../gametest/knowledge/godot-testing.md"
inputs:
game_state_class:
description: "Primary game state manager class name"
required: true
example: "GameStateManager"
main_scene:
description: "Scene name where core gameplay occurs"
required: true
example: "GameScene"
input_system:
description: "Input system in use"
required: false
default: "auto-detect"
options:
- "unity-input-system"
- "unity-legacy"
- "unreal-enhanced"
- "godot-input"
- "custom"
# Output paths vary by engine. Generate files matching detected engine.
outputs:
unity:
condition: "engine == 'unity'"
infrastructure_files:
description: "Generated E2E infrastructure classes"
files:
- "Tests/PlayMode/E2E/Infrastructure/GameE2ETestFixture.cs"
- "Tests/PlayMode/E2E/Infrastructure/ScenarioBuilder.cs"
- "Tests/PlayMode/E2E/Infrastructure/InputSimulator.cs"
- "Tests/PlayMode/E2E/Infrastructure/AsyncAssert.cs"
assembly_definition:
description: "E2E test assembly configuration"
files:
- "Tests/PlayMode/E2E/E2E.asmdef"
example_test:
description: "Working example E2E test"
files:
- "Tests/PlayMode/E2E/ExampleE2ETest.cs"
documentation:
description: "E2E testing README"
files:
- "Tests/PlayMode/E2E/README.md"
unreal:
condition: "engine == 'unreal'"
infrastructure_files:
description: "Generated E2E infrastructure classes"
files:
- "Source/{ProjectName}/Tests/E2E/GameE2ETestBase.h"
- "Source/{ProjectName}/Tests/E2E/GameE2ETestBase.cpp"
- "Source/{ProjectName}/Tests/E2E/ScenarioBuilder.h"
- "Source/{ProjectName}/Tests/E2E/ScenarioBuilder.cpp"
- "Source/{ProjectName}/Tests/E2E/InputSimulator.h"
- "Source/{ProjectName}/Tests/E2E/InputSimulator.cpp"
- "Source/{ProjectName}/Tests/E2E/AsyncAssert.h"
build_configuration:
description: "E2E test build configuration"
files:
- "Source/{ProjectName}/Tests/E2E/{ProjectName}E2ETests.Build.cs"
example_test:
description: "Working example E2E test"
files:
- "Source/{ProjectName}/Tests/E2E/ExampleE2ETest.cpp"
documentation:
description: "E2E testing README"
files:
- "Source/{ProjectName}/Tests/E2E/README.md"
godot:
condition: "engine == 'godot'"
infrastructure_files:
description: "Generated E2E infrastructure classes"
files:
- "tests/e2e/infrastructure/game_e2e_test_fixture.gd"
- "tests/e2e/infrastructure/scenario_builder.gd"
- "tests/e2e/infrastructure/input_simulator.gd"
- "tests/e2e/infrastructure/async_assert.gd"
example_test:
description: "Working example E2E test"
files:
- "tests/e2e/scenarios/example_e2e_test.gd"
documentation:
description: "E2E testing README"
files:
- "tests/e2e/README.md"
steps:
- id: analyze
name: "Analyze Game Architecture"
instruction_file: "instructions.md#step-1-analyze-game-architecture"
- id: scaffold
name: "Generate Infrastructure"
instruction_file: "instructions.md#step-2-generate-infrastructure"
- id: example
name: "Generate Example Test"
instruction_file: "instructions.md#step-3-generate-example-test"
- id: document
name: "Generate Documentation"
instruction_file: "instructions.md#step-4-generate-documentation"
- id: complete
name: "Output Summary"
instruction_file: "instructions.md#step-5-output-summary"
validation:
checklist: "checklist.md"

View File

@ -91,6 +91,18 @@ Create comprehensive test scenarios for game projects, covering gameplay mechani
| Performance | FPS, loading times | P1 |
| Accessibility | Assist features | P1 |
### E2E Journey Testing
**Knowledge Base Reference**: `knowledge/e2e-testing.md`
| Category | Focus | Priority |
|----------|-------|----------|
| Core Loop | Complete gameplay cycle | P0 |
| Turn Lifecycle | Full turn from start to end | P0 |
| Save/Load Round-trip | Save → quit → load → resume | P0 |
| Scene Transitions | Menu → Game → Back | P1 |
| Win/Lose Paths | Victory and defeat conditions | P1 |
---
## Step 3: Create Test Scenarios
@ -153,6 +165,39 @@ SCENARIO: Gameplay Under High Latency
CATEGORY: multiplayer
```
### E2E Scenario Format
For player journey tests, use this extended format:
```
E2E SCENARIO: [Player Journey Name]
GIVEN [Initial game state - use ScenarioBuilder terms]
WHEN [Sequence of player actions]
THEN [Observable outcomes]
TIMEOUT: [Expected max duration in seconds]
PRIORITY: P0/P1
CATEGORY: e2e
INFRASTRUCTURE: [Required fixtures/builders]
```
### Example E2E Scenario
```
E2E SCENARIO: Complete Combat Encounter
GIVEN game loaded with player unit adjacent to enemy
AND player unit has full health and actions
WHEN player selects unit
AND player clicks attack on enemy
AND player confirms attack
AND attack animation completes
AND enemy responds (if alive)
THEN enemy health is reduced OR enemy is defeated
AND turn state advances appropriately
AND UI reflects new state
TIMEOUT: 15
PRIORITY: P0
CATEGORY: e2e
INFRASTRUCTURE: ScenarioBuilder, InputSimulator, AsyncAssert
```
---
## Step 4: Prioritize Test Coverage
@ -161,12 +206,12 @@ SCENARIO: Gameplay Under High Latency
**Knowledge Base Reference**: `knowledge/test-priorities.md`
| Priority | Criteria | Coverage Target |
| -------- | ---------------------------- | --------------- |
| P0 | Ship blockers, certification | 100% automated |
| P1 | Major features, common paths | 80% automated |
| P2 | Secondary features | 60% automated |
| P3 | Edge cases, polish | Manual only |
| Priority | Criteria | Unit | Integration | E2E | Manual |
|----------|----------|------|-------------|-----|--------|
| P0 | Ship blockers | 100% | 80% | Core flows | Smoke |
| P1 | Major features | 90% | 70% | Happy paths | Full |
| P2 | Secondary | 80% | 50% | - | Targeted |
| P3 | Edge cases | 60% | - | - | As needed |
### Risk-Based Ordering

View File

@ -33,7 +33,7 @@ agent:
menu:
- trigger: WS or fuzzy match on workflow-status
workflow: "{project-root}/_bmad/bmm/workflows/workflow-status/workflow.yaml"
description: "[WS] Get workflow status or initialize a workflow if not already done (optional)"
description: "[WS] Start here or resume - show workflow status and next best step"
- trigger: TF or fuzzy match on test-framework
workflow: "{project-root}/_bmad/bmm/workflows/testarch/framework/workflow.yaml"

View File

@ -1,23 +0,0 @@
# Senior Developer Review - Validation Checklist
- [ ] Story file loaded from `{{story_path}}`
- [ ] Story Status verified as reviewable (review)
- [ ] Epic and Story IDs resolved ({{epic_num}}.{{story_num}})
- [ ] Story Context located or warning recorded
- [ ] Epic Tech Spec located or warning recorded
- [ ] Architecture/standards docs loaded (as available)
- [ ] Tech stack detected and documented
- [ ] MCP doc search performed (or web fallback) and references captured
- [ ] Acceptance Criteria cross-checked against implementation
- [ ] File List reviewed and validated for completeness
- [ ] Tests identified and mapped to ACs; gaps noted
- [ ] Code quality review performed on changed files
- [ ] Security review performed on changed files and dependencies
- [ ] Outcome decided (Approve/Changes Requested/Blocked)
- [ ] Review notes appended under "Senior Developer Review (AI)"
- [ ] Change Log updated with review entry
- [ ] Status updated according to settings (if enabled)
- [ ] Sprint status synced (if sprint tracking enabled)
- [ ] Story saved successfully
_Reviewer: {{user_name}} on {{date}}_

View File

@ -1,227 +0,0 @@
<workflow>
<critical>The workflow execution engine is governed by: {project-root}/_bmad/core/tasks/workflow.xml</critical>
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
<critical>Communicate all responses in {communication_language} and language MUST be tailored to {user_skill_level}</critical>
<critical>Generate all documents in {document_output_language}</critical>
<critical>🔥 YOU ARE AN ADVERSARIAL CODE REVIEWER - Find what's wrong or missing! 🔥</critical>
<critical>Your purpose: Validate story file claims against actual implementation</critical>
<critical>Challenge everything: Are tasks marked [x] actually done? Are ACs really implemented?</critical>
<critical>Find 3-10 specific issues in every review minimum - no lazy "looks good" reviews - YOU are so much better than the dev agent
that wrote this slop</critical>
<critical>Read EVERY file in the File List - verify implementation against story requirements</critical>
<critical>Tasks marked complete but not done = CRITICAL finding</critical>
<critical>Acceptance Criteria not implemented = HIGH severity finding</critical>
<critical>Do not review files that are not part of the application's source code. Always exclude the _bmad/ and _bmad-output/ folders from the review. Always exclude IDE and CLI configuration folders like .cursor/ and .windsurf/ and .claude/</critical>
<step n="1" goal="Load story and discover changes">
<action>Use provided {{story_path}} or ask user which story file to review</action>
<action>Read COMPLETE story file</action>
<action>Set {{story_key}} = extracted key from filename (e.g., "1-2-user-authentication.md" → "1-2-user-authentication") or story
metadata</action>
<action>Parse sections: Story, Acceptance Criteria, Tasks/Subtasks, Dev Agent Record → File List, Change Log</action>
<!-- Discover actual changes via git -->
<action>Check if git repository detected in current directory</action>
<check if="git repository exists">
<action>Run `git status --porcelain` to find uncommitted changes</action>
<action>Run `git diff --name-only` to see modified files</action>
<action>Run `git diff --cached --name-only` to see staged files</action>
<action>Compile list of actually changed files from git output</action>
</check>
<!-- Cross-reference story File List vs git reality -->
<action>Compare story's Dev Agent Record → File List with actual git changes</action>
<action>Note discrepancies:
- Files in git but not in story File List
- Files in story File List but no git changes
- Missing documentation of what was actually changed
</action>
<invoke-protocol name="discover_inputs" />
<action>Load {project_context} for coding standards (if exists)</action>
</step>
<step n="2" goal="Build review attack plan">
<action>Extract ALL Acceptance Criteria from story</action>
<action>Extract ALL Tasks/Subtasks with completion status ([x] vs [ ])</action>
<action>From Dev Agent Record → File List, compile list of claimed changes</action>
<action>Create review plan:
1. **AC Validation**: Verify each AC is actually implemented
2. **Task Audit**: Verify each [x] task is really done
3. **Code Quality**: Security, performance, maintainability
4. **Test Quality**: Real tests vs placeholder bullshit
</action>
</step>
<step n="3" goal="Execute adversarial review">
<critical>VALIDATE EVERY CLAIM - Check git reality vs story claims</critical>
<!-- Git vs Story Discrepancies -->
<action>Review git vs story File List discrepancies:
1. **Files changed but not in story File List** → MEDIUM finding (incomplete documentation)
2. **Story lists files but no git changes** → HIGH finding (false claims)
3. **Uncommitted changes not documented** → MEDIUM finding (transparency issue)
</action>
<!-- Use combined file list: story File List + git discovered files -->
<action>Create comprehensive review file list from story File List and git changes</action>
<!-- AC Validation -->
<action>For EACH Acceptance Criterion:
1. Read the AC requirement
2. Search implementation files for evidence
3. Determine: IMPLEMENTED, PARTIAL, or MISSING
4. If MISSING/PARTIAL → HIGH SEVERITY finding
</action>
<!-- Task Completion Audit -->
<action>For EACH task marked [x]:
1. Read the task description
2. Search files for evidence it was actually done
3. **CRITICAL**: If marked [x] but NOT DONE → CRITICAL finding
4. Record specific proof (file:line)
</action>
<!-- Code Quality Deep Dive -->
<action>For EACH file in comprehensive review list:
1. **Security**: Look for injection risks, missing validation, auth issues
2. **Performance**: N+1 queries, inefficient loops, missing caching
3. **Error Handling**: Missing try/catch, poor error messages
4. **Code Quality**: Complex functions, magic numbers, poor naming
5. **Test Quality**: Are tests real assertions or placeholders?
</action>
<check if="total_issues_found lt 3">
<critical>NOT LOOKING HARD ENOUGH - Find more problems!</critical>
<action>Re-examine code for:
- Edge cases and null handling
- Architecture violations
- Documentation gaps
- Integration issues
- Dependency problems
- Git commit message quality (if applicable)
</action>
<action>Find at least 3 more specific, actionable issues</action>
</check>
</step>
<step n="4" goal="Present findings and fix them">
<action>Categorize findings: HIGH (must fix), MEDIUM (should fix), LOW (nice to fix)</action>
<action>Set {{fixed_count}} = 0</action>
<action>Set {{action_count}} = 0</action>
<output>**🔥 CODE REVIEW FINDINGS, {user_name}!**
**Story:** {{story_file}}
**Git vs Story Discrepancies:** {{git_discrepancy_count}} found
**Issues Found:** {{high_count}} High, {{medium_count}} Medium, {{low_count}} Low
## 🔴 CRITICAL ISSUES
- Tasks marked [x] but not actually implemented
- Acceptance Criteria not implemented
- Story claims files changed but no git evidence
- Security vulnerabilities
## 🟡 MEDIUM ISSUES
- Files changed but not documented in story File List
- Uncommitted changes not tracked
- Performance problems
- Poor test coverage/quality
- Code maintainability issues
## 🟢 LOW ISSUES
- Code style improvements
- Documentation gaps
- Git commit message quality
</output>
<ask>What should I do with these issues?
1. **Fix them automatically** - I'll update the code and tests
2. **Create action items** - Add to story Tasks/Subtasks for later
3. **Show me details** - Deep dive into specific issues
Choose [1], [2], or specify which issue to examine:</ask>
<check if="user chooses 1">
<action>Fix all HIGH and MEDIUM issues in the code</action>
<action>Add/update tests as needed</action>
<action>Update File List in story if files changed</action>
<action>Update story Dev Agent Record with fixes applied</action>
<action>Set {{fixed_count}} = number of HIGH and MEDIUM issues fixed</action>
<action>Set {{action_count}} = 0</action>
</check>
<check if="user chooses 2">
<action>Add "Review Follow-ups (AI)" subsection to Tasks/Subtasks</action>
<action>For each issue: `- [ ] [AI-Review][Severity] Description [file:line]`</action>
<action>Set {{action_count}} = number of action items created</action>
<action>Set {{fixed_count}} = 0</action>
</check>
<check if="user chooses 3">
<action>Show detailed explanation with code examples</action>
<action>Return to fix decision</action>
</check>
</step>
<step n="5" goal="Update story status and sync sprint tracking">
<!-- Determine new status based on review outcome -->
<check if="all HIGH and MEDIUM issues fixed AND all ACs implemented">
<action>Set {{new_status}} = "done"</action>
<action>Update story Status field to "done"</action>
</check>
<check if="HIGH or MEDIUM issues remain OR ACs not fully implemented">
<action>Set {{new_status}} = "in-progress"</action>
<action>Update story Status field to "in-progress"</action>
</check>
<action>Save story file</action>
<!-- Determine sprint tracking status -->
<check if="{sprint_status} file exists">
<action>Set {{current_sprint_status}} = "enabled"</action>
</check>
<check if="{sprint_status} file does NOT exist">
<action>Set {{current_sprint_status}} = "no-sprint-tracking"</action>
</check>
<!-- Sync sprint-status.yaml when story status changes (only if sprint tracking enabled) -->
<check if="{{current_sprint_status}} != 'no-sprint-tracking'">
<action>Load the FULL file: {sprint_status}</action>
<action>Find development_status key matching {{story_key}}</action>
<check if="{{new_status}} == 'done'">
<action>Update development_status[{{story_key}}] = "done"</action>
<action>Save file, preserving ALL comments and structure</action>
<output>✅ Sprint status synced: {{story_key}} → done</output>
</check>
<check if="{{new_status}} == 'in-progress'">
<action>Update development_status[{{story_key}}] = "in-progress"</action>
<action>Save file, preserving ALL comments and structure</action>
<output>🔄 Sprint status synced: {{story_key}} → in-progress</output>
</check>
<check if="story key not found in sprint status">
<output>⚠️ Story file updated, but sprint-status sync failed: {{story_key}} not found in sprint-status.yaml</output>
</check>
</check>
<check if="{{current_sprint_status}} == 'no-sprint-tracking'">
<output> Story status updated (no sprint tracking configured)</output>
</check>
<output>**✅ Review Complete!**
**Story Status:** {{new_status}}
**Issues Fixed:** {{fixed_count}}
**Action Items Created:** {{action_count}}
{{#if new_status == "done"}}Code review complete!{{else}}Address the action items and continue development.{{/if}}
</output>
</step>
</workflow>

View File

@ -0,0 +1,122 @@
---
name: 'step-01-load-story'
description: "Compare story's file list against git changes"
---
# Step 1: Load Story and Discover Changes
---
## STATE VARIABLES (capture now, persist throughout)
These variables MUST be set in this step and available to all subsequent steps:
- `story_path` - Path to the story file being reviewed
- `story_key` - Story identifier (e.g., "1-2-user-authentication")
- `story_content` - Complete, unmodified file content from story_path (loaded in substep 2)
- `story_file_list` - Files claimed in story's Dev Agent Record → File List
- `git_changed_files` - Files actually changed according to git
- `git_discrepancies` - Mismatches between `story_file_list` and `git_changed_files`
---
## EXECUTION SEQUENCE
### 1. Identify Story
Ask user: "Which story would you like to review?"
**Try input as direct file path first:**
If input resolves to an existing file:
- Verify it's in {sprint_status} with status `review` or `done`
- If verified → set `story_path` to that file path
- If NOT verified → Warn user the file is not in {sprint_status} (or wrong status). Ask: "Continue anyway?"
- If yes → set `story_path`
- If no → return to user prompt (ask "Which story would you like to review?" again)
**Search {sprint_status}** (if input is not a direct file):
Search for stories with status `review` or `done`. Match by priority:
1. Story number resembles input closely enough (e.g., "1-2" matches "1 2", "1.2", "one dash two", "one two"; "1-32" matches "one thirty two"). Do NOT match if numbers differ (e.g., "1-33" does not match "1-32")
2. Exact story name/key (e.g., "1-2-user-auth-api")
3. Story name/title resembles input closely enough
4. Story description resembles input closely enough
**Resolution:**
- **Single match**: Confident. Set `story_path`, proceed to substep 2
- **Multiple matches**: Uncertain. Present all candidates to user. Wait for selection. Set `story_path`, proceed to substep 2
- **No match**: Ask user to clarify or provide the full story path. Return to user prompt (ask "Which story would you like to review?" again)
### 2. Load Story File
**Load file content:**
Read the complete contents of {story_path} and assign to `story_content` WITHOUT filtering, truncating or summarizing. If {story_path} cannot be read, is empty, or obviously doesn't have the story: report the error to the user and HALT the workflow.
**Extract story identifier:**
Verify the filename ends with `.md` extension. Remove `.md` to get `story_key` (e.g., "1-2-user-authentication.md" → "1-2-user-authentication"). If filename doesn't end with `.md` or the result is empty: report the error to the user and HALT the workflow.
### 3. Extract File List from Story
Extract `story_file_list` from the Dev Agent Record → File List section of {story_content}.
**If Dev Agent Record or File List section not found:** Report to user and set `story_file_list` = NO_FILE_LIST.
### 4. Discover Git Changes
Check if git repository exists.
**If NOT a git repo:** Set `git_changed_files` = NO_GIT, `git_discrepancies` = NO_GIT. Skip to substep 5.
**If git repo detected:**
```bash
git status --porcelain
git diff -M --name-only
git diff -M --cached --name-only
```
If any git command fails: Report the error to the user and HALT the workflow.
Compile `git_changed_files` = union of modified, staged, new, deleted, and renamed files.
### 5. Cross-Reference Story vs Git
**If {git_changed_files} is empty:**
Ask user: "No git changes detected. Continue anyway?"
- If **no**: HALT the workflow
- If **yes**: Continue to comparison
**Compare {story_file_list} with {git_changed_files}:**
Exclude git-ignored files from the comparison (run `git check-ignore` if needed; see the sketch below).
Set `git_discrepancies` with categories:
- **files_in_git_not_story**: Files changed in git but not in story File List
- **files_in_story_not_git**: Files in story File List but no git changes (excluding git-ignored)
- **uncommitted_undocumented**: Uncommitted changes not tracked in story
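A sketch of the comparison with standard tooling, assuming each list has been written to a sorted temp file (file names are illustrative):
```bash
# Files changed in git but missing from the story File List
comm -23 <(sort git_files.txt) <(sort story_files.txt)

# Files claimed in the story but untouched according to git
comm -13 <(sort git_files.txt) <(sort story_files.txt)

# Identify git-ignored paths to drop before comparing (prints the ignored ones)
git check-ignore --stdin < story_files.txt
```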
---
## COMPLETION CHECKLIST
Before proceeding to the next step, verify ALL of the following:
- `story_path` identified and loaded
- `story_key` extracted
- `story_content` captured completely and unmodified
- `story_file_list` compiled from Dev Agent Record (or NO_FILE_LIST if not found)
- `git_changed_files` discovered via git commands (or NO_GIT if not a git repo)
- `git_discrepancies` calculated
**If any criterion is not met:** Report to the user and HALT the workflow.
---
## NEXT STEP DIRECTIVE
**CRITICAL:** When this step completes, explicitly state:
"**NEXT:** Loading `step-02-build-attack-plan.md`"

View File

@ -0,0 +1,155 @@
---
name: 'step-02-adversarial-review'
description: 'Lean adversarial review - context-independent diff analysis, no story knowledge'
---
# Step 2: Adversarial Review (Information Asymmetric)
**Goal:** Perform a context-independent adversarial review of the code changes. The reviewer is given ONLY the diff as input - no story, no ACs, no context about WHY changes were made.
<critical>Reviewer has FULL repo access but NO knowledge of WHY changes were made</critical>
<critical>DO NOT include story file in prompt - asymmetry is about intent, not visibility</critical>
<critical>This catches issues a fresh reviewer would find that story-biased review might miss</critical>
---
## AVAILABLE STATE
From previous steps:
- `{story_path}`, `{story_key}`
- `{file_list}` - Files listed in story's File List section
- `{git_changed_files}` - Files changed according to git
- `{baseline_commit}` - From story file Dev Agent Record
---
## STATE VARIABLE (capture now)
- `{diff_output}` - Complete diff of changes
- `{asymmetric_findings}` - Findings from adversarial review
---
## EXECUTION SEQUENCE
### 1. Construct Diff
Build complete diff of all changes for this story.
**Step 1a: Read baseline from story file**
Extract `Baseline Commit` from the story file's Dev Agent Record section.
- If found and not "NO_GIT": use as `{baseline_commit}`
- If "NO_GIT" or missing: proceed to fallback
**Step 1b: Construct diff (with baseline)**
If `{baseline_commit}` is a valid commit hash:
```bash
git diff {baseline_commit} -- ':!{implementation_artifacts}'
```
This captures all changes (committed + uncommitted) since dev-story started.
**Step 1c: Fallback (no baseline)**
If no baseline available, review current state of files in `{file_list}`:
- Read each file listed in the story's File List section
- Review as full file content (not a diff)
**Include in `{diff_output}`:**
- All modified tracked files (except files in `{implementation_artifacts}` - asymmetry requires hiding intent)
- All new files created for this story
- Full content for new files
**Note:** Do NOT `git add` anything - this is read-only inspection (see the command sketch below).
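A sketch of the construction, assuming `{baseline_commit}` resolves. Note that `git diff` against a commit only covers tracked files, so untracked new files need a separate listing:
```bash
# Committed + uncommitted changes since the baseline, hiding intent artifacts
git diff {baseline_commit} -- ':!{implementation_artifacts}'

# New untracked files are not in the diff above; list them, then read in full
git ls-files --others --exclude-standard
```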
### 2. Invoke Adversarial Review
With `{diff_output}` constructed, invoke the review task. If possible, use information asymmetry: run this step, and only it, in a separate subagent or process with read access to the project, but no context except the `{diff_output}`.
```xml
<invoke-task>Review {diff_output} using {project-root}/_bmad/core/tasks/review-adversarial-general.xml</invoke-task>
```
**Platform fallback:** If task invocation not available, load the task file and execute its instructions inline, passing `{diff_output}` as the content.
The task should: review `{diff_output}` and return a list of findings.
### 3. Process Adversarial Findings
Capture findings from adversarial review.
**If zero findings:** HALT - this is suspicious. Re-analyze or ask for guidance.
Evaluate severity (Critical, High, Medium, Low) and validity (Real, Noise, Undecided).
Add each finding to `{asymmetric_findings}` (no IDs yet - assigned after merge):
```
{
source: "adversarial",
severity: "...",
validity: "...",
description: "...",
location: "file:line (if applicable)"
}
```
### 4. Phase 1 Summary
Present adversarial findings:
```
**Phase 1: Adversarial Review Complete**
**Reviewer Context:** Pure diff review (no story knowledge)
**Findings:** {count}
- Critical: {count}
- High: {count}
- Medium: {count}
- Low: {count}
**Validity Assessment:**
- Real: {count}
- Noise: {count}
- Undecided: {count}
Proceeding to attack plan construction...
```
---
## NEXT STEP DIRECTIVE
**CRITICAL:** When this step completes, explicitly state:
"**NEXT:** Loading `step-03-build-attack-plan.md`"
---
## SUCCESS METRICS
- Diff constructed from correct source (uncommitted or commits)
- Story file excluded from diff
- Task invoked with diff as input
- Adversarial review executed
- Findings captured with severity and validity
- `{asymmetric_findings}` populated
- Phase summary presented
- Explicit NEXT directive provided
## FAILURE MODES
- Including story file in diff (breaks asymmetry)
- Skipping adversarial review entirely
- Accepting zero findings without halt
- Invoking task without providing diff input
- Missing severity/validity classification
- Not storing findings for consolidation
- No explicit NEXT directive at step completion

View File

@ -0,0 +1,147 @@
---
name: 'step-03-build-attack-plan'
description: 'Extract ACs and tasks, create comprehensive review plan for context-aware phase'
---
# Step 3: Build Review Attack Plan
**Goal:** Extract all reviewable items from story and create attack plan for context-aware review phase.
---
## AVAILABLE STATE
From previous steps:
- `{story_path}` - Path to the story file
- `{story_key}` - Story identifier
- `{story_file_list}` - Files claimed in story
- `{git_changed_files}` - Files actually changed (git)
- `{git_discrepancies}` - Differences between claims and reality
- `{asymmetric_findings}` - Findings from Phase 1 (adversarial review)
---
## STATE VARIABLES (capture now)
- `{acceptance_criteria}` - All ACs extracted from story
- `{tasks_with_status}` - All tasks with their [x] or [ ] status
- `{comprehensive_file_list}` - Union of story files + git files
- `{review_attack_plan}` - Structured plan for context-aware phase
---
## EXECUTION SEQUENCE
### 1. Extract Acceptance Criteria
Parse all Acceptance Criteria from story:
```
{acceptance_criteria} = [
{ id: "AC1", requirement: "...", testable: true/false },
{ id: "AC2", requirement: "...", testable: true/false },
...
]
```
Note any ACs that are vague or untestable.
### 2. Extract Tasks with Status
Parse all Tasks/Subtasks with completion markers:
```
{tasks_with_status} = [
{ id: "T1", description: "...", status: "complete" ([x]) or "incomplete" ([ ]) },
{ id: "T1.1", description: "...", status: "complete" or "incomplete" },
...
]
```
Flag any tasks marked complete [x] for verification.
### 3. Build Comprehensive File List
Merge `{story_file_list}` and `{git_changed_files}`:
```
{comprehensive_file_list} = union of:
- Files in story Dev Agent Record
- Files changed according to git
- Deduped and sorted
```
Exclude from review:
- `_bmad/`, `_bmad-output/`
- `.cursor/`, `.windsurf/`, `.claude/`
- IDE/editor config files
### 4. Create Review Attack Plan
Structure the `{review_attack_plan}`:
```
PHASE 1: Adversarial Review (Step 2) [COMPLETE - {asymmetric_findings} findings]
├── Fresh code review without story context
│ └── {asymmetric_findings} items to consolidate
PHASE 2: Context-Aware Review (Step 4)
├── Git vs Story Discrepancies
│ └── {git_discrepancies} items
├── AC Validation
│ └── {acceptance_criteria} items to verify
├── Task Completion Audit
│ └── {tasks_with_status} marked [x] to verify
└── Code Quality Review
└── {comprehensive_file_list} files to review
```
### 5. Preview Attack Plan
Present to user (brief summary):
```
**Review Attack Plan**
**Story:** {story_key}
**Phase 1 (Adversarial - Complete):** {asymmetric_findings count} findings from fresh review
**Phase 2 (Context-Aware - Starting):**
- ACs to verify: {count}
- Tasks marked complete: {count}
- Files to review: {count}
- Git discrepancies detected: {count}
Proceeding with context-aware review...
```
---
## NEXT STEP DIRECTIVE
**CRITICAL:** When this step completes, explicitly state:
"**NEXT:** Loading `step-04-context-aware-review.md`"
---
## SUCCESS METRICS
- All ACs extracted with testability assessment
- All tasks extracted with completion status
- Comprehensive file list built (story + git)
- Exclusions applied correctly
- Attack plan structured for context-aware phase
- Summary presented to user
- Explicit NEXT directive provided
## FAILURE MODES
- Missing AC extraction
- Not capturing task completion status
- Forgetting to merge story + git files
- Not excluding IDE/config directories
- Skipping attack plan structure
- No explicit NEXT directive at step completion

View File

@ -0,0 +1,182 @@
---
name: 'step-04-context-aware-review'
description: 'Story-aware validation: verify ACs, audit task completion, check git discrepancies'
---
# Step 4: Context-Aware Review
**Goal:** Perform story-aware validation - verify AC implementation, audit task completion, review code quality with full story context.
<critical>VALIDATE EVERY CLAIM - Check git reality vs story claims</critical>
<critical>You KNOW the story requirements - use that knowledge to find gaps</critical>
---
## AVAILABLE STATE
From previous steps:
- `{story_path}`, `{story_key}`
- `{story_file_list}`, `{git_changed_files}`, `{git_discrepancies}`
- `{acceptance_criteria}`, `{tasks_with_status}`
- `{comprehensive_file_list}`, `{review_attack_plan}`
- `{asymmetric_findings}` - From Phase 1 (adversarial review)
---
## STATE VARIABLE (capture now)
- `{context_aware_findings}` - All findings from this phase
Initialize `{context_aware_findings}` as empty list.
---
## EXECUTION SEQUENCE
### 0. Load Planning Context (JIT)
Load planning documents for AC validation against system design:
- **Architecture**: `{planning_artifacts}/*architecture*.md` (or sharded: `{planning_artifacts}/*architecture*/*.md`)
- **UX Design**: `{planning_artifacts}/*ux*.md` (if UI review relevant)
- **Epic**: `{planning_artifacts}/*epic*/epic-{epic_num}.md` (the epic containing this story)
These provide the design context needed to validate AC implementation against system requirements.
### 1. Git vs Story Discrepancies
Review `{git_discrepancies}` and create findings:
| Discrepancy Type | Severity |
| --- | --- |
| Files changed but not in story File List | Medium |
| Story lists files but no git changes | High |
| Uncommitted changes not documented | Medium |
For each discrepancy, add to `{context_aware_findings}` (no IDs yet - assigned after merge):
```
{
source: "git-discrepancy",
severity: "...",
description: "...",
evidence: "file: X, git says: Y, story says: Z"
}
```
### 2. Acceptance Criteria Validation
For EACH AC in `{acceptance_criteria}`:
1. Read the AC requirement
2. Search implementation files in `{comprehensive_file_list}` for evidence
3. Determine status: IMPLEMENTED, PARTIAL, or MISSING
4. If PARTIAL or MISSING → add High severity finding
Add to `{context_aware_findings}`:
```
{
source: "ac-validation",
severity: "High",
description: "AC {id} not fully implemented: {details}",
evidence: "Expected: {ac}, Found: {what_was_found}"
}
```
### 3. Task Completion Audit
For EACH task marked [x] in `{tasks_with_status}`:
1. Read the task description
2. Search files for evidence it was actually done
3. **Critical**: If marked [x] but NOT DONE → Critical finding
4. Record specific proof (file:line) if done
Add to `{context_aware_findings}` if false:
```
{
source: "task-audit",
severity: "Critical",
description: "Task marked complete but not implemented: {task}",
evidence: "Searched: {files}, Found: no evidence of {expected}"
}
```
### 4. Code Quality Review (Context-Aware)
For EACH file in `{comprehensive_file_list}`:
Review with STORY CONTEXT (you know what was supposed to be built):
- **Security**: Missing validation for AC-specified inputs?
- **Performance**: Story mentioned scale requirements met?
- **Error Handling**: Edge cases from AC covered?
- **Test Quality**: Tests actually verify ACs or just placeholders?
- **Architecture Compliance**: Follows patterns in architecture doc?
Add findings to `{context_aware_findings}` with appropriate severity.
### 5. Minimum Finding Check
<critical>If total findings < 3, NOT LOOKING HARD ENOUGH</critical>
Re-examine for:
- Edge cases not covered by implementation
- Documentation gaps
- Integration issues with other components
- Dependency problems
- Comments missing for complex logic
---
## PHASE 2 SUMMARY
Present context-aware findings:
```
**Phase 2: Context-Aware Review Complete**
**Findings:** {count}
- Critical: {count}
- High: {count}
- Medium: {count}
- Low: {count}
Proceeding to findings consolidation...
```
Store `{context_aware_findings}` for consolidation in step 5.
---
## NEXT STEP DIRECTIVE
**CRITICAL:** When this step completes, explicitly state:
"**NEXT:** Loading `step-05-consolidate-findings.md`"
---
## SUCCESS METRICS
- All git discrepancies reviewed and findings created
- Every AC checked for implementation evidence
- Every [x] task verified with proof
- Code quality reviewed with story context
- Minimum 3 findings (push harder if not)
- `{context_aware_findings}` populated
- Phase summary presented
- Explicit NEXT directive provided
## FAILURE MODES
- Accepting "looks good" with < 3 findings
- Not verifying [x] tasks with actual evidence
- Missing AC validation
- Ignoring git discrepancies
- Not storing findings for consolidation
- No explicit NEXT directive at step completion

View File

@ -0,0 +1,158 @@
---
name: 'step-05-consolidate-findings'
description: 'Merge and deduplicate findings from both review phases'
---
# Step 5: Consolidate Findings
**Goal:** Merge findings from adversarial review (Phase 1) and context-aware review (Phase 2), deduplicate, and present unified findings table.
---
## AVAILABLE STATE
From previous steps:
- `{story_path}`, `{story_key}`
- `{asymmetric_findings}` - Findings from Phase 1 (step 2 - adversarial review)
- `{context_aware_findings}` - Findings from Phase 2 (step 4 - context-aware review)
---
## STATE VARIABLE (capture now)
- `{consolidated_findings}` - Merged, deduplicated findings
---
## EXECUTION SEQUENCE
### 1. Merge All Findings
Combine both finding lists:
```
all_findings = {context_aware_findings} + {asymmetric_findings}
```
### 2. Deduplicate Findings
Identify duplicates (same underlying issue found by both phases):
**Duplicate Detection Criteria:**
- Same file + same line range
- Same issue type (e.g., both about error handling in same function)
- Overlapping descriptions
**Resolution Rule:**
Keep the MORE DETAILED version:
- If context-aware finding has AC reference → keep that
- If adversarial finding has better technical detail → keep that
- When in doubt, keep context-aware (has more context)
Note which findings were merged (for transparency in the summary).
### 3. Normalize Severity
Apply consistent severity scale (Critical, High, Medium, Low).
### 4. Filter Noise
Review adversarial findings marked as Noise:
- If clearly false positive (e.g., style preference, not actual issue) → exclude
- If questionable → keep with Undecided validity
- If context reveals it's actually valid → upgrade to Real
**Do NOT filter:**
- Any Critical or High severity
- Any context-aware findings (they have story context)
### 5. Sort and Number Findings
Sort by severity (Critical → High → Medium → Low), then assign IDs: F1, F2, F3, etc.
Build `{consolidated_findings}`:
```markdown
| ID | Severity | Source | Description | Location |
|----|----------|--------|-------------|----------|
| F1 | Critical | task-audit | Task 3 marked [x] but not implemented | src/auth.ts |
| F2 | High | ac-validation | AC2 partially implemented | src/api/*.ts |
| F3 | High | adversarial | Missing error handling in API calls | src/api/client.ts:45 |
| F4 | Medium | git-discrepancy | File changed but not in story | src/utils.ts |
| F5 | Low | adversarial | Magic number should be constant | src/config.ts:12 |
```
### 6. Present Consolidated Findings
```markdown
**Consolidated Code Review Findings**
**Story:** {story_key}
**Summary:**
- Total findings: {count}
- Critical: {count}
- High: {count}
- Medium: {count}
- Low: {count}
**Deduplication:** {merged_count} duplicate findings merged
---
## Findings by Severity
### Critical (Must Fix)
{list critical findings with full details}
### High (Should Fix)
{list high findings with full details}
### Medium (Consider Fixing)
{list medium findings}
### Low (Nice to Fix)
{list low findings}
---
**Phase Sources:**
- Adversarial (Phase 1): {count} findings
- Context-Aware (Phase 2): {count} findings
```
---
## NEXT STEP DIRECTIVE
**CRITICAL:** When this step completes, explicitly state:
"**NEXT:** Loading `step-06-resolve-and-update.md`"
---
## SUCCESS METRICS
- All findings merged from both phases
- Duplicates identified and resolved (kept more detailed)
- Severity normalized consistently
- Noise filtered appropriately (but not excessively)
- Consolidated table created
- `{consolidated_findings}` populated
- Summary presented to user
- Explicit NEXT directive provided
## FAILURE MODES
- Missing findings from either phase
- Not detecting duplicates (double-counting issues)
- Inconsistent severity assignment
- Filtering real issues as noise
- Not storing consolidated findings
- No explicit NEXT directive at step completion

View File

@ -0,0 +1,213 @@
---
name: 'step-06-resolve-and-update'
description: 'Present findings, fix or create action items, update story and sprint status'
---
# Step 6: Resolve Findings and Update Status
**Goal:** Present findings to user, handle resolution (fix or action items), update story file and sprint status.
---
## AVAILABLE STATE
From previous steps:
- `{story_path}`, `{story_key}`
- `{consolidated_findings}` - Merged findings from step 5
- `sprint_status` = `{implementation_artifacts}/sprint-status.yaml`
---
## STATE VARIABLES (capture now)
- `{fixed_count}` - Number of issues fixed
- `{action_count}` - Number of action items created
- `{new_status}` - Final story status
---
## EXECUTION SEQUENCE
### 1. Present Resolution Options
```markdown
**Code Review Findings for {user_name}**
**Story:** {story_key}
**Total Issues:** {consolidated_findings.count}
{consolidated_findings_table}
---
**What should I do with these issues?**
**[1] Fix them automatically** - I'll update the code and tests
**[2] Create action items** - Add to story Tasks/Subtasks for later
**[3] Walk through** - Discuss each finding individually
**[4] Show details** - Deep dive into specific issues
Choose [1], [2], [3], [4], or specify which issue (e.g., "F3"):
```
### 2. Handle User Choice
**Option [1]: Fix Automatically**
1. For each CRITICAL and HIGH finding:
- Apply the fix in the code
- Add/update tests if needed
- Record what was fixed
2. Update story Dev Agent Record → File List if files changed
3. Add "Code Review Fixes Applied" entry to Change Log
4. Set `{fixed_count}` = number of issues fixed
5. Set `{action_count}` = number of action items created for remaining LOW findings (0 if none)
**Option [2]: Create Action Items**
1. Add "Review Follow-ups (AI)" subsection to Tasks/Subtasks
2. For each finding:
```
- [ ] [AI-Review][{severity}] {description} [{location}]
```
3. Set `{action_count}` = number of action items created
4. Set `{fixed_count}` = 0
**Option [3]: Walk Through**
For each finding in order:
1. Present finding with full context and code snippet
2. Ask: **[f]ix now / [s]kip / [d]iscuss more**
3. If fix: Apply fix immediately, increment `{fixed_count}`
4. If skip: Note as acknowledged, optionally create action item
5. If discuss: Provide more detail, repeat choice
6. Continue to next finding
After all processed, summarize what was fixed/skipped.
**Option [4]: Show Details**
1. Present expanded details for specific finding(s)
2. Return to resolution choice
### 3. Determine Final Status
Evaluate completion:
**If ALL conditions met:**
- All CRITICAL issues fixed
- All HIGH issues fixed or have action items
- All ACs verified as implemented
Set `{new_status}` = "done"
**Otherwise:**
Set `{new_status}` = "in-progress"
### 4. Update Story File
1. Update story Status field to `{new_status}`
2. Add review notes to Dev Agent Record:
```markdown
## Senior Developer Review (AI)
**Date:** {date}
**Reviewer:** AI Code Review
**Findings Summary:**
- CRITICAL: {count} ({fixed}/{action_items})
- HIGH: {count} ({fixed}/{action_items})
- MEDIUM: {count}
- LOW: {count}
**Resolution:** {approach_taken}
**Files Modified:** {list if fixes applied}
```
3. Update Change Log:
```markdown
- [{date}] Code review completed - {outcome_summary}
```
4. Save story file
### 5. Sync Sprint Status
Check if `{sprint_status}` file exists:
**If exists:**
1. Load `{sprint_status}`
2. Find `{story_key}` in development_status (see the fragment sketch below)
3. Update status to `{new_status}`
4. Save file, preserving ALL comments and structure
```
Sprint status synced: {story_key} → {new_status}
```
**If not exists or key not found:**
```
Sprint status sync skipped (no sprint tracking or key not found)
```
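For orientation, a hypothetical `sprint-status.yaml` fragment showing the shape this step edits (story keys and values are illustrative):
```yaml
# Comments anywhere in this file must survive the save
development_status:
  1-2-user-authentication: in-progress # updated to {new_status} by this step
  1-3-password-reset: review
```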
### 6. Completion Output
```markdown
**✅ Code Review Complete!**
**Story:** {story_key}
**Final Status:** {new_status}
**Issues Fixed:** {fixed_count}
**Action Items Created:** {action_count}
{if new_status == "done"}
Code review passed! Story is ready for final verification.
{else}
Address the action items and run another review cycle.
{endif}
---
**Next Steps:**
- Commit changes (if fixes applied)
- Run tests to verify fixes
- Address remaining action items (if any)
- Mark story complete when all items resolved
```
---
## WORKFLOW COMPLETE
This is the final step. The Code Review workflow is now complete.
---
## SUCCESS METRICS
- Resolution options presented clearly
- User choice handled correctly
- Fixes applied cleanly (if chosen)
- Action items created correctly (if chosen)
- Story status determined correctly
- Story file updated with review notes
- Sprint status synced (if applicable)
- Completion summary provided
## FAILURE MODES
- Not presenting resolution options
- Fixing without user consent
- Not updating story file
- Wrong status determination (done when issues remain)
- Not syncing sprint status when it exists
- Missing completion summary

View File

@ -0,0 +1,39 @@
---
name: code-review
description: 'Code review for dev-story output. Audits acceptance criteria against implementation, performs adversarial diff review, can auto-fix with approval. A different LLM than the implementer is recommended.'
web_bundle: false
---
# Code Review Workflow
## WORKFLOW ARCHITECTURE: STEP FILES
- This file (workflow.md) stays in context throughout
- Each step file is read just before processing (current step stays at end of context)
- State persists via variables: `{story_path}`, `{story_key}`, `{context_aware_findings}`, `{asymmetric_findings}`
---
## INITIALIZATION
### Configuration Loading
Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve:
- `user_name`, `communication_language`, `user_skill_level`, `document_output_language` (see the example fragment below)
- `planning_artifacts`, `implementation_artifacts`
- `date` as system-generated current datetime
- ✅ YOU MUST ALWAYS respond in your agent communication style and in the configured `{communication_language}`
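For orientation, a hypothetical `config.yaml` fragment with the keys this workflow resolves (all values are illustrative):
```yaml
user_name: Alex
communication_language: English
user_skill_level: intermediate
document_output_language: English
planning_artifacts: _bmad-output/planning
implementation_artifacts: _bmad-output/implementation
```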
### Paths
- `installed_path` = `{project-root}/_bmad/bmm/workflows/4-implementation/code-review`
- `project_context` = `**/project-context.md` (load if exists)
- `sprint_status` = `{implementation_artifacts}/sprint-status.yaml`
---
## EXECUTION
Read and follow `steps/step-01-load-story.md` to begin the workflow.

View File

@ -1,51 +0,0 @@
# Review Story Workflow
name: code-review
description: "Perform an ADVERSARIAL Senior Developer code review that finds 3-10 specific problems in every story. Challenges everything: code quality, test coverage, architecture compliance, security, performance. NEVER accepts `looks good` - must find minimum issues and can auto-fix with user approval."
author: "BMad"
# Critical variables from config
config_source: "{project-root}/_bmad/bmm/config.yaml"
user_name: "{config_source}:user_name"
communication_language: "{config_source}:communication_language"
user_skill_level: "{config_source}:user_skill_level"
document_output_language: "{config_source}:document_output_language"
date: system-generated
planning_artifacts: "{config_source}:planning_artifacts"
implementation_artifacts: "{config_source}:implementation_artifacts"
output_folder: "{implementation_artifacts}"
sprint_status: "{implementation_artifacts}/sprint-status.yaml"
# Workflow components
installed_path: "{project-root}/_bmad/bmm/workflows/4-implementation/code-review"
instructions: "{installed_path}/instructions.xml"
validation: "{installed_path}/checklist.md"
template: false
variables:
# Project context
project_context: "**/project-context.md"
story_dir: "{implementation_artifacts}"
# Smart input file references - handles both whole docs and sharded docs
# Priority: Whole document first, then sharded version
# Strategy: SELECTIVE LOAD - only load the specific epic needed for this story review
input_file_patterns:
architecture:
description: "System architecture for review context"
whole: "{planning_artifacts}/*architecture*.md"
sharded: "{planning_artifacts}/*architecture*/*.md"
load_strategy: "FULL_LOAD"
ux_design:
description: "UX design specification (if UI review)"
whole: "{planning_artifacts}/*ux*.md"
sharded: "{planning_artifacts}/*ux*/*.md"
load_strategy: "FULL_LOAD"
epics:
description: "Epic containing story being reviewed"
whole: "{planning_artifacts}/*epic*.md"
sharded_index: "{planning_artifacts}/*epic*/index.md"
sharded_single: "{planning_artifacts}/*epic*/epic-{{epic_num}}.md"
load_strategy: "SELECTIVE_LOAD"
standalone: true
web_bundle: false

View File

@ -219,6 +219,17 @@
<output> No sprint status file exists - story progress will be tracked in story file only</output>
<action>Set {{current_sprint_status}} = "no-sprint-tracking"</action>
</check>
<!-- Capture baseline commit for code review -->
<check if="git is available">
<action>Capture current HEAD commit: `git rev-parse HEAD`</action>
<action>Store as {{baseline_commit}}</action>
<action>Write to story file Dev Agent Record: "**Baseline Commit:** {{baseline_commit}}"</action>
</check>
<check if="git is NOT available">
<action>Set {{baseline_commit}} = "NO_GIT"</action>
<action>Write to story file Dev Agent Record: "**Baseline Commit:** NO_GIT"</action>
</check>
</step>
<step n="5" goal="Implement task following red-green-refactor cycle">

View File

@ -51,7 +51,11 @@ Use best-effort diff construction:
### Capture as {diff_output}
Merge all changes into `{diff_output}`.
**Include in `{diff_output}`:**
- All modified tracked files (except `{tech_spec_path}` if tech-spec mode - asymmetry requires hiding intent)
- All new files created during this workflow
- Full content for new files
**Note:** Do NOT `git add` anything - this is read-only inspection.
@ -75,7 +79,7 @@ The task should: review `{diff_output}` and return a list of findings.
Capture the findings from the task output.
**If zero findings:** HALT - this is suspicious. Re-analyze or request user guidance.
Evaluate severity (Critical, High, Medium, Low) and validity (real, noise, undecided).
Evaluate severity (Critical, High, Medium, Low) and validity (Real, Noise, Undecided).
DO NOT exclude findings based on severity or validity unless explicitly asked to do so.
Order findings by severity.
Number the ordered findings (F1, F2, F3, etc.).
@ -92,6 +96,7 @@ With findings in hand, load `step-06-resolve-findings.md` for user to choose res
## SUCCESS METRICS
- Diff constructed from baseline_commit
- Tech-spec excluded from diff when in tech-spec mode (information asymmetry)
- New files included in diff
- Task invoked with diff as input
- Findings received
@ -100,6 +105,7 @@ With findings in hand, load `step-06-resolve-findings.md` for user to choose res
## FAILURE MODES
- Missing baseline_commit (can't construct accurate diff)
- Including tech_spec_path in diff when in tech-spec mode (breaks asymmetry)
- Not including new untracked files in diff
- Invoking task without providing diff input
- Accepting zero findings without questioning

View File

@ -121,6 +121,8 @@ Parse these fields from YAML comments and metadata:
- {{workflow_name}} ({{agent}}) - {{status}}
{{/each}}
{{/if}}
**Tip:** For guardrail tests, run TEA `*automate` after `dev-story`. If you lose context, TEA workflows resume from artifacts in `{{output_folder}}`.
</output>
</step>

View File

@ -3,7 +3,7 @@ const path = require('node:path');
const fs = require('node:fs');
// Fix for stdin issues when running through npm on Windows
// Ensures keyboard interaction works properly with inquirer prompts
// Ensures keyboard interaction works properly with CLI prompts
if (process.stdin.isTTY) {
try {
process.stdin.resume();

View File

@ -71,14 +71,10 @@ module.exports = {
console.log(chalk.dim(' • ElevenLabs AI (150+ premium voices)'));
console.log(chalk.dim(' • Piper TTS (50+ free voices)\n'));
const { default: inquirer } = await import('inquirer');
await inquirer.prompt([
{
type: 'input',
name: 'continue',
message: chalk.green('Press Enter to start AgentVibes installer...'),
},
]);
const prompts = require('../lib/prompts');
await prompts.text({
message: chalk.green('Press Enter to start AgentVibes installer...'),
});
console.log('');

View File

@ -4,15 +4,7 @@ const yaml = require('yaml');
const chalk = require('chalk');
const { getProjectRoot, getModulePath } = require('../../../lib/project-root');
const { CLIUtils } = require('../../../lib/cli-utils');
// Lazy-load inquirer (ESM module) to avoid ERR_REQUIRE_ESM
let _inquirer = null;
async function getInquirer() {
if (!_inquirer) {
_inquirer = (await import('inquirer')).default;
}
return _inquirer;
}
const prompts = require('../../../lib/prompts');
class ConfigCollector {
constructor() {
@ -183,7 +175,6 @@ class ConfigCollector {
* @returns {boolean} True if new fields were prompted, false if all fields existed
*/
async collectModuleConfigQuick(moduleName, projectDir, silentMode = true) {
const inquirer = await getInquirer();
this.currentProjectDir = projectDir;
// Load existing config if not already loaded
@ -359,7 +350,7 @@ class ConfigCollector {
// Only show header if we actually have questions
CLIUtils.displayModuleConfigHeader(moduleName, moduleConfig.header, moduleConfig.subheader);
console.log(); // Line break before questions
const promptedAnswers = await inquirer.prompt(questions);
const promptedAnswers = await prompts.prompt(questions);
// Merge prompted answers with static answers
Object.assign(allAnswers, promptedAnswers);
@ -502,7 +493,6 @@ class ConfigCollector {
* @param {boolean} skipCompletion - Skip showing completion message (for early core collection)
*/
async collectModuleConfig(moduleName, projectDir, skipLoadExisting = false, skipCompletion = false) {
const inquirer = await getInquirer();
this.currentProjectDir = projectDir;
// Load existing config if needed and not already loaded
if (!skipLoadExisting && !this.existingConfig) {
@ -597,7 +587,7 @@ class ConfigCollector {
console.log(chalk.cyan('?') + ' ' + chalk.magenta(moduleDisplayName));
let customize = true;
if (moduleName !== 'core') {
const customizeAnswer = await inquirer.prompt([
const customizeAnswer = await prompts.prompt([
{
type: 'confirm',
name: 'customize',
@ -614,7 +604,7 @@ class ConfigCollector {
if (questionsWithoutDefaults.length > 0) {
console.log(chalk.dim(`\n Asking required questions for ${moduleName.toUpperCase()}...`));
const promptedAnswers = await inquirer.prompt(questionsWithoutDefaults);
const promptedAnswers = await prompts.prompt(questionsWithoutDefaults);
Object.assign(allAnswers, promptedAnswers);
}
@ -628,7 +618,7 @@ class ConfigCollector {
allAnswers[question.name] = question.default;
}
} else {
const promptedAnswers = await inquirer.prompt(questions);
const promptedAnswers = await prompts.prompt(questions);
Object.assign(allAnswers, promptedAnswers);
}
}
@ -750,7 +740,7 @@ class ConfigCollector {
console.log(chalk.cyan('?') + ' ' + chalk.magenta(moduleDisplayName));
// Ask user if they want to accept defaults or customize on the next line
const { customize } = await inquirer.prompt([
const { customize } = await prompts.prompt([
{
type: 'confirm',
name: 'customize',
@ -845,7 +835,7 @@ class ConfigCollector {
}
/**
* Build an inquirer question from a config item
* Build a prompt question from a config item
* @param {string} moduleName - Module name
* @param {string} key - Config key
* @param {Object} item - Config item definition
@ -1007,7 +997,7 @@ class ConfigCollector {
message: message,
};
// Set default - if it's dynamic, use a function that inquirer will evaluate with current answers
// Set default - if it's dynamic, use a function that the prompt will evaluate with current answers
// But if we have an existing value, always use that instead
if (existingValue !== null && existingValue !== undefined && questionType !== 'list') {
question.default = existingValue;

View File

@ -16,6 +16,7 @@ const { CLIUtils } = require('../../../lib/cli-utils');
const { ManifestGenerator } = require('./manifest-generator');
const { IdeConfigManager } = require('./ide-config-manager');
const { CustomHandler } = require('../custom/handler');
const prompts = require('../../../lib/prompts');
// BMAD installation folder name - this is constant and should never change
const BMAD_FOLDER_NAME = '_bmad';
@ -758,6 +759,9 @@ class Installer {
config.skipIde = toolSelection.skipIde;
const ideConfigurations = toolSelection.configurations;
// Add spacing after prompts before installation progress
console.log('');
if (spinner.isSpinning) {
spinner.text = 'Continuing installation...';
} else {
@ -2139,15 +2143,11 @@ class Installer {
* Private: Prompt for update action
*/
async promptUpdateAction() {
const { default: inquirer } = await import('inquirer');
return await inquirer.prompt([
{
type: 'list',
name: 'action',
message: 'What would you like to do?',
choices: [{ name: 'Update existing installation', value: 'update' }],
},
]);
const action = await prompts.select({
message: 'What would you like to do?',
choices: [{ name: 'Update existing installation', value: 'update' }],
});
return { action };
}
/**
@ -2156,8 +2156,6 @@ class Installer {
* @param {Object} _legacyV4 - Legacy V4 detection result (unused in simplified version)
*/
async handleLegacyV4Migration(_projectDir, _legacyV4) {
const { default: inquirer } = await import('inquirer');
console.log('');
console.log(chalk.yellow.bold('⚠️ Legacy BMAD v4 detected'));
console.log(chalk.yellow('─'.repeat(80)));
@ -2172,26 +2170,22 @@ class Installer {
console.log(chalk.dim('If your v4 installation set up rules or commands, you should remove those as well.'));
console.log('');
const { proceed } = await inquirer.prompt([
{
type: 'list',
name: 'proceed',
message: 'What would you like to do?',
choices: [
{
name: 'Exit and clean up manually (recommended)',
value: 'exit',
short: 'Exit installation',
},
{
name: 'Continue with installation anyway',
value: 'continue',
short: 'Continue',
},
],
default: 'exit',
},
]);
const proceed = await prompts.select({
message: 'What would you like to do?',
choices: [
{
name: 'Exit and clean up manually (recommended)',
value: 'exit',
hint: 'Exit installation',
},
{
name: 'Continue with installation anyway',
value: 'continue',
hint: 'Continue',
},
],
default: 'exit',
});
if (proceed === 'exit') {
console.log('');
@ -2437,7 +2431,6 @@ class Installer {
console.log(chalk.yellow(`\n⚠️ Found ${customModulesWithMissingSources.length} custom module(s) with missing sources:`));
const { default: inquirer } = await import('inquirer');
let keptCount = 0;
let updatedCount = 0;
let removedCount = 0;
@ -2451,12 +2444,12 @@ class Installer {
{
name: 'Keep installed (will not be processed)',
value: 'keep',
short: 'Keep',
hint: 'Keep',
},
{
name: 'Specify new source location',
value: 'update',
short: 'Update',
hint: 'Update',
},
];
@ -2465,47 +2458,40 @@ class Installer {
choices.push({
name: '⚠️ REMOVE module completely (destructive!)',
value: 'remove',
short: 'Remove',
hint: 'Remove',
});
}
const { action } = await inquirer.prompt([
{
type: 'list',
name: 'action',
message: `How would you like to handle "${missing.name}"?`,
choices,
},
]);
const action = await prompts.select({
message: `How would you like to handle "${missing.name}"?`,
choices,
});
switch (action) {
case 'update': {
const { newSourcePath } = await inquirer.prompt([
{
type: 'input',
name: 'newSourcePath',
message: 'Enter the new path to the custom module:',
default: missing.sourcePath,
validate: async (input) => {
if (!input || input.trim() === '') {
return 'Please enter a path';
}
const expandedPath = path.resolve(input.trim());
if (!(await fs.pathExists(expandedPath))) {
return 'Path does not exist';
}
// Check if it looks like a valid module
const moduleYamlPath = path.join(expandedPath, 'module.yaml');
const agentsPath = path.join(expandedPath, 'agents');
const workflowsPath = path.join(expandedPath, 'workflows');
// Use sync validation because @clack/prompts doesn't support async validate
const newSourcePath = await prompts.text({
message: 'Enter the new path to the custom module:',
default: missing.sourcePath,
validate: (input) => {
if (!input || input.trim() === '') {
return 'Please enter a path';
}
const expandedPath = path.resolve(input.trim());
if (!fs.pathExistsSync(expandedPath)) {
return 'Path does not exist';
}
// Check if it looks like a valid module
const moduleYamlPath = path.join(expandedPath, 'module.yaml');
const agentsPath = path.join(expandedPath, 'agents');
const workflowsPath = path.join(expandedPath, 'workflows');
if (!(await fs.pathExists(moduleYamlPath)) && !(await fs.pathExists(agentsPath)) && !(await fs.pathExists(workflowsPath))) {
return 'Path does not appear to contain a valid custom module';
}
return true;
},
if (!fs.pathExistsSync(moduleYamlPath) && !fs.pathExistsSync(agentsPath) && !fs.pathExistsSync(workflowsPath)) {
return 'Path does not appear to contain a valid custom module';
}
return; // clack expects undefined for valid input
},
]);
});
// Update the source in manifest
const resolvedPath = path.resolve(newSourcePath.trim());
@ -2531,46 +2517,38 @@ class Installer {
console.log(chalk.red.bold(`\n⚠️ WARNING: This will PERMANENTLY DELETE "${missing.name}" and all its files!`));
console.log(chalk.red(` Module location: ${path.join(bmadDir, missing.id)}`));
const { confirm } = await inquirer.prompt([
{
type: 'confirm',
name: 'confirm',
message: chalk.red.bold('Are you absolutely sure you want to delete this module?'),
default: false,
},
]);
const confirmDelete = await prompts.confirm({
message: chalk.red.bold('Are you absolutely sure you want to delete this module?'),
default: false,
});
if (confirm) {
const { typedConfirm } = await inquirer.prompt([
{
type: 'input',
name: 'typedConfirm',
message: chalk.red.bold('Type "DELETE" to confirm permanent deletion:'),
validate: (input) => {
if (input !== 'DELETE') {
return chalk.red('You must type "DELETE" exactly to proceed');
}
return true;
},
if (confirmDelete) {
const typedConfirm = await prompts.text({
message: chalk.red.bold('Type "DELETE" to confirm permanent deletion:'),
validate: (input) => {
if (input !== 'DELETE') {
return chalk.red('You must type "DELETE" exactly to proceed');
}
return; // clack expects undefined for valid input
},
]);
});
if (typedConfirm === 'DELETE') {
// Remove the module from filesystem and manifest
const modulePath = path.join(bmadDir, moduleId);
const modulePath = path.join(bmadDir, missing.id);
if (await fs.pathExists(modulePath)) {
const fsExtra = require('fs-extra');
await fsExtra.remove(modulePath);
console.log(chalk.yellow(` ✓ Deleted module directory: ${path.relative(projectRoot, modulePath)}`));
}
await this.manifest.removeModule(bmadDir, moduleId);
await this.manifest.removeCustomModule(bmadDir, moduleId);
await this.manifest.removeModule(bmadDir, missing.id);
await this.manifest.removeCustomModule(bmadDir, missing.id);
console.log(chalk.yellow(` ✓ Removed from manifest`));
// Also remove from installedModules list
if (installedModules && installedModules.includes(moduleId)) {
const index = installedModules.indexOf(moduleId);
if (installedModules && installedModules.includes(missing.id)) {
const index = installedModules.indexOf(missing.id);
if (index !== -1) {
installedModules.splice(index, 1);
}
@ -2591,7 +2569,7 @@ class Installer {
}
case 'keep': {
keptCount++;
keptModulesWithoutSources.push(moduleId);
keptModulesWithoutSources.push(missing.id);
console.log(chalk.dim(` Module will be kept as-is`));
break;
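// Editor's note on the validation contract this migration depends on:
//   Inquirer:       validate: (input) => input.trim() ? true : 'Please enter a path'
//   @clack/prompts: validate: (input) => input.trim() ? undefined : 'Please enter a path'
// clack treats any returned string as an error and undefined as valid, and only
// supports synchronous validators; hence the fs.pathExistsSync calls above.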

View File

@ -13,6 +13,7 @@ const {
resolveSubagentFiles,
} = require('./shared/module-injections');
const { getAgentsFromBmad, getAgentsFromDir } = require('./shared/bmad-artifacts');
const prompts = require('../../../lib/prompts');
/**
* Google Antigravity IDE setup handler
@ -26,6 +27,21 @@ class AntigravitySetup extends BaseIdeSetup {
this.workflowsDir = 'workflows';
}
/**
* Prompt for subagent installation location
* @returns {Promise<string>} Selected location ('project' or 'user')
*/
async _promptInstallLocation() {
return prompts.select({
message: 'Where would you like to install Antigravity subagents?',
choices: [
{ name: 'Project level (.agent/agents/)', value: 'project' },
{ name: 'User level (~/.agent/agents/)', value: 'user' },
],
default: 'project',
});
}
/**
* Collect configuration choices before installation
* @param {Object} options - Configuration options
@ -57,21 +73,7 @@ class AntigravitySetup extends BaseIdeSetup {
config.subagentChoices = await this.promptSubagentInstallation(injectionConfig.subagents);
if (config.subagentChoices.install !== 'none') {
// Ask for installation location
const { default: inquirer } = await import('inquirer');
const locationAnswer = await inquirer.prompt([
{
type: 'list',
name: 'location',
message: 'Where would you like to install Antigravity subagents?',
choices: [
{ name: 'Project level (.agent/agents/)', value: 'project' },
{ name: 'User level (~/.agent/agents/)', value: 'user' },
],
default: 'project',
},
]);
config.installLocation = locationAnswer.location;
config.installLocation = await this._promptInstallLocation();
}
}
} catch (error) {
@ -297,20 +299,7 @@ class AntigravitySetup extends BaseIdeSetup {
choices = await this.promptSubagentInstallation(config.subagents);
if (choices.install !== 'none') {
const { default: inquirer } = await import('inquirer');
const locationAnswer = await inquirer.prompt([
{
type: 'list',
name: 'location',
message: 'Where would you like to install Antigravity subagents?',
choices: [
{ name: 'Project level (.agent/agents/)', value: 'project' },
{ name: 'User level (~/.agent/agents/)', value: 'user' },
],
default: 'project',
},
]);
location = locationAnswer.location;
location = await this._promptInstallLocation();
}
}
@ -334,22 +323,16 @@ class AntigravitySetup extends BaseIdeSetup {
* Prompt user for subagent installation preferences
*/
async promptSubagentInstallation(subagentConfig) {
const { default: inquirer } = await import('inquirer');
// First ask if they want to install subagents
const { install } = await inquirer.prompt([
{
type: 'list',
name: 'install',
message: 'Would you like to install Antigravity subagents for enhanced functionality?',
choices: [
{ name: 'Yes, install all subagents', value: 'all' },
{ name: 'Yes, let me choose specific subagents', value: 'selective' },
{ name: 'No, skip subagent installation', value: 'none' },
],
default: 'all',
},
]);
const install = await prompts.select({
message: 'Would you like to install Antigravity subagents for enhanced functionality?',
choices: [
{ name: 'Yes, install all subagents', value: 'all' },
{ name: 'Yes, let me choose specific subagents', value: 'selective' },
{ name: 'No, skip subagent installation', value: 'none' },
],
default: 'all',
});
if (install === 'selective') {
// Show list of available subagents with descriptions
@ -361,18 +344,14 @@ class AntigravitySetup extends BaseIdeSetup {
'document-reviewer.md': 'Document quality review',
};
const { selected } = await inquirer.prompt([
{
type: 'checkbox',
name: 'selected',
message: 'Select subagents to install:',
choices: subagentConfig.files.map((file) => ({
name: `${file.replace('.md', '')} - ${subagentInfo[file] || 'Specialized assistant'}`,
value: file,
checked: true,
})),
},
]);
const selected = await prompts.multiselect({
message: `Select subagents to install ${chalk.dim('(↑/↓ navigate, SPACE select, ENTER confirm)')}:`,
choices: subagentConfig.files.map((file) => ({
name: `${file.replace('.md', '')} - ${subagentInfo[file] || 'Specialized assistant'}`,
value: file,
checked: true,
})),
});
return { install: 'selective', selected };
}

View File

@ -13,6 +13,7 @@ const {
resolveSubagentFiles,
} = require('./shared/module-injections');
const { getAgentsFromBmad, getAgentsFromDir } = require('./shared/bmad-artifacts');
const prompts = require('../../../lib/prompts');
/**
* Claude Code IDE setup handler
@ -25,6 +26,21 @@ class ClaudeCodeSetup extends BaseIdeSetup {
this.agentsDir = 'agents';
}
/**
* Prompt for subagent installation location
* @returns {Promise<string>} Selected location ('project' or 'user')
*/
async promptInstallLocation() {
return prompts.select({
message: 'Where would you like to install Claude Code subagents?',
choices: [
{ name: 'Project level (.claude/agents/)', value: 'project' },
{ name: 'User level (~/.claude/agents/)', value: 'user' },
],
default: 'project',
});
}
/**
* Collect configuration choices before installation
* @param {Object} options - Configuration options
@ -56,21 +72,7 @@ class ClaudeCodeSetup extends BaseIdeSetup {
config.subagentChoices = await this.promptSubagentInstallation(injectionConfig.subagents);
if (config.subagentChoices.install !== 'none') {
// Ask for installation location
const { default: inquirer } = await import('inquirer');
const locationAnswer = await inquirer.prompt([
{
type: 'list',
name: 'location',
message: 'Where would you like to install Claude Code subagents?',
choices: [
{ name: 'Project level (.claude/agents/)', value: 'project' },
{ name: 'User level (~/.claude/agents/)', value: 'user' },
],
default: 'project',
},
]);
config.installLocation = locationAnswer.location;
config.installLocation = await this.promptInstallLocation();
}
}
} catch (error) {
@ -305,20 +307,7 @@ class ClaudeCodeSetup extends BaseIdeSetup {
choices = await this.promptSubagentInstallation(config.subagents);
if (choices.install !== 'none') {
const { default: inquirer } = await import('inquirer');
const locationAnswer = await inquirer.prompt([
{
type: 'list',
name: 'location',
message: 'Where would you like to install Claude Code subagents?',
choices: [
{ name: 'Project level (.claude/agents/)', value: 'project' },
{ name: 'User level (~/.claude/agents/)', value: 'user' },
],
default: 'project',
},
]);
location = locationAnswer.location;
location = await this.promptInstallLocation();
}
}
@ -342,22 +331,16 @@ class ClaudeCodeSetup extends BaseIdeSetup {
* Prompt user for subagent installation preferences
*/
async promptSubagentInstallation(subagentConfig) {
const { default: inquirer } = await import('inquirer');
// First ask if they want to install subagents
const { install } = await inquirer.prompt([
{
type: 'list',
name: 'install',
message: 'Would you like to install Claude Code subagents for enhanced functionality?',
choices: [
{ name: 'Yes, install all subagents', value: 'all' },
{ name: 'Yes, let me choose specific subagents', value: 'selective' },
{ name: 'No, skip subagent installation', value: 'none' },
],
default: 'all',
},
]);
const install = await prompts.select({
message: 'Would you like to install Claude Code subagents for enhanced functionality?',
choices: [
{ name: 'Yes, install all subagents', value: 'all' },
{ name: 'Yes, let me choose specific subagents', value: 'selective' },
{ name: 'No, skip subagent installation', value: 'none' },
],
default: 'all',
});
if (install === 'selective') {
// Show list of available subagents with descriptions
@ -369,18 +352,14 @@ class ClaudeCodeSetup extends BaseIdeSetup {
'document-reviewer.md': 'Document quality review',
};
const { selected } = await inquirer.prompt([
{
type: 'checkbox',
name: 'selected',
message: 'Select subagents to install:',
choices: subagentConfig.files.map((file) => ({
name: `${file.replace('.md', '')} - ${subagentInfo[file] || 'Specialized assistant'}`,
value: file,
checked: true,
})),
},
]);
const selected = await prompts.multiselect({
message: `Select subagents to install ${chalk.dim('(↑/↓ navigate, SPACE select, ENTER confirm)')}:`,
options: subagentConfig.files.map((file) => ({
label: `${file.replace('.md', '')} - ${subagentInfo[file] || 'Specialized assistant'}`,
value: file,
})),
initialValues: subagentConfig.files,
});
return { install: 'selective', selected };
}
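// Editor's note: the wrapper's multiselect accepts both call shapes.
// Inquirer-compatible (as in the Antigravity handler above):
//   await prompts.multiselect({ message: 'Pick:', choices: [{ name: 'a.md', value: 'a.md', checked: true }] });
// Native clack (as used here):
//   await prompts.multiselect({ message: 'Pick:', options: [{ label: 'a.md', value: 'a.md' }], initialValues: ['a.md'] });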

View File

@ -6,6 +6,7 @@ const { BaseIdeSetup } = require('./_base-ide');
const { WorkflowCommandGenerator } = require('./shared/workflow-command-generator');
const { AgentCommandGenerator } = require('./shared/agent-command-generator');
const { getTasksFromBmad } = require('./shared/bmad-artifacts');
const prompts = require('../../../lib/prompts');
/**
* Codex setup handler (CLI mode)
@ -21,32 +22,24 @@ class CodexSetup extends BaseIdeSetup {
* @returns {Object} Collected configuration
*/
async collectConfiguration(options = {}) {
const { default: inquirer } = await import('inquirer');
let confirmed = false;
let installLocation = 'global';
while (!confirmed) {
const { location } = await inquirer.prompt([
{
type: 'list',
name: 'location',
message: 'Where would you like to install Codex CLI prompts?',
choices: [
{
name: 'Global - Simple for single project ' + '(~/.codex/prompts, but references THIS project only)',
value: 'global',
},
{
name: `Project-specific - Recommended for real work (requires CODEX_HOME=<project-dir>${path.sep}.codex)`,
value: 'project',
},
],
default: 'global',
},
]);
installLocation = location;
installLocation = await prompts.select({
message: 'Where would you like to install Codex CLI prompts?',
choices: [
{
name: 'Global - Simple for single project ' + '(~/.codex/prompts, but references THIS project only)',
value: 'global',
},
{
name: `Project-specific - Recommended for real work (requires CODEX_HOME=<project-dir>${path.sep}.codex)`,
value: 'project',
},
],
default: 'global',
});
// Display detailed instructions for the chosen option
console.log('');
@ -57,16 +50,10 @@ class CodexSetup extends BaseIdeSetup {
}
// Confirm the choice
const { proceed } = await inquirer.prompt([
{
type: 'confirm',
name: 'proceed',
message: 'Proceed with this installation option?',
default: true,
},
]);
confirmed = proceed;
confirmed = await prompts.confirm({
message: 'Proceed with this installation option?',
default: true,
});
if (!confirmed) {
console.log(chalk.yellow("\n Let's choose a different installation option.\n"));

View File

@ -2,6 +2,7 @@ const path = require('node:path');
const { BaseIdeSetup } = require('./_base-ide');
const chalk = require('chalk');
const { AgentCommandGenerator } = require('./shared/agent-command-generator');
const prompts = require('../../../lib/prompts');
/**
* GitHub Copilot setup handler
@ -21,29 +22,23 @@ class GitHubCopilotSetup extends BaseIdeSetup {
* @returns {Object} Collected configuration
*/
async collectConfiguration(options = {}) {
const { default: inquirer } = await import('inquirer');
const config = {};
console.log('\n' + chalk.blue(' 🔧 VS Code Settings Configuration'));
console.log(chalk.dim(' GitHub Copilot works best with specific settings\n'));
const response = await inquirer.prompt([
{
type: 'list',
name: 'configChoice',
message: 'How would you like to configure VS Code settings?',
choices: [
{ name: 'Use recommended defaults (fastest)', value: 'defaults' },
{ name: 'Configure each setting manually', value: 'manual' },
{ name: 'Skip settings configuration', value: 'skip' },
],
default: 'defaults',
},
]);
config.vsCodeConfig = response.configChoice;
config.vsCodeConfig = await prompts.select({
message: 'How would you like to configure VS Code settings?',
choices: [
{ name: 'Use recommended defaults (fastest)', value: 'defaults' },
{ name: 'Configure each setting manually', value: 'manual' },
{ name: 'Skip settings configuration', value: 'skip' },
],
default: 'defaults',
});
if (response.configChoice === 'manual') {
config.manualSettings = await inquirer.prompt([
if (config.vsCodeConfig === 'manual') {
config.manualSettings = await prompts.prompt([
{
type: 'input',
name: 'maxRequests',
@ -52,7 +47,8 @@ class GitHubCopilotSetup extends BaseIdeSetup {
validate: (input) => {
const num = parseInt(input, 10);
if (isNaN(num)) return 'Enter a valid number 1-50';
return (num >= 1 && num <= 50) || 'Enter 1-50';
if (num < 1 || num > 50) return 'Enter a number between 1-50';
return true;
},
},
{

View File

@ -119,7 +119,8 @@ class KiloSetup extends BaseIdeSetup {
modeEntry += ` name: '${icon} ${title}'\n`;
modeEntry += ` roleDefinition: ${roleDefinition}\n`;
modeEntry += ` whenToUse: ${whenToUse}\n`;
modeEntry += ` customInstructions: ${activationHeader} Read the full YAML from ${relativePath} start activation to alter your state of being follow startup section instructions stay in this being until told to exit this mode\n`;
modeEntry += ` customInstructions: |\n`;
modeEntry += ` ${activationHeader} Read the full YAML from ${relativePath} start activation to alter your state of being follow startup section instructions stay in this being until told to exit this mode\n`;
modeEntry += ` groups:\n`;
modeEntry += ` - read\n`;
modeEntry += ` - edit\n`;

View File

@ -108,7 +108,10 @@ async function resolveSubagentFiles(handlerBaseDir, subagentConfig, subagentChoi
const resolved = [];
for (const file of filesToCopy) {
const pattern = path.join(sourceDir, '**', file);
// Normalize backslashes to forward slashes: glob treats '\' as an escape character, while '/' works on both Windows and Unix
const normalizedSourceDir = sourceDir.replaceAll('\\', '/');
const pattern = `${normalizedSourceDir}/**/${file}`;
const matches = await glob(pattern);
if (matches.length > 0) {
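// Editor's sketch of the failure this hunk fixes (results shown for a Windows host):
//   path.join('C:\\proj\\subagents', '**', 'reviewer.md')
//     -> 'C:\proj\subagents\**\reviewer.md'   (glob reads '\' as an escape; nothing matches)
//   'C:\\proj\\subagents'.replaceAll('\\', '/') + '/**/reviewer.md'
//     -> 'C:/proj/subagents/**/reviewer.md'   (forward slashes match on every platform)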

432
tools/cli/lib/prompts.js Normal file
View File

@ -0,0 +1,432 @@
/**
* @clack/prompts wrapper for BMAD CLI
*
* This module provides a unified interface for CLI prompts using @clack/prompts.
* It replaces Inquirer.js to fix Windows arrow key navigation issues (libuv #852).
*
* @module prompts
*/
let _clack = null;
/**
* Lazy-load @clack/prompts (ESM module)
* @returns {Promise<Object>} The clack prompts module
*/
async function getClack() {
if (!_clack) {
_clack = await import('@clack/prompts');
}
return _clack;
}
/**
* Handle user cancellation gracefully
* @param {any} value - The value to check
* @param {string} [message='Operation cancelled'] - Message to display
* @returns {Promise<boolean>} Resolves to false for normal values; exits the process if the value is a cancel symbol
*/
async function handleCancel(value, message = 'Operation cancelled') {
const clack = await getClack();
if (clack.isCancel(value)) {
clack.cancel(message);
process.exit(0);
}
return false;
}
/**
* Display intro message
* @param {string} message - The intro message
*/
async function intro(message) {
const clack = await getClack();
clack.intro(message);
}
/**
* Display outro message
* @param {string} message - The outro message
*/
async function outro(message) {
const clack = await getClack();
clack.outro(message);
}
/**
* Display a note/info box
* @param {string} message - The note content
* @param {string} [title] - Optional title
*/
async function note(message, title) {
const clack = await getClack();
clack.note(message, title);
}
/**
* Display a spinner for async operations
* @returns {Promise<Object>} Spinner controller with start, stop, and message methods
*/
async function spinner() {
const clack = await getClack();
return clack.spinner();
}
/**
* Single-select prompt (replaces Inquirer 'list' type)
* @param {Object} options - Prompt options
* @param {string} options.message - The question to ask
* @param {Array} options.choices - Array of choices [{name, value, hint?}]
* @param {any} [options.default] - Default selected value
* @returns {Promise<any>} Selected value
*/
async function select(options) {
const clack = await getClack();
// Convert Inquirer-style choices to clack format
// Handle both object choices {name, value, hint} and primitive choices (string/number)
const clackOptions = options.choices
.filter((c) => c.type !== 'separator') // Skip separators for now
.map((choice) => {
if (typeof choice === 'string' || typeof choice === 'number') {
return { value: choice, label: String(choice) };
}
return {
value: choice.value === undefined ? choice.name : choice.value,
label: choice.name || choice.label || String(choice.value),
hint: choice.hint || choice.description,
};
});
// Find initial value
let initialValue;
if (options.default !== undefined) {
initialValue = options.default;
}
const result = await clack.select({
message: options.message,
options: clackOptions,
initialValue,
});
await handleCancel(result);
return result;
}
/**
* Multi-select prompt (replaces Inquirer 'checkbox' type)
* @param {Object} options - Prompt options
* @param {string} options.message - The question to ask
* @param {Array} options.choices - Array of choices [{name, value, checked?, hint?}]
* @param {boolean} [options.required=false] - Whether at least one must be selected
* @returns {Promise<Array>} Array of selected values
*/
async function multiselect(options) {
const clack = await getClack();
// Support both clack-native (options) and Inquirer-style (choices) APIs
let clackOptions;
let initialValues;
if (options.options) {
// Native clack format: options with label/value
clackOptions = options.options;
initialValues = options.initialValues || [];
} else {
// Convert Inquirer-style choices to clack format
// Handle both object choices {name, value, hint} and primitive choices (string/number)
clackOptions = options.choices
.filter((c) => c.type !== 'separator') // Skip separators
.map((choice) => {
if (typeof choice === 'string' || typeof choice === 'number') {
return { value: choice, label: String(choice) };
}
return {
value: choice.value === undefined ? choice.name : choice.value,
label: choice.name || choice.label || String(choice.value),
hint: choice.hint || choice.description,
};
});
// Find initial values (pre-checked items)
initialValues = options.choices
.filter((c) => c.checked && c.type !== 'separator')
.map((c) => (c.value === undefined ? c.name : c.value));
}
const result = await clack.multiselect({
message: options.message,
options: clackOptions,
initialValues: initialValues.length > 0 ? initialValues : undefined,
required: options.required || false,
});
await handleCancel(result);
return result;
}
/**
* Grouped multi-select prompt for categorized options
* @param {Object} options - Prompt options
* @param {string} options.message - The question to ask
* @param {Object} options.options - Object mapping group names to arrays of choices
* @param {Array} [options.initialValues] - Array of initially selected values
* @param {boolean} [options.required=false] - Whether at least one must be selected
* @param {boolean} [options.selectableGroups=false] - Whether groups can be selected as a whole
* @returns {Promise<Array>} Array of selected values
*/
async function groupMultiselect(options) {
const clack = await getClack();
const result = await clack.groupMultiselect({
message: options.message,
options: options.options,
initialValues: options.initialValues,
required: options.required || false,
selectableGroups: options.selectableGroups || false, // forward so the documented default (false) actually applies
});
await handleCancel(result);
return result;
}
/**
* Confirm prompt (replaces Inquirer 'confirm' type)
* @param {Object} options - Prompt options
* @param {string} options.message - The question to ask
* @param {boolean} [options.default=true] - Default value
* @returns {Promise<boolean>} User's answer
*/
async function confirm(options) {
const clack = await getClack();
const result = await clack.confirm({
message: options.message,
initialValue: options.default === undefined ? true : options.default,
});
await handleCancel(result);
return result;
}
/**
* Text input prompt (replaces Inquirer 'input' type)
* @param {Object} options - Prompt options
* @param {string} options.message - The question to ask
* @param {string} [options.default] - Default value
* @param {string} [options.placeholder] - Placeholder text (defaults to options.default if not provided)
* @param {Function} [options.validate] - Validation function
* @returns {Promise<string>} User's input
*/
async function text(options) {
const clack = await getClack();
// Use default as placeholder if placeholder not explicitly provided
// This shows the default value as grayed-out hint text
const placeholder = options.placeholder === undefined ? options.default : options.placeholder;
const result = await clack.text({
message: options.message,
defaultValue: options.default,
placeholder: typeof placeholder === 'string' ? placeholder : undefined,
validate: options.validate,
});
await handleCancel(result);
return result;
}
/**
* Password input prompt (replaces Inquirer 'password' type)
* @param {Object} options - Prompt options
* @param {string} options.message - The question to ask
* @param {Function} [options.validate] - Validation function
* @returns {Promise<string>} User's input
*/
async function password(options) {
const clack = await getClack();
const result = await clack.password({
message: options.message,
validate: options.validate,
});
await handleCancel(result);
return result;
}
/**
* Group multiple prompts together
* @param {Object} prompts - Object of prompt functions
* @param {Object} [options] - Group options
* @returns {Promise<Object>} Object with all answers
*/
async function group(prompts, options = {}) {
const clack = await getClack();
const result = await clack.group(prompts, {
onCancel: () => {
clack.cancel('Operation cancelled');
process.exit(0);
},
...options,
});
return result;
}
/**
* Run tasks with spinner feedback
* @param {Array} taskList - Array of task objects [{title, task, enabled?}]
* @returns {Promise<void>}
*/
async function tasks(taskList) {
const clack = await getClack();
await clack.tasks(taskList);
}
/**
* Log messages with styling
*/
const log = {
async info(message) {
const clack = await getClack();
clack.log.info(message);
},
async success(message) {
const clack = await getClack();
clack.log.success(message);
},
async warn(message) {
const clack = await getClack();
clack.log.warn(message);
},
async error(message) {
const clack = await getClack();
clack.log.error(message);
},
async message(message) {
const clack = await getClack();
clack.log.message(message);
},
async step(message) {
const clack = await getClack();
clack.log.step(message);
},
};
/**
* Execute an array of Inquirer-style questions using @clack/prompts
* This provides compatibility with dynamic question arrays
* @param {Array} questions - Array of Inquirer-style question objects
* @returns {Promise<Object>} Object with answers keyed by question name
*/
async function prompt(questions) {
const answers = {};
for (const question of questions) {
const { type, name, message, choices, default: defaultValue, validate, when } = question;
// Handle conditional questions via 'when' property
if (when !== undefined) {
const shouldAsk = typeof when === 'function' ? await when(answers) : when;
if (!shouldAsk) continue;
}
let answer;
switch (type) {
case 'input': {
// Note: @clack/prompts doesn't support async validation, so validate must be sync
answer = await text({
message,
default: typeof defaultValue === 'function' ? defaultValue(answers) : defaultValue,
validate: validate
? (val) => {
const result = validate(val, answers);
if (result instanceof Promise) {
throw new TypeError('Async validation is not supported by @clack/prompts. Please use synchronous validation.');
}
return result === true ? undefined : result;
}
: undefined,
});
break;
}
case 'confirm': {
answer = await confirm({
message,
default: typeof defaultValue === 'function' ? defaultValue(answers) : defaultValue,
});
break;
}
case 'list': {
answer = await select({
message,
choices: choices || [],
default: typeof defaultValue === 'function' ? defaultValue(answers) : defaultValue,
});
break;
}
case 'checkbox': {
answer = await multiselect({
message,
choices: choices || [],
required: false,
});
break;
}
case 'password': {
// Note: @clack/prompts doesn't support async validation, so validate must be sync
answer = await password({
message,
validate: validate
? (val) => {
const result = validate(val, answers);
if (result instanceof Promise) {
throw new TypeError('Async validation is not supported by @clack/prompts. Please use synchronous validation.');
}
return result === true ? undefined : result;
}
: undefined,
});
break;
}
default: {
// Default to text input for unknown types
answer = await text({
message,
default: typeof defaultValue === 'function' ? defaultValue(answers) : defaultValue,
});
}
}
answers[name] = answer;
}
return answers;
}
module.exports = {
getClack,
handleCancel,
intro,
outro,
note,
spinner,
select,
multiselect,
groupMultiselect,
confirm,
text,
password,
group,
tasks,
log,
prompt,
};
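// Editor's usage sketch (not part of the commit; written as if from a caller
// module, so the require path and all names are illustrative):
const prompts = require('./prompts');

async function demoPrompts() {
  // Direct calls resolve to the raw value; Ctrl+C exits cleanly via handleCancel.
  const flavor = await prompts.select({
    message: 'Pick a flavor:',
    choices: [
      { name: 'Vanilla', value: 'vanilla' },
      { name: 'Chocolate', value: 'chocolate' },
    ],
    default: 'vanilla',
  });
  const proceed = await prompts.confirm({ message: 'Proceed?', default: true });

  // Inquirer-compatible batch API, including conditional questions via 'when'.
  const answers = await prompts.prompt([
    { type: 'input', name: 'name', message: 'Name:', default: 'bmad' },
    { type: 'confirm', name: 'extras', message: 'Add extras?', default: false },
    { type: 'checkbox', name: 'items', message: 'Items:', choices: ['a', 'b'], when: (a) => a.extras },
  ]);
  return { flavor, proceed, answers };
}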

View File

@ -4,16 +4,21 @@ const os = require('node:os');
const fs = require('fs-extra');
const { CLIUtils } = require('./cli-utils');
const { CustomHandler } = require('../installers/lib/custom/handler');
const prompts = require('./prompts');
// Lazy-load inquirer (ESM module) to avoid ERR_REQUIRE_ESM
let _inquirer = null;
async function getInquirer() {
if (!_inquirer) {
_inquirer = (await import('inquirer')).default;
// Separator class for visual grouping in select/multiselect prompts
// Note: @clack/prompts doesn't support separators natively, they are filtered out
class Separator {
constructor(text = '────────') {
this.line = text;
this.name = text;
}
return _inquirer;
type = 'separator';
}
// Separator for choice lists (compatible interface)
const choiceUtils = { Separator };
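// Editor's sketch: legacy choice arrays keep working because the prompts wrapper
// filters out entries whose type is 'separator' before handing options to clack:
//   const choices = [new choiceUtils.Separator('── Custom Content ──'), { name: 'My Module', value: 'my-module' }];
//   await prompts.select({ message: 'Pick:', choices }); // only 'My Module' is shown; the heading is dropped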
/**
* UI utilities for the installer
*/
@ -23,7 +28,6 @@ class UI {
* @returns {Object} Installation configuration
*/
async promptInstall() {
const inquirer = await getInquirer();
CLIUtils.displayLogo();
// Display version-specific start message from install-messages.yaml
@ -113,26 +117,20 @@ class UI {
console.log(chalk.yellow('─'.repeat(80)));
console.log('');
const { proceed } = await inquirer.prompt([
{
type: 'list',
name: 'proceed',
message: 'What would you like to do?',
choices: [
{
name: 'Cancel and do a fresh install (recommended)',
value: 'cancel',
short: 'Cancel installation',
},
{
name: 'Proceed anyway (will attempt update, potentially may fail or have unstable behavior)',
value: 'proceed',
short: 'Proceed with update',
},
],
default: 'cancel',
},
]);
const proceed = await prompts.select({
message: 'What would you like to do?',
choices: [
{
name: 'Cancel and do a fresh install (recommended)',
value: 'cancel',
},
{
name: 'Proceed anyway (will attempt update, potentially may fail or have unstable behavior)',
value: 'proceed',
},
],
default: 'cancel',
});
if (proceed === 'cancel') {
console.log('');
@ -188,14 +186,10 @@ class UI {
// If Claude Code was selected, ask about TTS
if (claudeCodeSelected) {
const { enableTts } = await inquirer.prompt([
{
type: 'confirm',
name: 'enableTts',
message: 'Claude Code supports TTS (Text-to-Speech). Would you like to enable it?',
default: false,
},
]);
const enableTts = await prompts.confirm({
message: 'Claude Code supports TTS (Text-to-Speech). Would you like to enable it?',
default: false,
});
if (enableTts) {
agentVibesConfig = { enabled: true, alreadyInstalled: false };
@ -250,18 +244,11 @@ class UI {
// Common actions
choices.push({ name: 'Modify BMAD Installation', value: 'update' });
const promptResult = await inquirer.prompt([
{
type: 'list',
name: 'actionType',
message: 'What would you like to do?',
choices: choices,
default: choices[0].value, // Use the first option as default
},
]);
// Extract actionType from prompt result
actionType = promptResult.actionType;
actionType = await prompts.select({
message: 'What would you like to do?',
choices: choices,
default: choices[0].value,
});
// Handle quick update separately
if (actionType === 'quick-update') {
@ -290,14 +277,10 @@ class UI {
const { installedModuleIds } = await this.getExistingInstallation(confirmedDirectory);
console.log(chalk.dim(` Found existing modules: ${[...installedModuleIds].join(', ')}`));
const { changeModuleSelection } = await inquirer.prompt([
{
type: 'confirm',
name: 'changeModuleSelection',
message: 'Modify official module selection (BMad Method, BMad Builder, Creative Innovation Suite)?',
default: false,
},
]);
const changeModuleSelection = await prompts.confirm({
message: 'Modify official module selection (BMad Method, BMad Builder, Creative Innovation Suite)?',
default: false,
});
let selectedModules = [];
if (changeModuleSelection) {
@ -310,14 +293,10 @@ class UI {
// After module selection, ask about custom modules
console.log('');
const { changeCustomModules } = await inquirer.prompt([
{
type: 'confirm',
name: 'changeCustomModules',
message: 'Modify custom module selection (add, update, or remove custom modules/agents/workflows)?',
default: false,
},
]);
const changeCustomModules = await prompts.confirm({
message: 'Modify custom module selection (add, update, or remove custom modules/agents/workflows)?',
default: false,
});
let customModuleResult = { selectedCustomModules: [], customContentConfig: { hasCustomContent: false } };
if (changeCustomModules) {
@ -352,15 +331,10 @@ class UI {
let enableTts = false;
if (hasClaudeCode) {
const { enableTts: enable } = await inquirer.prompt([
{
type: 'confirm',
name: 'enableTts',
message: 'Claude Code supports TTS (Text-to-Speech). Would you like to enable it?',
default: false,
},
]);
enableTts = enable;
enableTts = await prompts.confirm({
message: 'Claude Code supports TTS (Text-to-Speech). Would you like to enable it?',
default: false,
});
}
// Core config with existing defaults (ask after TTS)
@ -385,14 +359,10 @@ class UI {
const { installedModuleIds } = await this.getExistingInstallation(confirmedDirectory);
// Ask about official modules for new installations
const { wantsOfficialModules } = await inquirer.prompt([
{
type: 'confirm',
name: 'wantsOfficialModules',
message: 'Will you be installing any official BMad modules (BMad Method, BMad Builder, Creative Innovation Suite)?',
default: true,
},
]);
const wantsOfficialModules = await prompts.confirm({
message: 'Will you be installing any official BMad modules (BMad Method, BMad Builder, Creative Innovation Suite)?',
default: true,
});
let selectedOfficialModules = [];
if (wantsOfficialModules) {
@ -401,14 +371,10 @@ class UI {
}
// Ask about custom content
const { wantsCustomContent } = await inquirer.prompt([
{
type: 'confirm',
name: 'wantsCustomContent',
message: 'Would you like to install a local custom module (this includes custom agents and workflows also)?',
default: false,
},
]);
const wantsCustomContent = await prompts.confirm({
message: 'Would you like to install a local custom module (this includes custom agents and workflows also)?',
default: false,
});
if (wantsCustomContent) {
customContentConfig = await this.promptCustomContentSource();
@ -459,7 +425,6 @@ class UI {
* @returns {Object} Tool configuration
*/
async promptToolSelection(projectDir, selectedModules) {
const inquirer = await getInquirer();
// Check for existing configured IDEs - use findBmadDir to detect custom folder names
const { Detector } = require('../installers/lib/core/detector');
const { Installer } = require('../installers/lib/core/installer');
@ -477,13 +442,14 @@ class UI {
const preferredIdes = ideManager.getPreferredIdes();
const otherIdes = ideManager.getOtherIdes();
// Build IDE choices array with separators
const ideChoices = [];
// Build grouped options object for groupMultiselect
const groupedOptions = {};
const processedIdes = new Set();
const initialValues = [];
// First, add previously configured IDEs at the top, marked with ✅
if (configuredIdes.length > 0) {
ideChoices.push(new inquirer.Separator('── Previously Configured ──'));
const configuredGroup = [];
for (const ideValue of configuredIdes) {
// Skip empty or invalid IDE values
if (!ideValue || typeof ideValue !== 'string') {
@ -496,81 +462,71 @@ class UI {
const ide = preferredIde || otherIde;
if (ide) {
ideChoices.push({
name: `${ide.name}`,
configuredGroup.push({
label: `${ide.name}`,
value: ide.value,
checked: true, // Previously configured IDEs are checked by default
});
processedIdes.add(ide.value);
initialValues.push(ide.value); // Pre-select configured IDEs
} else {
// Warn about unrecognized IDE (but don't fail)
console.log(chalk.yellow(`⚠️ Previously configured IDE '${ideValue}' is no longer available`));
}
}
if (configuredGroup.length > 0) {
groupedOptions['Previously Configured'] = configuredGroup;
}
}
// Add preferred tools (excluding already processed)
const remainingPreferred = preferredIdes.filter((ide) => !processedIdes.has(ide.value));
if (remainingPreferred.length > 0) {
ideChoices.push(new inquirer.Separator('── Recommended Tools ──'));
for (const ide of remainingPreferred) {
ideChoices.push({
name: `${ide.name}`,
value: ide.value,
checked: false,
});
groupedOptions['Recommended Tools'] = remainingPreferred.map((ide) => {
processedIdes.add(ide.value);
}
return {
label: `${ide.name}`,
value: ide.value,
};
});
}
// Add other tools (excluding already processed)
const remainingOther = otherIdes.filter((ide) => !processedIdes.has(ide.value));
if (remainingOther.length > 0) {
ideChoices.push(new inquirer.Separator('── Additional Tools ──'));
for (const ide of remainingOther) {
ideChoices.push({
name: ide.name,
value: ide.value,
checked: false,
});
}
groupedOptions['Additional Tools'] = remainingOther.map((ide) => ({
label: ide.name,
value: ide.value,
}));
}
let answers;
let selectedIdes = [];
let userConfirmedNoTools = false;
// Loop until user selects at least one tool OR explicitly confirms no tools
while (!userConfirmedNoTools) {
answers = await inquirer.prompt([
{
type: 'checkbox',
name: 'ides',
message: 'Select tools to configure:',
choices: ideChoices,
pageSize: 30,
},
]);
selectedIdes = await prompts.groupMultiselect({
message: `Select tools to configure ${chalk.dim('(↑/↓ navigate, SPACE select, ENTER confirm)')}:`,
options: groupedOptions,
initialValues: initialValues.length > 0 ? initialValues : undefined,
required: false,
});
// If tools were selected, we're done
if (answers.ides && answers.ides.length > 0) {
if (selectedIdes && selectedIdes.length > 0) {
break;
}
// Warn that no tools were selected - users often miss the spacebar requirement
console.log();
console.log(chalk.red.bold('⚠️ WARNING: No tools were selected!'));
console.log(chalk.red(' You must press SPACEBAR to select items, then ENTER to confirm.'));
console.log(chalk.red(' You must press SPACE to select items, then ENTER to confirm.'));
console.log(chalk.red(' Simply highlighting an item does NOT select it.'));
console.log();
const { goBack } = await inquirer.prompt([
{
type: 'confirm',
name: 'goBack',
message: chalk.yellow('Would you like to go back and select at least one tool?'),
default: true,
},
]);
const goBack = await prompts.confirm({
message: chalk.yellow('Would you like to go back and select at least one tool?'),
default: true,
});
if (goBack) {
// Re-display a message before looping back
@ -582,8 +538,8 @@ class UI {
}
return {
ides: answers.ides || [],
skipIde: !answers.ides || answers.ides.length === 0,
ides: selectedIdes || [],
skipIde: !selectedIdes || selectedIdes.length === 0,
};
}
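// Editor's sketch of the grouped options shape consumed above (labels hypothetical):
//   const groupedOptions = {
//     'Previously Configured': [{ label: 'Claude Code', value: 'claude-code' }],
//     'Recommended Tools': [{ label: 'Cursor', value: 'cursor' }],
//   };
//   const ides = await prompts.groupMultiselect({
//     message: 'Select tools to configure:',
//     options: groupedOptions,
//     initialValues: ['claude-code'], // previously configured entries start selected
//     required: false,
//   });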
@ -592,23 +548,17 @@ class UI {
* @returns {Object} Update configuration
*/
async promptUpdate() {
const inquirer = await getInquirer();
const answers = await inquirer.prompt([
{
type: 'confirm',
name: 'backupFirst',
message: 'Create backup before updating?',
default: true,
},
{
type: 'confirm',
name: 'preserveCustomizations',
message: 'Preserve local customizations?',
default: true,
},
]);
const backupFirst = await prompts.confirm({
message: 'Create backup before updating?',
default: true,
});
return answers;
const preserveCustomizations = await prompts.confirm({
message: 'Preserve local customizations?',
default: true,
});
return { backupFirst, preserveCustomizations };
}
/**
@ -617,27 +567,17 @@ class UI {
* @returns {Array} Selected modules
*/
async promptModules(modules) {
const inquirer = await getInquirer();
const choices = modules.map((mod) => ({
name: `${mod.name} - ${mod.description}`,
value: mod.id,
checked: false,
}));
const { selectedModules } = await inquirer.prompt([
{
type: 'checkbox',
name: 'selectedModules',
message: 'Select modules to add:',
choices,
validate: (answer) => {
if (answer.length === 0) {
return 'You must choose at least one module.';
}
return true;
},
},
]);
const selectedModules = await prompts.multiselect({
message: `Select modules to add ${chalk.dim('(↑/↓ navigate, SPACE select, ENTER confirm)')}:`,
choices,
required: true,
});
return selectedModules;
}
@ -649,17 +589,10 @@ class UI {
* @returns {boolean} User confirmation
*/
async confirm(message, defaultValue = false) {
const inquirer = await getInquirer();
const { confirmed } = await inquirer.prompt([
{
type: 'confirm',
name: 'confirmed',
message,
default: defaultValue,
},
]);
return confirmed;
return await prompts.confirm({
message,
default: defaultValue,
});
}
/**
@ -753,10 +686,9 @@ class UI {
* Get module choices for selection
* @param {Set} installedModuleIds - Currently installed module IDs
* @param {Object} customContentConfig - Custom content configuration
* @returns {Array} Module choices for inquirer
* @returns {Array} Module choices for prompt
*/
async getModuleChoices(installedModuleIds, customContentConfig = null) {
const inquirer = await getInquirer();
const moduleChoices = [];
const isNewInstallation = installedModuleIds.size === 0;
@ -811,9 +743,9 @@ class UI {
if (allCustomModules.length > 0) {
// Add separator for custom content, all custom modules, and official content separator
moduleChoices.push(
new inquirer.Separator('── Custom Content ──'),
new choiceUtils.Separator('── Custom Content ──'),
...allCustomModules,
new inquirer.Separator('── Official Content ──'),
new choiceUtils.Separator('── Official Content ──'),
);
}
@ -837,44 +769,43 @@ class UI {
* @returns {Array} Selected module IDs
*/
async selectModules(moduleChoices, defaultSelections = []) {
const inquirer = await getInquirer();
const moduleAnswer = await inquirer.prompt([
{
type: 'checkbox',
name: 'modules',
message: 'Select modules to install:',
choices: moduleChoices,
default: defaultSelections,
},
]);
// Mark choices as checked based on defaultSelections
const choicesWithDefaults = moduleChoices.map((choice) => ({
...choice,
checked: defaultSelections.includes(choice.value),
}));
const selected = moduleAnswer.modules || [];
const selected = await prompts.multiselect({
message: `Select modules to install ${chalk.dim('(↑/↓ navigate, SPACE select, ENTER confirm)')}:`,
choices: choicesWithDefaults,
required: false,
});
return selected;
return selected || [];
}
/**
* Prompt for directory selection
* @returns {Object} Directory answer from inquirer
* @returns {Object} Directory answer from prompt
*/
async promptForDirectory() {
const inquirer = await getInquirer();
return await inquirer.prompt([
{
type: 'input',
name: 'directory',
message: `Installation directory:`,
default: process.cwd(),
validate: async (input) => this.validateDirectory(input),
filter: (input) => {
// If empty, use the default
if (!input || input.trim() === '') {
return process.cwd();
}
return this.expandUserPath(input);
},
},
]);
// Use sync validation because @clack/prompts doesn't support async validate
const directory = await prompts.text({
message: 'Installation directory:',
default: process.cwd(),
placeholder: process.cwd(),
validate: (input) => this.validateDirectorySync(input),
});
// Apply filter logic
let filteredDir = directory;
if (!filteredDir || filteredDir.trim() === '') {
filteredDir = process.cwd();
} else {
filteredDir = this.expandUserPath(filteredDir);
}
return { directory: filteredDir };
}
/**
@ -915,45 +846,92 @@ class UI {
* @returns {boolean} Whether user confirmed
*/
async confirmDirectory(directory) {
const inquirer = await getInquirer();
const dirExists = await fs.pathExists(directory);
if (dirExists) {
const confirmAnswer = await inquirer.prompt([
{
type: 'confirm',
name: 'proceed',
message: `Install to this directory?`,
default: true,
},
]);
const proceed = await prompts.confirm({
message: 'Install to this directory?',
default: true,
});
if (!confirmAnswer.proceed) {
if (!proceed) {
console.log(chalk.yellow("\nLet's try again with a different path.\n"));
}
return confirmAnswer.proceed;
return proceed;
} else {
// Ask for confirmation to create the directory
const createConfirm = await inquirer.prompt([
{
type: 'confirm',
name: 'create',
message: `The directory '${directory}' doesn't exist. Would you like to create it?`,
default: false,
},
]);
const create = await prompts.confirm({
message: `The directory '${directory}' doesn't exist. Would you like to create it?`,
default: false,
});
if (!createConfirm.create) {
if (!create) {
console.log(chalk.yellow("\nLet's try again with a different path.\n"));
}
return createConfirm.create;
return create;
}
}
/**
* Validate directory path for installation
* Validate directory path for installation (sync version for clack prompts)
* @param {string} input - User input path
* @returns {string|undefined} Error message or undefined if valid
*/
validateDirectorySync(input) {
// Allow empty input to use the default
if (!input || input.trim() === '') {
return; // Empty means use default, undefined = valid for clack
}
let expandedPath;
try {
expandedPath = this.expandUserPath(input.trim());
} catch (error) {
return error.message;
}
// Check if the path exists
const pathExists = fs.pathExistsSync(expandedPath);
if (!pathExists) {
// Find the first existing parent directory
const existingParent = this.findExistingParentSync(expandedPath);
if (!existingParent) {
return 'Cannot create directory: no existing parent directory found';
}
// Check if the existing parent is writable
try {
fs.accessSync(existingParent, fs.constants.W_OK);
// Path doesn't exist but can be created - will prompt for confirmation later
return;
} catch {
// Provide a detailed error message explaining both issues
return `Directory '${expandedPath}' does not exist and cannot be created: parent directory '${existingParent}' is not writable`;
}
}
// If it exists, validate it's a directory and writable
const stat = fs.statSync(expandedPath);
if (!stat.isDirectory()) {
return `Path exists but is not a directory: ${expandedPath}`;
}
// Check write permissions
try {
fs.accessSync(expandedPath, fs.constants.W_OK);
} catch {
return `Directory is not writable: ${expandedPath}`;
}
return;
}
/**
* Validate directory path for installation (async version)
* @param {string} input - User input path
* @returns {string|true} Error message or true if valid
*/
@ -1009,7 +987,28 @@ class UI {
}
/**
* Find the first existing parent directory
* Find the first existing parent directory (sync version)
* @param {string} targetPath - The path to check
* @returns {string|null} The first existing parent directory, or null if none found
*/
findExistingParentSync(targetPath) {
let currentPath = path.resolve(targetPath);
// Walk up the directory tree until we find an existing directory
while (currentPath !== path.dirname(currentPath)) {
// Stop at root
const parent = path.dirname(currentPath);
if (fs.pathExistsSync(parent)) {
return parent;
}
currentPath = parent;
}
return null; // No existing parent found (shouldn't happen in practice)
}
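// Editor's trace (hypothetical path): for '/tmp/a/b/c' this checks /tmp/a/b, then
// /tmp/a, then /tmp, returning the first parent that exists, so the writability
// check in validateDirectorySync() runs against a real directory.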
/**
* Find the first existing parent directory (async version)
* @param {string} targetPath - The path to check
* @returns {string|null} The first existing parent directory, or null if none found
*/
@ -1071,7 +1070,7 @@ class UI {
* @sideeffects None - pure user input collection, no files written
* @edgecases Shows warning if user enables TTS but AgentVibes not detected
* @calledby promptInstall() during installation flow, after core config, before IDE selection
* @calls checkAgentVibesInstalled(), inquirer.prompt(), chalk.green/yellow/dim()
* @calls checkAgentVibesInstalled(), prompts.confirm(), chalk.green/yellow/dim()
*
* AI NOTE: This prompt is strategically positioned in installation flow:
* - AFTER core config (user_name, etc)
@ -1102,7 +1101,6 @@ class UI {
* - GitHub Issue: paulpreibisch/AgentVibes#36
*/
async promptAgentVibes(projectDir) {
const inquirer = await getInquirer();
CLIUtils.displaySection('🎤 Voice Features', 'Enable TTS for multi-agent conversations');
// Check if AgentVibes is already installed
@ -1114,23 +1112,19 @@ class UI {
console.log(chalk.dim(' AgentVibes not detected'));
}
const answers = await inquirer.prompt([
{
type: 'confirm',
name: 'enableTts',
message: 'Enable Agents to Speak Out loud (powered by Agent Vibes? Claude Code only currently)',
default: false, // Default to yes - recommended for best experience
},
]);
const enableTts = await prompts.confirm({
message: 'Enable agents to speak out loud (powered by AgentVibes; Claude Code only currently)',
default: false,
});
if (answers.enableTts && !agentVibesInstalled) {
if (enableTts && !agentVibesInstalled) {
console.log(chalk.yellow('\n ⚠️ AgentVibes not installed'));
console.log(chalk.dim(' Install AgentVibes separately to enable TTS:'));
console.log(chalk.dim(' https://github.com/paulpreibisch/AgentVibes\n'));
}
return {
enabled: answers.enableTts,
enabled: enableTts,
alreadyInstalled: agentVibesInstalled,
};
}
@ -1248,30 +1242,75 @@ class UI {
return existingInstall.ides || [];
}
/**
* Validate custom content path synchronously
* @param {string} input - User input path
* @returns {string|undefined} Error message or undefined if valid
*/
validateCustomContentPathSync(input) {
// Allow empty input to cancel
if (!input || input.trim() === '') {
return; // Allow empty to exit
}
try {
// Expand the path
const expandedPath = this.expandUserPath(input.trim());
// Check if path exists
if (!fs.pathExistsSync(expandedPath)) {
return 'Path does not exist';
}
// Check if it's a directory
const stat = fs.statSync(expandedPath);
if (!stat.isDirectory()) {
return 'Path must be a directory';
}
// Check for module.yaml in the root
const moduleYamlPath = path.join(expandedPath, 'module.yaml');
if (!fs.pathExistsSync(moduleYamlPath)) {
return 'Directory must contain a module.yaml file in the root';
}
// Try to parse the module.yaml to get the module ID
try {
const yaml = require('yaml');
const content = fs.readFileSync(moduleYamlPath, 'utf8');
const moduleData = yaml.parse(content);
if (!moduleData.code) {
return 'module.yaml must contain a "code" field for the module ID';
}
} catch (error) {
return 'Invalid module.yaml file: ' + error.message;
}
return; // Valid
} catch (error) {
return 'Error validating path: ' + error.message;
}
}
/**
* Prompt user for custom content source location
* @returns {Object} Custom content configuration
*/
async promptCustomContentSource() {
const inquirer = await getInquirer();
const customContentConfig = { hasCustomContent: true, sources: [] };
// Keep asking for more sources until user is done
while (true) {
// First ask if user wants to add another module or continue
if (customContentConfig.sources.length > 0) {
const { action } = await inquirer.prompt([
{
type: 'list',
name: 'action',
message: 'Would you like to:',
choices: [
{ name: 'Add another custom module', value: 'add' },
{ name: 'Continue with installation', value: 'continue' },
],
default: 'continue',
},
]);
const action = await prompts.select({
message: 'Would you like to:',
choices: [
{ name: 'Add another custom module', value: 'add' },
{ name: 'Continue with installation', value: 'continue' },
],
default: 'continue',
});
if (action === 'continue') {
break;
@ -1282,57 +1321,11 @@ class UI {
let isValid = false;
while (!isValid) {
const { path: inputPath } = await inquirer.prompt([
{
type: 'input',
name: 'path',
message: 'Enter the path to your custom content folder (or press Enter to cancel):',
validate: async (input) => {
// Allow empty input to cancel
if (!input || input.trim() === '') {
return true; // Allow empty to exit
}
try {
// Expand the path
const expandedPath = this.expandUserPath(input.trim());
// Check if path exists
if (!(await fs.pathExists(expandedPath))) {
return 'Path does not exist';
}
// Check if it's a directory
const stat = await fs.stat(expandedPath);
if (!stat.isDirectory()) {
return 'Path must be a directory';
}
// Check for module.yaml in the root
const moduleYamlPath = path.join(expandedPath, 'module.yaml');
if (!(await fs.pathExists(moduleYamlPath))) {
return 'Directory must contain a module.yaml file in the root';
}
// Try to parse the module.yaml to get the module ID
try {
const yaml = require('yaml');
const content = await fs.readFile(moduleYamlPath, 'utf8');
const moduleData = yaml.parse(content);
if (!moduleData.code) {
return 'module.yaml must contain a "code" field for the module ID';
}
} catch (error) {
return 'Invalid module.yaml file: ' + error.message;
}
return true;
} catch (error) {
return 'Error validating path: ' + error.message;
}
},
},
]);
// Use sync validation because @clack/prompts doesn't support async validate
const inputPath = await prompts.text({
message: 'Enter the path to your custom content folder (or press Enter to cancel):',
validate: (input) => this.validateCustomContentPathSync(input),
});
// If user pressed Enter without typing anything, exit the loop
if (!inputPath || inputPath.trim() === '') {
@ -1364,14 +1357,10 @@ class UI {
}
// Ask if user wants to add these to the installation
const { shouldInstall } = await inquirer.prompt([
{
type: 'confirm',
name: 'shouldInstall',
message: `Install ${customContentConfig.sources.length} custom module(s) now?`,
default: true,
},
]);
const shouldInstall = await prompts.confirm({
message: `Install ${customContentConfig.sources.length} custom module(s) now?`,
default: true,
});
if (shouldInstall) {
customContentConfig.selected = true;
@ -1391,7 +1380,6 @@ class UI {
* @returns {Object} Result with selected custom modules and custom content config
*/
async handleCustomModulesInModifyFlow(directory, selectedModules) {
const inquirer = await getInquirer();
// Get existing installation to find custom modules
const { existingInstall } = await this.getExistingInstallation(directory);
@ -1451,16 +1439,11 @@ class UI {
choices.push({ name: 'Add new custom modules', value: 'add' }, { name: 'Cancel (no custom modules)', value: 'cancel' });
}
const { customAction } = await inquirer.prompt([
{
type: 'list',
name: 'customAction',
message:
cachedCustomModules.length > 0 ? 'What would you like to do with custom modules?' : 'Would you like to add custom modules?',
choices: choices,
default: cachedCustomModules.length > 0 ? 'keep' : 'add',
},
]);
const customAction = await prompts.select({
message: cachedCustomModules.length > 0 ? 'What would you like to do with custom modules?' : 'Would you like to add custom modules?',
choices: choices,
default: cachedCustomModules.length > 0 ? 'keep' : 'add',
});
switch (customAction) {
case 'keep': {
@ -1472,21 +1455,18 @@ class UI {
case 'select': {
// Let user choose which to keep
const choices = cachedCustomModules.map((m) => ({
const selectChoices = cachedCustomModules.map((m) => ({
name: `${m.name} ${chalk.gray(`(${m.id})`)}`,
value: m.id,
checked: m.checked,
}));
const { keepModules } = await inquirer.prompt([
{
type: 'checkbox',
name: 'keepModules',
message: 'Select custom modules to keep:',
choices: choices,
default: cachedCustomModules.filter((m) => m.checked).map((m) => m.id),
},
]);
result.selectedCustomModules = keepModules;
const keepModules = await prompts.multiselect({
message: `Select custom modules to keep ${chalk.dim('(↑/↓ navigate, SPACE select, ENTER confirm)')}:`,
choices: selectChoices,
required: false,
});
result.selectedCustomModules = keepModules || [];
break;
}
@ -1586,7 +1566,6 @@ class UI {
* @returns {Promise<boolean>} True if user wants to proceed, false if they cancel
*/
async showOldAlphaVersionWarning(installedVersion, currentVersion, bmadFolderName) {
const inquirer = await getInquirer();
const versionInfo = this.checkAlphaVersionAge(installedVersion, currentVersion);
// Also warn if version is unknown or can't be parsed (legacy/unsupported)
@ -1627,26 +1606,20 @@ class UI {
console.log(chalk.yellow('─'.repeat(80)));
console.log('');
const { proceed } = await inquirer.prompt([
{
type: 'list',
name: 'proceed',
message: 'What would you like to do?',
choices: [
{
name: 'Proceed with update anyway (may have issues)',
value: 'proceed',
short: 'Proceed with update',
},
{
name: 'Cancel (recommended - do a fresh install instead)',
value: 'cancel',
short: 'Cancel installation',
},
],
default: 'cancel',
},
]);
const proceed = await prompts.select({
message: 'What would you like to do?',
choices: [
{
name: 'Proceed with update anyway (may have issues)',
value: 'proceed',
},
{
name: 'Cancel (recommended - do a fresh install instead)',
value: 'cancel',
},
],
default: 'cancel',
});
if (proceed === 'cancel') {
console.log('');