Compare commits

...

15 Commits

Author SHA1 Message Date
shindo107 4f9389a95f
Merge 02f2955c09 into 2b7f7ff421 2026-01-15 06:38:56 +00:00
Brian Madison 2b7f7ff421 minor updates to installer multiselects 2026-01-14 23:48:50 -06:00
Brian Madison 3360666c2a remove hard inclusion of AV from installer, to replace with module soon 2026-01-14 23:04:19 -06:00
Nwokoma Chukwuma U. 274dea16fa
Fix YAML indentation in kilo.js customInstructions field (#1291)
Co-authored-by: Brian <bmadcode@gmail.com>
2026-01-14 21:26:10 -06:00
Brian 02f2955c09
Merge branch 'main' into feat/bug-tracking-workflow 2026-01-15 11:19:21 +08:00
Kevin Heidt dcd581c84a
Fix glob pattern to use forward slashes (#1241)
Normalize source directory path for glob pattern compatibility.

Reviewed-by: Alex Verkhovsky <alexey.verkhovsky@gmail.com>
2026-01-14 21:16:23 -06:00
Murat K Ozcan 6d84a60a78
docs: tea entry points and resume tip (#1246)
Co-authored-by: Brian <bmadcode@gmail.com>
2026-01-14 21:13:48 -06:00
Eduard Voiculescu 59e1b7067c
remove remember the users name is {user_name}, it is already present in the activation-steps.txt (#1315) 2026-01-14 21:04:43 -06:00
sjennings 1d8df63ac5
feat(bmgd): Add E2E testing methodology and scaffold workflow (#1322)
* feat(bmgd): Add E2E testing methodology and scaffold workflow

- Add comprehensive e2e-testing.md knowledge fragment
- Add e2e-scaffold workflow for infrastructure generation
- Update qa-index.csv with e2e-testing fragment reference
- Update game-qa.agent.yaml with ES trigger
- Update test-design and automate instructions with E2E guidance
- Update unity-testing.md with E2E section reference

* fix(bmgd): improve E2E testing infrastructure robustness

- Add WaitForValueApprox overloads for float/double comparisons
- Fix assembly definition to use precompiledReferences for test runners
- Fix CaptureOnFailure to yield before screenshot capture (main thread)
- Add error handling to test file cleanup with try/catch
- Fix ClickButton to use FindObjectsByType and check scene.isLoaded
- Add engine-specific output paths (Unity/Unreal/Godot) to workflow
- Fix knowledge_fragments paths to use correct relative paths

* feat(bmgd): add E2E testing support for Godot and Unreal

Godot:
- Add C# testing with xUnit/NSubstitute alongside GDScript GUT
- Add E2E infrastructure: GameE2ETestFixture, ScenarioBuilder,
  InputSimulator, AsyncAssert (all GDScript)
- Add example E2E tests and quick checklist

Unreal:
- Add E2E infrastructure extending AFunctionalTest
- Add GameE2ETestBase, ScenarioBuilder, InputSimulator classes
- Add AsyncTestHelpers with latent commands and macros
- Add example E2E tests for combat and turn cycle
- Add CLI commands for running E2E tests

---------

Co-authored-by: Scott Jennings <scott.jennings+CIGINT@cloudimperiumgames.com>
Co-authored-by: Brian <bmadcode@gmail.com>
2026-01-14 20:53:40 -06:00
Darren Podolak 49d284179a Merge upstream/main into feat/bug-tracking-workflow 2026-01-08 22:09:28 -05:00
Darren Podolak 594d9854eb updating file path for readme 2025-12-31 09:34:58 -05:00
Darren Podolak 9565bef286 Adding bug-tracking-workflow README file for reference 2025-12-31 09:34:58 -05:00
Darren Podolak 54ab3f13d3 chore: Add fork docs gitignore and improve implement workflow
- Add BUG-TRACKING.md to gitignore for fork-specific documentation
- Improve implement workflow doc update tasks with return instructions
  - PRD, architecture, and UX update tasks now remind to return to /implement
  - Ensures implementation proceeds after doc updates complete

🤖 Generated with [Claude Code](https://claude.ai/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-31 09:34:58 -05:00
Darren Podolak c3cf0c1fc6 refactor(bmm): Convert bug-tracking to progressive disclosure workflow
- Replace monolithic instructions.md with step-based micro-file architecture
- Remove workflow.yaml in favor of workflow.md entry point (exec: pattern)
- Extract shared sync-bug-tracking.xml task to core/tasks for reuse
- Integrate bug sync into code-review and story-done workflows
- Add main_config to workflow.md frontmatter per convention

Follows BMB and phase 1-3 progressive disclosure conventions.

🤖 Generated with [Claude Code](https://claude.ai/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-31 09:34:58 -05:00
Darren Podolak 5247468d98 feat(bmm): Add bug-tracking workflows - built-in Jira-lite for AI agents
Adds three human-in-the-loop workflows for tracking bugs and features:

- bug-tracking (triage): Converts informal bugs.md → structured bugs.yaml
- implement: Executes fixes with user confirmation at each step
- verify: Closes items after human testing confirms resolution

Key features:
- Built-in Jira-lite: No external tools needed for issue tracking
- Human-in-the-loop: User confirms routing, approach, and verification
- Production API sync: Framework for fetching bug reports from app database
- Dual-file system: bugs.md (human input) + bugs.yaml (agent metadata)
- Severity/complexity routing matrix with auto-routing logic
- Documentation impact assessment (PRD/Architecture/UX triggers)

Workflow integrations:
- sprint-planning: Loads bugs.yaml, tracks feature-to-story mappings
- sprint-status: Shows bug/feature counts, recommends verify for pending items
- story-done: Syncs related bugs/features to [IMPLEMENTED] when story completes
- retrospective: Closes epic-linked bugs/features when epic is marked done

Reference implementation includes:
- Database schema for in-app bug reporting (Drizzle ORM example)
- API endpoints for sync workflow (GET pending, POST mark-synced)
- UI component examples (Svelte 5, React)
2025-12-31 09:34:58 -05:00
58 changed files with 7800 additions and 566 deletions

3
.gitignore vendored
View File

@@ -79,3 +79,6 @@ bmad-custom-src/
website/.astro/
website/dist/
build/
# Fork-specific documentation (not committed)
BUG-TRACKING.md

View File

@@ -11,7 +11,6 @@ ignores:
- .claude/**
- .roo/**
- .codex/**
- .agentvibes/**
- .kiro/**
- sample-project/**
- test-project-install/**

View File

@@ -0,0 +1,536 @@
# Bug Tracking Workflow Wireframe
## Quick Reference
```
COMMANDS:
/triage - Triage new bugs from bugs.md
/implement bug-NNN - Implement a bug fix
/implement feature-N - Implement a feature
/verify - List pending verification
/verify bug-NNN - Verify and close specific bug
/verify all - Batch verify all
FILES:
docs/bugs.md - Human-readable bug tracking
docs/bugs.yaml - Agent-readable metadata
SEVERITY → COMPLEXITY → WORKFLOW ROUTING:
┌──────────┬─────────┬─────────┬─────────┬─────────┐
│ │ TRIVIAL │ SMALL │ MEDIUM │ COMPLEX │
├──────────┼─────────┼─────────┼─────────┼─────────┤
│ CRITICAL │ correct-course (any complexity) │
├──────────┼─────────┼─────────┼─────────┼─────────┤
│ HIGH │direct-fx│tech-spec│corr-crs │corr-crs │
├──────────┼─────────┼─────────┼─────────┼─────────┤
│ MEDIUM │direct-fx│tech-spec│corr-crs │corr-crs │
├──────────┼─────────┼─────────┼─────────┼─────────┤
│ LOW │direct-fx│ backlog │ backlog │ backlog │
└──────────┴─────────┴─────────┴─────────┴─────────┘
SEVERITY:
critical - Core broken, crashes, data loss
high - Major feature blocked, workaround exists
medium - Partial breakage, minor impact
low - Cosmetic, edge case
COMPLEXITY:
trivial - One-liner, minimal change
small - Single file change
medium - Multi-file change
complex - Architectural change
STATUS FLOW:
reported → triaged → [routed] → in-progress → fixed/implemented → verified → closed
STATUS VALUES:
triaged - Analyzed, routed, awaiting implementation
routed - Sent to tech-spec or correct-course workflow
in-progress - Developer actively working
fixed - Code complete, awaiting verification (bugs)
implemented - Code complete, awaiting verification (features)
closed - Verified and closed
backlog - Deferred to future sprint
blocked - Cannot proceed until issue resolved
```
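The matrix above is what /triage applies in Step 4 when it auto-routes a bug. As a rough sketch only (the function and type names below are illustrative, not part of the workflow files), the lookup reduces to:

```typescript
// Hypothetical sketch of the severity/complexity routing matrix shown above.
type Severity = "critical" | "high" | "medium" | "low";
type Complexity = "trivial" | "small" | "medium" | "complex";
type Workflow = "direct-fix" | "tech-spec" | "correct-course" | "backlog";

function routeBug(severity: Severity, complexity: Complexity): Workflow {
  if (severity === "critical") return "correct-course"; // any complexity
  if (severity === "low") return complexity === "trivial" ? "direct-fix" : "backlog";
  // high and medium share a row: trivial -> direct-fix, small -> tech-spec, otherwise correct-course
  if (complexity === "trivial") return "direct-fix";
  if (complexity === "small") return "tech-spec";
  return "correct-course";
}

console.log(routeBug("high", "small")); // "tech-spec"
```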
---
## Part 1: System Architecture
### System Overview
```
INPUT SOURCES
+-------------------+ +-------------------+ +-------------------+
| IN-APP MODAL | | MANUAL ENTRY | | EXTERNAL ISSUE |
| (Optional API) | | (bugs.md) | | TRACKER IMPORT |
+--------+----------+ +--------+----------+ +--------+----------+
| | |
+------------+------------+-------------------------+
|
v
+============================+
| /triage (WORKFLOW) |
+============================+
|
+---------------+---------------+---------------+
| | | |
v v v v
direct-fix tech-spec correct-course backlog
| | | |
v v v v
/implement /tech-spec /correct-course (deferred)
| | |
+---------------+---------------+
|
v
/verify → CLOSED
```
### File Architecture
```
{project-root}/
docs/
bugs.md <-- User-facing: informal bug reports & tracking
bugs.yaml <-- Agent-facing: structured metadata database
epics.md <-- Context: for mapping bugs to stories
_bmad/bmm/
config.yaml <-- Project configuration
workflows/
bug-tracking/ <-- Triage workflow files
implement/ <-- Implementation workflow
verify/ <-- Verification workflow
```
### bugs.md Structure
```markdown
# Bug Tracking - {project_name}
# manual input
## Bug: Login fails on iOS Safari
Description of the bug...
Reported by: User Name
Date: 2025-01-15
- **Crash on startup (Android)**: App crashes immediately. CRITICAL.
1. Form validation missing - No validation on email field
---
# Tracked Bugs
### bug-001: Login fails on iOS Safari
Brief description...
- **Severity:** high
- **Complexity:** small
- **Workflow:** tech-spec
- **Related:** story-2-3
**Notes:** Triage reasoning...
---
# Tracked Feature Requests
### feature-001: Dark mode toggle
Brief description...
- **Priority:** medium
- **Complexity:** medium
- **Workflow:** tech-spec
---
# Fixed Bugs
[IMPLEMENTED] bug-003: Header alignment [Sev: low, Fixed: 2025-01-18, Verified: pending]
- Fix: Adjusted flexbox styling
- File(s): src/components/Header.tsx
bug-002: Form submission error [Sev: medium, Fixed: 2025-01-15, Verified: 2025-01-16, CLOSED]
- Fix: Added error boundary
---
# Implemented Features
[IMPLEMENTED] feature-002: Export to CSV [Impl: 2025-01-20, Verified: pending]
- Files: src/export.ts, src/utils/csv.ts
```
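bugs.yaml itself is not shown in this wireframe, but the fields referenced above and in the tracked-bug entries suggest roughly the following per-entry shape. This is an inferred, hypothetical description, not the actual schema:

```typescript
// Hypothetical shape of a bugs.yaml entry, inferred from the fields this
// wireframe references; the real schema lives in the workflow files.
type Status =
  | "reported" | "triaged" | "routed" | "in-progress"
  | "fixed" | "implemented" | "verified" | "closed"
  | "backlog" | "blocked";

interface BugEntry {
  id: string;                       // "bug-001" or "feature-001"
  title: string;
  severity?: "critical" | "high" | "medium" | "low"; // bugs
  priority?: "high" | "medium" | "low";              // feature requests
  complexity: "trivial" | "small" | "medium" | "complex";
  workflow: "direct-fix" | "tech-spec" | "correct-course" | "backlog";
  status: Status;
  related_story?: string;           // e.g. "story-2-3"
  doc_impact?: ("prd" | "architecture" | "ux")[];
  sprint_stories?: string[];        // multi-story features
  fixed_date?: string;
  implemented_date?: string;
  verified_date?: string;
  assigned_to?: string;
  notes?: string;
}
```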
---
## Part 2: Workflow Operations
### Slash Command Reference
| Command | Description | When to Use |
|---------|-------------|-------------|
| `/triage` | Main workflow - triage user-reported bugs | When new bugs are in bugs.md |
| `/implement bug-NNN` | Implement a specific bug fix | After triage, routed for direct-fix |
| `/implement feature-NNN` | Implement a feature request | After feature is triaged |
| `/verify` | List all pending verification | After implementation, before closing |
| `/verify bug-NNN` | Verify and close specific bug | After testing confirms fix works |
| `/verify all` | Batch verify all pending items | Bulk close multiple fixes |
### /triage Workflow
```
USER INVOKES: /triage
|
v
+---------------------------+
| STEP 1: INITIALIZATION |
+---------------------------+
| - Load config.yaml |
| - Check for bugs.yaml |
| - Detect existing session |
+------------+--------------+
|
+--------+--------+
| |
v v
+-------------+ +-------------+
| Has pending | | Fresh start |
| triaged work| +------+------+
+------+------+ |
| v
v +-------------+
+-------------+ | Scan manual |
| Show status | | input section|
| [T/I/V/L/Q] | +------+------+
+-------------+ |
v
+-------------+
| [S/C/Q] |
| Sync/Cont/Q |
+------+------+
|
+---------------+---------------+
v v v
[S] API Sync [C] Continue [Q] Quit
+---------------------------+
| STEP 2: API SYNC | (Optional - if [S] selected)
+---------------------------+
| GET /api/bug-reports/pending
| - Fetch, format, insert to bugs.md
| - POST /mark-synced
+---------------------------+
+---------------------------+
| STEP 3: PARSE |
+---------------------------+
| Read "# manual input" only
| - Parse headers, bullets, numbered lists
| - Extract: title, desc, reporter, platform
| - Compare with bugs.yaml (NEW vs EXISTING)
+------------+--------------+
|
+--------+--------+
v v
No new bugs {N} new bugs
[HALT] [C] Continue
|
v
+---------------------------+
| STEP 4: TRIAGE (per bug) |
+---------------------------+
| FOR EACH NEW BUG:
| 1. Generate bug-NNN ID
| 2. Assess SEVERITY (critical|high|med|low)
| 3. Assess COMPLEXITY (trivial|small|med|complex)
| 4. Apply ROUTING MATRIX → workflow
| 5. Map to story/epic if applicable
| 6. Assess DOC IMPACT (prd|architecture|ux)
| 7. Add triage notes
| 8. Present: [A]ccept/[M]odify/[S]kip/[N]ext
+---------------------------+
|
v (after all bugs)
+---------------------------+
| STEP 5: UPDATE FILES |
+---------------------------+
| bugs.yaml: Add entries, update stats
| bugs.md: Remove from manual input,
| Add to Tracked Bugs/Features
+---------------------------+
|
v
+---------------------------+
| STEP 6: COMPLETE |
+---------------------------+
| Show summary + next steps:
| /implement bug-NNN
| /verify bug-NNN
+---------------------------+
```
### /implement Workflow
```
USER INVOKES: /implement bug-NNN
|
v
+-------------------------------+
| STEP 1-2: Load Context |
+-------------------------------+
| - Parse ID (bug-NNN/feature-NNN)
| - Load from bugs.yaml
| - Check status (halt if backlog/blocked/deferred)
+---------------+---------------+
|
v
+-------------------------------+
| STEP 3: Check Workflow Route |
+-------------------------------+
|
+-----------+-----------+-----------+
v v v v
correct- tech-spec direct-fix ambiguous
course |
| | | Apply Matrix
v v |
[ROUTES TO [ROUTES TO |
/correct- /tech-spec |
course] workflow] |
| | |
v v v
Creates Creates +--------+
story spec | STEP 4:|
| Confirm|
+---+----+
|
v
+---------------+
| STEP 5: |
| IMPLEMENT |
+---------------+
| Dev Agent: |
| - Read files |
| - Make changes|
| - Minimal fix |
+-------+-------+
|
v
+---------------+
| STEP 6: Check |
| npm run check |
+-------+-------+
|
v
+---------------+
| STEP 7-8: |
| Update Files |
+---------------+
| bugs.yaml: |
| status: fixed|
| bugs.md: |
| [IMPLEMENTED]|
+-------+-------+
|
v
+---------------+
| STEP 9: |
| "Run /verify" |
+---------------+
```
### /verify Workflow
```
USER INVOKES: /verify [bug-NNN]
|
+-----------+-----------+
v v
+---------------+ +---------------+
| No ID given | | ID provided |
+-------+-------+ +-------+-------+
| |
v |
+---------------+ |
| List pending | |
| [IMPLEMENTED] | |
| items | |
+-------+-------+ |
| |
+-------+---------------+
|
v
+-------------------------------+
| STEP 2: Load & Validate |
+-------------------------------+
| - Verify status: fixed/implemented
| - Check file sync
+---------------+---------------+
|
v
+-------------------------------+
| STEP 3: Confirm Verification |
+-------------------------------+
| Show: Title, type, date, files
| "Has this been tested?"
| [yes | no | skip]
+---------------+---------------+
|
+-----------+-----------+
v v v
+-------+ +-------+ +-------+
| YES | | NO | | SKIP |
+---+---+ +---+---+ +---+---+
| | |
v v v
Step 4 Add note Next item
"rework"
+-------------------------------+
| STEP 4-5: Update Files |
+-------------------------------+
| bugs.yaml: status: closed,
| verified_date
| bugs.md: Remove [IMPLEMENTED],
| Add CLOSED tag
+-------------------------------+
|
v
+-------------------------------+
| STEP 6: Summary |
| "bug-NNN VERIFIED and CLOSED" |
+-------------------------------+
```
---
## Part 3: Routing & Agent Delegation
### Workflow Routing by Type
| Workflow | Trigger Conditions | Pre-Implement Phase | Implementation Phase |
|----------|-------------------|---------------------|---------------------|
| **direct-fix** | high/med + trivial | None | Dev Agent in /implement Step 5 |
| **tech-spec** | high/med + small | Architect creates spec | /dev-story per spec |
| **correct-course** | critical (any) OR med/complex+ OR doc_impact | PM→Architect→SM create story | /dev-story per story |
| **backlog** | low + small+ | None (deferred) | Awaits sprint promotion |
### Agent Responsibilities
```
/triage
|
v
+------------------------+
| SM AGENT (Scrum |
| Master Facilitator) |
+------------------------+
| - Runs triage workflow |
| - Assesses severity |
| - Routes to workflows |
+-----------+------------+
|
+-------------------+-------------------+
v v v
+------------+ +------------+ +------------+
| Direct-Fix | | Tech-Spec | | Correct- |
+-----+------+ +-----+------+ | Course |
| | +-----+------+
v v |
+------------+ +------------+ v
| DEV AGENT | | ARCHITECT | +------------+
| /implement | | /tech-spec | | PM AGENT |
| Step 5 | +-----+------+ | + ARCHITECT|
+------------+ | | + SM |
v +-----+------+
+------------+ |
| DEV AGENT | v
| /dev-story | +------------+
+------------+ | DEV AGENT |
| /dev-story |
+------------+
```
### Doc Impact Routing
When `doc_impact` flags are detected during /implement:
| Flag | Agent | Action |
|------|-------|--------|
| PRD | PM Agent | Update PRD.md |
| Architecture | Architect Agent | Update architecture.md |
| UX | UX Designer Agent | Update UX specs |
User is prompted: `[update-docs-first | proceed-anyway | cancel]`
---
## Part 4: State & Lifecycle
### File State Transitions
```
═══════════════════════════════════════════════════════════════════════════════
DIRECT-FIX TECH-SPEC CORRECT-COURSE BACKLOG
═══════════════════════════════════════════════════════════════════════════════
ENTRY # manual input # manual input # manual input # manual input
(informal text) (informal text) (informal text) (informal text)
│ │ │ │
▼ ▼ ▼ ▼
─────────────────────────────────────────────────────────────────────────────────
TRIAGE # Tracked Bugs # Tracked Bugs # Tracked Bugs # Tracked Bugs
bug-NNN bug-NNN bug-NNN bug-NNN
wf: direct-fix wf: tech-spec wf: correct-crs wf: backlog
│ │ │ │
▼ ▼ ▼ │
─────────────────────────────────────────────────────────────────────────────────
ROUTE (skip) /tech-spec /correct-course (waiting)
creates spec creates story │
│ │ │ │
▼ ▼ ▼ │
─────────────────────────────────────────────────────────────────────────────────
CODE /implement /dev-story /dev-story (waiting)
Step 5 per spec per story │
│ │ │ │
▼ ▼ ▼ │
─────────────────────────────────────────────────────────────────────────────────
IMPL # Fixed Bugs # Fixed Bugs # Fixed Bugs (unchanged)
[IMPLEMENTED] [IMPLEMENTED] [IMPLEMENTED] │
bug-NNN bug-NNN bug-NNN │
│ │ │ │
▼ ▼ ▼ │
─────────────────────────────────────────────────────────────────────────────────
VERIFY /verify /verify /verify (waiting)
bug-NNN bug-NNN bug-NNN │
│ │ │ │
▼ ▼ ▼ ▼
─────────────────────────────────────────────────────────────────────────────────
DONE CLOSED ✓ CLOSED ✓ CLOSED ✓ WAITING ◷
═══════════════════════════════════════════════════════════════════════════════
FILE STATE SUMMARY:
┌──────────┬─────────────────────────────┬──────────────────────────────────┐
│ STAGE │ bugs.md │ bugs.yaml │
├──────────┼─────────────────────────────┼──────────────────────────────────┤
│ Entry │ # manual input │ (no entry) │
├──────────┼─────────────────────────────┼──────────────────────────────────┤
│ Triage │ → # Tracked Bugs/Features │ status: triaged + metadata │
├──────────┼─────────────────────────────┼──────────────────────────────────┤
│ Implement│ → # Fixed [IMPLEMENTED] │ status: fixed/implemented │
├──────────┼─────────────────────────────┼──────────────────────────────────┤
│ Verify │ [IMPLEMENTED] → CLOSED │ status: closed + verified_date │
└──────────┴─────────────────────────────┴──────────────────────────────────┘
```
---
## Appendix: Optional Extensions
### In-App Bug Reporting API
Optional integration for apps with built-in bug reporting UI:
1. **User submits** via in-app modal → `POST /api/bug-reports`
2. **Database stores** with `status: 'new'`
3. **During /triage Step 2** (if [S]ync selected):
- `GET /api/bug-reports/pending` fetches new reports
- Formats as markdown, inserts to `# manual input`
- `POST /api/bug-reports/mark-synced` prevents re-fetch
This is optional - manual entry to bugs.md works without any API.
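
As a loose illustration of that sync step, the fetch-format-acknowledge loop might look like the sketch below. The endpoint paths come from this document; the base URL, response fields, and error handling are assumptions:

```typescript
// Minimal sketch of /triage Step 2 (API sync), assuming a JSON response shape.
interface PendingBugReport {
  id: number;
  title: string;
  description: string;
  reportedBy?: string;
  createdAt: string;
}

async function syncPendingBugReports(baseUrl: string): Promise<string> {
  const res = await fetch(`${baseUrl}/api/bug-reports/pending`);
  if (!res.ok) throw new Error(`Sync failed: ${res.status}`);
  const reports: PendingBugReport[] = await res.json();

  // Format each report as a markdown block for the "# manual input" section of bugs.md
  const markdown = reports
    .map((r) => `## Bug: ${r.title}\n${r.description}\nReported by: ${r.reportedBy ?? "in-app"}\nDate: ${r.createdAt}`)
    .join("\n\n");

  // Acknowledge the fetched reports so they are not re-imported next time
  await fetch(`${baseUrl}/api/bug-reports/mark-synced`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ ids: reports.map((r) => r.id) }),
  });

  return markdown;
}
```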

View File

@@ -20,10 +20,13 @@ This flexibility enables:
## Categories
- [Categories](#categories)
- [Custom Stand-Alone Modules](#custom-stand-alone-modules)
- [Custom Add-On Modules](#custom-add-on-modules)
- [Custom Global Modules](#custom-global-modules)
- [Custom Agents](#custom-agents)
- [BMad Tiny Agents](#bmad-tiny-agents)
- [Simple and Expert Agents](#simple-and-expert-agents)
- [Custom Workflows](#custom-workflows)
## Custom Stand-Alone Modules
@@ -59,7 +62,6 @@ Similar to Custom Stand-Alone Modules, but designed to add functionality that ap
Examples include:
- The current TTS (Text-to-Speech) functionality for Claude, which will soon be converted to a global module
- The core module, which is always installed and provides all agents with party mode and advanced elicitation capabilities
- Installation and update tools that work with any BMad method configuration

View File

@ -66,19 +66,18 @@ Type "exit" or "done" to conclude the session. Participating agents will say per
## Example Party Compositions ## Example Party Compositions
| Topic | Typical Agents | | Topic | Typical Agents |
|-------|---------------| | ---------------------- | ------------------------------------------------------------- |
| **Product Strategy** | PM + Innovation Strategist (CIS) + Analyst | | **Product Strategy** | PM + Innovation Strategist (CIS) + Analyst |
| **Technical Design** | Architect + Creative Problem Solver (CIS) + Game Architect | | **Technical Design** | Architect + Creative Problem Solver (CIS) + Game Architect |
| **User Experience** | UX Designer + Design Thinking Coach (CIS) + Storyteller (CIS) | | **User Experience** | UX Designer + Design Thinking Coach (CIS) + Storyteller (CIS) |
| **Quality Assessment** | TEA + DEV + Architect | | **Quality Assessment** | TEA + DEV + Architect |
## Key Features ## Key Features
- **Intelligent agent selection** — Selects based on expertise needed - **Intelligent agent selection** — Selects based on expertise needed
- **Authentic personalities** — Each agent maintains their unique voice - **Authentic personalities** — Each agent maintains their unique voice
- **Natural cross-talk** — Agents reference and build on each other - **Natural cross-talk** — Agents reference and build on each other
- **Optional TTS** — Voice configurations for each agent
- **Graceful exit** — Personalized farewells - **Graceful exit** — Personalized farewells
## Tips ## Tips

6
package-lock.json generated
View File

@@ -9,6 +9,7 @@
"version": "6.0.0-alpha.23",
"license": "MIT",
"dependencies": {
"@clack/prompts": "^0.11.0",
"@kayvan/markdown-tree-parser": "^1.6.1",
"boxen": "^5.1.2",
"chalk": "^4.1.2",
@@ -33,7 +34,6 @@
"devDependencies": {
"@astrojs/sitemap": "^3.6.0",
"@astrojs/starlight": "^0.37.0",
"@clack/prompts": "^0.11.0",
"@eslint/js": "^9.33.0",
"archiver": "^7.0.1",
"astro": "^5.16.0",
@@ -759,7 +759,6 @@
"version": "0.5.0",
"resolved": "https://registry.npmjs.org/@clack/core/-/core-0.5.0.tgz",
"integrity": "sha512-p3y0FIOwaYRUPRcMO7+dlmLh8PSRcrjuTndsiA0WAFbWES0mLZlrjVoBRZ9DzkPFJZG6KGkJmoEAY0ZcVWTkow==",
"dev": true,
"license": "MIT",
"dependencies": {
"picocolors": "^1.0.0",
@@ -770,7 +769,6 @@
"version": "0.11.0",
"resolved": "https://registry.npmjs.org/@clack/prompts/-/prompts-0.11.0.tgz",
"integrity": "sha512-pMN5FcrEw9hUkZA4f+zLlzivQSeQf5dRGJjSUbvVYDLvpKCdQx5OaknvKzgbtXOizhP+SJJJjqEbOe55uKKfAw==",
"dev": true,
"license": "MIT",
"dependencies": {
"@clack/core": "0.5.0",
@@ -12151,7 +12149,6 @@
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/picocolors/-/picocolors-1.1.1.tgz",
"integrity": "sha512-xceH2snhtb5M9liqDsmEw56le376mTZkEX/jEb/RxNFyegNul7eNslCXP9FDj/Lcu0X8KEyMceP2ntpaHrDEVA==",
"dev": true,
"license": "ISC"
},
"node_modules/picomatch": {
@@ -13398,7 +13395,6 @@
"version": "1.0.5",
"resolved": "https://registry.npmjs.org/sisteransi/-/sisteransi-1.0.5.tgz",
"integrity": "sha512-bLGGlR1QxBcynn2d5YmDX4MGjlZvy2MRBDRNHLJ8VI6l6+9FUiyTFNJ0IveOSP0bcXgVDPRcfGqA0pjaqUpfVg==",
"dev": true,
"license": "MIT"
},
"node_modules/sitemap": {

View File

@@ -18,7 +18,6 @@ agent:
critical_actions:
- "Load into memory {project-root}/_bmad/core/config.yaml and set variable project_name, output_folder, user_name, communication_language"
- "Remember the users name is {user_name}"
- "ALWAYS communicate in {communication_language}"
menu:

View File

@@ -0,0 +1,123 @@
<task id="{bmad_folder}/core/tasks/sync-bug-tracking.xml" name="Sync Bug Tracking">
<objective>Sync bugs.yaml and bugs.md when a story is marked done, updating related bugs to "fixed" and features to "implemented"</objective>
<description>
This task is invoked by workflows (story-done, code-review) after a story is marked done.
It searches bugs.yaml for bugs/features linked to the completed story and updates their status.
For multi-story features, it only marks "implemented" when ALL linked stories are done.
</description>
<inputs>
<input name="story_key" required="true">The story key (e.g., "3-7-checkout-from-club-detail-page")</input>
<input name="story_id" required="false">The story ID (e.g., "3.7") - used for related_story matching</input>
<input name="bugs_yaml" required="true">Path to bugs.yaml file</input>
<input name="bugs_md" required="true">Path to bugs.md file</input>
<input name="sprint_status" required="true">Path to sprint-status.yaml file</input>
<input name="date" required="true">Current date for timestamps</input>
</inputs>
<outputs>
<output name="bugs_updated">List of bug IDs marked as fixed</output>
<output name="features_updated">List of feature IDs marked as implemented</output>
<output name="features_pending">List of feature IDs with incomplete stories</output>
</outputs>
<flow>
<step n="1" goal="Load bugs.yaml and check for existence">
<action>Load {bugs_yaml} if it exists</action>
<check if="bugs.yaml does not exist">
<action>Set bugs_updated = [], features_updated = [], features_pending = []</action>
<action>Return early - no bug tracking to sync</action>
</check>
</step>
<step n="2" goal="Find matching bugs and features using multiple methods">
<action>Initialize: bugs_updated = [], features_updated = [], features_pending = []</action>
<action>Search for entries matching this story using ALL THREE methods:</action>
<action>1. Check sprint-status.yaml for comment "# Source: bugs.yaml/feature-XXX" or "# Source: bugs.yaml/bug-XXX" on the {story_key} line - this is the MOST RELIABLE method</action>
<action>2. Check related_story field in bugs.yaml matching {story_id} or {story_key}</action>
<action>3. Check sprint_stories arrays in feature_requests for entries containing {story_key}</action>
<critical>PRIORITY: Use sprint-status comment source if present - it's explicit and unambiguous</critical>
</step>
<step n="3" goal="Update matching bugs">
<check if="matching bugs found in bugs section">
<action>For each matching bug:</action>
<action>- Update status: "triaged" or "routed" or "in-progress" → "fixed"</action>
<action>- Set fixed_date: {date}</action>
<action>- Set assigned_to: "dev-agent" (if not already set)</action>
<action>- Append to notes: "Auto-closed via sync-bug-tracking. Story {story_key} marked done on {date}."</action>
<action>- Add bug ID to bugs_updated list</action>
</check>
</step>
<step n="4" goal="Update matching features (with multi-story check)">
<check if="matching features found in feature_requests section">
<action>For each matching feature (via related_story OR sprint_stories):</action>
<critical>MULTI-STORY FEATURE CHECK: If feature has sprint_stories array with multiple entries:</critical>
<action>1. Extract all story keys from sprint_stories (format: "story-key: status")</action>
<action>2. Load sprint-status.yaml and check development_status for EACH story</action>
<action>3. Only proceed if ALL stories in sprint_stories have status "done" in sprint-status.yaml</action>
<action>4. If any story is NOT done, add feature to features_pending and log: "Feature {feature_id} has incomplete stories: {incomplete_list}"</action>
<check if="ALL sprint_stories are done (or feature has single story that matches)">
<action>- Update status: "backlog" or "triaged" or "routed" or "in-progress" → "implemented"</action>
<action>- Set implemented_date: {date}</action>
<action>- Update sprint_stories entries to reflect done status</action>
<action>- Append to notes: "Auto-closed via sync-bug-tracking. Story {story_key} marked done on {date}."</action>
<action>- Add feature ID to features_updated list</action>
</check>
</check>
</step>
<step n="5" goal="Save bugs.yaml updates">
<check if="bugs_updated is not empty OR features_updated is not empty">
<action>Save updated bugs.yaml, preserving all structure and comments</action>
</check>
</step>
<step n="6" goal="Update bugs.md to match">
<check if="bugs_updated is not empty OR features_updated is not empty">
<action>Load {bugs_md}</action>
<check if="bugs_updated is not empty">
<action>For each bug in bugs_updated:</action>
<action>- Find the bug entry in "# Tracked Bugs" section</action>
<action>- Move it to "# Fixed Bugs" section</action>
<action>- Add [IMPLEMENTED] tag prefix with date: "[IMPLEMENTED] bug-XXX: Title [Fixed: {date}, Verified: pending]"</action>
</check>
<check if="features_updated is not empty">
<action>For each feature in features_updated:</action>
<action>- Find the feature entry in "# Tracked Feature Requests" section</action>
<action>- Move it to "# Implemented Features" section</action>
<action>- Add [IMPLEMENTED] tag prefix with date: "[IMPLEMENTED] feature-XXX: Title [Implemented: {date}, Verified: pending]"</action>
</check>
<action>Update statistics section if present</action>
<action>Save updated bugs.md</action>
</check>
</step>
<step n="7" goal="Return results">
<output>
Bug/Feature Sync Results:
{{#if bugs_updated}}
- Bugs marked fixed: {{bugs_updated}}
{{/if}}
{{#if features_updated}}
- Features marked implemented: {{features_updated}}
{{/if}}
{{#if features_pending}}
- Features with incomplete stories (not yet implemented): {{features_pending}}
{{/if}}
{{#if no_matches}}
- No related bugs/features found for story {story_key}
{{/if}}
</output>
</step>
</flow>
</task>
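The multi-story rule in Step 4 is the subtle part of this task: a feature that spans several sprint stories is only marked implemented once every linked story is done. A minimal sketch of that check, assuming hypothetical shapes for the YAML data:

```typescript
// Hypothetical stand-ins for bugs.yaml feature entries and sprint-status.yaml data.
interface FeatureSyncEntry {
  id: string;
  sprint_stories: string[]; // e.g. ["3-7-checkout: done", "3-8-receipts: in-progress"]
}

type SprintStatus = Record<string, string>; // story key -> development status

// A feature is marked "implemented" only when every linked story is done;
// otherwise it is reported back in features_pending.
function checkFeatureCompletion(
  feature: FeatureSyncEntry,
  sprintStatus: SprintStatus,
): { implemented: boolean; incomplete: string[] } {
  const storyKeys = feature.sprint_stories.map((entry) => entry.split(":")[0].trim());
  const incomplete = storyKeys.filter((key) => sprintStatus[key] !== "done");
  return { implemented: incomplete.length === 0, incomplete };
}
```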

View File

@@ -130,7 +130,6 @@ After agent loading and introduction:
- Handle missing or incomplete agent entries gracefully
- Cross-reference manifest with actual agent files
- Prepare agent selection logic for intelligent conversation routing
- Set up TTS voice configurations for each agent
## NEXT STEP:

View File

@ -6,7 +6,6 @@
- 🎯 SELECT RELEVANT AGENTS based on topic analysis and expertise matching - 🎯 SELECT RELEVANT AGENTS based on topic analysis and expertise matching
- 📋 MAINTAIN CHARACTER CONSISTENCY using merged agent personalities - 📋 MAINTAIN CHARACTER CONSISTENCY using merged agent personalities
- 🔍 ENABLE NATURAL CROSS-TALK between agents for dynamic conversation - 🔍 ENABLE NATURAL CROSS-TALK between agents for dynamic conversation
- 💬 INTEGRATE TTS for each agent response immediately after text
- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}` - ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}`
## EXECUTION PROTOCOLS: ## EXECUTION PROTOCOLS:
@ -21,7 +20,6 @@
- Complete agent roster with merged personalities is available - Complete agent roster with merged personalities is available
- User topic and conversation history guide agent selection - User topic and conversation history guide agent selection
- Party mode is active with TTS integration enabled
- Exit triggers: `*exit`, `goodbye`, `end party`, `quit` - Exit triggers: `*exit`, `goodbye`, `end party`, `quit`
## YOUR TASK: ## YOUR TASK:
@ -116,19 +114,9 @@ Allow natural back-and-forth within the same response round for dynamic interact
### 6. Response Round Completion ### 6. Response Round Completion
After generating all agent responses for the round: After generating all agent responses for the round, let the user know he can speak naturally with the agents, an then show this menu opion"
**Presentation Format:** `[E] Exit Party Mode - End the collaborative session`
[Agent 1 Response with TTS]
[Empty line for readability]
[Agent 2 Response with TTS, potentially referencing Agent 1]
[Empty line for readability]
[Agent 3 Response with TTS, building on or offering new perspective]
**Continue Option:**
"[Agents have contributed their perspectives. Ready for more discussion?]
[E] Exit Party Mode - End the collaborative session"
### 7. Exit Condition Checking ### 7. Exit Condition Checking
@ -142,23 +130,19 @@ Check for exit conditions before continuing:
**Natural Conclusion:** **Natural Conclusion:**
- Conversation seems naturally concluding - Conversation seems naturally concluding
- Ask user: "Would you like to continue the discussion or end party mode?" - Confirm if the user wants to exit party mode and go back to where they were or continue chatting. Do it in a conversational way with an agent in the party.
- Respect user choice to continue or exit
### 8. Handle Exit Selection ### 8. Handle Exit Selection
#### If 'E' (Exit Party Mode): #### If 'E' (Exit Party Mode):
- Update frontmatter: `stepsCompleted: [1, 2]` - Load read and execute: `./step-03-graceful-exit.md`
- Set `party_active: false`
- Load: `./step-03-graceful-exit.md`
## SUCCESS METRICS: ## SUCCESS METRICS:
✅ Intelligent agent selection based on topic analysis ✅ Intelligent agent selection based on topic analysis
✅ Authentic in-character responses maintained consistently ✅ Authentic in-character responses maintained consistently
✅ Natural cross-talk and agent interactions enabled ✅ Natural cross-talk and agent interactions enabled
✅ TTS integration working for all agent responses
✅ Question handling protocol followed correctly ✅ Question handling protocol followed correctly
✅ [E] exit option presented after each response round ✅ [E] exit option presented after each response round
✅ Conversation context and state maintained throughout ✅ Conversation context and state maintained throughout
@ -168,7 +152,6 @@ Check for exit conditions before continuing:
❌ Generic responses without character consistency ❌ Generic responses without character consistency
❌ Poor agent selection not matching topic expertise ❌ Poor agent selection not matching topic expertise
❌ Missing TTS integration for agent responses
❌ Ignoring user questions or exit triggers ❌ Ignoring user questions or exit triggers
❌ Not enabling natural agent cross-talk and interactions ❌ Not enabling natural agent cross-talk and interactions
❌ Continuing conversation without user input when questions asked ❌ Continuing conversation without user input when questions asked

View File

@@ -106,7 +106,6 @@ workflow_completed: true
- Clear any active conversation state
- Reset agent selection cache
- Finalize TTS session cleanup
- Mark party mode workflow as completed
### 6. Exit Workflow
@@ -122,7 +121,6 @@ Thank you for using BMAD Party Mode for collaborative multi-agent discussions!"
✅ Satisfying agent farewells generated in authentic character voices
✅ Session highlights and contributions acknowledged meaningfully
✅ Positive and appreciative closure atmosphere maintained
✅ TTS integration working for farewell messages
✅ Frontmatter properly updated with workflow completion
✅ All workflow state cleaned up appropriately
✅ User left with positive impression of collaborative experience

View File

@@ -178,18 +178,6 @@ If conversation naturally concludes:
---
## TTS INTEGRATION
Party mode includes Text-to-Speech for each agent response:
**TTS Protocol:**
- Trigger TTS immediately after each agent's text response
- Use agent's merged voice configuration from manifest
- Format: `Bash: .claude/hooks/bmad-speak.sh "[Agent Name]" "[Their response]"`
---
## MODERATION NOTES
**Quality Control:**

View File

@@ -22,6 +22,8 @@ agent:
critical_actions:
- "Consult {project-root}/_bmad/bmgd/gametest/qa-index.csv to select knowledge fragments under knowledge/ and load only the files needed for the current task"
- "For E2E testing requests, always load knowledge/e2e-testing.md first"
- "When scaffolding tests, distinguish between unit, integration, and E2E test needs"
- "Load the referenced fragment(s) from {project-root}/_bmad/bmgd/gametest/knowledge/ before giving recommendations"
- "Cross-check recommendations with the current official Unity Test Framework, Unreal Automation, or Godot GUT documentation"
- "Find if this exists, if it does, always treat it as the bible I plan and execute against: `**/project-context.md`"
@@ -43,6 +45,10 @@ agent:
workflow: "{project-root}/_bmad/bmgd/workflows/gametest/automate/workflow.yaml"
description: "[TA] Generate automated game tests"
- trigger: ES or fuzzy match on e2e-scaffold
workflow: "{project-root}/_bmad/bmgd/workflows/gametest/e2e-scaffold/workflow.yaml"
description: "[ES] Scaffold E2E testing infrastructure"
- trigger: PP or fuzzy match on playtest-plan
workflow: "{project-root}/_bmad/bmgd/workflows/gametest/playtest-plan/workflow.yaml"
description: "[PP] Create structured playtesting plan"

File diff suppressed because it is too large

View File

@@ -374,3 +374,502 @@ test:
| Signal not detected | Signal not watched | Call `watch_signals()` before action |
| Physics not working | Missing frames | Await `physics_frame` |
| Flaky tests | Timing issues | Use proper await/signals |
## C# Testing in Godot
Godot 4 supports C# via .NET 6+. You can use standard .NET testing frameworks alongside GUT.
### Project Setup for C#
```
project/
├── addons/
│ └── gut/
├── src/
│ ├── Player/
│ │ └── PlayerController.cs
│ └── Combat/
│ └── DamageCalculator.cs
├── tests/
│ ├── gdscript/
│ │ └── test_integration.gd
│ └── csharp/
│ ├── Tests.csproj
│ └── DamageCalculatorTests.cs
└── project.csproj
```
### C# Test Project Setup
Create a separate test project that references your game assembly:
```xml
<!-- tests/csharp/Tests.csproj -->
<Project Sdk="Godot.NET.Sdk/4.2.0">
<PropertyGroup>
<TargetFramework>net6.0</TargetFramework>
<EnableDynamicLoading>true</EnableDynamicLoading>
<IsPackable>false</IsPackable>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="Microsoft.NET.Test.Sdk" Version="17.8.0" />
<PackageReference Include="xunit" Version="2.6.2" />
<PackageReference Include="xunit.runner.visualstudio" Version="2.5.4" />
<PackageReference Include="NSubstitute" Version="5.1.0" />
</ItemGroup>
<ItemGroup>
<ProjectReference Include="../../project.csproj" />
</ItemGroup>
</Project>
```
### Basic C# Unit Tests
```csharp
// tests/csharp/DamageCalculatorTests.cs
using Xunit;
using YourGame.Combat;
public class DamageCalculatorTests
{
private readonly DamageCalculator _calculator;
public DamageCalculatorTests()
{
_calculator = new DamageCalculator();
}
[Fact]
public void Calculate_BaseDamage_ReturnsCorrectValue()
{
var result = _calculator.Calculate(100f, 1f);
Assert.Equal(100f, result);
}
[Fact]
public void Calculate_CriticalHit_DoublesDamage()
{
var result = _calculator.Calculate(100f, 2f);
Assert.Equal(200f, result);
}
[Theory]
[InlineData(100f, 0.5f, 50f)]
[InlineData(100f, 1.5f, 150f)]
[InlineData(50f, 2f, 100f)]
public void Calculate_Parameterized_ReturnsExpected(
float baseDamage, float multiplier, float expected)
{
var result = _calculator.Calculate(baseDamage, multiplier);
Assert.Equal(expected, result);
}
}
```
### Testing Godot Nodes in C#
For tests requiring Godot runtime, use a hybrid approach:
```csharp
// tests/csharp/PlayerControllerTests.cs
using Godot;
using Xunit;
using YourGame.Player;
public class PlayerControllerTests : IDisposable
{
private readonly SceneTree _sceneTree;
private PlayerController _player;
public PlayerControllerTests()
{
// These tests must run within Godot runtime
// Use GodotXUnit or similar adapter
}
[GodotFact] // Custom attribute for Godot runtime tests
public async Task Player_Move_ChangesPosition()
{
var startPos = _player.GlobalPosition;
_player.SetInput(new Vector2(1, 0));
await ToSignal(GetTree().CreateTimer(0.5f), "timeout");
Assert.True(_player.GlobalPosition.X > startPos.X);
}
public void Dispose()
{
_player?.QueueFree();
}
}
```
### C# Mocking with NSubstitute
```csharp
using NSubstitute;
using Xunit;
public class EnemyAITests
{
[Fact]
public void Enemy_UsesPathfinding_WhenMoving()
{
var mockPathfinding = Substitute.For<IPathfinding>();
mockPathfinding.FindPath(Arg.Any<Vector2>(), Arg.Any<Vector2>())
.Returns(new[] { Vector2.Zero, new Vector2(10, 10) });
var enemy = new EnemyAI(mockPathfinding);
enemy.MoveTo(new Vector2(10, 10));
mockPathfinding.Received().FindPath(
Arg.Any<Vector2>(),
Arg.Is<Vector2>(v => v == new Vector2(10, 10)));
}
}
```
### Running C# Tests
```bash
# Run C# unit tests (no Godot runtime needed)
dotnet test tests/csharp/Tests.csproj
# Run with coverage
dotnet test tests/csharp/Tests.csproj --collect:"XPlat Code Coverage"
# Run specific test
dotnet test tests/csharp/Tests.csproj --filter "FullyQualifiedName~DamageCalculator"
```
### Hybrid Test Strategy
| Test Type | Framework | When to Use |
| ------------- | ---------------- | ---------------------------------- |
| Pure logic | xUnit/NUnit (C#) | Classes without Godot dependencies |
| Node behavior | GUT (GDScript) | MonoBehaviour-like testing |
| Integration | GUT (GDScript) | Scene and signal testing |
| E2E | GUT (GDScript) | Full gameplay flows |
## End-to-End Testing
For comprehensive E2E testing patterns, infrastructure scaffolding, and
scenario builders, see **knowledge/e2e-testing.md**.
### E2E Infrastructure for Godot
#### GameE2ETestFixture (GDScript)
```gdscript
# tests/e2e/infrastructure/game_e2e_test_fixture.gd
extends GutTest
class_name GameE2ETestFixture
var game_state: GameStateManager
var input_sim: InputSimulator
var scenario: ScenarioBuilder
var _scene_instance: Node
## Override to specify a different scene for specific test classes.
func get_scene_path() -> String:
return "res://scenes/game.tscn"
func before_each():
# Load game scene
var scene = load(get_scene_path())
_scene_instance = scene.instantiate()
add_child(_scene_instance)
# Get references
game_state = _scene_instance.get_node("GameStateManager")
assert_not_null(game_state, "GameStateManager not found in scene")
input_sim = InputSimulator.new()
scenario = ScenarioBuilder.new(game_state)
# Wait for ready
await wait_for_game_ready()
func after_each():
if _scene_instance:
_scene_instance.queue_free()
_scene_instance = null
input_sim = null
scenario = null
func wait_for_game_ready(timeout: float = 10.0):
var elapsed = 0.0
while not game_state.is_ready and elapsed < timeout:
await get_tree().process_frame
elapsed += get_process_delta_time()
assert_true(game_state.is_ready, "Game should be ready within timeout")
```
#### ScenarioBuilder (GDScript)
```gdscript
# tests/e2e/infrastructure/scenario_builder.gd
extends RefCounted
class_name ScenarioBuilder
var _game_state: GameStateManager
var _setup_actions: Array[Callable] = []
func _init(game_state: GameStateManager):
_game_state = game_state
## Load a pre-configured scenario from a save file.
func from_save_file(file_name: String) -> ScenarioBuilder:
_setup_actions.append(func(): await _load_save_file(file_name))
return self
## Configure the current turn number.
func on_turn(turn_number: int) -> ScenarioBuilder:
_setup_actions.append(func(): _set_turn(turn_number))
return self
## Spawn a unit at position.
func with_unit(faction: int, position: Vector2, movement_points: int = 6) -> ScenarioBuilder:
_setup_actions.append(func(): await _spawn_unit(faction, position, movement_points))
return self
## Execute all configured setup actions.
func build() -> void:
for action in _setup_actions:
await action.call()
_setup_actions.clear()
## Clear pending actions without executing.
func reset() -> void:
_setup_actions.clear()
# Private implementation
func _load_save_file(file_name: String) -> void:
var path = "res://tests/e2e/test_data/%s" % file_name
await _game_state.load_game(path)
func _set_turn(turn: int) -> void:
_game_state.set_turn_number(turn)
func _spawn_unit(faction: int, pos: Vector2, mp: int) -> void:
var unit = _game_state.spawn_unit(faction, pos)
unit.movement_points = mp
```
#### InputSimulator (GDScript)
```gdscript
# tests/e2e/infrastructure/input_simulator.gd
extends RefCounted
class_name InputSimulator
## Click at a world position.
func click_world_position(world_pos: Vector2) -> void:
var viewport = Engine.get_main_loop().root.get_viewport()
var camera = viewport.get_camera_2d()
var screen_pos = camera.get_screen_center_position() + (world_pos - camera.global_position)
await click_screen_position(screen_pos)
## Click at a screen position.
func click_screen_position(screen_pos: Vector2) -> void:
var press = InputEventMouseButton.new()
press.button_index = MOUSE_BUTTON_LEFT
press.pressed = true
press.position = screen_pos
var release = InputEventMouseButton.new()
release.button_index = MOUSE_BUTTON_LEFT
release.pressed = false
release.position = screen_pos
Input.parse_input_event(press)
await Engine.get_main_loop().process_frame
Input.parse_input_event(release)
await Engine.get_main_loop().process_frame
## Click a UI button by name.
func click_button(button_name: String) -> void:
var root = Engine.get_main_loop().root
var button = _find_button_recursive(root, button_name)
assert(button != null, "Button '%s' not found in scene tree" % button_name)
if not button.visible:
push_warning("[InputSimulator] Button '%s' is not visible" % button_name)
if button.disabled:
push_warning("[InputSimulator] Button '%s' is disabled" % button_name)
button.pressed.emit()
await Engine.get_main_loop().process_frame
func _find_button_recursive(node: Node, button_name: String) -> Button:
if node is Button and node.name == button_name:
return node
for child in node.get_children():
var found = _find_button_recursive(child, button_name)
if found:
return found
return null
## Press and release a key.
func press_key(keycode: Key) -> void:
var press = InputEventKey.new()
press.keycode = keycode
press.pressed = true
var release = InputEventKey.new()
release.keycode = keycode
release.pressed = false
Input.parse_input_event(press)
await Engine.get_main_loop().process_frame
Input.parse_input_event(release)
await Engine.get_main_loop().process_frame
## Simulate an input action.
func action_press(action_name: String) -> void:
Input.action_press(action_name)
await Engine.get_main_loop().process_frame
func action_release(action_name: String) -> void:
Input.action_release(action_name)
await Engine.get_main_loop().process_frame
## Reset all input state.
func reset() -> void:
Input.flush_buffered_events()
```
#### AsyncAssert (GDScript)
```gdscript
# tests/e2e/infrastructure/async_assert.gd
extends RefCounted
class_name AsyncAssert
## Wait until condition is true, or fail after timeout.
static func wait_until(
condition: Callable,
description: String,
timeout: float = 5.0
) -> void:
var elapsed := 0.0
while not condition.call() and elapsed < timeout:
await Engine.get_main_loop().process_frame
elapsed += Engine.get_main_loop().root.get_process_delta_time()
assert(condition.call(),
"Timeout after %.1fs waiting for: %s" % [timeout, description])
## Wait for a value to equal expected.
static func wait_for_value(
getter: Callable,
expected: Variant,
description: String,
timeout: float = 5.0
) -> void:
await wait_until(
func(): return getter.call() == expected,
"%s to equal '%s' (current: '%s')" % [description, expected, getter.call()],
timeout)
## Wait for a float value within tolerance.
static func wait_for_value_approx(
getter: Callable,
expected: float,
description: String,
tolerance: float = 0.0001,
timeout: float = 5.0
) -> void:
await wait_until(
func(): return absf(expected - getter.call()) < tolerance,
"%s to equal ~%s ±%s (current: %s)" % [description, expected, tolerance, getter.call()],
timeout)
## Assert that condition does NOT become true within duration.
static func assert_never_true(
condition: Callable,
description: String,
duration: float = 1.0
) -> void:
var elapsed := 0.0
while elapsed < duration:
assert(not condition.call(),
"Condition unexpectedly became true: %s" % description)
await Engine.get_main_loop().process_frame
elapsed += Engine.get_main_loop().root.get_process_delta_time()
## Wait for specified number of frames.
static func wait_frames(count: int) -> void:
for i in range(count):
await Engine.get_main_loop().process_frame
## Wait for physics to settle.
static func wait_for_physics(frames: int = 3) -> void:
for i in range(frames):
await Engine.get_main_loop().root.get_tree().physics_frame
```
### Example E2E Test (GDScript)
```gdscript
# tests/e2e/scenarios/test_combat_flow.gd
extends GameE2ETestFixture
func test_player_can_attack_enemy():
# GIVEN: Player and enemy in combat range
await scenario \
.with_unit(Faction.PLAYER, Vector2(100, 100)) \
.with_unit(Faction.ENEMY, Vector2(150, 100)) \
.build()
var enemy = game_state.get_units(Faction.ENEMY)[0]
var initial_health = enemy.health
# WHEN: Player attacks
await input_sim.click_world_position(Vector2(100, 100)) # Select player
await AsyncAssert.wait_until(
func(): return game_state.selected_unit != null,
"Unit should be selected")
await input_sim.click_world_position(Vector2(150, 100)) # Attack enemy
# THEN: Enemy takes damage
await AsyncAssert.wait_until(
func(): return enemy.health < initial_health,
"Enemy should take damage")
func test_turn_cycle_completes():
# GIVEN: Game in progress
await scenario.on_turn(1).build()
var starting_turn = game_state.turn_number
# WHEN: Player ends turn
await input_sim.click_button("EndTurnButton")
await AsyncAssert.wait_until(
func(): return game_state.current_faction == Faction.ENEMY,
"Should switch to enemy turn")
# AND: Enemy turn completes
await AsyncAssert.wait_until(
func(): return game_state.current_faction == Faction.PLAYER,
"Should return to player turn",
30.0) # AI might take a while
# THEN: Turn number incremented
assert_eq(game_state.turn_number, starting_turn + 1)
```
### Quick E2E Checklist for Godot
- [ ] Create `GameE2ETestFixture` base class extending GutTest
- [ ] Implement `ScenarioBuilder` for your game's domain
- [ ] Create `InputSimulator` wrapping Godot Input
- [ ] Add `AsyncAssert` utilities with proper await
- [ ] Organize E2E tests under `tests/e2e/scenarios/`
- [ ] Configure GUT to include E2E test directory
- [ ] Set up CI with headless Godot execution

View File

@@ -381,3 +381,17 @@ test:
| NullReferenceException | Missing Setup | Ensure [SetUp] initializes all fields |
| Tests hang | Infinite coroutine | Add timeout or max iterations |
| Flaky physics tests | Timing dependent | Use WaitForFixedUpdate, increase tolerance |
## End-to-End Testing
For comprehensive E2E testing patterns, infrastructure scaffolding, and
scenario builders, see **knowledge/e2e-testing.md**.
### Quick E2E Checklist for Unity
- [ ] Create `GameE2ETestFixture` base class
- [ ] Implement `ScenarioBuilder` for your game's domain
- [ ] Create `InputSimulator` wrapping Input System
- [ ] Add `AsyncAssert` utilities
- [ ] Organize E2E tests under `Tests/PlayMode/E2E/`
- [ ] Configure separate CI job for E2E suite

File diff suppressed because it is too large

View File

@@ -14,4 +14,5 @@ input-testing,Input Testing,"Controller, keyboard, and touch input validation","
localization-testing,Localization Testing,"Text, audio, and cultural validation for international releases","localization,i18n,text",knowledge/localization-testing.md
certification-testing,Platform Certification,"Console TRC/XR requirements and certification testing","certification,console,trc,xr",knowledge/certification-testing.md
smoke-testing,Smoke Testing,"Critical path validation for build verification","smoke-tests,bvt,ci",knowledge/smoke-testing.md
test-priorities,Test Priorities Matrix,"P0-P3 criteria, coverage targets, execution ordering for games","prioritization,risk,coverage",knowledge/test-priorities.md
e2e-testing,End-to-End Testing,"Complete player journey testing with infrastructure patterns and async utilities","e2e,integration,player-journeys,scenarios,infrastructure",knowledge/e2e-testing.md


View File

@ -209,6 +209,87 @@ func test_{feature}_integration():
# Cleanup
scene.queue_free()
```
### E2E Journey Tests
**Knowledge Base Reference**: `knowledge/e2e-testing.md`
```csharp
public class {Feature}E2ETests : GameE2ETestFixture
{
[UnityTest]
public IEnumerator {JourneyName}_Succeeds()
{
// GIVEN
yield return Scenario
.{SetupMethod1}()
.{SetupMethod2}()
.Build();
// WHEN
yield return Input.{Action1}();
yield return AsyncAssert.WaitUntil(
() => {Condition1}, "{Description1}");
yield return Input.{Action2}();
// THEN
yield return AsyncAssert.WaitUntil(
() => {FinalCondition}, "{FinalDescription}");
Assert.{Assertion}({expected}, {actual});
}
}
```
## Step 3.5: Generate E2E Infrastructure
Before generating E2E tests, scaffold the required infrastructure.
### Infrastructure Checklist
1. **Test Fixture Base Class**
- Scene loading/unloading
- Game ready state waiting
- Common service access
- Cleanup guarantees
2. **Scenario Builder**
- Fluent API for game state configuration
- Domain-specific methods (e.g., `WithUnit`, `OnTurn`)
- Yields for state propagation
3. **Input Simulator**
- Click/drag abstractions
- Button press simulation
- Keyboard input queuing
4. **Async Assertions**
- `WaitUntil` with timeout and message
- `WaitForEvent` for event-driven flows
- `WaitForState` for state machine transitions
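For item 4 above, a minimal `AsyncAssert.WaitUntil` could be sketched as follows; the class name, default timeout, and polling strategy are illustrative rather than mandated by this step:
```csharp
// AsyncAssert.cs - minimal sketch of a coroutine-friendly assertion helper.
// The class name, default timeout, and polling strategy are illustrative only.
using System;
using System.Collections;
using NUnit.Framework;
using UnityEngine;

public static class AsyncAssert
{
    // Polls a condition every frame until it is true or the timeout elapses.
    public static IEnumerator WaitUntil(Func<bool> condition, string description, float timeoutSeconds = 10f)
    {
        float elapsed = 0f;
        while (!condition())
        {
            if (elapsed >= timeoutSeconds)
            {
                Assert.Fail($"Timed out after {timeoutSeconds}s waiting for: {description}");
            }
            elapsed += Time.deltaTime;
            yield return null; // wait one frame before re-checking
        }
    }
}
```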
### Generation Template
```csharp
// GameE2ETestFixture.cs
public abstract class GameE2ETestFixture
{
protected {GameStateClass} GameState;
protected {InputSimulatorClass} Input;
protected {ScenarioBuilderClass} Scenario;
[UnitySetUp]
public IEnumerator BaseSetUp()
{
yield return LoadScene("{main_scene}");
GameState = Object.FindFirstObjectByType<{GameStateClass}>();
Input = new {InputSimulatorClass}();
Scenario = new {ScenarioBuilderClass}(GameState);
yield return WaitForReady();
}
// ... (fill from e2e-testing.md patterns)
}
```
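A companion sketch for the `{ScenarioBuilderClass}` referenced in the template might look like the following. The queued-action fluent pattern is the point; `WithUnit`, `OnTurn`, and the `GameStateManager` calls are hypothetical stand-ins for your own domain API:
```csharp
// ScenarioBuilder.cs - minimal sketch of a fluent, queued-action scenario builder.
// WithUnit, OnTurn, and the GameStateManager calls are hypothetical domain stand-ins.
using System;
using System.Collections;
using System.Collections.Generic;

public class ScenarioBuilder
{
    private readonly GameStateManager _gameState;                     // assumed game state class
    private readonly List<Action> _setupActions = new List<Action>();

    public ScenarioBuilder(GameStateManager gameState)
    {
        _gameState = gameState;
    }

    public ScenarioBuilder WithUnit(string unitId, int x, int y)
    {
        _setupActions.Add(() => _gameState.SpawnUnit(unitId, x, y)); // hypothetical API
        return this;
    }

    public ScenarioBuilder OnTurn(int turnNumber)
    {
        _setupActions.Add(() => _gameState.SetTurn(turnNumber));     // hypothetical API
        return this;
    }

    // Executes all queued actions, then yields one frame so state changes propagate.
    public IEnumerator Build()
    {
        foreach (var action in _setupActions)
        {
            action();
        }
        yield return null;
    }
}
```
In a real scaffold the builder lives alongside the fixture and is instantiated in `BaseSetUp`, as the template above already does.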
**After scaffolding infrastructure, proceed to generate actual E2E tests.**
---

View File

@ -0,0 +1,95 @@
# E2E Infrastructure Scaffold Checklist
## Preflight Validation
- [ ] Test framework already initialized (`Tests/` directory exists with proper structure)
- [ ] Game state manager class identified
- [ ] Main gameplay scene identified and loads without errors
- [ ] No existing E2E infrastructure conflicts
## Architecture Analysis
- [ ] Game engine correctly detected
- [ ] Engine version identified
- [ ] Input system type determined (New Input System, Legacy, Custom)
- [ ] Game state manager class located
- [ ] Ready/initialized state property identified
- [ ] Key domain entities catalogued for ScenarioBuilder
## Generated Files
### Directory Structure
- [ ] `Tests/PlayMode/E2E/` directory created
- [ ] `Tests/PlayMode/E2E/Infrastructure/` directory created
- [ ] `Tests/PlayMode/E2E/Scenarios/` directory created
- [ ] `Tests/PlayMode/E2E/TestData/` directory created
### Infrastructure Files
- [ ] `E2E.asmdef` created with correct assembly references
- [ ] `GameE2ETestFixture.cs` created with correct class references
- [ ] `ScenarioBuilder.cs` created with at least placeholder methods
- [ ] `InputSimulator.cs` created matching detected input system
- [ ] `AsyncAssert.cs` created with core assertion methods
### Example and Documentation
- [ ] `ExampleE2ETest.cs` created with working infrastructure test
- [ ] `README.md` created with usage documentation
## Code Quality
### GameE2ETestFixture
- [ ] Correct namespace applied
- [ ] Correct `GameStateClass` reference
- [ ] Correct `SceneName` default
- [ ] `WaitForGameReady` uses correct ready property
- [ ] `UnitySetUp` and `UnityTearDown` properly structured
- [ ] Virtual methods for derived class customization
### ScenarioBuilder
- [ ] Fluent API pattern correctly implemented
- [ ] `Build()` executes all queued actions
- [ ] At least one domain-specific method added (or clear TODOs)
- [ ] `FromSaveFile` method scaffolded
### InputSimulator
- [ ] Matches detected input system (New vs Legacy)
- [ ] Mouse click simulation works
- [ ] Button click by name works
- [ ] Keyboard input scaffolded
- [ ] `Reset()` method cleans up state
### AsyncAssert
- [ ] `WaitUntil` includes timeout and descriptive failure
- [ ] `WaitForValue` provides current vs expected in failure
- [ ] `AssertNeverTrue` for negative assertions
- [ ] Frame/physics wait utilities included
## Assembly Definition
- [ ] References main game assembly
- [ ] References Unity.InputSystem (if applicable)
- [ ] `overrideReferences` set to true
- [ ] `precompiledReferences` includes nunit.framework.dll
- [ ] `precompiledReferences` includes UnityEngine.TestRunner.dll
- [ ] `precompiledReferences` includes UnityEditor.TestRunner.dll
- [ ] `UNITY_INCLUDE_TESTS` define constraint set
## Verification
- [ ] Project compiles without errors after scaffold
- [ ] `ExampleE2ETests.Infrastructure_GameLoadsAndReachesReadyState` passes
- [ ] Test appears in Test Runner under PlayMode → E2E category
## Documentation Quality
- [ ] README explains all infrastructure components
- [ ] Quick start example is copy-pasteable
- [ ] Extension instructions are clear
- [ ] Troubleshooting table addresses common issues
## Handoff
- [ ] Summary output provided with all configuration values
- [ ] Next steps clearly listed
- [ ] Customization requirements highlighted
- [ ] Knowledge fragments referenced

File diff suppressed because it is too large

View File

@ -0,0 +1,145 @@
# E2E Test Infrastructure Scaffold Workflow
workflow:
id: e2e-scaffold
name: E2E Test Infrastructure Scaffold
version: 1.0
module: bmgd
agent: game-qa
description: |
Scaffold complete E2E testing infrastructure for an existing game project.
Creates test fixtures, scenario builders, input simulators, and async
assertion utilities tailored to the project's architecture.
triggers:
- "ES"
- "e2e-scaffold"
- "scaffold e2e"
- "e2e infrastructure"
- "setup e2e"
preflight:
- "Test framework initialized (run `test-framework` workflow first)"
- "Game has identifiable state manager"
- "Main gameplay scene exists"
# Paths are relative to this workflow file's location
knowledge_fragments:
- "../../../gametest/knowledge/e2e-testing.md"
- "../../../gametest/knowledge/unity-testing.md"
- "../../../gametest/knowledge/unreal-testing.md"
- "../../../gametest/knowledge/godot-testing.md"
inputs:
game_state_class:
description: "Primary game state manager class name"
required: true
example: "GameStateManager"
main_scene:
description: "Scene name where core gameplay occurs"
required: true
example: "GameScene"
input_system:
description: "Input system in use"
required: false
default: "auto-detect"
options:
- "unity-input-system"
- "unity-legacy"
- "unreal-enhanced"
- "godot-input"
- "custom"
# Output paths vary by engine. Generate files matching detected engine.
outputs:
unity:
condition: "engine == 'unity'"
infrastructure_files:
description: "Generated E2E infrastructure classes"
files:
- "Tests/PlayMode/E2E/Infrastructure/GameE2ETestFixture.cs"
- "Tests/PlayMode/E2E/Infrastructure/ScenarioBuilder.cs"
- "Tests/PlayMode/E2E/Infrastructure/InputSimulator.cs"
- "Tests/PlayMode/E2E/Infrastructure/AsyncAssert.cs"
assembly_definition:
description: "E2E test assembly configuration"
files:
- "Tests/PlayMode/E2E/E2E.asmdef"
example_test:
description: "Working example E2E test"
files:
- "Tests/PlayMode/E2E/ExampleE2ETest.cs"
documentation:
description: "E2E testing README"
files:
- "Tests/PlayMode/E2E/README.md"
unreal:
condition: "engine == 'unreal'"
infrastructure_files:
description: "Generated E2E infrastructure classes"
files:
- "Source/{ProjectName}/Tests/E2E/GameE2ETestBase.h"
- "Source/{ProjectName}/Tests/E2E/GameE2ETestBase.cpp"
- "Source/{ProjectName}/Tests/E2E/ScenarioBuilder.h"
- "Source/{ProjectName}/Tests/E2E/ScenarioBuilder.cpp"
- "Source/{ProjectName}/Tests/E2E/InputSimulator.h"
- "Source/{ProjectName}/Tests/E2E/InputSimulator.cpp"
- "Source/{ProjectName}/Tests/E2E/AsyncAssert.h"
build_configuration:
description: "E2E test build configuration"
files:
- "Source/{ProjectName}/Tests/E2E/{ProjectName}E2ETests.Build.cs"
example_test:
description: "Working example E2E test"
files:
- "Source/{ProjectName}/Tests/E2E/ExampleE2ETest.cpp"
documentation:
description: "E2E testing README"
files:
- "Source/{ProjectName}/Tests/E2E/README.md"
godot:
condition: "engine == 'godot'"
infrastructure_files:
description: "Generated E2E infrastructure classes"
files:
- "tests/e2e/infrastructure/game_e2e_test_fixture.gd"
- "tests/e2e/infrastructure/scenario_builder.gd"
- "tests/e2e/infrastructure/input_simulator.gd"
- "tests/e2e/infrastructure/async_assert.gd"
example_test:
description: "Working example E2E test"
files:
- "tests/e2e/scenarios/example_e2e_test.gd"
documentation:
description: "E2E testing README"
files:
- "tests/e2e/README.md"
steps:
- id: analyze
name: "Analyze Game Architecture"
instruction_file: "instructions.md#step-1-analyze-game-architecture"
- id: scaffold
name: "Generate Infrastructure"
instruction_file: "instructions.md#step-2-generate-infrastructure"
- id: example
name: "Generate Example Test"
instruction_file: "instructions.md#step-3-generate-example-test"
- id: document
name: "Generate Documentation"
instruction_file: "instructions.md#step-4-generate-documentation"
- id: complete
name: "Output Summary"
instruction_file: "instructions.md#step-5-output-summary"
validation:
checklist: "checklist.md"

View File

@ -91,6 +91,18 @@ Create comprehensive test scenarios for game projects, covering gameplay mechani
| Performance | FPS, loading times | P1 |
| Accessibility | Assist features | P1 |
### E2E Journey Testing
**Knowledge Base Reference**: `knowledge/e2e-testing.md`
| Category | Focus | Priority |
|----------|-------|----------|
| Core Loop | Complete gameplay cycle | P0 |
| Turn Lifecycle | Full turn from start to end | P0 |
| Save/Load Round-trip | Save → quit → load → resume | P0 |
| Scene Transitions | Menu → Game → Back | P1 |
| Win/Lose Paths | Victory and defeat conditions | P1 |
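To make one of these rows concrete, a Save/Load Round-trip journey might be sketched as below; the save API, scene-reload helper, and `GameState` accessors are assumptions about the project rather than part of this matrix:
```csharp
// SaveLoadRoundTripE2ETests.cs - illustrative only; swap in your project's save API and state checks.
using System.Collections;
using NUnit.Framework;
using UnityEngine.TestTools;

public class SaveLoadRoundTripE2ETests : GameE2ETestFixture
{
    [UnityTest]
    public IEnumerator SaveQuitLoad_RestoresGameState()
    {
        // GIVEN: a mid-game scenario (OnTurn is a placeholder builder method)
        yield return Scenario.OnTurn(3).Build();
        int turnBeforeSave = GameState.TurnNumber;   // placeholder accessor

        // WHEN: the game saves, the scene is reloaded, and the save is loaded again
        yield return GameState.SaveToSlot(0);        // hypothetical coroutine-based save API
        yield return ReloadMainScene();              // hypothetical helper on the fixture
        yield return GameState.LoadFromSlot(0);      // hypothetical coroutine-based load API

        // THEN: the restored state matches what was saved
        yield return AsyncAssert.WaitUntil(
            () => GameState.IsReady, "Game should be ready after loading the save");
        Assert.AreEqual(turnBeforeSave, GameState.TurnNumber, "Turn number should survive the round-trip");
    }
}
```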
---
## Step 3: Create Test Scenarios
@ -153,6 +165,39 @@ SCENARIO: Gameplay Under High Latency
CATEGORY: multiplayer
```
### E2E Scenario Format
For player journey tests, use this extended format:
```
E2E SCENARIO: [Player Journey Name]
GIVEN [Initial game state - use ScenarioBuilder terms]
WHEN [Sequence of player actions]
THEN [Observable outcomes]
TIMEOUT: [Expected max duration in seconds]
PRIORITY: P0/P1
CATEGORY: e2e
INFRASTRUCTURE: [Required fixtures/builders]
```
### Example E2E Scenario
```
E2E SCENARIO: Complete Combat Encounter
GIVEN game loaded with player unit adjacent to enemy
AND player unit has full health and actions
WHEN player selects unit
AND player clicks attack on enemy
AND player confirms attack
AND attack animation completes
AND enemy responds (if alive)
THEN enemy health is reduced OR enemy is defeated
AND turn state advances appropriately
AND UI reflects new state
TIMEOUT: 15
PRIORITY: P0
CATEGORY: e2e
INFRASTRUCTURE: ScenarioBuilder, InputSimulator, AsyncAssert
```
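As an illustration of how such a scenario maps onto the E2E infrastructure, a corresponding test could be sketched roughly as follows; every builder method, input helper, and `GameState` accessor here is a hypothetical placeholder, not part of the format:
```csharp
// CombatEncounterE2ETests.cs - sketch of how the scenario above maps onto the E2E infrastructure.
// Builder methods, input helpers, and GameState accessors are placeholders for your project.
using System.Collections;
using NUnit.Framework;
using UnityEngine.TestTools;

public class CombatEncounterE2ETests : GameE2ETestFixture
{
    [UnityTest]
    [Timeout(15000)] // TIMEOUT: 15 seconds, mirroring the scenario above
    public IEnumerator CompleteCombatEncounter_ReducesOrDefeatsEnemy()
    {
        // GIVEN: player unit adjacent to enemy, with full health and actions
        yield return Scenario
            .WithPlayerUnitAdjacentToEnemy()
            .WithFullHealthAndActions()
            .Build();
        int enemyHealthBefore = GameState.EnemyHealth;

        // WHEN: the player selects the unit, attacks, and confirms
        yield return Input.ClickUnit("player-1");
        yield return Input.ClickAttack("enemy-1");
        yield return Input.ConfirmAction();

        // THEN: enemy health is reduced or the enemy is defeated, and the turn advances
        yield return AsyncAssert.WaitUntil(
            () => GameState.EnemyHealth < enemyHealthBefore || GameState.EnemyDefeated,
            "Enemy health should be reduced or enemy defeated");
        Assert.IsTrue(GameState.TurnStateAdvanced, "Turn state should advance appropriately");
    }
}
```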
---
## Step 4: Prioritize Test Coverage
@ -161,12 +206,12 @@ SCENARIO: Gameplay Under High Latency
**Knowledge Base Reference**: `knowledge/test-priorities.md`
| Priority | Criteria | Unit | Integration | E2E | Manual |
|----------|----------|------|-------------|-----|--------|
| P0 | Ship blockers | 100% | 80% | Core flows | Smoke |
| P1 | Major features | 90% | 70% | Happy paths | Full |
| P2 | Secondary | 80% | 50% | - | Targeted |
| P3 | Edge cases | 60% | - | - | As needed |
### Risk-Based Ordering

View File

@ -33,7 +33,7 @@ agent:
menu:
- trigger: WS or fuzzy match on workflow-status
workflow: "{project-root}/_bmad/bmm/workflows/workflow-status/workflow.yaml"
description: "[WS] Get workflow status or initialize a workflow if not already done (optional)" description: "[WS] Start here or resume - show workflow status and next best step"
- trigger: TF or fuzzy match on test-framework
workflow: "{project-root}/_bmad/bmm/workflows/testarch/framework/workflow.yaml"

View File

@ -214,11 +214,24 @@
<output> Story status updated (no sprint tracking configured)</output>
</check>
<!-- Sync bug tracking when story is marked done -->
<check if="{{new_status}} == 'done'">
<invoke-task path="{project-root}/.bmad/core/tasks/sync-bug-tracking.xml">
<param name="story_key">{{story_key}}</param>
<param name="story_id">{{story_id}}</param>
<param name="bugs_yaml">{output_folder}/bugs.yaml</param>
<param name="bugs_md">{output_folder}/bugs.md</param>
<param name="sprint_status">{sprint_status}</param>
<param name="date">{date}</param>
</invoke-task>
</check>
<output>**✅ Review Complete!**
**Story Status:** {{new_status}}
**Issues Fixed:** {{fixed_count}}
**Action Items Created:** {{action_count}}
{{#if new_status == "done"}}**Bug/Feature Tracking:** Synced automatically{{/if}}
{{#if new_status == "done"}}Code review complete!{{else}}Address the action items and continue development.{{/if}} {{#if new_status == "done"}}Code review complete!{{else}}Address the action items and continue development.{{/if}}
</output> </output>

View File

@ -1300,7 +1300,67 @@ Bob (Scrum Master): "See you all when prep work is done. Meeting adjourned!"
</step>
<step n="11" goal="Save Retrospective and Update Sprint Status"> <step n="11" goal="Sync Epic-Linked Bugs/Features to Closed Status">
<critical>Check bugs.yaml for bugs/features linked to this epic and close them</critical>
<action>Load {bugs_yaml} if it exists</action>
<check if="bugs.yaml exists">
<action>Search for entries with related_epic matching {{epic_number}}</action>
<action>For bugs section - find bugs with related_epic == {{epic_number}} AND status in ["fixed", "triaged", "routed"]:</action>
<check if="matching bugs found">
<action>For each matching bug:</action>
<action>- Move entry from "bugs" section to "closed_bugs" section</action>
<action>- Update status: → "closed"</action>
<action>- Set verified_by: "retrospective-workflow"</action>
<action>- Set verified_date: {date}</action>
<action>- Append to notes: "Auto-closed via epic retrospective. Epic {{epic_number}} completed on {date}."</action>
</check>
<action>For feature_requests section - find features with related_epic == {{epic_number}} AND status in ["implemented", "backlog", "in-progress"]:</action>
<check if="matching features found">
<action>For each matching feature:</action>
<action>- Move entry from "feature_requests" section to "implemented_features" section</action>
<action>- Update status: → "complete"</action>
<action>- Set completed_by: "retrospective-workflow"</action>
<action>- Set completed_date: {date}</action>
<action>- Append to notes: "Auto-closed via epic retrospective. Epic {{epic_number}} completed on {date}."</action>
</check>
<action>Update statistics section with new counts</action>
<action>Save updated bugs.yaml</action>
<check if="bugs/features were moved">
<action>Also update bugs.md:</action>
<action>- Remove [IMPLEMENTED] tag from closed items</action>
<action>- Move bug entries to "# Fixed Bugs" section if not already there</action>
<action>- Move feature entries to "# Implemented Features" section if not already there</action>
<action>- Add [CLOSED] or [COMPLETE] tag to indicate final status</action>
<action>Save updated bugs.md</action>
</check>
<output>
Bug/Feature Closure:
{{#if bugs_closed}}
- Bugs closed for Epic {{epic_number}}: {{bugs_closed_list}}
{{/if}}
{{#if features_completed}}
- Features completed for Epic {{epic_number}}: {{features_completed_list}}
{{/if}}
{{#if no_matches}}
- No outstanding bugs/features linked to Epic {{epic_number}}
{{/if}}
</output>
</check>
<check if="bugs.yaml does not exist">
<action>Skip bug tracking sync - no bugs.yaml file present</action>
</check>
</step>
<step n="12" goal="Save Retrospective and Update Sprint Status">
<action>Ensure retrospectives folder exists: {retrospectives_folder}</action>
<action>Create folder if it doesn't exist</action>
@ -1356,7 +1416,7 @@ Retrospective document was saved successfully, but {sprint_status_file} may need
</step> </step>
<step n="12" goal="Final Summary and Handoff"> <step n="13" goal="Final Summary and Handoff">
<output>
**✅ Retrospective Complete, {user_name}!**

View File

@ -54,5 +54,9 @@ sprint_status_file: "{implementation_artifacts}/sprint-status.yaml"
story_directory: "{implementation_artifacts}"
retrospectives_folder: "{implementation_artifacts}"
# Bug tracking integration (optional)
bugs_yaml: "{planning_artifacts}/bugs.yaml"
bugs_md: "{planning_artifacts}/bugs.md"
standalone: true
web_bundle: false

View File

@ -48,6 +48,26 @@
<note>After discovery, these content variables are available: {epics_content} (all epics loaded - uses FULL_LOAD strategy)</note>
</step>
<step n="1.5" goal="Load bugs.yaml for bug/feature tracking (optional)">
<action>Check if {bugs_yaml} exists in {planning_artifacts}</action>
<check if="bugs_yaml exists">
<action>Read bugs.yaml using grep to find all bug-NNN and feature-NNN entries</action>
<action>For each bug/feature, extract:
- ID (e.g., bug-001, feature-003)
- Title
- Status (triaged, routed, in-progress, fixed/implemented, verified, closed)
- Recommended workflow (direct-fix, tech-spec, correct-course, backlog)
- Related stories (sprint_stories field for features)
</action>
<action>Build bug/feature inventory for inclusion in sprint status</action>
<action>Track feature-to-story mappings (feature-001 → stories 7-1, 7-2, etc.)</action>
</check>
<check if="bugs_yaml does not exist">
<output>Note: No bugs.yaml found - bug tracking not enabled for this project.</output>
<action>Continue without bug integration</action>
</check>
</step>
<step n="2" goal="Build sprint status structure"> <step n="2" goal="Build sprint status structure">
<action>For each epic found, create entries in this order:</action> <action>For each epic found, create entries in this order:</action>
@ -65,6 +85,17 @@ development_status:
epic-1-retrospective: optional
```
<action>If bugs.yaml was loaded, add bug/feature sources header comment:</action>
```yaml
# STORY SOURCES:
# ==============
# - epics.md: Primary source ({story_count} stories)
# - bugs.yaml: Feature-driven stories ({feature_story_count} stories from sprint_stories)
# - feature-001: 7-1, 7-2, 7-3 (from sprint_stories field)
# - feature-002: 3-7
```
</step>
<step n="3" goal="Apply intelligent status detection">

View File

@ -33,6 +33,10 @@ variables:
epics_location: "{planning_artifacts}" # Directory containing epic*.md files
epics_pattern: "epic*.md" # Pattern to find epic files
# Bug tracking integration (optional)
bugs_yaml: "{planning_artifacts}/bugs.yaml" # Structured bug/feature metadata
bugs_md: "{planning_artifacts}/bugs.md" # Human-readable bug tracking
# Output configuration
status_file: "{implementation_artifacts}/sprint-status.yaml"

View File

@ -88,15 +88,31 @@ Enter corrections (e.g., "1=in-progress, 2=backlog") or "skip" to continue witho
- IF any epic has status in-progress but has no associated stories: warn "in-progress epic has no stories"
</step>
<step n="2.5" goal="Load bug/feature tracking status (optional)">
<action>Check if {bugs_yaml} exists</action>
<check if="bugs_yaml exists">
<action>Grep for bug-NNN and feature-NNN entries with status field</action>
<action>Count items by status: triaged, fixed/implemented (pending verify), verified, closed</action>
<action>Identify items needing action:
- Items with [IMPLEMENTED] tag → need verification
- Items with status "triaged" + workflow "direct-fix" → ready for implementation
</action>
<action>Store: bugs_pending_verify, bugs_triaged, features_pending_verify, features_triaged</action>
</check>
</step>
<step n="3" goal="Select next action recommendation"> <step n="3" goal="Select next action recommendation">
<action>Pick the next recommended workflow using priority:</action> <action>Pick the next recommended workflow using priority:</action>
<note>When selecting "first" story: sort by epic number, then story number (e.g., 1-1 before 1-2 before 2-1)</note> <note>When selecting "first" story: sort by epic number, then story number (e.g., 1-1 before 1-2 before 2-1)</note>
<note>Bug verification takes priority over new story work to close the feedback loop</note>
1. If any bug/feature has [IMPLEMENTED] tag (pending verify) → recommend `verify` for first pending item
2. If any story status == in-progress → recommend `dev-story` for the first in-progress story
3. Else if any story status == review → recommend `code-review` for the first review story
4. Else if any story status == ready-for-dev → recommend `dev-story`
5. Else if any bug status == triaged with workflow == direct-fix → recommend `implement` for first triaged bug
6. Else if any story status == backlog → recommend `create-story`
7. Else if any retrospective status == optional → recommend `retrospective`
8. Else → All implementation items done; suggest `workflow-status` to plan next phase
<action>Store selected recommendation as: next_story_id, next_workflow_id, next_agent (SM/DEV as appropriate)</action>
</step>
@ -112,6 +128,11 @@ Enter corrections (e.g., "1=in-progress, 2=backlog") or "skip" to continue witho
**Epics:** backlog {{epic_backlog}}, in-progress {{epic_in_progress}}, done {{epic_done}}
{{#if bugs_yaml_exists}}
**Bugs:** triaged {{bugs_triaged}}, pending-verify {{bugs_pending_verify}}, closed {{bugs_closed}}
**Features:** triaged {{features_triaged}}, pending-verify {{features_pending_verify}}, complete {{features_complete}}
{{/if}}
**Next Recommendation:** /bmad:bmm:workflows:{{next_workflow_id}} ({{next_story_id}})
{{#if risks}}

View File

@ -21,6 +21,9 @@ instructions: "{installed_path}/instructions.md"
variables:
sprint_status_file: "{implementation_artifacts}/sprint-status.yaml"
tracking_system: "file-system"
# Bug tracking integration (optional)
bugs_yaml: "{planning_artifacts}/bugs.yaml"
bugs_md: "{planning_artifacts}/bugs.md"
# Smart input file references
input_file_patterns:

View File

@ -0,0 +1,124 @@
# Story Approved Workflow Instructions (DEV Agent)
<critical>The workflow execution engine is governed by: {project-root}/.bmad/core/tasks/workflow.xml</critical>
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
<critical>Communicate all responses in {communication_language}</critical>
<workflow>
<critical>This workflow is run by DEV agent AFTER user confirms a story is approved (Definition of Done is complete)</critical>
<critical>Workflow: Update story file status to Done</critical>
<step n="1" goal="Find reviewed story to mark done" tag="sprint-status">
<check if="{story_path} is provided">
<action>Use {story_path} directly</action>
<action>Read COMPLETE story file and parse sections</action>
<action>Extract story_key from filename or story metadata</action>
<action>Verify Status is "review" - if not, HALT with message: "Story status must be 'review' to mark as done"</action>
</check>
<check if="{story_path} is NOT provided">
<critical>MUST read COMPLETE sprint-status.yaml file from start to end to preserve order</critical>
<action>Load the FULL file: {output_folder}/sprint-status.yaml</action>
<action>Read ALL lines from beginning to end - do not skip any content</action>
<action>Parse the development_status section completely</action>
<action>Find FIRST story (reading in order from top to bottom) where:
- Key matches pattern: number-number-name (e.g., "1-2-user-auth")
- NOT an epic key (epic-X) or retrospective (epic-X-retrospective)
- Status value equals "review"
</action>
<check if="no story with status 'review' found">
<output>No stories with status "review" found
All stories are either still in development or already done.
**Next Steps:**
1. Run `dev-story` to implement stories
2. Run `code-review` if stories need review first
3. Check sprint-status.yaml for current story states
</output>
<action>HALT</action>
</check>
<action>Use the first reviewed story found</action>
<action>Find matching story file in {story_dir} using story_key pattern</action>
<action>Read the COMPLETE story file</action>
</check>
<action>Extract story_id and story_title from the story file</action>
<action>Find the "Status:" line (usually at the top)</action>
<action>Update story file: Change Status to "done"</action>
<action>Add completion notes to Dev Agent Record section:</action>
<action>Find "## Dev Agent Record" section and add:
```
### Completion Notes
**Completed:** {date}
**Definition of Done:** All acceptance criteria met, code reviewed, tests passing
```
</action>
<action>Save the story file</action>
</step>
<step n="2" goal="Update sprint status to done" tag="sprint-status">
<action>Load the FULL file: {output_folder}/sprint-status.yaml</action>
<action>Find development_status key matching {story_key}</action>
<action>Verify current status is "review" (expected previous state)</action>
<action>Update development_status[{story_key}] = "done"</action>
<action>Save file, preserving ALL comments and structure including STATUS DEFINITIONS</action>
<check if="story key not found in file">
<output>Story file updated, but could not update sprint-status: {story_key} not found
Story is marked Done in file, but sprint-status.yaml may be out of sync.
</output>
</check>
</step>
<step n="3" goal="Sync related bugs/features in bug tracking">
<critical>Invoke shared task to sync bugs.yaml and bugs.md for this completed story</critical>
<invoke-task path="{project-root}/.bmad/core/tasks/sync-bug-tracking.xml">
<param name="story_key">{story_key}</param>
<param name="story_id">{story_id}</param>
<param name="bugs_yaml">{bugs_yaml}</param>
<param name="bugs_md">{bugs_md}</param>
<param name="sprint_status">{sprint_status}</param>
<param name="date">{date}</param>
</invoke-task>
</step>
<step n="4" goal="Confirm completion to user">
<output>**Story Approved and Marked Done, {user_name}!**
Story file updated - Status: done
Sprint status updated: review → done
**Completed Story:**
- **ID:** {story_id}
- **Key:** {story_key}
- **Title:** {story_title}
- **Completed:** {date}
**Next Steps:**
1. Continue with next story in your backlog
- Run `create-story` for next backlog story
- Or run `dev-story` if ready stories exist
2. Check epic completion status
- Run `retrospective` workflow to check if epic is complete
- Epic retrospective will verify all stories are done
</output>
</step>
</workflow>

View File

@ -0,0 +1,27 @@
# Story Done Workflow (DEV Agent)
name: story-done
description: 'Marks a story as done (DoD complete), updates sprint-status → DONE, and syncs related bugs/features in bugs.yaml/bugs.md to [IMPLEMENTED] status.'
author: 'BMad'
# Critical variables from config
config_source: '{project-root}/.bmad/bmm/config.yaml'
output_folder: '{config_source}:output_folder'
user_name: '{config_source}:user_name'
communication_language: '{config_source}:communication_language'
date: system-generated
sprint_status: '{output_folder}/sprint-status.yaml'
# Workflow components
installed_path: '{project-root}/.bmad/bmm/workflows/4-implementation/story-done'
instructions: '{installed_path}/instructions.md'
# Variables and inputs
variables:
story_dir: '{config_source}:dev_ephemeral_location/stories' # Directory where stories are stored
bugs_yaml: '{output_folder}/bugs.yaml' # Bug/feature tracking structured data
bugs_md: '{output_folder}/bugs.md' # Bug/feature tracking human-readable log
# Output configuration - no output file, just status updates
default_output_file: ''
standalone: true

View File

@ -0,0 +1,542 @@
# In-App Bug Reporting - Reference Implementation
This document provides a reference implementation for adding **in-app bug reporting** to your project. The BMAD bug-tracking workflow works without this feature (using manual `bugs.md` input), but in-app reporting provides a better user experience.
## Overview
The in-app bug reporting feature allows users to submit bug reports directly from your application. Reports are stored in your database and then synced to `bugs.md` by the triage workflow.
```
User -> UI Modal -> API -> Database -> Triage Workflow -> bugs.md/bugs.yaml
```
## Components Required
| Component | Purpose | Stack-Specific |
|-----------|---------|----------------|
| Database table | Store pending bug reports | Yes |
| API: Create report | Accept user submissions | Yes |
| API: Get pending | Fetch unsynced reports | Yes |
| API: Mark synced | Update status after sync | Yes |
| UI Modal | Bug report form | Yes |
| Validation schemas | Input validation | Partially |
## 1. Database Schema
### Drizzle ORM (PostgreSQL)
```typescript
// src/lib/server/db/schema.ts
import { pgTable, uuid, text, timestamp, index } from 'drizzle-orm/pg-core';
export const bugReports = pgTable(
'bug_reports',
{
id: uuid('id').primaryKey().defaultRandom(),
organizationId: uuid('organization_id').notNull(), // For multi-tenant apps
reporterType: text('reporter_type').notNull(), // 'staff' | 'member' | 'user'
reporterId: uuid('reporter_id').notNull(),
title: text('title').notNull(),
description: text('description').notNull(),
userAgent: text('user_agent'),
pageUrl: text('page_url'),
platform: text('platform'), // 'Windows', 'macOS', 'iOS', etc.
browser: text('browser'), // 'Chrome', 'Safari', 'Firefox'
screenshotUrl: text('screenshot_url'), // Optional: cloud storage URL
status: text('status').notNull().default('new'), // 'new' | 'synced' | 'dismissed'
createdAt: timestamp('created_at', { withTimezone: true }).defaultNow().notNull(),
syncedAt: timestamp('synced_at', { withTimezone: true })
},
(table) => [
index('bug_reports_organization_id_idx').on(table.organizationId),
index('bug_reports_status_idx').on(table.status),
index('bug_reports_created_at_idx').on(table.createdAt)
]
);
export const BUG_REPORT_STATUS = {
NEW: 'new',
SYNCED: 'synced',
DISMISSED: 'dismissed'
} as const;
export const REPORTER_TYPE = {
STAFF: 'staff',
MEMBER: 'member',
USER: 'user'
} as const;
```
### Prisma Schema
```prisma
model BugReport {
id String @id @default(uuid())
organizationId String @map("organization_id")
reporterType String @map("reporter_type")
reporterId String @map("reporter_id")
title String
description String
userAgent String? @map("user_agent")
pageUrl String? @map("page_url")
platform String?
browser String?
screenshotUrl String? @map("screenshot_url")
status String @default("new")
createdAt DateTime @default(now()) @map("created_at")
syncedAt DateTime? @map("synced_at")
@@index([organizationId])
@@index([status])
@@index([createdAt])
@@map("bug_reports")
}
```
## 2. Validation Schemas
### Zod (TypeScript)
```typescript
// src/lib/schemas/bug-report.ts
import { z } from 'zod';
export const createBugReportSchema = z.object({
title: z
.string()
.trim()
.min(5, 'Title must be at least 5 characters')
.max(200, 'Title must be 200 characters or less'),
description: z
.string()
.trim()
.min(10, 'Description must be at least 10 characters')
.max(5000, 'Description must be 5000 characters or less'),
pageUrl: z.string().url().optional(),
userAgent: z.string().max(1000).optional(),
platform: z.string().max(50).optional(),
browser: z.string().max(50).optional()
});
export const markSyncedSchema = z.object({
ids: z.array(z.string().uuid()).min(1, 'At least one ID is required')
});
export const SCREENSHOT_CONFIG = {
maxSizeBytes: 5 * 1024 * 1024, // 5MB
maxSizeMB: 5,
allowedTypes: ['image/jpeg', 'image/png', 'image/webp'] as const
} as const;
```
## 3. API Endpoints
### POST /api/bug-reports - Create Report
```typescript
// SvelteKit: src/routes/api/bug-reports/+server.ts
import { json } from '@sveltejs/kit';
import type { RequestHandler } from './$types';
import { db } from '$lib/server/db';
import { bugReports } from '$lib/server/db/schema';
import { createBugReportSchema } from '$lib/schemas/bug-report';
export const POST: RequestHandler = async ({ request, locals }) => {
// Determine reporter from auth context
if (!locals.user) {
return json({ error: { code: 'UNAUTHORIZED' } }, { status: 401 });
}
const body = await request.json();
const result = createBugReportSchema.safeParse(body);
if (!result.success) {
return json({
error: { code: 'VALIDATION_ERROR', message: result.error.issues[0]?.message }
}, { status: 400 });
}
const { title, description, pageUrl, userAgent, platform, browser } = result.data;
const [newReport] = await db
.insert(bugReports)
.values({
organizationId: locals.user.organizationId,
reporterType: 'staff',
reporterId: locals.user.id,
title,
description,
pageUrl,
userAgent,
platform,
browser
})
.returning();
return json({
data: {
bugReport: {
id: newReport.id,
title: newReport.title,
createdAt: newReport.createdAt.toISOString()
}
}
}, { status: 201 });
};
```
### GET /api/bug-reports/pending - Fetch for Triage
```typescript
// SvelteKit: src/routes/api/bug-reports/pending/+server.ts
import { json } from '@sveltejs/kit';
import type { RequestHandler } from './$types';
import { db } from '$lib/server/db';
import { bugReports, BUG_REPORT_STATUS } from '$lib/server/db/schema';
import { eq } from 'drizzle-orm';
export const GET: RequestHandler = async () => {
const reports = await db
.select()
.from(bugReports)
.where(eq(bugReports.status, BUG_REPORT_STATUS.NEW))
.orderBy(bugReports.createdAt);
// Map to workflow-expected format
const formatted = reports.map((r) => ({
id: r.id,
title: r.title,
description: r.description,
reporterType: r.reporterType,
reporterName: 'Unknown', // Join with users table for real name
platform: r.platform,
browser: r.browser,
pageUrl: r.pageUrl,
screenshotUrl: r.screenshotUrl,
createdAt: r.createdAt.toISOString()
}));
return json({
data: {
reports: formatted,
count: formatted.length
}
});
};
```
### POST /api/bug-reports/mark-synced - Update After Sync
```typescript
// SvelteKit: src/routes/api/bug-reports/mark-synced/+server.ts
import { json } from '@sveltejs/kit';
import type { RequestHandler } from './$types';
import { db } from '$lib/server/db';
import { bugReports, BUG_REPORT_STATUS } from '$lib/server/db/schema';
import { inArray } from 'drizzle-orm';
import { markSyncedSchema } from '$lib/schemas/bug-report';
export const POST: RequestHandler = async ({ request }) => {
const body = await request.json();
const result = markSyncedSchema.safeParse(body);
if (!result.success) {
return json({
error: { code: 'VALIDATION_ERROR', message: result.error.issues[0]?.message }
}, { status: 400 });
}
const updated = await db
.update(bugReports)
.set({
status: BUG_REPORT_STATUS.SYNCED,
syncedAt: new Date()
})
.where(inArray(bugReports.id, result.data.ids))
.returning({ id: bugReports.id });
return json({
data: {
updatedCount: updated.length,
updatedIds: updated.map((r) => r.id)
}
});
};
```
## 4. UI Component
### Svelte 5 (with shadcn-svelte)
```svelte
<!-- src/lib/components/BugReportModal.svelte -->
<script lang="ts">
import * as Dialog from '$lib/components/ui/dialog';
import { Button } from '$lib/components/ui/button';
import { Input } from '$lib/components/ui/input';
import { Textarea } from '$lib/components/ui/textarea';
import { toast } from 'svelte-sonner';
import { Bug } from 'lucide-svelte';
import { browser } from '$app/environment';
interface Props {
open: boolean;
onClose: () => void;
}
let { open = $bindable(), onClose }: Props = $props();
let title = $state('');
let description = $state('');
let loading = $state(false);
// Auto-detect environment
let platform = $derived(browser ? detectPlatform() : '');
let browserName = $derived(browser ? detectBrowser() : '');
let currentUrl = $derived(browser ? window.location.href : '');
function detectPlatform(): string {
const ua = navigator.userAgent.toLowerCase();
if (ua.includes('iphone') || ua.includes('ipad')) return 'iOS';
if (ua.includes('android')) return 'Android';
if (ua.includes('mac')) return 'macOS';
if (ua.includes('win')) return 'Windows';
return 'Unknown';
}
function detectBrowser(): string {
const ua = navigator.userAgent;
if (ua.includes('Chrome') && !ua.includes('Edg')) return 'Chrome';
if (ua.includes('Safari') && !ua.includes('Chrome')) return 'Safari';
if (ua.includes('Firefox')) return 'Firefox';
if (ua.includes('Edg')) return 'Edge';
return 'Unknown';
}
async function handleSubmit() {
loading = true;
try {
const response = await fetch('/api/bug-reports', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
title,
description,
pageUrl: currentUrl,
userAgent: navigator.userAgent,
platform,
browser: browserName
})
});
if (!response.ok) {
const data = await response.json();
toast.error(data.error?.message || 'Failed to submit');
return;
}
toast.success('Bug report submitted');
onClose();
} finally {
loading = false;
}
}
</script>
<Dialog.Root bind:open onOpenChange={(o) => !o && onClose()}>
<Dialog.Content class="sm:max-w-[500px]">
<Dialog.Header>
<Dialog.Title class="flex items-center gap-2">
<Bug class="h-5 w-5" />
Report a Bug
</Dialog.Title>
</Dialog.Header>
<form onsubmit={(e) => { e.preventDefault(); handleSubmit(); }} class="space-y-4">
<div>
<Input bind:value={title} placeholder="Brief summary" maxlength={200} />
</div>
<div>
<Textarea bind:value={description} placeholder="What happened?" rows={4} />
</div>
<div class="rounded-md bg-muted p-3 text-sm text-muted-foreground">
{platform} / {browserName}
</div>
<Dialog.Footer>
<Button variant="outline" onclick={onClose} disabled={loading}>Cancel</Button>
<Button type="submit" disabled={loading}>Submit</Button>
</Dialog.Footer>
</form>
</Dialog.Content>
</Dialog.Root>
```
### React (with shadcn/ui)
```tsx
// src/components/BugReportModal.tsx
import { useState } from 'react';
import { Dialog, DialogContent, DialogHeader, DialogTitle, DialogFooter } from '@/components/ui/dialog';
import { Button } from '@/components/ui/button';
import { Input } from '@/components/ui/input';
import { Textarea } from '@/components/ui/textarea';
import { Bug } from 'lucide-react';
import { toast } from 'sonner';
interface Props {
open: boolean;
onClose: () => void;
}
export function BugReportModal({ open, onClose }: Props) {
const [title, setTitle] = useState('');
const [description, setDescription] = useState('');
const [loading, setLoading] = useState(false);
const detectPlatform = () => {
const ua = navigator.userAgent.toLowerCase();
if (ua.includes('iphone') || ua.includes('ipad')) return 'iOS';
if (ua.includes('android')) return 'Android';
if (ua.includes('mac')) return 'macOS';
if (ua.includes('win')) return 'Windows';
return 'Unknown';
};
const handleSubmit = async (e: React.FormEvent) => {
e.preventDefault();
setLoading(true);
try {
const response = await fetch('/api/bug-reports', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
title,
description,
pageUrl: window.location.href,
userAgent: navigator.userAgent,
platform: detectPlatform()
})
});
if (!response.ok) throw new Error('Failed to submit');
toast.success('Bug report submitted');
onClose();
} catch {
toast.error('Failed to submit bug report');
} finally {
setLoading(false);
}
};
return (
<Dialog open={open} onOpenChange={(o) => !o && onClose()}>
<DialogContent>
<DialogHeader>
<DialogTitle className="flex items-center gap-2">
<Bug className="h-5 w-5" />
Report a Bug
</DialogTitle>
</DialogHeader>
<form onSubmit={handleSubmit} className="space-y-4">
<Input value={title} onChange={(e) => setTitle(e.target.value)} placeholder="Brief summary" />
<Textarea value={description} onChange={(e) => setDescription(e.target.value)} placeholder="What happened?" />
<DialogFooter>
<Button variant="outline" onClick={onClose} disabled={loading}>Cancel</Button>
<Button type="submit" disabled={loading}>Submit</Button>
</DialogFooter>
</form>
</DialogContent>
</Dialog>
);
}
```
## 5. Workflow Configuration
Update your project's `.bmad/bmm/config.yaml` to set the `project_url`:
```yaml
# .bmad/bmm/config.yaml
project_url: "http://localhost:5173" # Dev
# project_url: "https://your-app.com" # Prod
```
The triage workflow will use this to call your API endpoints.
## 6. API Response Format
The workflow expects these response formats:
### GET /api/bug-reports/pending
```json
{
"data": {
"reports": [
{
"id": "uuid",
"title": "Bug title",
"description": "Bug description",
"reporterType": "staff",
"reporterName": "John Doe",
"platform": "macOS",
"browser": "Chrome",
"pageUrl": "https://...",
"screenshotUrl": "https://...",
"createdAt": "2025-01-01T00:00:00Z"
}
],
"count": 1
}
}
```
### POST /api/bug-reports/mark-synced
Request:
```json
{ "ids": ["uuid1", "uuid2"] }
```
Response:
```json
{
"data": {
"updatedCount": 2,
"updatedIds": ["uuid1", "uuid2"]
}
}
```
## 7. Optional: Screenshot Storage
For screenshot uploads, you'll need cloud storage (R2, S3, etc.):
1. Create an upload endpoint: `POST /api/bug-reports/[id]/upload-screenshot`
2. Upload to cloud storage
3. Update `screenshotUrl` on the bug report record
## 8. Security Considerations
- **Authentication**: Create endpoint should require auth
- **API Key**: Consider adding API key auth for pending/mark-synced endpoints in production
- **Rate Limiting**: Add rate limits to prevent spam
- **Input Sanitization**: Validate all user input (handled by Zod schemas)
## Without In-App Reporting
If you don't implement in-app reporting, the workflow still works:
1. Users manually add bugs to `docs/bugs.md` under `# manual input`
2. Run `/triage` to process them
3. Workflow skips Step 0 (API sync) when no API is available
The workflows are designed to be flexible and work with or without the in-app feature.

View File

@ -0,0 +1,167 @@
# Step 1: Bug Tracking Workflow Initialization
## MANDATORY EXECUTION RULES (READ FIRST):
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: ALWAYS read the complete step file before taking any action - partial understanding leads to incomplete triage
- 🔄 CRITICAL: When loading next step with 'C', ensure the entire file is read and understood before proceeding
- ✅ ALWAYS treat this as collaborative triage between peers
- 📋 YOU ARE A FACILITATOR, not an automatic processor
- 💬 FOCUS on initialization and setup only - don't look ahead to future steps
- 🚪 DETECT existing workflow state and handle continuation properly
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style, using the configured `{communication_language}`
## EXECUTION PROTOCOLS:
- 🎯 Show your analysis before taking any action
- 💾 Initialize bugs.yaml if needed
- 📖 Track workflow state for potential continuation
- 🚫 FORBIDDEN to load next step until setup is complete
## CONTEXT BOUNDARIES:
- Variables from workflow.md are available in memory
- bugs.yaml tracks all structured bug metadata
- bugs.md is the user-facing input file
- Don't assume knowledge from other steps
## YOUR TASK:
Initialize the Bug Tracking workflow by detecting existing state, discovering input files, and setting up for collaborative triage.
## INITIALIZATION SEQUENCE:
### 1. Check for Existing Session
First, check workflow state:
- Look for existing `{bugs_output}` (bugs.yaml)
- If exists, grep for bugs with `status: triaged` (pending implementation)
- Check `{bugs_input}` (bugs.md) for items in "# manual input" section
### 2. Handle Continuation (If Pending Work Exists)
If bugs.yaml exists with triaged bugs awaiting action:
- **STOP here** and load `./step-01b-continue.md` immediately
- Do not proceed with fresh initialization
- Let step-01b handle the continuation logic
### 3. Fresh Workflow Setup (If No Pending Work)
If no bugs.yaml exists OR no pending triaged bugs:
#### A. Input File Discovery
Discover and validate required files:
**Required Files:**
- `{bugs_input}` (bugs.md) - User-facing bug reports
- Must have "# manual input" section for new bugs
- May have "# Tracked Bugs" and "# Fixed Bugs" sections
**Optional Context Files:**
- `{sprint_status}` - Current sprint context (which stories are in progress)
- `{epics_file}` - For mapping bugs to related stories/epics
#### B. Initialize bugs.yaml (If Not Exists)
If bugs.yaml doesn't exist, create it with header structure:
```yaml
# Bug Tracking Database
# Generated by bug-tracking workflow
# Last updated: {date}
# Severity Definitions:
# - critical: Prevents core functionality, crashes, data loss
# - high: Blocks major features, significantly degrades UX
# - medium: Affects subset of users, minor impact with workaround
# - low: Cosmetic, edge case, or minor inconvenience
# Complexity Definitions:
# - trivial: One-line fix, obvious solution
# - small: Single file/component, solution clear
# - medium: Multiple files OR requires investigation
# - complex: Architectural change, affects many areas
# Workflow Routing Matrix:
# - critical + any → correct-course
# - high + trivial → direct-fix
# - high + small → tech-spec
# - high + medium/complex → correct-course
# - medium + trivial → direct-fix
# - medium + small → tech-spec
# - medium + medium/complex → correct-course
# - low + trivial → direct-fix
# - low + small/medium/complex → backlog
bugs: []
features: []
closed_bugs: []
statistics:
total_active: 0
by_severity:
critical: 0
high: 0
medium: 0
low: 0
last_updated: {date}
```
#### C. Scan for New Bugs
Read ONLY the "# manual input" section from bugs.md:
- Grep for "# manual input" to find starting line
- Grep for next section header to find ending line
- Read just that range (do NOT read entire file)
Count items found in manual input section.
#### D. Complete Initialization and Report
Report to user:
"Welcome {user_name}! I've initialized the Bug Tracking workspace for {project_name}.
**Files Status:**
- bugs.md: {found/created} - {count} item(s) in manual input section
- bugs.yaml: {found/created} - {active_count} active bugs tracked
**Context Files:**
- Sprint Status: {loaded/not found}
- Epics: {loaded/not found}
**Ready for Triage:**
{count} new item(s) found in manual input section.
[S] Sync bug reports from API first (if app integration configured)
[C] Continue to parse and triage bugs
[Q] Quit - no new bugs to triage"
## SUCCESS METRICS:
✅ Existing workflow detected and handed off to step-01b correctly
✅ Fresh workflow initialized with bugs.yaml structure
✅ Input files discovered and validated
✅ Manual input section scanned for new items
✅ User informed of status and can proceed
## FAILURE MODES:
❌ Proceeding with fresh initialization when pending work exists
❌ Not creating bugs.yaml with proper header/definitions
❌ Reading entire bugs.md instead of just manual input section
❌ Not reporting status to user before proceeding
**CRITICAL**: Reading only partial step file - leads to incomplete understanding
**CRITICAL**: Proceeding with 'C' without fully reading the next step file
## NEXT STEP:
- If user selects [S], load `./step-02-sync.md` to sync from API
- If user selects [C], load `./step-03-parse.md` to parse and identify new bugs
- If user selects [Q], end workflow gracefully
Remember: Do NOT proceed until user explicitly selects an option from the menu!

View File

@ -0,0 +1,110 @@
# Step 1b: Continue Existing Bug Tracking Session
## MANDATORY EXECUTION RULES (READ FIRST):
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: ALWAYS read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure the entire file is read and understood before proceeding
- ✅ ALWAYS treat this as collaborative triage between peers
- 📋 YOU ARE A FACILITATOR, not an automatic processor
- 🚪 This step handles CONTINUATION of existing work
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style, using the configured `{communication_language}`
## EXECUTION PROTOCOLS:
- 🎯 Summarize existing state before offering options
- 💾 Preserve all existing bugs.yaml data
- 📖 Help user understand where they left off
- 🚫 FORBIDDEN to lose or overwrite existing triage work
## CONTEXT BOUNDARIES:
- Variables from workflow.md are available in memory
- bugs.yaml contains existing structured data
- User may have triaged bugs awaiting implementation
- Don't re-triage already processed bugs
## YOUR TASK:
Welcome user back and summarize the current state of bug tracking, offering relevant continuation options.
## CONTINUATION SEQUENCE:
### 1. Load Current State
Read bugs.yaml and extract:
- Total active bugs count
- Bugs by status (triaged, implemented, verified)
- Bugs by severity breakdown
- Bugs by recommended workflow
### 2. Check for New Input
Scan "# manual input" section of bugs.md:
- Count items not yet in bugs.yaml
- These are new bugs needing triage
### 3. Present Continuation Summary
Report to user:
"Welcome back, {user_name}! Here's your Bug Tracking status for {project_name}.
**Current State:**
- Active Bugs: {total_active}
- Triaged (awaiting action): {triaged_count}
- Implemented (awaiting verification): {implemented_count}
- By Severity: Critical: {critical} | High: {high} | Medium: {medium} | Low: {low}
**Workflow Routing:**
- Direct Fix: {direct_fix_count} bug(s)
- Tech-Spec: {tech_spec_count} bug(s)
- Correct-Course: {correct_course_count} bug(s)
- Backlog: {backlog_count} bug(s)
**New Items:**
- {new_count} new item(s) found in manual input section
**Options:**
[T] Triage new bugs ({new_count} items)
[I] Implement a bug - `/implement bug-NNN`
[V] Verify implemented bugs - `/verify`
[L] List bugs by status/severity
[Q] Quit"
### 4. Handle User Selection
Based on user choice:
- **[T] Triage**: Load `./step-03-parse.md` to process new bugs
- **[I] Implement**: Guide user to run `/implement bug-NNN` skill
- **[V] Verify**: Guide user to run `/verify` skill
- **[L] List**: Show filtered bug list, then return to menu
- **[Q] Quit**: End workflow gracefully
## SUCCESS METRICS:
✅ Existing state accurately summarized
✅ New items detected and counted
✅ User given clear options based on current state
✅ Appropriate next step loaded based on selection
## FAILURE MODES:
❌ Losing track of existing triaged bugs
❌ Re-triaging already processed bugs
❌ Not detecting new items in manual input
❌ Proceeding without user selection
**CRITICAL**: Reading only partial step file
**CRITICAL**: Proceeding without explicit user menu selection
## NEXT STEP:
Load appropriate step based on user selection:
- [T] → `./step-03-parse.md`
- [I], [V] → Guide to relevant skill, then return here
- [L] → Display list, return to this menu
- [Q] → End workflow
Remember: Do NOT proceed until user explicitly selects an option!

View File

@ -0,0 +1,145 @@
# Step 2: Sync Bug Reports from API
## MANDATORY EXECUTION RULES (READ FIRST):
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: ALWAYS read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure the entire file is read and understood before proceeding
- ✅ ALWAYS treat this as collaborative triage between peers
- 📋 YOU ARE A FACILITATOR, not an automatic processor
- 🌐 This step handles OPTIONAL API integration for in-app bug reporting
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style, using the configured `{communication_language}`
## EXECUTION PROTOCOLS:
- 🎯 Attempt API sync only if configured
- 💾 Preserve existing manual input entries
- 📖 Format synced reports as markdown entries
- 🚫 FORBIDDEN to lose manually entered bugs
## CONTEXT BOUNDARIES:
- Variables from workflow.md are available in memory
- `project_url` may or may not be configured
- API endpoints are optional - gracefully handle if unavailable
- This step can be skipped if no API integration
## YOUR TASK:
Sync pending bug reports from the application's PostgreSQL database via API, formatting them as markdown entries in bugs.md.
## SYNC SEQUENCE:
### 1. Check API Configuration
Verify `{project_url}` is configured:
- If not configured or user skipped this step, proceed to step-03
- If configured, attempt API connection
### 2. Fetch Pending Reports
**API Call:**
```
GET {project_url}/api/bug-reports/pending
```
**Expected Response:**
```json
{
"data": {
"reports": [...],
"count": number
}
}
```
**Report Fields:**
- `id` - Database ID
- `title` - Bug title
- `description` - Bug description
- `reporterType` - Type of reporter (user, staff, admin)
- `reporterName` - Name of reporter
- `platform` - Platform (iOS, Android, web)
- `browser` - Browser if web
- `pageUrl` - URL where bug occurred
- `screenshotUrl` - Optional screenshot
- `createdAt` - Timestamp
### 3. Handle No Reports
If count == 0:
"No new bug reports from the application API.
[C] Continue to triage existing manual input
[Q] Quit - nothing to process"
### 4. Format Reports as Markdown
For each report, create markdown entry:
```markdown
## Bug: {title}
{description}
Reported by: {reporterName} ({reporterType})
Date: {createdAt formatted as YYYY-MM-DD}
Platform: {platform} / {browser}
Page: {pageUrl}
{if screenshotUrl: Screenshot: {screenshotUrl}}
```
### 5. Insert into bugs.md
- Read the "# manual input" section location from bugs.md
- Insert new markdown entries after the "# manual input" header
- Preserve any existing manual input entries
- Write updated bugs.md
### 6. Mark Reports as Synced
**API Call:**
```
POST {project_url}/api/bug-reports/mark-synced
Body: { "ids": [array of synced report IDs] }
```
This updates status to 'synced' so reports won't be fetched again.
### 7. Report Sync Results
"**Synced {count} bug report(s) from application:**
{for each report:}
- {title} (from {reporterName})
{end for}
These have been added to the manual input section of bugs.md.
[C] Continue to parse and triage all bugs
[Q] Quit"
## SUCCESS METRICS:
✅ API availability checked gracefully
✅ Pending reports fetched and formatted
✅ Existing manual entries preserved
✅ Reports marked as synced in database
✅ User informed of sync results
## FAILURE MODES:
❌ Crashing if API unavailable (should gracefully skip)
❌ Overwriting existing manual input entries
❌ Not marking reports as synced (causes duplicates)
❌ Proceeding without user confirmation
**CRITICAL**: Reading only partial step file
**CRITICAL**: Proceeding without explicit user selection
## NEXT STEP:
After user selects [C], load `./step-03-parse.md` to parse and identify all bugs needing triage.
Remember: Do NOT proceed until user explicitly selects [C] from the menu!

View File

@ -0,0 +1,164 @@
# Step 3: Parse and Identify New Bugs
## MANDATORY EXECUTION RULES (READ FIRST):
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: ALWAYS read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure the entire file is read and understood before proceeding
- ✅ ALWAYS treat this as collaborative triage between peers
- 📋 YOU ARE A FACILITATOR, not an automatic processor
- 🔍 This step PARSES input only - triage happens in next step
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your agent communication style, using the configured `{communication_language}`
## EXECUTION PROTOCOLS:
- 🎯 Parse manual input section thoroughly
- 💾 Compare against existing bugs.yaml entries
- 📖 Extract all available information from informal reports
- 🚫 FORBIDDEN to start triage in this step - parsing only
## CONTEXT BOUNDARIES:
- Variables from workflow.md are available in memory
- bugs.yaml contains existing triaged bugs
- Only parse "# manual input" section of bugs.md
- Do NOT read entire bugs.md file
## YOUR TASK:
Parse the "# manual input" section of bugs.md, extract bug information, and identify which items need triage.
## PARSE SEQUENCE:
### 1. Read Manual Input Section
Section-based reading of bugs.md:
- Grep for "# manual input" to find starting line number
- Grep for next section header ("# Tracked Bugs", "# Tracked Feature Requests", "# Fixed Bugs") to find ending line
- Read just that range using offset/limit (do NOT read entire file)
- If no closing section found within initial window, expand read range and retry
### 2. Search Existing IDs in bugs.yaml
Do NOT read entire bugs.yaml file:
- Grep for `id: bug-[0-9]+` pattern to find all existing bug IDs
- Grep for `id: feature-[0-9]+` pattern to find all existing feature IDs
- This enables duplicate detection and next-ID generation
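As a sketch of these lookups (paths resolve from `{output_folder}`; exact flags may vary by environment):
```
grep -n "# manual input" {output_folder}/bugs.md        # start line of the section
grep -nE "^# (Tracked|Fixed)" {output_folder}/bugs.md   # candidate end-of-section headers
grep -oE "id: bug-[0-9]+" {output_folder}/bugs.yaml     # existing bug IDs
grep -oE "id: feature-[0-9]+" {output_folder}/bugs.yaml # existing feature IDs
```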
### 3. Parse Bug Reports
Expected formats in manual input (informal, user-written):
**Format A: Markdown Headers**
```markdown
## Bug: Title Here
Description text, possibly multi-paragraph.
Reported by: Name
Date: YYYY-MM-DD
Related: Story 2.7
Platform: iOS
```
**Format B: Bullet Lists**
```markdown
- **Title (Platform)**: Description text. CRITICAL if urgent.
```
**Format C: Numbered Lists**
```markdown
1. Title - Description text
2. Another bug - More description
```
### 4. Extract Information
For each bug report, extract:
| Field | Required | Notes |
|-------|----------|-------|
| Title | Yes | First line or header |
| Description | Yes | May be multi-paragraph |
| Reported by | No | Extract if mentioned |
| Date | No | Extract if mentioned |
| Related story | No | e.g., "2-7", "Story 2.7" |
| Platform | No | iOS, Android, web, all |
| Reproduction steps | No | If provided |
| Severity hints | No | "CRITICAL", "urgent", etc. |
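As a hypothetical illustration, a Format B bullet such as `- **Login button unresponsive (iOS)**: Tapping does nothing after entering credentials. CRITICAL.` would be extracted as:
```
Title: Login button unresponsive
Description: Tapping does nothing after entering credentials.
Platform: iOS
Severity hint: CRITICAL
```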
### 5. Categorize Items
Compare extracted bugs with existing bugs.yaml:
- **New bugs**: Not in bugs.yaml yet (need full triage)
- **Updated bugs**: In bugs.yaml but description changed (need re-triage)
- **Feature requests**: Items that are enhancements, not bugs
- **Unchanged**: Already triaged, skip
### 6. Handle No New Bugs
If NO new bugs found:
"No new bugs found in the manual input section.
All items have already been triaged and are tracked in bugs.yaml.
**Options:**
1. Add new bugs to docs/bugs.md (informal format)
2. View bugs.yaml to see structured bug tracking
3. Route existing triaged bugs to workflows
[Q] Quit - nothing to triage"
**HALT** - Do not proceed.
### 7. Present Parsed Items
"**Parsed {total_count} item(s) from manual input:**
**New Bugs ({new_count}):**
{for each new bug:}
- {extracted_title}
- Description: {first 100 chars}...
- Platform: {platform or "not specified"}
- Related: {story or "not specified"}
{end for}
**Feature Requests ({feature_count}):**
{for each feature:}
- {title}
{end for}
**Already Triaged ({unchanged_count}):**
{list titles of skipped items}
Ready to triage {new_count} new bug(s) and {feature_count} feature request(s).
[C] Continue to triage
[E] Edit - re-parse with corrections
[Q] Quit"
## SUCCESS METRICS:
✅ Manual input section read efficiently (not entire file)
✅ All formats parsed correctly (headers, bullets, numbered)
✅ Existing bugs detected to prevent duplicates
✅ New vs updated vs unchanged correctly categorized
✅ User shown summary and can proceed
## FAILURE MODES:
❌ Reading entire bugs.md instead of section
❌ Missing bugs due to format not recognized
❌ Not detecting duplicates against bugs.yaml
❌ Starting triage in this step (should only parse)
**CRITICAL**: Reading only partial step file
**CRITICAL**: Proceeding without user selection
## NEXT STEP:
After user selects [C], load `./step-04-triage.md` to perform triage analysis on each new bug.
Remember: Do NOT proceed until user explicitly selects [C] from the menu!

View File

@ -0,0 +1,212 @@
# Step 4: Triage Each Bug
## MANDATORY EXECUTION RULES (READ FIRST):
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: ALWAYS read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure the entire file is read and understood before proceeding
- ✅ ALWAYS treat this as collaborative triage between peers
- 📋 YOU ARE A FACILITATOR - ask clarifying questions when needed
- 🎯 This step performs the CORE TRIAGE analysis
- ⚠️ ABSOLUTELY NO TIME ESTIMATES - AI development speed varies widely
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your agent communication style, using the configured `{communication_language}`
## EXECUTION PROTOCOLS:
- 🎯 Triage ONE bug at a time with user confirmation
- 💾 Track triage decisions for bugs.yaml update
- 📖 Ask clarifying questions when severity/complexity unclear
- 🚫 FORBIDDEN to auto-triage without user review
## CONTEXT BOUNDARIES:
- Parsed bugs from step-03 are in memory
- Reference bugs.yaml header for severity/complexity definitions
- Reference epics.md for story mapping
- Each bug gets full triage analysis
## YOUR TASK:
Perform collaborative triage analysis on each parsed bug, assessing severity, complexity, effort, workflow routing, and documentation impact.
## TRIAGE SEQUENCE (FOR EACH BUG):
### 1. Generate Bug ID
- Find highest existing bug-NNN from step-03 grep results
- Assign next sequential ID (e.g., bug-006)
- Format: `bug-` + zero-padded 3-digit number
- For features: `feature-` + zero-padded 3-digit number
### 2. Assess Severity
**Severity Levels:**
| Level | Criteria |
|-------|----------|
| critical | Prevents core functionality, crashes, data loss |
| high | Blocks major features, significantly degrades UX but has workaround |
| medium | Affects subset of users, minor impact |
| low | Cosmetic, edge case, minor inconvenience |
**Analysis Questions:**
- Does it prevent core functionality? → critical
- Does it cause crashes or data loss? → critical
- Does it block major features? → high
- Does it significantly degrade UX but have workaround? → high
- Does it affect subset of users with minor impact? → medium
- Is it cosmetic or edge case? → low
**If Unclear - ASK:**
"**Clarification needed for: {bug_title}**
I need more information to assess severity:
1. Does this bug prevent users from completing core flows?
2. Does the bug cause crashes or data loss?
3. How many users are affected? (all users, specific platform, edge case)
4. Is there a workaround available?
Please provide additional context."
### 3. Assess Complexity
**Complexity Levels:**
| Level | Criteria |
|-------|----------|
| trivial | One-line fix, obvious solution |
| small | Single file/component, solution clear |
| medium | Multiple files OR requires investigation |
| complex | Architectural change, affects many areas |
**If Unclear - ASK:**
"**Clarification needed for: {bug_title}**
To estimate complexity, I need:
1. Have you identified the root cause, or does it need investigation?
2. Which file(s) or component(s) are affected?
3. Is this isolated or does it affect multiple parts of the app?
Please provide technical details if available."
### 4. Determine Workflow Routing
**Routing Matrix:**
| Severity | Complexity | Workflow |
|----------|------------|----------|
| critical | any | correct-course |
| high | trivial | direct-fix |
| high | small | tech-spec |
| high | medium/complex | correct-course |
| medium | trivial | direct-fix |
| medium | small | tech-spec |
| medium | medium/complex | correct-course |
| low | trivial | direct-fix |
| low | small+ | backlog |
### 5. Map to Related Story/Epic
- If bug mentions story ID (e.g., "2-7"), use that
- Otherwise, infer from description using epic keywords
- Reference epics.md for story matching
- Format: `{epic_number}-{story_number}` or null
### 6. Determine Affected Platform
Extract from description:
- `all` - Default if not specified
- `ios` - iOS only
- `android` - Android only
- `web` - Web only
### 7. Assess Documentation Impact
**PRD Impact** (`doc_impact.prd: true/false`)
Set TRUE if issue:
- Conflicts with stated product goals
- Requires changing MVP scope
- Adds/removes/modifies core functionality
- Changes success metrics
- Affects multiple epics
**Architecture Impact** (`doc_impact.architecture: true/false`)
Set TRUE if issue:
- Requires new system components
- Changes data model (new tables, schema)
- Affects API contracts
- Introduces new dependencies
- Changes auth/security model
**UX Impact** (`doc_impact.ux: true/false`)
Set TRUE if issue:
- Adds new screens or navigation
- Changes existing user flows
- Requires new UI components
- Affects accessibility
**If any doc_impact is TRUE AND workflow != correct-course:**
- Override workflow to `correct-course`
- Add note: "Workflow elevated due to documentation impact"
### 8. Add Triage Notes
Document reasoning:
- Why this severity? (business impact, user impact)
- Why this complexity? (investigation needed, files affected)
- Why this workflow? (routing logic applied)
- Suggested next steps or investigation areas
### 9. Present Triage for Confirmation
"**Triage: {bug_id} - {bug_title}**
| Field | Value |
|-------|-------|
| Severity | {severity} |
| Complexity | {complexity} |
| Platform | {platform} |
| Workflow | {recommended_workflow} |
| Related | {related_story or 'None'} |
**Documentation Impact:**
- PRD: {yes/no}
- Architecture: {yes/no}
- UX: {yes/no}
**Triage Notes:**
{triage_notes}
[A] Accept triage
[M] Modify - adjust severity/complexity/workflow
[S] Skip - don't triage this item now
[N] Next bug (after accepting)"
### 10. Handle Modifications
If user selects [M]:
- Ask which field to modify
- Accept new value
- Re-present triage for confirmation
## SUCCESS METRICS:
✅ Each bug triaged with user confirmation
✅ Unclear items prompted for clarification
✅ Routing matrix applied correctly
✅ Documentation impact assessed
✅ Triage notes document reasoning
## FAILURE MODES:
❌ Auto-triaging without user review
❌ Not asking clarifying questions when needed
❌ Incorrect routing matrix application
❌ Missing documentation impact assessment
❌ Not documenting triage reasoning
**CRITICAL**: Reading only partial step file
**CRITICAL**: Proceeding without user confirmation per bug
## NEXT STEP:
After ALL bugs triaged (user selected [A] or [N] for each), load `./step-05-update.md` to update bugs.yaml and bugs.md.
Remember: Triage each bug individually with user confirmation!

View File

@ -0,0 +1,200 @@
# Step 5: Update Files with Triaged Metadata
## MANDATORY EXECUTION RULES (READ FIRST):
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: ALWAYS read the complete step file before taking any action
- 🔄 CRITICAL: When loading next step with 'C', ensure the entire file is read and understood before proceeding
- ✅ ALWAYS treat this as collaborative triage between peers
- 📋 YOU ARE A FACILITATOR, not an automatic processor
- 💾 This step WRITES the triage results to files
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your agent communication style, using the configured `{communication_language}`
## EXECUTION PROTOCOLS:
- 🎯 Update both bugs.yaml and bugs.md atomically
- 💾 Preserve ALL existing data - append only
- 📖 Move items from manual input to tracked sections
- 🚫 FORBIDDEN to lose or corrupt existing data
## CONTEXT BOUNDARIES:
- Triage decisions from step-04 are in memory
- bugs.yaml structure defined in step-01
- bugs.md sections: manual input, Tracked Bugs, Tracked Feature Requests, Fixed Bugs
- Preserve header comments and definitions
## YOUR TASK:
Write all triaged metadata to bugs.yaml and move triaged items from "# manual input" to appropriate tracked sections in bugs.md.
## UPDATE SEQUENCE:
### 1. Update bugs.yaml
#### A. Load Existing Structure
Read current bugs.yaml (if exists):
- Preserve ALL header comments and definitions
- Preserve existing `bugs:` array entries
- Preserve existing `features:` array entries
- Preserve existing `closed_bugs:` array
#### B. Add New Bug Entries
For each triaged bug, add to `bugs:` array:
```yaml
- id: bug-NNN
title: "Bug title"
description: |
Full description text
Can be multi-line
severity: critical|high|medium|low
complexity: trivial|small|medium|complex
affected_platform: all|ios|android|web
recommended_workflow: direct-fix|tech-spec|correct-course|backlog
related_story: "X-Y" or null
status: triaged
reported_by: "Name" or null
reported_date: "YYYY-MM-DD" or null
triaged_date: "{date}"
doc_impact:
prd: true|false
architecture: true|false
ux: true|false
notes: "Impact description" or null
triage_notes: |
Reasoning for severity, complexity, workflow decisions
implemented_by: null
implemented_date: null
verified_by: null
verified_date: null
```
#### C. Add Feature Request Entries
For features, add to `features:` array with similar structure.
#### D. Update Statistics
Recalculate statistics section:
```yaml
statistics:
total_active: {count of non-closed bugs}
by_severity:
critical: {count}
high: {count}
medium: {count}
low: {count}
by_status:
triaged: {count}
implemented: {count}
verified: {count}
by_workflow:
direct-fix: {count}
tech-spec: {count}
correct-course: {count}
backlog: {count}
last_updated: "{date}"
```
#### E. Write bugs.yaml
Write complete bugs.yaml file preserving all content.
### 2. Update bugs.md
#### A. Section-Based Reading
Use grep to locate section line numbers:
- "# manual input"
- "# Tracked Bugs"
- "# Tracked Feature Requests"
- "# Fixed Bugs"
Read only relevant sections with offset/limit.
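A minimal sketch of this pattern (line numbers hypothetical):
```
grep -nE "^# " {output_folder}/bugs.md   # e.g. 1:# manual input, 12:# Tracked Bugs, 40:# Fixed Bugs
sed -n '1,11p' {output_folder}/bugs.md   # read only the manual input range
```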
#### B. Remove from Manual Input
For each triaged item:
- Remove the original entry from "# manual input" section
- Handle both header format and bullet format
#### C. Add to Tracked Bugs
For each triaged bug, add to "# Tracked Bugs" section:
```markdown
### {bug_id}: {title}
{brief_description}
- **Severity:** {severity}
- **Complexity:** {complexity}
- **Platform:** {platform}
- **Workflow:** {workflow}
- **Related:** {story or "None"}
{if doc_impact flagged:}
- **Doc Impact:** {PRD|Architecture|UX as applicable}
{end if}
**Notes:** {triage_notes_summary}
---
```
Create "# Tracked Bugs" section if it doesn't exist.
#### D. Add to Tracked Feature Requests
For features, add to "# Tracked Feature Requests" section with similar format.
#### E. Write bugs.md
Write updated bugs.md preserving all sections.
### 3. Confirm Updates
"**Files Updated:**
**bugs.yaml:**
- Added {bug_count} new bug(s)
- Added {feature_count} new feature request(s)
- Total active bugs: {total_active}
- Statistics recalculated
**bugs.md:**
- Removed {count} item(s) from manual input
- Added {bug_count} bug(s) to Tracked Bugs section
- Added {feature_count} feature(s) to Tracked Feature Requests section
[C] Continue to summary
[R] Review changes - show diff
[U] Undo - restore previous state"
## SUCCESS METRICS:
✅ bugs.yaml updated with all triaged metadata
✅ bugs.md items moved from manual input to tracked sections
✅ Statistics accurately recalculated
✅ All existing data preserved
✅ User confirmed updates
## FAILURE MODES:
❌ Losing existing bugs.yaml entries
❌ Corrupting bugs.md structure
❌ Items remaining in manual input after triage
❌ Statistics not matching actual data
❌ Not preserving header comments/definitions
**CRITICAL**: Reading only partial step file
**CRITICAL**: Proceeding without user confirmation
## NEXT STEP:
After user selects [C], load `./step-06-complete.md` to present final triage summary.
Remember: Do NOT proceed until user explicitly selects [C] from the menu!

View File

@ -0,0 +1,180 @@
# Step 6: Triage Complete - Summary and Next Steps
## MANDATORY EXECUTION RULES (READ FIRST):
- 🛑 NEVER generate content without user input
- 📖 CRITICAL: ALWAYS read the complete step file before taking any action
- ✅ ALWAYS treat this as collaborative triage between peers
- 📋 YOU ARE A FACILITATOR, not an automatic processor
- 🎉 This is the FINAL step - present comprehensive summary
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your agent communication style, using the configured `{communication_language}`
## EXECUTION PROTOCOLS:
- 🎯 Present comprehensive triage summary
- 💾 All data already written in step-05
- 📖 Guide user to next actions
- 🚫 FORBIDDEN to modify files in this step
## CONTEXT BOUNDARIES:
- All triage decisions finalized in previous steps
- bugs.yaml and bugs.md already updated
- This step is READ-ONLY presentation
- Focus on actionable next steps
## YOUR TASK:
Present a comprehensive summary of the triage session and guide the user to appropriate next actions based on workflow recommendations.
## COMPLETION SEQUENCE:
### 1. Present Triage Summary
"**Bug Triage Complete, {user_name}!**
---
## Triaged Items
{for each triaged bug:}
### {bug_id}: {bug_title}
| Field | Value |
|-------|-------|
| Severity | {severity} |
| Complexity | {complexity} |
| Platform | {platform} |
| Workflow | {recommended_workflow} |
| Related | {related_story or 'None'} |
{if doc_impact flagged:}
**Documentation Impact:**
- PRD: {yes/no}
- Architecture: {yes/no}
- UX: {yes/no}
- Notes: {doc_impact_notes}
{end if}
**Triage Reasoning:**
{triage_notes}
---
{end for}
## Updated Files
- **bugs.yaml** - Structured metadata for all triaged items
- **bugs.md** - Moved triaged items to Tracked sections
---
## Statistics Summary
| Metric | Count |
|--------|-------|
| Total Active Bugs | {total_active} |
| Critical | {critical_count} |
| High | {high_count} |
| Medium | {medium_count} |
| Low | {low_count} |
{if any doc_impact flagged:}
## Documentation Updates Required
Items with documentation impact have been routed to `correct-course` workflow:
- PRD Impact: {prd_impact_count} item(s)
- Architecture Impact: {arch_impact_count} item(s)
- UX Impact: {ux_impact_count} item(s)
{end if}
---
## Workflow Recommendations
### Direct Fix ({direct_fix_count} items)
Quick fixes with obvious solutions. No spec needed.
**Command:** `/implement bug-NNN`
{list bug IDs for direct-fix}
### Tech-Spec ({tech_spec_count} items)
Require technical specification before implementation.
**Process:** Create tech-spec first, then `/implement`
{list bug IDs for tech-spec}
### Correct-Course ({correct_course_count} items)
Need impact analysis before proceeding.
**Process:** Run correct-course workflow for impact analysis
{list bug IDs for correct-course}
### Backlog ({backlog_count} items)
Deferred - low priority items for future consideration.
{list bug IDs for backlog}
---
## Next Steps
**To implement a bug fix:**
```
/implement bug-NNN
```
**To verify after testing:**
```
/verify bug-NNN
```
**To verify all implemented bugs:**
```
/verify
```
**To list bugs by platform:**
```
/list-bugs android
/list-bugs ios
```
---
Thank you for completing the triage session!"
### 2. End Workflow
The workflow is complete. No further steps.
## SUCCESS METRICS:
✅ Comprehensive summary presented
✅ All triaged items listed with metadata
✅ Statistics accurately displayed
✅ Workflow recommendations clear
✅ Next step commands provided
✅ User knows how to proceed
## FAILURE MODES:
❌ Incomplete summary missing items
❌ Statistics not matching bugs.yaml
❌ Unclear next step guidance
❌ Modifying files in this step (should be read-only)
## WORKFLOW COMPLETE
This is the final step. The bug tracking triage workflow is complete.
User can now:
- Run `/implement bug-NNN` to fix bugs
- Run `/verify` to verify implemented bugs
- Add new bugs to bugs.md and run triage again

View File

@ -0,0 +1,58 @@
---
name: bug-tracking
description: Triage user-reported bugs from bugs.md, generate structured metadata in bugs.yaml, and route to appropriate workflow
main_config: '{project-root}/_bmad/bmm/config.yaml'
web_bundle: true
---
# Bug Tracking Workflow
**Goal:** Transform informal bug reports into structured, actionable metadata with severity assessment, complexity estimation, and workflow routing recommendations.
**Your Role:** You are a triage facilitator collaborating with a peer. This is a partnership, not a client-vendor relationship. You bring structured analysis and triage methodology, while the user brings domain expertise and context about their product. Work together to efficiently categorize and route bugs for resolution.
---
## WORKFLOW ARCHITECTURE
This uses **micro-file architecture** for disciplined execution:
- Each step is a self-contained file with embedded rules
- Sequential progression with user control at each step
- State tracked via bugs.yaml metadata
- Append-only updates to bugs.md (move triaged items, never delete)
- NEVER proceed to the next step file while the current step file indicates the user must approve and explicitly choose to continue
---
## INITIALIZATION
### Configuration Loading
Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve:
- `project_name`, `output_folder`, `user_name`
- `communication_language`, `date` as system-generated current datetime
- `dev_ephemeral_location` for sprint-status.yaml location
- ✅ YOU MUST ALWAYS SPEAK OUTPUT in your agent communication style, using the configured `{communication_language}`
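For orientation, the resolved values might look like the following (illustrative values only; the real keys live in your config.yaml):
```yaml
project_name: my-app
output_folder: docs
user_name: Jane
communication_language: English
dev_ephemeral_location: _bmad/ephemeral
```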
### Paths
- `installed_path` = `{project-root}/_bmad/bmm/workflows/bug-tracking`
- `bugs_input` = `{output_folder}/bugs.md` (user-facing bug reports)
- `bugs_output` = `{output_folder}/bugs.yaml` (agent-facing structured metadata)
- `sprint_status` = `{dev_ephemeral_location}/sprint-status.yaml`
- `epics_file` = `{output_folder}/epics.md`
### Optional API Integration
- `project_url` = configurable base URL for in-app bug report sync (default: `http://localhost:5173`)
- See `reference-implementation.md` for in-app bug reporting setup
---
## EXECUTION
Load and execute `steps/step-01-init.md` to begin the workflow.
**Note:** Input file discovery and initialization protocols are handled in step-01-init.md.

View File

@ -0,0 +1,523 @@
# Implement Workflow (Bug Fix or Feature)
```xml
<critical>This workflow loads bug/feature context, implements the code, and updates tracking in both bugs.yaml and bugs.md</critical>
<critical>Communicate in {communication_language} with {user_name}</critical>
<critical>Auto-detects type from ID format: bug-NNN = bug fix, feature-NNN = feature implementation</critical>
<workflow>
<step n="1" goal="Get item ID from user">
<check if="item_id not provided in user input">
<ask>Which bug or feature should I implement? (e.g., bug-026 or feature-021)</ask>
</check>
<action>Extract item ID from user input</action>
<action>Detect type from ID format:</action>
<action>- "bug-NNN" -> type = "bug", action_verb = "fix", past_verb = "Fixed"</action>
<action>- "feature-NNN" -> type = "feature", action_verb = "implement", past_verb = "Implemented"</action>
<check if="ID doesn't match either format">
<output>Invalid ID format. Use bug-NNN (e.g., bug-026) or feature-NNN (e.g., feature-021)</output>
<action>HALT</action>
</check>
</step>
<step n="2" goal="Load context from bugs.yaml">
<action>Search for {item_id} in {bugs_yaml} using grep with 50+ lines of context after the match (do NOT read entire file - it exceeds token limits)</action>
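<!-- Illustrative only: a call such as `grep -n -A 60 "id: {item_id}" {bugs_yaml}` returns the matching entry plus ~60 following lines, enough to capture all fields without loading the whole file -->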
<check if="type == bug">
<action>Entry will be in bugs section, grep will capture all fields</action>
</check>
<check if="type == feature">
<action>Entry will be in feature_requests section, grep will capture all fields</action>
</check>
<check if="item not found in bugs.yaml">
<output>{item_id} not found in bugs.yaml. Please verify the ID or run bug-tracking workflow first.</output>
<action>HALT</action>
</check>
<action>Extract and store metadata:</action>
<action>- title: Title/summary</action>
<action>- description: Full description</action>
<action>- severity/priority: Importance level</action>
<action>- complexity: trivial | small | medium | complex</action>
<action>- effort_estimate: Estimated hours</action>
<action>- affected_platform: all | ios | android | web (bugs only)</action>
<action>- related_story/related_epic: Related items if applicable</action>
<action>- doc_impact: Documentation impact flags (prd, architecture, ux) and notes</action>
<action>- notes: Triage notes including planned approach, files to check, implementation strategy</action>
<check if="recommended_workflow == 'backlog'">
<output>**BACKLOG ITEM - NOT READY FOR IMPLEMENTATION**
**{item_id}: {title}**
This item has `recommended_workflow: backlog` which means it's deferred and not scheduled for implementation.
**To implement this item, first promote it to the sprint:**
1. Run `*sprint-planning` and select this item for promotion
2. Or manually update bugs.yaml: change `recommended_workflow` to `direct-fix`, `tech-spec`, or `correct-course`
**Current Status:** {status}
**Priority:** {priority}
**Complexity:** {complexity}
</output>
<action>HALT</action>
</check>
<check if="status == 'deferred'">
<output>**DEFERRED ITEM - NOT READY FOR IMPLEMENTATION**
**{item_id}: {title}**
This item is deferred (marked for future release, not current MVP).
**To implement this item:**
1. Update bugs.yaml: change `status` from `deferred` to `backlog`
2. Run `*sprint-planning` to promote to current sprint
**Notes:** {notes}
</output>
<action>HALT</action>
</check>
<check if="status == 'blocked'">
<output>**BLOCKED ITEM - CANNOT IMPLEMENT**
**{item_id}: {title}**
This item is blocked and requires clarification before implementation.
**Blocking reason:** {notes}
**To unblock:**
1. Resolve the blocking issue
2. Update bugs.yaml: change `status` from `blocked` to `backlog`
3. Run `/triage {item_id}` to re-evaluate
</output>
<action>HALT</action>
</check>
</step>
<step n="2.5" goal="Check for documentation impact and route to appropriate agents">
<action>Check doc_impact fields from bugs.yaml entry</action>
<check if="doc_impact.prd OR doc_impact.architecture OR doc_impact.ux is TRUE">
<output>**DOCUMENTATION IMPACT DETECTED**
**{item_id}: {title}**
This {type} requires documentation updates BEFORE implementation:
{if doc_impact.prd:}
- **PRD Impact:** Updates needed to product requirements
-> Route to PM Agent for PRD updates
{end if}
{if doc_impact.architecture:}
- **Architecture Impact:** Updates needed to architecture docs
-> Route to Architect Agent for architecture updates
{end if}
{if doc_impact.ux:}
- **UX Impact:** Updates needed to UX specifications
-> Route to UX Designer Agent for UX spec updates
{end if}
**Details:** {doc_impact.notes}
**Options:**
1. **update-docs-first** - Route to agents for documentation updates before implementation (recommended)
2. **proceed-anyway** - Skip documentation updates and implement directly (not recommended)
3. **cancel** - Return to review</output>
<ask>How should we proceed?</ask>
<check if="user chooses update-docs-first">
<output>Routing to documentation update workflow...
**Documentation Update Sequence:**</output>
<check if="doc_impact.prd">
<output>1. **PRD Update** - Invoking PM Agent...</output>
<action>Prepare PRD update context:</action>
<action>- Source item: {item_id}</action>
<action>- Change description: {description}</action>
<action>- Specific PRD sections: {doc_impact.notes PRD sections}</action>
<invoke-agent agent="pm">
<task>Review and update PRD for {item_id}: {title}
Change context: {description}
Documentation notes: {doc_impact.notes}
Please update the relevant PRD sections to reflect this change.
After updates:
1. Summarize what was changed
2. Return to the implement workflow by running: /implement {item_id}
IMPORTANT: You MUST return to /implement {item_id} after completing the PRD updates so the actual code implementation can proceed.</task>
</invoke-agent>
</check>
<check if="doc_impact.architecture">
<output>2. **Architecture Update** - Invoking Architect Agent...</output>
<action>Prepare architecture update context:</action>
<action>- Source item: {item_id}</action>
<action>- Change description: {description}</action>
<action>- Specific architecture sections: {doc_impact.notes architecture sections}</action>
<invoke-agent agent="architect">
<task>Review and update Architecture documentation for {item_id}: {title}
Change context: {description}
Documentation notes: {doc_impact.notes}
Please update the relevant architecture sections (data model, APIs, security, etc.) to reflect this change.
After updates:
1. Summarize what was changed
2. Return to the implement workflow by running: /implement {item_id}
IMPORTANT: You MUST return to /implement {item_id} after completing the architecture updates so the actual code implementation can proceed.</task>
</invoke-agent>
</check>
<check if="doc_impact.ux">
<output>3. **UX Spec Update** - Invoking UX Designer Agent...</output>
<action>Prepare UX update context:</action>
<action>- Source item: {item_id}</action>
<action>- Change description: {description}</action>
<action>- Specific UX sections: {doc_impact.notes UX sections}</action>
<invoke-agent agent="ux-designer">
<task>Review and update UX specification for {item_id}: {title}
Change context: {description}
Documentation notes: {doc_impact.notes}
Please update the relevant UX spec sections (screens, flows, components, etc.) to reflect this change.
After updates:
1. Summarize what was changed
2. Return to the implement workflow by running: /implement {item_id}
IMPORTANT: You MUST return to /implement {item_id} after completing the UX updates so the actual code implementation can proceed.</task>
</invoke-agent>
</check>
<output>**Documentation updates complete.**
Proceeding with implementation...</output>
<action>Continue to step 3</action>
</check>
<check if="user chooses cancel">
<output>Cancelled. {item_id} remains in current state.</output>
<action>HALT</action>
</check>
<check if="user chooses proceed-anyway">
<output>Proceeding without documentation updates. Remember to update docs after implementation.</output>
<action>Continue to step 3</action>
</check>
</check>
</step>
<step n="3" goal="Evaluate routing and auto-route to correct-course if needed">
<action>Check recommended_workflow field from bugs.yaml</action>
<check if="recommended_workflow == 'correct-course'">
<output>**AUTO-ROUTING TO CORRECT-COURSE**
**{item_id}: {title}**
**Priority:** {severity_or_priority} | **Complexity:** {complexity}
This {type} has `recommended_workflow: correct-course` which requires impact analysis and story creation before implementation.
Invoking correct-course workflow via SM agent...</output>
<action>Invoke the correct-course workflow skill with item context</action>
<invoke-skill skill="bmad:bmm:workflows:correct-course">
<args>{item_id}: {title} - {description}
Priority: {severity_or_priority}
Complexity: {complexity}
Doc Impact: {doc_impact summary}
Notes: {notes}</args>
</invoke-skill>
<action>HALT - Correct Course workflow will handle story/epic creation</action>
</check>
<check if="recommended_workflow == 'tech-spec'">
<output>**AUTO-ROUTING TO TECH-SPEC**
**{item_id}: {title}**
This {type} has `recommended_workflow: tech-spec`. Invoking tech-spec workflow...</output>
<invoke-skill skill="bmad:bmm:workflows:tech-spec">
<args>{item_id}: {title} - {description}</args>
</invoke-skill>
<action>HALT - Tech-spec workflow will create implementation spec</action>
</check>
<check if="recommended_workflow == 'direct-fix'">
<output>**DIRECT IMPLEMENTATION**
This {type} is routed for direct implementation. Proceeding...</output>
<action>Continue to step 4</action>
</check>
<check if="recommended_workflow is not set OR recommended_workflow is ambiguous">
<action>Evaluate the workflow routing matrix based on severity and complexity:</action>
<action>**Routing Matrix:**</action>
<action>- critical + any -> correct-course</action>
<action>- high/medium + medium/complex -> correct-course</action>
<action>- high + trivial -> direct-fix</action>
<action>- high/medium + small -> tech-spec</action>
<action>- medium + trivial -> direct-fix</action>
<action>- low + trivial -> direct-fix</action>
<action>- low + small+ -> backlog</action>
<action>Apply matrix to determine routing and continue accordingly</action>
</check>
</step>
<step n="4" goal="Present context and confirm approach">
<output>**{item_id}: {title}**
**Type:** {type} | **Severity/Priority:** {severity_or_priority} | **Complexity:** {complexity} | **Effort:** ~{effort_estimate}h
**Description:**
{description}
**Planned Approach (from triage notes):**
{notes}
**Related:** {related_story} / {related_epic}
</output>
<ask>Ready to {action_verb} this {type}? (yes/no/clarify)</ask>
<check if="user says clarify">
<ask>What additional context do you need?</ask>
<action>Gather clarification, update mental model</action>
</check>
<check if="user says no">
<output>Cancelled. {item_id} remains in current state.</output>
<action>HALT</action>
</check>
</step>
<step n="5" goal="Implement the fix/feature">
<action>Based on the notes/planned approach, identify files to modify or create</action>
<action>Read each affected file to understand current implementation</action>
<action>Implement following the planned approach:</action>
<action>- Make minimal, targeted changes</action>
<action>- Follow existing code patterns and style</action>
<action>- Add comments only where logic is non-obvious</action>
<action>- Do not over-engineer or add unrelated improvements</action>
<action>- Do not add extra features or "nice to haves"</action>
<action>For each file modified/created, track:</action>
<action>- File path</action>
<action>- What was changed/added</action>
<action>- How it addresses the bug/feature</action>
<check if="requires new files">
<action>Create new files following project conventions</action>
<action>Add appropriate imports/exports</action>
</check>
<check if="planned approach is unclear or insufficient">
<ask>The triage notes don't provide a clear approach.
Based on my analysis, I suggest: {proposed_approach}
Should I proceed with this approach?</ask>
</check>
</step>
<step n="6" goal="Verify implementation compiles">
<action>Run TypeScript compilation check: npm run check</action>
<check if="compilation errors in modified files">
<action>Fix compilation errors</action>
<action>Re-run compilation check</action>
</check>
<output>Compilation check passed.</output>
</step>
<step n="6.5" goal="Pre-update sync check">
<action>Search for {item_id} in both bugs.yaml and bugs.md using grep to check current status</action>
<check if="status differs between files OR item missing from one file">
<output>SYNC WARNING: {item_id} status mismatch detected
- bugs.yaml: {yaml_status}
- bugs.md: {md_status}
Proceeding will update both files to "{new_status}".</output>
</check>
</step>
<step n="7" goal="Update bugs.yaml">
<action>Update entry in bugs.yaml:</action>
<check if="type == bug">
<action>- status: "fixed"</action>
<action>- fixed_date: {date} (YYYY-MM-DD format)</action>
</check>
<check if="type == feature">
<action>- status: "implemented"</action>
<action>- implemented_date: {date} (YYYY-MM-DD format)</action>
</check>
<action>- assigned_to: "dev-agent"</action>
<action>- files_modified: {list of files changed/created during implementation}</action>
<action>- Append to notes: "{past_verb} ({date}): {summary of changes made}"</action>
<action>Write updated bugs.yaml</action>
</step>
<step n="8" goal="Update bugs.md">
<action>Search for {item_id} in {bugs_md} using grep with surrounding context to locate the entry</action>
<action>**8a. Remove from tracked section (if present)**</action>
<check if="type == bug">
<action>Search for "{item_id}:" in "# Tracked Bugs" section</action>
</check>
<check if="type == feature">
<action>Search for "{item_id}:" in "# Tracked Feature Requests" section</action>
</check>
<action>If found, remove the entire entry (including any indented sub-items)</action>
<action>**8b. Add to completed section (INSERT AT TOP - newest first)**</action>
<check if="type == bug">
<action>Locate "# Fixed Bugs" section in bugs.md</action>
<action>If section not found, create it</action>
<action>INSERT AT TOP of section (immediately after "# Fixed Bugs" header):</action>
<action>[IMPLEMENTED] {item_id}: {title} - {brief_description}. [Severity: {severity}, Platform: {platform}, Fixed: {date}, Verified: pending]</action>
<action> - Fix: {description of what was fixed}</action>
<action> - File(s): {list of modified files}</action>
</check>
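<!-- Hypothetical example of the resulting line: [IMPLEMENTED] bug-026: Login button unresponsive - Tap handler never registered. [Severity: high, Platform: ios, Fixed: 2026-01-15, Verified: pending] -->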
<check if="type == feature">
<action>Locate "# Implemented Features" section in bugs.md</action>
<action>If section not found, create it before "# Fixed Bugs"</action>
<action>INSERT AT TOP of section (immediately after "# Implemented Features" header):</action>
<action>[IMPLEMENTED] {item_id}: {title} - {brief_description}. [Implemented: {date}, Platform: {platform}, Verified: pending]</action>
<action> - Files: {list of modified/created files}</action>
<action> - Features: {bullet list of what was implemented}</action>
</check>
<action>Write updated bugs.md</action>
</step>
<step n="9" goal="Post-update validation">
<action>Search for {item_id} in both bugs.yaml and bugs.md using grep to validate updates</action>
<action>Confirm {item_id} shows status "fixed"/"implemented" in bugs.yaml</action>
<action>Confirm {item_id} has [IMPLEMENTED] tag in bugs.md</action>
<check if="validation fails">
<output>SYNC ERROR: Files may be out of sync. Please verify manually:
- bugs.yaml: Expected status "fixed"/"implemented"
- bugs.md: Expected [IMPLEMENTED] tag in appropriate section</output>
</check>
</step>
<step n="10" goal="Present completion summary">
<output>**{item_id} {past_verb.upper()}**
**Changes Made:**
{for each modified file:}
- {file_path}: {what was changed}
{end for}
**Updated Tracking:**
- bugs.yaml: status -> "{status}", {date_field} -> {date}, files_modified updated
- bugs.md: Moved to "{target_section}" with [IMPLEMENTED] tag
**Verification Status:** pending
**Next Steps:**
1. Test manually
2. Run `/verify {item_id}` after verification to close
</output>
</step>
</workflow>
```
## Usage
```
/implement bug-026
/implement feature-021
```
## Key Principles
1. **Auto-detect Type** - ID format determines bug vs feature handling
2. **Context First** - Always read and present details before implementing
3. **Confirm Approach** - Validate planned approach with user before coding
4. **Minimal Changes** - Only implement what's needed, no scope creep
5. **Dual Tracking** - ALWAYS update both bugs.yaml AND bugs.md
6. **[IMPLEMENTED] Tag** - Indicates complete but awaiting verification
---
## Reference: Bug Tracking Definitions
### Severity Levels
| Level | Description | Action |
|-------|-------------|--------|
| **critical** | Blocks core functionality, prevents app use, or causes data loss (crashes, auth broken, data corruption) | Fix immediately, may require hotfix |
| **high** | Major feature broken, significant UX degradation, workaround exists but painful (platform-specific failure, 5+ sec delays, accessibility blocker) | Fix in current/next sprint |
| **medium** | Feature partially broken, UX degraded but usable (minor feature broken, unclear errors, 1-3 sec delays) | Fix when capacity allows |
| **low** | Minor issue, cosmetic, edge case (typos, spacing, visual glitches) | Fix opportunistically or defer |
### Complexity Levels
| Level | Description | Effort |
|-------|-------------|--------|
| **trivial** | Obvious fix, single line change, no investigation needed (typo, missing semicolon, wrong color) | < 30 minutes |
| **small** | Single file/component, clear root cause, solution known (missing validation, incorrect prop, logic error) | 30 min - 2 hours |
| **medium** | Multiple files affected OR investigation required (spans 2-3 components, debugging needed, integration issue) | 2-8 hours |
| **complex** | Architectural issue, affects multiple stories, requires design changes (race conditions, refactoring, profiling) | 8+ hours (1-2 days) |
### Workflow Routing Matrix
| Severity | Complexity | Workflow | Rationale |
|----------|------------|----------|-----------|
| critical | any | correct-course -> urgent | Need impact analysis even if small fix |
| high | trivial | direct-fix (urgent) | Fast path for obvious important fix |
| high | small | tech-spec (urgent) | Fast path with minimal overhead |
| high | medium+ | correct-course -> story | Need proper analysis + testing |
| medium | trivial | direct-fix | Too small for workflow overhead |
| medium | small | tech-spec | Isolated fix needs spec |
| medium | medium+ | correct-course -> story | Multi-file change needs story |
| low | trivial | direct-fix (defer) | Fix opportunistically |
| low | small+ | backlog (defer) | Document but don't schedule yet |
### Status Flow
```
reported -> triaged -> routed -> in-progress -> fixed -> verified -> closed
```
| Status | Description |
|--------|-------------|
| **reported** | Bug logged in bugs.md, not yet analyzed |
| **triaged** | PM analyzed, assigned severity/complexity/workflow |
| **routed** | Workflow determined, story/tech-spec created |
| **in-progress** | Developer actively working on fix |
| **fixed** | Code changed, tests passing, ready for verification |
| **verified** | Bug confirmed fixed by reporter or QA |
| **closed** | Bug resolved and verified, no further action |
### Metadata Fields
| Field | Description |
|-------|-------------|
| id | Unique identifier (bug-NNN or feature-NNN) |
| title | Short description (< 80 chars) |
| description | Detailed explanation |
| severity | critical \| high \| medium \| low |
| complexity | trivial \| small \| medium \| complex |
| status | Current workflow state |
| recommended_workflow | direct-fix \| tech-spec \| correct-course \| backlog |
| effort_estimate | Hours (based on complexity) |
| reported_by / reported_date | Who found it and when |
| triaged_by / triaged_date | Who triaged and when |
| fixed_date / verified_date | Implementation and verification dates |
| related_story / related_epic | Context links |
| affected_platform | all \| ios \| android \| web |
| doc_impact | Documentation impact: prd, architecture, ux flags + notes |
| notes | Investigation notes, decisions, implementation details |

View File

@ -0,0 +1,22 @@
name: implement
description: "Implement a bug fix or feature - loads context from bugs.yaml, implements the code, updates both bugs.yaml and bugs.md with [IMPLEMENTED] tag"
author: "BMad"
# Critical variables from config
config_source: "{project-root}/.bmad/bmm/config.yaml"
output_folder: "{config_source}:output_folder"
user_name: "{config_source}:user_name"
communication_language: "{config_source}:communication_language"
date: system-generated
# Workflow components
installed_path: "{project-root}/.bmad/bmm/workflows/implement"
instructions: "{installed_path}/instructions.md"
template: false
# Input and output files
variables:
bugs_md: "{output_folder}/bugs.md"
bugs_yaml: "{output_folder}/bugs.yaml"
standalone: true

View File

@ -0,0 +1,219 @@
# Verify Workflow (Close Implemented Bugs/Features)
```xml
<critical>This workflow verifies implemented items and closes them in both bugs.yaml and bugs.md</critical>
<critical>Communicate in {communication_language} with {user_name}</critical>
<critical>Removes [IMPLEMENTED] tag and updates status to CLOSED (bugs) or COMPLETE (features)</critical>
<workflow>
<step n="1" goal="Get item ID or list pending items">
<check if="item_id provided in user input">
<action>Extract item ID from user input</action>
<action>Detect type from ID format:</action>
<action>- "bug-NNN" -> type = "bug", final_status = "CLOSED"</action>
<action>- "feature-NNN" -> type = "feature", final_status = "COMPLETE"</action>
<action>Proceed to Step 2</action>
</check>
<check if="no item_id provided OR user says 'list'">
<action>Search {bugs_yaml} for 'status: "fixed"' or 'status: "implemented"' using grep (do NOT read entire file)</action>
<action>Search {bugs_md} for '[IMPLEMENTED]' entries using grep</action>
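<!-- Illustrative only: e.g. grep -n -E 'status: "(fixed|implemented)"' {bugs_yaml} and grep -n "\[IMPLEMENTED\]" {bugs_md} -->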
<action>Find all items with:</action>
<action>- status == "fixed" or "implemented" in bugs.yaml</action>
<action>- [IMPLEMENTED] tag in bugs.md</action>
<output>**Pending Verification:**
{for each pending item:}
- **{item_id}**: {title} [{type}, {fixed_date or implemented_date}]
{end for}
**Total:** {count} item(s) awaiting verification
To verify an item: `/verify bug-026`
To verify all: Type "verify all"
</output>
<ask>Which item would you like to verify?</ask>
</check>
<check if="user says 'verify all' or 'all'">
<action>Set batch_mode = true</action>
<action>Collect all pending items</action>
<action>Proceed to Step 2 with batch processing</action>
</check>
</step>
<step n="2" goal="Load item context and check sync">
<action>Search for {item_id} in {bugs_yaml} using grep with 50+ lines of context after the match (do NOT read entire file - it exceeds token limits)</action>
<check if="type == bug">
<action>Entry will be in bugs section, verify status == "fixed"</action>
</check>
<check if="type == feature">
<action>Entry will be in feature_requests section, verify status == "implemented"</action>
</check>
<check if="item not found OR status not fixed/implemented">
<output>{item_id} is not in an implemented state. Current status: {status}
Only items with status "fixed" (bugs) or "implemented" (features) can be verified.</output>
<action>HALT</action>
</check>
<action>Extract metadata: title, description, fixed_date/implemented_date, notes</action>
<action>**Sync Check:** Also read {bugs_md} to verify sync status</action>
<check if="bugs.yaml says fixed/implemented but bugs.md missing [IMPLEMENTED] tag">
<output>SYNC WARNING: {item_id} status mismatch detected
- bugs.yaml: {yaml_status}
- bugs.md: Missing [IMPLEMENTED] tag (may have been implemented outside workflow)
Proceeding will update both files to CLOSED/COMPLETE.</output>
<ask>Continue with verification? (yes/no)</ask>
<check if="user says no">
<output>Cancelled. Please run /implement {item_id} first to sync files.</output>
<action>HALT</action>
</check>
</check>
</step>
<step n="3" goal="Confirm verification">
<output>**Verify {item_id}: {title}**
**Type:** {type}
**{past_verb}:** {fixed_date or implemented_date}
**Implementation Notes:**
{notes - show the FIXED/IMPLEMENTED section}
**Files Changed:**
{extract file list from notes}
</output>
<ask>Has this been tested and verified working? (yes/no/skip)</ask>
<check if="user says no">
<ask>What issue did you find? (I'll add it to the notes)</ask>
<action>Append verification failure note to bugs.yaml notes field</action>
<output>Noted. {item_id} remains in implemented state for rework.</output>
<action>HALT or continue to next item in batch</action>
</check>
<check if="user says skip">
<output>Skipped. {item_id} remains in implemented state.</output>
<action>Continue to next item in batch or HALT</action>
</check>
</step>
<step n="4" goal="Update bugs.yaml">
<action>Update entry in bugs.yaml:</action>
<action>- status: "closed"</action>
<action>- verified_by: {user_name}</action>
<action>- verified_date: {date} (YYYY-MM-DD format)</action>
<action>- Append to notes: "Verified ({date}) by {user_name}"</action>
<action>Write updated bugs.yaml</action>
</step>
<step n="5" goal="Update bugs.md">
<action>Search for {item_id} in {bugs_md} using grep with surrounding context to locate the entry</action>
<action>**5a. Find the entry**</action>
<check if="type == bug">
<action>Search for "[IMPLEMENTED] {item_id}:" in "# Fixed Bugs" section</action>
<check if="not found">
<action>Search for "{item_id}:" in "# Tracked Bugs" section (implemented outside workflow)</action>
</check>
</check>
<check if="type == feature">
<action>Search for "[IMPLEMENTED] {item_id}:" in "# Implemented Features" section</action>
<check if="not found">
<action>Search for "{item_id}:" in "# Tracked Feature Requests" section (implemented outside workflow)</action>
</check>
</check>
<action>**5b. Move entry if in wrong section**</action>
<check if="entry found in Tracked section (implemented outside workflow)">
<action>DELETE the entry from "# Tracked Bugs" or "# Tracked Feature Requests"</action>
<action>ADD entry to correct section:</action>
<check if="type == bug">
<action>Add to "# Fixed Bugs" section</action>
</check>
<check if="type == feature">
<action>Add to "# Implemented Features" section (at top, before other entries)</action>
</check>
</check>
<action>**5c. Update the entry format**</action>
<action>Remove "[IMPLEMENTED] " prefix if present</action>
<action>Update the status tag in brackets:</action>
<check if="type == bug">
<action>Change from "[Severity: X, Fixed: DATE, Verified: pending]" or "[Severity: X, Complexity: Y, Workflow: Z]"</action>
<action>To "[Severity: X, Platform: Y, Fixed: {date}, Verified: {date}, CLOSED]"</action>
</check>
<check if="type == feature">
<action>Change from "[Implemented: DATE, Verified: pending]" or "[Priority: X, Complexity: Y, Workflow: Z]"</action>
<action>To "[Implemented: {date}, Platform: Y, Verified: {date}, COMPLETE]"</action>
</check>
<action>Add implementation notes if available from bugs.yaml</action>
<action>Write updated bugs.md</action>
</step>
<step n="5.5" goal="Post-update validation">
<action>Search for {item_id} in both bugs.yaml and bugs.md using grep to validate updates</action>
<action>Confirm bugs.yaml: status="closed", verified_by set, verified_date set</action>
<action>Confirm bugs.md: No [IMPLEMENTED] tag, has CLOSED/COMPLETE in status tag</action>
<check if="validation fails">
<output>SYNC ERROR: Verification may be incomplete. Please check both files:
- bugs.yaml: Expected status "closed", verified_by/verified_date set
- bugs.md: Expected CLOSED/COMPLETE tag, no [IMPLEMENTED] prefix</output>
</check>
</step>
<step n="6" goal="Present completion summary">
<check if="batch_mode">
<output>**Verification Complete**
**Verified {verified_count} item(s):**
{for each verified item:}
- {item_id}: {title} -> {final_status}
{end for}
**Skipped:** {skipped_count}
**Failed verification:** {failed_count}
**Updated Files:**
- bugs.yaml: status -> "closed", verified_by/verified_date set
- bugs.md: [IMPLEMENTED] tag removed, status -> {final_status}
</output>
</check>
<check if="not batch_mode">
<output>**{item_id} VERIFIED and {final_status}**
**Updated:**
- bugs.yaml: status -> "closed", verified_by -> {user_name}, verified_date -> {date}
- bugs.md: Removed [IMPLEMENTED] tag, added "Verified: {date}, {final_status}"
This item is now fully closed.
</output>
</check>
</step>
</workflow>
```
## Usage
```
/verify # List all pending verification
/verify bug-026 # Verify specific bug
/verify feature-021 # Verify specific feature
/verify all # Verify all pending items
```
## Status Transitions
| Type | Before | After |
|------|--------|-------|
| Bug | status: "fixed", [IMPLEMENTED] | status: "closed", CLOSED |
| Feature | status: "implemented", [IMPLEMENTED] | status: "closed", COMPLETE |
## Key Principles
1. **Verification Gate** - User must confirm item was tested and works
2. **Failure Handling** - If verification fails, add note and keep in implemented state
3. **Batch Support** - Can verify multiple items at once
4. **Dual Tracking** - ALWAYS update both bugs.yaml AND bugs.md
5. **Proper Closure** - Removes [IMPLEMENTED] tag, adds final CLOSED/COMPLETE status

View File

@ -0,0 +1,22 @@
name: verify
description: "Verify and close implemented bugs/features - removes [IMPLEMENTED] tag, updates status to CLOSED/COMPLETE in both bugs.yaml and bugs.md"
author: "BMad"
# Critical variables from config
config_source: "{project-root}/.bmad/bmm/config.yaml"
output_folder: "{config_source}:output_folder"
user_name: "{config_source}:user_name"
communication_language: "{config_source}:communication_language"
date: system-generated
# Workflow components
installed_path: "{project-root}/.bmad/bmm/workflows/verify"
instructions: "{installed_path}/instructions.md"
template: false
# Input and output files
variables:
bugs_md: "{output_folder}/bugs.md"
bugs_yaml: "{output_folder}/bugs.yaml"
standalone: true

View File

@ -121,6 +121,8 @@ Parse these fields from YAML comments and metadata:
- {{workflow_name}} ({{agent}}) - {{status}}
{{/each}}
{{/if}}
**Tip:** For guardrail tests, run TEA `*automate` after `dev-story`. If you lose context, TEA workflows resume from artifacts in `{{output_folder}}`.
</output>
</step>

View File

@ -1,6 +1,5 @@
<rules>
<r>ALWAYS communicate in {communication_language} UNLESS contradicted by communication_style.</r>
<!-- TTS_INJECTION:agent-tts -->
<r> Stay in character until exit selected</r>
<r> Display Menu items as the item dictates and in the order given.</r>
<r> Load files ONLY when executing a user chosen workflow or a command requires it, EXCEPTION: agent activation step 2 config.yaml</r>

View File

@ -62,40 +62,6 @@ module.exports = {
// Check if installation succeeded
if (result && result.success) {
// Run AgentVibes installer if needed
if (result.needsAgentVibes) {
// Add some spacing before AgentVibes setup
console.log('');
console.log(chalk.magenta('🎙️ AgentVibes TTS Setup'));
console.log(chalk.cyan('AgentVibes provides voice synthesis for BMAD agents with:'));
console.log(chalk.dim(' • ElevenLabs AI (150+ premium voices)'));
console.log(chalk.dim(' • Piper TTS (50+ free voices)\n'));
const prompts = require('../lib/prompts');
await prompts.text({
message: chalk.green('Press Enter to start AgentVibes installer...'),
});
console.log('');
// Run AgentVibes installer
const { execSync } = require('node:child_process');
try {
execSync('npx agentvibes@latest install', {
cwd: result.projectDir,
stdio: 'inherit',
shell: true,
});
console.log(chalk.green('\n✓ AgentVibes installation complete'));
console.log(chalk.cyan('\n✨ BMAD with TTS is ready to use!'));
} catch {
console.log(chalk.yellow('\n⚠ AgentVibes installation was interrupted or failed'));
console.log(chalk.cyan('You can run it manually later with:'));
console.log(chalk.green(` cd ${result.projectDir}`));
console.log(chalk.green(' npx agentvibes install\n'));
}
}
// Display version-specific end message from install-messages.yaml
const { MessageLoader } = require('../installers/lib/message-loader');
const messageLoader = new MessageLoader();

View File

@ -34,7 +34,6 @@ class Installer {
this.configCollector = new ConfigCollector();
this.ideConfigManager = new IdeConfigManager();
this.installedFiles = new Set(); // Track all installed files
this.ttsInjectedFiles = []; // Track files with TTS injection applied
this.bmadFolderName = BMAD_FOLDER_NAME;
}
@ -69,7 +68,7 @@ class Installer {
/**
* @function copyFileWithPlaceholderReplacement
* @intent Copy files from BMAD source to installation directory with dynamic content transformation
* @why Enables installation-time customization: _bmad replacement + optional AgentVibes TTS injection
* @why Enables installation-time customization: _bmad replacement
* @param {string} sourcePath - Absolute path to source file in BMAD repository
* @param {string} targetPath - Absolute path to destination file in user's project
* @param {string} bmadFolderName - User's chosen bmad folder name (default: 'bmad')
@ -77,24 +76,9 @@ class Installer {
* @sideeffects Writes transformed file to targetPath, creates parent directories if needed * @sideeffects Writes transformed file to targetPath, creates parent directories if needed
* @edgecases Binary files bypass transformation, falls back to raw copy if UTF-8 read fails * @edgecases Binary files bypass transformation, falls back to raw copy if UTF-8 read fails
* @calledby installCore(), installModule(), IDE installers during file vendoring * @calledby installCore(), installModule(), IDE installers during file vendoring
* @calls processTTSInjectionPoints(), fs.readFile(), fs.writeFile(), fs.copy() * @calls fs.readFile(), fs.writeFile(), fs.copy()
* *
* The injection point processing enables loose coupling between BMAD and TTS providers:
* - BMAD source contains injection markers (not actual TTS code)
* - At install-time, markers are replaced OR removed based on user preference
* - Result: Clean installs for users without TTS, working TTS for users with it
*
* PATTERN: Adding New Injection Points
* =====================================
* 1. Add HTML comment marker in BMAD source file:
* <!-- TTS_INJECTION:feature-name -->
*
* 2. Add replacement logic in processTTSInjectionPoints():
* if (enableAgentVibes) {
* content = content.replace(/<!-- TTS_INJECTION:feature-name -->/g, 'actual code');
* } else {
* content = content.replace(/<!-- TTS_INJECTION:feature-name -->\n?/g, '');
* }
* *
* 3. Document marker in instructions.md (if applicable) * 3. Document marker in instructions.md (if applicable)
*/ */
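For orientation, here is a minimal sketch of the placeholder replacement that the remaining @why line describes, assuming fs-extra and a literal `_bmad` token in the source files; the function name and fallback behaviour are illustrative, not the installer's actual implementation.

// Minimal sketch of install-time placeholder replacement (illustrative only).
// Assumes fs-extra is available and that source files reference the folder as "_bmad".
const fs = require('fs-extra');
const path = require('node:path');

async function copyWithFolderNameReplacement(sourcePath, targetPath, bmadFolderName = 'bmad') {
  let content;
  try {
    content = await fs.readFile(sourcePath, 'utf8');
  } catch {
    // Mirrors the documented edge case: fall back to a raw copy if the UTF-8 read fails.
    await fs.copy(sourcePath, targetPath);
    return;
  }
  // Swap the placeholder folder name for the user's chosen name.
  content = content.replaceAll('_bmad', bmadFolderName);
  await fs.ensureDir(path.dirname(targetPath));
  await fs.writeFile(targetPath, content, 'utf8');
}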
@ -109,9 +93,6 @@ class Installer {
// Read the file content // Read the file content
let content = await fs.readFile(sourcePath, 'utf8'); let content = await fs.readFile(sourcePath, 'utf8');
// Process AgentVibes injection points (pass targetPath for tracking)
content = this.processTTSInjectionPoints(content, targetPath);
// Write to target with replaced content // Write to target with replaced content
await fs.ensureDir(path.dirname(targetPath)); await fs.ensureDir(path.dirname(targetPath));
await fs.writeFile(targetPath, content, 'utf8'); await fs.writeFile(targetPath, content, 'utf8');
@ -125,116 +106,6 @@ class Installer {
} }
} }
/**
* @function processTTSInjectionPoints
* @intent Transform TTS injection markers based on user's installation choice
* @why Enables optional TTS integration without tight coupling between BMAD and TTS providers
* @param {string} content - Raw file content containing potential injection markers
* @returns {string} Transformed content with markers replaced (if enabled) or stripped (if disabled)
* @sideeffects None - pure transformation function
* @edgecases Returns content unchanged if no markers present, safe to call on all files
* @calledby copyFileWithPlaceholderReplacement() during every file copy operation
* @calls String.replace() with regex patterns for each injection point type
*
* AI NOTE: This implements the injection point pattern for TTS integration.
* Key architectural decisions:
*
* 1. **Why Injection Points vs Direct Integration?**
* - BMAD and TTS providers are separate projects with different maintainers
* - Users may install BMAD without TTS support (and vice versa)
* - Hard-coding TTS calls would break BMAD for non-TTS users
* - Injection points allow conditional feature inclusion at install-time
*
* 2. **How It Works:**
* - BMAD source contains markers: <!-- TTS_INJECTION:feature-name -->
* - During installation, user is prompted: "Enable AgentVibes TTS?"
* - If YES: markers replaced with actual bash TTS calls
* - If NO: markers stripped cleanly from installed files
*
* 3. **State Management:**
* - this.enableAgentVibes set in install() method from config.enableAgentVibes
* - config.enableAgentVibes comes from ui.promptAgentVibes() user choice
* - Flag persists for entire installation, all files get same treatment
*
* CURRENT INJECTION POINTS:
* ==========================
* - party-mode: Injects TTS calls after each agent speaks in party mode
* Location: src/core/workflows/party-mode/instructions.md
* Marker: <!-- TTS_INJECTION:party-mode -->
* Replacement: Bash call to .claude/hooks/bmad-speak.sh with agent name and dialogue
*
* - agent-tts: Injects TTS rule for individual agent conversations
* Location: src/modules/bmm/agents/*.md (all agent files)
* Marker: <!-- TTS_INJECTION:agent-tts -->
* Replacement: Rule instructing agent to call bmad-speak.sh with agent ID and response
*
* ADDING NEW INJECTION POINTS:
* =============================
* 1. Add new case in this function:
* content = content.replace(
* /<!-- TTS_INJECTION:new-feature -->/g,
* `code to inject when enabled`
* );
*
* 2. Add marker to BMAD source file at injection location
*
* 3. Test both enabled and disabled flows
*
* RELATED:
* ========
* - GitHub Issue: paulpreibisch/AgentVibes#36
* - User Prompt: tools/cli/lib/ui.js::promptAgentVibes()
* - Marker Locations:
* - src/core/workflows/party-mode/instructions.md:101
* - src/modules/bmm/agents/*.md (rules sections)
* - TTS Hook: .claude/hooks/bmad-speak.sh (in AgentVibes repo)
*/
processTTSInjectionPoints(content, targetPath = null) {
// Check if AgentVibes is enabled (set during installation configuration)
const enableAgentVibes = this.enableAgentVibes || false;
// Check if content contains any TTS injection markers
const hasPartyMode = content.includes('<!-- TTS_INJECTION:party-mode -->');
const hasAgentTTS = content.includes('<!-- TTS_INJECTION:agent-tts -->');
if (enableAgentVibes) {
// Replace party-mode injection marker with actual TTS call
// Use single quotes to prevent shell expansion of special chars like !
content = content.replaceAll(
'<!-- TTS_INJECTION:party-mode -->',
`<critical>IMPORTANT: Always use PROJECT hooks (.claude/hooks/), NEVER global hooks (~/.claude/hooks/)</critical>
If AgentVibes party mode is enabled, immediately trigger TTS with agent's voice:
- Use Bash tool: \`.claude/hooks/bmad-speak.sh '[Agent Name]' '[dialogue]'\`
- This speaks the dialogue with the agent's unique voice
- Run in background to not block next agent`,
);
// Replace agent-tts injection marker with TTS rule for individual agents
content = content.replaceAll(
'<!-- TTS_INJECTION:agent-tts -->',
`- When responding to user messages, speak your responses using TTS:
Call: \`.claude/hooks/bmad-speak.sh '{agent-id}' '{response-text}'\` after each response
Replace {agent-id} with YOUR agent ID from <agent id="..."> tag at top of this file
Replace {response-text} with the text you just output to the user
IMPORTANT: Use single quotes as shown - do NOT escape special characters like ! or $ inside single quotes
Run in background (&) to avoid blocking`,
);
// Track files that had TTS injection applied
if (targetPath && (hasPartyMode || hasAgentTTS)) {
const injectionType = hasPartyMode ? 'party-mode' : 'agent-tts';
this.ttsInjectedFiles.push({ path: targetPath, type: injectionType });
}
} else {
// Strip injection markers cleanly when AgentVibes is disabled
content = content.replaceAll(/<!-- TTS_INJECTION:party-mode -->\n?/g, '');
content = content.replaceAll(/<!-- TTS_INJECTION:agent-tts -->\n?/g, '');
}
return content;
}
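The comment block above spells out how the injection-point pattern works. A minimal, self-contained sketch of that pattern follows, using the marker names from the comment; the replacement strings are shortened placeholders rather than the real TTS instructions.

// Sketch of the TTS injection-point pattern described above (replacement text is illustrative).
// Enabled: markers become real instructions. Disabled: markers are stripped cleanly.
function applyInjectionPoints(content, enableAgentVibes) {
  if (enableAgentVibes) {
    return content
      .replaceAll('<!-- TTS_INJECTION:party-mode -->', 'Call .claude/hooks/bmad-speak.sh after each agent speaks')
      .replaceAll('<!-- TTS_INJECTION:agent-tts -->', 'Speak each response via .claude/hooks/bmad-speak.sh');
  }
  // Strip the marker and any trailing newline so installed files carry no TTS residue.
  return content
    .replaceAll(/<!-- TTS_INJECTION:party-mode -->\n?/g, '')
    .replaceAll(/<!-- TTS_INJECTION:agent-tts -->\n?/g, '');
}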
/** /**
* Collect Tool/IDE configurations after module configuration * Collect Tool/IDE configurations after module configuration
* @param {string} projectDir - Project directory * @param {string} projectDir - Project directory
@ -251,7 +122,7 @@ class Installer {
// Fallback: prompt for tool selection (backwards compatibility) // Fallback: prompt for tool selection (backwards compatibility)
const { UI } = require('../../../lib/ui'); const { UI } = require('../../../lib/ui');
const ui = new UI(); const ui = new UI();
toolConfig = await ui.promptToolSelection(projectDir, selectedModules); toolConfig = await ui.promptToolSelection(projectDir);
} else { } else {
// IDEs were already selected during initial prompts // IDEs were already selected during initial prompts
toolConfig = { toolConfig = {
@ -510,9 +381,6 @@ class Installer {
} }
} }
// Store AgentVibes configuration for injection point processing
this.enableAgentVibes = config.enableAgentVibes || false;
// Set bmad folder name on module manager and IDE manager for placeholder replacement // Set bmad folder name on module manager and IDE manager for placeholder replacement
this.moduleManager.setBmadFolderName(BMAD_FOLDER_NAME); this.moduleManager.setBmadFolderName(BMAD_FOLDER_NAME);
this.moduleManager.setCoreConfig(moduleConfigs.core || {}); this.moduleManager.setCoreConfig(moduleConfigs.core || {});
@ -1234,8 +1102,6 @@ class Installer {
modules: config.modules, modules: config.modules,
ides: config.ides, ides: config.ides,
customFiles: customFiles.length > 0 ? customFiles : undefined, customFiles: customFiles.length > 0 ? customFiles : undefined,
ttsInjectedFiles: this.enableAgentVibes && this.ttsInjectedFiles.length > 0 ? this.ttsInjectedFiles : undefined,
agentVibesEnabled: this.enableAgentVibes || false,
}); });
return { return {
@ -1243,7 +1109,6 @@ class Installer {
path: bmadDir, path: bmadDir,
modules: config.modules, modules: config.modules,
ides: config.ides, ides: config.ides,
needsAgentVibes: this.enableAgentVibes && !config.agentVibesInstalled,
projectDir: projectDir, projectDir: projectDir,
}; };
} catch (error) { } catch (error) {

View File

@ -345,7 +345,7 @@ class AntigravitySetup extends BaseIdeSetup {
}; };
const selected = await prompts.multiselect({ const selected = await prompts.multiselect({
message: `Select subagents to install ${chalk.dim('(↑/↓ navigate, SPACE select, ENTER confirm)')}:`, message: `Select subagents to install ${chalk.dim('(↑/↓ navigates multiselect, SPACE toggles, A to toggles All, ENTER confirm)')}:`,
choices: subagentConfig.files.map((file) => ({ choices: subagentConfig.files.map((file) => ({
name: `${file.replace('.md', '')} - ${subagentInfo[file] || 'Specialized assistant'}`, name: `${file.replace('.md', '')} - ${subagentInfo[file] || 'Specialized assistant'}`,
value: file, value: file,

View File

@ -353,7 +353,7 @@ class ClaudeCodeSetup extends BaseIdeSetup {
}; };
const selected = await prompts.multiselect({ const selected = await prompts.multiselect({
message: `Select subagents to install ${chalk.dim('(↑/↓ navigate, SPACE select, ENTER confirm)')}:`, message: `Select subagents to install ${chalk.dim('(↑/↓ navigates multiselect, SPACE toggles, A to toggles All, ENTER confirm)')}:`,
options: subagentConfig.files.map((file) => ({ options: subagentConfig.files.map((file) => ({
label: `${file.replace('.md', '')} - ${subagentInfo[file] || 'Specialized assistant'}`, label: `${file.replace('.md', '')} - ${subagentInfo[file] || 'Specialized assistant'}`,
value: file, value: file,

View File

@ -119,7 +119,8 @@ class KiloSetup extends BaseIdeSetup {
modeEntry += ` name: '${icon} ${title}'\n`; modeEntry += ` name: '${icon} ${title}'\n`;
modeEntry += ` roleDefinition: ${roleDefinition}\n`; modeEntry += ` roleDefinition: ${roleDefinition}\n`;
modeEntry += ` whenToUse: ${whenToUse}\n`; modeEntry += ` whenToUse: ${whenToUse}\n`;
modeEntry += ` customInstructions: ${activationHeader} Read the full YAML from ${relativePath} start activation to alter your state of being follow startup section instructions stay in this being until told to exit this mode\n`; modeEntry += ` customInstructions: |\n`;
modeEntry += ` ${activationHeader} Read the full YAML from ${relativePath} start activation to alter your state of being follow startup section instructions stay in this being until told to exit this mode\n`;
modeEntry += ` groups:\n`; modeEntry += ` groups:\n`;
modeEntry += ` - read\n`; modeEntry += ` - read\n`;
modeEntry += ` - edit\n`; modeEntry += ` - edit\n`;
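The fix above switches customInstructions to a YAML block scalar so the long activation sentence sits on its own indented line instead of inline after the key. A small sketch of the emitted YAML, with placeholder values for activationHeader and relativePath (the exact indentation used in kilo.js may differ):

// Placeholder values; the real ones come from the agent being compiled.
const activationHeader = 'CRITICAL ACTIVATION:';
const relativePath = 'bmad/agents/example.agent.yaml';

let modeEntry = '';
modeEntry += ` customInstructions: |\n`;
modeEntry += `  ${activationHeader} Read the full YAML from ${relativePath} start activation to alter your state of being follow startup section instructions stay in this being until told to exit this mode\n`;

console.log(modeEntry);
// customInstructions: |
//   CRITICAL ACTIVATION: Read the full YAML from bmad/agents/example.agent.yaml start activation ...
// The block scalar keeps colons and other special characters inside the instruction from breaking YAML parsing.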

View File

@ -108,7 +108,10 @@ async function resolveSubagentFiles(handlerBaseDir, subagentConfig, subagentChoi
const resolved = []; const resolved = [];
for (const file of filesToCopy) { for (const file of filesToCopy) {
const pattern = path.join(sourceDir, '**', file); // Use forward slashes for glob pattern (works on both Windows and Unix)
// Convert backslashes to forward slashes for glob compatibility
const normalizedSourceDir = sourceDir.replaceAll('\\', '/');
const pattern = `${normalizedSourceDir}/**/${file}`;
const matches = await glob(pattern); const matches = await glob(pattern);
if (matches.length > 0) { if (matches.length > 0) {
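The change above builds the glob pattern from a forward-slash path. A tiny sketch of that normalization, with a made-up helper name and example paths:

// glob patterns expect forward slashes even on Windows, so backslashes produced
// by path.join() are converted before the pattern is assembled.
function buildSubagentPattern(sourceDir, file) {
  const normalizedSourceDir = sourceDir.replaceAll('\\', '/');
  return `${normalizedSourceDir}/**/${file}`;
}

// buildSubagentPattern('C:\\bmad\\src\\subagents', 'qa.md')
//   -> 'C:/bmad/src/subagents/**/qa.md'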

View File

@ -845,14 +845,8 @@ class ModuleManager {
// Compile with customizations if any // Compile with customizations if any
const { xml } = await compileAgent(yamlContent, answers, agentName, relativePath, { config: this.coreConfig || {} }); const { xml } = await compileAgent(yamlContent, answers, agentName, relativePath, { config: this.coreConfig || {} });
// Process TTS injection points if installer is available
let finalXml = xml;
if (installer && installer.processTTSInjectionPoints) {
finalXml = installer.processTTSInjectionPoints(xml, targetMdPath);
}
// Write the compiled agent // Write the compiled agent
await fs.writeFile(targetMdPath, finalXml, 'utf8'); await fs.writeFile(targetMdPath, xml, 'utf8');
// Handle sidecar copying if present // Handle sidecar copying if present
if (hasSidecar) { if (hasSidecar) {

View File

@ -478,39 +478,10 @@ function filterCustomizationData(data) {
return filtered; return filtered;
} }
/**
* Process TTS injection markers in content
* @param {string} content - Content to process
* @param {boolean} enableAgentVibes - Whether AgentVibes is enabled
* @returns {Object} { content: string, hadInjection: boolean }
*/
function processTTSInjectionPoints(content, enableAgentVibes) {
const hasAgentTTS = content.includes('<!-- TTS_INJECTION:agent-tts -->');
if (enableAgentVibes && hasAgentTTS) {
// Replace agent-tts injection marker with TTS rule
content = content.replaceAll(
'<!-- TTS_INJECTION:agent-tts -->',
`- When responding to user messages, speak your responses using TTS:
Call: \`.claude/hooks/bmad-speak.sh '{agent-id}' '{response-text}'\` after each response
Replace {agent-id} with YOUR agent ID from <agent id="..."> tag at top of this file
Replace {response-text} with the text you just output to the user
IMPORTANT: Use single quotes as shown - do NOT escape special characters like ! or $ inside single quotes
Run in background (&) to avoid blocking`,
);
return { content, hadInjection: true };
} else if (!enableAgentVibes && hasAgentTTS) {
// Strip injection markers when disabled
content = content.replaceAll(/<!-- TTS_INJECTION:agent-tts -->\n?/g, '');
}
return { content, hadInjection: false };
}
/** /**
* Compile agent file to .md * Compile agent file to .md
* @param {string} yamlPath - Path to agent YAML file * @param {string} yamlPath - Path to agent YAML file
* @param {Object} options - { answers: {}, outputPath: string, enableAgentVibes: boolean } * @param {Object} options - { answers: {}, outputPath: string }
* @returns {Object} Compilation result * @returns {Object} Compilation result
*/ */
function compileAgentFile(yamlPath, options = {}) { function compileAgentFile(yamlPath, options = {}) {
@ -526,15 +497,6 @@ function compileAgentFile(yamlPath, options = {}) {
outputPath = path.join(dir, `${basename}.md`); outputPath = path.join(dir, `${basename}.md`);
} }
// Process TTS injection points if enableAgentVibes option is provided
let xml = result.xml;
let ttsInjected = false;
if (options.enableAgentVibes !== undefined) {
const ttsResult = processTTSInjectionPoints(xml, options.enableAgentVibes);
xml = ttsResult.content;
ttsInjected = ttsResult.hadInjection;
}
// Write compiled XML // Write compiled XML
fs.writeFileSync(outputPath, xml, 'utf8'); fs.writeFileSync(outputPath, xml, 'utf8');
@ -543,7 +505,6 @@ function compileAgentFile(yamlPath, options = {}) {
xml, xml,
outputPath, outputPath,
sourcePath: yamlPath, sourcePath: yamlPath,
ttsInjected,
}; };
} }

View File

@ -184,6 +184,7 @@ async function groupMultiselect(options) {
options: options.options, options: options.options,
initialValues: options.initialValues, initialValues: options.initialValues,
required: options.required || false, required: options.required || false,
selectableGroups: options.selectableGroups || false,
}); });
await handleCancel(result); await handleCancel(result);

View File

@ -171,32 +171,6 @@ class UI {
// Check if there's an existing BMAD installation (after any folder renames) // Check if there's an existing BMAD installation (after any folder renames)
const hasExistingInstall = await fs.pathExists(bmadDir); const hasExistingInstall = await fs.pathExists(bmadDir);
// Collect IDE tool selection early - we need this to know if we should ask about TTS
let toolSelection;
let agentVibesConfig = { enabled: false, alreadyInstalled: false };
let claudeCodeSelected = false;
if (!hasExistingInstall) {
// For new installations, collect IDE selection first
// We don't have modules yet, so pass empty array
toolSelection = await this.promptToolSelection(confirmedDirectory, []);
// Check if Claude Code was selected
claudeCodeSelected = toolSelection.ides && toolSelection.ides.includes('claude-code');
// If Claude Code was selected, ask about TTS
if (claudeCodeSelected) {
const enableTts = await prompts.confirm({
message: 'Claude Code supports TTS (Text-to-Speech). Would you like to enable it?',
default: false,
});
if (enableTts) {
agentVibesConfig = { enabled: true, alreadyInstalled: false };
}
}
}
let customContentConfig = { hasCustomContent: false }; let customContentConfig = { hasCustomContent: false };
if (!hasExistingInstall) { if (!hasExistingInstall) {
customContentConfig._shouldAsk = true; customContentConfig._shouldAsk = true;
@ -324,20 +298,8 @@ class UI {
} }
// Get tool selection // Get tool selection
const toolSelection = await this.promptToolSelection(confirmedDirectory, selectedModules); const toolSelection = await this.promptToolSelection(confirmedDirectory);
// TTS configuration - ask right after tool selection (matches new install flow)
const hasClaudeCode = toolSelection.ides && toolSelection.ides.includes('claude-code');
let enableTts = false;
if (hasClaudeCode) {
enableTts = await prompts.confirm({
message: 'Claude Code supports TTS (Text-to-Speech). Would you like to enable it?',
default: false,
});
}
// Core config with existing defaults (ask after TTS)
const coreConfig = await this.collectCoreConfig(confirmedDirectory); const coreConfig = await this.collectCoreConfig(confirmedDirectory);
return { return {
@ -349,8 +311,6 @@ class UI {
skipIde: toolSelection.skipIde, skipIde: toolSelection.skipIde,
coreConfig: coreConfig, coreConfig: coreConfig,
customContent: customModuleResult.customContentConfig, customContent: customModuleResult.customContentConfig,
enableAgentVibes: enableTts,
agentVibesInstalled: false,
}; };
} }
} }
@ -372,7 +332,7 @@ class UI {
// Ask about custom content // Ask about custom content
const wantsCustomContent = await prompts.confirm({ const wantsCustomContent = await prompts.confirm({
message: 'Would you like to install a local custom module (this includes custom agents and workflows also)?', message: 'Would you like to install a locally stored custom module (this includes custom agents and workflows also)?',
default: false, default: false,
}); });
@ -391,19 +351,10 @@ class UI {
selectedModules = [...selectedModules, ...customContentConfig.selectedModuleIds]; selectedModules = [...selectedModules, ...customContentConfig.selectedModuleIds];
} }
// Remove core if it's in the list (it's always installed)
selectedModules = selectedModules.filter((m) => m !== 'core'); selectedModules = selectedModules.filter((m) => m !== 'core');
let toolSelection = await this.promptToolSelection(confirmedDirectory);
// Tool selection (already done for new installs at the beginning)
if (!toolSelection) {
toolSelection = await this.promptToolSelection(confirmedDirectory, selectedModules);
}
// Collect configurations for new installations
const coreConfig = await this.collectCoreConfig(confirmedDirectory); const coreConfig = await this.collectCoreConfig(confirmedDirectory);
// TTS already handled at the beginning for new installs
return { return {
actionType: 'install', actionType: 'install',
directory: confirmedDirectory, directory: confirmedDirectory,
@ -413,18 +364,15 @@ class UI {
skipIde: toolSelection.skipIde, skipIde: toolSelection.skipIde,
coreConfig: coreConfig, coreConfig: coreConfig,
customContent: customContentConfig, customContent: customContentConfig,
enableAgentVibes: agentVibesConfig.enabled,
agentVibesInstalled: agentVibesConfig.alreadyInstalled,
}; };
} }
/** /**
* Prompt for tool/IDE selection (called after module configuration) * Prompt for tool/IDE selection (called after module configuration)
* @param {string} projectDir - Project directory to check for existing IDEs * @param {string} projectDir - Project directory to check for existing IDEs
* @param {Array} selectedModules - Selected modules from configuration
* @returns {Object} Tool configuration * @returns {Object} Tool configuration
*/ */
async promptToolSelection(projectDir, selectedModules) { async promptToolSelection(projectDir) {
// Check for existing configured IDEs - use findBmadDir to detect custom folder names // Check for existing configured IDEs - use findBmadDir to detect custom folder names
const { Detector } = require('../installers/lib/core/detector'); const { Detector } = require('../installers/lib/core/detector');
const { Installer } = require('../installers/lib/core/installer'); const { Installer } = require('../installers/lib/core/installer');
@ -447,7 +395,7 @@ class UI {
const processedIdes = new Set(); const processedIdes = new Set();
const initialValues = []; const initialValues = [];
// First, add previously configured IDEs at the top, marked with ✅ // First, add previously configured IDEs, marked with ✅
if (configuredIdes.length > 0) { if (configuredIdes.length > 0) {
const configuredGroup = []; const configuredGroup = [];
for (const ideValue of configuredIdes) { for (const ideValue of configuredIdes) {
@ -499,42 +447,33 @@ class UI {
})); }));
} }
// Add standalone "None" option at the end
groupedOptions[' '] = [
{
label: '⚠ None - I am not installing any tools',
value: '__NONE__',
},
];
let selectedIdes = []; let selectedIdes = [];
let userConfirmedNoTools = false;
// Loop until user selects at least one tool OR explicitly confirms no tools selectedIdes = await prompts.groupMultiselect({
while (!userConfirmedNoTools) { message: `Select tools to configure ${chalk.dim('(↑/↓ navigates multiselect, SPACE toggles, A to toggles All, ENTER confirm)')}:`,
selectedIdes = await prompts.groupMultiselect({ options: groupedOptions,
message: `Select tools to configure ${chalk.dim('(↑/↓ navigate, SPACE select, ENTER confirm)')}:`, initialValues: initialValues.length > 0 ? initialValues : undefined,
options: groupedOptions, required: true,
initialValues: initialValues.length > 0 ? initialValues : undefined, selectableGroups: false,
required: false, });
});
// If tools were selected, we're done // If user selected both "__NONE__" and other tools, honor the "None" choice
if (selectedIdes && selectedIdes.length > 0) { if (selectedIdes && selectedIdes.includes('__NONE__') && selectedIdes.length > 1) {
break;
}
// Warn that no tools were selected - users often miss the spacebar requirement
console.log(); console.log();
console.log(chalk.red.bold('⚠️ WARNING: No tools were selected!')); console.log(chalk.yellow('⚠️ "None - I am not installing any tools" was selected, so no tools will be configured.'));
console.log(chalk.red(' You must press SPACE to select items, then ENTER to confirm.'));
console.log(chalk.red(' Simply highlighting an item does NOT select it.'));
console.log(); console.log();
selectedIdes = [];
const goBack = await prompts.confirm({ } else if (selectedIdes && selectedIdes.includes('__NONE__')) {
message: chalk.yellow('Would you like to go back and select at least one tool?'), // Only "__NONE__" was selected
default: true, selectedIdes = [];
});
if (goBack) {
// Re-display a message before looping back
console.log();
} else {
// User explicitly chose to proceed without tools
userConfirmedNoTools = true;
}
} }
return { return {
@ -561,27 +500,6 @@ class UI {
return { backupFirst, preserveCustomizations }; return { backupFirst, preserveCustomizations };
} }
/**
* Prompt for module selection
* @param {Array} modules - Available modules
* @returns {Array} Selected modules
*/
async promptModules(modules) {
const choices = modules.map((mod) => ({
name: `${mod.name} - ${mod.description}`,
value: mod.id,
checked: false,
}));
const selectedModules = await prompts.multiselect({
message: `Select modules to add ${chalk.dim('(↑/↓ navigate, SPACE select, ENTER confirm)')}:`,
choices,
required: true,
});
return selectedModules;
}
/** /**
* Confirm action * Confirm action
* @param {string} message - Confirmation message * @param {string} message - Confirmation message
@ -608,25 +526,6 @@ class UI {
if (result.modules && result.modules.length > 0) { if (result.modules && result.modules.length > 0) {
console.log(chalk.dim(`Modules: ${result.modules.join(', ')}`)); console.log(chalk.dim(`Modules: ${result.modules.join(', ')}`));
} }
if (result.agentVibesEnabled) {
console.log(chalk.dim(`TTS: Enabled`));
}
// TTS injection info (simplified)
if (result.ttsInjectedFiles && result.ttsInjectedFiles.length > 0) {
console.log(chalk.dim(`\n💡 TTS enabled for ${result.ttsInjectedFiles.length} agent(s)`));
console.log(chalk.dim(' Agents will now speak when using AgentVibes'));
}
console.log(chalk.yellow('\nThank you for helping test the early release version of the new BMad Core and BMad Method!'));
console.log(chalk.cyan('Stable Beta coming soon - please read the full README.md and linked documentation to get started!'));
// Add changelog link at the end
console.log(
chalk.magenta(
"\n📋 Want to see what's new? Check out the changelog: https://github.com/bmad-code-org/BMAD-METHOD/blob/main/CHANGELOG.md",
),
);
} }
/** /**
@ -768,20 +667,40 @@ class UI {
* @param {Array} moduleChoices - Available module choices * @param {Array} moduleChoices - Available module choices
* @returns {Array} Selected module IDs * @returns {Array} Selected module IDs
*/ */
async selectModules(moduleChoices, defaultSelections = []) { async selectModules(moduleChoices, defaultSelections = null) {
// Mark choices as checked based on defaultSelections // If defaultSelections is provided, use it to override checked state
// Otherwise preserve the checked state from moduleChoices (set by getModuleChoices)
const choicesWithDefaults = moduleChoices.map((choice) => ({ const choicesWithDefaults = moduleChoices.map((choice) => ({
...choice, ...choice,
checked: defaultSelections.includes(choice.value), ...(defaultSelections === null ? {} : { checked: defaultSelections.includes(choice.value) }),
})); }));
// Add a "None" option at the end for users who changed their mind
const choicesWithSkipOption = [
...choicesWithDefaults,
{
value: '__NONE__',
label: '⚠ None / I changed my mind - skip module installation',
checked: false,
},
];
const selected = await prompts.multiselect({ const selected = await prompts.multiselect({
message: `Select modules to install ${chalk.dim('(↑/↓ navigate, SPACE select, ENTER confirm)')}:`, message: `Select modules to install ${chalk.dim('(↑/↓ navigates multiselect, SPACE toggles, A to toggles All, ENTER confirm)')}:`,
choices: choicesWithDefaults, choices: choicesWithSkipOption,
required: false, required: true,
}); });
return selected || []; // If user selected both "__NONE__" and other items, honor the "None" choice
if (selected && selected.includes('__NONE__') && selected.length > 1) {
console.log();
console.log(chalk.yellow('⚠️ "None / I changed my mind" was selected, so no modules will be installed.'));
console.log();
return [];
}
// Filter out the special '__NONE__' value
return selected ? selected.filter((m) => m !== '__NONE__') : [];
} }
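The selection handling above (and the tool-selection and custom-module prompts elsewhere in this change) honors an explicit "None" choice even when other items were also ticked. A compact sketch of that rule, assuming the same '__NONE__' sentinel value:

// Sketch of the "__NONE__" sentinel handling used by the reworked multiselects (illustrative).
function resolveSelection(selected) {
  if (!selected || selected.length === 0) return [];
  // An explicit "None" wins over anything else that was ticked.
  if (selected.includes('__NONE__')) return [];
  return selected;
}

// resolveSelection(['bmm', '__NONE__']) -> []
// resolveSelection(['bmm', 'bmgd'])     -> ['bmm', 'bmgd']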
/** /**
@ -1061,136 +980,6 @@ class UI {
return path.resolve(expanded); return path.resolve(expanded);
} }
/**
* @function promptAgentVibes
* @intent Ask user if they want AgentVibes TTS integration during BMAD installation
* @why Enables optional voice features without forcing TTS on users who don't want it
* @param {string} projectDir - Absolute path to user's project directory
* @returns {Promise<Object>} Configuration object: { enabled: boolean, alreadyInstalled: boolean }
* @sideeffects None - pure user input collection, no files written
* @edgecases Shows warning if user enables TTS but AgentVibes not detected
* @calledby promptInstall() during installation flow, after core config, before IDE selection
* @calls checkAgentVibesInstalled(), prompts.select(), chalk.green/yellow/dim()
*
* AI NOTE: This prompt is strategically positioned in installation flow:
* - AFTER core config (user_name, etc)
* - BEFORE IDE selection (which can hang on Windows/PowerShell)
*
* Flow Logic:
* 1. Auto-detect if AgentVibes already installed (checks for hook files)
* 2. Show detection status to user (green checkmark or gray "not detected")
* 3. Prompt: "Enable AgentVibes TTS?" (defaults to true if detected)
* 4. If user says YES but AgentVibes NOT installed:
* Show warning with installation link (graceful degradation)
* 5. Return config to promptInstall(), which passes to installer.install()
*
* State Flow:
* promptAgentVibes() → { enabled, alreadyInstalled }
*
* promptInstall() → config.enableAgentVibes
*
* installer.install() → this.enableAgentVibes
*
* processTTSInjectionPoints() → injects OR strips markers
*
* RELATED:
* ========
* - Detection: checkAgentVibesInstalled() - looks for bmad-speak.sh and play-tts.sh
* - Processing: installer.js::processTTSInjectionPoints()
* - Markers: src/core/workflows/party-mode/instructions.md:101, src/modules/bmm/agents/*.md
* - GitHub Issue: paulpreibisch/AgentVibes#36
*/
async promptAgentVibes(projectDir) {
CLIUtils.displaySection('🎤 Voice Features', 'Enable TTS for multi-agent conversations');
// Check if AgentVibes is already installed
const agentVibesInstalled = await this.checkAgentVibesInstalled(projectDir);
if (agentVibesInstalled) {
console.log(chalk.green(' ✓ AgentVibes detected'));
} else {
console.log(chalk.dim(' AgentVibes not detected'));
}
const enableTts = await prompts.confirm({
message: 'Enable Agents to Speak Out loud (powered by Agent Vibes? Claude Code only currently)',
default: false,
});
if (enableTts && !agentVibesInstalled) {
console.log(chalk.yellow('\n ⚠️ AgentVibes not installed'));
console.log(chalk.dim(' Install AgentVibes separately to enable TTS:'));
console.log(chalk.dim(' https://github.com/paulpreibisch/AgentVibes\n'));
}
return {
enabled: enableTts,
alreadyInstalled: agentVibesInstalled,
};
}
/**
* @function checkAgentVibesInstalled
* @intent Detect if AgentVibes TTS hooks are present in user's project
* @why Allows auto-enabling TTS and showing helpful installation guidance
* @param {string} projectDir - Absolute path to user's project directory
* @returns {Promise<boolean>} true if both required AgentVibes hooks exist, false otherwise
* @sideeffects None - read-only file existence checks
* @edgecases Returns false if either hook missing (both required for functional TTS)
* @calledby promptAgentVibes() to determine default value and show detection status
* @calls fs.pathExists() twice (bmad-speak.sh, play-tts.sh)
*
* AI NOTE: This checks for the MINIMUM viable AgentVibes installation.
*
* Required Files:
* ===============
* 1. .claude/hooks/bmad-speak.sh
* - Maps agent display names → agent IDs → voice profiles
* - Calls play-tts.sh with agent's assigned voice
* - Created by AgentVibes installer
*
* 2. .claude/hooks/play-tts.sh
* - Core TTS router (ElevenLabs or Piper)
* - Provider-agnostic interface
* - Required by bmad-speak.sh
*
* Why Both Required:
* ==================
* - bmad-speak.sh alone: No TTS backend
* - play-tts.sh alone: No BMAD agent voice mapping
* - Both together: Full party mode TTS integration
*
* Detection Strategy:
* ===================
* We use simple file existence (not version checks) because:
* - Fast and reliable
* - Works across all AgentVibes versions
* - User will discover version issues when TTS runs (fail-fast)
*
* PATTERN: Adding New Detection Criteria
* =======================================
* If future AgentVibes features require additional files:
* 1. Add new pathExists check to this function
* 2. Update documentation in promptAgentVibes()
* 3. Consider: should missing file prevent detection or just log warning?
*
* RELATED:
* ========
* - AgentVibes Installer: creates these hooks
* - bmad-speak.sh: calls play-tts.sh with agent voices
* - Party Mode: uses bmad-speak.sh for agent dialogue
*/
async checkAgentVibesInstalled(projectDir) {
const fs = require('fs-extra');
const path = require('node:path');
// Check for AgentVibes hook files
const hookPath = path.join(projectDir, '.claude', 'hooks', 'bmad-speak.sh');
const playTtsPath = path.join(projectDir, '.claude', 'hooks', 'play-tts.sh');
return (await fs.pathExists(hookPath)) && (await fs.pathExists(playTtsPath));
}
/** /**
* Load existing configurations to use as defaults * Load existing configurations to use as defaults
* @param {string} directory - Installation directory * @param {string} directory - Installation directory
@ -1201,7 +990,6 @@ class UI {
hasCustomContent: false, hasCustomContent: false,
coreConfig: {}, coreConfig: {},
ideConfig: { ides: [], skipIde: false }, ideConfig: { ides: [], skipIde: false },
agentVibesConfig: { enabled: false, alreadyInstalled: false },
}; };
try { try {
@ -1215,10 +1003,6 @@ class UI {
configs.ideConfig.skipIde = false; configs.ideConfig.skipIde = false;
} }
// Load AgentVibes configuration
const agentVibesInstalled = await this.checkAgentVibesInstalled(directory);
configs.agentVibesConfig = { enabled: agentVibesInstalled, alreadyInstalled: agentVibesInstalled };
return configs; return configs;
} catch { } catch {
// If loading fails, return empty configs // If loading fails, return empty configs
@ -1461,12 +1245,32 @@ class UI {
checked: m.checked, checked: m.checked,
})); }));
// Add "None / I changed my mind" option at the end
const choicesWithSkip = [
...selectChoices,
{
name: '⚠ None / I changed my mind - keep no custom modules',
value: '__NONE__',
checked: false,
},
];
const keepModules = await prompts.multiselect({ const keepModules = await prompts.multiselect({
message: `Select custom modules to keep ${chalk.dim('(↑/↓ navigate, SPACE select, ENTER confirm)')}:`, message: `Select custom modules to keep ${chalk.dim('(↑/↓ navigates multiselect, SPACE toggles, A to toggles All, ENTER confirm)')}:`,
choices: selectChoices, choices: choicesWithSkip,
required: false, required: true,
}); });
result.selectedCustomModules = keepModules || [];
// If user selected both "__NONE__" and other modules, honor the "None" choice
if (keepModules && keepModules.includes('__NONE__') && keepModules.length > 1) {
console.log();
console.log(chalk.yellow('⚠️ "None / I changed my mind" was selected, so no custom modules will be kept.'));
console.log();
result.selectedCustomModules = [];
} else {
// Filter out the special '__NONE__' value
result.selectedCustomModules = keepModules ? keepModules.filter((m) => m !== '__NONE__') : [];
}
break; break;
} }