Compare commits

...

5 Commits

Author SHA1 Message Date
Sjoerd Bozon 312ff23066
Merge 015c74c46f into eeebf152af 2026-01-13 09:53:10 +01:00
Murat K Ozcan eeebf152af
docs: tea editorial review (#1313)
* docs: tea editorial review

* fix: addressed nit pick
2026-01-13 02:27:25 +08:00
Sjoerd Bozon 015c74c46f
Merge branch 'bmad-code-org:main' into feat/workflow-prompt-recommendations 2026-01-05 10:24:53 +01:00
Sjoerd Bozon 9317ef5a62 fix: address Copilot review feedback on PR #1205
- Move step 6 before WORKFLOW COMPLETE marker (fixes workflow structure)
- Change PRD shortcut from PR to PD (avoids conflict with parallel-research)
- Clarify instructions for reading/updating VS Code settings
- Update phase 4 comment to match actual handoff flow
2025-12-29 00:16:38 +01:00
Sjoerd Bozon d662aee4b2 feat: add VS Code workflow prompt recommendations
Add chat.promptFilesRecommendations support for GitHub Copilot to show
workflow shortcuts as new chat starters.

- Add workflow-prompts-config.js with all BMM, BMGD, and core prompts
- Add workflow-prompt-generator.js to create .github/prompts/*.prompt.md
- Update github-copilot.js to generate prompts and configure VS Code
- Add phase-based prompt toggling to implementation-readiness workflow
- Add phase-based prompt toggling to sprint-planning workflow

When implementation-readiness passes or sprint-planning completes, the
workflows update VS Code settings to prioritize the 'keep going' cycle
(create-story → dev-story → code-review) over setup phase prompts.
2025-12-28 23:44:40 +01:00
6 changed files with 414 additions and 219 deletions

View File

@ -26,7 +26,7 @@ BMad does not mandate TEA. There are five valid ways to use it (or skip it). Pic
2. **TEA-only (Standalone)** 2. **TEA-only (Standalone)**
- Use TEA on a non-BMad project. Bring your own requirements, acceptance criteria, and environments. - Use TEA on a non-BMad project. Bring your own requirements, acceptance criteria, and environments.
- Typical sequence: `*test-design` (system or epic) -> `*atdd` and/or `*automate` -> optional `*test-review` -> `*trace` for coverage and gate decisions. - Typical sequence: `*test-design` (system or epic) -> `*atdd` and/or `*automate` -> optional `*test-review` -> `*trace` for coverage and gate decisions.
- Run `*framework` or `*ci` only if you want TEA to scaffold the harness or pipeline. - Run `*framework` or `*ci` only if you want TEA to scaffold the harness or pipeline; they work best after you decide the stack/architecture.
3. **Integrated: Greenfield - BMad Method (Simple/Standard Work)** 3. **Integrated: Greenfield - BMad Method (Simple/Standard Work)**
- Phase 3: system-level `*test-design`, then `*framework` and `*ci`. - Phase 3: system-level `*test-design`, then `*framework` and `*ci`.
@ -48,8 +48,29 @@ BMad does not mandate TEA. There are five valid ways to use it (or skip it). Pic
If you are unsure, default to the integrated path for your track and adjust later. If you are unsure, default to the integrated path for your track and adjust later.
## TEA Command Catalog
| Command | Primary Outputs | Notes | With Playwright MCP Enhancements |
| -------------- | --------------------------------------------------------------------------------------------- | ---------------------------------------------------- | ------------------------------------------------------------------------------------------------------------ |
| `*framework` | Playwright/Cypress scaffold, `.env.example`, `.nvmrc`, sample specs | Use when no production-ready harness exists | - |
| `*ci` | CI workflow, selective test scripts, secrets checklist | Platform-aware (GitHub Actions default) | - |
| `*test-design` | Combined risk assessment, mitigation plan, and coverage strategy | Risk scoring + optional exploratory mode | **+ Exploratory**: Interactive UI discovery with browser automation (uncover actual functionality) |
| `*atdd` | Failing acceptance tests + implementation checklist | TDD red phase + optional recording mode | **+ Recording**: AI generation verified with live browser (accurate selectors from real DOM) |
| `*automate` | Prioritized specs, fixtures, README/script updates, DoD summary | Optional healing/recording, avoid duplicate coverage | **+ Healing**: Pattern fixes enhanced with visual debugging + **+ Recording**: AI verified with live browser |
| `*test-review` | Test quality review report with 0-100 score, violations, fixes | Reviews tests against knowledge base patterns | - |
| `*nfr-assess` | NFR assessment report with actions | Focus on security/performance/reliability | - |
| `*trace` | Phase 1: Coverage matrix, recommendations. Phase 2: Gate decision (PASS/CONCERNS/FAIL/WAIVED) | Two-phase workflow: traceability + gate decision | - |
## TEA Workflow Lifecycle ## TEA Workflow Lifecycle
**Phase Numbering Note:** BMad uses a 4-phase methodology with optional Phase 1 and a documentation prerequisite:
- **Documentation** (Optional for brownfield): Prerequisite using `*document-project`
- **Phase 1** (Optional): Discovery/Analysis (`*brainstorm`, `*research`, `*product-brief`)
- **Phase 2** (Required): Planning (`*prd` creates PRD with FRs/NFRs)
- **Phase 3** (Track-dependent): Solutioning (`*architecture` → `*test-design` (system-level) → `*create-epics-and-stories` → TEA: `*framework`, `*ci` → `*implementation-readiness`)
- **Phase 4** (Required): Implementation (`*sprint-planning` → per-epic: `*test-design` → per-story: dev workflows)
TEA integrates into the BMad development lifecycle during Solutioning (Phase 3) and Implementation (Phase 4): TEA integrates into the BMad development lifecycle during Solutioning (Phase 3) and Implementation (Phase 4):
```mermaid ```mermaid
@ -132,62 +153,25 @@ graph TB
style Waived fill:#9c27b0,stroke:#4a148c,stroke-width:3px,color:#000 style Waived fill:#9c27b0,stroke:#4a148c,stroke-width:3px,color:#000
``` ```
**Phase Numbering Note:** BMad uses a 4-phase methodology with optional Phase 1 and documentation prerequisite:
- **Documentation** (Optional for brownfield): Prerequisite using `*document-project`
- **Phase 1** (Optional): Discovery/Analysis (`*brainstorm`, `*research`, `*product-brief`)
- **Phase 2** (Required): Planning (`*prd` creates PRD with FRs/NFRs)
- **Phase 3** (Track-dependent): Solutioning (`*architecture` → `*test-design` (system-level) → `*create-epics-and-stories` → TEA: `*framework`, `*ci` → `*implementation-readiness`)
- **Phase 4** (Required): Implementation (`*sprint-planning` → per-epic: `*test-design` → per-story: dev workflows)
**TEA workflows:** `*framework` and `*ci` run once in Phase 3 after architecture. `*test-design` is **dual-mode**: **TEA workflows:** `*framework` and `*ci` run once in Phase 3 after architecture. `*test-design` is **dual-mode**:
- **System-level (Phase 3):** Run immediately after architecture/ADR drafting to produce `test-design-system.md` (testability review, ADR → test mapping, Architecturally Significant Requirements (ASRs), environment needs). Feeds the implementation-readiness gate. - **System-level (Phase 3):** Run immediately after architecture/ADR drafting to produce `test-design-system.md` (testability review, ADR → test mapping, Architecturally Significant Requirements (ASRs), environment needs). Feeds the implementation-readiness gate.
- **Epic-level (Phase 4):** Run per-epic to produce `test-design-epic-N.md` (risk, priorities, coverage plan). - **Epic-level (Phase 4):** Run per-epic to produce `test-design-epic-N.md` (risk, priorities, coverage plan).
Quick Flow track skips Phases 1 and 3. The Quick Flow track skips Phases 1 and 3.
BMad Method and Enterprise use all phases based on project needs. BMad Method and Enterprise use all phases based on project needs.
When an ADR or architecture draft is produced, run `*test-design` in **system-level** mode before the implementation-readiness gate. This ensures the ADR has an attached testability review and ADR → test mapping. Keep the test-design updated if ADRs change. When an ADR or architecture draft is produced, run `*test-design` in **system-level** mode before the implementation-readiness gate. This ensures the ADR has an attached testability review and ADR → test mapping. Keep the test-design updated if ADRs change.
## Why TEA is Different from Other BMM Agents ## Why TEA Is Different from Other BMM Agents
TEA is the only BMM agent that operates in **multiple phases** (Phase 3 and Phase 4) and has its own **knowledge base architecture**. TEA spans multiple phases (Phase 3, Phase 4, and the release gate). Most BMM agents operate in a single phase. That multi-phase role is paired with a dedicated testing knowledge base so standards stay consistent across projects.
### Phase-Specific Agents (Standard Pattern)
Most BMM agents work in a single phase:
- **Phase 1 (Analysis)**: Analyst agent
- **Phase 2 (Planning)**: PM agent
- **Phase 3 (Solutioning)**: Architect agent
- **Phase 4 (Implementation)**: SM, DEV agents
### TEA: Multi-Phase Quality Agent (Unique Pattern)
TEA is **the only agent that operates in multiple phases**:
```
Phase 1 (Analysis) → [TEA not typically used]
Phase 2 (Planning) → [PM defines requirements - TEA not active]
Phase 3 (Solutioning) → TEA: *framework, *ci (test infrastructure AFTER architecture)
Phase 4 (Implementation) → TEA: *test-design (per epic: "how do I test THIS feature?")
→ TEA: *atdd, *automate, *test-review, *trace (per story)
Epic/Release Gate → TEA: *nfr-assess, *trace Phase 2 (release decision)
```
### TEA's 8 Workflows Across Phases ### TEA's 8 Workflows Across Phases
**Standard agents**: 1-3 workflows per phase
**TEA**: 8 workflows across Phase 3, Phase 4, and Release Gate
| Phase | TEA Workflows | Frequency | Purpose | | Phase | TEA Workflows | Frequency | Purpose |
| ----------- | --------------------------------------------------------- | ---------------- | ---------------------------------------------- | | ----------- | --------------------------------------------------------- | ---------------- | ---------------------------------------------- |
| **Phase 2** | (none) | - | Planning phase - PM defines requirements | | **Phase 2** | (none) | - | Planning phase - PM defines requirements |
| **Phase 3** | \*framework, \*ci | Once per project | Setup test infrastructure AFTER architecture | | **Phase 3** | \*test-design (system-level), \*framework, \*ci | Once per project | System testability review and test infrastructure setup |
| **Phase 4** | \*test-design, \*atdd, \*automate, \*test-review, \*trace | Per epic/story | Test planning per epic, then per-story testing | | **Phase 4** | \*test-design, \*atdd, \*automate, \*test-review, \*trace | Per epic/story | Test planning per epic, then per-story testing |
| **Release** | \*nfr-assess, \*trace (Phase 2: gate) | Per epic/release | Go/no-go decision | | **Release** | \*nfr-assess, \*trace (Phase 2: gate) | Per epic/release | Go/no-go decision |
@ -197,17 +181,17 @@ Epic/Release Gate → TEA: *nfr-assess, *trace Phase 2 (release decision)
TEA uniquely requires: TEA uniquely requires:
- **Extensive domain knowledge**: 30+ fragments covering test patterns, CI/CD, fixtures, quality practices, and optional playwright-utils integration - **Extensive domain knowledge**: Test patterns, CI/CD, fixtures, and quality practices
- **Cross-cutting concerns**: Domain-specific testing patterns that apply across all BMad projects (vs project-specific artifacts like PRDs/stories) - **Cross-cutting concerns**: Standards that apply across all BMad projects (not just PRDs or stories)
- **Optional integrations**: MCP capabilities (exploratory, verification) and playwright-utils support - **Optional integrations**: Playwright-utils and MCP enhancements
This architecture enables TEA to maintain consistent, production-ready testing patterns across all BMad projects while operating across multiple development phases. This architecture lets TEA maintain consistent, production-ready testing patterns while operating across multiple phases.
## High-Level Cheat Sheets ## Track Cheat Sheets (Condensed)
These cheat sheets map TEA workflows to the **BMad Method and Enterprise tracks** across the **4-Phase Methodology** (Phase 1: Analysis, Phase 2: Planning, Phase 3: Solutioning, Phase 4: Implementation). These cheat sheets map TEA workflows to the **BMad Method and Enterprise tracks** across the **4-Phase Methodology** (Phase 1: Analysis, Phase 2: Planning, Phase 3: Solutioning, Phase 4: Implementation).
**Note:** Quick Flow projects typically don't require TEA (covered in Overview). These cheat sheets focus on BMad Method and Enterprise tracks where TEA adds value. **Note:** The Quick Flow track typically doesn't require TEA (covered in Overview). These cheat sheets focus on BMad Method and Enterprise tracks where TEA adds value.
**Legend for Track Deltas:** **Legend for Track Deltas:**
@ -231,39 +215,15 @@ These cheat sheets map TEA workflows to the **BMad Method and Enterprise tracks*
| **Phase 4**: Story Review | Execute `*test-review` (optional), re-run `*trace` | Address recommendations, update code/tests | Quality report, refreshed coverage matrix | | **Phase 4**: Story Review | Execute `*test-review` (optional), re-run `*trace` | Address recommendations, update code/tests | Quality report, refreshed coverage matrix |
| **Phase 4**: Release Gate | (Optional) `*test-review` for final audit, Run `*trace` (Phase 2) | Confirm Definition of Done, share release notes | Quality audit, Gate YAML + release summary | | **Phase 4**: Release Gate | (Optional) `*test-review` for final audit, Run `*trace` (Phase 2) | Confirm Definition of Done, share release notes | Quality audit, Gate YAML + release summary |
<details> **Key notes:**
<summary>Execution Notes</summary> - Run `*framework` and `*ci` once in Phase 3 after architecture.
- Run `*test-design` per epic in Phase 4; use `*atdd` before dev when helpful.
- Run `*framework` only once per repo or when modern harness support is missing. - Use `*trace` for gate decisions; `*test-review` is an optional audit.
- **Phase 3 (Solutioning)**: After architecture is complete, run `*framework` and `*ci` to setup test infrastructure based on architectural decisions.
- **Phase 4 starts**: After solutioning is complete, sprint planning loads all epics.
- **`*test-design` runs per-epic**: At the beginning of working on each epic, run `*test-design` to create a test plan for THAT specific epic/feature. Output: `test-design-epic-N.md`.
- Use `*atdd` before coding when the team can adopt ATDD; share its checklist with the dev agent.
- Post-implementation, keep `*trace` current, expand coverage with `*automate`, optionally review test quality with `*test-review`. For release gate, run `*trace` with Phase 2 enabled to get deployment decision.
- Use `*test-review` after `*atdd` to validate generated tests, after `*automate` to ensure regression quality, or before gate for final audit.
- Clarification: `*test-review` is optional and only audits existing tests; run it after `*atdd` or `*automate` when you want a quality review, not as a required step.
- Clarification: `*atdd` outputs are not auto-consumed; share the ATDD doc/tests with the dev workflow. `*trace` does not run `*atdd`—it evaluates existing artifacts for coverage and gate readiness.
- Clarification: `*ci` is a one-time setup; recommended early (Phase 3 or before feature work), but it can be done later if it was skipped.
</details>
<details>
<summary>Worked Example “Nova CRM” Greenfield Feature</summary>
1. **Planning (Phase 2):** Analyst runs `*product-brief`; PM executes `*prd` to produce PRD with FRs/NFRs.
2. **Solutioning (Phase 3):** Architect completes `*architecture` for the new module; `*create-epics-and-stories` generates epics/stories based on architecture; TEA sets up test infrastructure via `*framework` and `*ci` based on architectural decisions; gate check validates planning completeness.
3. **Sprint Start (Phase 4):** Scrum Master runs `*sprint-planning` to load all epics into sprint status.
4. **Epic 1 Planning (Phase 4):** TEA runs `*test-design` to create test plan for Epic 1, producing `test-design-epic-1.md` with risk assessment.
5. **Story Implementation (Phase 4):** For each story in Epic 1, SM generates story via `*create-story`; TEA optionally runs `*atdd`; Dev implements with guidance from failing tests.
6. **Post-Dev (Phase 4):** TEA runs `*automate`, optionally `*test-review` to audit test quality, re-runs `*trace` to refresh coverage.
7. **Release Gate:** TEA runs `*trace` with Phase 2 enabled to generate gate decision.
</details>
### Brownfield - BMad Method or Enterprise (Simple or Complex) ### Brownfield - BMad Method or Enterprise (Simple or Complex)
**Planning Tracks:** BMad Method or Enterprise Method **Planning Tracks:** BMad Method or Enterprise Method
**Use Case:** Existing codebases - simple additions (BMad Method) or complex enterprise requirements (Enterprise Method) **Use Case:** Existing codebases: simple additions (BMad Method) or complex enterprise requirements (Enterprise Method)
**🔄 Brownfield Deltas from Greenfield:** **🔄 Brownfield Deltas from Greenfield:**
@ -284,31 +244,10 @@ These cheat sheets map TEA workflows to the **BMad Method and Enterprise tracks*
| **Phase 4**: Story Review | Apply `*test-review` (optional), re-run `*trace`, `*nfr-assess` if needed | Resolve gaps, update docs/tests | Quality report, refreshed coverage matrix, NFR report | | **Phase 4**: Story Review | Apply `*test-review` (optional), re-run `*trace`, `*nfr-assess` if needed | Resolve gaps, update docs/tests | Quality report, refreshed coverage matrix, NFR report |
| **Phase 4**: Release Gate | (Optional) `*test-review` for final audit, Run `*trace` (Phase 2) | Capture sign-offs, share release notes | Quality audit, Gate YAML + release summary | | **Phase 4**: Release Gate | (Optional) `*test-review` for final audit, Run `*trace` (Phase 2) | Capture sign-offs, share release notes | Quality audit, Gate YAML + release summary |
<details> **Key notes:**
<summary>Execution Notes</summary> - Start with `*trace` in Phase 2 to baseline coverage.
- Focus `*test-design` on regression hotspots and integration risk.
- Lead with `*trace` during Planning (Phase 2) to baseline existing test coverage before architecture work begins. - Run `*nfr-assess` before the gate if it wasn't done earlier.
- **Phase 3 (Solutioning)**: After architecture is complete, run `*framework` and `*ci` to modernize test infrastructure. For brownfield, framework may need to integrate with or replace existing test setup.
- **Phase 4 starts**: After solutioning is complete and sprint planning loads all epics.
- **`*test-design` runs per-epic**: At the beginning of working on each epic, run `*test-design` to identify regression hotspots, integration risks, and mitigation strategies for THAT specific epic/feature. Output: `test-design-epic-N.md`.
- Use `*atdd` when stories benefit from ATDD; otherwise proceed to implementation and rely on post-dev automation.
- After development, expand coverage with `*automate`, optionally review test quality with `*test-review`, re-run `*trace` (Phase 2 for gate decision). Run `*nfr-assess` now if non-functional risks weren't addressed earlier.
- Use `*test-review` to validate existing brownfield tests or audit new tests before gate.
</details>
<details>
<summary>Worked Example “Atlas Payments” Brownfield Story</summary>
1. **Planning (Phase 2):** PM executes `*prd` to create PRD with FRs/NFRs; TEA runs `*trace` to baseline existing coverage.
2. **Solutioning (Phase 3):** Architect triggers `*architecture` capturing legacy payment flows and integration architecture; `*create-epics-and-stories` generates Epic 1 (Payment Processing) based on architecture; TEA sets up `*framework` and `*ci` based on architectural decisions; gate check validates planning.
3. **Sprint Start (Phase 4):** Scrum Master runs `*sprint-planning` to load Epic 1 into sprint status.
4. **Epic 1 Planning (Phase 4):** TEA runs `*test-design` for Epic 1 (Payment Processing), producing `test-design-epic-1.md` that flags settlement edge cases, regression hotspots, and mitigation plans.
5. **Story Implementation (Phase 4):** For each story in Epic 1, SM generates story via `*create-story`; TEA runs `*atdd` producing failing Playwright specs; Dev implements with guidance from tests and checklist.
6. **Post-Dev (Phase 4):** TEA applies `*automate`, optionally `*test-review` to audit test quality, re-runs `*trace` to refresh coverage.
7. **Release Gate:** TEA performs `*nfr-assess` to validate SLAs, runs `*trace` with Phase 2 enabled to generate gate decision (PASS/CONCERNS/FAIL).
</details>
### Greenfield - Enterprise Method (Enterprise/Compliance Work) ### Greenfield - Enterprise Method (Enterprise/Compliance Work)
@ -332,105 +271,36 @@ These cheat sheets map TEA workflows to the **BMad Method and Enterprise tracks*
| **Phase 4**: Story Dev | (Optional) `*atdd`, `*automate`, `*test-review`, `*trace` per story | SM `*create-story`, DEV implements | Tests, fixtures, quality reports, coverage matrices | | **Phase 4**: Story Dev | (Optional) `*atdd`, `*automate`, `*test-review`, `*trace` per story | SM `*create-story`, DEV implements | Tests, fixtures, quality reports, coverage matrices |
| **Phase 4**: Release Gate | Final `*test-review` audit, Run `*trace` (Phase 2), 📦 archive artifacts | Capture sign-offs, 📦 compliance evidence | Quality audit, updated assessments, gate YAML, 📦 audit trail | | **Phase 4**: Release Gate | Final `*test-review` audit, Run `*trace` (Phase 2), 📦 archive artifacts | Capture sign-offs, 📦 compliance evidence | Quality audit, updated assessments, gate YAML, 📦 audit trail |
<details> **Key notes:**
<summary>Execution Notes</summary> - Run `*nfr-assess` early in Phase 2.
- `*test-design` emphasizes compliance, security, and performance alignment.
- Archive artifacts at the release gate for audits.
- `*nfr-assess` runs early in Planning (Phase 2) to capture compliance, security, and performance requirements upfront. **Related how-to guides:**
- **Phase 3 (Solutioning)**: After architecture is complete, run `*framework` and `*ci` with enterprise-grade configurations (selective testing, burn-in jobs, caching, notifications). - [How to Run Test Design](/docs/how-to/workflows/run-test-design.md)
- **Phase 4 starts**: After solutioning is complete and sprint planning loads all epics. - [How to Set Up a Test Framework](/docs/how-to/workflows/setup-test-framework.md)
- **`*test-design` runs per-epic**: At the beginning of working on each epic, run `*test-design` to create an enterprise-focused test plan for THAT specific epic, ensuring alignment with security architecture, performance targets, and compliance requirements. Output: `test-design-epic-N.md`.
- Use `*atdd` for stories when feasible so acceptance tests can lead implementation.
- Use `*test-review` per story or sprint to maintain quality standards and ensure compliance with testing best practices.
- Prior to release, rerun coverage (`*trace`, `*automate`), perform final quality audit with `*test-review`, and formalize the decision with `*trace` Phase 2 (gate decision); archive artifacts for compliance audits.
</details> ## Optional Integrations
<details> ### Playwright Utils (`@seontechnologies/playwright-utils`)
<summary>Worked Example “Helios Ledger” Enterprise Release</summary>
1. **Planning (Phase 2):** Analyst runs `*research` and `*product-brief`; PM completes `*prd` creating PRD with FRs/NFRs; TEA runs `*nfr-assess` to establish NFR targets. Production-ready fixtures and utilities that enhance TEA workflows.
2. **Solutioning (Phase 3):** Architect completes `*architecture` with enterprise considerations; `*create-epics-and-stories` generates epics/stories based on architecture; TEA sets up `*framework` and `*ci` with enterprise-grade configurations based on architectural decisions; gate check validates planning completeness.
3. **Sprint Start (Phase 4):** Scrum Master runs `*sprint-planning` to load all epics into sprint status.
4. **Per-Epic (Phase 4):** For each epic, TEA runs `*test-design` to create epic-specific test plan (e.g., `test-design-epic-1.md`, `test-design-epic-2.md`) with compliance-focused risk assessment.
5. **Per-Story (Phase 4):** For each story, TEA uses `*atdd`, `*automate`, `*test-review`, and `*trace`; Dev teams iterate on the findings.
6. **Release Gate:** TEA re-checks coverage, performs final quality audit with `*test-review`, and logs the final gate decision via `*trace` Phase 2, archiving artifacts for compliance.
</details> - Install: `npm install -D @seontechnologies/playwright-utils`
> Note: Playwright Utils is enabled via the installer. Only set `tea_use_playwright_utils` in `_bmad/bmm/config.yaml` if you need to override the installer choice.
- Impacts: `*framework`, `*atdd`, `*automate`, `*test-review`, `*ci`
- Utilities include: api-request, auth-session, network-recorder, intercept-network-call, recurse, log, file-utils, burn-in, network-error-monitor, fixtures-composition
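If you do need to override the installer choice, a minimal override in `_bmad/bmm/config.yaml` might look like the sketch below (the key name comes from the note above; any other keys in the file are left as they are):

```yaml
# _bmad/bmm/config.yaml (override only when the installer default needs changing)
tea_use_playwright_utils: true # set to false to disable playwright-utils support in TEA workflows
```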
## TEA Command Catalog ### Playwright MCP Enhancements
| Command | Primary Outputs | Notes | With Playwright MCP Enhancements | Live browser verification for test design and automation.
| -------------- | --------------------------------------------------------------------------------------------- | ---------------------------------------------------- | ------------------------------------------------------------------------------------------------------------ |
| `*framework` | Playwright/Cypress scaffold, `.env.example`, `.nvmrc`, sample specs | Use when no production-ready harness exists | - |
| `*ci` | CI workflow, selective test scripts, secrets checklist | Platform-aware (GitHub Actions default) | - |
| `*test-design` | Combined risk assessment, mitigation plan, and coverage strategy | Risk scoring + optional exploratory mode | **+ Exploratory**: Interactive UI discovery with browser automation (uncover actual functionality) |
| `*atdd` | Failing acceptance tests + implementation checklist | TDD red phase + optional recording mode | **+ Recording**: AI generation verified with live browser (accurate selectors from real DOM) |
| `*automate` | Prioritized specs, fixtures, README/script updates, DoD summary | Optional healing/recording, avoid duplicate coverage | **+ Healing**: Pattern fixes enhanced with visual debugging + **+ Recording**: AI verified with live browser |
| `*test-review` | Test quality review report with 0-100 score, violations, fixes | Reviews tests against knowledge base patterns | - |
| `*nfr-assess` | NFR assessment report with actions | Focus on security/performance/reliability | - |
| `*trace` | Phase 1: Coverage matrix, recommendations. Phase 2: Gate decision (PASS/CONCERNS/FAIL/WAIVED) | Two-phase workflow: traceability + gate decision | - |
## Playwright Utils Integration
TEA optionally integrates with `@seontechnologies/playwright-utils`, an open-source library providing fixture-based utilities for Playwright tests. This integration enhances TEA's test generation and review workflows with production-ready patterns.
<details>
<summary><strong>Installation & Configuration</strong></summary>
**Package**: `@seontechnologies/playwright-utils` ([npm](https://www.npmjs.com/package/@seontechnologies/playwright-utils) | [GitHub](https://github.com/seontechnologies/playwright-utils))
**Install**: `npm install -D @seontechnologies/playwright-utils`
**Enable during BMAD installation** by answering "Yes" when prompted, or manually set `tea_use_playwright_utils: true` in `_bmad/bmm/config.yaml`.
**To disable**: Set `tea_use_playwright_utils: false` in `_bmad/bmm/config.yaml`.
</details>
<details>
<summary><strong>How Playwright Utils Enhances TEA Workflows</strong></summary>
1. `*framework`:
- Default: Basic Playwright scaffold
- **+ playwright-utils**: Scaffold with api-request, network-recorder, auth-session, burn-in, network-error-monitor fixtures pre-configured
Benefit: Production-ready patterns from day one
2. `*automate`, `*atdd`:
- Default: Standard test patterns
- **+ playwright-utils**: Tests using api-request (schema validation), intercept-network-call (mocking), recurse (polling), log (structured logging), file-utils (CSV/PDF)
Benefit: Advanced patterns without boilerplate
3. `*test-review`:
- Default: Reviews against core knowledge base (22 fragments)
- **+ playwright-utils**: Reviews against expanded knowledge base (33 fragments: 22 core + 11 playwright-utils)
Benefit: Reviews include fixture composition, auth patterns, network recording best practices
4. `*ci`:
- Default: Standard CI workflow
- **+ playwright-utils**: CI workflow with burn-in script (smart test selection) and network-error-monitor integration
Benefit: Faster CI feedback, HTTP error detection
**Utilities available** (10 total): api-request, network-recorder, auth-session, intercept-network-call, recurse, log, file-utils, burn-in, network-error-monitor, fixtures-composition
</details>
## Playwright MCP Enhancements
TEA can leverage Playwright MCP servers to enhance test generation with live browser verification. MCP provides interactive capabilities on top of TEA's default AI-based approach.
<details>
<summary><strong>MCP Server Configuration</strong></summary>
**Two Playwright MCP servers** (actively maintained, continuously updated): **Two Playwright MCP servers** (actively maintained, continuously updated):
- `playwright` - Browser automation (`npx @playwright/mcp@latest`) - `playwright` - Browser automation (`npx @playwright/mcp@latest`)
- `playwright-test` - Test runner with failure analysis (`npx playwright run-test-mcp-server`) - `playwright-test` - Test runner with failure analysis (`npx playwright run-test-mcp-server`)
**Config example**: **Configuration example**:
```json ```json
{ {
@ -447,29 +317,8 @@ TEA can leverage Playwright MCP servers to enhance test generation with live bro
} }
``` ```
**To disable**: Set `tea_use_mcp_enhancements: false` in `_bmad/bmm/config.yaml` OR remove MCPs from IDE config. - Helps `*test-design` validate actual UI behavior.
- Helps `*atdd` and `*automate` verify selectors against the live DOM.
- Enhances healing with `browser_snapshot`, console, network, and locator tools.
</details> **To disable**: set `tea_use_mcp_enhancements: false` in `_bmad/bmm/config.yaml` or remove MCPs from IDE config.
<details>
<summary><strong>How MCP Enhances TEA Workflows</strong></summary>
1. `*test-design`:
- Default: Analysis + documentation
- **+ MCP**: Interactive UI discovery with `browser_navigate`, `browser_click`, `browser_snapshot`, behavior observation
Benefit: Discover actual functionality, edge cases, undocumented features
2. `*atdd`, `*automate`:
- Default: Infers selectors and interactions from requirements and knowledge fragments
- **+ MCP**: Generates tests **then** verifies with `generator_setup_page`, `browser_*` tools, validates against live app
Benefit: Accurate selectors from real DOM, verified behavior, refined test code
3. `*automate` (healing mode):
- Default: Pattern-based fixes from error messages + knowledge fragments
- **+ MCP**: Pattern fixes **enhanced with** `browser_snapshot`, `browser_console_messages`, `browser_network_requests`, `browser_generate_locator`
Benefit: Visual failure context, live DOM inspection, root cause discovery
</details>

View File

@ -111,6 +111,29 @@ Report generated: {outputFile}
The assessment found [number] issues requiring attention. Review the detailed report for specific findings and recommendations." The assessment found [number] issues requiring attention. Review the detailed report for specific findings and recommendations."
### 6. Update IDE Prompt Recommendations
If the readiness status is **READY**, update `.vscode/settings.json` to prioritize the implementation cycle prompts.
Read the existing `chat.promptFilesRecommendations` object and modify these keys (a sketch of the resulting block follows the lists below):
**Set to `true` (implementation cycle - "keep going" loop):**
- `bmd-create-story`
- `bmd-dev-story`
- `bmd-code-review`
- `bmd-retrospective`
- `bmd-correct-course`
**Set to `false` (setup phase - already completed):**
- `bmd-workflow-init`
- `bmd-brainstorm`
- `bmd-prd`
- `bmd-ux-design`
- `bmd-create-architecture`
- `bmd-epics-stories`
- `bmd-implementation-readiness`
- `bmd-sprint-planning`
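For illustration, after this step the `chat.promptFilesRecommendations` block in `.vscode/settings.json` would look roughly like the sketch below; only the keys listed above change, and any other settings in the file are left untouched:

```json
{
  "chat.promptFilesRecommendations": {
    "bmd-create-story": true,
    "bmd-dev-story": true,
    "bmd-code-review": true,
    "bmd-retrospective": true,
    "bmd-correct-course": true,
    "bmd-workflow-init": false,
    "bmd-brainstorm": false,
    "bmd-prd": false,
    "bmd-ux-design": false,
    "bmd-create-architecture": false,
    "bmd-epics-stories": false,
    "bmd-implementation-readiness": false,
    "bmd-sprint-planning": false
  }
}
```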
## WORKFLOW COMPLETE ## WORKFLOW COMPLETE
The implementation readiness workflow is now complete. The report contains all findings and recommendations for the user to consider. The implementation readiness workflow is now complete. The report contains all findings and recommendations for the user to consider.

View File

@ -179,6 +179,38 @@ development_status:
</step> </step>
<step n="6" goal="Update IDE prompt recommendations for implementation phase">
<action>Read the existing `.vscode/settings.json` and update the `chat.promptFilesRecommendations` object.</action>
**Set to `true` (implementation cycle - "keep going" loop):**
- `bmd-create-story`
- `bmd-dev-story`
- `bmd-code-review`
- `bmd-retrospective`
- `bmd-correct-course`
**Set to `false` (setup phase - already completed):**
- `bmd-workflow-init`
- `bmd-brainstorm`
- `bmd-prd`
- `bmd-ux-design`
- `bmd-create-architecture`
- `bmd-epics-stories`
- `bmd-implementation-readiness`
- `bmd-sprint-planning`
<action>Inform {user_name}:</action>
**IDE Updated for Implementation Phase**
The "keep going" cycle prompts are now prioritized in VS Code:
- **@bmd-custom-bmm-sm → *create-story** (prepare a story)
- **@bmd-custom-bmm-dev → *dev-story** (implement it)
- **Same chat → *code-review** (review the code)
- **Repeat!**
</step>
</workflow> </workflow>
## Additional Documentation ## Additional Documentation

View File

@ -2,6 +2,7 @@ const path = require('node:path');
const { BaseIdeSetup } = require('./_base-ide'); const { BaseIdeSetup } = require('./_base-ide');
const chalk = require('chalk'); const chalk = require('chalk');
const { AgentCommandGenerator } = require('./shared/agent-command-generator'); const { AgentCommandGenerator } = require('./shared/agent-command-generator');
const { WorkflowPromptGenerator } = require('./shared/workflow-prompt-generator');
/** /**
* GitHub Copilot setup handler * GitHub Copilot setup handler
@ -12,6 +13,7 @@ class GitHubCopilotSetup extends BaseIdeSetup {
super('github-copilot', 'GitHub Copilot', true); // preferred IDE super('github-copilot', 'GitHub Copilot', true); // preferred IDE
this.configDir = '.github'; this.configDir = '.github';
this.agentsDir = 'agents'; this.agentsDir = 'agents';
this.promptsDir = 'prompts';
this.vscodeDir = '.vscode'; this.vscodeDir = '.vscode';
} }
@ -94,14 +96,12 @@ class GitHubCopilotSetup extends BaseIdeSetup {
async setup(projectDir, bmadDir, options = {}) { async setup(projectDir, bmadDir, options = {}) {
console.log(chalk.cyan(`Setting up ${this.name}...`)); console.log(chalk.cyan(`Setting up ${this.name}...`));
// Configure VS Code settings using pre-collected config if available
const config = options.preCollectedConfig || {};
await this.configureVsCodeSettings(projectDir, { ...options, ...config });
// Create .github/agents directory // Create .github/agents directory
const githubDir = path.join(projectDir, this.configDir); const githubDir = path.join(projectDir, this.configDir);
const agentsDir = path.join(githubDir, this.agentsDir); const agentsDir = path.join(githubDir, this.agentsDir);
const promptsDir = path.join(githubDir, this.promptsDir);
await this.ensureDir(agentsDir); await this.ensureDir(agentsDir);
await this.ensureDir(promptsDir);
// Clean up any existing BMAD files before reinstalling // Clean up any existing BMAD files before reinstalling
await this.cleanup(projectDir); await this.cleanup(projectDir);
@ -117,22 +117,37 @@ class GitHubCopilotSetup extends BaseIdeSetup {
const agentContent = await this.createAgentContent({ module: artifact.module, name: artifact.name }, content); const agentContent = await this.createAgentContent({ module: artifact.module, name: artifact.name }, content);
// Use bmd- prefix: bmd-custom-{module}-{name}.agent.md // Use bmd- prefix: bmd-custom-{module}-{name}.agent.md
const targetPath = path.join(agentsDir, `bmd-custom-${artifact.module}-${artifact.name}.agent.md`); const agentFileName = `bmd-custom-${artifact.module}-${artifact.name}`;
const targetPath = path.join(agentsDir, `${agentFileName}.agent.md`);
await this.writeFile(targetPath, agentContent); await this.writeFile(targetPath, agentContent);
agentCount++; agentCount++;
console.log(chalk.green(` ✓ Created agent: bmd-custom-${artifact.module}-${artifact.name}`)); console.log(chalk.green(` ✓ Created agent: ${agentFileName}`));
} }
// Generate workflow prompts from config (shared logic)
// Each prompt includes nextSteps guidance for the agent to suggest next workflows
const promptGen = new WorkflowPromptGenerator();
const promptRecommendations = await promptGen.generatePromptFiles(promptsDir, options.selectedModules || []);
const promptCount = Object.keys(promptRecommendations).length;
// Configure VS Code settings using pre-collected config if available
const config = options.preCollectedConfig || {};
await this.configureVsCodeSettings(projectDir, { ...options, ...config, promptRecommendations });
console.log(chalk.green(`${this.name} configured:`)); console.log(chalk.green(`${this.name} configured:`));
console.log(chalk.dim(` - ${agentCount} agents created`)); console.log(chalk.dim(` - ${agentCount} agents created`));
console.log(chalk.dim(` - ${promptCount} workflow prompts configured`));
console.log(chalk.dim(` - Agents directory: ${path.relative(projectDir, agentsDir)}`)); console.log(chalk.dim(` - Agents directory: ${path.relative(projectDir, agentsDir)}`));
console.log(chalk.dim(` - Prompts directory: ${path.relative(projectDir, promptsDir)}`));
console.log(chalk.dim(` - VS Code settings configured`)); console.log(chalk.dim(` - VS Code settings configured`));
console.log(chalk.dim('\n Agents available in VS Code Chat view')); console.log(chalk.dim('\n Agents available in VS Code Chat view'));
console.log(chalk.dim(' Workflow prompts show as new chat starters'));
return { return {
success: true, success: true,
agents: agentCount, agents: agentCount,
prompts: promptCount,
settings: true, settings: true,
}; };
} }
@ -199,6 +214,11 @@ class GitHubCopilotSetup extends BaseIdeSetup {
}; };
} }
// Add prompt file recommendations for new chat starters
if (options.promptRecommendations && Object.keys(options.promptRecommendations).length > 0) {
bmadSettings['chat.promptFilesRecommendations'] = options.promptRecommendations;
}
// Merge settings (existing take precedence) // Merge settings (existing take precedence)
const mergedSettings = { ...bmadSettings, ...existingSettings }; const mergedSettings = { ...bmadSettings, ...existingSettings };
@ -307,6 +327,24 @@ ${cleanContent}
console.log(chalk.dim(` Cleaned up ${removed} existing BMAD agents`)); console.log(chalk.dim(` Cleaned up ${removed} existing BMAD agents`));
} }
} }
// Clean up prompts directory
const promptsDir = path.join(projectDir, this.configDir, this.promptsDir);
if (await fs.pathExists(promptsDir)) {
const files = await fs.readdir(promptsDir);
let removed = 0;
for (const file of files) {
if (file.startsWith('bmd-') && file.endsWith('.prompt.md')) {
await fs.remove(path.join(promptsDir, file));
removed++;
}
}
if (removed > 0) {
console.log(chalk.dim(` Cleaned up ${removed} existing BMAD prompt files`));
}
}
} }
/** /**

View File

@ -0,0 +1,61 @@
const path = require('node:path');
const fs = require('fs-extra');
const { workflowPromptsConfig } = require('./workflow-prompts-config');
/**
* Generate workflow prompt recommendations for IDE new chat starters
* Uses static configuration from workflow-prompts-config.js which mirrors
* the workflows documented in quick-start.md
*
* The implementation-readiness and sprint-planning workflows update
* VS Code settings to toggle which prompts are shown based on project phase.
*/
class WorkflowPromptGenerator {
/**
* Get workflow prompts for selected modules
* @param {Array<string>} selectedModules - Modules to include (e.g., ['bmm', 'bmgd'])
* @returns {Array<Object>} Array of workflow prompt configurations
*/
getWorkflowPrompts(selectedModules = []) {
const allPrompts = [];
// Always include core prompts
if (workflowPromptsConfig.core) {
allPrompts.push(...workflowPromptsConfig.core);
}
// Add prompts for each selected module
for (const moduleName of selectedModules) {
if (workflowPromptsConfig[moduleName]) {
allPrompts.push(...workflowPromptsConfig[moduleName]);
}
}
return allPrompts;
}
/**
* Generate prompt files for an IDE
* @param {string} promptsDir - Directory to write prompt files
* @param {Array<string>} selectedModules - Modules to include
* @returns {Object} Map of prompt names to true for VS Code settings
*/
async generatePromptFiles(promptsDir, selectedModules = []) {
const prompts = this.getWorkflowPrompts(selectedModules);
const recommendations = {};
for (const prompt of prompts) {
const promptContent = ['---', `agent: ${prompt.agent}`, `description: "${prompt.description}"`, '---', '', prompt.prompt, ''].join(
'\n',
);
const promptFilePath = path.join(promptsDir, `bmd-${prompt.name}.prompt.md`);
await fs.writeFile(promptFilePath, promptContent);
recommendations[`bmd-${prompt.name}`] = true;
}
return recommendations;
}
}
module.exports = { WorkflowPromptGenerator };
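For illustration, given the `create-story` entry in `workflow-prompts-config.js`, the generator above would write `.github/prompts/bmd-create-story.prompt.md` with roughly this content:

```markdown
---
agent: bmd-custom-bmm-sm
description: "[CS] Create developer-ready story from epic"
---

*create-story
```

The corresponding `bmd-create-story: true` entry in the returned map is what `configureVsCodeSettings` later writes into `chat.promptFilesRecommendations`.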

View File

@ -0,0 +1,192 @@
/**
* Workflow prompt configuration for IDE new chat starters
*
* This configuration defines the workflow prompts that appear as suggestions
* when starting a new chat in VS Code (via chat.promptFilesRecommendations).
*
* The implementation-readiness and sprint-planning workflows update the
* VS Code settings to toggle which prompts are shown based on project phase.
*
* Reference: docs/modules/bmm-bmad-method/quick-start.md
*/
const workflowPromptsConfig = {
// BMad Method Module (bmm) - Standard development workflow
bmm: [
// ═══════════════════════════════════════════════════════════════════════
// Phase 1 - Analysis (Optional)
// ═══════════════════════════════════════════════════════════════════════
{
name: 'workflow-init',
agent: 'bmd-custom-bmm-analyst',
shortcut: 'WI',
description: '[WI] Initialize workflow and choose planning track',
prompt: '*workflow-init',
},
{
name: 'brainstorm',
agent: 'bmd-custom-bmm-analyst',
shortcut: 'BP',
description: '[BP] Brainstorm project ideas and concepts',
prompt: '*brainstorm-project',
},
{
name: 'workflow-status',
agent: 'bmd-custom-bmm-pm',
shortcut: 'WS',
description: '[WS] Check current workflow status and next steps',
prompt: '*workflow-status',
},
// ═══════════════════════════════════════════════════════════════════════
// Phase 2 - Planning (Required)
// ═══════════════════════════════════════════════════════════════════════
{
name: 'prd',
agent: 'bmd-custom-bmm-pm',
shortcut: 'PD',
description: '[PD] Create Product Requirements Document (PRD)',
prompt: '*prd',
},
{
name: 'ux-design',
agent: 'bmd-custom-bmm-ux-designer',
shortcut: 'UD',
description: '[UD] Create UX Design specification',
prompt: '*ux-design',
},
// ═══════════════════════════════════════════════════════════════════════
// Phase 3 - Solutioning
// ═══════════════════════════════════════════════════════════════════════
{
name: 'create-architecture',
agent: 'bmd-custom-bmm-architect',
shortcut: 'CA',
description: '[CA] Create system architecture document',
prompt: '*create-architecture',
},
{
name: 'epics-stories',
agent: 'bmd-custom-bmm-pm',
shortcut: 'ES',
description: '[ES] Create Epics and User Stories from PRD',
prompt: '*epics-stories',
},
{
name: 'implementation-readiness',
agent: 'bmd-custom-bmm-architect',
shortcut: 'IR',
description: '[IR] Check implementation readiness across all docs',
prompt: '*implementation-readiness',
},
{
name: 'sprint-planning',
agent: 'bmd-custom-bmm-sm',
shortcut: 'SP',
description: '[SP] Initialize sprint planning from epics',
prompt: '*sprint-planning',
},
// ═══════════════════════════════════════════════════════════════════════
// Phase 4 - Implementation: The "Keep Going" Cycle
// SM → create-story → DEV → dev-story → code-review → (create-story | retrospective)
// ═══════════════════════════════════════════════════════════════════════
{
name: 'create-story',
agent: 'bmd-custom-bmm-sm',
shortcut: 'CS',
description: '[CS] Create developer-ready story from epic',
prompt: '*create-story',
},
{
name: 'dev-story',
agent: 'bmd-custom-bmm-dev',
shortcut: 'DS',
description: '[DS] Implement the current story',
prompt: '*dev-story',
},
{
name: 'code-review',
agent: 'bmd-custom-bmm-dev',
shortcut: 'CR',
description: '[CR] Perform code review on implementation',
prompt: '*code-review',
},
{
name: 'retrospective',
agent: 'bmd-custom-bmm-sm',
shortcut: 'ER',
description: '[ER] Run epic retrospective after completion',
prompt: '*epic-retrospective',
},
{
name: 'correct-course',
agent: 'bmd-custom-bmm-sm',
shortcut: 'CC',
description: '[CC] Course correction when things go off track',
prompt: '*correct-course',
},
],
// BMad Game Development Module (bmgd)
bmgd: [
// Implementation cycle
{
name: 'game-implement',
agent: 'bmd-custom-bmgd-game-dev',
shortcut: 'GI',
description: '[GI] Implement game feature',
prompt: '*game-implement',
},
{
name: 'game-qa',
agent: 'bmd-custom-bmgd-game-qa',
shortcut: 'GQ',
description: '[GQ] Test and QA game feature',
prompt: '*game-qa',
},
// Planning & Design
{
name: 'game-design',
agent: 'bmd-custom-bmgd-game-designer',
shortcut: 'GD',
description: '[GD] Design game mechanics and systems',
prompt: '*game-design',
},
{
name: 'game-architecture',
agent: 'bmd-custom-bmgd-game-architect',
shortcut: 'GA',
description: '[GA] Create game technical architecture',
prompt: '*game-architecture',
},
{
name: 'game-sprint',
agent: 'bmd-custom-bmgd-game-scrum-master',
shortcut: 'GS',
description: '[GS] Plan game development sprint',
prompt: '*game-sprint',
},
],
// Core agents (always available)
core: [
{
name: 'list-tasks',
agent: 'bmd-custom-core-bmad-master',
shortcut: 'LT',
description: '[LT] List available tasks',
prompt: '*list-tasks',
},
{
name: 'list-workflows',
agent: 'bmd-custom-core-bmad-master',
shortcut: 'LW',
description: '[LW] List available workflows',
prompt: '*list-workflows',
},
],
};
module.exports = { workflowPromptsConfig };