feat: test design for architecture level (phase 3) (#897)

* feat: test design for architecture level (phase 3)

* addressed review comments

---------

Co-authored-by: Murat Ozcan <murat@Murats-MacBook-Pro.local>
Co-authored-by: Murat Ozcan <murat@mac.lan>
Murat K Ozcan 2025-11-11 10:03:00 -06:00 committed by GitHub
parent 03fbd2ae24
commit 487d1582a0
9 changed files with 360 additions and 144 deletions


@ -12,7 +12,7 @@ last-redoc-date: 2025-11-05
## TEA Workflow Lifecycle
TEA integrates into the BMad development lifecycle during Solutioning (Phase 3) and Implementation (Phase 4):
```mermaid
%%{init: {'theme':'base', 'themeVariables': { 'primaryColor':'#fff','primaryTextColor':'#000','primaryBorderColor':'#000','lineColor':'#000','secondaryColor':'#fff','tertiaryColor':'#fff','fontSize':'16px','fontFamily':'arial'}}}%%
@ -25,18 +25,28 @@ graph TB
subgraph Phase3["<b>Phase 3: SOLUTIONING</b>"]
Architecture["<b>Architect: *architecture</b>"]
TestDesignSys["<b>TEA: *test-design (system-level)</b>"]
ValidateArch["<b>Architect: *validate-architecture</b>"]
GateCheck["<b>Architect: *solutioning-gate-check</b>"]
Architecture --> TestDesignSys
TestDesignSys --> ValidateArch
ValidateArch --> GateCheck
Phase3Note["<b>Testability review before gate</b><br/>Recommended: Method | Required: Enterprise"]
TestDesignSys -.-> Phase3Note
end
subgraph Phase4["<b>Phase 4: IMPLEMENTATION</b>"]
subgraph Sprint0["<b>Sprint 0: Infrastructure Setup</b>"]
Framework["<b>TEA: *framework</b>"]
CI["<b>TEA: *ci</b>"]
Framework --> CI
Sprint0Note["<b>Test infrastructure setup</b><br/>based on architectural decisions"]
Framework -.-> Sprint0Note
end
SprintPlan["<b>SM: *sprint-planning</b>"]
subgraph PerEpic["<b>Per Epic Cycle</b>"]
TestDesign["<b>TEA: *test-design (per epic)</b>"]
CreateStory["<b>SM: *create-story</b>"]
ATDD["<b>TEA: *atdd (optional, before dev)</b>"]
@ -45,7 +55,6 @@ graph TB
TestReview1["<b>TEA: *test-review (optional)</b>"]
Trace1["<b>TEA: *trace (refresh coverage)</b>"]
TestDesign --> CreateStory
CreateStory --> ATDD
ATDD --> DevImpl
@ -57,6 +66,10 @@ graph TB
TestDesign -.-> TestDesignNote
end
CI --> SprintPlan
SprintPlan --> TestDesign
end
subgraph Gate["<b>EPIC/RELEASE GATE</b>"]
NFR["<b>TEA: *nfr-assess (if not done earlier)</b>"]
TestReview2["<b>TEA: *test-review (final audit, optional)</b>"]
@ -79,6 +92,8 @@ graph TB
style Phase2 fill:#bbdefb,stroke:#0d47a1,stroke-width:3px,color:#000
style Phase3 fill:#c8e6c9,stroke:#2e7d32,stroke-width:3px,color:#000
style Phase4 fill:#e1bee7,stroke:#4a148c,stroke-width:3px,color:#000
style Sprint0 fill:#f3e5f5,stroke:#6a1b9a,stroke-width:2px,color:#000
style PerEpic fill:#f3e5f5,stroke:#6a1b9a,stroke-width:2px,color:#000
style Gate fill:#ffe082,stroke:#f57c00,stroke-width:3px,color:#000
style Pass fill:#4caf50,stroke:#1b5e20,stroke-width:3px,color:#000
style Concerns fill:#ffc107,stroke:#f57f17,stroke-width:3px,color:#000
@ -91,16 +106,19 @@ graph TB
- **Phase 0** (Optional): Documentation (brownfield prerequisite - `*document-project`)
- **Phase 1** (Optional): Discovery/Analysis (`*brainstorm`, `*research`, `*product-brief`)
- **Phase 2** (Required): Planning (`*prd` creates PRD + epics)
- **Phase 3** (Required): Solutioning (`*architecture` → `*validate-architecture` → `*solutioning-gate-check`)
- **Phase 4** (Required): Implementation
- **Sprint 0**: Test infrastructure setup (`*framework`, `*ci`) based on architectural decisions
- **Sprint Planning**: Load epics into sprint status
- **Per-Epic**: `*test-design` → per-story dev workflows
**TEA workflows:** `*test-design` runs in Phase 3 (system-level testability review, recommended/required) and Phase 4 (per-epic planning). `*framework` and `*ci` run once in Phase 4 Sprint 0 (after architecture and testability are approved).
Quick Flow track skips Phases 0, 1, and 3. BMad Method and Enterprise use all phases based on project needs.
### Why TEA is Different from Other BMM Agents
TEA is the only BMM agent that operates in **both Phase 3 (Solutioning) and Phase 4 (Implementation)** and has its own **knowledge base architecture**.
<details>
<summary><strong>Cross-Phase Operation & Unique Architecture</strong></summary>
@ -114,36 +132,42 @@ Most BMM agents work in a single phase:
- **Phase 3 (Solutioning)**: Architect agent
- **Phase 4 (Implementation)**: SM, DEV agents
### TEA: Cross-Phase Quality Agent (Unique Pattern)
TEA is **the only agent that operates in both Phase 3 (Solutioning) and Phase 4 (Implementation)**:
```
Phase 1 (Analysis) → [TEA not typically used]
Phase 2 (Planning) → [PM defines requirements - TEA not active]
Phase 3 (Solutioning) → TEA: *test-design (system-level testability review before gate)
Phase 4 Sprint 0 → TEA: *framework, *ci (test infrastructure setup based on testability review)
Phase 4 Sprint Planning → [SM loads epics into sprint status]
Phase 4 Per-Epic → TEA: *test-design (per epic: "how do I test THIS feature?")
Phase 4 Per-Story → TEA: *atdd, *automate, *test-review, *trace (per story)
Epic/Release Gate → TEA: *nfr-assess, *trace Phase 2 (release decision)
```
### TEA's 8 Workflows Across Phases 3-4
**Standard agents**: 1-3 workflows per phase
**TEA**: 8 workflows spanning Phase 3 Solutioning through the Phase 4 Release Gate
| Phase | TEA Workflows | Frequency | Purpose |
| --------------------- | ---------------------------------------- | ---------------- | ---------------------------------------------- |
| **Phase 2** | (none) | - | Planning phase - PM defines requirements |
| **Phase 3** | \*test-design (system-level) | Once per project | Testability review before solutioning gate |
| **Phase 4 Sprint 0** | *framework, *ci | Once per project | Set up test infrastructure based on testability |
| **Phase 4 Per-Epic** | \*test-design (epic-level) | Per epic | Test planning: "how do I test THIS epic?" |
| **Phase 4 Per-Story** | *atdd, *automate, \*test-review, \*trace | Per story | Test implementation and quality validation |
| **Release Gate** | *nfr-assess, *trace (Phase 2: gate) | Per epic/release | Go/no-go decision |
**Note**: Like `*trace`, `*test-design` is now a dual-mode workflow: system-level mode (testability review in Phase 3) and epic-level mode (test planning in Phase 4). The mode is auto-detected from the project phase.
### Unique Directory Architecture
@ -192,12 +216,13 @@ These cheat sheets map TEA workflows to the **BMad Method and Enterprise tracks*
**Use Case:** New projects with standard complexity
| Workflow Stage | Test Architect | Dev / Team | Outputs |
| ---------------------------- | ----------------------------------------------------------------- | ---------------------------------------------------- | ---------------------------------------------------------- |
| **Phase 1**: Discovery | - | Analyst `*product-brief` (optional) | `product-brief.md` |
| **Phase 2**: Planning | - | PM `*prd` (creates PRD + epics) | PRD, epics |
| **Phase 3**: Solutioning | Run `*test-design` (system-level, recommended) | Architect `*architecture`, `*solutioning-gate-check` | Architecture, `test-design-system.md` (testability review) |
| **Phase 4**: Sprint 0 | Run `*framework`, `*ci` based on test-design-system.md | Set up repo structure, dependencies | Test scaffold, CI pipeline, development environment |
| **Phase 4**: Sprint Planning | - | SM `*sprint-planning` | Sprint status file with all epics and stories |
| **Phase 4**: Epic Planning | Run `*test-design` (epic-level, auto-detected) | Review epic scope | `test-design-epic-N.md` with risk assessment and test plan |
| **Phase 4**: Story Dev | (Optional) `*atdd` before dev, then `*automate` after | SM `*create-story`, DEV implements | Tests, story implementation |
| **Phase 4**: Story Review | Execute `*test-review` (optional), re-run `*trace` | Address recommendations, update code/tests | Quality report, refreshed coverage matrix |
| **Phase 4**: Release Gate | (Optional) `*test-review` for final audit, Run `*trace` (Phase 2) | Confirm Definition of Done, share release notes | Quality audit, Gate YAML + release summary |
@ -205,10 +230,11 @@ These cheat sheets map TEA workflows to the **BMad Method and Enterprise tracks*
<details>
<summary>Execution Notes</summary>
- **Phase 3 (Solutioning)**: Architect creates architecture document; TEA runs `*test-design` (system-level mode, auto-detected) for testability review; gate check validates planning completeness including testability.
- **`*test-design` auto-detects mode**: In Phase 3 it outputs `test-design-system.md`; in Phase 4 it outputs `test-design-epic-N.md`.
- **Phase 4 Sprint 0**: After architecture is approved and testability validated, run `*framework` and `*ci` to set up test infrastructure. This is implementation work (scaffolding code, installing dependencies, configuring CI), not planning.
- **Phase 4 Sprint Planning**: After infrastructure is ready, sprint planning loads all epics.
- **`*test-design` runs per-epic** (Phase 4): At the beginning of each epic, run `*test-design` to create an epic-specific test plan. Output: `test-design-epic-N.md`.
- Use `*atdd` before coding when the team can adopt ATDD; share its checklist with the dev agent.
- Post-implementation, keep `*trace` current, expand coverage with `*automate`, optionally review test quality with `*test-review`. For release gate, run `*trace` with Phase 2 enabled to get deployment decision.
- Use `*test-review` after `*atdd` to validate generated tests, after `*automate` to ensure regression quality, or before gate for final audit.
@ -216,15 +242,16 @@ These cheat sheets map TEA workflows to the **BMad Method and Enterprise tracks*
</details>
<details>
<summary>Worked Example "Nova CRM" Greenfield Feature</summary>
1. **Planning (Phase 2):** Analyst runs `*product-brief`; PM executes `*prd` to produce PRD and epics.
2. **Solutioning (Phase 3):** Architect completes `*architecture` defining the tech stack; TEA runs `*test-design` (auto-detects system-level mode) producing `test-design-system.md` with testability review; gate check validates planning completeness including testability.
3. **Sprint 0 (Phase 4):** TEA sets up test infrastructure via `*framework` and `*ci` based on test-design-system.md; team scaffolds repo structure and dependencies.
4. **Sprint Planning (Phase 4):** Scrum Master runs `*sprint-planning` to load all epics into sprint status.
5. **Epic 1 Planning (Phase 4):** TEA runs `*test-design` (auto-detects epic-level mode) to create a test plan for Epic 1, producing `test-design-epic-1.md` with risk assessment.
6. **Story Implementation (Phase 4):** For each story in Epic 1, SM generates the story via `*create-story`; TEA optionally runs `*atdd`; Dev implements with guidance from failing tests.
7. **Post-Dev (Phase 4):** TEA runs `*automate`, optionally `*test-review` to audit test quality, re-runs `*trace` to refresh coverage.
8. **Release Gate:** TEA runs `*trace` with Phase 2 enabled to generate gate decision.
</details>
@ -237,7 +264,9 @@ These cheat sheets map TEA workflows to the **BMad Method and Enterprise tracks*
- Phase 0 (Documentation) - Document existing codebase if undocumented
- Phase 2: `*trace` - Baseline existing test coverage before planning
- 🔄 Phase 3: `*test-design` (system-level) - Includes brownfield testability concerns
- 🔄 Phase 4 Sprint 0: `*framework`, `*ci` - May integrate with/replace existing test setup
- 🔄 Phase 4: `*test-design` (epic-level) - Focus on regression hotspots and brownfield risks
- 🔄 Phase 4: Story Review - May include `*nfr-assess` if not done earlier
| Workflow Stage | Test Architect | Dev / Team | Outputs |
@ -245,9 +274,10 @@ These cheat sheets map TEA workflows to the **BMad Method and Enterprise tracks*
| **Phase 0**: Documentation | - | Analyst `*document-project` (if undocumented) | Comprehensive project documentation |
| **Phase 1**: Discovery | - | Analyst/PM/Architect rerun planning workflows | Updated planning artifacts in `{output_folder}` |
| **Phase 2**: Planning | Run `*trace` (baseline coverage) | PM `*prd` (creates PRD + epics) | PRD, epics, coverage baseline |
| **Phase 3**: Solutioning | Run `*test-design` (system-level, recommended) 🔄 | Architect `*architecture`, `*solutioning-gate-check` | Architecture, `test-design-system.md` (brownfield testability review) |
| **Phase 4**: Sprint 0 | Run `*framework`, `*ci` based on test-design-system.md 🔄 | Modernize/integrate test setup | Test scaffold, CI pipeline (may replace existing) |
| **Phase 4**: Sprint Planning | - | SM `*sprint-planning` | Sprint status file with all epics and stories |
| **Phase 4**: Epic Planning | Run `*test-design` (epic-level) 🔄 (regression hotspots) | Review epic scope and brownfield risks | `test-design-epic-N.md` with brownfield risk assessment and mitigation |
| **Phase 4**: Story Dev | (Optional) `*atdd` before dev, then `*automate` after | SM `*create-story`, DEV implements | Tests, story implementation |
| **Phase 4**: Story Review | Apply `*test-review` (optional), re-run `*trace`, `*nfr-assess` if needed | Resolve gaps, update docs/tests | Quality report, refreshed coverage matrix, NFR report |
| **Phase 4**: Release Gate | (Optional) `*test-review` for final audit, Run `*trace` (Phase 2) | Capture sign-offs, share release notes | Quality audit, Gate YAML + release summary |
@ -256,9 +286,11 @@ These cheat sheets map TEA workflows to the **BMad Method and Enterprise tracks*
<summary>Execution Notes</summary>
- Lead with `*trace` during Planning (Phase 2) to baseline existing test coverage before architecture work begins.
- **Phase 3 (Solutioning)**: Architect creates architecture document; TEA runs `*test-design` (system-level mode, auto-detected) for testability review including brownfield concerns; gate check validates planning completeness including testability.
- **`*test-design` auto-detects mode**: In Phase 3 it outputs `test-design-system.md`; in Phase 4 it outputs `test-design-epic-N.md`.
- **Phase 4 Sprint 0**: After architecture is approved and testability validated, run `*framework` and `*ci` to modernize test infrastructure. For brownfield, this may integrate with or replace existing test setup.
- **Phase 4 Sprint Planning**: After infrastructure is ready, sprint planning loads all epics.
- **`*test-design` runs per-epic** (Phase 4): At the beginning of each epic, run `*test-design` to identify regression hotspots, integration risks, and mitigation strategies. Output: `test-design-epic-N.md`.
- Use `*atdd` when stories benefit from ATDD; otherwise proceed to implementation and rely on post-dev automation.
- After development, expand coverage with `*automate`, optionally review test quality with `*test-review`, re-run `*trace` (Phase 2 for gate decision). Run `*nfr-assess` now if non-functional risks weren't addressed earlier.
- Use `*test-review` to validate existing brownfield tests or audit new tests before gate.
@ -266,15 +298,16 @@ These cheat sheets map TEA workflows to the **BMad Method and Enterprise tracks*
</details>
<details>
<summary>Worked Example "Atlas Payments" Brownfield Story</summary>
1. **Planning (Phase 2):** PM executes `*prd` to update PRD and `epics.md` (Epic 1: Payment Processing); TEA runs `*trace` to baseline existing coverage.
2. **Solutioning (Phase 3):** Architect triggers `*architecture` capturing legacy payment flows and integration architecture; TEA runs `*test-design` (auto-detects system-level mode) producing `test-design-system.md` with brownfield testability review; gate check validates planning.
3. **Sprint 0 (Phase 4):** TEA sets up `*framework` and `*ci` based on test-design-system.md, integrating with existing test setup; team modernizes infrastructure.
4. **Sprint Planning (Phase 4):** Scrum Master runs `*sprint-planning` to load Epic 1 into sprint status.
5. **Epic 1 Planning (Phase 4):** TEA runs `*test-design` (auto-detects epic-level mode) for Epic 1, producing `test-design-epic-1.md` that flags settlement edge cases, regression hotspots, and mitigation plans.
6. **Story Implementation (Phase 4):** For each story in Epic 1, SM generates the story via `*create-story`; TEA runs `*atdd` producing failing Playwright specs; Dev implements with guidance from tests and checklist.
7. **Post-Dev (Phase 4):** TEA applies `*automate`, optionally `*test-review` to audit test quality, re-runs `*trace` to refresh coverage.
8. **Release Gate:** TEA performs `*nfr-assess` to validate SLAs, runs `*trace` with Phase 2 enabled to generate gate decision (PASS/CONCERNS/FAIL).
</details>
@ -287,16 +320,19 @@ These cheat sheets map TEA workflows to the **BMad Method and Enterprise tracks*
- Phase 1: `*research` - Domain and compliance research (recommended)
- Phase 2: `*nfr-assess` - Capture NFR requirements early (security/performance/reliability)
- 🔄 Phase 3: `*test-design` (system-level) - **Required** for enterprise (vs recommended for Method)
- 🔄 Phase 4 Sprint 0: `*framework`, `*ci` - Enterprise-grade configurations
- 🔄 Phase 4: `*test-design` (epic-level) - Enterprise focus (compliance, security architecture alignment)
- 📦 Release Gate - Archive artifacts and compliance evidence for audits
| Workflow Stage | Test Architect | Dev / Team | Outputs |
| ---------------------------- | ------------------------------------------------------------------------ | ---------------------------------------------------- | ------------------------------------------------------------------ |
| **Phase 1**: Discovery | - | Analyst `*research`, `*product-brief` | Domain research, compliance analysis, product brief |
| **Phase 2**: Planning | Run `*nfr-assess` | PM `*prd` (creates PRD + epics), UX `*create-design` | Enterprise PRD, epics, UX design, NFR documentation |
| **Phase 3**: Solutioning | Run `*test-design` (system-level, **required**) 🔄 | Architect `*architecture`, `*solutioning-gate-check` | Architecture, `test-design-system.md` (enterprise testability) |
| **Phase 4**: Sprint 0 | Run `*framework`, `*ci` with enterprise configs 🔄 | Set up enterprise infrastructure | Test scaffold, CI pipeline (selective testing, burn-in, caching) |
| **Phase 4**: Sprint Planning | - | SM `*sprint-planning` | Sprint plan with all epics |
| **Phase 4**: Epic Planning | Run `*test-design` (epic-level) 🔄 (compliance focus) | Review epic scope and compliance requirements | `test-design-epic-N.md` with security/performance/compliance focus |
| **Phase 4**: Story Dev | (Optional) `*atdd`, `*automate`, `*test-review`, `*trace` per story | SM `*create-story`, DEV implements | Tests, fixtures, quality reports, coverage matrices |
| **Phase 4**: Release Gate | Final `*test-review` audit, Run `*trace` (Phase 2), 📦 archive artifacts | Capture sign-offs, 📦 compliance evidence | Quality audit, updated assessments, gate YAML, 📦 audit trail |
@ -304,9 +340,11 @@ These cheat sheets map TEA workflows to the **BMad Method and Enterprise tracks*
<summary>Execution Notes</summary>
- `*nfr-assess` runs early in Planning (Phase 2) to capture compliance, security, and performance requirements upfront.
- **Phase 3 (Solutioning)**: Architect creates architecture document with enterprise considerations; TEA runs `*test-design` (system-level mode, **required** for enterprise) for comprehensive testability review; gate check validates planning completeness including testability.
- **`*test-design` auto-detects mode**: In Phase 3 it outputs `test-design-system.md`; in Phase 4 it outputs `test-design-epic-N.md`.
- **Phase 4 Sprint 0**: After architecture is approved and testability validated, run `*framework` and `*ci` with enterprise-grade configurations (selective testing, burn-in jobs, caching, notifications).
- **Phase 4 Sprint Planning**: After infrastructure is ready, sprint planning loads all epics.
- **`*test-design` runs per-epic** (Phase 4): At the beginning of each epic, run `*test-design` to create an enterprise-focused test plan ensuring alignment with security architecture, performance targets, and compliance requirements. Output: `test-design-epic-N.md`.
- Use `*atdd` for stories when feasible so acceptance tests can lead implementation.
- Use `*test-review` per story or sprint to maintain quality standards and ensure compliance with testing best practices.
- Prior to release, rerun coverage (`*trace`, `*automate`), perform final quality audit with `*test-review`, and formalize the decision with `*trace` Phase 2 (gate decision); archive artifacts for compliance audits.
@ -314,14 +352,15 @@ These cheat sheets map TEA workflows to the **BMad Method and Enterprise tracks*
</details>
<details>
<summary>Worked Example "Helios Ledger" Enterprise Release</summary>
1. **Planning (Phase 2):** Analyst runs `*research` and `*product-brief`; PM completes `*prd` creating PRD and epics; TEA runs `*nfr-assess` to establish NFR targets.
2. **Solutioning (Phase 3):** Architect completes `*architecture` with enterprise considerations; TEA runs `*test-design` (auto-detects system-level mode, required) producing `test-design-system.md` with comprehensive testability review; gate check validates planning completeness.
3. **Sprint 0 (Phase 4):** TEA sets up `*framework` and `*ci` with enterprise-grade configurations based on test-design-system.md; team establishes infrastructure.
4. **Sprint Planning (Phase 4):** Scrum Master runs `*sprint-planning` to load all epics into sprint status.
5. **Per-Epic (Phase 4):** For each epic, TEA runs `*test-design` (auto-detects epic-level mode) to create an epic-specific test plan (e.g., `test-design-epic-1.md`, `test-design-epic-2.md`) with compliance-focused risk assessment.
6. **Per-Story (Phase 4):** For each story, TEA uses `*atdd`, `*automate`, `*test-review`, and `*trace`; Dev teams iterate on the findings.
7. **Release Gate:** TEA re-checks coverage, performs final quality audit with `*test-review`, and logs the final gate decision via `*trace` Phase 2, archiving artifacts for compliance.
</details>

.gitignore vendored

@ -41,6 +41,7 @@ cursor
.mcp.json
CLAUDE.local.md
.serena/
.claude/settings.local.json
# Project-specific
.bmad-core
@ -58,4 +59,4 @@ tools/template-test-generator/test-scenarios/
# Test Install Output
z*/.claude/settings.local.json


@ -204,6 +204,17 @@
- Over-engineering indicators
</action>
<action>Check Testability Review (if test-design exists in Phase 3):
**Note:** test-design is recommended for BMad Method, required for Enterprise Method
- Check if {output_folder}/test-design-system.md exists
- If exists: Review testability assessment (Controllability, Observability, Reliability)
- If testability concerns documented: Flag for gate decision
- If missing AND track is Enterprise: Flag as CRITICAL gap
- If missing AND track is Method: Note as recommendation (not blocker)
</action>
<template-output>gap_risk_analysis</template-output>
</step>
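For illustration only, the testability check that this gate-check action describes could be sketched as follows. This is a hypothetical helper, not part of the prompt-driven workflow; the file name `test-design-system.md` and the Method/Enterprise track distinction come from the action above, everything else is assumed:

```typescript
import { existsSync } from "node:fs";
import { join } from "node:path";

type Track = "enterprise" | "method";
type Flag = { severity: "CRITICAL" | "RECOMMENDATION" | "INFO"; message: string };

// Hypothetical sketch of the testability-review check run by *solutioning-gate-check.
function checkTestabilityReview(outputFolder: string, track: Track): Flag {
  const reviewPath = join(outputFolder, "test-design-system.md");
  if (existsSync(reviewPath)) {
    // The reviewer still reads the document for Controllability/Observability/Reliability concerns.
    return { severity: "INFO", message: "Testability review found; fold its concerns into the gate decision." };
  }
  return track === "enterprise"
    ? { severity: "CRITICAL", message: "test-design-system.md missing; required for the Enterprise track." }
    : { severity: "RECOMMENDATION", message: "test-design-system.md missing; recommended for the BMad Method track." };
}
```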

View File

@ -9,24 +9,77 @@
## Overview
Plans comprehensive test coverage strategy with risk assessment, priority classification, and execution ordering. This workflow operates in **two modes**:
- **System-Level Mode (Phase 3)**: Testability review of architecture before solutioning gate check
- **Epic-Level Mode (Phase 4)**: Per-epic test planning with risk assessment (current behavior)
The workflow auto-detects which mode to use based on project phase.
---
## Preflight: Detect Mode and Load Context
**Critical:** Determine the mode before proceeding.
### Mode Detection
1. **Check for sprint-status.yaml**
- If `{output_folder}/bmm-sprint-status.yaml` exists → **Epic-Level Mode** (Phase 4)
- If NOT exists → Check workflow status
2. **Check workflow-status.yaml**
- Read `{output_folder}/bmm-workflow-status.yaml`
- If `solutioning-gate-check: required` or `solutioning-gate-check: recommended`**System-Level Mode** (Phase 3)
- Otherwise → **Epic-Level Mode** (Phase 4 without sprint status yet)
3. **Mode-Specific Requirements**
**System-Level Mode (Phase 3 - Testability Review):**
- ✅ Architecture document exists (architecture.md or tech-spec)
- ✅ PRD exists with functional and non-functional requirements
- ✅ Epics documented (epics.md)
- ⚠️ Output: `{output_folder}/test-design-system.md`
**Epic-Level Mode (Phase 4 - Per-Epic Planning):**
- ✅ Story markdown with acceptance criteria available - ✅ Story markdown with acceptance criteria available
- ✅ PRD or epic documentation exists for context - ✅ PRD or epic documentation exists for context
- ✅ Architecture documents available (optional but recommended) - ✅ Architecture documents available (optional but recommended)
- ✅ Requirements are clear and testable - ✅ Requirements are clear and testable
- ⚠️ Output: `{output_folder}/test-design-epic-{epic_num}.md`
**Halt Condition:** If mode cannot be determined or required files missing, HALT and notify user with missing prerequisites.
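As a minimal illustration (not part of the workflow itself), the auto-detection rules above could look roughly like this. The file names and the `solutioning-gate-check` key are taken from the steps above; the helper and its signature are assumptions:

```typescript
import { existsSync, readFileSync } from "node:fs";
import { join } from "node:path";

type Mode = "system-level" | "epic-level";

// Hypothetical sketch of the preflight mode detection described above.
function detectMode(outputFolder: string): Mode {
  // 1. Sprint status already exists -> Phase 4 epic-level planning.
  if (existsSync(join(outputFolder, "bmm-sprint-status.yaml"))) return "epic-level";

  // 2. Otherwise inspect the workflow status for the solutioning gate check.
  const statusPath = join(outputFolder, "bmm-workflow-status.yaml");
  if (existsSync(statusPath)) {
    const status = readFileSync(statusPath, "utf8");
    // Naive string check; a real implementation would parse the YAML.
    if (/solutioning-gate-check:\s*(required|recommended)/.test(status)) return "system-level";
  }

  // 3. Fall back to epic-level (Phase 4 without sprint status yet).
  return "epic-level";
}
```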
---
## Step 1: Load Context (Mode-Aware)
**Mode-Specific Loading:**
### System-Level Mode (Phase 3)
1. **Read Architecture Documentation**
- Load architecture.md or tech-spec (REQUIRED)
- Load PRD.md for functional and non-functional requirements
- Load epics.md for feature scope
- Identify technology stack decisions (frameworks, databases, deployment targets)
- Note integration points and external system dependencies
- Extract NFR requirements (performance SLOs, security requirements, etc.)
2. **Load Knowledge Base Fragments (System-Level)**
**Critical:** Consult `{project-root}/{bmad_folder}/bmm/testarch/tea-index.csv` to load:
- `nfr-criteria.md` - NFR validation approach (security, performance, reliability, maintainability)
- `test-levels-framework.md` - Test levels strategy guidance
- `risk-governance.md` - Testability risk identification
- `test-quality.md` - Quality standards and Definition of Done
3. **Analyze Existing Test Setup (if brownfield)**
- Search for existing test directories
- Identify current test framework (if any)
- Note testability concerns in existing codebase
### Epic-Level Mode (Phase 4)
1. **Read Requirements Documentation**
- Load PRD.md for high-level product requirements
@ -37,6 +90,7 @@ Plans comprehensive test coverage strategy with risk assessment, priority classi
2. **Load Architecture Context**
- Read architecture.md for system design
- Read tech-spec for implementation details
- Read test-design-system.md (if exists from Phase 3)
- Identify technical constraints and dependencies
- Note integration points and external systems
@ -46,7 +100,7 @@ Plans comprehensive test coverage strategy with risk assessment, priority classi
- Note areas with insufficient testing
- Check for flaky or outdated tests
4. **Load Knowledge Base Fragments (Epic-Level)**
**Critical:** Consult `{project-root}/{bmad_folder}/bmm/testarch/tea-index.csv` to load:
- `risk-governance.md` - Risk classification framework (6 categories: TECH, SEC, PERF, DATA, BUS, OPS), automated scoring, gate decision engine, owner tracking (625 lines, 4 examples)
@ -54,11 +108,118 @@ Plans comprehensive test coverage strategy with risk assessment, priority classi
- `test-levels-framework.md` - Test level selection guidance (E2E vs API vs Component vs Unit with decision matrix, characteristics, when to use each, 467 lines, 4 examples)
- `test-priorities-matrix.md` - P0-P3 prioritization criteria (automated priority calculation, risk-based mapping, tagging strategy, time budgets, 389 lines, 2 examples)
**Halt Condition (Epic-Level only):** If story data or acceptance criteria are missing, check if brownfield exploration is needed. If neither requirements NOR exploration is possible, HALT with message: "Epic-level test design requires clear requirements, acceptance criteria, or brownfield app URL for exploration"
---
## Step 1.5: System-Level Testability Review (Phase 3 Only)
**Skip this step in Epic-Level Mode.** It executes only in System-Level Mode.
### Actions
1. **Review Architecture for Testability**
Evaluate architecture against these criteria:
**Controllability:**
- Can we control system state for testing? (API seeding, factories, database reset)
- Are external dependencies mockable? (interfaces, dependency injection)
- Can we trigger error conditions? (chaos engineering, fault injection)
**Observability:**
- Can we inspect system state? (logging, metrics, traces)
- Are test results deterministic? (no race conditions, clear success/failure)
- Can we validate NFRs? (performance metrics, security audit logs)
**Reliability:**
- Are tests isolated? (parallel-safe, stateless, cleanup discipline)
- Can we reproduce failures? (deterministic waits, HAR capture, seed data)
- Are components loosely coupled? (mockable, testable boundaries)
2. **Identify Architecturally Significant Requirements (ASRs)**
From PRD NFRs and architecture decisions, identify quality requirements that:
- Drive architecture decisions (e.g., "Must handle 10K concurrent users" → caching architecture)
- Pose testability challenges (e.g., "Sub-second response time" → performance test infrastructure)
- Require special test environments (e.g., "Multi-region deployment" → regional test instances)
Score each ASR using a risk matrix (probability × impact); see the scoring sketch after this step's actions.
3. **Define Test Levels Strategy**
Based on architecture (mobile, web, API, microservices, monolith):
- Recommend unit/integration/E2E split (e.g., 70/20/10 for API-heavy, 40/30/30 for UI-heavy)
- Identify test environment needs (local, staging, ephemeral, production-like)
- Define testing approach per technology (Playwright for web, Maestro for mobile, k6 for performance)
4. **Assess NFR Testing Approach**
For each NFR category:
- **Security**: Auth/authz tests, OWASP validation, secret handling (Playwright E2E + security tools)
- **Performance**: Load/stress/spike testing with k6, SLO/SLA thresholds
- **Reliability**: Error handling, retries, circuit breakers, health checks (Playwright + API tests)
- **Maintainability**: Coverage targets, code quality gates, observability validation
5. **Flag Testability Concerns**
Identify architecture decisions that harm testability:
- ❌ Tight coupling (no interfaces, hard dependencies)
- ❌ No dependency injection (can't mock external services)
- ❌ Hardcoded configurations (can't test different envs)
- ❌ Missing observability (can't validate NFRs)
- ❌ Stateful designs (can't parallelize tests)
**Critical:** If testability concerns are blockers (e.g., "Architecture makes performance testing impossible"), document as CONCERNS or FAIL recommendation for gate check.
6. **Output System-Level Test Design**
Write to `{output_folder}/test-design-system.md` containing:
```markdown
# System-Level Test Design
## Testability Assessment
- Controllability: [PASS/CONCERNS/FAIL with details]
- Observability: [PASS/CONCERNS/FAIL with details]
- Reliability: [PASS/CONCERNS/FAIL with details]
## Architecturally Significant Requirements (ASRs)
[Risk-scored quality requirements]
## Test Levels Strategy
- Unit: [X%] - [Rationale]
- Integration: [Y%] - [Rationale]
- E2E: [Z%] - [Rationale]
## NFR Testing Approach
- Security: [Approach with tools]
- Performance: [Approach with tools]
- Reliability: [Approach with tools]
- Maintainability: [Approach with tools]
## Test Environment Requirements
[Infrastructure needs based on deployment architecture]
## Testability Concerns (if any)
[Blockers or concerns that should inform solutioning gate check]
## Recommendations for Sprint 0
[Specific actions for *framework and *ci workflows]
```
**After System-Level Mode:** Skip to Step 4 (Generate Deliverables) - Steps 2-3 are epic-level only.
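To make the ASR scoring from action 2 concrete, here is a small hypothetical sketch. The 1-3 probability/impact scale and the score-to-priority thresholds are illustrative assumptions, not values defined by this workflow; align real thresholds with `test-priorities-matrix.md`:

```typescript
// Hypothetical ASR risk-scoring sketch: probability × impact on an assumed 1-3 scale.
type Level = 1 | 2 | 3; // 1 = low, 2 = medium, 3 = high

interface Asr {
  id: string;
  statement: string;
  probability: Level; // likelihood the quality requirement is violated
  impact: Level;      // consequence if it is violated
}

function scoreAsr(asr: Asr): { score: number; priority: "P0" | "P1" | "P2" } {
  const score = asr.probability * asr.impact; // 1..9
  // Illustrative thresholds only.
  const priority = score >= 6 ? "P0" : score >= 3 ? "P1" : "P2";
  return { score, priority };
}

// Example: "Must handle 10K concurrent users" with medium probability, high impact.
const example = scoreAsr({ id: "ASR-1", statement: "10K concurrent users", probability: 2, impact: 3 });
// -> { score: 6, priority: "P0" }
```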
---
## Step 1.6: Exploratory Mode Selection (Epic-Level Only)
### Actions


@ -1,6 +1,6 @@
# Test Architect workflow: test-design
name: testarch-test-design
description: "Dual-mode workflow: (1) System-level testability review in Solutioning phase, or (2) Epic-level test planning in Implementation phase. Auto-detects mode based on project phase."
author: "BMad"
# Critical variables from config
@ -20,8 +20,12 @@ template: "{installed_path}/test-design-template.md"
# Variables and inputs
variables:
design_level: "full" # full, targeted, minimal - scope of design effort
mode: "auto-detect" # auto-detect (default), system-level, epic-level
# Output configuration
# Note: Actual output file determined dynamically based on mode detection
# - System-Level (Phase 3): {output_folder}/test-design-system.md
# - Epic-Level (Phase 4): {output_folder}/test-design-epic-{epic_num}.md
default_output_file: "{output_folder}/test-design-epic-{epic_num}.md"
# Required tools


@ -66,20 +66,6 @@ phases:
command: "create-design" command: "create-design"
note: "Recommended - must integrate with existing UX patterns" note: "Recommended - must integrate with existing UX patterns"
- id: "framework"
optional: true
agent: "tea"
command: "framework"
output: "Test framework scaffold (Playwright/Cypress)"
note: "Initialize or modernize test framework - critical if brownfield lacks proper test infrastructure"
- id: "ci"
optional: true
agent: "tea"
command: "ci"
output: "CI/CD test pipeline configuration"
note: "Establish or enhance CI pipeline with regression testing strategy"
- phase: 2
name: "Solutioning"
required: true
@ -91,6 +77,13 @@ phases:
output: "Integration architecture with enterprise considerations" output: "Integration architecture with enterprise considerations"
note: "Distills brownfield context + adds security/scalability/compliance design" note: "Distills brownfield context + adds security/scalability/compliance design"
- id: "test-design"
required: true
agent: "tea"
command: "test-design"
output: "System-level testability review"
note: "Enterprise requires testability validation - auto-detects system-level mode"
- id: "create-security-architecture" - id: "create-security-architecture"
optional: true optional: true
agent: "architect" agent: "architect"
@ -106,7 +99,7 @@ phases:
note: "Future workflow - optional extended enterprise workflow for CI/CD integration, deployment strategy, monitoring" note: "Future workflow - optional extended enterprise workflow for CI/CD integration, deployment strategy, monitoring"
- id: "validate-architecture" - id: "validate-architecture"
optional: true recommended: true
agent: "architect" agent: "architect"
command: "validate-architecture" command: "validate-architecture"


@ -54,20 +54,6 @@ phases:
command: "create-design" command: "create-design"
note: "Highly recommended for enterprise - design system and patterns" note: "Highly recommended for enterprise - design system and patterns"
- id: "framework"
optional: true
agent: "tea"
command: "framework"
output: "Test framework scaffold (Playwright/Cypress)"
note: "Initialize production-ready test framework - run once per project"
- id: "ci"
optional: true
agent: "tea"
command: "ci"
output: "CI/CD test pipeline configuration"
note: "Scaffold CI workflow with selective testing, burn-in, caching"
- phase: 2
name: "Solutioning"
required: true
@ -79,6 +65,13 @@ phases:
output: "Enterprise-grade system architecture" output: "Enterprise-grade system architecture"
note: "Includes scalability, multi-tenancy, integration architecture" note: "Includes scalability, multi-tenancy, integration architecture"
- id: "test-design"
required: true
agent: "tea"
command: "test-design"
output: "System-level testability review"
note: "Enterprise requires testability validation - auto-detects system-level mode"
- id: "create-security-architecture" - id: "create-security-architecture"
optional: true optional: true
agent: "architect" agent: "architect"
@ -94,7 +87,7 @@ phases:
note: "Future workflow - optional extended enterprise workflow for CI/CD, deployment, monitoring" note: "Future workflow - optional extended enterprise workflow for CI/CD, deployment, monitoring"
- id: "validate-architecture" - id: "validate-architecture"
optional: true recommended: true
agent: "architect" agent: "architect"
command: "validate-architecture" command: "validate-architecture"
@ -102,7 +95,7 @@ phases:
required: true
agent: "architect"
command: "solutioning-gate-check"
note: "Validates all planning artifacts + testability align before implementation"
- phase: 3
name: "Implementation"


@ -75,6 +75,13 @@ phases:
output: "Integration architecture - solution design for THIS project" output: "Integration architecture - solution design for THIS project"
note: "HIGHLY RECOMMENDED: Distills massive brownfield context into focused solution design. Prevents agent confusion." note: "HIGHLY RECOMMENDED: Distills massive brownfield context into focused solution design. Prevents agent confusion."
- id: "test-design"
recommended: true
agent: "tea"
command: "test-design"
output: "System-level testability review"
note: "Testability assessment before gate check - auto-detects system-level mode"
- id: "validate-architecture" - id: "validate-architecture"
optional: true optional: true
agent: "architect" agent: "architect"


@ -65,6 +65,13 @@ phases:
output: "System architecture document" output: "System architecture document"
note: "Complete system design for greenfield projects" note: "Complete system design for greenfield projects"
- id: "test-design"
recommended: true
agent: "tea"
command: "test-design"
output: "System-level testability review"
note: "Testability assessment before gate check - auto-detects system-level mode"
- id: "validate-architecture" - id: "validate-architecture"
optional: true optional: true
agent: "architect" agent: "architect"
@ -75,7 +82,7 @@ phases:
required: true
agent: "architect"
command: "solutioning-gate-check"
note: "Validates PRD + UX + Architecture + Testability cohesion before implementation"
- phase: 3
name: "Implementation"