docs: update test architecture documentation for clarity and consistency
commit 325a264412 (parent d52c3ad288)
@@ -363,7 +363,7 @@ Planning (prd by PM - FRs/NFRs only)
 → Phase 4 (Implementation)
 ```
 
-**Note on TEA (Test Architect):** TEA is fully operational with 8 workflows across all phases. TEA validates architecture testability during Phase 3 reviews but does not have a dedicated solutioning workflow. TEA's primary setup occurs in Phase 2 (`*framework`, `*ci`, `*test-design`) and testing execution in Phase 4 (`*atdd`, `*automate`, `*test-review`, `*trace`, `*nfr-assess`).
+**Note on TEA (Test Architect):** TEA is fully operational with 8 workflows across all phases. TEA validates architecture testability during Phase 3 reviews but does not have a dedicated solutioning workflow. TEA's primary setup occurs after architecture in Phase 3 (`*framework`, `*ci`, system-level `*test-design`), with optional Phase 2 baseline `*trace`. Testing execution happens in Phase 4 (`*atdd`, `*automate`, `*test-review`, `*trace`, `*nfr-assess`).
 
 **Note:** Enterprise uses the same planning and architecture as BMad Method. The only difference is optional extended workflows added AFTER architecture but BEFORE create-epics-and-stories.
 
@@ -290,13 +290,14 @@ test('should do something', async ({ {fixtureName} }) => {
 
 ## Next Steps
 
-1. **Review this checklist** with team in standup or planning
-2. **Run failing tests** to confirm RED phase: `{test_command_all}`
-3. **Begin implementation** using implementation checklist as guide
-4. **Work one test at a time** (red → green for each)
-5. **Share progress** in daily standup
-6. **When all tests pass**, refactor code for quality
-7. **When refactoring complete**, manually update story status to 'done' in sprint-status.yaml
+1. **Share this checklist and failing tests** with the dev workflow (manual handoff)
+2. **Review this checklist** with team in standup or planning
+3. **Run failing tests** to confirm RED phase: `{test_command_all}`
+4. **Begin implementation** using implementation checklist as guide
+5. **Work one test at a time** (red → green for each)
+6. **Share progress** in daily standup
+7. **When all tests pass**, refactor code for quality
+8. **When refactoring complete**, manually update story status to 'done' in sprint-status.yaml
 
 ---
 
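For readers following the RED-phase steps above, here is a minimal sketch of what a failing ATDD test might look like before implementation. It assumes a Playwright setup (matching the `test('should do something', async ({ {fixtureName} }) => {` template in the hunk header); the route, selectors, and AC number are placeholders, not part of the actual template.

```typescript
// Illustrative RED-phase test; assumes Playwright, all names are placeholders.
import { test, expect } from '@playwright/test';

test.describe('AC-1: user can submit an order', () => {
  test('shows a confirmation number after checkout', async ({ page }) => {
    // Nothing is implemented yet, so this flow is expected to fail (RED).
    await page.goto('/checkout');
    await page.getByRole('button', { name: 'Place order' }).click();

    // The assertion encodes the acceptance criterion; it only goes green
    // once the implementation renders a confirmation number.
    await expect(page.getByTestId('confirmation-number')).toBeVisible();
  });
});
```

Running `{test_command_all}` at this point should report the test as failing, which is the confirmation asked for in the new step 3.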
@@ -184,6 +184,7 @@ Before starting this workflow, verify:
 - [ ] Red-green-refactor workflow
 - [ ] Execution commands
 - [ ] Next steps for DEV team
+- [ ] Output shared with DEV workflow (manual handoff; not auto-consumed)
 
 ### All Tests Verified to Fail (RED Phase)
 
@@ -772,6 +772,7 @@ After completing this workflow, provide a summary:
 5. Share progress in daily standup
 
 **Output File**: {output_file}
+**Manual Handoff**: Share `{output_file}` and failing tests with the dev workflow (not auto-consumed).
 
 **Knowledge Base References Applied**:
 
@@ -13,6 +13,7 @@ Before starting this workflow, verify:
 **Halt only if:** Framework scaffolding is completely missing (run `framework` workflow first)
 
 **Note:** BMad artifacts (story, tech-spec, PRD) are OPTIONAL - workflow can run without them
+**Note:** `automate` generates tests; it does not run `*atdd` or `*test-review`. If ATDD outputs exist, use them as input and avoid duplicate coverage.
 
 ---
 
@@ -421,6 +422,7 @@ Before starting this workflow, verify:
 
 **With atdd Workflow:**
 
+- [ ] ATDD artifacts provided or located (manual handoff; `atdd` not auto-run)
 - [ ] Existing ATDD tests checked (if story had ATDD workflow run)
 - [ ] Expansion beyond ATDD planned (edge cases, negative paths)
 - [ ] No duplicate coverage with ATDD tests
@@ -9,6 +9,8 @@
 - [ ] Team agrees on CI platform
 - [ ] Access to CI platform settings (if updating)
 
+Note: CI setup is typically a one-time task per repo and can be run any time after the test framework is configured.
+
 ## Process Steps
 
 ### Step 1: Preflight Checks
@@ -11,6 +11,8 @@
 
 Scaffolds a production-ready CI/CD quality pipeline with test execution, burn-in loops for flaky test detection, parallel sharding, artifact collection, and notification configuration. This workflow creates platform-specific CI configuration optimized for fast feedback and reliable test execution.
 
+Note: This is typically a one-time setup per repo; run it any time after the test framework exists, ideally before feature work starts.
+
 ---
 
 ## Preflight Requirements
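As a rough illustration of the kind of configuration such a pipeline drives (assuming a Playwright-based framework, which the fixture-style `test(...)` signature earlier in this diff suggests; the concrete values are assumptions, not the workflow's actual output):

```typescript
// playwright.config.ts sketch with CI-oriented settings; values are illustrative.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  fullyParallel: true,                  // allow per-test parallelism so shards stay balanced
  forbidOnly: !!process.env.CI,         // fail CI if a stray test.only slips in
  retries: process.env.CI ? 2 : 0,      // retry in CI to surface flakiness without blocking
  workers: process.env.CI ? 4 : undefined,
  reporter: [
    ['html', { open: 'never' }],                    // report uploaded as a CI artifact
    ['junit', { outputFile: 'results/junit.xml' }], // machine-readable results
  ],
  use: {
    trace: 'on-first-retry',            // collect traces for failing tests as artifacts
  },
});
```

Sharding and burn-in are then typically driven from the CI job itself, for example one `npx playwright test --shard=1/4` invocation per job, plus repeating new or changed specs (Playwright's `--repeat-each` flag) to catch flaky tests before merge.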
@@ -5,6 +5,8 @@
 
 ---
 
+Note: `nfr-assess` evaluates existing evidence; it does not run tests or CI workflows.
+
 ## Prerequisites Validation
 
 - [ ] Implementation is deployed and accessible for evaluation
@@ -6,6 +6,8 @@
 
 ---
 
+Note: This assessment summarizes existing evidence; it does not run tests or CI workflows.
+
 ## Executive Summary
 
 **Assessment:** {PASS_COUNT} PASS, {CONCERNS_COUNT} CONCERNS, {FAIL_COUNT} FAIL
@@ -153,6 +153,7 @@
 ### Workflow Dependencies
 
 - [ ] Can proceed to `atdd` workflow with P0 scenarios
+- [ ] `atdd` is a separate workflow and must be run explicitly (not auto-run)
 - [ ] Can proceed to `automate` workflow with full coverage plan
 - [ ] Risk assessment informs `gate` workflow criteria
 - [ ] Integrates with `ci` workflow execution order
@@ -176,7 +177,7 @@
 1. [ ] Review risk assessment with team
 2. [ ] Prioritize mitigation for high-priority risks (score ≥6)
 3. [ ] Allocate resources per estimates
-4. [ ] Run `atdd` workflow to generate P0 tests
+4. [ ] Run `atdd` workflow to generate P0 tests (separate workflow; not auto-run)
 5. [ ] Set up test data factories and fixtures
 6. [ ] Schedule team review of test design document
 
@@ -764,7 +764,7 @@ After completing this workflow, provide a summary:
 
 1. Review risk assessment with team
 2. Prioritize mitigation for high-risk items (score ≥6)
-3. Run `atdd` workflow to generate failing tests for P0 scenarios
+3. Run `atdd` workflow to generate failing tests for P0 scenarios (separate workflow; not auto-run by `test-design`)
 4. Allocate resources per effort estimates
 5. Set up test data factories and fixtures
 ```
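Step 5 above ("Set up test data factories and fixtures") might translate into something like the following sketch; the `User` shape and its defaults are assumptions for illustration, not part of the workflow output.

```typescript
// factories/user.ts sketch; the User shape and defaults are illustrative assumptions.
export interface User {
  id: string;
  email: string;
  name: string;
  role: 'admin' | 'member';
}

let sequence = 0;

// Builds a unique, valid user by default; tests override only the fields they care about.
export function buildUser(overrides: Partial<User> = {}): User {
  sequence += 1;
  return {
    id: `user-${sequence}`,
    email: `user${sequence}@example.com`,
    name: `Test User ${sequence}`,
    role: 'member',
    ...overrides,
  };
}
```

A test fixture can then hand `buildUser()` results to each test, keeping data unique per test and avoiding cross-test coupling.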
@@ -246,6 +246,15 @@
 
 ---
 
+---
+
+## Follow-on Workflows (Manual)
+
+- Run `atdd` to generate failing P0 tests (separate workflow; not auto-run).
+- Run `automate` for broader coverage once implementation exists.
+
+---
+
 ## Approval
 
 **Test Design Approved By:**
@@ -6,6 +6,8 @@ Use this checklist to validate that the test quality review workflow completed s
 
 ## Prerequisites
 
+Note: `test-review` is optional and only audits existing tests; it does not generate tests.
+
 ### Test File Discovery
 
 - [ ] Test file(s) identified for review (single/directory/suite scope)
@@ -7,6 +7,8 @@
 
 ---
 
+Note: This review audits existing tests; it does not generate tests.
+
 ## Executive Summary
 
 **Overall Assessment**: {Excellent | Good | Acceptable | Needs Improvement | Critical Issues}
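To make the kind of finding such a review reports concrete, here is an illustrative before/after (assuming Playwright test code; the page and selectors are placeholders). Hard waits are a classic flakiness source the audit would flag, while web-first assertions retry automatically.

```typescript
// Illustrative review finding; assumes Playwright, placeholder selectors.
import { test, expect } from '@playwright/test';

test('saves the profile', async ({ page }) => {
  await page.goto('/profile');
  await page.getByRole('button', { name: 'Save' }).click();

  // Flagged pattern: a fixed sleep plus a one-shot check, timing-dependent and flaky.
  // await page.waitForTimeout(3000);
  // expect(await page.locator('.toast').isVisible()).toBe(true);

  // Recommended pattern: a web-first assertion that retries until the toast appears.
  await expect(page.locator('.toast')).toHaveText('Profile saved');
});
```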
@@ -16,6 +16,7 @@ This checklist covers **two sequential phases**:
 
 - [ ] Acceptance criteria are available (from story file OR inline)
 - [ ] Test suite exists (or gaps are acknowledged and documented)
+- [ ] If tests are missing, recommend `*atdd` (trace does not run it automatically)
 - [ ] Test directory path is correct (`test_dir` variable)
 - [ ] Story file is accessible (if using BMad mode)
 - [ ] Knowledge base is loaded (test-priorities, traceability, risk-governance)
@@ -52,6 +52,8 @@ This workflow operates in two sequential phases to validate test coverage and de
 - If acceptance criteria are completely missing, halt and request them
 - If Phase 2 enabled but test execution results missing, warn and skip gate decision
 
+Note: `*trace` never runs `*atdd` automatically; it only recommends running it when tests are missing.
+
 ---
 
 ## PHASE 1: REQUIREMENTS TRACEABILITY
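One common way to keep the Phase 1 requirements-to-tests mapping machine-checkable (an illustration only, not something this workflow mandates) is to tag each test with the acceptance-criterion ID it covers, for example with the tag option available in recent Playwright versions; the AC ID and selectors below are placeholders.

```typescript
// Illustrative AC tagging for traceability; assumes a recent Playwright version.
import { test, expect } from '@playwright/test';

test('rejects an expired card', { tag: '@AC-2.3' }, async ({ page }) => {
  await page.goto('/checkout');
  await page.getByLabel('Card expiry').fill('01/20');
  await page.getByRole('button', { name: 'Pay' }).click();
  await expect(page.getByText('Card has expired')).toBeVisible();
});
```

A trace report can then list coverage per criterion, for example `npx playwright test --grep "@AC-2.3" --list`, and any criterion with no matching tests shows up as a gap.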
@@ -6,6 +6,8 @@
 
 ---
 
+Note: This workflow does not generate tests. If gaps exist, run `atdd` or `automate` to create coverage.
+
 ## PHASE 1: REQUIREMENTS TRACEABILITY
 
 ### Coverage Summary