docs: remove old links in test architect (#1183)

* docs: remove dead links in test architecture documentation

* docs: updated test architecture documentation for clarity and consistency

* docs: update test architecture documentation for clarity and consistency

* docs: addressed PR comments
Murat K Ozcan 2025-12-23 09:09:08 -06:00 committed by GitHub
parent 19df17b261
commit 1a1a806d99
18 changed files with 56 additions and 23 deletions


@@ -228,6 +228,9 @@ These cheat sheets map TEA workflows to the **BMad Method and Enterprise tracks**
- Use `*atdd` before coding when the team can adopt ATDD; share its checklist with the dev agent.
- Post-implementation, keep `*trace` current, expand coverage with `*automate`, optionally review test quality with `*test-review`. For release gate, run `*trace` with Phase 2 enabled to get deployment decision.
- Use `*test-review` after `*atdd` to validate generated tests, after `*automate` to ensure regression quality, or before gate for final audit.
- Clarification: `*test-review` is optional and only audits existing tests; run it after `*atdd` or `*automate` when you want a quality review, not as a required step.
- Clarification: `*atdd` outputs are not auto-consumed; share the ATDD doc/tests with the dev workflow. `*trace` does not run `*atdd`—it evaluates existing artifacts for coverage and gate readiness.
- Clarification: `*ci` is a one-time setup; recommended early (Phase 3 or before feature work), but it can be done later if it was skipped.
</details>
@@ -440,15 +443,13 @@ Provides fixture-based utilities that integrate into TEA's test generation and r
<br></br>
| Command | Primary Outputs | Notes | With Playwright MCP Enhancements |
| -------------- | --------------------------------------------------------------------------------------------- | ---------------------------------------------------- | ------------------------------------------------------------------------------------------------------------ |
| `*framework` | Playwright/Cypress scaffold, `.env.example`, `.nvmrc`, sample specs | Use when no production-ready harness exists | - |
| `*ci` | CI workflow, selective test scripts, secrets checklist | Platform-aware (GitHub Actions default) | - |
| `*test-design` | Combined risk assessment, mitigation plan, and coverage strategy | Risk scoring + optional exploratory mode | **+ Exploratory**: Interactive UI discovery with browser automation (uncover actual functionality) |
| `*atdd` | Failing acceptance tests + implementation checklist | TDD red phase + optional recording mode | **+ Recording**: AI generation verified with live browser (accurate selectors from real DOM) |
| `*automate` | Prioritized specs, fixtures, README/script updates, DoD summary | Optional healing/recording, avoid duplicate coverage | **+ Healing**: Pattern fixes enhanced with visual debugging + **+ Recording**: AI verified with live browser |
| `*test-review` | Test quality review report with 0-100 score, violations, fixes | Reviews tests against knowledge base patterns | - |
| `*nfr-assess` | NFR assessment report with actions | Focus on security/performance/reliability | - |
| `*trace` | Phase 1: Coverage matrix, recommendations. Phase 2: Gate decision (PASS/CONCERNS/FAIL/WAIVED) | Two-phase workflow: traceability + gate decision | - |


@@ -363,7 +363,7 @@ Planning (prd by PM - FRs/NFRs only)
→ Phase 4 (Implementation)
```
**Note on TEA (Test Architect):** TEA is fully operational with 8 workflows across all phases. TEA validates architecture testability during Phase 3 reviews but does not have a dedicated solutioning workflow. TEA's primary setup occurs after architecture in Phase 3 (`*framework`, `*ci`, system-level `*test-design`), with optional Phase 2 baseline `*trace`. Testing execution happens in Phase 4 (`*atdd`, `*automate`, `*test-review`, `*trace`, `*nfr-assess`).
**Note:** Enterprise uses the same planning and architecture as BMad Method. The only difference is optional extended workflows added AFTER architecture but BEFORE create-epics-and-stories.


@@ -290,13 +290,14 @@ test('should do something', async ({ {fixtureName} }) => {
## Next Steps
1. **Share this checklist and failing tests** with the dev workflow (manual handoff)
2. **Review this checklist** with team in standup or planning
3. **Run failing tests** to confirm RED phase: `{test_command_all}`
4. **Begin implementation** using implementation checklist as guide
5. **Work one test at a time** (red → green for each)
6. **Share progress** in daily standup
7. **When all tests pass**, refactor code for quality
8. **When refactoring complete**, manually update story status to 'done' in sprint-status.yaml
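The red → green discipline in the steps above can be sketched with a minimal hand-rolled runner. This is an illustration only, not TEA output; the `slugify` feature and all names are invented for the example:

```typescript
// Minimal red → green sketch: acceptance tests are written first and must
// all fail (RED), then the implementation is filled in until each passes (GREEN).
type TestCase = { name: string; run: () => void };

function runAll(tests: TestCase[]): { name: string; passed: boolean }[] {
  return tests.map((t) => {
    try {
      t.run();
      return { name: t.name, passed: true };
    } catch {
      return { name: t.name, passed: false };
    }
  });
}

// RED phase: the feature does not exist yet, so every test throws.
let slugify = (input: string): string => {
  throw new Error("not implemented");
};

const tests: TestCase[] = [
  {
    name: "lowercases input",
    run: () => {
      if (slugify("Hello") !== "hello") throw new Error("expected hello");
    },
  },
  {
    name: "replaces spaces with dashes",
    run: () => {
      if (slugify("a b") !== "a-b") throw new Error("expected a-b");
    },
  },
];

const red = runAll(tests); // every result fails: confirms the RED phase

// GREEN phase: implement one test at a time until all pass, then refactor.
slugify = (input: string): string =>
  input.trim().toLowerCase().replace(/\s+/g, "-");

const green = runAll(tests); // every result passes
```

The point of confirming RED first is that a test which never failed proves nothing about the implementation.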
--- ---


@@ -184,6 +184,7 @@ Before starting this workflow, verify:
- [ ] Red-green-refactor workflow
- [ ] Execution commands
- [ ] Next steps for DEV team
- [ ] Output shared with DEV workflow (manual handoff; not auto-consumed)
### All Tests Verified to Fail (RED Phase)


@@ -772,6 +772,7 @@ After completing this workflow, provide a summary:
5. Share progress in daily standup
**Output File**: {output_file}
**Manual Handoff**: Share `{output_file}` and failing tests with the dev workflow (not auto-consumed).
**Knowledge Base References Applied**:


@@ -13,6 +13,7 @@ Before starting this workflow, verify:
**Halt only if:** Framework scaffolding is completely missing (run `framework` workflow first)
**Note:** BMad artifacts (story, tech-spec, PRD) are OPTIONAL - workflow can run without them
**Note:** `automate` generates tests; it does not run `*atdd` or `*test-review`. If ATDD outputs exist, use them as input and avoid duplicate coverage.
---
@@ -421,6 +422,7 @@ Before starting this workflow, verify:
**With atdd Workflow:**
- [ ] ATDD artifacts provided or located (manual handoff; `atdd` not auto-run)
- [ ] Existing ATDD tests checked (if story had ATDD workflow run)
- [ ] Expansion beyond ATDD planned (edge cases, negative paths)
- [ ] No duplicate coverage with ATDD tests
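One way to honor the no-duplicate-coverage checklist above is a simple set difference between planned scenarios and the scenario IDs ATDD already covers. This is a sketch under assumptions; the `Scenario` shape and ID format are invented here and are not a TEA API:

```typescript
// Sketch: filter an automation plan so it only expands beyond existing
// ATDD coverage (edge cases, negative paths) instead of duplicating it.
interface Scenario {
  id: string; // e.g. "1.3-E2E-001" (illustrative ID format, an assumption)
  priority: "P0" | "P1" | "P2";
}

function expandBeyondAtdd(
  planned: Scenario[],
  atddCovered: Set<string>,
): Scenario[] {
  return planned.filter((s) => !atddCovered.has(s.id));
}

const planned: Scenario[] = [
  { id: "1.3-E2E-001", priority: "P0" },
  { id: "1.3-E2E-002", priority: "P1" },
  { id: "1.3-API-001", priority: "P1" },
];
// P0 scenario already generated by a prior *atdd run (manual handoff)
const atddCovered = new Set(["1.3-E2E-001"]);

const toAutomate = expandBeyondAtdd(planned, atddCovered);
// toAutomate holds only the two scenarios ATDD did not cover
```

The same intersection, run in reverse, can flag accidental overlap for the checklist item above.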


@@ -9,6 +9,8 @@
- [ ] Team agrees on CI platform
- [ ] Access to CI platform settings (if updating)
Note: CI setup is typically a one-time task per repo and can be run any time after the test framework is configured.
## Process Steps
### Step 1: Preflight Checks


@@ -11,6 +11,8 @@
Scaffolds a production-ready CI/CD quality pipeline with test execution, burn-in loops for flaky test detection, parallel sharding, artifact collection, and notification configuration. This workflow creates platform-specific CI configuration optimized for fast feedback and reliable test execution.
Note: This is typically a one-time setup per repo; run it any time after the test framework exists, ideally before feature work starts.
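As an illustration only, not the workflow's generated output, a pipeline with the pieces described above (sharding, artifact collection, burn-in) might look like this on GitHub Actions; job names, shard counts, and the burn-in loop are assumptions:

```yaml
# Hypothetical GitHub Actions sketch: sharded Playwright run plus a burn-in
# job that repeats the suite to surface flaky tests early.
name: quality-pipeline
on: [pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        shard: [1, 2, 3, 4] # parallel sharding for fast feedback
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npx playwright test --shard=${{ matrix.shard }}/4
      - uses: actions/upload-artifact@v4 # artifact collection on failure
        if: failure()
        with:
          name: traces-${{ matrix.shard }}
          path: test-results/

  burn-in:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      # Repeat the suite; a flaky test fails on some iteration even though
      # a single run would have passed.
      - run: for i in 1 2 3 4 5; do npx playwright test || exit 1; done
```

Because this is a one-time setup, the secrets checklist and notification wiring are worth doing in the same pass.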
---
## Preflight Requirements


@@ -5,6 +5,8 @@
---
Note: `nfr-assess` evaluates existing evidence; it does not run tests or CI workflows.
## Prerequisites Validation
- [ ] Implementation is deployed and accessible for evaluation


@@ -6,6 +6,8 @@
---
Note: This assessment summarizes existing evidence; it does not run tests or CI workflows.
## Executive Summary
**Assessment:** {PASS_COUNT} PASS, {CONCERNS_COUNT} CONCERNS, {FAIL_COUNT} FAIL


@@ -152,7 +152,8 @@
### Workflow Dependencies
- [ ] Can proceed to `*atdd` workflow with P0 scenarios
- [ ] `*atdd` is a separate workflow and must be run explicitly (not auto-run)
- [ ] Can proceed to `automate` workflow with full coverage plan
- [ ] Risk assessment informs `gate` workflow criteria
- [ ] Integrates with `ci` workflow execution order
@@ -176,7 +177,7 @@
1. [ ] Review risk assessment with team
2. [ ] Prioritize mitigation for high-priority risks (score ≥6)
3. [ ] Allocate resources per estimates
4. [ ] Run `*atdd` workflow to generate P0 tests (separate workflow; not auto-run)
5. [ ] Set up test data factories and fixtures
6. [ ] Schedule team review of test design document
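The score ≥6 threshold above is consistent with the common probability × impact model (each rated 1 to 3); this sketch assumes that model, which the excerpt itself does not spell out:

```typescript
// Sketch: risk score = probability (1-3) x impact (1-3).
// Scores >= 6 are the "high-priority" band referenced in the checklist.
interface Risk {
  id: string;
  probability: 1 | 2 | 3;
  impact: 1 | 2 | 3;
}

const score = (r: Risk): number => r.probability * r.impact;

function highPriority(risks: Risk[]): Risk[] {
  return risks
    .filter((r) => score(r) >= 6)
    .sort((a, b) => score(b) - score(a)); // mitigate highest scores first
}

const risks: Risk[] = [
  { id: "R1", probability: 3, impact: 3 }, // 9: mitigate first
  { id: "R2", probability: 2, impact: 3 }, // 6: still high priority
  { id: "R3", probability: 1, impact: 2 }, // 2: monitor only
];
const mitigateFirst = highPriority(risks); // R1 then R2; R3 drops out
```

Under this model a 3x3 grid tops out at 9, so ≥6 selects roughly the upper third of possible scores.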


@@ -764,7 +764,7 @@ After completing this workflow, provide a summary:
1. Review risk assessment with team
2. Prioritize mitigation for high-risk items (score ≥6)
3. Run `*atdd` to generate failing tests for P0 scenarios (separate workflow; not auto-run by `*test-design`)
4. Allocate resources per effort estimates
5. Set up test data factories and fixtures
```


@@ -246,6 +246,15 @@
---
---
## Follow-on Workflows (Manual)
- Run `*atdd` to generate failing P0 tests (separate workflow; not auto-run).
- Run `*automate` for broader coverage once implementation exists.
---
## Approval
**Test Design Approved By:**


@@ -6,6 +6,8 @@ Use this checklist to validate that the test quality review workflow completed s
## Prerequisites
Note: `test-review` is optional and only audits existing tests; it does not generate tests.
### Test File Discovery
- [ ] Test file(s) identified for review (single/directory/suite scope)


@@ -7,6 +7,8 @@
---
Note: This review audits existing tests; it does not generate tests.
## Executive Summary
**Overall Assessment**: {Excellent | Good | Acceptable | Needs Improvement | Critical Issues}
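A 0-100 review score like the one this report template carries could, for instance, be derived by deducting weighted penalties per violation. The weights below are invented for illustration; the actual rubric lives in the workflow's knowledge base:

```typescript
// Sketch: start at 100 and subtract a weighted penalty per violation,
// clamping at 0. Severity names and weights are assumptions.
type Severity = "critical" | "major" | "minor";

const penalty: Record<Severity, number> = { critical: 20, major: 10, minor: 2 };

function reviewScore(violations: Severity[]): number {
  const deducted = violations.reduce((sum, v) => sum + penalty[v], 0);
  return Math.max(0, 100 - deducted);
}

// e.g. one hard wait (major) and two style nits (minor): 100 - 10 - 2 - 2
const scoreExample = reviewScore(["major", "minor", "minor"]);
```

Mapping score bands to the template's labels (Excellent, Good, and so on) is then a simple threshold table.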


@@ -16,6 +16,7 @@ This checklist covers **two sequential phases**:
- [ ] Acceptance criteria are available (from story file OR inline)
- [ ] Test suite exists (or gaps are acknowledged and documented)
- [ ] If tests are missing, recommend `*atdd` (trace does not run it automatically)
- [ ] Test directory path is correct (`test_dir` variable)
- [ ] Story file is accessible (if using BMad mode)
- [ ] Knowledge base is loaded (test-priorities, traceability, risk-governance)


@@ -52,6 +52,8 @@ This workflow operates in two sequential phases to validate test coverage and de
- If acceptance criteria are completely missing, halt and request them
- If Phase 2 enabled but test execution results missing, warn and skip gate decision
Note: `*trace` never runs `*atdd` automatically; it only recommends running it when tests are missing.
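The Phase 2 decision described here can be pictured as a small rule table. This is a sketch only; the real gate also weighs NFRs and waivers, and the thresholds below are assumptions, not the workflow's actual criteria:

```typescript
// Sketch: reduce Phase 1 coverage plus test results to a gate decision.
type Gate = "PASS" | "CONCERNS" | "FAIL" | "WAIVED";

interface GateInput {
  p0Covered: boolean; // all P0 acceptance criteria traced to passing tests
  failingTests: number;
  coveragePct: number; // 0-100 requirements coverage from Phase 1
  waiverApproved?: boolean; // an approved waiver converts FAIL to WAIVED
}

function gateDecision(g: GateInput): Gate {
  if (!g.p0Covered || g.failingTests > 0) {
    return g.waiverApproved ? "WAIVED" : "FAIL";
  }
  return g.coveragePct >= 90 ? "PASS" : "CONCERNS"; // threshold assumed
}
```

For example, full P0 coverage with 95% traced requirements yields PASS, while missing P0 coverage with an approved waiver yields WAIVED rather than FAIL.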
---
## PHASE 1: REQUIREMENTS TRACEABILITY


@@ -6,6 +6,8 @@
---
Note: This workflow does not generate tests. If gaps exist, run `*atdd` or `*automate` to create coverage.
## PHASE 1: REQUIREMENTS TRACEABILITY
### Coverage Summary