diff --git a/src/modules/bmm/agents/tea.md b/src/modules/bmm/agents/tea.md
index e26e0213..f8bba936 100644
--- a/src/modules/bmm/agents/tea.md
+++ b/src/modules/bmm/agents/tea.md
@@ -12,19 +12,21 @@
Load into memory {project-root}/bmad/bmm/config.yaml and set variable project_name, output_folder, user_name, communication_language
+ Load into memory {project-root}/bmad/bmm/testarch/tea-knowledge.md and {project-root}/bmad/bmm/testarch/test-resources-for-ai-flat.txt for Murat’s latest guidance and examples
+ Cross-check recommendations with the current official Playwright, Cypress, Pact, and CI platform documentation when repo guidance appears outdated
Remember the user's name is {user_name}
ALWAYS communicate in {communication_language}
Show numbered cmd list
- Initialize production-ready test framework architecture
- Generate E2E tests first, before starting implementation
- Generate comprehensive test automation
- Create comprehensive test scenarios
- Map requirements to tests Given-When-Then BDD format
- Validate non-functional requirements
- Scaffold CI/CD quality pipeline
- Write/update quality gate decision assessment
+ Initialize production-ready test framework architecture
+ Generate E2E tests first, before starting implementation
+ Generate comprehensive test automation
+ Create comprehensive test scenarios
+ Map requirements to tests Given-When-Then BDD format
+ Validate non-functional requirements
+ Scaffold CI/CD quality pipeline
+ Write/update quality gate decision assessment
Goodbye+exit persona
diff --git a/src/modules/bmm/testarch/README.md b/src/modules/bmm/testarch/README.md
index ab0bf433..402c2499 100644
--- a/src/modules/bmm/testarch/README.md
+++ b/src/modules/bmm/testarch/README.md
@@ -18,7 +18,7 @@ last-redoc-date: 2025-09-30
- Architect `*solution-architecture`
2. Confirm `bmad/bmm/config.yaml` defines `project_name`, `output_folder`, `dev_story_location`, and language settings.
3. Ensure a test framework setup exists; if not, use the `*framework` command to create one prior to development.
-4. Skim supporting references under `./testarch/`:
+4. Skim supporting references (knowledge under `testarch/`, command workflows under `workflows/testarch/`).
- `tea-knowledge.md`
- `test-levels-framework.md`
- `test-priorities-matrix.md`
@@ -125,31 +125,35 @@ last-redoc-date: 2025-09-30
## Command Catalog
-| Command | Task File | Primary Outputs | Notes |
-| -------------- | -------------------------------- | -------------------------------------------------------------------- | ------------------------------------------------ |
-| `*framework` | `testarch/framework.md` | Playwright/Cypress scaffold, `.env.example`, `.nvmrc`, sample specs | Use when no production-ready harness exists |
-| `*atdd` | `testarch/atdd.md` | Failing Acceptance-Test Driven Development, implementation checklist | Requires approved story + harness |
-| `*automate` | `testarch/automate.md` | Prioritized specs, fixtures, README/script updates, DoD summary | Avoid duplicate coverage (see priority matrix) |
-| `*ci` | `testarch/ci.md` | CI workflow, selective test scripts, secrets checklist | Platform-aware (GitHub Actions default) |
-| `*test-design` | `testarch/test-design.md` | Combined risk assessment, mitigation plan, and coverage strategy | Handles risk scoring and test design in one pass |
-| `*trace` | `testarch/trace-requirements.md` | Coverage matrix, recommendations, gate snippet | Requires access to story/tests repositories |
-| `*nfr-assess` | `testarch/nfr-assess.md` | NFR assessment report with actions | Focus on security/performance/reliability |
-| `*gate` | `testarch/gate.md` | Gate YAML + summary (PASS/CONCERNS/FAIL/WAIVED) | Deterministic decision rules + rationale |
+| Command | Task File | Primary Outputs | Notes |
+| -------------- | ------------------------------------------------ | ------------------------------------------------------------------- | ------------------------------------------------ |
+| `*framework` | `workflows/testarch/framework/instructions.md` | Playwright/Cypress scaffold, `.env.example`, `.nvmrc`, sample specs | Use when no production-ready harness exists |
+| `*atdd` | `workflows/testarch/atdd/instructions.md` | Failing acceptance tests + implementation checklist | Requires approved story + harness |
+| `*automate` | `workflows/testarch/automate/instructions.md` | Prioritized specs, fixtures, README/script updates, DoD summary | Avoid duplicate coverage (see priority matrix) |
+| `*ci` | `workflows/testarch/ci/instructions.md` | CI workflow, selective test scripts, secrets checklist | Platform-aware (GitHub Actions default) |
+| `*test-design` | `workflows/testarch/test-design/instructions.md` | Combined risk assessment, mitigation plan, and coverage strategy | Handles risk scoring and test design in one pass |
+| `*trace` | `workflows/testarch/trace/instructions.md` | Coverage matrix, recommendations, gate snippet | Requires access to story/tests repositories |
+| `*nfr-assess` | `workflows/testarch/nfr-assess/instructions.md` | NFR assessment report with actions | Focus on security/performance/reliability |
+| `*gate` | `workflows/testarch/gate/instructions.md` | Gate YAML + summary (PASS/CONCERNS/FAIL/WAIVED) | Deterministic decision rules + rationale |
Command Guidance and Context Loading
-- Each task reads one row from `tea-commands.csv` via `command_key`, expanding pipe-delimited (`|`) values into checklists.
-- Keep CSV rows lightweight; place in-depth heuristics in `tea-knowledge.md` and reference via `knowledge_tags`.
-- If the CSV grows substantially, consider splitting into scoped registries (e.g., planning vs execution) or upgrading to Markdown tables for humans.
+- Each task now carries its own preflight/flow/deliverable guidance inline.
+- `tea-knowledge.md` still stores heuristics; update the brief alongside task edits.
+- Consider future modularization into orchestrated workflows if additional automation is needed.
- `tea-knowledge.md` encapsulates Murat’s philosophy—update both CSV and knowledge file together to avoid drift.
+## Workflow Placement
+
+We keep every Test Architect workflow under `workflows/testarch/` instead of scattering them across the phase folders. TEA steps show up during planning (`*framework`), implementation (`*atdd`, `*automate`, `*trace`), and release (`*gate`), so a single directory keeps the command catalog and examples coherent while still letting the orchestrator treat each command as a first-class workflow. When phase-specific navigation improves, we can add lightweight entrypoints without losing this central reference.
+
## Appendix
- **Supporting Knowledge:**
- `tea-knowledge.md` – Murat’s testing philosophy, heuristics, and risk scales.
- `test-levels-framework.md` – Decision matrix for unit/integration/E2E selection.
- `test-priorities-matrix.md` – Priority (P0–P3) criteria and target coverage percentages.
- s
+ - `test-resources-for-ai-flat.txt` – Flattened 347 KB bundle of Murat’s blogs, philosophy notes, and training material. Each `FILE:` section can be loaded on demand when the agent needs deeper examples or rationale.
diff --git a/src/modules/bmm/testarch/atdd.md b/src/modules/bmm/testarch/atdd.md
deleted file mode 100644
index b02699a8..00000000
--- a/src/modules/bmm/testarch/atdd.md
+++ /dev/null
@@ -1,40 +0,0 @@
-
-
-# Acceptance TDD v2.0 (Slim)
-
-```xml
-
-
- Set command_key="*tdd"
- Load {project-root}/bmad/bmm/testarch/tea-commands.csv and parse the row where command equals command_key
- Load {project-root}/bmad/bmm/testarch/tea-knowledge.md into context
- Use CSV columns preflight, flow_cues, deliverables, halt_rules, notes, knowledge_tags to guide execution
- Split pipe-delimited fields into individual checklist items
- Map knowledge_tags to sections in the knowledge brief and apply them while writing tests
- Keep responses concise and focused on generating the failing acceptance tests plus the implementation checklist
-
-
-
- Verify each preflight requirement; gather missing info from user when needed
- Abort if halt_rules are triggered
-
-
- Walk through flow_cues sequentially, adapting to story context
- Use knowledge brief heuristics to enforce Murat's patterns (one test = one concern, explicit assertions, etc.)
-
-
- Produce artifacts described in deliverables
- Summarize failing tests and checklist items for the developer
-
-
-
- Apply halt_rules from the CSV row exactly
-
-
- Use the notes column for additional constraints or reminders
-
-
-
-```
diff --git a/src/modules/bmm/testarch/automate.md b/src/modules/bmm/testarch/automate.md
deleted file mode 100644
index f91f860c..00000000
--- a/src/modules/bmm/testarch/automate.md
+++ /dev/null
@@ -1,38 +0,0 @@
-
-
-# Automation Expansion v2.0 (Slim)
-
-```xml
-
-
- Set command_key="*automate"
- Load {project-root}/bmad/bmm/testarch/tea-commands.csv and read the row where command equals command_key
- Load {project-root}/bmad/bmm/testarch/tea-knowledge.md for heuristics
- Follow CSV columns preflight, flow_cues, deliverables, halt_rules, notes, knowledge_tags
- Convert pipe-delimited values into actionable checklists
- Apply Murat's opinions from the knowledge brief when filling gaps or refactoring tests
-
-
-
- Confirm prerequisites; stop if halt_rules are triggered
-
-
- Walk through flow_cues to analyse existing coverage and add only necessary specs
- Use knowledge heuristics (composable helpers, deterministic waits, network boundary) while generating code
-
-
- Create or update artifacts listed in deliverables
- Summarize coverage deltas and remaining recommendations
-
-
-
- Apply halt_rules from the CSV row as written
-
-
- Reference notes column for additional guardrails
-
-
-
-```
diff --git a/src/modules/bmm/testarch/ci.md b/src/modules/bmm/testarch/ci.md
deleted file mode 100644
index 3db84e64..00000000
--- a/src/modules/bmm/testarch/ci.md
+++ /dev/null
@@ -1,39 +0,0 @@
-
-
-# CI/CD Enablement v2.0 (Slim)
-
-```xml
-
-
- Set command_key="*ci"
- Load {project-root}/bmad/bmm/testarch/tea-commands.csv and read the row where command equals command_key
- Load {project-root}/bmad/bmm/testarch/tea-knowledge.md to recall CI heuristics
- Follow CSV columns preflight, flow_cues, deliverables, halt_rules, notes, knowledge_tags
- Split pipe-delimited values into actionable lists
- Keep output focused on workflow YAML, scripts, and guidance explicitly requested in deliverables
-
-
-
- Confirm prerequisites and required permissions
- Stop if halt_rules trigger
-
-
- Apply flow_cues to design the pipeline stages
- Leverage knowledge brief guidance (cost vs confidence, sharding, artifacts) when making trade-offs
-
-
- Create artifacts listed in deliverables (workflow files, scripts, documentation)
- Summarize the pipeline, selective testing strategy, and required secrets
-
-
-
- Use halt_rules from the CSV row verbatim
-
-
- Reference notes column for optimization reminders
-
-
-
-```
diff --git a/src/modules/bmm/testarch/framework.md b/src/modules/bmm/testarch/framework.md
deleted file mode 100644
index d754f0ae..00000000
--- a/src/modules/bmm/testarch/framework.md
+++ /dev/null
@@ -1,41 +0,0 @@
-
-
-# Test Framework Setup v2.0 (Slim)
-
-```xml
-
-
- Set command_key="*framework"
- Load {project-root}/bmad/bmm/testarch/tea-commands.csv and parse the row where command equals command_key
- Load {project-root}/bmad/bmm/testarch/tea-knowledge.md to internal memory
- Use the CSV columns preflight, flow_cues, deliverables, halt_rules, notes, knowledge_tags to guide behaviour
- Split pipe-delimited values (|) into individual checklist items
- Map knowledge_tags to matching sections in the knowledge brief and apply those heuristics throughout execution
- DO NOT expand beyond the guidance unless the user supplies extra context; keep instructions lean and adaptive
-
-
-
- Evaluate each item in preflight; confirm or collect missing information
- If any preflight requirement fails, follow halt_rules and stop
-
-
- Follow flow_cues sequence, adapting to the project's stack
- When deciding frameworks or patterns, apply relevant heuristics from tea-knowledge.md via knowledge_tags
- Keep generated assets minimal—only what the CSV specifies
-
-
- Create artifacts listed in deliverables
- Capture a concise summary for the user explaining what was scaffolded
-
-
-
- Follow halt_rules from the CSV row verbatim
-
-
- Use notes column for additional guardrails while executing
-
-
-
-```
diff --git a/src/modules/bmm/testarch/gate.md b/src/modules/bmm/testarch/gate.md
deleted file mode 100644
index 1bcc805e..00000000
--- a/src/modules/bmm/testarch/gate.md
+++ /dev/null
@@ -1,38 +0,0 @@
-
-
-# Quality Gate v2.0 (Slim)
-
-```xml
-
-
- Set command_key="*gate"
- Load {project-root}/bmad/bmm/testarch/tea-commands.csv and read the matching row
- Load {project-root}/bmad/bmm/testarch/tea-knowledge.md to reinforce risk-model heuristics
- Use CSV columns preflight, flow_cues, deliverables, halt_rules, notes, knowledge_tags
- Split pipe-delimited values into actionable items
- Apply deterministic rules for PASS/CONCERNS/FAIL/WAIVED; capture rationale and approvals
-
-
-
- Gather latest assessments and confirm prerequisites; halt per halt_rules if missing
-
-
- Follow flow_cues to determine status, residual risk, follow-ups
- Use knowledge heuristics to balance cost vs confidence when negotiating waivers
-
-
- Update gate YAML specified in deliverables
- Summarize decision, rationale, owners, and deadlines
-
-
-
- Apply halt_rules from the CSV row
-
-
- Use notes column for quality bar reminders
-
-
-
-```
diff --git a/src/modules/bmm/testarch/nfr-assess.md b/src/modules/bmm/testarch/nfr-assess.md
deleted file mode 100644
index 9985f6d8..00000000
--- a/src/modules/bmm/testarch/nfr-assess.md
+++ /dev/null
@@ -1,38 +0,0 @@
-
-
-# NFR Assessment v2.0 (Slim)
-
-```xml
-
-
- Set command_key="*nfr-assess"
- Load {project-root}/bmad/bmm/testarch/tea-commands.csv and parse the matching row
- Load {project-root}/bmad/bmm/testarch/tea-knowledge.md focusing on NFR guidance
- Use CSV columns preflight, flow_cues, deliverables, halt_rules, notes, knowledge_tags
- Split pipe-delimited values into actionable lists
- Demand evidence for each non-functional claim (tests, telemetry, logs)
-
-
-
- Confirm prerequisites; halt per halt_rules if unmet
-
-
- Follow flow_cues to evaluate Security, Performance, Reliability, Maintainability
- Use knowledge heuristics to suggest monitoring and fail-fast patterns
-
-
- Produce assessment document and recommendations defined in deliverables
- Summarize status, gaps, and actions
-
-
-
- Apply halt_rules from the CSV row
-
-
- Reference notes column for negotiation framing (cost vs confidence)
-
-
-
-```
diff --git a/src/modules/bmm/testarch/tea-commands.csv b/src/modules/bmm/testarch/tea-commands.csv
deleted file mode 100644
index 4451a457..00000000
--- a/src/modules/bmm/testarch/tea-commands.csv
+++ /dev/null
@@ -1,9 +0,0 @@
-command,title,when_to_use,preflight,flow_cues,deliverables,halt_rules,notes,knowledge_tags
-*automate,Automation expansion,After implementation or when reforging coverage,all acceptance criteria satisfied|code builds locally|framework configured,"Review story source/diff to confirm automation target; ensure fixture architecture exists (mergeTests for Playwright, commands for Cypress) and implement apiRequest/network/auth/log fixtures if missing; map acceptance criteria with test-levels-framework.md guidance and avoid duplicate coverage; assign priorities using test-priorities-matrix.md; generate unit/integration/E2E specs with naming convention feature-name.spec.ts, covering happy, negative, and edge paths; enforce deterministic waits, self-cleaning factories, and <=1.5 minute execution per test; run suite and capture Definition of Done results; update package.json scripts and README instructions",New or enhanced spec files grouped by level; fixture modules under support/; data factory utilities; updated package.json scripts and README notes; DoD summary with remaining gaps; gate-ready coverage summary,"If automation target unclear or framework missing, halt and request clarification",Never create page objects; keep tests <300 lines and stateless; forbid hard waits and conditional flow in tests; co-locate tests near source; flag flaky patterns immediately,philosophy/core|patterns/helpers|patterns/waits|patterns/dod
-*ci,CI/CD quality pipeline,Once automation suite exists or needs optimization,git repository initialized|tests pass locally|team agrees on target environments|access to CI platform settings,"Detect CI platform (default GitHub Actions, ask if GitLab/CircleCI/etc); scaffold workflow (.github/workflows/test.yml or platform equivalent) with triggers; set Node.js version from .nvmrc and cache node_modules + browsers; stage jobs: lint -> unit -> component -> e2e with matrix parallelization (shard by file not test); add selective execution script for affected tests; create burn-in job that reruns changed specs 3x to catch flakiness; attach artifacts on failure (traces/videos/HAR); configure retries/backoff and concurrency controls; document required secrets and environment variables; add Slack/email notifications and local script mirroring CI",.github/workflows/test.yml (or platform equivalent); scripts/test-changed.sh; scripts/burn-in-changed.sh; updated README/ci.md instructions; secrets checklist; dashboard or badge configuration,"If git repo absent, test framework missing, or CI platform unspecified, halt and request setup",Target 20x speedups via parallel shards + caching; shard by file; keep jobs under 10 minutes; wait-on-timeout 120s for app startup; ensure npm test locally matches CI run; mention alternative platform paths when not on GitHub,philosophy/core|ci-strategy
-*framework,Initialize test architecture,Run once per repo or when no production-ready harness exists,package.json present|no existing E2E framework detected|architectural context available,"Identify stack from package.json (React/Vue/Angular/Next.js); detect bundler (Vite/Webpack/Rollup/esbuild); match test language to source (JS/TS frontend -> JS/TS tests); choose Playwright for large or performance-critical repos, Cypress for small DX-first teams; create {framework}/tests/ and {framework}/support/fixtures/ and {framework}/support/helpers/; configure config files with timeouts (action 15s, navigation 30s, test 60s) and reporters (HTML + JUnit); create .env.example with TEST_ENV, BASE_URL, API_URL; implement pure function->fixture->mergeTests pattern and faker-based data factories; enable failure-only screenshots/videos and ensure .nvmrc recorded",playwright/ or cypress/ folder with config + support tree; .env.example; .nvmrc; example tests; README with setup instructions,"If package.json missing OR framework already configured, halt and instruct manual review","Playwright: worker parallelism, trace viewer, multi-language support; Cypress: avoid if many dependent API calls; Component testing: Vitest (large) or Cypress CT (small); Contract testing: Pact for microservices; always use data-cy/data-testid selectors",philosophy/core|patterns/fixtures|patterns/selectors
-*gate,Quality gate decision,After review or mitigation updates,latest assessments gathered|team consensus on fixes,"Assemble story metadata (id, title); choose gate status using deterministic rules (PASS all critical issues resolved, CONCERNS minor residual risk, FAIL critical blockers, WAIVED approved by business); update YAML schema with sections: metadata, waiver status, top_issues, risk_summary totals, recommendations (must_fix, monitor), nfr_validation statuses, history; capture rationale, owners, due dates, and summary comment back to story","docs/qa/gates/{story}.yml updated with schema fields (schema, story, story_title, gate, status_reason, reviewer, updated, waiver, top_issues, risk_summary, recommendations, nfr_validation, history); summary message for team","If review incomplete or risk data outdated, halt and request rerun","FAIL whenever unresolved P0 risks/tests or security holes remain; CONCERNS when mitigations planned but residual risk exists; WAIVED requires reason, approver, and expiry; maintain audit trail in history",philosophy/core|risk-model
-*nfr-assess,NFR validation,Late development or pre-review for critical stories,implementation deployed locally|non-functional goals defined or discoverable,"Ask which NFRs to assess; default to core four (security, performance, reliability, maintainability); gather thresholds from story/architecture/technical-preferences and mark unknown targets; inspect evidence (tests, telemetry, logs) for each NFR; classify status using deterministic pass/concerns/fail rules and list quick wins; produce gate block and assessment doc with recommended actions",NFR assessment markdown with findings; gate YAML block capturing statuses and notes; checklist of evidence gaps and follow-up owners,"If NFR targets undefined and no guidance available, request definition and halt","Unknown thresholds -> CONCERNS, never guess; ensure each NFR has evidence or call it out; suggest monitoring hooks and fail-fast mechanisms when gaps exist",philosophy/core|nfr
-*tdd,Acceptance Test Driven Development,Before implementation when team commits to TDD,story approved with acceptance criteria|dev sandbox ready|framework scaffolding in place,Clarify acceptance criteria and affected systems; pick appropriate test level (E2E/API/Component); write failing acceptance tests using Given-When-Then with network interception first then navigation; create data factories and fixture stubs for required entities; outline mocks/fixtures infrastructure the dev team must supply; generate component tests for critical UI logic; compile implementation checklist mapping each test to source work; share failing tests with dev agent and maintain red -> green -> refactor loop,Failing acceptance test files; component test stubs; fixture/mocks skeleton; implementation checklist with test-to-code mapping; documented data-testid requirements,"If criteria ambiguous or framework missing, halt for clarification",Start red; one assertion per test; use beforeEach for visible setup (no shared state); remind devs to run tests before writing production code; update checklist as each test goes green,philosophy/core|patterns/test-structure
-*test-design,Risk and test design planning,"After story approval, before development",story markdown present|acceptance criteria clear|architecture/PRD accessible,"Filter requirements so only genuine risks remain; review PRD/architecture/story for unresolved gaps; classify risks across TECH, SEC, PERF, DATA, BUS, OPS using category definitions; request clarification when evidence missing; score probability (1 unlikely, 2 possible, 3 likely) and impact (1 minor, 2 degraded, 3 critical) then compute totals; highlight risks >=6 and plan mitigations with owners and timelines; break acceptance criteria into atomic scenarios mapped to mitigations; reference test-levels-framework.md to pick unit/integration/E2E/component levels; avoid duplicate coverage, prefer lower levels when possible; assign priorities using test-priorities-matrix.md; outline data/tooling prerequisites and execution order",Risk assessment markdown in docs/qa/assessments; table of category/probability/impact/score; mitigation matrix with owners and due dates; coverage matrix with requirement/level/priority/mitigation; gate YAML snippet summarizing risk totals and scenario counts; recommended execution order,"If story missing or criteria unclear, halt for clarification","Category definitions: TECH=architecture flaws; SEC=missing controls/vulnerabilities; PERF=SLA risk; DATA=loss/corruption; BUS=user/business harm; OPS=deployment/run failures; rely on evidence, not speculation; tie scenarios back to risk mitigations; keep scenarios independent and maintainable",philosophy/core|risk-model|patterns/test-structure
-*trace,Requirements traceability,Mid-development checkpoint or before review,tests exist for story|access to source + specs,"Gather acceptance criteria and implemented tests; map each criterion to concrete tests (file + describe/it) using Given-When-Then narrative; classify coverage status as FULL, PARTIAL, NONE, UNIT-ONLY, INTEGRATION-ONLY; flag severity based on priority (P0 gaps critical); recommend additional tests or refactors; generate gate YAML coverage summary",Traceability report saved under docs/qa/assessments; coverage matrix with status per criterion; gate YAML snippet for coverage totals and gaps,"If story lacks implemented tests, pause and advise running *tdd or writing tests","Definitions: FULL=all scenarios validated, PARTIAL=some coverage exists, NONE=no validation, UNIT-ONLY=missing higher level, INTEGRATION-ONLY=lacks lower confidence; ensure assertions explicit and avoid duplicate coverage",philosophy/core|patterns/assertions
diff --git a/src/modules/bmm/testarch/tea-knowledge.md b/src/modules/bmm/testarch/tea-knowledge.md
index aeb6c900..21e7ae5c 100644
--- a/src/modules/bmm/testarch/tea-knowledge.md
+++ b/src/modules/bmm/testarch/tea-knowledge.md
@@ -2,7 +2,7 @@
# Murat Test Architecture Foundations (Slim Brief)
-This brief distills Murat Ozcan's testing philosophy used by the Test Architect agent. Use it as the north star after loading `tea-commands.csv`.
+This brief distills Murat Ozcan's testing philosophy used by the Test Architect agent. Use it as the north star while executing the TEA workflows.
## Core Principles
@@ -14,8 +14,10 @@ This brief distills Murat Ozcan's testing philosophy used by the Test Architect
- Composition over inheritance: prefer functional helpers and fixtures that compose behaviour; page objects and deep class trees hide duplication.
- Setup via API, assert via UI. Keep tests user-centric while priming state through fast interfaces.
- One test = one concern. Explicit assertions live in the test body, not buried in helpers.
+- Test at the lowest level possible first: favour component/unit coverage before integration/E2E (target ~1:3–1:5 ratio of high-level to low-level tests).
+- Zero tolerance for flakiness: if a test flakes, fix the cause immediately or delete the test—shipping with flakes is not acceptable evidence.
-## Patterns and Heuristics
+## Patterns & Heuristics
- Selector order: `data-cy` / `data-testid` -> ARIA -> text. Avoid brittle CSS, IDs, or index based locators.
- Network boundary is the mock boundary. Stub at the edge, never mid-service unless risk demands.
@@ -44,9 +46,37 @@ This brief distills Murat Ozcan's testing philosophy used by the Test Architect
...overrides,
});
```
+- Standard test skeleton keeps intent clear—`describe` the feature, `context` specific scenarios, make setup visible, and follow Arrange → Act → Assert explicitly:
+
+ ```javascript
+ describe('Checkout', () => {
+ context('when inventory is available', () => {
+ beforeEach(async () => {
+ await seedInventory();
+ await interceptOrders(); // intercept BEFORE navigation
+ await test.step('navigate', () => page.goto('/checkout'));
+ });
+
+ it('completes purchase', async () => {
+ await cart.fillDetails(validUser);
+ await expect(page.getByTestId('order-confirmed')).toBeVisible();
+ });
+ });
+ });
+ ```
+
+- Helper/fixture thresholds: 3+ call sites → promote to fixture with subpath export, 2-3 → shared utility module, 1-off → keep inline to avoid premature abstraction (see the promotion sketch at the end of this list).
+- Deterministic waits only: prefer `page.waitForResponse`, `cy.wait('@alias')`, or element disappearance (e.g., `cy.get('[data-cy="spinner"]').should('not.exist')`). Ban `waitForTimeout`/`cy.wait(ms)` unless quarantined in TODO and slated for removal.
+- Data is created via APIs or tasks, not UI flows:
+ ```javascript
+ beforeEach(() => {
+ cy.task('db:seed', { users: [createUser({ role: 'admin' })] });
+ });
+ ```
+- Assertions stay in tests; when shared state varies, assert on ranges (`expect(count).toBeGreaterThanOrEqual(3)`) rather than brittle exact values.
- Visual debugging: keep component/test runner UIs available (Playwright trace viewer, Cypress runner) to accelerate feedback.
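+- Promotion path sketch for the thresholds above (hypothetical `apiRequest` helper, Playwright assumed; names and subpaths are illustrative, not a prescribed API):
+
+  ```typescript
+  // support/api-request.ts
+  import { test as base } from '@playwright/test';
+  import type { APIRequestContext } from '@playwright/test';
+
+  // 1-off or 2-3 call sites: keep it a plain function (importable anywhere, testable on its own)
+  export async function apiRequest(
+    request: APIRequestContext,
+    method: string,
+    url: string,
+    data?: unknown,
+  ) {
+    const response = await request.fetch(url, { method, data });
+    return { status: response.status(), body: await response.json() };
+  }
+
+  // 3+ call sites: promote to a fixture and expose it via a package subpath (e.g. "./api-request/fixtures")
+  export const test = base.extend<{
+    apiRequest: (method: string, url: string, data?: unknown) => ReturnType<typeof apiRequest>;
+  }>({
+    apiRequest: async ({ request }, use) => {
+      await use((method, url, data) => apiRequest(request, method, url, data));
+    },
+  });
+
+  // downstream suites compose capabilities with mergeTests(apiRequestFixture, networkFixture, ...)
+  // rather than inheriting from a base class
+  ```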
-## Risk and Coverage
+## Risk & Coverage
- Risk score = probability (1-3) × impact (1-3). Score 9 => gate FAIL, ≥6 => CONCERNS. Most stories have 0-1 high risks.
- Test level ratio: heavy unit/component coverage, but always include E2E for critical journeys and integration seams.
@@ -60,7 +90,7 @@ This brief distills Murat Ozcan's testing philosophy used by the Test Architect
- **Media**: screenshot only-on-failure, video retain-on-failure
- **Language Matching**: Tests should match source code language (JS/TS frontend -> JS/TS tests)
-## Automation and CI
+## Automation & CI
- Prefer Playwright for multi-language teams, worker parallelism, rich debugging; Cypress suits smaller DX-first repos or component-heavy spikes.
- **Framework Selection**: Large repo + performance = Playwright, Small repo + DX = Cypress
@@ -71,7 +101,7 @@ This brief distills Murat Ozcan's testing philosophy used by the Test Architect
- Burn-in testing: run new or changed specs multiple times (e.g., 3-10x) to flush flakes before they land in main.
- Keep helper scripts handy (`scripts/test-changed.sh`, `scripts/burn-in-changed.sh`) so CI and local workflows stay in sync.
-## Project Structure and Config
+## Project Structure & Config
- **Directory structure**:
```
@@ -92,8 +122,10 @@ This brief distills Murat Ozcan's testing philosophy used by the Test Architect
};
export default configs[process.env.TEST_ENV || 'local'];
```
+- Validate environment input up-front (fail fast when `TEST_ENV` is missing) and keep Playwright/Cypress configs small by delegating per-env overrides to files under `config/`.
+- Keep `.env.example`, `.nvmrc`, and scripts (burn-in, test-changed) in source control so CI and local machines share tooling defaults.
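+- A minimal sketch of that fail-fast guard, assuming per-environment files under `config/` as shown above (file names are illustrative):
+
+  ```typescript
+  // playwright.config.ts
+  import localConfig from './playwright/config/local.config';
+  import stagingConfig from './playwright/config/staging.config';
+  import productionConfig from './playwright/config/production.config';
+
+  const configs = { local: localConfig, staging: stagingConfig, production: productionConfig };
+  const environment = process.env.TEST_ENV || 'local';
+
+  if (!(environment in configs)) {
+    // refuse to run against an unknown environment instead of silently defaulting
+    throw new Error(`Unknown TEST_ENV "${environment}"; expected one of: ${Object.keys(configs).join(', ')}`);
+  }
+
+  export default configs[environment as keyof typeof configs];
+  ```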
-## Test Hygiene and Independence
+## Test Hygiene & Independence
- Tests must be independent and stateless; never rely on execution order.
- Cleanup all data created during tests (afterEach or API cleanup).
@@ -101,7 +133,7 @@ This brief distills Murat Ozcan's testing philosophy used by the Test Architect
- No shared mutable state; prefer factory functions per test.
- Tests must run in parallel safely; never commit `.only`.
- Prefer co-location: component tests next to components, integration in `tests/integration`, etc.
-- Feature flags: centralise enum definitions (e.g., `export const FLAGS = Object.freeze({ NEW_FEATURE: 'new-feature' })`), provide helpers to set/clear targeting, and write dedicated flag tests that clean up targeting after each run.
+- Feature flags: centralise enum definitions (e.g., `export const FLAGS = Object.freeze({ NEW_FEATURE: 'new-feature' })`), provide helpers to set/clear targeting, write dedicated flag suites that clean up targeting after each run, and exercise both enabled/disabled paths in CI.
## CCTDD (Component Test-Driven Development)
@@ -117,6 +149,8 @@ This brief distills Murat Ozcan's testing philosophy used by the Test Architect
- **HAR recording**: Record network traffic for offline playback in CI.
- **Selective reruns**: Only rerun failed specs, not entire suite.
- **Network recording**: capture HAR files during stable runs so CI can replay network traffic when external systems are flaky.
+- Stage jobs: cache dependencies once, run `test-changed` before full suite, then execute sharded E2E jobs with `fail-fast: false` so one failure doesn’t cancel other evidence.
+- Ship burn-in scripts (e.g., `scripts/burn-in-changed.sh`) that loop 5–10x over changed specs and stop on first failure; wire them into CI for flaky detection before merge.
## Package Scripts
@@ -127,25 +161,20 @@ This brief distills Murat Ozcan's testing philosophy used by the Test Architect
"test:component": "cypress run --component",
"test:contract": "jest --testMatch='**/pact/*.spec.ts'",
"test:debug": "playwright test --headed",
- "test:ci": "npm run test:unit andand npm run test:e2e",
+ "test:ci": "npm run test:unit && npm run test:e2e",
"contract:publish": "pact-broker publish"
```
-## Contract Testing (Pact)
+## Online Resources & Examples
-- Use for microservices with integration points.
-- Consumer generates contracts, provider verifies.
-- Structure: `pact/` directory at root, `pact/config.ts` for broker settings.
-- Reference repos: pact-js-example-consumer, pact-js-example-provider, pact-js-example-react-consumer.
+- Full-text mirrors of Murat's public repos live in the `test-resources-for-ai/sample-repos` knowledge pack so TEA can stay offline. Key origins include Playwright patterns (`pw-book`), Cypress vs Playwright comparisons, Tour of Heroes, and Pact consumer/provider examples.
-## Online Resources and Examples
-
-- Fixture architecture: https://github.com/muratkeremozcan/cy-vs-pw-murats-version
+- Fixture architecture: https://github.com/muratkeremozcan/cy-vs-pw-murats-version
- Playwright patterns: https://github.com/muratkeremozcan/pw-book
- Component testing (CCTDD): https://github.com/muratkeremozcan/cctdd
- Contract testing: https://github.com/muratkeremozcan/pact-js-example-consumer
- Full app example: https://github.com/muratkeremozcan/tour-of-heroes-react-vite-cypress-ts
-- Blog posts: https://dev.to/muratkeremozcan
+- Blog essays at https://dev.to/muratkeremozcan provide narrative rationale—distil any new actionable guidance back into this brief when processes evolve.
## Risk Model Details
@@ -156,7 +185,7 @@ This brief distills Murat Ozcan's testing philosophy used by the Test Architect
- BUS: Business or user harm, revenue-impacting failures, compliance gaps.
- OPS: Deployment, infrastructure, or observability gaps that block releases.
-## Probability and Impact Scale
+## Probability & Impact Scale
- Probability 1 = Unlikely (standard implementation, low risk).
- Probability 2 = Possible (edge cases, needs attention).
@@ -168,8 +197,8 @@ This brief distills Murat Ozcan's testing philosophy used by the Test Architect
## Test Design Frameworks
-- Use `docs/docs-v6/v6-bmm/test-levels-framework.md` for level selection and anti-patterns.
-- Use `docs/docs-v6/v6-bmm/test-priorities-matrix.md` for P0-P3 priority criteria.
+- Use [`test-levels-framework.md`](./test-levels-framework.md) for level selection and anti-patterns.
+- Use [`test-priorities-matrix.md`](./test-priorities-matrix.md) for P0–P3 priority criteria.
- Naming convention: `{epic}.{story}-{LEVEL}-{sequence}` (e.g., `2.4-E2E-01`).
- Tie each scenario to risk mitigations or acceptance criteria.
@@ -270,6 +299,65 @@ history:
- Describe blocks: `describe('Feature/Component Name', () => { context('when condition', ...) })`.
- Data attributes: always kebab-case (`data-cy="submit-button"`, `data-testid="user-email"`).
-## Reference Materials
+## Contract Testing Rules (Pact)
-If deeper context is needed, consult Murat's testing philosophy notes, blog posts, and sample repositories in https://github.com/muratkeremozcan/test-resources-for-ai/blob/main/gitingest-full-repo-text-version.txt.
+- Use Pact for microservice integrations; keep a `pact/` directory with broker config and share contracts as first-class artifacts in the repo.
+- Keep consumer contracts beside the integration specs that exercise them; version with semantic tags so downstream teams understand breaking changes.
+- Publish contracts on every CI run and enforce provider verification before merge—failing verification blocks release and acts as a quality gate (see the verification sketch below).
+- Capture fallback behaviour (timeouts, retries, circuit breakers) inside interactions so resilience expectations stay explicit.
+- Sample interaction scaffold:
+ ```javascript
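+  // assumption: `like`, `string`, and `email` are Pact matcher helpers (e.g. `Matchers` from '@pact-foundation/pact')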
+ const interaction = {
+ state: 'user with id 1 exists',
+ uponReceiving: 'a request for user 1',
+ withRequest: {
+ method: 'GET',
+ path: '/users/1',
+ headers: { Accept: 'application/json' },
+ },
+ willRespondWith: {
+ status: 200,
+ headers: { 'Content-Type': 'application/json' },
+ body: like({ id: 1, name: string('Jane Doe'), email: email('jane@example.com') }),
+ },
+ };
+ ```
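+- Provider verification sketch (provider name, broker variables, and file path are placeholders, not a prescribed setup):
+
+  ```typescript
+  // pact/verify-provider.ts - run in CI against a locally started provider
+  import { Verifier } from '@pact-foundation/pact';
+
+  new Verifier({
+    provider: 'user-service', // placeholder provider name
+    providerBaseUrl: 'http://localhost:3001', // provider must be running before verification
+    pactBrokerUrl: process.env.PACT_BROKER_BASE_URL,
+    pactBrokerToken: process.env.PACT_BROKER_TOKEN,
+    publishVerificationResult: process.env.CI === 'true',
+    providerVersion: process.env.GITHUB_SHA, // tie results to the commit under test
+  })
+    .verifyProvider()
+    .then(() => console.log('Provider verification passed'))
+    .catch((error) => {
+      console.error('Provider verification failed', error);
+      process.exit(1); // a failed verification blocks the merge
+    });
+  ```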
+
+## Reference Capsules (Summaries Bundled In)
+
+- **Fixture Architecture Quick Wins**
+ - Compose Playwright or Cypress suites with additive fixtures; use `mergeTests`/`extend` to layer auth, network, and telemetry helpers without inheritance.
+ - Keep HTTP helpers framework-agnostic so the same function fuels unit tests, API smoke checks, and runtime fixtures.
+ - Normalize selectors (`data-testid`/`data-cy`) and lint new UI code for missing attributes to prevent brittle locators.
+
+- **Playwright Patterns Digest**
+ - Register network interceptions before navigation, assert on typed responses, and capture HAR files for regression.
+ - Treat timeouts and retries as configuration, not inline magic numbers; expose overrides via fixtures.
+ - Name specs and test IDs with intent (`checkout.complete-happy-path`) so CI shards and triage stay meaningful.
+
+- **Component TDD Highlights**
+ - Begin UI work with failing component specs; rebuild providers/stores per spec to avoid state bleed.
+ - Use factories to exercise prop variations and edge cases; assert through accessible queries (`getByRole`, `getByLabelText`).
+ - Document mount helpers and cleanup expectations so component tests stay deterministic.
+
+- **Contract Testing Cliff Notes**
+ - Store consumer contracts alongside integration specs; version with semantic tags and publish on every CI run.
+ - Enforce provider verification prior to merge to act as a release gate for service integrations.
+ - Capture fallback behaviour (timeouts, retries, circuit breakers) inside contracts to keep resilience expectations explicit.
+
+- **End-to-End Reference Flow**
+ - Prime end-to-end journeys through API fixtures, then assert through UI steps mirroring real user narratives.
+ - Pair burn-in scripts (`npm run test:e2e -- --repeat-each=3`) with selective retries to flush flakes before promotion.
+
+- **Philosophy & Heuristics Articles**
+ - Use long-form articles for rationale; extract checklists, scripts, and thresholds back into this brief whenever teams adopt new practices.
+
+These capsules distil Murat's sample repositories (Playwright patterns, Cypress vs Playwright comparisons, CCTDD, Pact examples, Tour of Heroes walkthrough) captured in the `test-resources-for-ai` knowledge pack so the TEA agent can operate offline while reflecting those techniques.
+
+## Reference Assets
+
+- [Test Architect README](./README.md) — high-level usage guidance and phase checklists.
+- [Test Levels Framework](./test-levels-framework.md) — choose the right level for each scenario.
+- [Test Priorities Matrix](./test-priorities-matrix.md) — assign P0–P3 priorities consistently.
+- [TEA Workflows](../workflows/testarch/README.md) — per-command instructions executed by the agent.
+- [Murat Knowledge Bundle](./test-resources-for-ai-flat.txt) — 347 KB flattened snapshot of Murat’s blogs, philosophy notes, and course material. Sections are delimited with `FILE:` headers; load relevant portions when deeper examples or rationales are required.
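+- Section-loading sketch (helper name and paths are illustrative; assumes the `====` ruler lines only wrap `FILE:` headers):
+
+  ```typescript
+  // scripts/load-knowledge-section.ts
+  import { readFileSync } from 'fs';
+
+  export function loadSection(bundlePath: string, fileName: string): string | undefined {
+    const parts = readFileSync(bundlePath, 'utf8')
+      .split(/^={10,}\s*$/m) // the bundle separates sections with ruler lines around each FILE: header
+      .map((part) => part.trim());
+    const headerIndex = parts.findIndex((part) => part === `FILE: ${fileName}`);
+    return headerIndex >= 0 ? parts[headerIndex + 1] : undefined; // the section body follows its header
+  }
+
+  // pull only the philosophy notes instead of loading the whole 347 KB bundle
+  console.log(loadSection('bmad/bmm/testarch/test-resources-for-ai-flat.txt', 'murat-testing-philosophy-for-tea-agent.md'));
+  ```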
diff --git a/src/modules/bmm/testarch/test-design.md b/src/modules/bmm/testarch/test-design.md
deleted file mode 100644
index d86f9ab8..00000000
--- a/src/modules/bmm/testarch/test-design.md
+++ /dev/null
@@ -1,43 +0,0 @@
-
-
-# Risk and Test Design v3.0 (Slim)
-
-```xml
-
-
- Set command_key="*test-design"
- Load {project-root}/bmad/bmm/testarch/tea-commands.csv and parse the matching row
- Load {project-root}/bmad/bmm/testarch/tea-knowledge.md for risk-model and coverage heuristics
- Use CSV columns preflight, flow_cues, deliverables, halt_rules, notes, knowledge_tags as the execution blueprint
- Split pipe-delimited values into actionable checklists
- Stay evidence-based—link risks and scenarios directly to PRD/architecture/story artifacts
-
-
-
- Confirm story markdown, acceptance criteria, and architecture/PRD access.
- Stop immediately if halt_rules trigger (missing inputs or unclear requirements).
-
-
- Follow flow_cues to filter genuine risks, classify them (TECH/SEC/PERF/DATA/BUS/OPS), and score probability × impact.
- Document mitigations with owners, timelines, and residual risk expectations.
-
-
- Break acceptance criteria into atomic scenarios mapped to mitigations.
- Choose test levels using test-levels-framework.md, assign priorities via test-priorities-matrix.md, and note tooling/data prerequisites.
-
-
- Generate the combined risk report and test design artifacts described in deliverables.
- Summarize key risks, mitigations, coverage plan, and recommended execution order.
-
-
-
- Apply halt_rules from the CSV row verbatim.
-
-
- Use notes column for calibration reminders and coverage heuristics.
-
-
-
-```
diff --git a/src/modules/bmm/testarch/test-resources-for-ai-flat.txt b/src/modules/bmm/testarch/test-resources-for-ai-flat.txt
new file mode 100644
index 00000000..5218f822
--- /dev/null
+++ b/src/modules/bmm/testarch/test-resources-for-ai-flat.txt
@@ -0,0 +1,7607 @@
+Directory structure:
+└── muratkeremozcan-test-resources-for-ai/
+ ├── README.md
+ ├── murat-testing-philosophy-for-tea-agent.md
+ ├── Quotes about testing.md
+ ├── blog/
+ │ ├── (mostly incomplete) List of Test Methodologies.mkd
+ │ ├── API e2e testing event driven systems.mkd
+ │ ├── Automating API Documentation A Journey from TypeScript to OpenAPI and Schema Governence with Optic.mkd
+ │ ├── Building Custom Request Filters for PactJs Verifications in Express and Non-Express Environments.mkd
+ │ ├── Building the test architecture, increasing adoption, improving the developer experience.mkd
+ │ ├── CI CD strategies for UI apps and deployed services.mkd
+ │ ├── Cypress Component Testing vs React Testing Library - the complete comparison.mkd
+ │ ├── Documenting and Testing Schemas of Serverless Stacks with Optic & Cypress.mkd
+ │ ├── Functional Programming Test Patterns with Cypress.mkd
+ │ ├── Handling Pact Breaking Changes Dynamically in CICD.mkd
+ │ ├── Improve Cypress e2e test latency by a factor of 20!!.mkd
+ │ ├── Page Objects vs. Functional Helpers.mkd
+ │ ├── Solving Cross-Execution Issues in Pact Testing with Kafka and Message Queues.mkd
+ │ └── Testing Email-Based Authentication Systems with Cypress, Mailosaur and cypress-data-session.mkd
+ └── Test_architect_content/
+ └── Test_architect_course/
+ ├── test design techniques.docx
+ └── Training itself/
+ ├── Homework 1/
+ │ ├── Horizon Cloud presentation - Murat .pptx
+ │ ├── Horizon Cloud presentation - Murat .url
+ │ ├── Marketplace Input.docx
+ │ └── Role - 5 Differences.docx
+ ├── Homework 3/
+ │ ├── Horizon Cloud auto pen testing.pptx
+ │ └── Horizon Cloud BOIC Test Automation.pptx
+ ├── phase 1 slides/
+ │ └── TeA_WS1-3-1a_RBT-Worksheet_A1.xls
+ └── testarchitectnotes/
+ ├── README.md
+ ├── TeA_notes.docx
+ ├── Business_Understanding/
+ │ └── README.md
+ ├── Requirements_Engineering/
+ │ └── README.md
+ ├── slides/
+ │ └── RBT-Worksheet.xls
+ ├── Social_Capability/
+ │ └── README.md
+ ├── Test_Architecture/
+ │ └── README.md
+ └── Testing_&_Quality/
+ ├── README.md
+ └── RBT-Worksheet.xls
+
+================================================
+FILE: README.md
+================================================
+Resources to help AI reference.
+
+
+
+================================================
+FILE: murat-testing-philosophy-for-tea-agent.md
+================================================
+# Murat Ozcan Testing Philosophy & Patterns for TEA Agent Enhancement
+
+## Purpose
+
+This document captures the comprehensive testing philosophy, patterns, and implementation details extracted from Murat Ozcan's books, blog posts, and sample repositories. It serves as the knowledge base for enhancing the Test Architect (TEA) agent to generate tests, configurations, and architectures in Murat's distinctive style.
+
+## Reference Resources
+
+### Books
+
+- **CCTDD: Cypress Component Test Driven Design** - [https://github.com/muratkeremozcan/cctdd](https://github.com/muratkeremozcan/cctdd)
+- **UI Testing Best Practices** - [https://github.com/NoriSte/ui-testing-best-practices](https://github.com/NoriSte/ui-testing-best-practices)
+
+### Blog Posts
+
+All blog posts available at: [https://dev.to/muratkeremozcan](https://dev.to/muratkeremozcan)
+
+Key posts include:
+- Functional Programming Test Patterns with Cypress
+- Page Objects vs. Functional Helpers
+- Building the test architecture, increasing adoption, improving the developer experience
+- Effective Test Strategies for Front-end Applications using LaunchDarkly Feature Flags
+- CI CD strategies for UI apps and deployed services
+- Cypress Component Testing vs React Testing Library - the complete comparison
+- API e2e testing event driven systems
+- Cypress and Pact contract testing patterns
+- Testing Email-Based Authentication Systems
+- The 32+ ways of selective testing with Cypress
+
+### Sample Repositories
+
+- **Cypress vs Playwright Examples** - [https://github.com/muratkeremozcan/cy-vs-pw-murats-version](https://github.com/muratkeremozcan/cy-vs-pw-murats-version)
+- **Playwright Book Examples** - [https://github.com/muratkeremozcan/pw-book](https://github.com/muratkeremozcan/pw-book)
+- **Tour of Heroes (React/Vite/Cypress/TS)** - [https://github.com/muratkeremozcan/tour-of-heroes-react-vite-cypress-ts](https://github.com/muratkeremozcan/tour-of-heroes-react-vite-cypress-ts)
+- **Pact.js Consumer Example** - [https://github.com/muratkeremozcan/pact-js-example-consumer](https://github.com/muratkeremozcan/pact-js-example-consumer)
+- **Pact.js React Consumer** - [https://github.com/muratkeremozcan/pact-js-example-react-consumer](https://github.com/muratkeremozcan/pact-js-example-react-consumer)
+- **Pact.js Provider Example** - [https://github.com/muratkeremozcan/pact-js-example-provider](https://github.com/muratkeremozcan/pact-js-example-provider)
+
+## Core Philosophy
+
+### The Murat Testing Manifesto
+1. **"Functional helpers over Page Objects"** - Composition wins over inheritance
+2. **"Test at the lowest level possible"** - Component > Integration > E2E (1:3 to 1:5 ratio)
+3. **"Network boundary is the test boundary"** - Mock at the edge, not service level
+4. **"Visual debugging changes everything"** - See the component while testing
+5. **"No flaky tests, ever"** - Deterministic or delete (0 tolerance)
+6. **"Setup via API, assert via UI"** - Fast setup, user-centric assertions
+7. **"One test = one concern"** - Focused and clear
+8. **"Explicit over implicit"** - Assertions in tests, not hidden in helpers
+9. **"Data factories over fixtures"** - Dynamic > Static
+10. **"Shift left, test often"** - Early and continuous
+
+## Testing Patterns
+
+### 1. Test Structure Pattern
+```javascript
+// ALWAYS this structure:
+describe('Feature/Component', () => {
+ context('specific scenario', () => { // Group related tests
+ beforeEach(() => {
+ // Setup that's VISIBLE in the test
+ // Network mocks BEFORE navigation
+ // Data setup via API, not UI
+ })
+
+ it('should when ', () => {
+ // Arrange - Act - Assert clearly separated
+ // Assertions explicit in test, not helpers
+ })
+ })
+})
+```
+
+### 2. Fixture & Helper Architecture
+
+#### Composable Fixture System (SEON Production Pattern)
+
+```typescript
+// playwright/support/merged-fixtures.ts - The Murat Way
+import { test as base, mergeTests } from '@playwright/test'
+import { test as apiRequestFixture } from './fixtures/api-request-fixture'
+import { test as networkFixture } from './fixtures/network-fixture'
+import { test as authFixture } from './fixtures/auth-fixture'
+import { test as logFixture } from './fixtures/log-fixture'
+
+// Merge all fixtures for comprehensive capabilities
+export const test = mergeTests(
+ base,
+ apiRequestFixture,
+ networkFixture,
+ authFixture,
+ logFixture
+)
+```
+
+#### Pure Function → Fixture Pattern
+
+```typescript
+// Step 1: Pure function (always first!)
+export async function apiRequest({ request, method, url }) {
+ // Core implementation - testable independently
+}
+
+// Step 2: Fixture wrapper
+export const apiRequestFixture = base.extend({
+ apiRequest: async ({ request }, use) => {
+ await use((params) => apiRequest({ request, ...params }))
+ }
+})
+
+// Step 3: Export for subpath imports
+// package.json exports: "./api-request", "./api-request/fixtures"
+```
+
+#### Helper Function Rules
+
+- **3+ uses** → Create fixture with subpath export
+- **2-3 uses** → Create utility module
+- **1 use** → Keep inline
+- **Complex logic** → Factory function pattern
+
+### 3. Network Interception Strategy
+
+#### The Network-First Pattern
+```javascript
+// ALWAYS intercept before action
+const networkCall = interceptNetworkCall({ url: '**/api/data' })
+await page.goto('/page') // THEN navigate
+const response = await networkCall // THEN await
+
+// Cypress equivalent
+cy.intercept('GET', '**/api/data').as('getData')
+cy.visit('/page')
+cy.wait('@getData')
+```
+
+### 4. Selector Strategy (Non-Negotiable)
+```javascript
+// Priority order - ALWAYS
+1. data-cy="element" // Cypress
+2. data-testid="element" // Playwright/RTL
+3. ARIA attributes // Future-proof
+4. Text content // User-centric
+
+// Never use:
+- CSS classes (unless no other option)
+- IDs (unless absolutely necessary)
+- Complex XPath
+- Index-based selectors
+```
+
+### 5. Waiting Strategies
+```javascript
+// ✅ Deterministic waiting
+await page.waitForResponse('**/api/data')
+cy.wait('@getUsers')
+
+// ✅ Event-based waiting
+await page.waitForLoadState('networkidle')
+cy.get('[data-cy="spinner"]').should('not.exist')
+
+// ❌ NEVER use hard waits
+await page.waitForTimeout(3000) // NEVER
+cy.wait(3000) // NEVER
+```
+
+### 6. Test Data Management
+
+#### Factory Pattern (Always)
+```typescript
+export const createUser = (overrides: Partial<User> = {}): User => ({
+ id: faker.string.uuid(),
+ email: faker.internet.email(),
+ name: faker.person.fullName(),
+ role: 'user',
+ ...overrides
+})
+
+// Usage in tests:
+const adminUser = createUser({ role: 'admin' })
+```
+
+#### API-First Setup
+```javascript
+beforeEach(() => {
+ // ✅ Setup via API
+ cy.task('db:seed', { users: [createUser()] })
+
+ // ❌ NOT via UI
+ // cy.visit('/signup')
+ // cy.fill('#email', 'test@example.com')
+})
+```
+
+### 7. Assertion Patterns
+
+#### Flexible for Shared State
+```typescript
+// When state might be shared:
+expect(await items.count()).toBeGreaterThanOrEqual(3)
+// NOT: expect(await items.count()).toBe(3)
+```
+
+#### Explicit in Tests
+```javascript
+// ✅ Assertions in test
+cy.get('@apiCall').should((xhr) => {
+ expect(xhr.request.body).to.deep.equal(expectedPayload)
+ expect(xhr.response.statusCode).to.equal(200)
+})
+
+// ❌ NOT hidden in helpers
+// validateApiCall('@apiCall', expectedPayload, 200)
+```
+
+## Configuration Templates
+
+### Playwright Configuration (The Murat Way - SEON Production Pattern)
+
+```javascript
+// playwright.config.ts - Environment-based config loading
+import { config as dotenvConfig } from 'dotenv'
+import path from 'path'
+
+dotenvConfig({
+ path: path.resolve(__dirname, '../../.env')
+})
+
+const envConfigMap = {
+ local: require('./playwright/config/local.config').default,
+ staging: require('./playwright/config/staging.config').default,
+ production: require('./playwright/config/production.config').default
+}
+
+const environment = process.env.TEST_ENV || 'local'
+
+if (!Object.keys(envConfigMap).includes(environment)) {
+ console.error(`No configuration found for environment: ${environment}`)
+ process.exit(1)
+}
+
+export default envConfigMap[environment as keyof typeof envConfigMap]
+```
+
+#### Base Configuration Pattern
+```javascript
+// playwright/config/base.config.ts
+export default defineConfig({
+ testDir: path.resolve(__dirname, '../tests'),
+ outputDir: path.resolve(__dirname, '../../test-results'),
+ fullyParallel: true,
+ forbidOnly: !!process.env.CI,
+ retries: process.env.CI ? 2 : 0,
+ workers: process.env.CI ? 1 : undefined,
+ reporter: [
+ ['html', { outputFolder: 'playwright-report', open: 'never' }],
+ ['junit', { outputFile: 'results.xml' }],
+ ['list']
+ ],
+ use: {
+ actionTimeout: 15000,
+ navigationTimeout: 30000,
+ trace: 'on-first-retry',
+ screenshot: 'only-on-failure',
+ video: 'retain-on-failure'
+ },
+ globalSetup: path.resolve(__dirname, '../support/global-setup.ts'),
+ timeout: 60000,
+ expect: { timeout: 10000 }
+})
+```
+
+### Cypress Configuration (The Murat Way)
+```javascript
+import { defineConfig } from 'cypress'
+
+export default defineConfig({
+ e2e: {
+ baseUrl: 'http://localhost:3000',
+ viewportWidth: 1920,
+ viewportHeight: 1080,
+ video: false,
+ screenshotOnRunFailure: true,
+ defaultCommandTimeout: 10000,
+ requestTimeout: 10000,
+ responseTimeout: 10000,
+ retries: {
+ runMode: 2,
+ openMode: 0
+ },
+ env: {
+ API_URL: 'http://localhost:3001/api',
+ coverage: false
+ },
+ setupNodeEvents(on, config) {
+ on('task', {
+ 'db:seed': seedDatabase,
+ 'db:reset': resetDatabase,
+ log(message) {
+ console.log(message)
+ return null
+ }
+ })
+ return config
+ }
+ },
+ component: {
+ devServer: {
+ framework: 'react',
+ bundler: 'vite'
+ },
+ specPattern: 'src/**/*.cy.{js,jsx,ts,tsx}',
+ supportFile: 'cypress/support/component.tsx'
+ }
+})
+```
+
+## CI/CD Patterns
+
+### GitHub Actions Workflow Template
+```yaml
+name: E2E Tests
+on:
+ pull_request:
+ push:
+ branches: [main]
+
+jobs:
+ install:
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v4
+ - uses: actions/setup-node@v4
+ with:
+ node-version-file: '.nvmrc'
+ cache: 'npm'
+ - run: npm ci --prefer-offline
+ - uses: actions/cache@v4
+ with:
+ path: ~/.npm
+ key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
+
+ test-changed:
+ needs: install
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v4
+ with:
+ fetch-depth: 0
+ - name: Run changed tests first
+ run: |
+ CHANGED_SPECS=$(git diff --name-only HEAD~1 | grep -E '\.(cy|spec)\.(ts|js)x?$' || true)
+ if [ ! -z "$CHANGED_SPECS" ]; then
+ npm run test -- --spec "$CHANGED_SPECS"
+ fi
+
+ test-e2e:
+ needs: install
+ runs-on: ubuntu-latest
+ strategy:
+ fail-fast: false
+ matrix:
+ containers: [1, 2, 3, 4]
+ steps:
+ - uses: actions/checkout@v4
+ - uses: cypress-io/github-action@v6
+ with:
+ start: npm run dev
+ wait-on: 'http://localhost:3000'
+ wait-on-timeout: 120
+ browser: chrome
+ record: true
+ parallel: true
+ group: 'E2E'
+ env:
+ CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}
+```
+
+### Burn-in Testing Script
+```bash
+#!/bin/bash
+# scripts/burn-in-changed.sh
+CHANGED_SPECS=$(git diff --name-only HEAD~1 | grep -E '\.(cy|spec)\.(ts|js)x?$')
+if [ ! -z "$CHANGED_SPECS" ]; then
+ for i in {1..10}; do
+ echo "Burn-in run $i of 10"
+ npm run test -- --spec "$CHANGED_SPECS"
+ if [ $? -ne 0 ]; then
+ echo "Burn-in failed on run $i"
+ exit 1
+ fi
+ done
+ echo "Burn-in passed - 10 successful runs"
+fi
+```
+
+## Feature Flag Testing
+
+### Feature Flag Enum Pattern
+```javascript
+// src/utils/flags.js
+export const FLAGS = Object.freeze({
+ NEW_FEATURE: 'new-feature',
+ EXPERIMENT: 'experiment-flag',
+ DARK_MODE: 'dark-mode'
+})
+```
+
+### Stubbing Feature Flags
+```javascript
+// Stub in most tests
+cy.stubFeatureFlags({
+ [FLAGS.NEW_FEATURE]: true,
+ [FLAGS.DARK_MODE]: false
+})
+
+// Dedicated FF tests with cleanup
+describe('Feature Flag Tests', () => {
+ const userId = generateUserId()
+
+ afterEach(() => {
+ removeUserTarget(FLAGS.NEW_FEATURE, userId)
+ })
+
+ it('should test flag variation', () => {
+ setFlagVariation(FLAGS.NEW_FEATURE, userId, 1)
+ // test the feature
+ })
+})
+```
+
+## Component Testing (CCTDD)
+
+### Component Test Driven Development Flow
+```typescript
+// 1. Start with failing test
+it('should render button', () => {
+  cy.mount(<Button />)
+ cy.get('button').should('exist')
+})
+
+// 2. Make it pass with minimal code
+const Button = () => <button>Button</button>
+
+// 3. Refactor and add behavior
+it('should handle click', () => {
+ const onClick = cy.stub().as('click')
+  cy.mount(<Button onClick={onClick} />)
+ cy.get('button').click()
+ cy.get('@click').should('have.been.called')
+})
+```
+
+### Component Test Utilities
+```typescript
+// test-utils.tsx
+const AllTheProviders: FC<{children: React.ReactNode}> = ({children}) => {
+  return (
+    // provider names below are representative; wrap children with whatever app-level providers the components need
+    <QueryClientProvider client={queryClient}>
+      <ErrorBoundary fallback={<ErrorFallback />}>
+        <Suspense fallback={<Spinner />}>
+          {children}
+        </Suspense>
+      </ErrorBoundary>
+    </QueryClientProvider>
+  )
+}
+
+// Custom mount command
+Cypress.Commands.add('wrappedMount', (component) => {
+  return cy.mount(<AllTheProviders>{component}</AllTheProviders>)
+})
+```
+
+## Directory Structure
+
+```
+project/
+├── .nvmrc # ALWAYS specify Node version
+├── .npmrc # Registry configuration
+├── playwright.config.ts # OR cypress.config.ts
+├── tests/ (or cypress/)
+│ ├── fixtures/ # Test data
+│ ├── support/
+│ │ ├── fixtures.ts # Fixture composition
+│ │ ├── fixture-helpers/ # Pure functions
+│ │ ├── commands.ts # Custom commands
+│ │ └── utils/ # Shared utilities
+│ ├── unit/ # Unit tests
+│ ├── integration/ # API/Integration
+│ ├── e2e/ # User journeys
+│ └── feature-flags/ # FF-specific tests
+├── src/
+│ ├── components/
+│ │ └── **/*.cy.tsx # Component tests near source
+│ └── utils/
+│ └── flags.js # Feature flag enums
+└── .github/
+ └── workflows/ # CI/CD pipelines
+```
+
+## Performance Optimization Strategies
+
+### The 20x Speed Improvement Pattern
+1. **Parallel Execution** - Run tests in parallel with proper isolation
+2. **Smart Test Selection** - 32+ ways to select relevant tests
+3. **API Setup** - Use API for data setup instead of UI navigation (see the sketch after this list)
+4. **Network Recording** - HAR file recording and playback
+5. **Changed File Detection** - Run only affected tests first
+6. **Burn-in Testing** - Stress test changed components
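+
+To illustrate the API-setup item above: create the data over the API and use the UI only for the final assertion. The endpoint, payload, and selectors below are placeholders, not prescribed names:
+
+```typescript
+// Hedged sketch: API for setup, UI for assertion
+it('shows a newly created order', () => {
+  cy.request('POST', '/api/orders', { sku: 'ABC-123', qty: 1 })
+    .its('body.id')
+    .then((orderId) => {
+      cy.visit(`/orders/${orderId}`)
+      cy.get('[data-cy="order-status"]').should('contain', 'Created')
+    })
+})
+```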
+
+### Test Selection Strategies
+```bash
+# By tag/grep
+npm run test -- --grep "@smoke"
+npm run test -- --grep "@critical"
+
+# By file pattern
+npm run test -- --spec "**/*checkout*"
+npm run test -- --spec "cypress/e2e/user/*"
+
+# Changed files only
+npm run test:changed
+
+# By test level
+npm run test:unit
+npm run test:integration
+npm run test:e2e
+```
+
+## Contract Testing (Pact)
+
+### Consumer-Driven Contract Pattern
+```javascript
+const interaction = {
+ state: 'user with id 1 exists',
+ uponReceiving: 'a request for user 1',
+ withRequest: {
+ method: 'GET',
+ path: '/users/1',
+ headers: {
+ Accept: 'application/json'
+ }
+ },
+ willRespondWith: {
+ status: 200,
+ headers: {
+ 'Content-Type': 'application/json'
+ },
+ body: like({
+ id: 1,
+ name: string('John Doe'),
+ email: email('john@example.com')
+ })
+ }
+}
+```
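+
+For context, a hedged sketch of how an interaction like this is registered in a consumer test with `@pact-foundation/pact`; consumer/provider names, port, and paths are placeholders, and `interaction` refers to the object defined above:
+
+```typescript
+import path from 'path'
+import { expect } from 'chai'
+import { Pact } from '@pact-foundation/pact'
+
+const provider = new Pact({
+  consumer: 'WebApp',      // placeholder consumer name
+  provider: 'UserService', // placeholder provider name
+  port: 1234,
+  dir: path.resolve(process.cwd(), 'pacts'),
+})
+
+describe('User API contract', () => {
+  before(() => provider.setup())
+  afterEach(() => provider.verify())
+  after(() => provider.finalize())
+
+  it('fetches user 1 per the contract', async () => {
+    await provider.addInteraction(interaction) // interaction defined above
+    // Node 18+ global fetch assumed; the client hits the Pact mock server
+    const res = await fetch('http://localhost:1234/users/1', {
+      headers: { Accept: 'application/json' },
+    })
+    expect(res.status).to.equal(200)
+  })
+})
+```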
+
+## Definition of Done & Quality Checklist
+
+### Core Requirements
+Every test must pass these criteria:
+
+- [ ] **No Flaky Tests** - 0 tolerance. Ensure reliability through proper async handling, explicit waits, and atomic test design
+- [ ] **No Hard Waits/Sleeps** - Use dynamic waiting strategies (polling, event-based triggers); see the sketch after this checklist
+- [ ] **Stateless & Parallelizable** - Tests run independently; use cron jobs or semaphores only if unavoidable
+- [ ] **No Order Dependency** - Every it/describe/context block works in isolation (supports .only execution)
+- [ ] **Self-Cleaning Tests** - Test sets up its own data and automatically deletes/deactivates entities created
+- [ ] **Tests Live Near Source Code** - Co-locate test files with code they validate (e.g., *.spec.js alongside components)
+- [ ] **< 200 lines** - Keep test files focused; split/chunk large tests logically
+- [ ] **< 1.5 minutes** - Individual test execution time; optimize slow setups with shared fixtures
+- [ ] **Explicit Assertions** - Keep them in tests; avoid abstraction into helpers
+- [ ] **Proper Selectors** - data-cy/data-testid priority
+- [ ] **No Conditionals** - Tests must not use if/else or try/catch to control flow
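+
+For the "No Hard Waits" item, a minimal Cypress sketch of deterministic waiting: wait on an aliased network call and on retrying assertions instead of `cy.wait(<milliseconds>)`. Endpoint and selector are illustrative:
+
+```typescript
+it('lists orders without hard waits', () => {
+  cy.intercept('GET', '/api/orders').as('getOrders') // alias the call the page makes
+  cy.visit('/orders')
+  cy.wait('@getOrders')                              // event-based wait, not a sleep
+  cy.get('[data-cy="order-row"]').should('have.length.greaterThan', 0)
+})
+```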
+
+### Test Coverage Strategy
+
+- **Shifted Left**: Start with local environments, validate across all stages (local → dev → stage → prod)
+- **Happy Path**: Core user journeys are prioritized
+- **Edge Cases**: Critical error/validation scenarios covered
+- **Feature Flags**: Test both enabled and disabled states where applicable
+- **API Coverage**: Both happy path and negative/error cases
+- **Idempotency**: Test duplicate requests where applicable
+
+### CI/CD Integration
+
+- [ ] **CI Execution Evidence** - Integrate into pipelines with clear logs/artifacts
+- [ ] **Visibility** - Generate test reports (JUnit XML, HTML) for failures and trends
+- [ ] **Response Logs** - Print response bodies only on failure
+- [ ] **Auth Tests** - Validate token expiration and renewal
+
+### API Testing Specific
+
+- [ ] **Factory Data** - No hardcoded data; use factories and per-test setup (see the sketch after this checklist)
+- [ ] **Parallel Safe** - No global state shared
+- [ ] **Data Cleanup** - Tests clean up their data
+- [ ] **Error Testing** - Always test error scenarios
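+
+A hedged sketch of the factory approach (Faker v8+ API assumed; entity shape and field names are illustrative):
+
+```typescript
+import { faker } from '@faker-js/faker'
+
+type UserOverrides = Partial<{ name: string; email: string; role: string }>
+
+// Each test builds unique data instead of sharing hardcoded fixtures
+export const createUser = (overrides: UserOverrides = {}) => ({
+  id: `user-${Date.now()}-${faker.string.alphanumeric(6)}`,
+  name: faker.person.fullName(),
+  email: faker.internet.email(),
+  role: 'member',
+  ...overrides,
+})
+
+// Usage: seed via API in the test, assert via the UI, delete in cleanup
+const admin = createUser({ role: 'admin' })
+```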
+
+## Error Handling Patterns
+
+### Expected Error Testing
+```javascript
+// Cypress
+describe('Error Handling', () => {
+ it('should handle API errors gracefully', () => {
+ Cypress.on('uncaught:exception', (err) => {
+ if (err.message.includes('NetworkError')) return false
+ })
+
+ cy.intercept('GET', '/api/data', { statusCode: 500 })
+ cy.visit('/dashboard')
+ cy.get('[data-cy="error-message"]').should('be.visible')
+ })
+})
+
+// Playwright
+test('should handle API errors gracefully', async ({ page }) => {
+ page.on('pageerror', (error) => {
+ if (!error.message.includes('NetworkError')) throw error
+ })
+
+ await page.route('/api/data', route =>
+ route.fulfill({ status: 500 })
+ )
+ await page.goto('/dashboard')
+ await expect(page.getByTestId('error-message')).toBeVisible()
+})
+```
+
+## Naming Conventions
+
+### File Naming
+```
+ComponentName.cy.tsx # Cypress component test
+ComponentName.test.tsx # Jest/RTL test
+component-name.spec.ts # Playwright test
+component-name-KEY.spec.ts # Key pattern demonstration
+```
+
+### Test Naming
+```javascript
+describe('Feature/Component Name', () => {
+ context('when condition exists', () => {
+ it('should perform expected action', () => {})
+ it('should handle error case', () => {})
+ })
+
+ context('when condition does not exist', () => {
+ it('should show default state', () => {})
+ })
+})
+```
+
+### Data Attributes
+```html
+<!-- Reconstructed examples; attribute values are illustrative -->
+<button data-cy="submit-button">Submit</button>
+<input data-cy="email-input" type="email" />
+<div data-testid="error-message">Something went wrong</div>
+```
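+
+How these attributes are typically consumed (a sketch; Playwright's `getByTestId` targets `data-testid` by default and can be pointed at `data-cy` via the `testIdAttribute` option):
+
+```typescript
+// Cypress
+cy.get('[data-cy="submit-button"]').click()
+
+// Playwright
+test('surfaces the error message', async ({ page }) => {
+  await expect(page.getByTestId('error-message')).toBeVisible()
+})
+```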
+
+## Risk-Based Testing Philosophy
+
+### Test Level Decision Matrix
+- **P0 (Critical)**: Payment, authentication, data integrity
+- **P1 (Core)**: Primary user journeys, business logic
+- **P2 (Secondary)**: Supporting features, edge cases
+- **P3 (Nice-to-have)**: Cosmetic, non-functional improvements
+
+### Coverage Strategy
+```
+Unit Tests: 70% - Pure functions, algorithms, utilities
+Integration: 20% - API contracts, service boundaries
+E2E: 10% - Critical user paths only
+```
+
+## Testing Tools Preferences
+
+### Framework Choices
+- **Component Testing**: Cypress Component Testing > RTL
+- **E2E Testing**: Playwright = Cypress (context-dependent)
+- **API Testing**: Playwright API testing > REST clients
+- **Contract Testing**: Pact.js
+- **Visual Testing**: Percy, Chromatic, or Playwright screenshots
+
+### Supporting Tools
+- **Test Data**: Faker.js for dynamic data
+- **Network**: MSW (Mock Service Worker) for API mocking, cy.intercept for Cypress
+- **Assertions**: Cypress chains, Playwright expects
+- **Reporting**: HTML reports, JUnit for CI
+
+## Key Principles Summary
+
+1. **Shift Left** - Test early in development cycle
+2. **Test Pyramid** - More unit/component, fewer E2E
+3. **Deterministic** - No randomness, no flakiness
+4. **Independent** - Tests don't affect each other
+5. **Fast Feedback** - Quick execution, clear failures
+6. **User-Centric** - Test from user perspective
+7. **Maintainable** - Easy to understand and modify
+8. **Documented** - Clear intent and requirements
+9. **Automated** - CI/CD integration
+10. **Measured** - Track metrics and improve
+
+## TEA Agent Enhancement Commands
+
+The Test Architect agent should implement these streamlined commands, which incorporate the Murat testing philosophy:
+
+```
+*automate - Create comprehensive test automation following all patterns
+ (Generates tests with proper fixtures, helpers, factories,
+ selectors, network handling, data management - all in Murat style)
+
+*ci - Setup complete CI/CD pipeline with GitHub Actions
+ (Includes parallel execution, burn-in for changed files,
+ environment-based configs, proper reporting)
+
+*framework - Initial test framework setup (one-time)
+ (Creates folder structure, base configs, fixture architecture,
+ auth setup, global setup, all utilities)
+```
+
+These three commands should internally apply ALL the patterns and principles:
+- Functional helpers over Page Objects
+- Pure function → Fixture pattern
+- Network-first interception
+- Factory-based test data
+- Proper selector strategy (data-cy/data-testid)
+- No hard waits, deterministic waiting
+- Self-cleaning tests
+- Component-first approach
+- Risk-based test design
+- Environment-based configuration
+- Modular fixture composition
+
+The commands apply these patterns end to end without requiring users to know them; the result is production-quality test architecture in the Murat style.
+
+## Reference Links for Agent
+
+When the TEA agent needs specific examples or deeper context, it should reference:
+
+### Primary Sources
+- CCTDD Book: Component testing patterns and philosophy
+- Blog posts in test-resources-for-ai/blog/: Specific patterns and strategies
+- Sample repos: Working implementations
+
+### Pattern References
+- Functional Helpers: Page-Objects-vs-Functional-Helpers.mkd
+- Feature Flags: Effective-Test-Strategies-Feature-Flags.mkd
+- CI/CD: CI-CD-strategies.mkd
+- Component Testing: Cypress-Component-Testing-vs-RTL.mkd
+
+### Implementation Examples
+- **cy-vs-pw-murats-version**: Framework comparison and parallel implementations
+- **tour-of-heroes**: Full application with component and E2E tests
+- **pact-js-example repos**: Contract testing patterns
+- **pw-book**: Comprehensive Playwright patterns and examples
+
+## TODO: TEA Agent Implementation Plan
+
+### Phase 1: Core Integration
+- [ ] Integrate this testing philosophy document into the TEA agent at `src/modules/bmm/agents/tea.md`
+- [ ] Update existing TEA commands to reference this document for patterns
+- [ ] Ensure `*tea-automate`, `*tea-ci`, and `*tea-framework` commands are implemented
+
+### BMAD Method Integration Structure
+
+The BMAD method follows an `agent -> task -> template` architecture. For the three new TEA commands:
+
+#### Task File Locations
+
+**Note**: BMAD is transitioning to a 3-file system per task (md template, yaml process, md checklist).
+For now, we create XML-based task definitions in markdown files.
+
+```
+src/modules/bmm/testarch/
+├── automate.md # Test automation task (XML format in .md)
+├── ci.md # CI/CD pipeline task (XML format in .md)
+└── framework.md # Framework setup task (XML format in .md)
+```
+
+Templates are embedded within the task XML definitions. Focus on good LLM prompts for user interaction.
+
+### Phase 2: Command Implementation Details
+
+#### *automate Command
+When executed, should:
+1. **Check for Epic/Story Context FIRST**:
+ - If story/epic exists, collaborate with dev agent to understand:
+ - What was implemented (check story tasks/subtasks)
+ - Source files modified (from File List in story)
+ - API specs if backend work
+ - UI components if frontend work
+ - If no story/epic, ask user what they're automating:
+ - UI feature? → analyze components
+ - API endpoint? → check API specs
+ - Full stack? → gather both
+2. Analyze the current codebase structure (React, Vue, Node.js, etc.)
+3. Auto-detect existing test framework (Playwright, Cypress, Jest, Vitest)
+ - If no framework detected, prompt user for preference
+4. Generate tests following:
+ - Functional helper pattern (not Page Objects)
+ - Proper fixture architecture with mergeTests
+ - Factory-based test data generation
+ - Network-first interception setup
+ - Proper selector strategy (data-cy/data-testid)
+ - Self-cleaning test patterns
+ - API setup for data, UI for assertions
+
+#### *ci Command
+When executed, should:
+1. Detect repository type (GitHub, GitLab, etc.)
+2. Generate workflow files with:
+ - Environment-based configuration
+ - Parallel execution matrix
+ - Burn-in testing for changed files
+ - Proper caching strategies
+ - Test result reporting
+ - Artifact storage
+
+#### *framework Command
+When executed, should:
+1. Create complete folder structure
+2. Setup configuration files (see the config sketch after this list):
+ - Environment-based configs (local, staging, production)
+ - Global setup for auth
+ - Fixture architecture
+3. Install necessary dependencies
+4. Create utility functions and helpers
+5. Setup example tests demonstrating patterns
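+
+A hedged sketch of the environment-based configuration from step 2 (variable name, URLs, and retry policy are placeholders, not the command's fixed output):
+
+```typescript
+// playwright.config.ts
+import { defineConfig } from '@playwright/test'
+
+const ENV = process.env.TEST_ENV ?? 'local'
+const baseURLs: Record<string, string> = {
+  local: 'http://localhost:3000',
+  staging: 'https://staging.example.com',
+  production: 'https://www.example.com',
+}
+
+export default defineConfig({
+  use: { baseURL: baseURLs[ENV], testIdAttribute: 'data-cy', trace: 'retain-on-failure' },
+  retries: ENV === 'local' ? 0 : 2,
+})
+```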
+
+### Task Definition Structure
+
+#### *tea-automate Task
+```xml
+<!-- Sketch: tag names are illustrative; the original XML markup was lost in extraction -->
+<task id="tea-automate">
+  <flow>
+    1. Analyze Codebase Structure
+    2. Setup Test Architecture
+    3. Implement Core Test Fixtures
+    4. Generate Test Scenarios
+    5. Implement Test Patterns
+    6. Validate Test Quality
+  </flow>
+</task>
+```
+
+#### *tea-ci Task
+```xml
+<!-- Sketch: tag names are illustrative; the original XML markup was lost in extraction -->
+<task id="tea-ci">
+  <flow>
+    1. Detect Repository Type
+    2. Analyze Test Framework
+    3. Generate Workflow Files
+    4. Setup Parallel Execution
+    5. Configure Burn-in Testing
+    6. Setup Reporting
+  </flow>
+</task>
+```
+
+#### *tea-framework Task
+```xml
+<!-- Sketch: tag names are illustrative; the original XML markup was lost in extraction -->
+<task id="tea-framework">
+  <flow>
+    1. Detect Application Framework
+    2. Create Directory Structure
+    3. Setup Base Configuration
+    4. Implement Fixture Architecture
+    5. Create Helper Functions
+    6. Generate Example Tests
+  </flow>
+</task>
+```
+
+### Template Definitions
+
+#### test-automation-plan.yaml Template
+```yaml
+template:
+ id: test-automation-plan-v1
+ name: Test Automation Plan
+ output:
+ format: yaml
+ filename: tea.teaLocation/automation/{{epic_num}}.{{story_num}}-test-plan.yml
+
+schema: 1
+story: "{{epic_num}}.{{story_num}}"
+framework: "{{test_framework}}" # playwright|cypress|jest|vitest
+application_type: "{{app_type}}" # react|vue|angular|node
+generated: "{{iso_timestamp}}"
+
+test_architecture:
+ pattern: "functional_helpers" # ALWAYS
+ fixture_composition: true
+ factory_data: true
+ network_interception: true
+ selector_strategy: "data-testid" # or data-cy
+
+test_distribution:
+ unit: {{unit_count}}
+ integration: {{integration_count}}
+ e2e: {{e2e_count}}
+
+files_generated:
+ fixtures: []
+ helpers: []
+ tests: []
+
+coverage_summary:
+ acceptance_criteria_covered: []
+ risk_mitigations_addressed: []
+```
+
+#### ci-pipeline.yaml Template
+```yaml
+template:
+ id: ci-pipeline-v1
+ name: CI/CD Pipeline Configuration
+ output:
+ format: yaml
+ filename: tea.teaLocation/ci/pipeline-config.yml
+
+schema: 1
+repository_type: "{{repo_type}}" # github|gitlab|bitbucket
+test_framework: "{{test_framework}}"
+generated: "{{iso_timestamp}}"
+
+pipeline_features:
+ parallel_execution: true
+ matrix_strategy:
+ containers: {{container_count}}
+ burn_in_testing: true
+ changed_file_detection: true
+ environment_configs: ["local", "staging", "production"]
+
+workflow_files:
+ - path: ".github/workflows/e2e.yml"
+ triggers: ["pull_request", "push"]
+ - path: ".github/workflows/burn-in.yml"
+ triggers: ["pull_request"]
+
+optimizations:
+ cache_strategy: "npm"
+ artifact_retention: "30 days"
+ test_sharding: true
+```
+
+#### test-framework-config.yaml Template
+```yaml
+template:
+ id: test-framework-config-v1
+ name: Test Framework Configuration
+ output:
+ format: yaml
+ filename: tea.teaLocation/framework/setup-config.yml
+
+schema: 1
+framework: "{{test_framework}}"
+application: "{{app_framework}}"
+generated: "{{iso_timestamp}}"
+
+directory_structure:
+ tests_root: "{{test_dir}}" # tests/ or cypress/
+ fixtures: "{{test_dir}}/fixtures"
+ support: "{{test_dir}}/support"
+ helpers: "{{test_dir}}/support/helpers"
+
+configuration_files:
+ - name: "{{framework}}.config.ts"
+ environments: ["local", "staging", "production"]
+ - name: ".nvmrc"
+ node_version: "{{node_version}}"
+ - name: ".npmrc"
+ registry: "{{npm_registry}}"
+
+fixture_architecture:
+ base_fixtures:
+ - apiRequest
+ - network
+ - auth
+ - log
+ composition_method: "mergeTests"
+
+utilities_created:
+ factories: []
+ helpers: []
+ commands: []
+```
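+
+For reference, a minimal sketch (Playwright 1.39+ assumed) of the `mergeTests` fixture composition the template above describes; fixture names mirror the base fixtures listed, but the implementations are illustrative:
+
+```typescript
+import { test as base, mergeTests, expect } from '@playwright/test'
+
+// Small, single-concern fixtures...
+const apiRequestTest = base.extend<{ apiRequest: (path: string) => Promise<unknown> }>({
+  apiRequest: async ({ request }, use) => {
+    await use(async (path) => (await request.get(path)).json())
+  },
+})
+
+const logTest = base.extend<{ log: (message: string) => void }>({
+  log: async ({}, use) => {
+    await use((message) => console.log(`[test] ${message}`))
+  },
+})
+
+// ...merged into the single test object that specs import
+export const test = mergeTests(apiRequestTest, logTest)
+export { expect }
+```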
+
+### Phase 3: Advanced Patterns Integration
+- [ ] Auth session management patterns from sample repos
+- [ ] Contract testing patterns from pact-js repos
+- [ ] Component testing patterns from tour-of-heroes
+- [ ] Framework comparison patterns from cy-vs-pw repo
+- [ ] Advanced intercept patterns with stateful mocking
+
+### Phase 4: Documentation & Examples
+- [ ] Create example implementations for each pattern
+- [ ] Document migration path from Page Objects to functional helpers
+- [ ] Provide framework comparison guide (when to use Playwright vs Cypress)
+- [ ] Create troubleshooting guide for common issues
+
+### Integration Notes
+
+**Current TEA Agent Location**: `src/modules/bmm/agents/tea.md`
+
+**Key Integration Points**:
+1. The TEA agent should reference this document as its knowledge base
+2. Existing commands (*risk, *design, *trace, *nfr, *review, *gate) remain unchanged
+3. New commands (*tea-automate, *tea-ci, *tea-framework) are additive
+
+**BMAD Method Task Execution Flow**:
+1. User invokes command (e.g., `*automate`)
+2. TEA agent triggers corresponding task in `src/modules/bmm/testarch/`
+3. Task follows XML-defined flow steps
+4. Task generates output using embedded template definitions
+5. Results saved to `tea.teaLocation` directory structure
+
+**Task XML Structure Requirements**:
+- Must include `` instructions
+- Define explicit `` with numbered steps
+- Include `` for error handling
+- Specify `