diff --git a/docs/explanation/features/tea-overview.md b/docs/explanation/features/tea-overview.md index 968c8de4..279f45b5 100644 --- a/docs/explanation/features/tea-overview.md +++ b/docs/explanation/features/tea-overview.md @@ -60,8 +60,8 @@ If you are unsure, default to the integrated path for your track and adjust late | `*framework` | Playwright/Cypress scaffold, `.env.example`, `.nvmrc`, sample specs | Use when no production-ready harness exists | - | | `*ci` | CI workflow, selective test scripts, secrets checklist | Platform-aware (GitHub Actions default) | - | | `*test-design` | Combined risk assessment, mitigation plan, and coverage strategy | Risk scoring + optional exploratory mode | **+ Exploratory**: Interactive UI discovery with browser automation (uncover actual functionality) | -| `*atdd` | Failing acceptance tests + implementation checklist | TDD red phase + optional recording mode | **+ Recording**: AI generation verified with live browser (accurate selectors from real DOM) | -| `*automate` | Prioritized specs, fixtures, README/script updates, DoD summary | Optional healing/recording, avoid duplicate coverage | **+ Healing**: Pattern fixes enhanced with visual debugging + **+ Recording**: AI verified with live browser | +| `*atdd` | Failing acceptance tests + implementation checklist | TDD red phase + optional recording mode | **+ Recording**: UI selectors verified with live browser; API tests benefit from trace analysis | +| `*automate` | Prioritized specs, fixtures, README/script updates, DoD summary | Optional healing/recording, avoid duplicate coverage | **+ Healing**: Visual debugging + trace analysis for test fixes; **+ Recording**: Verified selectors (UI) + network inspection (API) | | `*test-review` | Test quality review report with 0-100 score, violations, fixes | Reviews tests against knowledge base patterns | - | | `*nfr-assess` | NFR assessment report with actions | Focus on security/performance/reliability | - | | `*trace` | Phase 1: 
Coverage matrix, recommendations. Phase 2: Gate decision (PASS/CONCERNS/FAIL/WAIVED) | Two-phase workflow: traceability + gate decision | - | @@ -308,7 +308,7 @@ Want to understand TEA principles and patterns in depth? - [Engagement Models](/docs/explanation/tea/engagement-models.md) - TEA Lite, TEA Solo, TEA Integrated (5 models explained) **Philosophy:** -- [Testing as Engineering](/docs/explanation/philosophy/testing-as-engineering.md) - Why TEA exists, problem statement +- [Testing as Engineering](/docs/explanation/philosophy/testing-as-engineering.md) - **Start here to understand WHY TEA exists** - The problem with AI-generated tests and TEA's three-part solution ## Optional Integrations diff --git a/docs/explanation/tea/engagement-models.md b/docs/explanation/tea/engagement-models.md index 57c03c48..7bb65afd 100644 --- a/docs/explanation/tea/engagement-models.md +++ b/docs/explanation/tea/engagement-models.md @@ -594,7 +594,7 @@ Client project 3 (Ad-hoc): **When:** Adopt BMad Method, want full integration. **Steps:** -1. Install BMad Method (`npx bmad-method@alpha install`) +1. Install BMad Method (see installation guide) 2. Run planning workflows (PRD, architecture) 3. Integrate TEA into Phase 3 (system-level test design) 4. Follow integrated lifecycle (per epic workflows) @@ -690,7 +690,7 @@ Each model uses different TEA workflows. 
See: **Use-Case Guides:** - [Using TEA with Existing Tests](/docs/how-to/brownfield/use-tea-with-existing-tests.md) - Model 5: Brownfield -- [Running TEA for Enterprise](/docs/how-to/workflows/run-tea-for-enterprise.md) - Enterprise integration +- [Running TEA for Enterprise](/docs/how-to/enterprise/use-tea-for-enterprise.md) - Enterprise integration **All Workflow Guides:** - [How to Run Test Design](/docs/how-to/workflows/run-test-design.md) - Used in TEA Solo and Integrated diff --git a/docs/explanation/tea/fixture-architecture.md b/docs/explanation/tea/fixture-architecture.md index 9afecd7e..6b58d64f 100644 --- a/docs/explanation/tea/fixture-architecture.md +++ b/docs/explanation/tea/fixture-architecture.md @@ -220,8 +220,8 @@ test('should update profile', async ({ apiRequest, authToken, log }) => { // Use API request fixture (matches pure function signature) const { status, body } = await apiRequest({ method: 'PATCH', - url: '/api/profile', // Pure function uses 'url' (not 'path') - data: { name: 'New Name' }, // Pure function uses 'data' (not 'body') + url: '/api/profile', + data: { name: 'New Name' }, headers: { Authorization: `Bearer ${authToken}` } }); diff --git a/docs/explanation/tea/knowledge-base-system.md b/docs/explanation/tea/knowledge-base-system.md index d98df067..e0a40a89 100644 --- a/docs/explanation/tea/knowledge-base-system.md +++ b/docs/explanation/tea/knowledge-base-system.md @@ -484,22 +484,31 @@ await page.waitForSelector('.success', { timeout: 30000 }); All developers: ```typescript -import { test } from '@seontechnologies/playwright-utils/recurse/fixtures'; +import { test } from '@seontechnologies/playwright-utils/fixtures'; -test('job completion', async ({ page, recurse }) => { - await page.click('button'); - - const result = await recurse({ - fn: () => apiRequest({ method: 'GET', path: '/api/job/123' }), - predicate: (job) => job.status === 'complete', - timeout: 30000 +test('job completion', async ({ apiRequest, recurse }) => { + // 
Start async job + const { body: job } = await apiRequest({ + method: 'POST', + path: '/api/jobs' }); - expect(result.status).toBe('complete'); + // Poll until complete (correct API: command, predicate, options) + const result = await recurse( + () => apiRequest({ method: 'GET', path: `/api/jobs/${job.id}` }), + (response) => response.body.status === 'completed', // response.body from apiRequest + { + timeout: 30000, + interval: 2000, + log: 'Waiting for job to complete' + } + ); + + expect(result.body.status).toBe('completed'); }); ``` -**Result:** Consistent pattern, established best practice. +**Result:** Consistent pattern using correct playwright-utils API (command, predicate, options). ## Technical Implementation @@ -520,7 +529,7 @@ For details on the knowledge base index, see: **Overview:** - [TEA Overview](/docs/explanation/features/tea-overview.md) - Knowledge base in workflows -- [Testing as Engineering](/docs/explanation/philosophy/testing-as-engineering.md) - Context engineering philosophy +- [Testing as Engineering](/docs/explanation/philosophy/testing-as-engineering.md) - **Foundation: Context engineering philosophy** (why knowledge base solves AI test problems) ## Practical Guides diff --git a/docs/explanation/tea/network-first-patterns.md b/docs/explanation/tea/network-first-patterns.md index 8f599111..4be84dbb 100644 --- a/docs/explanation/tea/network-first-patterns.md +++ b/docs/explanation/tea/network-first-patterns.md @@ -125,6 +125,40 @@ test('should load dashboard data', async ({ page }) => { - No fixed timeout (fast when API is fast) - Validates API response (catch backend errors) +**With Playwright Utils (Even Cleaner):** +```typescript +import { test } from '@seontechnologies/playwright-utils/fixtures'; +import { expect } from '@playwright/test'; + +test('should load dashboard data', async ({ page, interceptNetworkCall }) => { + // Set up interception BEFORE navigation + const dashboardCall = interceptNetworkCall({ + method: 'GET', + url: 
'**/api/dashboard' + }); + + // Navigate + await page.goto('/dashboard'); + + // Wait for API response (automatic JSON parsing) + const { status, responseJson: data } = await dashboardCall; + + // Validate API response + expect(status).toBe(200); + expect(data.items).toBeDefined(); + + // Assert UI matches API data + await expect(page.locator('.data-table')).toBeVisible(); + await expect(page.locator('.data-table tr')).toHaveCount(data.items.length); +}); +``` + +**Playwright Utils Benefits:** +- Automatic JSON parsing (no `await response.json()`) +- Returns `{ status, responseJson, requestJson }` structure +- Cleaner API (no need to check `resp.ok()`) +- Same intercept-before-navigate pattern + ### Intercept-Before-Navigate Pattern **Key insight:** Set up wait BEFORE triggering the action. @@ -196,6 +230,7 @@ sequenceDiagram ### TEA Generates Network-First Tests +**Vanilla Playwright:** ```typescript // When you run *atdd or *automate, TEA generates: @@ -219,6 +254,37 @@ test('should create user', async ({ page }) => { }); ``` +**With Playwright Utils (if `tea_use_playwright_utils: true`):** +```typescript +import { test } from '@seontechnologies/playwright-utils/fixtures'; +import { expect } from '@playwright/test'; + +test('should create user', async ({ page, interceptNetworkCall }) => { + // TEA uses interceptNetworkCall for cleaner interception + const createUserCall = interceptNetworkCall({ + method: 'POST', + url: '**/api/users' + }); + + await page.getByLabel('Name').fill('Test User'); + await page.getByRole('button', { name: 'Submit' }).click(); + + // Wait for response (automatic JSON parsing) + const { status, responseJson: user } = await createUserCall; + + // Validate both API and UI + expect(status).toBe(201); + expect(user.id).toBeDefined(); + await expect(page.locator('.success')).toContainText(user.name); +}); +``` + +**Playwright Utils Benefits:** +- Automatic JSON parsing (`responseJson` ready to use) +- No manual `await response.json()` +- 
Returns `{ status, responseJson }` structure +- Cleaner, more readable code + ### TEA Reviews for Hard Waits When you run `*test-review`: @@ -252,6 +318,7 @@ await responsePromise; // ✅ ### Basic Response Wait +**Vanilla Playwright:** ```typescript // Wait for any successful response const promise = page.waitForResponse(resp => resp.ok()); @@ -259,8 +326,23 @@ await page.click('button'); await promise; ``` +**With Playwright Utils:** +```typescript +import { test } from '@seontechnologies/playwright-utils/fixtures'; + +test('basic wait', async ({ page, interceptNetworkCall }) => { + const responseCall = interceptNetworkCall({ url: '**' }); // Match any + await page.click('button'); + const { status } = await responseCall; + expect(status).toBe(200); +}); +``` + +--- + ### Specific URL Match +**Vanilla Playwright:** ```typescript // Wait for specific endpoint const promise = page.waitForResponse( @@ -270,8 +352,21 @@ await page.goto('/user/123'); await promise; ``` +**With Playwright Utils:** +```typescript +test('specific URL', async ({ page, interceptNetworkCall }) => { + const userCall = interceptNetworkCall({ url: '**/api/users/123' }); + await page.goto('/user/123'); + const { status, responseJson } = await userCall; + expect(status).toBe(200); +}); +``` + +--- + ### Method + Status Match +**Vanilla Playwright:** ```typescript // Wait for POST that returns 201 const promise = page.waitForResponse( @@ -284,8 +379,24 @@ await page.click('button[type="submit"]'); await promise; ``` +**With Playwright Utils:** +```typescript +test('method and status', async ({ page, interceptNetworkCall }) => { + const createCall = interceptNetworkCall({ + method: 'POST', + url: '**/api/users' + }); + await page.click('button[type="submit"]'); + const { status, responseJson } = await createCall; + expect(status).toBe(201); // Explicit status check +}); +``` + +--- + ### Multiple Responses +**Vanilla Playwright:** ```typescript // Wait for multiple API calls const [usersResp, 
postsResp] = await Promise.all([ @@ -298,8 +409,29 @@ const users = await usersResp.json(); const posts = await postsResp.json(); ``` +**With Playwright Utils:** +```typescript +test('multiple responses', async ({ page, interceptNetworkCall }) => { + const usersCall = interceptNetworkCall({ url: '**/api/users' }); + const postsCall = interceptNetworkCall({ url: '**/api/posts' }); + + await page.goto('/dashboard'); // Triggers both + + const [{ responseJson: users }, { responseJson: posts }] = await Promise.all([ + usersCall, + postsCall + ]); + + expect(users).toBeInstanceOf(Array); + expect(posts).toBeInstanceOf(Array); +}); +``` + +--- + ### Validate Response Data +**Vanilla Playwright:** ```typescript // Verify API response before asserting UI const promise = page.waitForResponse( @@ -319,6 +451,28 @@ expect(order.total).toBeGreaterThan(0); await expect(page.locator('.order-confirmation')).toContainText(order.id); ``` +**With Playwright Utils:** +```typescript +test('validate response data', async ({ page, interceptNetworkCall }) => { + const checkoutCall = interceptNetworkCall({ + method: 'POST', + url: '**/api/checkout' + }); + + await page.click('button:has-text("Complete Order")'); + + const { status, responseJson: order } = await checkoutCall; + + // Response validation (automatic JSON parsing) + expect(status).toBe(200); + expect(order.status).toBe('confirmed'); + expect(order.total).toBeGreaterThan(0); + + // UI validation + await expect(page.locator('.order-confirmation')).toContainText(order.id); +}); +``` + ## Advanced Patterns ### HAR Recording for Offline Testing @@ -481,6 +635,36 @@ test('dashboard loads data', async ({ page }) => { - Validates UI matches API (catch frontend bugs) - Works in any environment (local, CI, staging) +**With Playwright Utils (Even Better):** +```typescript +import { test } from '@seontechnologies/playwright-utils/fixtures'; + +test('dashboard loads data', async ({ page, interceptNetworkCall }) => { + const dashboardCall = 
interceptNetworkCall({ + method: 'GET', + url: '**/api/dashboard' + }); + + await page.goto('/dashboard'); + + const { status, responseJson: { items } } = await dashboardCall; + + // Validate API response (automatic JSON parsing) + expect(status).toBe(200); + expect(items).toHaveLength(5); + + // Validate UI matches API + await expect(page.locator('table tr')).toHaveCount(items.length); +}); +``` + +**Additional Benefits:** +- No manual `await response.json()` (automatic parsing) +- Cleaner destructuring of nested data +- Consistent API across all network calls + +--- + ### Form Submission **Traditional (Flaky):** @@ -513,6 +697,35 @@ test('form submission', async ({ page }) => { }); ``` +**With Playwright Utils:** +```typescript +import { test } from '@seontechnologies/playwright-utils/fixtures'; + +test('form submission', async ({ page, interceptNetworkCall }) => { + const submitCall = interceptNetworkCall({ + method: 'POST', + url: '**/api/submit' + }); + + await page.getByLabel('Email').fill('test@example.com'); + await page.getByRole('button', { name: 'Submit' }).click(); + + const { status, responseJson: result } = await submitCall; + + // Automatic JSON parsing, no manual await + expect(status).toBe(200); + expect(result.success).toBe(true); + await expect(page.locator('.success')).toBeVisible(); +}); +``` + +**Progression:** +- Traditional: Hard waits (flaky) +- Network-First (Vanilla): waitForResponse (deterministic) +- Network-First (PW-Utils): interceptNetworkCall (deterministic + cleaner API) + +--- + ## Common Misconceptions ### "I Already Use waitForSelector" @@ -545,29 +758,57 @@ await page.waitForSelector('.success'); // Then validate UI ### "Too Much Boilerplate" -**Solution:** Extract to fixtures (see Fixture Architecture) +**Problem:** `waitForResponse` is verbose, repeated in every test. +**Solution:** Use Playwright Utils `interceptNetworkCall` - built-in fixture that reduces boilerplate. 
+ +**Vanilla Playwright (Repetitive):** ```typescript -// Create reusable fixture -export const test = base.extend({ - waitForApi: async ({ page }, use) => { - await use((urlPattern: string) => { - // Returns promise immediately (doesn't await) - return page.waitForResponse( - resp => resp.url().includes(urlPattern) && resp.ok() - ); - }); - } +test('test 1', async ({ page }) => { + const promise = page.waitForResponse( + resp => resp.url().includes('/api/submit') && resp.ok() + ); + await page.click('button'); + await promise; }); -// Use in tests -test('test', async ({ page, waitForApi }) => { - const promise = waitForApi('/api/submit'); // Get promise - await page.click('button'); // Trigger action - await promise; // Wait for response +test('test 2', async ({ page }) => { + const promise = page.waitForResponse( + resp => resp.url().includes('/api/load') && resp.ok() + ); + await page.click('button'); + await promise; +}); +// Repeated pattern in every test +``` + +**With Playwright Utils (Cleaner):** +```typescript +import { test } from '@seontechnologies/playwright-utils/fixtures'; + +test('test 1', async ({ page, interceptNetworkCall }) => { + const submitCall = interceptNetworkCall({ url: '**/api/submit' }); + await page.click('button'); + const { status, responseJson } = await submitCall; + expect(status).toBe(200); +}); + +test('test 2', async ({ page, interceptNetworkCall }) => { + const loadCall = interceptNetworkCall({ url: '**/api/load' }); + await page.click('button'); + const { responseJson } = await loadCall; + // Automatic JSON parsing, cleaner API }); ``` +**Benefits:** +- Less boilerplate (fixture handles complexity) +- Automatic JSON parsing +- Glob pattern matching (`**/api/**`) +- Consistent API across all tests + +See [Integrate Playwright Utils](/docs/how-to/customization/integrate-playwright-utils.md#intercept-network-call) for setup. 
+ ## Technical Implementation For detailed network-first patterns, see the knowledge base: diff --git a/docs/explanation/tea/risk-based-testing.md b/docs/explanation/tea/risk-based-testing.md index 88c58c29..554afbb3 100644 --- a/docs/explanation/tea/risk-based-testing.md +++ b/docs/explanation/tea/risk-based-testing.md @@ -573,7 +573,7 @@ flowchart TD - [How to Run NFR Assessment](/docs/how-to/workflows/run-nfr-assess.md) - NFR risk assessment **Use-Case Guides:** -- [Running TEA for Enterprise](/docs/how-to/workflows/run-tea-for-enterprise.md) - Enterprise risk management +- [Running TEA for Enterprise](/docs/how-to/enterprise/use-tea-for-enterprise.md) - Enterprise risk management ## Reference diff --git a/docs/explanation/tea/test-quality-standards.md b/docs/explanation/tea/test-quality-standards.md index 3f5c17f1..58d5d318 100644 --- a/docs/explanation/tea/test-quality-standards.md +++ b/docs/explanation/tea/test-quality-standards.md @@ -107,7 +107,7 @@ test('flaky test', async ({ page }) => { }); ``` -**Good Example:** +**Good Example (Vanilla Playwright):** ```typescript test('deterministic test', async ({ page }) => { const responsePromise = page.waitForResponse( @@ -126,12 +126,43 @@ test('deterministic test', async ({ page }) => { }); ``` -**Why it works:** +**With Playwright Utils (Even Cleaner):** +```typescript +import { test } from '@seontechnologies/playwright-utils/fixtures'; +import { expect } from '@playwright/test'; + +test('deterministic test', async ({ page, interceptNetworkCall }) => { + const submitCall = interceptNetworkCall({ + method: 'POST', + url: '**/api/submit' + }); + + await page.click('button'); + + // Wait for actual response (automatic JSON parsing) + const { status, responseJson } = await submitCall; + expect(status).toBe(200); + + // Modal should ALWAYS show (make it deterministic) + await expect(page.locator('.modal')).toBeVisible(); + await page.click('.dismiss'); + + // Explicit assertion (fails if not visible) + await 
expect(page.locator('.success')).toBeVisible();
+});
+```
+
+**Why both work:**
 - Waits for actual event (network response)
 - No conditionals (behavior is deterministic)
 - Assertions fail loudly (no silent failures)
 - Same result every run (deterministic)
 
+**Playwright Utils additional benefits:**
+- Automatic JSON parsing
+- `{ status, responseJson }` structure (can validate response data)
+- No manual `await response.json()`
+
 ### 2. Isolation (No Dependencies)
 
 **Rule:** Test runs independently, no shared state.
 
@@ -152,7 +183,7 @@ test('create user', async ({ apiRequest }) => {
     const { body } = await apiRequest({
         method: 'POST',
         path: '/api/users',
-        body: { email: 'test@example.com' } // 'body' not 'data' (hard-coded)
+        body: { email: 'test@example.com' } // hard-coded
     });
     userId = body.id; // Store in global
 });
@@ -162,7 +193,7 @@ test('update user', async ({ apiRequest }) => {
     await apiRequest({
         method: 'PATCH',
         path: `/api/users/${userId}`,
-        body: { name: 'Updated' } // 'body' not 'data'
+        body: { name: 'Updated' }
     });
     // No cleanup - leaves user in database
 });
@@ -213,7 +244,7 @@ test('should update user profile', async ({ apiRequest }) => {
     const { status: createStatus, body: user } = await apiRequest({
         method: 'POST',
         path: '/api/users',
-        body: { email: testEmail, name: faker.person.fullName() } // 'body' not 'data'
+        body: { email: testEmail, name: faker.person.fullName() }
     });
 
     expect(createStatus).toBe(201);
@@ -222,7 +253,7 @@ test('should update user profile', async ({ apiRequest }) => {
     const { status, body: updated } = await apiRequest({
         method: 'PATCH',
         path: `/api/users/${user.id}`,
-        body: { name: 'Updated Name' } // 'body' not 'data'
+        body: { name: 'Updated Name' }
     });
 
     expect(status).toBe(200);
@@ -412,7 +443,7 @@ test('slow test', async ({ page }) => {
 
 **Total time:** 3+ minutes (95 seconds wasted on hard waits)
 
-**Good Example:**
+**Good Example (Vanilla Playwright):**
 ```typescript
 // ✅ Fast test (< 10 seconds)
 test('fast test', async ({ 
page }) => { @@ -436,8 +467,50 @@ test('fast test', async ({ page }) => { }); ``` +**With Playwright Utils:** +```typescript +import { test } from '@seontechnologies/playwright-utils/fixtures'; +import { expect } from '@playwright/test'; + +test('fast test', async ({ page, interceptNetworkCall }) => { + // Set up interception + const resultCall = interceptNetworkCall({ + method: 'GET', + url: '**/api/result' + }); + + await page.goto('/'); + + // Direct navigation (skip intermediate pages) + await page.goto('/page-10'); + + // Efficient selector + await page.getByRole('button', { name: 'Submit' }).click(); + + // Wait for actual response (automatic JSON parsing) + const { status, responseJson } = await resultCall; + + expect(status).toBe(200); + await expect(page.locator('.result')).toBeVisible(); + + // Can also validate response data if needed + // expect(responseJson.data).toBeDefined(); +}); +``` + **Total time:** < 10 seconds (no wasted waits) +**Both examples achieve:** +- No hard waits (wait for actual events) +- Direct navigation (skip unnecessary steps) +- Efficient selectors (getByRole) +- Fast execution + +**Playwright Utils bonus:** +- Can validate API response data easily +- Automatic JSON parsing +- Cleaner API + ## TEA's Quality Scoring TEA reviews tests against these standards in `*test-review`: @@ -821,7 +894,7 @@ For detailed test quality patterns, see: **Use-Case Guides:** - [Using TEA with Existing Tests](/docs/how-to/brownfield/use-tea-with-existing-tests.md) - Improve legacy quality -- [Running TEA for Enterprise](/docs/how-to/workflows/run-tea-for-enterprise.md) - Enterprise quality thresholds +- [Running TEA for Enterprise](/docs/how-to/enterprise/use-tea-for-enterprise.md) - Enterprise quality thresholds ## Reference diff --git a/docs/how-to/brownfield/use-tea-with-existing-tests.md b/docs/how-to/brownfield/use-tea-with-existing-tests.md index 231a0279..b2590373 100644 --- a/docs/how-to/brownfield/use-tea-with-existing-tests.md +++ 
b/docs/how-to/brownfield/use-tea-with-existing-tests.md @@ -150,34 +150,40 @@ test('checkout completes', async ({ page }) => { }); ``` -**After (With Playwright Utils + Auto Error Detection):** +**After (With Playwright Utils - Cleaner API):** ```typescript -import { test } from '@seontechnologies/playwright-utils/network-error-monitor/fixtures'; +import { test } from '@seontechnologies/playwright-utils/fixtures'; +import { expect } from '@playwright/test'; -// That's it! Just import the fixture - monitoring is automatic -test('checkout completes', async ({ page }) => { - const checkoutPromise = page.waitForResponse( - resp => resp.url().includes('/api/checkout') && resp.ok() - ); +test('checkout completes', async ({ page, interceptNetworkCall }) => { + // Use interceptNetworkCall for cleaner network interception + const checkoutCall = interceptNetworkCall({ + method: 'POST', + url: '**/api/checkout' + }); await page.click('button[name="checkout"]'); - const response = await checkoutPromise; - const order = await response.json(); + // Wait for response (automatic JSON parsing) + const { status, responseJson: order } = await checkoutCall; + + // Validate API response + expect(status).toBe(200); expect(order.status).toBe('confirmed'); - await expect(page.locator('.confirmation')).toBeVisible(); - // Zero setup - automatically fails if ANY 4xx/5xx occurred - // Error message: "Network errors detected: POST 500 /api/payment" + // Validate UI + await expect(page.locator('.confirmation')).toBeVisible(); }); ``` **Playwright Utils Benefits:** -- Auto-enabled by fixture import (zero code changes) -- Catches silent backend errors (500, 503, 504) -- Test fails even if UI shows cached/stale success message -- Structured error report in test output -- No manual error checking needed +- `interceptNetworkCall` for cleaner network interception +- Automatic JSON parsing (`responseJson` ready to use) +- No manual `await response.json()` +- Glob pattern matching (`**/api/checkout`) 
+- Cleaner, more maintainable code + +**For automatic error detection,** use `network-error-monitor` fixture separately. See [Integrate Playwright Utils](/docs/how-to/customization/integrate-playwright-utils.md#network-error-monitor). **Priority 3: P1 Requirements** ``` @@ -352,10 +358,10 @@ test.skip('flaky test - needs fixing', async ({ page }) => { ```markdown # Quarantined Tests -| Test | Reason | Owner | Target Fix Date | -|------|--------|-------|----------------| -| checkout.spec.ts:45 | Hard wait causes flakiness | QA Team | 2026-01-20 | -| profile.spec.ts:28 | Conditional flow control | Dev Team | 2026-01-25 | +| Test | Reason | Owner | Target Fix Date | +| ------------------- | -------------------------- | -------- | --------------- | +| checkout.spec.ts:45 | Hard wait causes flakiness | QA Team | 2026-01-20 | +| profile.spec.ts:28 | Conditional flow control | Dev Team | 2026-01-25 | ``` **Fix systematically:** @@ -398,12 +404,12 @@ Same process ```markdown # Test Suite Status -| Directory | Tests | Quality Score | Status | Notes | -|-----------|-------|---------------|--------|-------| -| tests/auth/ | 15 | 85/100 | ✅ Modernized | Week 1 cleanup | -| tests/api/ | 32 | 78/100 | ⚠️ In Progress | Week 2 | -| tests/e2e/ | 28 | 62/100 | ❌ Legacy | Week 3 planned | -| tests/integration/ | 12 | 45/100 | ❌ Legacy | Week 4 planned | +| Directory | Tests | Quality Score | Status | Notes | +| ------------------ | ----- | ------------- | ------------- | -------------- | +| tests/auth/ | 15 | 85/100 | ✅ Modernized | Week 1 cleanup | +| tests/api/ | 32 | 78/100 | ⚠️ In Progress | Week 2 | +| tests/e2e/ | 28 | 62/100 | ❌ Legacy | Week 3 planned | +| tests/integration/ | 12 | 45/100 | ❌ Legacy | Week 4 planned | **Legend:** - ✅ Modernized: Quality >80, no critical issues @@ -465,15 +471,26 @@ Incremental changes = lower risk **Solution:** ``` -1. Run *ci to add selective testing -2. Run only affected tests on PR -3. Run full suite nightly -4. 
Parallelize with sharding +1. Configure parallel execution (shard tests across workers) +2. Add selective testing (run only affected tests on PR) +3. Run full suite nightly only +4. Optimize slow tests (remove hard waits, improve selectors) Before: 4 hours sequential After: 15 minutes with sharding + selective testing ``` +**How `*ci` helps:** +- Scaffolds CI configuration with parallel sharding examples +- Provides selective testing script templates +- Documents burn-in and optimization strategies +- But YOU configure workers, test selection, and optimization + +**With Playwright Utils burn-in:** +- Smart selective testing based on git diff +- Volume control (run percentage of affected tests) +- See [Integrate Playwright Utils](/docs/how-to/customization/integrate-playwright-utils.md#burn-in) + ### "We Have Tests But They Always Fail" **Problem:** Tests are so flaky they're ignored. @@ -530,43 +547,6 @@ Don't let perfect be the enemy of good *trace Phase 2 - Gate decision ``` -## Success Stories - -### Example: E-Commerce Platform - -**Starting Point:** -- 200 E2E tests, 30% passing, 15-minute flakiness -- No API tests -- No coverage visibility - -**After 3 Months with TEA:** -- 150 E2E tests (removed duplicates), 95% passing, <1% flakiness -- 300 API tests added (faster, more reliable) -- P0 coverage: 100%, P1 coverage: 85% -- Quality score: 82/100 - -**How:** -- Month 1: Baseline + fix top 20 flaky tests -- Month 2: Add API tests for critical path -- Month 3: Improve quality + expand P1 coverage - -### Example: SaaS Application - -**Starting Point:** -- 50 tests, quality score 48/100 -- Hard waits everywhere -- Tests take 45 minutes - -**After 6 Weeks with TEA:** -- 120 tests, quality score 78/100 -- No hard waits (network-first patterns) -- Tests take 8 minutes (parallel execution) - -**How:** -- Week 1-2: Replace hard waits with network-first -- Week 3-4: Add selective testing + CI parallelization -- Week 5-6: Generate tests for gaps with *automate - ## 
Related Guides **Workflow Guides:** diff --git a/docs/how-to/customization/enable-tea-mcp-enhancements.md b/docs/how-to/customization/enable-tea-mcp-enhancements.md index 0b6bb158..830ee22f 100644 --- a/docs/how-to/customization/enable-tea-mcp-enhancements.md +++ b/docs/how-to/customization/enable-tea-mcp-enhancements.md @@ -18,17 +18,25 @@ MCP (Model Context Protocol) servers enable AI agents to interact with live brow ## When to Use This +**For UI Testing:** - Want exploratory mode in `*test-design` (browser-based UI discovery) -- Want recording mode in `*atdd` (verify selectors with live browser) +- Want recording mode in `*atdd` or `*automate` (verify selectors with live browser) - Want healing mode in `*automate` (fix tests with visual debugging) -- Debugging complex UI issues - Need accurate selectors from actual DOM +- Debugging complex UI interactions + +**For API Testing:** +- Want healing mode in `*automate` (analyze failures with trace data) +- Need to debug test failures (network responses, request/response data, timing) +- Want to inspect trace files (network traffic, errors, race conditions) + +**For Both:** +- Visual debugging (trace viewer shows network + UI) +- Test failure analysis (MCP can run tests and extract errors) +- Understanding complex test failures (network + DOM together) **Don't use if:** -- You're new to TEA (adds complexity) - You don't have MCP servers configured -- Your tests work fine without it -- You're testing APIs only (no UI) ## Prerequisites @@ -71,13 +79,11 @@ MCP (Model Context Protocol) servers enable AI agents to interact with live brow Both servers work together to provide full TEA MCP capabilities. -## Installation +## Setup -### Step 1: Configure MCP Servers in IDE +### 1. Configure MCP Servers -Add this configuration to your IDE's MCP settings. See [TEA Overview](/docs/explanation/features/tea-overview.md#playwright-mcp-enhancements) for IDE-specific configuration locations. 
- -**MCP Configuration:** +Add to your IDE's MCP configuration: ```json { @@ -94,36 +100,20 @@ Add this configuration to your IDE's MCP settings. See [TEA Overview](/docs/expl } ``` -### Step 2: Install Playwright Browsers +See [TEA Overview](/docs/explanation/features/tea-overview.md#playwright-mcp-enhancements) for IDE-specific config locations. -```bash -npx playwright install -``` +### 2. Enable in BMAD -### Step 3: Enable in TEA Config - -Edit `_bmad/bmm/config.yaml`: +Answer "Yes" when prompted during installation, or set in config: ```yaml +# _bmad/bmm/config.yaml tea_use_mcp_enhancements: true ``` -### Step 4: Restart IDE +### 3. Verify MCPs Running -Restart your IDE to load MCP server configuration. - -### Step 5: Verify MCP Servers - -Check MCP servers are running: - -**In Cursor:** -- Open command palette (Cmd/Ctrl + Shift + P) -- Search "MCP" -- Should see "Playwright" and "Playwright Test" servers listed - -**In VS Code:** -- Check Claude extension settings -- Verify MCP servers are enabled +Ensure your MCP servers are running in your IDE. ## How MCP Enhances TEA Workflows @@ -162,16 +152,14 @@ I'll design tests for these interactions." **Without MCP:** - TEA generates selectors from best practices -- May use `getByRole()` that doesn't match actual app -- Selectors might need adjustment +- TEA infers API patterns from documentation -**With MCP:** -TEA verifies selectors with live browser: +**With MCP (Recording Mode):** + +**For UI Tests:** ``` -"Let me verify the login form selectors" - -[TEA navigates to /login] -[Inspects form fields] +[TEA navigates to /login with live browser] +[Inspects actual form fields] "I see: - Email input has label 'Email Address' (not 'Email') @@ -181,47 +169,58 @@ TEA verifies selectors with live browser: I'll use these exact selectors." 
``` -**Generated test:** -```typescript -await page.getByLabel('Email Address').fill('test@example.com'); -await page.getByLabel('Your Password').fill('password'); -await page.getByRole('button', { name: 'Sign In' }).click(); -// Selectors verified against actual DOM +**For API Tests:** +``` +[TEA analyzes trace files from test runs] +[Inspects network requests/responses] + +"I see the API returns: +- POST /api/login → 200 with { token, userId } +- Response time: 150ms +- Required headers: Content-Type, Authorization + +I'll validate these in tests." ``` **Benefits:** -- Accurate selectors from real DOM -- Tests work on first run -- No trial-and-error selector debugging +- UI: Accurate selectors from real DOM +- API: Validated request/response patterns from trace +- Both: Tests work on first run -### *automate: Healing Mode +### *automate: Healing + Recording Modes **Without MCP:** - TEA analyzes test code only - Suggests fixes based on static analysis -- Can't verify fixes work +- Generates tests from documentation/code **With MCP:** -TEA uses visual debugging: + +**Healing Mode (UI + API):** ``` -"This test is failing. 
Let me debug with trace viewer" - [TEA opens trace file] -[Analyzes screenshots] -[Identifies selector changed] +[Analyzes screenshots + network tab] -"The button selector changed from 'Save' to 'Save Changes' -I'll update the test and verify it works" +UI failures: "Button selector changed from 'Save' to 'Save Changes'" +API failures: "Response structure changed, expected {id} got {userId}" -[TEA makes fix] -[Runs test with MCP] -[Confirms test passes] +[TEA makes fixes] +[Verifies with trace analysis] +``` + +**Recording Mode (UI + API):** +``` +UI: [Inspects actual DOM, generates verified selectors] +API: [Analyzes network traffic, validates request/response patterns] + +[Generates tests with verified patterns] +[Tests work on first run] ``` **Benefits:** -- Visual debugging during healing -- Verified fixes (not guesses) -- Faster resolution +- Visual debugging + trace analysis (not just UI) +- Verified selectors (UI) + network patterns (API) +- Tests verified against actual application behavior ## Usage Examples @@ -290,43 +289,6 @@ Fixing selector and verifying... Updated test with corrected selector. 
``` -## Configuration Options - -### MCP Server Arguments - -**Playwright MCP with custom port:** -```json -{ - "mcpServers": { - "playwright": { - "command": "npx", - "args": ["@playwright/mcp@latest", "--port", "3000"] - } - } -} -``` - -**Playwright Test with specific browser:** -```json -{ - "mcpServers": { - "playwright-test": { - "command": "npx", - "args": ["playwright", "run-test-mcp-server", "--browser", "chromium"] - } - } -} -``` - -### Environment Variables - -```bash -# .env -PLAYWRIGHT_BROWSER=chromium # Browser for MCP -PLAYWRIGHT_HEADLESS=false # Show browser during MCP -PLAYWRIGHT_SLOW_MO=100 # Slow down for visibility -``` - ## Troubleshooting ### MCP Servers Not Running @@ -433,107 +395,6 @@ tea_use_mcp_enhancements: true tea_use_mcp_enhancements: false ``` -## Best Practices - -### Use MCP for Complex UIs - -**Simple UI (skip MCP):** -``` -Standard login form with email/password -TEA can infer selectors without MCP -``` - -**Complex UI (use MCP):** -``` -Multi-step wizard with dynamic fields -Conditional UI elements -Third-party components -Custom form widgets -``` - -### Start Without MCP, Enable When Needed - -**Learning path:** -1. Week 1-2: TEA without MCP (learn basics) -2. Week 3: Enable MCP (explore advanced features) -3. Week 4+: Use MCP selectively (when it adds value) - -### Combine with Playwright Utils - -**Powerful combination:** -```yaml -tea_use_playwright_utils: true -tea_use_mcp_enhancements: true -``` - -**Benefits:** -- Playwright Utils provides production-ready utilities -- MCP verifies utilities work with actual app -- Best of both worlds - -### Use for Test Healing - -**Scenario:** Test suite has 50 failing tests after UI update. - -**With MCP:** -``` -*automate (healing mode) - -TEA: -1. Opens trace viewer for each failure -2. Identifies changed selectors -3. Updates tests with corrected selectors -4. Verifies fixes with browser -5. 
Provides updated tests - -Result: 45/50 tests auto-healed -``` - -### Use for New Team Members - -**Onboarding:** -``` -New developer: "I don't know this codebase's UI" - -Senior: "Run *test-design with MCP exploratory mode" - -TEA explores UI and generates documentation: -- UI structure discovered -- Interactive elements mapped -- Test design created automatically -``` - -## Security Considerations - -### MCP Servers Have Browser Access - -**What MCP can do:** -- Navigate to any URL -- Click any element -- Fill any form -- Access browser storage -- Read page content - -**Best practices:** -- Only configure MCP in trusted environments -- Don't use MCP on production sites (use staging/dev) -- Review generated tests before running on production -- Keep MCP config in local files (not committed) - -### Protect Credentials - -**Don't:** -``` -"TEA, login with mypassword123" -# Password visible in chat history -``` - -**Do:** -``` -"TEA, login using credentials from .env" -# Password loaded from environment, not in chat -``` - ## Related Guides **Getting Started:** diff --git a/docs/how-to/customization/integrate-playwright-utils.md b/docs/how-to/customization/integrate-playwright-utils.md index 35450f06..32d6fcff 100644 --- a/docs/how-to/customization/integrate-playwright-utils.md +++ b/docs/how-to/customization/integrate-playwright-utils.md @@ -62,7 +62,7 @@ Edit `_bmad/bmm/config.yaml`: tea_use_playwright_utils: true ``` -**Note:** If you enabled this during installation (`npx bmad-method@alpha install`), it's already set. +**Note:** If you enabled this during BMad installation, it's already set. 
### Step 3: Verify Installation @@ -175,13 +175,16 @@ Reviews against playwright-utils best practices: ### *ci Workflow **Without Playwright Utils:** -Basic CI configuration +- Parallel sharding +- Burn-in loops (basic shell scripts) +- CI triggers (PR, push, schedule) +- Artifact collection **With Playwright Utils:** -Enhanced CI with: -- Burn-in utility for smart test selection -- Selective testing based on git diff -- Test prioritization +Enhanced with smart testing: +- Burn-in utility (git diff-based, volume control) +- Selective testing (skip config/docs/types changes) +- Test prioritization by file changes ## Available Utilities @@ -189,6 +192,18 @@ Enhanced CI with: Typed HTTP client with schema validation. +**Official Docs:** + +**Why Use This?** + +| Vanilla Playwright | api-request Utility | +|-------------------|---------------------| +| Manual `await response.json()` | Automatic JSON parsing | +| `response.status()` + separate body parsing | Returns `{ status, body }` structure | +| No built-in retry | Automatic retry for 5xx errors | +| No schema validation | Single-line `.validateSchema()` | +| Verbose status checking | Clean destructuring | + **Usage:** ```typescript import { test } from '@seontechnologies/playwright-utils/api-request/fixtures'; @@ -206,7 +221,7 @@ test('should create user', async ({ apiRequest }) => { method: 'POST', path: '/api/users', // Note: 'path' not 'url' body: { name: 'Test User', email: 'test@example.com' } // Note: 'body' not 'data' - }).validateSchema(UserSchema); // Note: chained method + }).validateSchema(UserSchema); // Chained method (can await separately if needed) expect(status).toBe(201); expect(body.id).toBeDefined(); @@ -224,6 +239,17 @@ test('should create user', async ({ apiRequest }) => { Authentication session management with token persistence. 
+**Official Docs:** + +**Why Use This?** + +| Vanilla Playwright Auth | auth-session | +|------------------------|--------------| +| Re-authenticate every test run (slow) | Authenticate once, persist to disk | +| Single user per setup | Multi-user support (roles, accounts) | +| No token expiration handling | Automatic token renewal | +| Manual session management | Provider pattern (flexible auth) | + **Usage:** ```typescript import { test } from '@seontechnologies/playwright-utils/auth-session/fixtures'; @@ -262,6 +288,17 @@ async function globalSetup() { Record and replay network traffic (HAR) for offline testing. +**Official Docs:** + +**Why Use This?** + +| Vanilla Playwright HAR | network-recorder | +|------------------------|------------------| +| Manual `routeFromHAR()` configuration | Automatic HAR management with `PW_NET_MODE` | +| Separate record/playback test files | Same test, switch env var | +| No CRUD detection | Stateful mocking (POST/PUT/DELETE work) | +| Manual HAR file paths | Auto-organized by test name | + **Usage:** ```typescript import { test } from '@seontechnologies/playwright-utils/network-recorder/fixtures'; @@ -301,6 +338,17 @@ PW_NET_MODE=playback npx playwright test Spy or stub network requests with automatic JSON parsing. +**Official Docs:** + +**Why Use This?** + +| Vanilla Playwright | interceptNetworkCall | +|-------------------|----------------------| +| Route setup + response waiting (separate steps) | Single declarative call | +| Manual `await response.json()` | Automatic JSON parsing (`responseJson`) | +| Complex filter predicates | Simple glob patterns (`**/api/**`) | +| Verbose syntax | Concise, readable API | + **Usage:** ```typescript import { test } from '@seontechnologies/playwright-utils/fixtures'; @@ -337,6 +385,17 @@ test('should handle API errors', async ({ page, interceptNetworkCall }) => { Async polling for eventual consistency (Cypress-style). 
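To make the contrast concrete, here is a minimal sketch of the hand-rolled polling that `recurse` replaces — a fixed-interval loop with a manual deadline. `pollUntil` is a hypothetical helper written for illustration, not part of playwright-utils:

```typescript
// Illustrative sketch of manual polling — NOT the library API.
// Fixed interval, no backoff, hand-rolled timeout handling.
async function pollUntil<T>(
  fn: () => Promise<T>,
  predicate: (value: T) => boolean,
  { timeoutMs = 10_000, intervalMs = 500 } = {}
): Promise<T> {
  const deadline = Date.now() + timeoutMs;
  for (;;) {
    const value = await fn();
    if (predicate(value)) return value; // condition met, stop polling
    if (Date.now() >= deadline) {
      throw new Error(`Polling timed out after ${timeoutMs}ms`);
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}

// Example: wait for an async job to report 'complete'
let checks = 0;
const fetchStatus = async () => (++checks < 3 ? 'pending' : 'complete');

pollUntil(fetchStatus, (s) => s === 'complete', { intervalMs: 10 })
  .then((status) => console.log(status)); // logs 'complete' on the third check
```

Every test that needs eventual consistency repeats some variant of this loop; `recurse` packages the same idea with configurable backoff and logging.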
+**Official Docs:** + +**Why Use This?** + +| Manual Polling | recurse Utility | +|----------------|-----------------| +| `while` loops with `waitForTimeout` | Smart polling with exponential backoff | +| Hard-coded retry logic | Configurable timeout/interval | +| No logging visibility | Optional logging with custom messages | +| Verbose, error-prone | Clean, readable API | + **Usage:** ```typescript import { test } from '@seontechnologies/playwright-utils/fixtures'; @@ -373,6 +432,17 @@ test('should wait for async job completion', async ({ apiRequest, recurse }) => Structured logging that integrates with Playwright reports. +**Official Docs:** + +**Why Use This?** + +| Console.log / print | log Utility | +|--------------------|-------------| +| Not in test reports | Integrated with Playwright reports | +| No step visualization | `.step()` shows in Playwright UI | +| Manual object formatting | Logs objects seamlessly | +| No structured output | JSON artifacts for debugging | + **Usage:** ```typescript import { log } from '@seontechnologies/playwright-utils'; @@ -396,13 +466,24 @@ test('should login', async ({ page }) => { - Direct import (no fixture needed for basic usage) - Structured logs in test reports - `.step()` shows in Playwright UI -- Supports object logging with `.debug()` +- Logs objects seamlessly (no special handling needed) - Trace test execution ### file-utils Read and validate CSV, PDF, XLSX, ZIP files. 
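For contrast, a minimal sketch of the manual CSV handling that file-utils encapsulates — a naive parser (no quoted fields) plus header/row-count validation. These helper names are illustrative, not the library API:

```typescript
// Illustrative sketch of manual CSV validation — NOT the library API.
// Naive split-based parser; real CSV needs quoted-field handling.
function parseCSV(text: string): { headers: string[]; rows: string[][] } {
  const [headerLine, ...rowLines] = text.trim().split('\n');
  return {
    headers: headerLine.split(','),
    rows: rowLines.map((line) => line.split(',')),
  };
}

function validateCSV(
  csv: { headers: string[]; rows: string[][] },
  expectedHeaders: string[],
  minRows: number
): void {
  for (const header of expectedHeaders) {
    if (!csv.headers.includes(header)) {
      throw new Error(`Missing header: ${header}`);
    }
  }
  if (csv.rows.length < minRows) {
    throw new Error(`Expected at least ${minRows} rows, got ${csv.rows.length}`);
  }
}

const csv = parseCSV('id,name,email\n1,Ann,ann@example.com\n2,Bo,bo@example.com');
validateCSV(csv, ['id', 'name', 'email'], 2); // passes
console.log(csv.rows.length); // 2
```

Add download-event handling, binary formats (XLSX, PDF, ZIP), and cleanup, and the vanilla version grows quickly — which is the gap the utility closes.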
+**Official Docs:** + +**Why Use This?** + +| Vanilla Playwright | file-utils | +|-------------------|------------| +| ~80 lines per CSV flow | ~10 lines end-to-end | +| Manual download event handling | `handleDownload()` encapsulates all | +| External parsing libraries | Auto-parsing (CSV, XLSX, PDF, ZIP) | +| No validation helpers | Built-in validation (headers, row count) | + **Usage:** ```typescript import { handleDownload, readCSV } from '@seontechnologies/playwright-utils/file-utils'; @@ -444,6 +525,17 @@ test('should export valid CSV', async ({ page }) => { Smart test selection with git diff analysis for CI optimization. +**Official Docs:** + +**Why Use This?** + +| Playwright `--only-changed` | burn-in Utility | +|-----------------------------|-----------------| +| Config changes trigger all tests | Smart filtering (skip configs, types, docs) | +| All or nothing | Volume control (run percentage) | +| No customization | Custom dependency analysis | +| Slow CI on minor changes | Fast CI with intelligent selection | + **Usage:** ```typescript // scripts/burn-in-changed.ts @@ -490,6 +582,7 @@ export default config; ``` **Benefits:** +- **Ensure flake-free tests upfront** - Never deal with test flake again - Smart filtering (skip config, types, docs changes) - Volume control (run percentage of affected tests) - Git diff-based test selection @@ -499,6 +592,17 @@ export default config; Automatically detect HTTP 4xx/5xx errors during tests. 
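The core idea can be sketched in a few lines: collect every 4xx/5xx response seen during a test, then fail if any slipped through. In Playwright you would feed this from `page.on('response', ...)`; the sketch below keeps the logic pure for clarity and is illustrative only, not the utility's implementation:

```typescript
// Illustrative sketch of what the monitor automates — NOT the library code.
interface ObservedResponse {
  status: number;
  url: string;
}

// Collect every 4xx/5xx response as a "status url" string.
function collectServerErrors(responses: ObservedResponse[]): string[] {
  return responses
    .filter((r) => r.status >= 400)
    .map((r) => `${r.status} ${r.url}`);
}

const observed: ObservedResponse[] = [
  { status: 200, url: '/api/users' },
  { status: 500, url: '/api/profile' }, // silent backend failure
  { status: 404, url: '/api/avatar' },
];

const errors = collectServerErrors(observed);
console.log(errors); // ['500 /api/profile', '404 /api/avatar']
// In an afterEach hook: if (errors.length) throw new Error(errors.join('\n'));
```

The utility wires this up as an auto-enabled fixture, so a UI flow that "passes" while the backend quietly returns 500s fails loudly instead.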
+**Official Docs:** + +**Why Use This?** + +| Vanilla Playwright | network-error-monitor | +|-------------------|----------------------| +| UI passes, backend 500 ignored | Auto-fails on any 4xx/5xx | +| Manual error checking | Zero boilerplate (auto-enabled) | +| Silent failures slip through | Acts like Sentry for tests | +| No domino effect prevention | Limits cascading failures | + **Usage:** ```typescript import { test } from '@seontechnologies/playwright-utils/network-error-monitor/fixtures'; @@ -540,98 +644,76 @@ test.describe('error handling', **Benefits:** - Auto-enabled (zero setup) -- Catches silent backend failures -- Opt-out with annotations -- Structured error reporting +- Catches silent backend failures (500, 503, 504) +- **Prevents domino effect** (limits cascading failures from one bad endpoint) +- Opt-out with annotations for validation tests +- Structured error reporting (JSON artifacts) ## Fixture Composition -Combine utilities using `mergeTests`: +**Option 1: Use Package's Combined Fixtures (Simplest)** -**Option 1: Use Combined Fixtures (Simplest)** ```typescript // Import all utilities at once import { test } from '@seontechnologies/playwright-utils/fixtures'; import { log } from '@seontechnologies/playwright-utils'; import { expect } from '@playwright/test'; -test('full test', async ({ apiRequest, authToken, interceptNetworkCall }) => { - await log.info('Starting test'); // log is direct import +test('api test', async ({ apiRequest, interceptNetworkCall }) => { + await log.info('Fetching users'); const { status, body } = await apiRequest({ method: 'GET', - path: '/api/data', - headers: { Authorization: `Bearer ${authToken}` } + path: '/api/users' }); - await log.info('Data fetched', body); expect(status).toBe(200); }); ``` -**Note:** `log` is imported directly (not a fixture). `authToken` requires auth-session provider setup. 
+**Option 2: Create Custom Merged Fixtures (Selective)** -**Option 2: Merge Individual Fixtures (Selective)** +**File 1: support/merged-fixtures.ts** ```typescript -import { test as base } from '@playwright/test'; -import { mergeTests } from '@playwright/test'; -import { test as apiRequestFixture } from '@seontechnologies/playwright-utils/api-request/fixtures'; -import { test as recurseFixture } from '@seontechnologies/playwright-utils/recurse/fixtures'; +import { test as base, mergeTests } from '@playwright/test'; +import { test as apiRequest } from '@seontechnologies/playwright-utils/api-request/fixtures'; +import { test as interceptNetworkCall } from '@seontechnologies/playwright-utils/intercept-network-call/fixtures'; +import { test as networkErrorMonitor } from '@seontechnologies/playwright-utils/network-error-monitor/fixtures'; import { log } from '@seontechnologies/playwright-utils'; -// Merge only the fixtures you need +// Merge only what you need export const test = mergeTests( - apiRequestFixture, - recurseFixture + base, + apiRequest, + interceptNetworkCall, + networkErrorMonitor ); -export { expect } from '@playwright/test'; +export const expect = base.expect; +export { log }; +``` -// Use merged utilities in tests -test('selective test', async ({ apiRequest, recurse }) => { - await log.info('Starting test'); // log is direct import, not fixture +**File 2: tests/api/users.spec.ts** +```typescript +import { test, expect, log } from '../support/merged-fixtures'; + +test('api test', async ({ apiRequest, interceptNetworkCall }) => { + await log.info('Fetching users'); const { status, body } = await apiRequest({ method: 'GET', - path: '/api/data' + path: '/api/users' }); - await log.info('Data fetched', body); expect(status).toBe(200); }); ``` -**Note:** `log` is a direct utility (not a fixture), so import it separately. 
+**Contrast:** +- Option 1: All utilities available, zero setup +- Option 2: Pick utilities you need, one central file -**Recommended:** Use Option 1 (combined fixtures) unless you need fine control over which utilities are included. - -## Configuration - -### Environment Variables - -```bash -# .env -PLAYWRIGHT_UTILS_LOG_LEVEL=debug # debug | info | warn | error -PLAYWRIGHT_UTILS_RETRY_ATTEMPTS=3 -PLAYWRIGHT_UTILS_TIMEOUT=30000 -``` - -### Playwright Config - -```typescript -// playwright.config.ts -import { defineConfig } from '@playwright/test'; - -export default defineConfig({ - use: { - // Playwright Utils works with standard Playwright config - baseURL: process.env.BASE_URL || 'http://localhost:3000', - extraHTTPHeaders: { - // Add headers used by utilities - } - } -}); -``` +**See working examples:** ## Troubleshooting @@ -698,47 +780,6 @@ expect(status).toBe(200); ## Migration Guide -### Migrating Existing Tests - -**Before (Vanilla Playwright):** -```typescript -test('should access protected route', async ({ page, request }) => { - // Manual auth token fetch - const response = await request.post('/api/auth/login', { - data: { email: 'test@example.com', password: 'pass' } - }); - const { token } = await response.json(); - - // Manual token storage - await page.goto('/dashboard'); - await page.evaluate((token) => { - localStorage.setItem('authToken', token); - }, token); - - await expect(page).toHaveURL('/dashboard'); -}); -``` - -**After (With Playwright Utils):** -```typescript -import { test } from '@seontechnologies/playwright-utils/auth-session/fixtures'; - -test('should access protected route', async ({ page, authToken }) => { - // authToken automatically fetched and persisted by fixture - await page.goto('/dashboard'); - - // Token is already in place (no manual storage needed) - await expect(page).toHaveURL('/dashboard'); -}); -``` - -**Benefits:** -- Token fetched once, reused across all tests (persisted to disk) -- No manual token storage or 
management -- Automatic token renewal if expired -- Multi-user support via `authOptions.userIdentifier` -- 10 lines → 5 lines (less code) - ## Related Guides **Getting Started:** @@ -755,6 +796,7 @@ test('should access protected route', async ({ page, authToken }) => { ## Understanding the Concepts +- [Testing as Engineering](/docs/explanation/philosophy/testing-as-engineering.md) - **Why Playwright Utils matters** (part of TEA's three-part solution) - [Fixture Architecture](/docs/explanation/tea/fixture-architecture.md) - Pure function → fixture pattern - [Network-First Patterns](/docs/explanation/tea/network-first-patterns.md) - Network utilities explained - [Test Quality Standards](/docs/explanation/tea/test-quality-standards.md) - Patterns PW-Utils enforces diff --git a/docs/how-to/workflows/run-tea-for-enterprise.md b/docs/how-to/enterprise/use-tea-for-enterprise.md similarity index 100% rename from docs/how-to/workflows/run-tea-for-enterprise.md rename to docs/how-to/enterprise/use-tea-for-enterprise.md diff --git a/docs/how-to/workflows/run-atdd.md b/docs/how-to/workflows/run-atdd.md index a4517d57..9b55ecb3 100644 --- a/docs/how-to/workflows/run-atdd.md +++ b/docs/how-to/workflows/run-atdd.md @@ -90,17 +90,15 @@ TEA will ask what test levels to generate: - E2E tests (browser-based, full user journey) - API tests (backend only, faster) - Component tests (UI components in isolation) -- Mix of levels - -**Recommended approach:** Generate API tests first, then E2E tests (see [API Tests First, E2E Later](#api-tests-first-e2e-later) tip below). 
+- Mix of levels (see [API Tests First, E2E Later](#api-tests-first-e2e-later) tip) ### Component Testing by Framework TEA generates component tests using framework-appropriate tools: -| Your Framework | Component Testing Tool | -|----------------|----------------------| -| **Cypress** | Cypress Component Testing (*.cy.tsx) | +| Your Framework | Component Testing Tool | +| -------------- | ------------------------------------------- | +| **Cypress** | Cypress Component Testing (*.cy.tsx) | | **Playwright** | Vitest + React Testing Library (*.test.tsx) | **Example response:** @@ -190,7 +188,7 @@ test.describe('Profile API', () => { const { status, body } = await apiRequest({ method: 'PATCH', path: '/api/profile', - body: { // 'body' not 'data' + body: { name: 'Updated Name', email: 'updated@example.com' } @@ -205,7 +203,7 @@ test.describe('Profile API', () => { const { status, body } = await apiRequest({ method: 'PATCH', path: '/api/profile', - body: { email: 'invalid-email' } // 'body' not 'data' + body: { email: 'invalid-email' } }); expect(status).toBe(400); @@ -226,52 +224,28 @@ test.describe('Profile API', () => { ```typescript import { test, expect } from '@playwright/test'; -test.describe('Profile Page', () => { - test.beforeEach(async ({ page }) => { - // Login first - await page.goto('/login'); - await page.getByLabel('Email').fill('test@example.com'); - await page.getByLabel('Password').fill('password123'); - await page.getByRole('button', { name: 'Sign in' }).click(); - }); +test('should edit and save profile', async ({ page }) => { + // Login first + await page.goto('/login'); + await page.getByLabel('Email').fill('test@example.com'); + await page.getByLabel('Password').fill('password123'); + await page.getByRole('button', { name: 'Sign in' }).click(); - test('should display current profile information', async ({ page }) => { - await page.goto('/profile'); + // Navigate to profile + await page.goto('/profile'); - await 
expect(page.getByText('test@example.com')).toBeVisible(); - await expect(page.getByText('Test User')).toBeVisible(); - }); + // Edit profile + await page.getByRole('button', { name: 'Edit Profile' }).click(); + await page.getByLabel('Name').fill('Updated Name'); + await page.getByRole('button', { name: 'Save' }).click(); - test('should edit and save profile', async ({ page }) => { - await page.goto('/profile'); - - // Click edit - await page.getByRole('button', { name: 'Edit Profile' }).click(); - - // Modify fields - await page.getByLabel('Name').fill('Updated Name'); - await page.getByLabel('Email').fill('updated@example.com'); - - // Save - await page.getByRole('button', { name: 'Save' }).click(); - - // Verify success - await expect(page.getByText('Profile updated successfully')).toBeVisible(); - await expect(page.getByText('Updated Name')).toBeVisible(); - }); - - test('should show validation error for invalid email', async ({ page }) => { - await page.goto('/profile'); - await page.getByRole('button', { name: 'Edit Profile' }).click(); - - await page.getByLabel('Email').fill('invalid-email'); - await page.getByRole('button', { name: 'Save' }).click(); - - await expect(page.getByText('Invalid email format')).toBeVisible(); - }); + // Verify success + await expect(page.getByText('Profile updated')).toBeVisible(); }); ``` +TEA generates additional E2E tests for display, validation errors, etc. based on acceptance criteria. + #### Implementation Checklist TEA also provides an implementation checklist: @@ -400,18 +374,13 @@ Run `*test-design` before `*atdd` for better results: *atdd # Generate tests based on design ``` -### Recording Mode Note +### MCP Enhancements (Optional) -**Recording mode is NOT typically used with ATDD** because ATDD generates tests for features that don't exist yet (no UI to record against). +If you have MCP servers configured (`tea_use_mcp_enhancements: true`), TEA can use them during `*atdd`. 
-If you have a skeleton UI or are refining existing tests, use `*automate` with recording mode instead. See [How to Run Automate](/docs/how-to/workflows/run-automate.md). +**Note:** ATDD is for features that don't exist yet, so recording mode (verify selectors with live UI) only applies if you have skeleton/mockup UI already implemented. For typical ATDD (no UI yet), TEA infers selectors from best practices. -**Recording mode is only applicable for ATDD in the rare case where:** -- You have skeleton/mockup UI already implemented -- You want to verify selector patterns before full implementation -- You're doing "UI-first" development (unusual for TDD) - -For most ATDD workflows, **skip recording mode** - TEA will infer selectors from best practices. +See [Enable MCP Enhancements](/docs/how-to/customization/enable-tea-mcp-enhancements.md) for setup. ### Focus on P0/P1 Scenarios @@ -444,43 +413,6 @@ TEA generates deterministic tests by default: Don't modify these patterns - they prevent flakiness! -## Common Issues - -### Tests Don't Fail Initially - -**Problem:** Tests pass on first run but feature doesn't exist. - -**Cause:** Tests are hitting wrong endpoints or checking wrong things. - -**Solution:** Review generated tests - ensure they match your feature requirements. - -### Too Many Tests Generated - -**Problem:** TEA generated 50 tests for a simple feature. - -**Cause:** Didn't specify priorities or scope. - -**Solution:** Be specific: -``` -Generate ONLY: -- P0 scenarios (2-3 tests) -- Happy path for API -- One E2E test for full flow -``` - -### Selectors Are Fragile - -**Problem:** E2E tests use brittle selectors (CSS, XPath). 
- -**Solution:** Use MCP recording mode or specify accessible selectors: -``` -Use accessible locators: -- getByRole() -- getByLabel() -- getByText() -Avoid CSS selectors -``` - ## Related Guides - [How to Run Test Design](/docs/how-to/workflows/run-test-design.md) - Plan before generating @@ -489,6 +421,7 @@ Avoid CSS selectors ## Understanding the Concepts +- [Testing as Engineering](/docs/explanation/philosophy/testing-as-engineering.md) - **Why TEA generates quality tests** (foundational) - [Risk-Based Testing](/docs/explanation/tea/risk-based-testing.md) - Why P0 vs P3 matters - [Test Quality Standards](/docs/explanation/tea/test-quality-standards.md) - What makes tests good - [Network-First Patterns](/docs/explanation/tea/network-first-patterns.md) - Avoiding flakiness diff --git a/docs/how-to/workflows/run-automate.md b/docs/how-to/workflows/run-automate.md index 444a6182..8c5b9e78 100644 --- a/docs/how-to/workflows/run-automate.md +++ b/docs/how-to/workflows/run-automate.md @@ -221,7 +221,7 @@ testWithAuth.describe('Profile API', () => { const { status, body } = await apiRequest({ method: 'PATCH', path: '/api/profile', - body: { name: 'Updated Name', bio: 'Test bio' }, // 'body' not 'data' + body: { name: 'Updated Name', bio: 'Test bio' }, headers: { Authorization: `Bearer ${authToken}` } }).validateSchema(ProfileSchema); // Chained validation @@ -233,7 +233,7 @@ testWithAuth.describe('Profile API', () => { const { status, body } = await apiRequest({ method: 'PATCH', path: '/api/profile', - body: { email: 'invalid-email' }, // 'body' not 'data' + body: { email: 'invalid-email' }, headers: { Authorization: `Bearer ${authToken}` } }); @@ -250,58 +250,31 @@ testWithAuth.describe('Profile API', () => { - Automatic retry for 5xx errors - Less boilerplate (no manual `await response.json()` everywhere) -#### E2E Tests (`tests/e2e/profile-workflow.spec.ts`): +#### E2E Tests (`tests/e2e/profile.spec.ts`): ```typescript import { test, expect } from 
'@playwright/test'; -test.describe('Profile Management Workflow', () => { - test.beforeEach(async ({ page }) => { - // Login - await page.goto('/login'); - await page.getByLabel('Email').fill('test@example.com'); - await page.getByLabel('Password').fill('password123'); - await page.getByRole('button', { name: 'Sign in' }).click(); +test('should edit profile', async ({ page }) => { + // Login + await page.goto('/login'); + await page.getByLabel('Email').fill('test@example.com'); + await page.getByLabel('Password').fill('password123'); + await page.getByRole('button', { name: 'Sign in' }).click(); - // Wait for login to complete - await expect(page).toHaveURL(/\/dashboard/); - }); + // Edit profile + await page.goto('/profile'); + await page.getByRole('button', { name: 'Edit Profile' }).click(); + await page.getByLabel('Name').fill('New Name'); + await page.getByRole('button', { name: 'Save' }).click(); - test('should view and edit profile', async ({ page }) => { - // Navigate to profile - await page.goto('/profile'); - - // Verify profile displays - await expect(page.getByText('test@example.com')).toBeVisible(); - - // Edit profile - await page.getByRole('button', { name: 'Edit Profile' }).click(); - await page.getByLabel('Name').fill('New Name'); - await page.getByRole('button', { name: 'Save' }).click(); - - // Verify success - await expect(page.getByText('Profile updated')).toBeVisible(); - await expect(page.getByText('New Name')).toBeVisible(); - }); - - test('should show validation errors', async ({ page }) => { - await page.goto('/profile'); - await page.getByRole('button', { name: 'Edit Profile' }).click(); - - // Enter invalid email - await page.getByLabel('Email').fill('invalid'); - await page.getByRole('button', { name: 'Save' }).click(); - - // Verify error shown - await expect(page.getByText('Invalid email format')).toBeVisible(); - - // Profile should not be updated - await page.reload(); - await expect(page.getByText('test@example.com')).toBeVisible(); 
- }); + // Verify success + await expect(page.getByText('Profile updated')).toBeVisible(); }); ``` +TEA generates additional tests for validation, edge cases, etc. based on priorities. + #### Fixtures (`tests/support/fixtures/profile.ts`): **Vanilla Playwright:** @@ -504,9 +477,9 @@ Compare against: TEA supports component testing using framework-appropriate tools: -| Your Framework | Component Testing Tool | Tests Location | -|----------------|----------------------|----------------| -| **Cypress** | Cypress Component Testing | `tests/component/` | +| Your Framework | Component Testing Tool | Tests Location | +| -------------- | ------------------------------ | ----------------------------------------- | +| **Cypress** | Cypress Component Testing | `tests/component/` | | **Playwright** | Vitest + React Testing Library | `tests/component/` or `src/**/*.test.tsx` | **Note:** Component tests use separate tooling from E2E tests: @@ -568,25 +541,14 @@ Don't duplicate that coverage TEA will analyze existing tests and only generate new scenarios. 
-### Use Healing Mode (Optional) +### MCP Enhancements (Optional) -If MCP enhancements enabled (`tea_use_mcp_enhancements: true`): +If you have MCP servers configured (`tea_use_mcp_enhancements: true`), TEA can use them during `*automate` for: -When prompted, select "healing mode" to: -- Fix broken selectors in existing tests -- Update outdated assertions -- Enhance with trace viewer insights +- **Healing mode:** Fix broken selectors, update assertions, enhance with trace analysis +- **Recording mode:** Verify selectors with live browser, capture network requests -See [Enable MCP Enhancements](/docs/how-to/customization/enable-tea-mcp-enhancements.md) - -### Use Recording Mode (Optional) - -If MCP enhancements enabled: - -When prompted, select "recording mode" to: -- Verify selectors against live browser -- Generate accurate locators from actual DOM -- Capture network requests +No prompts - TEA uses MCPs automatically when available. See [Enable MCP Enhancements](/docs/how-to/customization/enable-tea-mcp-enhancements.md) for setup. ### Generate Tests Incrementally @@ -662,21 +624,11 @@ We already have these tests: Generate tests for scenarios NOT covered by those files ``` -### Selectors Are Fragile +### MCP Enhancements for Better Selectors -**Problem:** E2E tests use brittle CSS selectors. +If you have MCP servers configured, TEA verifies selectors against live browser. Otherwise, TEA generates accessible selectors (`getByRole`, `getByLabel`) by default. -**Solution:** Request accessible selectors: -``` -Use accessible locators: -- getByRole() -- getByLabel() -- getByText() - -Avoid CSS selectors like .class-name or #id -``` - -Or use MCP recording mode for verified selectors. +Setup: Answer "Yes" to MCPs in BMad installer + configure MCP servers in your IDE. See [Enable MCP Enhancements](/docs/how-to/customization/enable-tea-mcp-enhancements.md). ## Related Guides @@ -686,6 +638,7 @@ Or use MCP recording mode for verified selectors. 
## Understanding the Concepts +- [Testing as Engineering](/docs/explanation/philosophy/testing-as-engineering.md) - **Why TEA generates quality tests** (foundational) - [Risk-Based Testing](/docs/explanation/tea/risk-based-testing.md) - Why prioritize P0 over P3 - [Test Quality Standards](/docs/explanation/tea/test-quality-standards.md) - What makes tests good - [Fixture Architecture](/docs/explanation/tea/fixture-architecture.md) - Reusable test patterns diff --git a/docs/how-to/workflows/run-nfr-assess.md b/docs/how-to/workflows/run-nfr-assess.md index 9f3e24a6..82e0e03f 100644 --- a/docs/how-to/workflows/run-nfr-assess.md +++ b/docs/how-to/workflows/run-nfr-assess.md @@ -662,7 +662,7 @@ Assess categories incrementally, not all at once. - [How to Run Trace](/docs/how-to/workflows/run-trace.md) - Gate decision complements NFR - [How to Run Test Review](/docs/how-to/workflows/run-test-review.md) - Quality complements NFR -- [Run TEA for Enterprise](/docs/how-to/workflows/run-tea-for-enterprise.md) - Enterprise workflow +- [Run TEA for Enterprise](/docs/how-to/enterprise/use-tea-for-enterprise.md) - Enterprise workflow ## Understanding the Concepts diff --git a/docs/how-to/workflows/run-trace.md b/docs/how-to/workflows/run-trace.md index f35403da..bd352f39 100644 --- a/docs/how-to/workflows/run-trace.md +++ b/docs/how-to/workflows/run-trace.md @@ -62,12 +62,12 @@ TEA will ask where requirements are defined. 
**Options:** -| Source | Example | Best For | -|--------|---------|----------| -| **Story file** | `story-profile-management.md` | Single story coverage | -| **Test design** | `test-design-epic-1.md` | Epic coverage | -| **PRD** | `PRD.md` | System-level coverage | -| **Multiple** | All of the above | Comprehensive analysis | +| Source | Example | Best For | +| --------------- | ----------------------------- | ---------------------- | +| **Story file** | `story-profile-management.md` | Single story coverage | +| **Test design** | `test-design-epic-1.md` | Epic coverage | +| **PRD** | `PRD.md` | System-level coverage | +| **Multiple** | All of the above | Comprehensive analysis | **Example Response:** ``` @@ -113,21 +113,21 @@ TEA generates a comprehensive traceability matrix. ## Coverage Summary -| Metric | Count | Percentage | -|--------|-------|------------| -| **Total Requirements** | 15 | 100% | -| **Full Coverage** | 11 | 73% | -| **Partial Coverage** | 3 | 20% | -| **No Coverage** | 1 | 7% | +| Metric | Count | Percentage | +| ---------------------- | ----- | ---------- | +| **Total Requirements** | 15 | 100% | +| **Full Coverage** | 11 | 73% | +| **Partial Coverage** | 3 | 20% | +| **No Coverage** | 1 | 7% | ### By Priority -| Priority | Total | Covered | Percentage | -|----------|-------|---------|------------| -| **P0** | 5 | 5 | 100% ✅ | -| **P1** | 6 | 5 | 83% ⚠️ | -| **P2** | 3 | 1 | 33% ⚠️ | -| **P3** | 1 | 0 | 0% ✅ (acceptable) | +| Priority | Total | Covered | Percentage | +| -------- | ----- | ------- | ----------------- | +| **P0** | 5 | 5 | 100% ✅ | +| **P1** | 6 | 5 | 83% ⚠️ | +| **P2** | 3 | 1 | 33% ⚠️ | +| **P3** | 1 | 0 | 0% ✅ (acceptable) | --- @@ -223,10 +223,10 @@ TEA generates a comprehensive traceability matrix. 
### Critical Gaps (Must Fix Before Release) -| Gap | Requirement | Priority | Risk | Recommendation | -|-----|-------------|----------|------|----------------| -| 1 | Bio field not tested | P0 | High | Add E2E + API tests | -| 2 | Avatar upload not tested | P0 | High | Add E2E + API tests | +| Gap | Requirement | Priority | Risk | Recommendation | +| --- | ------------------------ | -------- | ---- | ------------------- | +| 1 | Bio field not tested | P0 | High | Add E2E + API tests | +| 2 | Avatar upload not tested | P0 | High | Add E2E + API tests | **Estimated Effort:** 3 hours **Owner:** QA team @@ -234,9 +234,9 @@ TEA generates a comprehensive traceability matrix. ### Non-Critical Gaps (Can Defer) -| Gap | Requirement | Priority | Risk | Recommendation | -|-----|-------------|----------|------|----------------| -| 3 | Profile export not tested | P2 | Low | Add in v1.3 release | +| Gap | Requirement | Priority | Risk | Recommendation | +| --- | ------------------------- | -------- | ---- | ------------------- | +| 3 | Profile export not tested | P2 | Low | Add in v1.3 release | **Estimated Effort:** 2 hours **Owner:** QA team @@ -297,7 +297,7 @@ test('should update bio via API', async ({ apiRequest, authToken }) => { const { status, body } = await apiRequest({ method: 'PATCH', path: '/api/profile', - body: { bio: 'Updated bio' }, // 'body' not 'data' + body: { bio: 'Updated bio' }, headers: { Authorization: `Bearer ${authToken}` } }); @@ -442,12 +442,12 @@ TEA makes evidence-based gate decision and writes to separate file. 
## Coverage Analysis -| Priority | Required Coverage | Actual Coverage | Status | -|----------|------------------|-----------------|--------| -| **P0** | 100% | 100% | ✅ PASS | -| **P1** | 90% | 100% | ✅ PASS | -| **P2** | 50% | 33% | ⚠️ Below (acceptable) | -| **P3** | 20% | 0% | ✅ PASS (low priority) | +| Priority | Required Coverage | Actual Coverage | Status | +| -------- | ----------------- | --------------- | --------------------- | +| **P0** | 100% | 100% | ✅ PASS | +| **P1** | 90% | 100% | ✅ PASS | +| **P2** | 50% | 33% | ⚠️ Below (acceptable) | +| **P3** | 20% | 0% | ✅ PASS (low priority) | **Rationale:** - All critical path (P0) requirements fully tested @@ -456,11 +456,11 @@ TEA makes evidence-based gate decision and writes to separate file. ## Quality Metrics -| Metric | Threshold | Actual | Status | -|--------|-----------|--------|--------| -| P0/P1 Coverage | >95% | 100% | ✅ | -| Test Quality Score | >80 | 84 | ✅ | -| NFR Status | PASS | PASS | ✅ | +| Metric | Threshold | Actual | Status | +| ------------------ | --------- | ------ | ------ | +| P0/P1 Coverage | >95% | 100% | ✅ | +| Test Quality Score | >80 | 84 | ✅ | +| NFR Status | PASS | PASS | ✅ | ## Risks and Mitigations @@ -501,14 +501,14 @@ TEA makes evidence-based gate decision and writes to separate file. 
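The PASS/CONCERNS/FAIL outcome above follows mechanically from the coverage numbers. A minimal sketch of the deterministic rules, with thresholds mirroring the decision table (WAIVED is omitted because it requires explicit human approval rather than computation):

```typescript
type GateDecision = 'PASS' | 'CONCERNS' | 'FAIL';

// Coverage values are percentages (0-100).
function gateDecision(p0: number, p1: number, overall: number): GateDecision {
  if (p0 === 100 && p1 >= 90 && overall >= 80) return 'PASS';
  if (p0 === 100 && p1 >= 80 && overall >= 80) return 'CONCERNS';
  return 'FAIL'; // any P0 gap, P1 below 80%, or overall below 80%
}
```

For the release above (P0 at 100%, P1 at 100%, overall above threshold) this returns PASS.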
TEA uses deterministic rules when decision_mode = "deterministic": -| P0 Coverage | P1 Coverage | Overall Coverage | Decision | -|-------------|-------------|------------------|----------| -| 100% | ≥90% | ≥80% | **PASS** ✅ | -| 100% | 80-89% | ≥80% | **CONCERNS** ⚠️ | -| <100% | Any | Any | **FAIL** ❌ | -| Any | <80% | Any | **FAIL** ❌ | -| Any | Any | <80% | **FAIL** ❌ | -| Any | Any | Any | **WAIVED** ⏭️ (with approval) | +| P0 Coverage | P1 Coverage | Overall Coverage | Decision | +| ----------- | ----------- | ---------------- | ---------------------------- | +| 100% | ≥90% | ≥80% | **PASS** ✅ | +| 100% | 80-89% | ≥80% | **CONCERNS** ⚠️ | +| <100% | Any | Any | **FAIL** ❌ | +| Any | <80% | Any | **FAIL** ❌ | +| Any | Any | <80% | **FAIL** ❌ | +| Any | Any | Any | **WAIVED** ⏭️ (with approval) | **Detailed Rules:** - **PASS:** P0=100%, P1≥90%, Overall≥80% @@ -683,12 +683,12 @@ Track improvement over time: ```markdown ## Coverage Trend -| Date | Epic | P0/P1 Coverage | Quality Score | Status | -|------|------|----------------|---------------|--------| -| 2026-01-01 | Baseline | 45% | - | Starting point | -| 2026-01-08 | Epic 1 | 78% | 72 | Improving | -| 2026-01-15 | Epic 2 | 92% | 84 | Near target | -| 2026-01-20 | Epic 3 | 100% | 88 | Ready! | +| Date | Epic | P0/P1 Coverage | Quality Score | Status | +| ---------- | -------- | -------------- | ------------- | -------------- | +| 2026-01-01 | Baseline | 45% | - | Starting point | +| 2026-01-08 | Epic 1 | 78% | 72 | Improving | +| 2026-01-15 | Epic 2 | 92% | 84 | Near target | +| 2026-01-20 | Epic 3 | 100% | 88 | Ready! 
| ``` ### Set Coverage Targets by Priority diff --git a/docs/how-to/workflows/setup-ci.md b/docs/how-to/workflows/setup-ci.md index fc68a08b..7f64ae56 100644 --- a/docs/how-to/workflows/setup-ci.md +++ b/docs/how-to/workflows/setup-ci.md @@ -290,137 +290,84 @@ burn-in: - if: $CI_PIPELINE_SOURCE == "merge_request_event" ``` -#### Helper Scripts +#### Burn-In Testing -TEA generates shell scripts for CI and local development. - -**Test Scripts** (`package.json`): +**Option 1: Classic Burn-In (Playwright Built-In)** ```json { "scripts": { "test": "playwright test", - "test:headed": "playwright test --headed", - "test:debug": "playwright test --debug", - "test:smoke": "playwright test --grep @smoke", - "test:critical": "playwright test --grep @critical", - "test:changed": "./scripts/test-changed.sh", - "test:burn-in": "./scripts/burn-in.sh", - "test:report": "playwright show-report", - "ci:local": "./scripts/ci-local.sh" + "test:burn-in": "playwright test --repeat-each=5 --retries=0" } } ``` -**Selective Testing Script** (`scripts/test-changed.sh`): +**How it works:** +- Runs every test 5 times +- Fails if any iteration fails +- Detects flakiness before merge -```bash -#!/bin/bash -# Run only tests for changed files +**Use when:** Small test suite, want to run everything multiple times -CHANGED_FILES=$(git diff --name-only origin/main...HEAD) +--- -if echo "$CHANGED_FILES" | grep -q "src/.*\.ts$"; then - echo "Running affected tests..." 
- npm run test:e2e -- --grep="$(echo $CHANGED_FILES | sed 's/src\///g' | sed 's/\.ts//g')" -else - echo "No test-affecting changes detected" -fi -``` +**Option 2: Smart Burn-In (Playwright Utils)** -**Burn-In Script** (`scripts/burn-in.sh`): - -```bash -#!/bin/bash -# Run tests multiple times to detect flakiness - -ITERATIONS=${BURN_IN_ITERATIONS:-5} -FAILURES=0 - -for i in $(seq 1 $ITERATIONS); do - echo "=== Burn-in iteration $i/$ITERATIONS ===" - - if npm test; then - echo "✓ Iteration $i passed" - else - echo "✗ Iteration $i failed" - FAILURES=$((FAILURES + 1)) - fi -done - -if [ $FAILURES -gt 0 ]; then - echo "❌ Tests failed in $FAILURES/$ITERATIONS iterations" - exit 1 -fi - -echo "✅ All $ITERATIONS iterations passed" -``` - -**Local CI Mirror Script** (`scripts/ci-local.sh`): - -```bash -#!/bin/bash -# Mirror CI execution locally for debugging - -echo "🔍 Running CI pipeline locally..." - -# Lint -npm run lint || exit 1 - -# Tests -npm run test || exit 1 - -# Burn-in (reduced iterations for local) -for i in {1..3}; do - echo "🔥 Burn-in $i/3" - npm test || exit 1 -done - -echo "✅ Local CI pipeline passed" -``` - -**Make scripts executable:** -```bash -chmod +x scripts/*.sh -``` - -**Alternative: Smart Burn-In with Playwright Utils** - -If `tea_use_playwright_utils: true`, you can use git diff-based burn-in: +If `tea_use_playwright_utils: true`: +**scripts/burn-in-changed.ts:** ```typescript -// scripts/burn-in-changed.ts import { runBurnIn } from '@seontechnologies/playwright-utils/burn-in'; -async function main() { - await runBurnIn({ - configPath: 'playwright.burn-in.config.ts', - baseBranch: 'main' - }); -} - -main().catch(console.error); +await runBurnIn({ + configPath: 'playwright.burn-in.config.ts', + baseBranch: 'main' +}); ``` +**playwright.burn-in.config.ts:** ```typescript -// playwright.burn-in.config.ts import type { BurnInConfig } from '@seontechnologies/playwright-utils/burn-in'; const config: BurnInConfig = { skipBurnInPatterns: ['**/config/**', 
'**/*.md', '**/*types*'],
-  burnInTestPercentage: 0.3, // Run 30% of affected tests
-  burnIn: { repeatEach: 5, retries: 1 }
+  burnInTestPercentage: 0.3,
+  burnIn: { repeatEach: 5, retries: 0 }
};

export default config;
```

-**Benefits over shell script:**
-- Only runs tests affected by git changes (faster)
-- Smart filtering (skips config, docs, types)
-- Volume control (run percentage, not all tests)
+**package.json:**
+```json
+{
+  "scripts": {
+    "test:burn-in": "tsx scripts/burn-in-changed.ts"
+  }
+}
+```

-**Example:** Changed 1 file → runs 3 affected tests 5 times = 15 runs (not 500 tests × 5 = 2500 runs)
+**How it works:**
+- Git diff analysis (only affected tests)
+- Smart filtering (skip configs, docs, types)
+- Volume control (run 30% of affected tests)
+- Each test runs 5 times
+
+**Use when:** Large test suite, want intelligent selection
+
+---
+
+**Comparison:**
+
+| Feature        | Classic Burn-In                    | Smart Burn-In (PW-Utils)            |
+| -------------- | ---------------------------------- | ----------------------------------- |
+| Changed 1 file | Runs all 500 tests × 5 = 2500 runs | Runs 3 affected tests × 5 = 15 runs |
+| Config change  | Runs all tests                     | Skips (no tests affected)           |
+| Type change    | Runs all tests                     | Skips (no runtime impact)           |
+| Setup          | Zero config                        | Requires config file                |
+
+**Recommendation:** Start with classic (simple), upgrade to smart (faster) as your suite grows.

### 6. Configure Secrets

diff --git a/docs/reference/tea/commands.md b/docs/reference/tea/commands.md
index 6e86a6c4..ed1ad8c2 100644
--- a/docs/reference/tea/commands.md
+++ b/docs/reference/tea/commands.md
@@ -1,1372 +1,253 @@
---
title: "TEA Command Reference"
-description: Complete reference for all TEA (Test Architect) workflows and commands
+description: Quick reference for all 8 TEA workflows - inputs, outputs, and links to detailed guides
---

# TEA Command Reference

-Complete reference for all 8 TEA (Test Architect) workflows. Use this for quick lookup of commands, parameters, and outputs.
+Quick reference for all 8 TEA (Test Architect) workflows. For detailed step-by-step guides, see the how-to documentation. ## Quick Index - [*framework](#framework) - Scaffold test framework - [*ci](#ci) - Setup CI/CD pipeline - [*test-design](#test-design) - Risk-based test planning -- [*atdd](#atdd) - Acceptance TDD (failing tests first) -- [*automate](#automate) - Test automation expansion -- [*test-review](#test-review) - Test quality audit -- [*nfr-assess](#nfr-assess) - Non-functional requirements assessment -- [*trace](#trace) - Coverage traceability and gate decisions - -**Note:** `*workflow-status` is a shared BMM workflow available to all agents, not TEA-specific. See [Core Workflows](/docs/reference/workflows/core-workflows.md). +- [*atdd](#atdd) - Acceptance TDD +- [*automate](#automate) - Test automation +- [*test-review](#test-review) - Quality audit +- [*nfr-assess](#nfr-assess) - NFR assessment +- [*trace](#trace) - Coverage traceability --- ## *framework -Scaffold production-ready test framework (Playwright or Cypress). +**Purpose:** Scaffold production-ready test framework (Playwright or Cypress) -### Purpose +**Phase:** Phase 3 (Solutioning) -Initialize test infrastructure with best practices, environment configuration, and sample tests. +**Frequency:** Once per project -### Phase +**Key Inputs:** +- Tech stack, test framework choice, testing scope -Phase 3 (Solutioning) - Run once per project after architecture is complete. +**Key Outputs:** +- `tests/` directory with `support/fixtures/` and `support/helpers/` +- `playwright.config.ts` or `cypress.config.ts` +- `.env.example`, `.nvmrc` +- Sample tests with best practices -### Frequency - -Once per project (one-time setup). 
- -### When to Use - -- No existing test framework in your project -- Current test setup isn't production-ready -- Starting new project needing test infrastructure -- Want to adopt Playwright or Cypress with proper structure - -### Inputs - -TEA will ask: - -| Question | Example Answer | Notes | -|----------|----------------|-------| -| Tech stack | "React web application" | Helps determine test approach | -| Test framework | "Playwright" | Playwright or Cypress | -| Testing scope | "E2E and API testing" | E2E, integration, unit, or mix | -| CI/CD platform | "GitHub Actions" | For future `*ci` setup | - -### Outputs - -**Generated Files:** -``` -tests/ -├── e2e/ # E2E test directory -│ └── example.spec.ts # Sample E2E test -├── api/ # API test directory (if requested) -│ └── example.spec.ts # Sample API test -├── support/ # Support directory -│ ├── fixtures/ # Shared fixtures -│ │ └── index.ts # Fixture composition -│ └── helpers/ # Pure utility functions -│ └── api-request.ts # Example helper -├── playwright.config.ts # Framework configuration -└── README.md # Testing documentation - -.env.example # Environment variable template -.nvmrc # Node version specification -``` - -**Configuration Includes:** -- Multiple environments (dev, staging, prod) -- Timeout standards -- Retry logic -- Artifact collection (screenshots, videos, traces) -- Reporter configuration - -**Sample Tests Include:** -- Network-first patterns (no hard waits) -- Proper fixture usage -- Explicit assertions -- Deterministic test structure - -**Framework-Specific Examples:** - -**Vanilla Playwright:** -```typescript -// tests/e2e/example.spec.ts -import { test, expect } from '@playwright/test'; - -test('example test', async ({ page, request }) => { - // Manual API call - const response = await request.get('/api/data'); - const data = await response.json(); - - await page.goto('/'); - await expect(page.locator('h1')).toContainText(data.title); -}); -``` - -**With Playwright Utils:** -```typescript 
-// tests/e2e/example.spec.ts -import { test } from '@seontechnologies/playwright-utils/api-request/fixtures'; -import { expect } from '@playwright/test'; - -test('example test', async ({ page, apiRequest }) => { - // Utility handles status/body separation - const { status, body } = await apiRequest({ - method: 'GET', - path: '/api/data' - }); - - expect(status).toBe(200); - await page.goto('/'); - await expect(page.locator('h1')).toContainText(body.title); -}); -``` - -### Optional Integrations - -**Playwright Utils:** -If `tea_use_playwright_utils: true` in config: -- Includes `@seontechnologies/playwright-utils` in scaffold -- Adds fixture composition examples -- Provides utility import examples - -See [Integrate Playwright Utils](/docs/how-to/customization/integrate-playwright-utils.md) - -### Related Commands - -- `*ci` - Run after framework to setup CI/CD -- `*test-design` - Run concurrently or after for test planning - -### How-To Guide - -[How to Set Up a Test Framework](/docs/how-to/workflows/setup-test-framework.md) +**How-To Guide:** [Setup Test Framework](/docs/how-to/workflows/setup-test-framework.md) --- ## *ci -Setup CI/CD quality pipeline with selective testing and burn-in loops. +**Purpose:** Setup CI/CD pipeline with selective testing and burn-in -### Purpose +**Phase:** Phase 3 (Solutioning) -Scaffold production-ready CI/CD configuration for automated test execution. +**Frequency:** Once per project -### Phase +**Key Inputs:** +- CI platform (GitHub Actions, GitLab CI, etc.) +- Sharding strategy, burn-in preferences -Phase 3 (Solutioning) - Run once per project after framework setup. +**Key Outputs:** +- Platform-specific CI workflow (`.github/workflows/test.yml`, etc.) +- Parallel execution configuration +- Burn-in loops for flakiness detection +- Secrets checklist -### Frequency - -Once per project (one-time setup). 
- -### When to Use - -- Need to automate test execution in CI/CD -- Want selective testing (only run affected tests) -- Need burn-in loops for flakiness detection -- Setting up new CI/CD pipeline - -### Inputs - -TEA will ask: - -| Question | Example Answer | Notes | -|----------|----------------|-------| -| CI/CD platform | "GitHub Actions" | GitHub Actions, GitLab CI, Circle CI, Jenkins | -| Repository structure | "Monorepo with multiple apps" | Affects test selection strategy | -| Sharding strategy | "Yes, run tests in parallel" | Shard across multiple workers | -| Burn-in loops | "Yes, for flakiness detection" | Run tests multiple times | - -### Outputs - -**Platform-Specific Workflow:** - -**GitHub Actions** (`.github/workflows/test.yml`): -```yaml -name: Test Suite - -on: - pull_request: - push: - branches: [main, develop] - -jobs: - test: - runs-on: ubuntu-latest - strategy: - matrix: - shard: [1, 2, 3, 4] - steps: - - uses: actions/checkout@v4 - - uses: actions/setup-node@v4 - with: - node-version-file: '.nvmrc' - - - name: Install dependencies - run: npm ci - - - name: Run tests - run: npx playwright test --shard=${{ matrix.shard }}/4 - - - name: Upload artifacts - if: always() - uses: actions/upload-artifact@v4 - with: - name: test-results-${{ matrix.shard }} - path: test-results/ - - burn-in: - runs-on: ubuntu-latest - if: github.event_name == 'pull_request' - steps: - - name: Run burn-in loop - run: | - for i in {1..5}; do - npx playwright test --grep-invert @skip - done -``` - -**Also Generates:** -- **Test scripts** (`package.json`) - Selective testing commands -- **Secrets checklist** - Required environment variables -- **Sharding configuration** - Parallel execution setup -- **Artifact collection** - Save screenshots, videos, traces - -### Selective Testing - -Generated CI includes selective test execution: - -```bash -# Run only tests affected by changes -npm run test:selective - -# Run specific tags -npm run test:smoke # @smoke tagged tests -npm 
run test:critical # @critical tagged tests -``` - -### Burn-in Loops - -Detects flaky tests by running multiple times: - -```bash -# Run tests 5 times to detect flakiness -npm run test:burn-in - -# Run specific test file 10 times -npm run test:burn-in -- tests/e2e/checkout.spec.ts --repeat 10 -``` - -### Optional Integrations - -**Playwright Utils:** -If `tea_use_playwright_utils: true`: -- Includes `burn-in` utility for smart test selection -- Adds git diff-based selective testing -- Provides test prioritization - -See [Integrate Playwright Utils](/docs/how-to/customization/integrate-playwright-utils.md) - -### Related Commands - -- `*framework` - Run before CI setup -- `*test-review` - Run to ensure CI-ready tests - -### How-To Guide - -[How to Set Up CI Pipeline](/docs/how-to/workflows/setup-ci.md) +**How-To Guide:** [Setup CI Pipeline](/docs/how-to/workflows/setup-ci.md) --- ## *test-design -Create comprehensive test scenarios with risk assessment and coverage strategies. +**Purpose:** Risk-based test planning with coverage strategy -### Purpose +**Phase:** Phase 3 (system-level), Phase 4 (epic-level) -Risk-based test planning that identifies what to test, at what level, and with what priority. 
+**Frequency:** Once (system), per epic (epic-level) -### Phase +**Modes:** +- **System-level:** Architecture testability review +- **Epic-level:** Per-epic risk assessment -**Dual Mode:** -- **Phase 3:** System-level (architecture testability review) -- **Phase 4:** Epic-level (per-epic test planning) +**Key Inputs:** +- Architecture/epic, requirements, ADRs -### Frequency +**Key Outputs:** +- `test-design-system.md` or `test-design-epic-N.md` +- Risk assessment (probability × impact scores) +- Test priorities (P0-P3) +- Coverage strategy -- **System-level:** Once per project (or when architecture changes) -- **Epic-level:** Per epic in implementation cycle +**MCP Enhancement:** Exploratory mode (live browser UI discovery) -### When to Use - -**System-level:** -- After architecture is complete -- Before implementation-readiness gate -- To validate architecture testability -- When ADRs (Architecture Decision Records) are updated - -**Epic-level:** -- At the start of each epic -- Before implementing stories in the epic -- To identify epic-specific testing needs -- When planning sprint work - -### Inputs - -TEA will ask: - -**Mode Selection:** -| Question | Example Answer | Notes | -|----------|----------------|-------| -| System or epic level? 
| "Epic-level" | Determines scope | - -**System-Level Inputs:** -- Architecture document location -- ADRs (if available) -- PRD with FRs/NFRs -- Technology stack decisions - -**Epic-Level Inputs:** -- Epic description and goals -- Stories with acceptance criteria -- Related PRD sections -- Integration points with existing system - -### Outputs - -**System-Level Output** (`test-design-system.md`): - -```markdown -# Test Design - System Level - -## Architecture Testability Review - -### Strengths -- Microservices architecture enables API-level testing -- Event-driven design allows message interception -- Clear service boundaries support contract testing - -### Concerns -- Complex distributed tracing may be hard to test -- No test environment for third-party integrations -- Database migrations need test data management - -## ADR → Test Mapping - -| ADR | Decision | Test Impact | Test Strategy | -|-----|----------|-------------|---------------| -| ADR-001 | Use PostgreSQL | Data integrity critical | API tests + DB assertions | -| ADR-002 | Event sourcing | Event replay testing | Integration tests for event handlers | -| ADR-003 | OAuth2 authentication | Auth flows complex | E2E + API tests for all flows | - -## Architecturally Significant Requirements (ASRs) - -### Performance -- API response time < 200ms (P99) -- Test strategy: Load testing with k6, API response time assertions - -### Security -- All endpoints require authentication -- Test strategy: Security testing suite, unauthorized access scenarios - -### Reliability -- 99.9% uptime requirement -- Test strategy: Chaos engineering, failover testing - -## Environment Needs - -- **Dev:** Local Docker compose setup -- **Staging:** Replica of production -- **Production:** Read-only test accounts - -## Test Infrastructure Recommendations - -- [ ] Set up contract testing with Pact -- [ ] Create test data factories -- [ ] Implement API mocking for third-party services -- [ ] Add performance test suite -``` - 
-**Epic-Level Output** (`test-design-epic-N.md`): - -```markdown -# Test Design - Epic 1: User Profile Management - -## Risk Assessment - -| Risk Category | Probability | Impact | Score | Mitigation | -|---------------|-------------|--------|-------|------------| -| DATA | 3 (High) | 3 (High) | 9 | Validate all profile updates, test data corruption scenarios | -| SEC | 2 (Medium) | 3 (High) | 6 | Test authorization (users can only edit own profiles) | -| BUS | 2 (Medium) | 2 (Medium) | 4 | Verify profile data appears correctly across app | -| PERF | 1 (Low) | 2 (Medium) | 2 | Profile load should be < 500ms | - -## Test Priorities - -### P0 - Critical Path (Must Test) -- User can view their own profile -- User can update profile fields -- Changes are persisted correctly -- **Coverage Target:** 100% (all scenarios) - -### P1 - High Value (Should Test) -- Validation prevents invalid data (email format, etc.) -- Unauthorized users cannot edit profiles -- Profile updates trigger notifications -- **Coverage Target:** 80% (major scenarios) - -### P2 - Medium Value (Nice to Test) -- Profile picture upload and display -- Profile history/audit log -- **Coverage Target:** 50% (happy path) - -### P3 - Low Value (Optional) -- Advanced profile customization -- Profile export functionality -- **Coverage Target:** 20% (smoke test) - -## Coverage Strategy - -### E2E Tests (5 tests) -- View profile page (P0) -- Edit and save profile (P0) -- Profile validation errors (P1) -- Unauthorized access prevented (P1) -- Profile picture upload (P2) - -### API Tests (8 tests) -- GET /api/profile returns profile (P0) -- PATCH /api/profile updates profile (P0) -- Validation for each field (P1) -- Authorization checks (P1) -- Profile picture upload API (P2) -- Profile history endpoint (P2) - -### Component Tests (3 tests) -- ProfileForm component renders (P1) -- ProfileForm validation (P1) -- ProfilePictureUpload component (P2) - -## Integration Risks - -- Profile data stored in PostgreSQL - 
ensure transaction integrity -- Profile updates trigger notification service - test event propagation -- Profile pictures stored in S3 - test upload/download flows - -## Regression Hotspots (Brownfield) - -N/A - New feature - -## Implementation Order - -1. API tests for profile CRUD (P0) -2. E2E test for viewing profile (P0) -3. E2E test for editing profile (P0) -4. Validation tests API + E2E (P1) -5. Authorization tests (P1) -6. Profile picture tests (P2) -``` - -### Optional: Exploratory Mode - -If MCP enhancements enabled (`tea_use_mcp_enhancements: true` in config): - -When prompted, select "exploratory mode" to: -- Open live browser for UI discovery -- Validate test scenarios against real behavior -- Capture accurate selectors interactively - -See [Enable MCP Enhancements](/docs/how-to/customization/enable-tea-mcp-enhancements.md) - -### Related Commands - -- `*atdd` - Generate tests based on test design -- `*automate` - Generate tests based on test design -- `*framework` - Run first if no test infrastructure - -### How-To Guide - -[How to Run Test Design](/docs/how-to/workflows/run-test-design.md) +**How-To Guide:** [Run Test Design](/docs/how-to/workflows/run-test-design.md) --- ## *atdd -Generate failing acceptance tests BEFORE implementation (TDD red phase). +**Purpose:** Generate failing acceptance tests BEFORE implementation (TDD red phase) -### Purpose +**Phase:** Phase 4 (Implementation) -Create failing tests that guide feature implementation following test-driven development. +**Frequency:** Per story (optional) -### Phase +**Key Inputs:** +- Story with acceptance criteria, test design, test levels -Phase 4 (Implementation) - Before implementing each story. - -### Frequency - -Per story (optional - only if practicing TDD). 
- -### When to Use - -- Feature doesn't exist yet -- Want tests to guide implementation -- Practicing test-driven development -- Want clear success criteria before coding - -**Don't use if:** -- Feature already exists (use `*automate` instead) -- Want tests that pass immediately - -### Inputs - -TEA will ask: - -| Question | Example Answer | Notes | -|----------|----------------|-------| -| Story/feature details | "User profile page with CRUD ops" | What are you building? | -| Acceptance criteria | "User can view, edit, save profile" | What defines "done"? | -| Reference docs | "test-design-epic-1.md, story-123.md" | Optional context | -| Test levels | "API + E2E tests, focus P0/P1" | Which test types? | - -### Outputs - -**Failing Tests:** -- API tests (`tests/api/`) - Backend endpoint tests -- E2E tests (`tests/e2e/`) - Full user workflow tests -- Component tests (`tests/component/`) - UI component tests (if requested) +**Key Outputs:** +- Failing tests (`tests/api/`, `tests/e2e/`) +- Implementation checklist - All tests fail initially (red phase) -### Component Testing by Framework +**MCP Enhancement:** Recording mode (for skeleton UI only - rare) -TEA generates component tests using framework-appropriate tools: - -| Your Framework | Component Testing Tool | What TEA Generates | -|----------------|----------------------|-------------------| -| **Cypress** | Cypress Component Testing | Cypress component specs (*.cy.tsx) | -| **Playwright** | Vitest + React Testing Library | Vitest component tests (*.test.tsx) | - -**Note:** Component tests use separate tooling: -- Cypress: Run with `cypress run-ct` -- Vitest: Run with `vitest` or `npm run test:unit` - -**Implementation Checklist:** -```markdown -## Implementation Checklist - -### Backend -- [ ] Create endpoints -- [ ] Add validation -- [ ] Write unit tests - -### Frontend -- [ ] Create components -- [ ] Add form handling -- [ ] Handle errors - -### Tests -- [x] API tests generated (failing) -- [x] E2E tests 
generated (failing) -- [ ] Make tests pass -``` - -**Test Structure:** - -```typescript -// tests/api/profile.spec.ts -import { test, expect } from '@playwright/test'; - -test('should fetch user profile', async ({ request }) => { - const response = await request.get('/api/profile'); - expect(response.status()).toBe(200); // FAILS - endpoint doesn't exist yet -}); - -// tests/e2e/profile.spec.ts -test('should display profile page', async ({ page }) => { - await page.goto('/profile'); - await expect(page.getByText('Profile')).toBeVisible(); // FAILS - page doesn't exist -}); -``` - -### Recording Mode Note - -**Recording mode is NOT typically used with ATDD** because ATDD generates tests for features that don't exist yet. - -Use `*automate` with recording mode for existing features instead. See [`*automate`](#automate). - -**Only use recording mode with ATDD if:** -- You have skeleton/mockup UI implemented -- You want to verify selectors before full implementation -- You're doing UI-first development (rare for TDD) - -For typical ATDD (feature doesn't exist), skip recording mode. - -### TDD Workflow - -1. **Red**: Run `*atdd` → tests fail -2. **Green**: Implement feature → tests pass -3. **Refactor**: Improve code → tests still pass - -### Related Commands - -- `*test-design` - Run first for better test generation -- `*automate` - Use after implementation for additional tests -- `*test-review` - Audit generated test quality - -### How-To Guide - -[How to Run ATDD](/docs/how-to/workflows/run-atdd.md) +**How-To Guide:** [Run ATDD](/docs/how-to/workflows/run-atdd.md) --- ## *automate -Expand test automation coverage after story implementation. +**Purpose:** Expand test coverage after implementation -### Purpose +**Phase:** Phase 4 (Implementation) -Generate comprehensive tests for existing features, avoiding duplicate coverage. 
+**Frequency:** Per story/feature -### Phase +**Key Inputs:** +- Feature description, test design, existing tests to avoid duplication -Phase 4 (Implementation) - After implementing each story. +**Key Outputs:** +- Comprehensive test suite (`tests/e2e/`, `tests/api/`) +- Updated fixtures, README +- Definition of Done summary -### Frequency +**MCP Enhancement:** Healing + Recording modes (fix tests, verify selectors) -Per story or feature (after implementation complete). - -### When to Use - -- Feature already exists and works -- Want to add test coverage -- Need tests that pass immediately -- Expanding existing test suite - -**Don't use if:** -- Feature doesn't exist yet (use `*atdd` instead) -- Want failing tests to guide development - -### Inputs - -TEA will ask: - -| Question | Example Answer | Notes | -|----------|----------------|-------| -| What are you testing? | "TodoMVC React app" | Feature/app description | -| Reference docs | "test-design-epic-1.md" | Optional test design | -| Specific scenarios | "Cover P0 and P1 from test design" | Focus areas | -| Existing tests | "tests/e2e/basic.spec.ts" | Avoid duplication | - -### Modes - -**BMad-Integrated Mode:** -- Works with story, tech-spec, PRD, test-design -- Comprehensive context for test generation -- Recommended for BMad Method projects - -**Standalone Mode:** -- Analyzes codebase independently -- Works without BMad artifacts -- Good for TEA Solo usage - -### Outputs - -**Comprehensive Test Suite:** - -``` -tests/ -├── e2e/ -│ ├── profile-view.spec.ts # View profile tests -│ ├── profile-edit.spec.ts # Edit profile tests -│ └── profile-validation.spec.ts # Validation tests -├── api/ -│ ├── profile-crud.spec.ts # CRUD operations -│ └── profile-auth.spec.ts # Authorization tests -└── component/ - ├── ProfileForm.test.tsx # Component tests (Vitest for Playwright) - └── ProfileForm.cy.tsx # Component tests (Cypress CT) -``` - -**Component Testing Note:** Framework-dependent - Cypress users get Cypress CT, 
Playwright users get Vitest tests. - -**Test Quality Features:** -- Network-first patterns (waits for responses, not timeouts) -- Explicit assertions (no conditionals) -- Self-cleaning (tests clean up after themselves) -- Deterministic (no flakiness) - -**Additional Artifacts:** -- **Updated fixtures** - Shared test utilities -- **Updated factories** - Test data generation -- **README updates** - How to run new tests -- **Definition of Done summary** - Quality checklist - -### Prioritization - -TEA generates tests based on: -- Test design priorities (P0 → P1 → P2 → P3) -- Risk assessment scores -- Existing test coverage (avoids duplication) - -**Example:** -``` -Generated 12 tests: -- 4 P0 tests (critical path) -- 5 P1 tests (high value) -- 3 P2 tests (medium value) -- Skipped P3 (low value) -``` - -### Optional: Healing Mode - -If MCP enhancements enabled (`tea_use_mcp_enhancements: true` in config): - -When prompted, select "healing mode" to: -- Fix broken selectors with visual debugging -- Update outdated assertions interactively -- Enhance tests with trace viewer insights - -See [Enable MCP Enhancements](/docs/how-to/customization/enable-tea-mcp-enhancements.md) - -### Optional: Recording Mode - -If MCP enhancements enabled: - -When prompted, select "recording mode" to verify tests with live browser for accurate selectors. - -### Related Commands - -- `*test-design` - Run first for prioritized test generation -- `*atdd` - Use before implementation (TDD approach) -- `*test-review` - Audit generated test quality - -### How-To Guide - -[How to Run Automate](/docs/how-to/workflows/run-automate.md) +**How-To Guide:** [Run Automate](/docs/how-to/workflows/run-automate.md) --- ## *test-review -Review test quality using comprehensive knowledge base and best practices. +**Purpose:** Audit test quality with 0-100 scoring -### Purpose +**Phase:** Phase 4 (optional per story), Release Gate -Audit test suite quality with 0-100 scoring and actionable feedback. 
+**Frequency:** Per epic or before release -### Phase +**Key Inputs:** +- Test scope (file, directory, or entire suite) -- **Phase 4:** Optional per-story review -- **Release Gate:** Final audit before release +**Key Outputs:** +- `test-review.md` with quality score (0-100) +- Critical issues with fixes +- Recommendations +- Category scores (Determinism, Isolation, Assertions, Structure, Performance) -### Frequency +**Scoring Categories:** +- Determinism: 35 points +- Isolation: 25 points +- Assertions: 20 points +- Structure: 10 points +- Performance: 10 points -- Per story (optional) -- Per epic (recommended) -- Before release (recommended for quality gates, required if using formal gate process) - -### When to Use - -- Want to validate test quality -- Need objective quality metrics -- Preparing for release gate -- Reviewing team-written tests -- Auditing AI-generated tests - -### Inputs - -TEA will ask: - -| Question | Example Answer | Notes | -|----------|----------------|-------| -| Review scope | "tests/e2e/ directory" | File, directory, or entire suite | -| Focus areas | "Check for flakiness patterns" | Optional specific concerns | -| Strictness level | "Standard" | Relaxed, standard, or strict | - -### Review Criteria - -TEA reviews against knowledge base patterns: - -**Determinism (35 points):** -- No hard waits (`waitForTimeout`) -- No conditionals (if/else) for flow control -- No try-catch for flow control -- Network-first patterns used - -**Isolation (25 points):** -- Self-cleaning (cleanup after test) -- No global state dependencies -- Can run in parallel -- Independent of execution order - -**Assertions (20 points):** -- Explicit in test body (not abstracted) -- Specific and meaningful -- Covers actual behavior -- No weak assertions (`toBeTrue`, `toBeDefined`) - -**Structure (10 points):** -- Test size < 300 lines -- Clear describe/test names -- Proper setup/teardown -- Single responsibility per test - -**Performance (10 points):** -- Execution time < 
1.5 minutes -- Efficient selectors -- Minimal redundant actions - -### Outputs - -**Quality Report** (`test-review.md`): - -```markdown -# Test Quality Review Report - -**Date:** 2026-01-13 -**Scope:** tests/e2e/ -**Score:** 78/100 - -## Summary - -- **Tests Reviewed:** 15 -- **Passing Quality:** 12 tests (80%) -- **Needs Improvement:** 3 tests (20%) -- **Critical Issues:** 2 -- **Recommendations:** 8 - -## Critical Issues - -### 1. Hard Waits Detected (tests/e2e/checkout.spec.ts:45) - -**Issue:** Using `waitForTimeout(3000)` -**Impact:** Flakiness, slow execution -**Fix:** -```typescript -// ❌ Bad -await page.waitForTimeout(3000); - -// ✅ Good -await page.waitForResponse(resp => resp.url().includes('/api/checkout')); -``` - -### 2. Conditional Flow Control (tests/e2e/profile.spec.ts:28) - -**Issue:** Using if/else to handle optional elements -**Impact:** Non-deterministic behavior -**Fix:** -```typescript -// ❌ Bad -if (await page.locator('.banner').isVisible()) { - await page.click('.dismiss'); -} - -// ✅ Good -// Make test deterministic - either banner always shows or doesn't -await expect(page.locator('.banner')).toBeVisible(); -await page.click('.dismiss'); -``` - -## Recommendations - -1. **Extract repeated setup** (tests/e2e/login.spec.ts) - Consider using fixtures -2. **Add network assertions** (tests/e2e/api-calls.spec.ts) - Verify API responses -3. **Improve test names** (tests/e2e/checkout.spec.ts) - Use descriptive names -4. **Reduce test size** (tests/e2e/full-flow.spec.ts) - Split into smaller tests - -## Quality Scores by Category - -| Category | Score | Status | -|----------|-------|--------| -| Determinism | 28/35 | ⚠️ Needs Improvement | -| Isolation | 22/25 | ✅ Good | -| Assertions | 18/20 | ✅ Good | -| Structure | 7/10 | ⚠️ Needs Improvement | -| Performance | 3/10 | ❌ Critical | - -## Next Steps - -1. Fix critical issues (hard waits, conditionals) -2. Address performance concerns (slow tests) -3. Apply recommendations -4. 
Re-run `*test-review` to verify improvements -``` - -### Review Scope Options - -**Single File:** -``` -*test-review tests/e2e/checkout.spec.ts -``` - -**Directory:** -``` -*test-review tests/e2e/ -``` - -**Entire Suite:** -``` -*test-review tests/ -``` - -### Related Commands - -- `*atdd` - Review tests generated by ATDD -- `*automate` - Review tests generated by automate -- `*trace` - Coverage analysis complements quality review - -### How-To Guide - -[How to Review Test Quality](/docs/how-to/workflows/run-test-review.md) +**How-To Guide:** [Run Test Review](/docs/how-to/workflows/run-test-review.md) --- ## *nfr-assess -Validate non-functional requirements before release. +**Purpose:** Validate non-functional requirements with evidence -### Purpose +**Phase:** Phase 2 (enterprise), Release Gate -Assess security, performance, reliability, and maintainability with evidence-based decisions. +**Frequency:** Per release (enterprise projects) -### Phase +**Key Inputs:** +- NFR categories (Security, Performance, Reliability, Maintainability) +- Thresholds, evidence location -- **Phase 2:** Optional (enterprise, capture NFRs early) -- **Release Gate:** Validate before release +**Key Outputs:** +- `nfr-assessment.md` +- Category assessments (PASS/CONCERNS/FAIL) +- Mitigation plans +- Gate decision inputs -### Frequency - -- Per epic (optional) -- Per release (mandatory for enterprise/compliance) - -### When to Use - -- Enterprise projects with compliance needs -- Projects with strict NFRs -- Before production release -- When NFRs are critical to success - -### Inputs - -TEA will ask: - -| Question | Example Answer | Notes | -|----------|----------------|-------| -| NFR focus areas | "Security, Performance" | Categories to assess | -| Thresholds | "API < 200ms P99, 0 critical vulns" | Specific requirements | -| Evidence location | "Load test results in /reports" | Where to find data | - -### NFR Categories - -**Security:** -- Authentication/authorization -- Data encryption 
-- Vulnerability scanning -- Security headers -- Input validation - -**Performance:** -- Response time (P50, P95, P99) -- Throughput (requests/second) -- Resource usage (CPU, memory) -- Database query performance -- Frontend load time - -**Reliability:** -- Error handling -- Recovery mechanisms -- Availability/uptime -- Failover testing -- Data backup/restore - -**Maintainability:** -- Code quality metrics -- Test coverage -- Technical debt tracking -- Documentation completeness -- Dependency health - -### Outputs - -**NFR Assessment Report** (`nfr-assessment.md`): - -```markdown -# Non-Functional Requirements Assessment - -**Date:** 2026-01-13 -**Epic:** User Profile Management -**Decision:** CONCERNS ⚠️ - -## Summary - -- **Security:** PASS ✅ -- **Performance:** CONCERNS ⚠️ -- **Reliability:** PASS ✅ -- **Maintainability:** PASS ✅ - -## Security Assessment - -**Status:** PASS ✅ - -**Requirements:** -- All endpoints require authentication: ✅ Verified -- Data encryption at rest: ✅ PostgreSQL TDE enabled -- Input validation: ✅ Zod schemas on all endpoints -- No critical vulnerabilities: ✅ npm audit clean - -**Evidence:** -- Security scan report: `/reports/security-scan.pdf` -- Auth tests: 15/15 passing -- Penetration test results: No critical findings - -## Performance Assessment - -**Status:** CONCERNS ⚠️ - -**Requirements:** -| Metric | Target | Actual | Status | -|--------|--------|--------|--------| -| API response (P99) | < 200ms | 350ms | ❌ | -| API response (P95) | < 150ms | 180ms | ⚠️ | -| Throughput | > 1000 rps | 850 rps | ⚠️ | -| Frontend load | < 2s | 1.8s | ✅ | - -**Issues:** -1. **P99 latency exceeds target** - Database queries not optimized -2. 
**Throughput below target** - Missing database indexes - -**Mitigation Plan:** -- Add indexes to profile queries (owner: backend team, deadline: before release) -- Implement query caching (owner: backend team, deadline: before release) -- Re-run load tests after optimization - -**Evidence:** -- Load test report: `/reports/k6-load-test.json` -- APM data: Datadog dashboard link - -## Reliability Assessment - -**Status:** PASS ✅ - -**Requirements:** -- Error handling: ✅ All endpoints return structured errors -- Recovery: ✅ Graceful degradation tested -- Database failover: ✅ Tested successfully - -**Evidence:** -- Chaos engineering test results -- Error rate in staging: 0.01% (target < 0.1%) - -## Maintainability Assessment - -**Status:** PASS ✅ - -**Requirements:** -- Test coverage: ✅ 85% (target > 80%) -- Code quality: ✅ SonarQube grade A -- Documentation: ✅ API docs complete - -**Evidence:** -- Coverage report: `/reports/coverage/index.html` -- SonarQube: Link to project dashboard - -## Gate Decision - -**Decision:** CONCERNS ⚠️ - -**Rationale:** -- Performance metrics below target (P99, throughput) -- Mitigation plan in place with clear owners and deadlines -- Security and reliability meet requirements - -**Actions Required:** -1. Optimize database queries (backend team, 3 days) -2. Re-run performance tests (QA team, 1 day) -3. Update this assessment with new results - -**Waiver Option:** -If business approves deploying with current performance: -- Document waiver justification -- Set monitoring alerts for P99 latency -- Plan optimization for next release -``` - -### Decision Rules - -**PASS** ✅: All NFRs met, no concerns -**CONCERNS** ⚠️: Some NFRs not met, mitigation plan exists -**FAIL** ❌: Critical NFRs not met, blocks release -**WAIVED** ⏭️: Business-approved waiver with documented justification - -### Critical Principle - -**Never guess thresholds.** If you don't know the NFR target, mark as CONCERNS and request clarification. 
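+
+**Decision values:** PASS / CONCERNS / FAIL / WAIVED (business-approved waiver with documented justification). Never guess thresholds - if an NFR target is unknown, mark it CONCERNS and request clarification.
+
+A minimal sketch of the per-category summary the report carries (field names are illustrative, not a fixed schema):
+
+```yaml
+# Illustrative only - the full report is nfr-assessment.md
+decision: CONCERNS
+categories:
+  security: PASS
+  performance: CONCERNS   # e.g. P99 latency above target; mitigation plan in place
+  reliability: PASS
+  maintainability: PASS
+```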
- -### Related Commands - -- `*trace` - Coverage traceability complements NFR assessment -- `*test-review` - Quality assessment complements NFR - -### How-To Guide - -[How to Run NFR Assessment](/docs/how-to/workflows/run-nfr-assess.md) +**How-To Guide:** [Run NFR Assessment](/docs/how-to/workflows/run-nfr-assess.md) --- ## *trace -Map requirements to tests (Phase 1) and make quality gate decision (Phase 2). +**Purpose:** Requirements traceability + quality gate decision -### Purpose +**Phase:** Phase 2/4 (traceability), Release Gate (decision) -Two-phase workflow: traceability analysis followed by go/no-go gate decision. +**Frequency:** Baseline, per epic refresh, release gate -### Phase +**Two-Phase Workflow:** -- **Phase 2 (Baseline):** Brownfield projects establishing baseline -- **Phase 4 (Refresh):** After each story/epic to update coverage -- **Release Gate:** Final gate decision before deployment +**Phase 1: Traceability** +- Requirements → test mapping +- Coverage classification (FULL/PARTIAL/NONE) +- Gap prioritization +- Output: `traceability-matrix.md` -### Frequency +**Phase 2: Gate Decision** +- PASS/CONCERNS/FAIL/WAIVED decision +- Evidence-based (coverage %, quality scores, NFRs) +- Output: `gate-decision-{gate_type}-{story_id}.md` -- Brownfield Phase 2: Once (baseline) -- Phase 4: Per story or epic (refresh) -- Release Gate: Once (final gate decision) +**Gate Rules:** +- P0 coverage: 100% required +- P1 coverage: ≥90% for PASS, 80-89% for CONCERNS, <80% FAIL +- Overall coverage: ≥80% required -### Two-Phase Workflow - -### Phase 1: Requirements Traceability - -Map acceptance criteria to implemented tests. - -**Inputs:** - -| Question | Example Answer | Notes | -|----------|----------------|-------| -| Requirements source | "story-123.md, test-design-epic-1.md" | Where are acceptance criteria? | -| Test location | "tests/" | Where are tests? 
| -| Focus areas | "Profile CRUD operations" | Optional scope | - -**Outputs:** - -**Coverage Matrix** (`traceability-matrix.md`): - -```markdown -# Requirements Traceability Matrix - -**Date:** 2026-01-13 -**Scope:** Epic 1 - User Profile Management - -## Coverage Summary - -- **Total Requirements:** 12 -- **Full Coverage:** 8 (67%) -- **Partial Coverage:** 3 (25%) -- **No Coverage:** 1 (8%) - -## Detailed Traceability - -### Requirement 1: User can view their profile - -**Acceptance Criteria:** -- User navigates to /profile -- Profile displays name, email, avatar - -**Test Coverage:** FULL ✅ - -**Tests:** -- `tests/e2e/profile-view.spec.ts:10` - "should display profile page" -- `tests/api/profile.spec.ts:5` - "should fetch user profile" - -### Requirement 2: User can edit profile - -**Acceptance Criteria:** -- User clicks "Edit Profile" -- Can modify name and email -- Can upload avatar -- Changes are saved - -**Test Coverage:** PARTIAL ⚠️ - -**Tests:** -- `tests/e2e/profile-edit.spec.ts:15` - "should edit and save profile" (name/email only) -- `tests/api/profile.spec.ts:20` - "should update profile via API" - -**Missing Coverage:** -- Avatar upload not tested - -### Requirement 3: Invalid email shows error - -**Acceptance Criteria:** -- Enter invalid email format -- See error message -- Cannot save - -**Test Coverage:** FULL ✅ - -**Tests:** -- `tests/e2e/profile-edit.spec.ts:35` - "should show validation error" -- `tests/api/profile.spec.ts:40` - "should validate email format" - -### Requirement 12: Profile history - -**Acceptance Criteria:** -- View audit log of profile changes - -**Test Coverage:** NONE ❌ - -**Gap Analysis:** -- Priority: P2 (medium) -- Risk: Low (audit feature, not critical path) -- Recommendation: Add in next iteration - -## Gap Prioritization - -| Gap | Priority | Risk | Recommendation | -|-----|----------|------|----------------| -| Avatar upload not tested | P1 | Medium | Add before release | -| Profile history not tested | P2 | Low | Add 
in next iteration | - -## Recommendations - -1. **Add avatar upload tests** (High priority) - - E2E test for upload flow - - API test for image validation - -2. **Add profile history tests** (Medium priority) - - Can defer to next release - - Low risk (non-critical feature) -``` - -### Phase 2: Quality Gate Decision - -Make go/no-go decision for release. - -**Inputs:** -- Phase 1 traceability results -- Test review results (if available) -- NFR assessment results (if available) -- Business context (deadlines, criticality) - -**Gate Decision Rules:** - -| Coverage | Test Quality | NFRs | Decision | -|----------|--------------|------|----------| -| >95% P0/P1 | >80 score | PASS | PASS ✅ | -| >85% P0/P1 | >70 score | CONCERNS | CONCERNS ⚠️ | -| <85% P0/P1 | <70 score | FAIL | FAIL ❌ | -| Any | Any | Any | WAIVED ⏭️ (with approval) | - -**Outputs:** - -**Gate Decision** (written to traceability-matrix.md or separate gate file): - -```yaml -decision: PASS -date: 2026-01-13 -epic: User Profile Management -release: v1.2.0 - -summary: - total_requirements: 12 - covered_requirements: 11 - coverage_percentage: 92% - critical_gaps: 0 - test_quality_score: 82 - -criteria: - p0_coverage: 100% - p1_coverage: 90% - test_quality: 82 - nfr_assessment: PASS - -rationale: | - All P0 requirements have full test coverage. - One P1 gap (avatar upload) has low risk. - Test quality exceeds 80% threshold. - NFR assessment passed. 
- -actions: - - Add avatar upload tests in next iteration (P1) - - Monitor profile performance in production - -approvers: - - name: Product Manager - approved: true - date: 2026-01-13 - - name: Tech Lead - approved: true - date: 2026-01-13 - -next_steps: - - Deploy to staging - - Run smoke tests - - Deploy to production -``` - -### Usage Patterns - -**Greenfield:** -- Run Phase 1 after Phase 3 (system-level test design) -- Run Phase 1 refresh in Phase 4 (per epic) -- Run Phase 2 at release gate - -**Brownfield:** -- Run Phase 1 in Phase 2 (establish baseline) -- Run Phase 1 refresh in Phase 4 (per epic) -- Run Phase 2 at release gate - -### Related Commands - -- `*test-design` - Provides requirements for traceability -- `*test-review` - Quality scores feed gate decision -- `*nfr-assess` - NFR results feed gate decision - -### How-To Guide - -[How to Run Trace](/docs/how-to/workflows/run-trace.md) +**How-To Guide:** [Run Trace](/docs/how-to/workflows/run-trace.md) --- ## Summary Table -| Command | Phase | Frequency | Purpose | Output | -|---------|-------|-----------|---------|--------| -| `*framework` | 3 | Once | Scaffold test framework | Config, sample tests | -| `*ci` | 3 | Once | Setup CI/CD pipeline | Workflow files, scripts | -| `*test-design` | 3, 4 | Once (system), Per epic | Risk-based test planning | Test design documents | -| `*atdd` | 4 | Per story (optional) | Generate failing tests (TDD) | Failing tests, checklist | -| `*automate` | 4 | Per story/feature | Generate passing tests | Passing tests, fixtures | -| `*test-review` | 4, Gate | Per epic/release | Audit test quality | Quality report (0-100) | -| `*nfr-assess` | 2, Gate | Per release | Validate NFRs | NFR assessment report | -| `*trace` | 2, 4, Gate | Baseline, refresh, gate | Coverage + gate decision | Coverage matrix, gate YAML | +| Command | Phase | Frequency | Primary Output | +|---------|-------|-----------|----------------| +| `*framework` | 3 | Once | Test infrastructure | +| `*ci` | 3 | 
Once | CI/CD pipeline | +| `*test-design` | 3, 4 | System + per epic | Test design doc | +| `*atdd` | 4 | Per story (optional) | Failing tests | +| `*automate` | 4 | Per story | Passing tests | +| `*test-review` | 4, Gate | Per epic/release | Quality report | +| `*nfr-assess` | 2, Gate | Per release | NFR assessment | +| `*trace` | 2, 4, Gate | Baseline + refresh + gate | Coverage matrix + decision | --- ## See Also -### How-To Guides -- [Set Up Test Framework](/docs/how-to/workflows/setup-test-framework.md) +**How-To Guides (Detailed Instructions):** +- [Setup Test Framework](/docs/how-to/workflows/setup-test-framework.md) +- [Setup CI Pipeline](/docs/how-to/workflows/setup-ci.md) - [Run Test Design](/docs/how-to/workflows/run-test-design.md) - [Run ATDD](/docs/how-to/workflows/run-atdd.md) - [Run Automate](/docs/how-to/workflows/run-automate.md) - [Run Test Review](/docs/how-to/workflows/run-test-review.md) -- [Set Up CI Pipeline](/docs/how-to/workflows/setup-ci.md) - [Run NFR Assessment](/docs/how-to/workflows/run-nfr-assess.md) - [Run Trace](/docs/how-to/workflows/run-trace.md) -### Explanation -- [TEA Overview](/docs/explanation/features/tea-overview.md) -- [Risk-Based Testing](/docs/explanation/tea/risk-based-testing.md) -- [Test Quality Standards](/docs/explanation/tea/test-quality-standards.md) +**Explanation:** +- [TEA Overview](/docs/explanation/features/tea-overview.md) - Complete TEA lifecycle +- [Engagement Models](/docs/explanation/tea/engagement-models.md) - When to use which workflows -### Reference -- [TEA Configuration](/docs/reference/tea/configuration.md) -- [Knowledge Base Index](/docs/reference/tea/knowledge-base.md) -- [Glossary](/docs/reference/glossary/index.md) +**Reference:** +- [TEA Configuration](/docs/reference/tea/configuration.md) - Config options +- [Knowledge Base Index](/docs/reference/tea/knowledge-base.md) - Pattern fragments --- diff --git a/docs/reference/tea/configuration.md b/docs/reference/tea/configuration.md index 
60688b4a..ba6e2e51 100644 --- a/docs/reference/tea/configuration.md +++ b/docs/reference/tea/configuration.md @@ -15,9 +15,9 @@ Complete reference for all TEA (Test Architect) configuration options. **Purpose:** Project-specific configuration values for your repository -**Created By:** `npx bmad-method@alpha install` command +**Created By:** BMad installer -**Status:** Gitignored (not committed to repository) +**Status:** Typically gitignored (user-specific values) **Usage:** Edit this file to change TEA behavior in your project @@ -155,17 +155,7 @@ Would you like to enable MCP enhancements in Test Architect? } ``` -**Configuration Location (IDE-Specific):** - -**Cursor:** -``` -~/.cursor/config.json or workspace .cursor/config.json -``` - -**VS Code with Claude:** -``` -~/Library/Application Support/Code/User/globalStorage/saoudrizwan.claude-dev/settings/cline_mcp_settings.json -``` +**Configuration:** Refer to your AI agent's documentation for MCP server setup instructions. **Example (Enable):** ```yaml @@ -364,9 +354,9 @@ tea_use_playwright_utils: true tea_use_mcp_enhancements: false ``` -**Individual config (gitignored):** +**Individual config (typically gitignored):** ```yaml -# _bmad/bmm/config.yaml (gitignored) +# _bmad/bmm/config.yaml (user adds to .gitignore) user_name: John Doe user_skill_level: expert tea_use_mcp_enhancements: true # Individual preference @@ -407,7 +397,7 @@ _bmad/bmm/config.yaml.example # Template for team package.json # Dependencies ``` -**Gitignore:** +**Recommended for .gitignore:** ``` _bmad/bmm/config.yaml # User-specific values .env # Secrets @@ -420,8 +410,7 @@ _bmad/bmm/config.yaml # User-specific values ```markdown ## Setup -1. Install BMad: - npx bmad-method@alpha install +1. Install BMad 2. 
Copy config template: cp _bmad/bmm/config.yaml.example _bmad/bmm/config.yaml @@ -558,48 +547,48 @@ npx playwright install ## Configuration Examples -### Minimal Setup (Defaults) +### Recommended Setup (Full Stack) + +```yaml +# _bmad/bmm/config.yaml +project_name: my-project +user_skill_level: beginner # or intermediate/expert +output_folder: _bmad-output +tea_use_playwright_utils: true # Recommended +tea_use_mcp_enhancements: true # Recommended +``` + +**Why recommended:** +- Playwright Utils: Production-ready fixtures and utilities +- MCP enhancements: Live browser verification, visual debugging +- Together: The three-part stack (see [Testing as Engineering](/docs/explanation/philosophy/testing-as-engineering.md)) + +**Prerequisites:** +```bash +npm install -D @seontechnologies/playwright-utils +# Configure MCP servers in IDE (see Enable MCP Enhancements guide) +``` + +**Best for:** Everyone (beginners learn good patterns from day one) + +--- + +### Minimal Setup (Learning Only) ```yaml # _bmad/bmm/config.yaml project_name: my-project -user_skill_level: intermediate output_folder: _bmad-output tea_use_playwright_utils: false tea_use_mcp_enhancements: false ``` **Best for:** -- New projects -- Learning TEA -- Simple testing needs +- First-time TEA users (keep it simple initially) +- Quick experiments +- Learning basics before adding integrations ---- - -### Advanced Setup (All Features) - -```yaml -# _bmad/bmm/config.yaml -project_name: enterprise-app -user_skill_level: expert -output_folder: docs/testing -planning_artifacts: docs/planning -implementation_artifacts: docs/implementation -project_knowledge: docs -tea_use_playwright_utils: true -tea_use_mcp_enhancements: true -``` - -**Prerequisites:** -```bash -npm install -D @seontechnologies/playwright-utils -# Configure MCP servers in IDE -``` - -**Best for:** -- Enterprise projects -- Teams with established testing practices -- Projects needing advanced TEA features +**Note:** Can enable integrations later as 
you learn --- @@ -622,7 +611,7 @@ output_folder: ../../_bmad-output/web # apps/api/_bmad/bmm/config.yaml project_name: api-service output_folder: ../../_bmad-output/api -tea_use_playwright_utils: false # API tests don't need it +tea_use_playwright_utils: false # Using vanilla Playwright only ``` --- @@ -642,9 +631,9 @@ planning_artifacts: _bmad-output/planning-artifacts implementation_artifacts: _bmad-output/implementation-artifacts project_knowledge: docs -# TEA Configuration -tea_use_playwright_utils: false # Set true if using @seontechnologies/playwright-utils -tea_use_mcp_enhancements: false # Set true if MCP servers configured in IDE +# TEA Configuration (Recommended: Enable both for full stack) +tea_use_playwright_utils: true # Recommended - production-ready utilities +tea_use_mcp_enhancements: true # Recommended - live browser verification # Languages communication_language: english @@ -668,74 +657,6 @@ document_output_language: english --- -## FAQ - -### When should I enable playwright-utils? - -**Enable if:** -- You're using or planning to use `@seontechnologies/playwright-utils` -- You want production-ready fixtures and utilities -- Your team benefits from standardized patterns -- You need utilities like `apiRequest`, `authSession`, `networkRecorder` - -**Skip if:** -- You're just learning TEA (keep it simple) -- You have your own fixture library -- You don't need the utilities - -### When should I enable MCP enhancements? - -**Enable if:** -- You want live browser verification during test generation -- You're debugging complex UI issues -- You want exploratory mode in `*test-design` -- You want recording mode in `*atdd` for accurate selectors - -**Skip if:** -- You're new to TEA (adds complexity) -- You don't have MCP servers configured -- Your tests work fine without it - -### Can I change config after installation? - -**Yes!** Edit `_bmad/bmm/config.yaml` anytime. 
- -**Important:** Start fresh chat after config changes (TEA loads config at workflow start). - -### Can I have different configs per branch? - -**Yes:** -```bash -# feature branch -git checkout feature/new-testing -# Edit config for experimentation -vim _bmad/bmm/config.yaml - -# main branch -git checkout main -# Config reverts to main branch values -``` - -Config is gitignored, so each branch can have different values. - -### How do I share config with team? - -**Use config.yaml.example:** -```bash -# Commit template -cp _bmad/bmm/config.yaml _bmad/bmm/config.yaml.example -git add _bmad/bmm/config.yaml.example -git commit -m "docs: add BMad config template" -``` - -**Team members copy template:** -```bash -cp _bmad/bmm/config.yaml.example _bmad/bmm/config.yaml -# Edit with their values -``` - ---- - ## See Also ### How-To Guides diff --git a/docs/reference/tea/knowledge-base.md b/docs/reference/tea/knowledge-base.md index 04fd9c21..6224d2ad 100644 --- a/docs/reference/tea/knowledge-base.md +++ b/docs/reference/tea/knowledge-base.md @@ -167,11 +167,10 @@ Feature flag testing, contract testing, and API testing patterns. ### Playwright-Utils Integration -Patterns for using `@seontechnologies/playwright-utils` package (11 utilities). +Patterns for using `@seontechnologies/playwright-utils` package (9 utilities). 
| Fragment | Description | Key Topics | |----------|-------------|-----------| -| overview | Playwright Utils installation, design principles, fixture patterns | Getting started, principles, setup | | [api-request](../../../src/modules/bmm/testarch/knowledge/api-request.md) | Typed HTTP client, schema validation, retry logic | API calls, HTTP, validation | | [auth-session](../../../src/modules/bmm/testarch/knowledge/auth-session.md) | Token persistence, multi-user, API/browser authentication | Auth patterns, session management | | [network-recorder](../../../src/modules/bmm/testarch/knowledge/network-recorder.md) | HAR record/playback, CRUD detection for offline testing | Offline testing, network replay | @@ -181,9 +180,8 @@ Patterns for using `@seontechnologies/playwright-utils` package (11 utilities). | [file-utils](../../../src/modules/bmm/testarch/knowledge/file-utils.md) | CSV/XLSX/PDF/ZIP handling with download support | File validation, exports | | [burn-in](../../../src/modules/bmm/testarch/knowledge/burn-in.md) | Smart test selection with git diff analysis | CI optimization, selective testing | | [network-error-monitor](../../../src/modules/bmm/testarch/knowledge/network-error-monitor.md) | Auto-detect HTTP 4xx/5xx errors during tests | Error monitoring, silent failures | -| [fixtures-composition](../../../src/modules/bmm/testarch/knowledge/fixtures-composition.md) | mergeTests composition patterns for combining utilities | Fixture merging, utility composition | -**Note:** All 11 playwright-utils fragments are in the same `knowledge/` directory as other fragments. +**Note:** `fixtures-composition` is listed under Architecture & Fixtures (general Playwright `mergeTests` pattern, applies to all fixtures). 
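+
+The `mergeTests` pattern that note refers to can be sketched briefly (the fixture module paths are hypothetical - substitute your own fixture files):
+
+```typescript
+import { mergeTests } from '@playwright/test';
+// Hypothetical fixture modules, each exporting a `test` extended with one utility
+import { test as apiRequestTest } from './fixtures/api-request';
+import { test as authSessionTest } from './fixtures/auth-session';
+
+// Compose independent fixtures into a single test object
+export const test = mergeTests(apiRequestTest, authSessionTest);
+export { expect } from '@playwright/test';
+```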
 **Used in:** `*framework` (if `tea_use_playwright_utils: true`), `*atdd`, `*automate`, `*test-review`, `*ci`
 
@@ -211,51 +209,11 @@ risk-governance,Risk Governance,Risk scoring and gate decisions,risk;governance,
 - `tags` - Searchable tags (semicolon-separated)
 - `fragment_file` - Relative path to fragment markdown file
 
-## Fragment Locations
+**Fragment Location:** `src/modules/bmm/testarch/knowledge/` (all 33 fragments in single directory)
 
-**Knowledge Base Directory:**
-```
-src/modules/bmm/testarch/knowledge/
-├── api-request.md
-├── api-testing-patterns.md
-├── auth-session.md
-├── burn-in.md
-├── ci-burn-in.md
-├── component-tdd.md
-├── contract-testing.md
-├── data-factories.md
-├── email-auth.md
-├── error-handling.md
-├── feature-flags.md
-├── file-utils.md
-├── fixture-architecture.md
-├── fixtures-composition.md
-├── intercept-network-call.md
-├── log.md
-├── network-error-monitor.md
-├── network-first.md
-├── network-recorder.md
-├── nfr-criteria.md
-├── playwright-config.md
-├── probability-impact.md
-├── recurse.md
-├── risk-governance.md
-├── selector-resilience.md
-├── selective-testing.md
-├── test-healing-patterns.md
-├── test-levels-framework.md
-├── test-priorities-matrix.md
-├── test-quality.md
-├── timing-debugging.md
-└── visual-debugging.md
-```
+**Manifest:** `src/modules/bmm/testarch/tea-index.csv`
 
-**All fragments in single directory** (no subfolders)
-
-**Manifest:**
-```
-src/modules/bmm/testarch/tea-index.csv
-```
+---
 
 ## Workflow Fragment Loading
 
@@ -371,207 +329,6 @@ Each TEA workflow loads specific fragments:
 
 ---
 
-## Key Fragments Explained
-
-### test-quality.md
-
-**What it covers:**
-- Execution time limits (< 1.5 minutes)
-- Test size limits (< 300 lines)
-- No hard waits (waitForTimeout banned)
-- No conditionals for flow control
-- No try-catch for flow control
-- Assertions must be explicit
-- Self-cleaning tests for parallel execution
-
-**Why it matters:**
-This is the Definition of Done for test quality. All TEA workflows reference this for quality standards.
-
-**Code examples:** 12+
-
----
-
-### network-first.md
-
-**What it covers:**
-- Intercept-before-navigate pattern
-- Wait for network responses, not timeouts
-- HAR capture for offline testing
-- Deterministic waiting strategies
-
-**Why it matters:**
-Prevents 90% of test flakiness. Core pattern for reliable E2E tests.
-
-**Code examples:** 15+
-
----
-
-### fixture-architecture.md
-
-**What it covers:**
-- Build pure functions first
-- Wrap in framework fixtures second
-- Compose with mergeTests
-- Enable reusability and testability
-
-**Why it matters:**
-Foundation of scalable test architecture. Makes utilities reusable and unit-testable.
-
-**Code examples:** 10+
-
----
-
-### risk-governance.md
-
-**What it covers:**
-- Risk scoring matrix (Probability × Impact)
-- Risk categories (TECH, SEC, PERF, DATA, BUS, OPS)
-- Gate decision rules (PASS/CONCERNS/FAIL/WAIVED)
-- Mitigation planning
-
-**Why it matters:**
-Objective, data-driven release decisions. Removes politics from quality gates.
-
-**Code examples:** 5
-
----
-
-### test-priorities-matrix.md
-
-**What it covers:**
-- P0: Critical path (100% coverage required)
-- P1: High value (90% coverage target)
-- P2: Medium value (50% coverage target)
-- P3: Low value (20% coverage target)
-- Execution ordering (P0 → P1 → P2 → P3)
-
-**Why it matters:**
-Focus testing effort on what matters. Don't waste time on P3 edge cases.
-
-**Code examples:** 8
-
----
-
-## Using Fragments Directly
-
-### As a Learning Resource
-
-Read fragments to learn patterns:
-
-```bash
-# Read fixture architecture pattern
-cat src/modules/bmm/testarch/knowledge/fixture-architecture.md
-
-# Read network-first pattern
-cat src/modules/bmm/testarch/knowledge/network-first.md
-```
-
-### As Team Guidelines
-
-Use fragments as team documentation:
-
-```markdown
-# Team Testing Guidelines
-
-## Fixture Architecture
-See: src/modules/bmm/testarch/knowledge/fixture-architecture.md
-
-All fixtures must follow the pure function → fixture wrapper pattern.
-
-## Network Patterns
-See: src/modules/bmm/testarch/knowledge/network-first.md
-
-All tests must use network-first patterns. No hard waits allowed.
-```
-
-### As Code Review Checklist
-
-Reference fragments in code review:
-
-```markdown
-## PR Review Checklist
-
-- [ ] Tests follow test-quality.md standards (no hard waits, < 300 lines)
-- [ ] Selectors follow selector-resilience.md (prefer getByRole)
-- [ ] Network patterns follow network-first.md (wait for responses)
-- [ ] Fixtures follow fixture-architecture.md (pure functions)
-```
-
-## Fragment Statistics
-
-**Total Fragments:** 33
-**Total Size:** ~600 KB (all fragments combined)
-**Average Fragment Size:** ~18 KB
-**Largest Fragment:** contract-testing.md (~28 KB)
-**Smallest Fragment:** burn-in.md (~7 KB)
-
-**By Category:**
-- Architecture & Fixtures: 4 fragments
-- Data & Setup: 3 fragments
-- Network & Reliability: 4 fragments
-- Test Execution & CI: 3 fragments
-- Quality & Standards: 5 fragments
-- Risk & Gates: 3 fragments
-- Selectors & Timing: 3 fragments
-- Feature Flags & Patterns: 3 fragments
-- Playwright-Utils Integration: 8 fragments
-
-**Note:** Statistics may drift with updates. All fragments are in the same `knowledge/` directory.
-
-## Contributing to Knowledge Base
-
-### Adding New Fragments
-
-1. Create fragment in `src/modules/bmm/testarch/knowledge/`
-2. Follow existing format (Principle, Rationale, Pattern Examples)
-3. Add to `tea-index.csv` with metadata
-4. Update workflow instructions to load fragment
-5. Test with TEA workflow
-
-### Updating Existing Fragments
-
-1. Edit fragment markdown file
-2. Update `tea-index.csv` if metadata changes (line count, examples)
-3. Test with affected workflows
-4. Ensure no breaking changes to patterns
-
-### Fragment Quality Standards
-
-**Good fragment:**
-- Principle stated clearly
-- Rationale explains why
-- Multiple pattern examples with code
-- Good vs bad comparisons
-- Self-contained (links to other fragments minimal)
-
-**Example structure:**
-```markdown
-# Fragment Name
-
-## Principle
-[One sentence - what is this pattern?]
-
-## Rationale
-[Why use this instead of alternatives?]
-
-## Pattern Examples
-
-### Example 1: Basic Usage
-[Code example with explanation]
-
-### Example 2: Advanced Pattern
-[Code example with explanation]
-
-## Anti-Patterns
-
-### Don't Do This
-[Bad code example]
-[Why it's bad]
-
-## Related Patterns
-- [Other fragment](../other-fragment.md)
-```
-
 ## Related
 
 - [TEA Overview](/docs/explanation/features/tea-overview.md) - How knowledge base fits in TEA
diff --git a/docs/tutorials/getting-started/tea-lite-quickstart.md b/docs/tutorials/getting-started/tea-lite-quickstart.md
index 707722ed..db13c0a4 100644
--- a/docs/tutorials/getting-started/tea-lite-quickstart.md
+++ b/docs/tutorials/getting-started/tea-lite-quickstart.md
@@ -51,9 +51,7 @@ You've just explored the features we'll test!
 
 ### Install BMad Method
 
-```bash
-npx bmad-method@alpha install
-```
+Install BMad (see installation guide for latest command).
 When prompted:
 
 - **Select modules:** Choose "BMM: BMad Method" (press Space, then Enter)
@@ -272,7 +270,7 @@ test('should mark todo as complete', async ({ page, apiRequest }) => {
   const { status, body: todo } = await apiRequest({
     method: 'POST',
     path: '/api/todos',
-    body: { title: 'Complete tutorial' } // 'body' not 'data'
+    body: { title: 'Complete tutorial' }
   });
 
   expect(status).toBe(201);
@@ -393,7 +391,7 @@ See [How to Run ATDD](/docs/how-to/workflows/run-atdd.md) for the TDD approach.
 
 **Explanation** (understanding-oriented):
 
 - [TEA Overview](/docs/explanation/features/tea-overview.md) - Complete TEA capabilities
-- [Testing as Engineering](/docs/explanation/philosophy/testing-as-engineering.md) - Design philosophy
+- [Testing as Engineering](/docs/explanation/philosophy/testing-as-engineering.md) - **Why TEA exists** (problem + solution)
 - [Risk-Based Testing](/docs/explanation/tea/risk-based-testing.md) - How risk scoring works
 
 **Reference** (quick lookup):