docs: refined the docs

murat 2026-01-14 12:51:28 -06:00
parent 638892289a
commit c83da03621
20 changed files with 961 additions and 2365 deletions


@ -60,8 +60,8 @@ If you are unsure, default to the integrated path for your track and adjust late
| `*framework` | Playwright/Cypress scaffold, `.env.example`, `.nvmrc`, sample specs | Use when no production-ready harness exists | - |
| `*ci` | CI workflow, selective test scripts, secrets checklist | Platform-aware (GitHub Actions default) | - |
| `*test-design` | Combined risk assessment, mitigation plan, and coverage strategy | Risk scoring + optional exploratory mode | **+ Exploratory**: Interactive UI discovery with browser automation (uncover actual functionality) |
| `*atdd` | Failing acceptance tests + implementation checklist | TDD red phase + optional recording mode | **+ Recording**: UI selectors verified with live browser; API tests benefit from trace analysis |
| `*automate` | Prioritized specs, fixtures, README/script updates, DoD summary | Optional healing/recording, avoid duplicate coverage | **+ Healing**: Visual debugging + trace analysis for test fixes; **+ Recording**: Verified selectors (UI) + network inspection (API) |
| `*test-review` | Test quality review report with 0-100 score, violations, fixes | Reviews tests against knowledge base patterns | - |
| `*nfr-assess` | NFR assessment report with actions | Focus on security/performance/reliability | - |
| `*trace` | Phase 1: Coverage matrix, recommendations. Phase 2: Gate decision (PASS/CONCERNS/FAIL/WAIVED) | Two-phase workflow: traceability + gate decision | - |
@ -308,7 +308,7 @@ Want to understand TEA principles and patterns in depth?
- [Engagement Models](/docs/explanation/tea/engagement-models.md) - TEA Lite, TEA Solo, TEA Integrated (5 models explained)
**Philosophy:**
- [Testing as Engineering](/docs/explanation/philosophy/testing-as-engineering.md) - **Start here to understand WHY TEA exists** - The problem with AI-generated tests and TEA's three-part solution
## Optional Integrations


@ -594,7 +594,7 @@ Client project 3 (Ad-hoc):
**When:** Adopt BMad Method, want full integration.
**Steps:**
1. Install BMad Method (see installation guide)
2. Run planning workflows (PRD, architecture)
3. Integrate TEA into Phase 3 (system-level test design)
4. Follow integrated lifecycle (per epic workflows)
@ -690,7 +690,7 @@ Each model uses different TEA workflows. See:
**Use-Case Guides:**
- [Using TEA with Existing Tests](/docs/how-to/brownfield/use-tea-with-existing-tests.md) - Model 5: Brownfield
- [Running TEA for Enterprise](/docs/how-to/enterprise/use-tea-for-enterprise.md) - Enterprise integration
**All Workflow Guides:**
- [How to Run Test Design](/docs/how-to/workflows/run-test-design.md) - Used in TEA Solo and Integrated


@ -220,8 +220,8 @@ test('should update profile', async ({ apiRequest, authToken, log }) => {
// Use API request fixture (matches pure function signature)
const { status, body } = await apiRequest({
method: 'PATCH',
url: '/api/profile',
data: { name: 'New Name' },
headers: { Authorization: `Bearer ${authToken}` }
});


@ -484,22 +484,31 @@ await page.waitForSelector('.success', { timeout: 30000 });
All developers:
```typescript
import { test } from '@seontechnologies/playwright-utils/fixtures';
test('job completion', async ({ apiRequest, recurse }) => {
// Start async job
const { body: job } = await apiRequest({
method: 'POST',
path: '/api/jobs'
});
// Poll until complete (correct API: command, predicate, options)
const result = await recurse(
() => apiRequest({ method: 'GET', path: `/api/jobs/${job.id}` }),
(response) => response.body.status === 'completed', // response.body from apiRequest
{
timeout: 30000,
interval: 2000,
log: 'Waiting for job to complete'
}
);
expect(result.body.status).toBe('completed');
});
```
**Result:** Consistent pattern using correct playwright-utils API (command, predicate, options).
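As a mental model, `recurse(command, predicate, options)` is a polling loop: run the command, test the predicate, retry on an interval until timeout. The helper below is a simplified stand-alone sketch of that call shape — not the playwright-utils implementation:

```typescript
// Simplified sketch of the recurse(command, predicate, options) shape.
// Illustrative only -- not the actual playwright-utils implementation.
async function recurse<T>(
  command: () => Promise<T>,
  predicate: (result: T) => boolean,
  options: { timeout: number; interval: number; log?: string }
): Promise<T> {
  const deadline = Date.now() + options.timeout;
  for (;;) {
    const result = await command();
    if (predicate(result)) return result; // condition met: stop polling
    if (Date.now() >= deadline) {
      throw new Error(`Timed out after ${options.timeout}ms (${options.log ?? 'recurse'})`);
    }
    await new Promise((resolve) => setTimeout(resolve, options.interval));
  }
}

// Usage against a fake job endpoint that completes on the third poll.
async function demo(): Promise<string> {
  let polls = 0;
  const result = await recurse(
    async () => ({ body: { status: ++polls >= 3 ? 'completed' : 'running' } }),
    (response) => response.body.status === 'completed',
    { timeout: 1000, interval: 10, log: 'Waiting for job to complete' }
  );
  return result.body.status;
}
```

The real fixture layers logging and response handling on top, but the call shape — command first, predicate second, options last — is the part your tests depend on.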
## Technical Implementation
@ -520,7 +529,7 @@ For details on the knowledge base index, see:
**Overview:**
- [TEA Overview](/docs/explanation/features/tea-overview.md) - Knowledge base in workflows
- [Testing as Engineering](/docs/explanation/philosophy/testing-as-engineering.md) - **Foundation: Context engineering philosophy** (why knowledge base solves AI test problems)
## Practical Guides


@ -125,6 +125,40 @@ test('should load dashboard data', async ({ page }) => {
- No fixed timeout (fast when API is fast)
- Validates API response (catch backend errors)
**With Playwright Utils (Even Cleaner):**
```typescript
import { test } from '@seontechnologies/playwright-utils/fixtures';
import { expect } from '@playwright/test';
test('should load dashboard data', async ({ page, interceptNetworkCall }) => {
// Set up interception BEFORE navigation
const dashboardCall = interceptNetworkCall({
method: 'GET',
url: '**/api/dashboard'
});
// Navigate
await page.goto('/dashboard');
// Wait for API response (automatic JSON parsing)
const { status, responseJson: data } = await dashboardCall;
// Validate API response
expect(status).toBe(200);
expect(data.items).toBeDefined();
// Assert UI matches API data
await expect(page.locator('.data-table')).toBeVisible();
await expect(page.locator('.data-table tr')).toHaveCount(data.items.length);
});
```
**Playwright Utils Benefits:**
- Automatic JSON parsing (no `await response.json()`)
- Returns `{ status, responseJson, requestJson }` structure
- Cleaner API (no need to check `resp.ok()`)
- Same intercept-before-navigate pattern
### Intercept-Before-Navigate Pattern
**Key insight:** Set up wait BEFORE triggering the action.
@ -196,6 +230,7 @@ sequenceDiagram
### TEA Generates Network-First Tests
**Vanilla Playwright:**
```typescript
// When you run *atdd or *automate, TEA generates:
@ -219,6 +254,37 @@ test('should create user', async ({ page }) => {
});
```
**With Playwright Utils (if `tea_use_playwright_utils: true`):**
```typescript
import { test } from '@seontechnologies/playwright-utils/fixtures';
import { expect } from '@playwright/test';
test('should create user', async ({ page, interceptNetworkCall }) => {
// TEA uses interceptNetworkCall for cleaner interception
const createUserCall = interceptNetworkCall({
method: 'POST',
url: '**/api/users'
});
await page.getByLabel('Name').fill('Test User');
await page.getByRole('button', { name: 'Submit' }).click();
// Wait for response (automatic JSON parsing)
const { status, responseJson: user } = await createUserCall;
// Validate both API and UI
expect(status).toBe(201);
expect(user.id).toBeDefined();
await expect(page.locator('.success')).toContainText(user.name);
});
```
**Playwright Utils Benefits:**
- Automatic JSON parsing (`responseJson` ready to use)
- No manual `await response.json()`
- Returns `{ status, responseJson }` structure
- Cleaner, more readable code
### TEA Reviews for Hard Waits
When you run `*test-review`:
@ -252,6 +318,7 @@ await responsePromise; // ✅
### Basic Response Wait
**Vanilla Playwright:**
```typescript
// Wait for any successful response
const promise = page.waitForResponse(resp => resp.ok());
@ -259,8 +326,23 @@ await page.click('button');
await promise;
```
**With Playwright Utils:**
```typescript
import { test } from '@seontechnologies/playwright-utils/fixtures';
test('basic wait', async ({ page, interceptNetworkCall }) => {
const responseCall = interceptNetworkCall({ url: '**' }); // Match any
await page.click('button');
const { status } = await responseCall;
expect(status).toBe(200);
});
```
---
### Specific URL Match
**Vanilla Playwright:**
```typescript
// Wait for specific endpoint
const promise = page.waitForResponse(
@ -270,8 +352,21 @@ await page.goto('/user/123');
await promise;
```
**With Playwright Utils:**
```typescript
test('specific URL', async ({ page, interceptNetworkCall }) => {
const userCall = interceptNetworkCall({ url: '**/api/users/123' });
await page.goto('/user/123');
const { status, responseJson } = await userCall;
expect(status).toBe(200);
});
```
---
### Method + Status Match
**Vanilla Playwright:**
```typescript
// Wait for POST that returns 201
const promise = page.waitForResponse(
@ -284,8 +379,24 @@ await page.click('button[type="submit"]');
await promise;
```
**With Playwright Utils:**
```typescript
test('method and status', async ({ page, interceptNetworkCall }) => {
const createCall = interceptNetworkCall({
method: 'POST',
url: '**/api/users'
});
await page.click('button[type="submit"]');
const { status, responseJson } = await createCall;
expect(status).toBe(201); // Explicit status check
});
```
---
### Multiple Responses
**Vanilla Playwright:**
```typescript
// Wait for multiple API calls
const [usersResp, postsResp] = await Promise.all([
@ -298,8 +409,29 @@ const users = await usersResp.json();
const posts = await postsResp.json();
```
**With Playwright Utils:**
```typescript
test('multiple responses', async ({ page, interceptNetworkCall }) => {
const usersCall = interceptNetworkCall({ url: '**/api/users' });
const postsCall = interceptNetworkCall({ url: '**/api/posts' });
await page.goto('/dashboard'); // Triggers both
const [{ responseJson: users }, { responseJson: posts }] = await Promise.all([
usersCall,
postsCall
]);
expect(users).toBeInstanceOf(Array);
expect(posts).toBeInstanceOf(Array);
});
```
---
### Validate Response Data
**Vanilla Playwright:**
```typescript
// Verify API response before asserting UI
const promise = page.waitForResponse(
@ -319,6 +451,28 @@ expect(order.total).toBeGreaterThan(0);
await expect(page.locator('.order-confirmation')).toContainText(order.id);
```
**With Playwright Utils:**
```typescript
test('validate response data', async ({ page, interceptNetworkCall }) => {
const checkoutCall = interceptNetworkCall({
method: 'POST',
url: '**/api/checkout'
});
await page.click('button:has-text("Complete Order")');
const { status, responseJson: order } = await checkoutCall;
// Response validation (automatic JSON parsing)
expect(status).toBe(200);
expect(order.status).toBe('confirmed');
expect(order.total).toBeGreaterThan(0);
// UI validation
await expect(page.locator('.order-confirmation')).toContainText(order.id);
});
```
## Advanced Patterns
### HAR Recording for Offline Testing
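Playwright ships HAR support out of the box via `page.routeFromHAR`. A minimal sketch (the HAR path and URL filter are illustrative):

```typescript
import { test, expect } from '@playwright/test';

test('dashboard works offline from recorded HAR', async ({ page }) => {
  // Replay previously recorded API traffic from the HAR file.
  // Run once with update: true to (re)record against a live backend.
  await page.routeFromHAR('./hars/dashboard.har', {
    url: '**/api/**',
    update: false
  });

  await page.goto('/dashboard');
  await expect(page.locator('.data-table')).toBeVisible();
});
```

With `update: false`, matching requests never hit the network, so the test runs deterministically even with the backend down.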
@ -481,6 +635,36 @@ test('dashboard loads data', async ({ page }) => {
- Validates UI matches API (catch frontend bugs)
- Works in any environment (local, CI, staging)
**With Playwright Utils (Even Better):**
```typescript
import { test } from '@seontechnologies/playwright-utils/fixtures';
test('dashboard loads data', async ({ page, interceptNetworkCall }) => {
const dashboardCall = interceptNetworkCall({
method: 'GET',
url: '**/api/dashboard'
});
await page.goto('/dashboard');
const { status, responseJson: { items } } = await dashboardCall;
// Validate API response (automatic JSON parsing)
expect(status).toBe(200);
expect(items).toHaveLength(5);
// Validate UI matches API
await expect(page.locator('table tr')).toHaveCount(items.length);
});
```
**Additional Benefits:**
- No manual `await response.json()` (automatic parsing)
- Cleaner destructuring of nested data
- Consistent API across all network calls
---
### Form Submission
**Traditional (Flaky):**
@ -513,6 +697,35 @@ test('form submission', async ({ page }) => {
});
```
**With Playwright Utils:**
```typescript
import { test } from '@seontechnologies/playwright-utils/fixtures';
test('form submission', async ({ page, interceptNetworkCall }) => {
const submitCall = interceptNetworkCall({
method: 'POST',
url: '**/api/submit'
});
await page.getByLabel('Email').fill('test@example.com');
await page.getByRole('button', { name: 'Submit' }).click();
const { status, responseJson: result } = await submitCall;
// Automatic JSON parsing, no manual await
expect(status).toBe(200);
expect(result.success).toBe(true);
await expect(page.locator('.success')).toBeVisible();
});
```
**Progression:**
- Traditional: Hard waits (flaky)
- Network-First (Vanilla): waitForResponse (deterministic)
- Network-First (PW-Utils): interceptNetworkCall (deterministic + cleaner API)
---
## Common Misconceptions
### "I Already Use waitForSelector"
@ -545,29 +758,57 @@ await page.waitForSelector('.success'); // Then validate UI
### "Too Much Boilerplate"
**Problem:** `waitForResponse` is verbose, repeated in every test.
**Solution:** Use Playwright Utils `interceptNetworkCall` - built-in fixture that reduces boilerplate.
**Vanilla Playwright (Repetitive):**
```typescript
test('test 1', async ({ page }) => {
const promise = page.waitForResponse(
resp => resp.url().includes('/api/submit') && resp.ok()
);
await page.click('button');
await promise;
});
test('test 2', async ({ page }) => {
const promise = page.waitForResponse(
resp => resp.url().includes('/api/load') && resp.ok()
);
await page.click('button');
await promise;
});
// Repeated pattern in every test
```
**With Playwright Utils (Cleaner):**
```typescript
import { test } from '@seontechnologies/playwright-utils/fixtures';
test('test 1', async ({ page, interceptNetworkCall }) => {
const submitCall = interceptNetworkCall({ url: '**/api/submit' });
await page.click('button');
const { status, responseJson } = await submitCall;
expect(status).toBe(200);
});
test('test 2', async ({ page, interceptNetworkCall }) => {
const loadCall = interceptNetworkCall({ url: '**/api/load' });
await page.click('button');
const { responseJson } = await loadCall;
// Automatic JSON parsing, cleaner API
});
```
**Benefits:**
- Less boilerplate (fixture handles complexity)
- Automatic JSON parsing
- Glob pattern matching (`**/api/**`)
- Consistent API across all tests
See [Integrate Playwright Utils](/docs/how-to/customization/integrate-playwright-utils.md#intercept-network-call) for setup.
## Technical Implementation
For detailed network-first patterns, see the knowledge base:


@ -573,7 +573,7 @@ flowchart TD
- [How to Run NFR Assessment](/docs/how-to/workflows/run-nfr-assess.md) - NFR risk assessment
**Use-Case Guides:**
- [Running TEA for Enterprise](/docs/how-to/enterprise/use-tea-for-enterprise.md) - Enterprise risk management
## Reference


@ -107,7 +107,7 @@ test('flaky test', async ({ page }) => {
});
```
**Good Example (Vanilla Playwright):**
```typescript
test('deterministic test', async ({ page }) => {
const responsePromise = page.waitForResponse(
@ -126,12 +126,43 @@ test('deterministic test', async ({ page }) => {
});
```
**With Playwright Utils (Even Cleaner):**
```typescript
import { test } from '@seontechnologies/playwright-utils/fixtures';
import { expect } from '@playwright/test';
test('deterministic test', async ({ page, interceptNetworkCall }) => {
const submitCall = interceptNetworkCall({
method: 'POST',
url: '**/api/submit'
});
await page.click('button');
// Wait for actual response (automatic JSON parsing)
const { status, responseJson } = await submitCall;
expect(status).toBe(200);
// Modal should ALWAYS show (make it deterministic)
await expect(page.locator('.modal')).toBeVisible();
await page.click('.dismiss');
// Explicit assertion (fails if not visible)
await expect(page.locator('.success')).toBeVisible();
});
```
**Why both work:**
- Waits for actual event (network response)
- No conditionals (behavior is deterministic)
- Assertions fail loudly (no silent failures)
- Same result every run (deterministic)
**Playwright Utils additional benefits:**
- Automatic JSON parsing
- `{ status, responseJson }` structure (can validate response data)
- No manual `await response.json()`
### 2. Isolation (No Dependencies)
**Rule:** Test runs independently, no shared state.
@ -152,7 +183,7 @@ test('create user', async ({ apiRequest }) => {
const { body } = await apiRequest({
method: 'POST',
path: '/api/users',
body: { email: 'test@example.com' } // hard-coded
});
userId = body.id; // Store in global
});
@ -162,7 +193,7 @@ test('update user', async ({ apiRequest }) => {
await apiRequest({
method: 'PATCH',
path: `/api/users/${userId}`,
body: { name: 'Updated' }
});
// No cleanup - leaves user in database
});
@ -213,7 +244,7 @@ test('should update user profile', async ({ apiRequest }) => {
const { status: createStatus, body: user } = await apiRequest({
method: 'POST',
path: '/api/users',
body: { email: testEmail, name: faker.person.fullName() }
});
expect(createStatus).toBe(201);
@ -222,7 +253,7 @@ test('should update user profile', async ({ apiRequest }) => {
const { status, body: updated } = await apiRequest({
method: 'PATCH',
path: `/api/users/${user.id}`,
body: { name: 'Updated Name' }
});
expect(status).toBe(200);
@ -412,7 +443,7 @@ test('slow test', async ({ page }) => {
**Total time:** 3+ minutes (95 seconds wasted on hard waits)
**Good Example (Vanilla Playwright):**
```typescript
// ✅ Fast test (< 10 seconds)
test('fast test', async ({ page }) => {
@ -436,8 +467,50 @@ test('fast test', async ({ page }) => {
});
```
**With Playwright Utils:**
```typescript
import { test } from '@seontechnologies/playwright-utils/fixtures';
import { expect } from '@playwright/test';
test('fast test', async ({ page, interceptNetworkCall }) => {
// Set up interception
const resultCall = interceptNetworkCall({
method: 'GET',
url: '**/api/result'
});
await page.goto('/');
// Direct navigation (skip intermediate pages)
await page.goto('/page-10');
// Efficient selector
await page.getByRole('button', { name: 'Submit' }).click();
// Wait for actual response (automatic JSON parsing)
const { status, responseJson } = await resultCall;
expect(status).toBe(200);
await expect(page.locator('.result')).toBeVisible();
// Can also validate response data if needed
// expect(responseJson.data).toBeDefined();
});
```
**Total time:** < 10 seconds (no wasted waits)
**Both examples achieve:**
- No hard waits (wait for actual events)
- Direct navigation (skip unnecessary steps)
- Efficient selectors (getByRole)
- Fast execution
**Playwright Utils bonus:**
- Can validate API response data easily
- Automatic JSON parsing
- Cleaner API
## TEA's Quality Scoring
TEA reviews tests against these standards in `*test-review`:
@ -821,7 +894,7 @@ For detailed test quality patterns, see:
**Use-Case Guides:**
- [Using TEA with Existing Tests](/docs/how-to/brownfield/use-tea-with-existing-tests.md) - Improve legacy quality
- [Running TEA for Enterprise](/docs/how-to/enterprise/use-tea-for-enterprise.md) - Enterprise quality thresholds
## Reference


@ -150,34 +150,40 @@ test('checkout completes', async ({ page }) => {
});
```
**After (With Playwright Utils - Cleaner API):**
```typescript
import { test } from '@seontechnologies/playwright-utils/fixtures';
import { expect } from '@playwright/test';
test('checkout completes', async ({ page, interceptNetworkCall }) => {
// Use interceptNetworkCall for cleaner network interception
const checkoutCall = interceptNetworkCall({
method: 'POST',
url: '**/api/checkout'
});
await page.click('button[name="checkout"]');
// Wait for response (automatic JSON parsing)
const { status, responseJson: order } = await checkoutCall;
// Validate API response
expect(status).toBe(200);
expect(order.status).toBe('confirmed');
// Validate UI
await expect(page.locator('.confirmation')).toBeVisible();
});
```
**Playwright Utils Benefits:**
- `interceptNetworkCall` for cleaner network interception
- Automatic JSON parsing (`responseJson` ready to use)
- No manual `await response.json()`
- Glob pattern matching (`**/api/checkout`)
- Cleaner, more maintainable code
**For automatic error detection,** use `network-error-monitor` fixture separately. See [Integrate Playwright Utils](/docs/how-to/customization/integrate-playwright-utils.md#network-error-monitor).
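For reference, enabling the monitor is import-only — a minimal sketch (the page flow here is illustrative):

```typescript
import { test } from '@seontechnologies/playwright-utils/network-error-monitor/fixtures';
import { expect } from '@playwright/test';

// Importing this test object is the only setup: if ANY request during the
// test returns 4xx/5xx, the test fails with a structured error report,
// e.g. "Network errors detected: POST 500 /api/payment".
test('checkout completes', async ({ page }) => {
  await page.goto('/checkout');
  await page.click('button[name="checkout"]');
  await expect(page.locator('.confirmation')).toBeVisible();
});
```

This catches silent backend errors even when the UI shows a cached or stale success message.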
**Priority 3: P1 Requirements**
```
@ -353,7 +359,7 @@ test.skip('flaky test - needs fixing', async ({ page }) => {
# Quarantined Tests
| Test | Reason | Owner | Target Fix Date |
| ------------------- | -------------------------- | -------- | --------------- |
| checkout.spec.ts:45 | Hard wait causes flakiness | QA Team | 2026-01-20 |
| profile.spec.ts:28 | Conditional flow control | Dev Team | 2026-01-25 |
```
@ -399,7 +405,7 @@ Same process
# Test Suite Status
| Directory | Tests | Quality Score | Status | Notes |
| ------------------ | ----- | ------------- | ------------- | -------------- |
| tests/auth/ | 15 | 85/100 | ✅ Modernized | Week 1 cleanup |
| tests/api/ | 32 | 78/100 | ⚠️ In Progress | Week 2 |
| tests/e2e/ | 28 | 62/100 | ❌ Legacy | Week 3 planned |
@ -465,15 +471,26 @@ Incremental changes = lower risk
**Solution:**
```
1. Configure parallel execution (shard tests across workers)
2. Add selective testing (run only affected tests on PR)
3. Run full suite nightly only
4. Optimize slow tests (remove hard waits, improve selectors)
Before: 4 hours sequential
After: 15 minutes with sharding + selective testing
```
**How `*ci` helps:**
- Scaffolds CI configuration with parallel sharding examples
- Provides selective testing script templates
- Documents burn-in and optimization strategies
- But YOU configure workers, test selection, and optimization
**With Playwright Utils burn-in:**
- Smart selective testing based on git diff
- Volume control (run percentage of affected tests)
- See [Integrate Playwright Utils](/docs/how-to/customization/integrate-playwright-utils.md#burn-in)
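As a reference point, the sharding step can look like this in GitHub Actions (a sketch using Playwright's built-in `--shard` flag; shard count and job layout are illustrative):

```yaml
# Illustrative workflow fragment: split the suite across 4 parallel jobs
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        shard: [1, 2, 3, 4]
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npx playwright install --with-deps
      - run: npx playwright test --shard=${{ matrix.shard }}/4
```

Selective testing then narrows which specs each shard runs, which is where most of the 4-hours-to-15-minutes win comes from.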
### "We Have Tests But They Always Fail"
**Problem:** Tests are so flaky they're ignored.
@ -530,43 +547,6 @@ Don't let perfect be the enemy of good
*trace Phase 2 - Gate decision
```
## Related Guides
**Workflow Guides:**


@ -18,17 +18,25 @@ MCP (Model Context Protocol) servers enable AI agents to interact with live brow
## When to Use This
**For UI Testing:**
- Want exploratory mode in `*test-design` (browser-based UI discovery)
- Want recording mode in `*atdd` or `*automate` (verify selectors with live browser)
- Want healing mode in `*automate` (fix tests with visual debugging)
- Debugging complex UI interactions
**For API Testing:**
- Want healing mode in `*automate` (analyze failures with trace data)
- Need to debug test failures (network responses, request/response data, timing)
- Want to inspect trace files (network traffic, errors, race conditions)
**For Both:**
- Visual debugging (trace viewer shows network + UI)
- Test failure analysis (MCP can run tests and extract errors)
- Understanding complex test failures (network + DOM together)
**Don't use if:**
- You're new to TEA (adds complexity)
- You don't have MCP servers configured
- Your tests work fine without it
## Prerequisites
@ -71,13 +79,11 @@ MCP (Model Context Protocol) servers enable AI agents to interact with live brow
Both servers work together to provide full TEA MCP capabilities.
## Setup
### 1. Configure MCP Servers
Add to your IDE's MCP configuration:
```json
{
@ -94,36 +100,20 @@ Add this configuration to your IDE's MCP settings. See [TEA Overview](/docs/expl
}
```
See [TEA Overview](/docs/explanation/features/tea-overview.md#playwright-mcp-enhancements) for IDE-specific config locations.
### 2. Enable in BMAD
Answer "Yes" when prompted during installation, or set in config:
```yaml
# _bmad/bmm/config.yaml
tea_use_mcp_enhancements: true
```
### 3. Verify MCPs Running
Ensure your MCP servers are running in your IDE.
## How MCP Enhances TEA Workflows
@ -162,16 +152,14 @@ I'll design tests for these interactions."
**Without MCP:**
- TEA generates selectors from best practices
- May use `getByRole()` that doesn't match actual app
- TEA infers API patterns from documentation
**With MCP (Recording Mode):**
**For UI Tests:**
```
"Let me verify the login form selectors"
[TEA navigates to /login with live browser]
[Inspects actual form fields]
"I see:
- Email input has label 'Email Address' (not 'Email')
@ -181,47 +169,58 @@ TEA verifies selectors with live browser:
I'll use these exact selectors."
```
**For API Tests:**
```
[TEA analyzes trace files from test runs]
[Inspects network requests/responses]
"I see the API returns:
- POST /api/login → 200 with { token, userId }
- Response time: 150ms
- Required headers: Content-Type, Authorization
I'll validate these in tests."
```
**Benefits:**
- UI: Accurate selectors from real DOM
- API: Validated request/response patterns from trace
- Both: Tests work on first run
### *automate: Healing + Recording Modes
**Without MCP:**
- TEA analyzes test code only
- Suggests fixes based on static analysis
- Can't verify fixes work
- Generates tests from documentation/code
**Healing Mode (UI + API):**
```
"This test is failing. Let me debug with trace viewer"
[TEA opens trace file]
[Analyzes screenshots + network tab]
UI failures: "Button selector changed from 'Save' to 'Save Changes'"
API failures: "Response structure changed, expected {id} got {userId}"
[TEA makes fixes]
[Verifies with trace analysis]
```
**Recording Mode (UI + API):**
```
UI: [Inspects actual DOM, generates verified selectors]
API: [Analyzes network traffic, validates request/response patterns]
[Generates tests with verified patterns]
[Tests work on first run]
```
**Benefits:**
- Visual debugging + trace analysis (not just UI)
- Verified selectors (UI) + network patterns (API)
- Tests verified against actual application behavior
## Usage Examples
@ -290,43 +289,6 @@ Fixing selector and verifying...
Updated test with corrected selector.
```
## Configuration Options
### MCP Server Arguments
**Playwright MCP with custom port:**
```json
{
"mcpServers": {
"playwright": {
"command": "npx",
"args": ["@playwright/mcp@latest", "--port", "3000"]
}
}
}
```
**Playwright Test with specific browser:**
```json
{
"mcpServers": {
"playwright-test": {
"command": "npx",
"args": ["playwright", "run-test-mcp-server", "--browser", "chromium"]
}
}
}
```
### Environment Variables
```bash
# .env
PLAYWRIGHT_BROWSER=chromium # Browser for MCP
PLAYWRIGHT_HEADLESS=false # Show browser during MCP
PLAYWRIGHT_SLOW_MO=100 # Slow down for visibility
```
## Troubleshooting
### MCP Servers Not Running
@ -433,107 +395,6 @@ tea_use_mcp_enhancements: true
tea_use_mcp_enhancements: false
```
## Best Practices
### Use MCP for Complex UIs
**Simple UI (skip MCP):**
```
Standard login form with email/password
TEA can infer selectors without MCP
```
**Complex UI (use MCP):**
```
Multi-step wizard with dynamic fields
Conditional UI elements
Third-party components
Custom form widgets
```
### Start Without MCP, Enable When Needed
**Learning path:**
1. Week 1-2: TEA without MCP (learn basics)
2. Week 3: Enable MCP (explore advanced features)
3. Week 4+: Use MCP selectively (when it adds value)
### Combine with Playwright Utils
**Powerful combination:**
```yaml
tea_use_playwright_utils: true
tea_use_mcp_enhancements: true
```
**Benefits:**
- Playwright Utils provides production-ready utilities
- MCP verifies utilities work with actual app
- Best of both worlds
### Use for Test Healing
**Scenario:** Test suite has 50 failing tests after UI update.
**With MCP:**
```
*automate (healing mode)
TEA:
1. Opens trace viewer for each failure
2. Identifies changed selectors
3. Updates tests with corrected selectors
4. Verifies fixes with browser
5. Provides updated tests
Result: 45/50 tests auto-healed
```
### Use for New Team Members
**Onboarding:**
```
New developer: "I don't know this codebase's UI"
Senior: "Run *test-design with MCP exploratory mode"
TEA explores UI and generates documentation:
- UI structure discovered
- Interactive elements mapped
- Test design created automatically
```
## Security Considerations
### MCP Servers Have Browser Access
**What MCP can do:**
- Navigate to any URL
- Click any element
- Fill any form
- Access browser storage
- Read page content
**Best practices:**
- Only configure MCP in trusted environments
- Don't use MCP on production sites (use staging/dev)
- Review generated tests before running on production
- Keep MCP config in local files (not committed)
### Protect Credentials
**Don't:**
```
"TEA, login with mypassword123"
# Password visible in chat history
```
**Do:**
```
"TEA, login using credentials from .env"
# Password loaded from environment, not in chat
```
## Related Guides
**Getting Started:**


@ -62,7 +62,7 @@ Edit `_bmad/bmm/config.yaml`:
tea_use_playwright_utils: true
```
**Note:** If you enabled this during BMad installation, it's already set.
### Step 3: Verify Installation
@ -175,13 +175,16 @@ Reviews against playwright-utils best practices:
### *ci Workflow
**Without Playwright Utils:**
Basic CI configuration:
- Parallel sharding
- Burn-in loops (basic shell scripts)
- CI triggers (PR, push, schedule)
- Artifact collection
**With Playwright Utils:**
Enhanced with smart testing:
- Burn-in utility (git diff-based, volume control)
- Selective testing (skip config/docs/types changes)
- Test prioritization by file changes
## Available Utilities
@ -189,6 +192,18 @@ Enhanced CI with:
Typed HTTP client with schema validation.
**Official Docs:** <https://seontechnologies.github.io/playwright-utils/api-request.html>
**Why Use This?**
| Vanilla Playwright | api-request Utility |
|-------------------|---------------------|
| Manual `await response.json()` | Automatic JSON parsing |
| `response.status()` + separate body parsing | Returns `{ status, body }` structure |
| No built-in retry | Automatic retry for 5xx errors |
| No schema validation | Single-line `.validateSchema()` |
| Verbose status checking | Clean destructuring |
**Usage:**
```typescript
import { test } from '@seontechnologies/playwright-utils/api-request/fixtures';
@ -206,7 +221,7 @@ test('should create user', async ({ apiRequest }) => {
method: 'POST',
path: '/api/users', // Note: 'path' not 'url'
body: { name: 'Test User', email: 'test@example.com' } // Note: 'body' not 'data'
}).validateSchema(UserSchema); // Chained method (can await separately if needed)
expect(status).toBe(201);
expect(body.id).toBeDefined();
@ -224,6 +239,17 @@ test('should create user', async ({ apiRequest }) => {
Authentication session management with token persistence.
**Official Docs:** <https://seontechnologies.github.io/playwright-utils/auth-session.html>
**Why Use This?**
| Vanilla Playwright Auth | auth-session |
|------------------------|--------------|
| Re-authenticate every test run (slow) | Authenticate once, persist to disk |
| Single user per setup | Multi-user support (roles, accounts) |
| No token expiration handling | Automatic token renewal |
| Manual session management | Provider pattern (flexible auth) |
**Usage:**
```typescript
import { test } from '@seontechnologies/playwright-utils/auth-session/fixtures';
@ -262,6 +288,17 @@ async function globalSetup() {
Record and replay network traffic (HAR) for offline testing.
**Official Docs:** <https://seontechnologies.github.io/playwright-utils/network-recorder.html>
**Why Use This?**
| Vanilla Playwright HAR | network-recorder |
|------------------------|------------------|
| Manual `routeFromHAR()` configuration | Automatic HAR management with `PW_NET_MODE` |
| Separate record/playback test files | Same test, switch env var |
| No CRUD detection | Stateful mocking (POST/PUT/DELETE work) |
| Manual HAR file paths | Auto-organized by test name |
**Usage:**
```typescript
import { test } from '@seontechnologies/playwright-utils/network-recorder/fixtures';
@ -301,6 +338,17 @@ PW_NET_MODE=playback npx playwright test
Spy or stub network requests with automatic JSON parsing.
**Official Docs:** <https://seontechnologies.github.io/playwright-utils/intercept-network-call.html>
**Why Use This?**
| Vanilla Playwright | interceptNetworkCall |
|-------------------|----------------------|
| Route setup + response waiting (separate steps) | Single declarative call |
| Manual `await response.json()` | Automatic JSON parsing (`responseJson`) |
| Complex filter predicates | Simple glob patterns (`**/api/**`) |
| Verbose syntax | Concise, readable API |
**Usage:**
```typescript
import { test } from '@seontechnologies/playwright-utils/fixtures';
@ -337,6 +385,17 @@ test('should handle API errors', async ({ page, interceptNetworkCall }) => {
Async polling for eventual consistency (Cypress-style).
**Official Docs:** <https://seontechnologies.github.io/playwright-utils/recurse.html>
**Why Use This?**
| Manual Polling | recurse Utility |
|----------------|-----------------|
| `while` loops with `waitForTimeout` | Smart polling with exponential backoff |
| Hard-coded retry logic | Configurable timeout/interval |
| No logging visibility | Optional logging with custom messages |
| Verbose, error-prone | Clean, readable API |
**Usage:**
```typescript
import { test } from '@seontechnologies/playwright-utils/fixtures';
@ -373,6 +432,17 @@ test('should wait for async job completion', async ({ apiRequest, recurse }) =>
Structured logging that integrates with Playwright reports.
**Official Docs:** <https://seontechnologies.github.io/playwright-utils/log.html>
**Why Use This?**
| Console.log / print | log Utility |
|--------------------|-------------|
| Not in test reports | Integrated with Playwright reports |
| No step visualization | `.step()` shows in Playwright UI |
| Manual object formatting | Logs objects seamlessly |
| No structured output | JSON artifacts for debugging |
**Usage:**
```typescript
import { log } from '@seontechnologies/playwright-utils';
@ -396,13 +466,24 @@ test('should login', async ({ page }) => {
- Direct import (no fixture needed for basic usage)
- Structured logs in test reports
- `.step()` shows in Playwright UI
- Logs objects seamlessly (no special handling needed)
- Trace test execution
### file-utils
Read and validate CSV, PDF, XLSX, ZIP files.
**Official Docs:** <https://seontechnologies.github.io/playwright-utils/file-utils.html>
**Why Use This?**
| Vanilla Playwright | file-utils |
|-------------------|------------|
| ~80 lines per CSV flow | ~10 lines end-to-end |
| Manual download event handling | `handleDownload()` encapsulates all |
| External parsing libraries | Auto-parsing (CSV, XLSX, PDF, ZIP) |
| No validation helpers | Built-in validation (headers, row count) |
**Usage:**
```typescript
import { handleDownload, readCSV } from '@seontechnologies/playwright-utils/file-utils';
@ -444,6 +525,17 @@ test('should export valid CSV', async ({ page }) => {
Smart test selection with git diff analysis for CI optimization.
**Official Docs:** <https://seontechnologies.github.io/playwright-utils/burn-in.html>
**Why Use This?**
| Playwright `--only-changed` | burn-in Utility |
|-----------------------------|-----------------|
| Config changes trigger all tests | Smart filtering (skip configs, types, docs) |
| All or nothing | Volume control (run percentage) |
| No customization | Custom dependency analysis |
| Slow CI on minor changes | Fast CI with intelligent selection |
**Usage:**
```typescript
// scripts/burn-in-changed.ts
@ -490,6 +582,7 @@ export default config;
```
**Benefits:**
- **Ensure flake-free tests upfront** - Never deal with test flake again
- Smart filtering (skip config, types, docs changes)
- Volume control (run percentage of affected tests)
- Git diff-based test selection
@ -499,6 +592,17 @@ export default config;
Automatically detect HTTP 4xx/5xx errors during tests.
**Official Docs:** <https://seontechnologies.github.io/playwright-utils/network-error-monitor.html>
**Why Use This?**
| Vanilla Playwright | network-error-monitor |
|-------------------|----------------------|
| UI passes, backend 500 ignored | Auto-fails on any 4xx/5xx |
| Manual error checking | Zero boilerplate (auto-enabled) |
| Silent failures slip through | Acts like Sentry for tests |
| No domino effect prevention | Limits cascading failures |
**Usage:**
```typescript
import { test } from '@seontechnologies/playwright-utils/network-error-monitor/fixtures';
@ -540,98 +644,76 @@ test.describe('error handling',
**Benefits:**
- Auto-enabled (zero setup)
- Catches silent backend failures (500, 503, 504)
- **Prevents domino effect** (limits cascading failures from one bad endpoint)
- Opt-out with annotations for validation tests
- Structured error reporting (JSON artifacts)
## Fixture Composition
Combine utilities using `mergeTests`:
**Option 1: Use Combined Fixtures (Simplest)**
```typescript
// Import all utilities at once
import { test } from '@seontechnologies/playwright-utils/fixtures';
import { log } from '@seontechnologies/playwright-utils';
import { expect } from '@playwright/test';
test('api test', async ({ apiRequest, interceptNetworkCall }) => {
await log.info('Fetching users');
const { status, body } = await apiRequest({
method: 'GET',
path: '/api/users'
});
await log.info('Data fetched', body);
expect(status).toBe(200);
});
```
**Note:** `log` is imported directly (not a fixture).
**Option 2: Merge Individual Fixtures (Selective)**
**File 1: support/merged-fixtures.ts**
```typescript
import { test as base, mergeTests } from '@playwright/test';
import { test as apiRequest } from '@seontechnologies/playwright-utils/api-request/fixtures';
import { test as interceptNetworkCall } from '@seontechnologies/playwright-utils/intercept-network-call/fixtures';
import { test as networkErrorMonitor } from '@seontechnologies/playwright-utils/network-error-monitor/fixtures';
import { log } from '@seontechnologies/playwright-utils';
// Merge only what you need
export const test = mergeTests(
base,
apiRequest,
interceptNetworkCall,
networkErrorMonitor
);
export const expect = base.expect;
export { log };
```
**File 2: tests/api/users.spec.ts**
```typescript
import { test, expect, log } from '../support/merged-fixtures';
test('api test', async ({ apiRequest, interceptNetworkCall }) => {
await log.info('Fetching users');
const { status, body } = await apiRequest({
method: 'GET',
path: '/api/users'
});
await log.info('Data fetched', body);
expect(status).toBe(200);
});
```
**Contrast:**
- Option 1: All utilities available, zero setup
- Option 2: Pick utilities you need, one central file
**Recommended:** Use Option 1 (combined fixtures) unless you need fine control over which utilities are included.
## Configuration
### Environment Variables
```bash
# .env
PLAYWRIGHT_UTILS_LOG_LEVEL=debug # debug | info | warn | error
PLAYWRIGHT_UTILS_RETRY_ATTEMPTS=3
PLAYWRIGHT_UTILS_TIMEOUT=30000
```
### Playwright Config
```typescript
// playwright.config.ts
import { defineConfig } from '@playwright/test';
export default defineConfig({
use: {
// Playwright Utils works with standard Playwright config
baseURL: process.env.BASE_URL || 'http://localhost:3000',
extraHTTPHeaders: {
// Add headers used by utilities
}
}
});
```
**See working examples:** <https://github.com/seontechnologies/playwright-utils/tree/main/playwright/support>
## Troubleshooting
@ -698,47 +780,6 @@ expect(status).toBe(200);
## Migration Guide
### Migrating Existing Tests
**Before (Vanilla Playwright):**
```typescript
test('should access protected route', async ({ page, request }) => {
// Manual auth token fetch
const response = await request.post('/api/auth/login', {
data: { email: 'test@example.com', password: 'pass' }
});
const { token } = await response.json();
// Manual token storage
await page.goto('/dashboard');
await page.evaluate((token) => {
localStorage.setItem('authToken', token);
}, token);
await expect(page).toHaveURL('/dashboard');
});
```
**After (With Playwright Utils):**
```typescript
import { test } from '@seontechnologies/playwright-utils/auth-session/fixtures';
test('should access protected route', async ({ page, authToken }) => {
// authToken automatically fetched and persisted by fixture
await page.goto('/dashboard');
// Token is already in place (no manual storage needed)
await expect(page).toHaveURL('/dashboard');
});
```
**Benefits:**
- Token fetched once, reused across all tests (persisted to disk)
- No manual token storage or management
- Automatic token renewal if expired
- Multi-user support via `authOptions.userIdentifier`
- 10 lines → 5 lines (less code)
## Related Guides
**Getting Started:**
@ -755,6 +796,7 @@ test('should access protected route', async ({ page, authToken }) => {
## Understanding the Concepts
- [Testing as Engineering](/docs/explanation/philosophy/testing-as-engineering.md) - **Why Playwright Utils matters** (part of TEA's three-part solution)
- [Fixture Architecture](/docs/explanation/tea/fixture-architecture.md) - Pure function → fixture pattern
- [Network-First Patterns](/docs/explanation/tea/network-first-patterns.md) - Network utilities explained
- [Test Quality Standards](/docs/explanation/tea/test-quality-standards.md) - Patterns PW-Utils enforces


@ -90,16 +90,14 @@ TEA will ask what test levels to generate:
- E2E tests (browser-based, full user journey)
- API tests (backend only, faster)
- Component tests (UI components in isolation)
- Mix of levels (see [API Tests First, E2E Later](#api-tests-first-e2e-later) tip)
### Component Testing by Framework
TEA generates component tests using framework-appropriate tools:
| Your Framework | Component Testing Tool |
| -------------- | ------------------------------------------- |
| **Cypress** | Cypress Component Testing (*.cy.tsx) |
| **Playwright** | Vitest + React Testing Library (*.test.tsx) |
@ -190,7 +188,7 @@ test.describe('Profile API', () => {
const { status, body } = await apiRequest({
method: 'PATCH',
path: '/api/profile',
body: {
body: {
name: 'Updated Name',
email: 'updated@example.com'
}
@ -205,7 +203,7 @@ test.describe('Profile API', () => {
const { status, body } = await apiRequest({
method: 'PATCH',
path: '/api/profile',
body: { email: 'invalid-email' }
body: { email: 'invalid-email' }
});
expect(status).toBe(400);
@ -226,52 +224,28 @@ test.describe('Profile API', () => {
```typescript
import { test, expect } from '@playwright/test';
test.describe('Profile Page', () => {
  test('should edit and save profile', async ({ page }) => {
    // Login first
    await page.goto('/login');
    await page.getByLabel('Email').fill('test@example.com');
    await page.getByLabel('Password').fill('password123');
    await page.getByRole('button', { name: 'Sign in' }).click();

    // Navigate to profile
    await page.goto('/profile');

    // Edit profile
    await page.getByRole('button', { name: 'Edit Profile' }).click();
    await page.getByLabel('Name').fill('Updated Name');
    await page.getByRole('button', { name: 'Save' }).click();

    await expect(page.getByText('Profile updated')).toBeVisible();
  });
});
```
TEA generates additional E2E tests for display, validation errors, etc. based on acceptance criteria.
#### Implementation Checklist
TEA also provides an implementation checklist:
@ -400,18 +374,13 @@ Run `*test-design` before `*atdd` for better results:
*atdd # Generate tests based on design
```
### MCP Enhancements (Optional)

If you have MCP servers configured (`tea_use_mcp_enhancements: true`), TEA can use them during `*atdd`.

**Note:** ATDD is for features that don't exist yet, so recording mode (verify selectors with live UI) only applies if you have skeleton/mockup UI already implemented. For typical ATDD (no UI yet), TEA infers selectors from best practices.
See [Enable MCP Enhancements](/docs/how-to/customization/enable-tea-mcp-enhancements.md) for setup.
### Focus on P0/P1 Scenarios
@ -444,43 +413,6 @@ TEA generates deterministic tests by default:
Don't modify these patterns - they prevent flakiness!
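The deterministic idea generalizes: wait on a condition, never on a fixed sleep. A minimal, framework-free sketch of the pattern (illustrative; in Playwright specs prefer the built-in web-first assertions such as `expect(locator).toBeVisible()`):

```typescript
// Poll a condition with a bounded deadline instead of a blind waitForTimeout
async function pollUntil<T>(
  fn: () => T | Promise<T>,
  done: (value: T) => boolean,
  { timeoutMs = 5000, intervalMs = 100 } = {}
): Promise<T> {
  const deadline = Date.now() + timeoutMs;
  for (;;) {
    const value = await fn();
    if (done(value)) return value; // succeed as soon as the condition holds
    if (Date.now() > deadline) throw new Error(`Condition not met within ${timeoutMs}ms`);
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```

The test finishes the instant the condition is true, and fails with a clear timeout message instead of hanging or passing by luck.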
## Common Issues
### Tests Don't Fail Initially
**Problem:** Tests pass on first run but feature doesn't exist.
**Cause:** Tests are hitting wrong endpoints or checking wrong things.
**Solution:** Review generated tests - ensure they match your feature requirements.
### Too Many Tests Generated
**Problem:** TEA generated 50 tests for a simple feature.
**Cause:** Didn't specify priorities or scope.
**Solution:** Be specific:
```
Generate ONLY:
- P0 scenarios (2-3 tests)
- Happy path for API
- One E2E test for full flow
```
### Selectors Are Fragile
**Problem:** E2E tests use brittle selectors (CSS, XPath).
**Solution:** Use MCP recording mode or specify accessible selectors:
```
Use accessible locators:
- getByRole()
- getByLabel()
- getByText()
Avoid CSS selectors
```
## Related Guides
- [How to Run Test Design](/docs/how-to/workflows/run-test-design.md) - Plan before generating
@ -489,6 +421,7 @@ Avoid CSS selectors
## Understanding the Concepts
- [Testing as Engineering](/docs/explanation/philosophy/testing-as-engineering.md) - **Why TEA generates quality tests** (foundational)
- [Risk-Based Testing](/docs/explanation/tea/risk-based-testing.md) - Why P0 vs P3 matters
- [Test Quality Standards](/docs/explanation/tea/test-quality-standards.md) - What makes tests good
- [Network-First Patterns](/docs/explanation/tea/network-first-patterns.md) - Avoiding flakiness


@ -221,7 +221,7 @@ testWithAuth.describe('Profile API', () => {
const { status, body } = await apiRequest({
method: 'PATCH',
path: '/api/profile',
body: { name: 'Updated Name', bio: 'Test bio' },
headers: { Authorization: `Bearer ${authToken}` }
}).validateSchema(ProfileSchema); // Chained validation
@ -233,7 +233,7 @@ testWithAuth.describe('Profile API', () => {
const { status, body } = await apiRequest({
method: 'PATCH',
path: '/api/profile',
body: { email: 'invalid-email' },
headers: { Authorization: `Bearer ${authToken}` }
});
@ -250,58 +250,31 @@ testWithAuth.describe('Profile API', () => {
- Automatic retry for 5xx errors
- Less boilerplate (no manual `await response.json()` everywhere)
#### E2E Tests (`tests/e2e/profile.spec.ts`):
```typescript
import { test, expect } from '@playwright/test';
test.describe('Profile Management Workflow', () => {
  test('should edit profile', async ({ page }) => {
    // Login
    await page.goto('/login');
    await page.getByLabel('Email').fill('test@example.com');
    await page.getByLabel('Password').fill('password123');
    await page.getByRole('button', { name: 'Sign in' }).click();

    // Edit profile
    await page.goto('/profile');
    await page.getByRole('button', { name: 'Edit Profile' }).click();
    await page.getByLabel('Name').fill('New Name');
    await page.getByRole('button', { name: 'Save' }).click();

    // Verify success
    await expect(page.getByText('Profile updated')).toBeVisible();
    await expect(page.getByText('New Name')).toBeVisible();
  });
});
```
TEA generates additional tests for validation, edge cases, etc. based on priorities.
#### Fixtures (`tests/support/fixtures/profile.ts`):
**Vanilla Playwright:**
@ -505,7 +478,7 @@ Compare against:
TEA supports component testing using framework-appropriate tools:
| Your Framework | Component Testing Tool | Tests Location |
| -------------- | ------------------------------ | ----------------------------------------- |
| **Cypress** | Cypress Component Testing | `tests/component/` |
| **Playwright** | Vitest + React Testing Library | `tests/component/` or `src/**/*.test.tsx` |
@ -568,25 +541,14 @@ Don't duplicate that coverage
TEA will analyze existing tests and only generate new scenarios.
### MCP Enhancements (Optional)

If you have MCP servers configured (`tea_use_mcp_enhancements: true`), TEA can use them during `*automate` for:

- **Healing mode:** Fix broken selectors, update assertions, enhance with trace analysis
- **Recording mode:** Verify selectors with live browser, capture network requests
No prompts - TEA uses MCPs automatically when available. See [Enable MCP Enhancements](/docs/how-to/customization/enable-tea-mcp-enhancements.md) for setup.
### Generate Tests Incrementally
@ -662,21 +624,11 @@ We already have these tests:
Generate tests for scenarios NOT covered by those files
```
### MCP Enhancements for Better Selectors

If you have MCP servers configured, TEA verifies selectors against live browser. Otherwise, TEA generates accessible selectors (`getByRole`, `getByLabel`) by default.
Setup: Answer "Yes" to MCPs in BMad installer + configure MCP servers in your IDE. See [Enable MCP Enhancements](/docs/how-to/customization/enable-tea-mcp-enhancements.md).
## Related Guides
@ -686,6 +638,7 @@ Or use MCP recording mode for verified selectors.
## Understanding the Concepts
- [Testing as Engineering](/docs/explanation/philosophy/testing-as-engineering.md) - **Why TEA generates quality tests** (foundational)
- [Risk-Based Testing](/docs/explanation/tea/risk-based-testing.md) - Why prioritize P0 over P3
- [Test Quality Standards](/docs/explanation/tea/test-quality-standards.md) - What makes tests good
- [Fixture Architecture](/docs/explanation/tea/fixture-architecture.md) - Reusable test patterns


@ -662,7 +662,7 @@ Assess categories incrementally, not all at once.
- [How to Run Trace](/docs/how-to/workflows/run-trace.md) - Gate decision complements NFR
- [How to Run Test Review](/docs/how-to/workflows/run-test-review.md) - Quality complements NFR
- [Run TEA for Enterprise](/docs/how-to/enterprise/use-tea-for-enterprise.md) - Enterprise workflow
## Understanding the Concepts

View File

@ -63,7 +63,7 @@ TEA will ask where requirements are defined.
**Options:**
| Source | Example | Best For |
| --------------- | ----------------------------- | ---------------------- |
| **Story file** | `story-profile-management.md` | Single story coverage |
| **Test design** | `test-design-epic-1.md` | Epic coverage |
| **PRD** | `PRD.md` | System-level coverage |
@ -114,7 +114,7 @@ TEA generates a comprehensive traceability matrix.
## Coverage Summary
| Metric | Count | Percentage |
| ---------------------- | ----- | ---------- |
| **Total Requirements** | 15 | 100% |
| **Full Coverage** | 11 | 73% |
| **Partial Coverage** | 3 | 20% |
@ -123,7 +123,7 @@ TEA generates a comprehensive traceability matrix.
### By Priority
| Priority | Total | Covered | Percentage |
| -------- | ----- | ------- | ----------------- |
| **P0** | 5 | 5 | 100% ✅ |
| **P1** | 6 | 5 | 83% ⚠️ |
| **P2** | 3 | 1 | 33% ⚠️ |
@ -224,7 +224,7 @@ TEA generates a comprehensive traceability matrix.
### Critical Gaps (Must Fix Before Release)
| Gap | Requirement | Priority | Risk | Recommendation |
| --- | ------------------------ | -------- | ---- | ------------------- |
| 1 | Bio field not tested | P0 | High | Add E2E + API tests |
| 2 | Avatar upload not tested | P0 | High | Add E2E + API tests |
@ -235,7 +235,7 @@ TEA generates a comprehensive traceability matrix.
### Non-Critical Gaps (Can Defer)
| Gap | Requirement | Priority | Risk | Recommendation |
| --- | ------------------------- | -------- | ---- | ------------------- |
| 3 | Profile export not tested | P2 | Low | Add in v1.3 release |
**Estimated Effort:** 2 hours
@ -297,7 +297,7 @@ test('should update bio via API', async ({ apiRequest, authToken }) => {
const { status, body } = await apiRequest({
method: 'PATCH',
path: '/api/profile',
body: { bio: 'Updated bio' },
body: { bio: 'Updated bio' },
headers: { Authorization: `Bearer ${authToken}` }
});
@ -443,7 +443,7 @@ TEA makes evidence-based gate decision and writes to separate file.
## Coverage Analysis
| Priority | Required Coverage | Actual Coverage | Status |
| -------- | ----------------- | --------------- | --------------------- |
| **P0** | 100% | 100% | ✅ PASS |
| **P1** | 90% | 100% | ✅ PASS |
| **P2** | 50% | 33% | ⚠️ Below (acceptable) |
@ -457,7 +457,7 @@ TEA makes evidence-based gate decision and writes to separate file.
## Quality Metrics
| Metric | Threshold | Actual | Status |
| ------------------ | --------- | ------ | ------ |
| P0/P1 Coverage | >95% | 100% | ✅ |
| Test Quality Score | >80 | 84 | ✅ |
| NFR Status | PASS | PASS | ✅ |
@ -502,7 +502,7 @@ TEA makes evidence-based gate decision and writes to separate file.
TEA uses deterministic rules when decision_mode = "deterministic":
| P0 Coverage | P1 Coverage | Overall Coverage | Decision |
| ----------- | ----------- | ---------------- | ---------------------------- |
| 100% | ≥90% | ≥80% | **PASS** ✅ |
| 100% | 80-89% | ≥80% | **CONCERNS** ⚠️ |
| <100% | Any | Any | **FAIL** ❌ |
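The rules above can be expressed as a small pure function. A sketch (illustrative; combinations not shown in the table fall through to FAIL here, which may differ from TEA's full rule set):

```typescript
type GateDecision = 'PASS' | 'CONCERNS' | 'FAIL';

// Coverage percentages in, deterministic gate decision out
function decideGate(p0: number, p1: number, overall: number): GateDecision {
  if (p0 < 100) return 'FAIL'; // any P0 gap fails the gate
  if (p1 >= 90 && overall >= 80) return 'PASS';
  if (p1 >= 80 && overall >= 80) return 'CONCERNS';
  return 'FAIL';
}
```

Because the function is pure, the same coverage numbers always produce the same gate decision, which is the point of deterministic mode.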
@ -684,7 +684,7 @@ Track improvement over time:
## Coverage Trend
| Date | Epic | P0/P1 Coverage | Quality Score | Status |
| ---------- | -------- | -------------- | ------------- | -------------- |
| 2026-01-01 | Baseline | 45% | - | Starting point |
| 2026-01-08 | Epic 1 | 78% | 72 | Improving |
| 2026-01-15 | Epic 2 | 92% | 84 | Near target |


@ -290,137 +290,84 @@ burn-in:
- if: $CI_PIPELINE_SOURCE == "merge_request_event"
```
#### Burn-In Testing

**Option 1: Classic Burn-In (Playwright Built-In)**

```json
{
  "scripts": {
    "test:burn-in": "playwright test --repeat-each=5 --retries=0"
  }
}
```

**How it works:**

- Runs every test 5 times
- Fails if any iteration fails
- Detects flakiness before merge

**Use when:** Small test suite where running everything multiple times is cheap

---

**Option 2: Smart Burn-In (Playwright Utils)**

If `tea_use_playwright_utils: true`:

**scripts/burn-in-changed.ts:**

```typescript
import { runBurnIn } from '@seontechnologies/playwright-utils/burn-in';

async function main() {
  await runBurnIn({
    configPath: 'playwright.burn-in.config.ts',
    baseBranch: 'main'
  });
}

main().catch(console.error);
```

**playwright.burn-in.config.ts:**

```typescript
import type { BurnInConfig } from '@seontechnologies/playwright-utils/burn-in';

const config: BurnInConfig = {
  skipBurnInPatterns: ['**/config/**', '**/*.md', '**/*types*'],
  burnInTestPercentage: 0.3, // run 30% of affected tests
  burnIn: { repeatEach: 5, retries: 0 }
};

export default config;
```

**package.json:**

```json
{
  "scripts": {
    "test:burn-in": "tsx scripts/burn-in-changed.ts"
  }
}
```

**How it works:**

- Git diff analysis (runs only affected tests)
- Smart filtering (skips configs, docs, type-only changes)
- Volume control (runs 30% of affected tests)
- Each selected test runs 5 times

**Use when:** Large test suite that benefits from intelligent selection

---

**Comparison:**

| Feature | Classic Burn-In | Smart Burn-In (PW-Utils) |
|---------|----------------|--------------------------|
| Changed 1 file | Runs all 500 tests × 5 = 2500 runs | Runs 3 affected tests × 5 = 15 runs |
| Config change | Runs all tests | Skips (no tests affected) |
| Type change | Runs all tests | Skips (no runtime impact) |
| Setup | Zero config | Requires config file |

**Recommendation:** Start with classic burn-in (simple); upgrade to smart burn-in (faster) as the suite grows.
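The burn-in idea itself is small enough to sketch: repeat a test with retries disabled and treat any failure as flakiness. This is a hypothetical illustration only; in practice use `--repeat-each` (classic) or the playwright-utils runner (smart).

```typescript
// Hypothetical sketch of burn-in semantics: run a test N times with no
// retries; a single failure marks it flaky. Illustration only.
function burnIn(
  runTest: () => boolean,
  iterations = 5
): { passed: number; flaky: boolean } {
  let passed = 0;
  for (let i = 0; i < iterations; i++) {
    if (runTest()) passed++; // no retry on failure — flakiness must surface
  }
  return { passed, flaky: passed !== iterations };
}
```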
### 6. Configure Secrets


@ -15,9 +15,9 @@ Complete reference for all TEA (Test Architect) configuration options.
**Purpose:** Project-specific configuration values for your repository
**Created By:** BMad installer
**Status:** Typically gitignored (user-specific values)
**Usage:** Edit this file to change TEA behavior in your project
@ -155,17 +155,7 @@ Would you like to enable MCP enhancements in Test Architect?
}
```
**Configuration:** Refer to your AI agent's documentation for MCP server setup instructions.
**Example (Enable):**
```yaml
@ -364,9 +354,9 @@ tea_use_playwright_utils: true
tea_use_mcp_enhancements: false
```
**Individual config (typically gitignored):**
```yaml
# _bmad/bmm/config.yaml (user adds to .gitignore)
user_name: John Doe
user_skill_level: expert
tea_use_mcp_enhancements: true # Individual preference
@ -407,7 +397,7 @@ _bmad/bmm/config.yaml.example # Template for team
package.json # Dependencies
```
**Recommended for .gitignore:**
```
_bmad/bmm/config.yaml # User-specific values
.env # Secrets
@ -420,8 +410,7 @@ _bmad/bmm/config.yaml # User-specific values
```markdown
## Setup
1. Install BMad
2. Copy config template:
cp _bmad/bmm/config.yaml.example _bmad/bmm/config.yaml
@ -558,48 +547,48 @@ npx playwright install
## Configuration Examples
### Recommended Setup (Full Stack)
```yaml
# _bmad/bmm/config.yaml
project_name: my-project
user_skill_level: beginner # or intermediate/expert
output_folder: _bmad-output
tea_use_playwright_utils: true # Recommended
tea_use_mcp_enhancements: true # Recommended
```
**Why recommended:**
- Playwright Utils: Production-ready fixtures and utilities
- MCP enhancements: Live browser verification, visual debugging
- Together: The three-part stack (see [Testing as Engineering](/docs/explanation/philosophy/testing-as-engineering.md))
**Prerequisites:**
```bash
npm install -D @seontechnologies/playwright-utils
# Configure MCP servers in IDE (see Enable MCP Enhancements guide)
```
**Best for:** Everyone (beginners learn good patterns from day one)
---
### Minimal Setup (Learning Only)
```yaml
# _bmad/bmm/config.yaml
project_name: my-project
user_skill_level: intermediate
output_folder: _bmad-output
tea_use_playwright_utils: false
tea_use_mcp_enhancements: false
```
**Best for:**
- First-time TEA users (keep it simple initially)
- Quick experiments
- Learning basics before adding integrations
---
### Advanced Setup (All Features)
```yaml
# _bmad/bmm/config.yaml
project_name: enterprise-app
user_skill_level: expert
output_folder: docs/testing
planning_artifacts: docs/planning
implementation_artifacts: docs/implementation
project_knowledge: docs
tea_use_playwright_utils: true
tea_use_mcp_enhancements: true
```
**Prerequisites:**
```bash
npm install -D @seontechnologies/playwright-utils
# Configure MCP servers in IDE
```
**Best for:**
- Enterprise projects
- Teams with established testing practices
- Projects needing advanced TEA features
**Note:** Can enable integrations later as you learn
---
@ -622,7 +611,7 @@ output_folder: ../../_bmad-output/web
# apps/api/_bmad/bmm/config.yaml
project_name: api-service
output_folder: ../../_bmad-output/api
tea_use_playwright_utils: false # Using vanilla Playwright only
```
---
@ -642,9 +631,9 @@ planning_artifacts: _bmad-output/planning-artifacts
implementation_artifacts: _bmad-output/implementation-artifacts
project_knowledge: docs
# TEA Configuration (Recommended: Enable both for full stack)
tea_use_playwright_utils: true # Recommended - production-ready utilities
tea_use_mcp_enhancements: true # Recommended - live browser verification
# Languages
communication_language: english
@ -668,74 +657,6 @@ document_output_language: english
---
## FAQ
### When should I enable playwright-utils?
**Enable if:**
- You're using or planning to use `@seontechnologies/playwright-utils`
- You want production-ready fixtures and utilities
- Your team benefits from standardized patterns
- You need utilities like `apiRequest`, `authSession`, `networkRecorder`
**Skip if:**
- You're just learning TEA (keep it simple)
- You have your own fixture library
- You don't need the utilities
### When should I enable MCP enhancements?
**Enable if:**
- You want live browser verification during test generation
- You're debugging complex UI issues
- You want exploratory mode in `*test-design`
- You want recording mode in `*atdd` for accurate selectors
**Skip if:**
- You're new to TEA (adds complexity)
- You don't have MCP servers configured
- Your tests work fine without it
### Can I change config after installation?
**Yes!** Edit `_bmad/bmm/config.yaml` anytime.
**Important:** Start fresh chat after config changes (TEA loads config at workflow start).
### Can I have different configs per branch?
**Yes:**
```bash
# feature branch
git checkout feature/new-testing
# Edit config for experimentation
vim _bmad/bmm/config.yaml
# main branch
git checkout main
# Config reverts to main branch values
```
Because the config is typically gitignored, each branch can keep its own values.
### How do I share config with team?
**Use config.yaml.example:**
```bash
# Commit template
cp _bmad/bmm/config.yaml _bmad/bmm/config.yaml.example
git add _bmad/bmm/config.yaml.example
git commit -m "docs: add BMad config template"
```
**Team members copy template:**
```bash
cp _bmad/bmm/config.yaml.example _bmad/bmm/config.yaml
# Edit with their values
```
---
## See Also
### How-To Guides


@ -167,11 +167,10 @@ Feature flag testing, contract testing, and API testing patterns.
### Playwright-Utils Integration
Patterns for using `@seontechnologies/playwright-utils` package (9 utilities).
| Fragment | Description | Key Topics |
|----------|-------------|-----------|
| [api-request](../../../src/modules/bmm/testarch/knowledge/api-request.md) | Typed HTTP client, schema validation, retry logic | API calls, HTTP, validation |
| [auth-session](../../../src/modules/bmm/testarch/knowledge/auth-session.md) | Token persistence, multi-user, API/browser authentication | Auth patterns, session management |
| [network-recorder](../../../src/modules/bmm/testarch/knowledge/network-recorder.md) | HAR record/playback, CRUD detection for offline testing | Offline testing, network replay |
@ -181,9 +180,8 @@ Patterns for using `@seontechnologies/playwright-utils` package (11 utilities).
| [file-utils](../../../src/modules/bmm/testarch/knowledge/file-utils.md) | CSV/XLSX/PDF/ZIP handling with download support | File validation, exports |
| [burn-in](../../../src/modules/bmm/testarch/knowledge/burn-in.md) | Smart test selection with git diff analysis | CI optimization, selective testing |
| [network-error-monitor](../../../src/modules/bmm/testarch/knowledge/network-error-monitor.md) | Auto-detect HTTP 4xx/5xx errors during tests | Error monitoring, silent failures |
**Note:** `fixtures-composition` is listed under Architecture & Fixtures (general Playwright `mergeTests` pattern, applies to all fixtures).
**Used in:** `*framework` (if `tea_use_playwright_utils: true`), `*atdd`, `*automate`, `*test-review`, `*ci`
@ -211,51 +209,11 @@ risk-governance,Risk Governance,Risk scoring and gate decisions,risk;governance,
- `tags` - Searchable tags (semicolon-separated)
- `fragment_file` - Relative path to fragment markdown file
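Given that layout, loading the index takes only a few lines. The sketch below is hypothetical (not part of TEA); the full column order is assumed from the row shown above, and a real parser should also handle quoted fields.

```typescript
// Hypothetical sketch: parsing tea-index.csv rows into objects.
// Column order is assumed (id, name, description, tags, fragment_file).
interface FragmentEntry {
  id: string;
  name: string;
  description: string;
  tags: string[];
  fragmentFile: string;
}

function parseTeaIndex(csv: string): FragmentEntry[] {
  const [, ...rows] = csv.trim().split('\n'); // skip header row
  return rows.map((row) => {
    const [id, name, description, tags, fragmentFile] = row.split(',');
    return { id, name, description, tags: tags.split(';'), fragmentFile };
  });
}
```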
## Fragment Locations
**Fragment Location:** `src/modules/bmm/testarch/knowledge/` (all 33 fragments in single directory)
**Manifest:**
```
src/modules/bmm/testarch/tea-index.csv
```
---
## Workflow Fragment Loading
@ -371,207 +329,6 @@ Each TEA workflow loads specific fragments:
---
## Key Fragments Explained
### test-quality.md
**What it covers:**
- Execution time limits (< 1.5 minutes)
- Test size limits (< 300 lines)
- No hard waits (waitForTimeout banned)
- No conditionals for flow control
- No try-catch for flow control
- Assertions must be explicit
- Self-cleaning tests for parallel execution
**Why it matters:**
This is the Definition of Done for test quality. All TEA workflows reference this for quality standards.
**Code examples:** 12+
---
### network-first.md
**What it covers:**
- Intercept-before-navigate pattern
- Wait for network responses, not timeouts
- HAR capture for offline testing
- Deterministic waiting strategies
**Why it matters:**
Prevents 90% of test flakiness. Core pattern for reliable E2E tests.
**Code examples:** 15+
---
### fixture-architecture.md
**What it covers:**
- Build pure functions first
- Wrap in framework fixtures second
- Compose with mergeTests
- Enable reusability and testability
**Why it matters:**
Foundation of scalable test architecture. Makes utilities reusable and unit-testable.
**Code examples:** 10+
---
### risk-governance.md
**What it covers:**
- Risk scoring matrix (Probability × Impact)
- Risk categories (TECH, SEC, PERF, DATA, BUS, OPS)
- Gate decision rules (PASS/CONCERNS/FAIL/WAIVED)
- Mitigation planning
**Why it matters:**
Objective, data-driven release decisions. Removes politics from quality gates.
**Code examples:** 5
---
### test-priorities-matrix.md
**What it covers:**
- P0: Critical path (100% coverage required)
- P1: High value (90% coverage target)
- P2: Medium value (50% coverage target)
- P3: Low value (20% coverage target)
- Execution ordering (P0 → P1 → P2 → P3)
**Why it matters:**
Focus testing effort on what matters. Don't waste time on P3 edge cases.
**Code examples:** 8
---
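The priority targets above reduce to a simple threshold check — a hypothetical sketch of how a coverage report might apply them (names are illustrative, not TEA's implementation):

```typescript
// Hypothetical sketch of the coverage targets listed above.
type Priority = 'P0' | 'P1' | 'P2' | 'P3';

const COVERAGE_TARGETS: Record<Priority, number> = {
  P0: 100, // critical path — full coverage required
  P1: 90,
  P2: 50,
  P3: 20,
};

function meetsTarget(priority: Priority, actualPercent: number): boolean {
  return actualPercent >= COVERAGE_TARGETS[priority];
}
```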
## Using Fragments Directly
### As a Learning Resource
Read fragments to learn patterns:
```bash
# Read fixture architecture pattern
cat src/modules/bmm/testarch/knowledge/fixture-architecture.md
# Read network-first pattern
cat src/modules/bmm/testarch/knowledge/network-first.md
```
### As Team Guidelines
Use fragments as team documentation:
```markdown
# Team Testing Guidelines
## Fixture Architecture
See: src/modules/bmm/testarch/knowledge/fixture-architecture.md
All fixtures must follow the pure function → fixture wrapper pattern.
## Network Patterns
See: src/modules/bmm/testarch/knowledge/network-first.md
All tests must use network-first patterns. No hard waits allowed.
```
### As Code Review Checklist
Reference fragments in code review:
```markdown
## PR Review Checklist
- [ ] Tests follow test-quality.md standards (no hard waits, < 300 lines)
- [ ] Selectors follow selector-resilience.md (prefer getByRole)
- [ ] Network patterns follow network-first.md (wait for responses)
- [ ] Fixtures follow fixture-architecture.md (pure functions)
```
## Fragment Statistics
**Total Fragments:** 33
**Total Size:** ~600 KB (all fragments combined)
**Average Fragment Size:** ~18 KB
**Largest Fragment:** contract-testing.md (~28 KB)
**Smallest Fragment:** burn-in.md (~7 KB)
**By Category:**
- Architecture & Fixtures: 4 fragments
- Data & Setup: 3 fragments
- Network & Reliability: 4 fragments
- Test Execution & CI: 3 fragments
- Quality & Standards: 5 fragments
- Risk & Gates: 3 fragments
- Selectors & Timing: 3 fragments
- Feature Flags & Patterns: 3 fragments
- Playwright-Utils Integration: 8 fragments
**Note:** Statistics may drift with updates. All fragments are in the same `knowledge/` directory.
## Contributing to Knowledge Base
### Adding New Fragments
1. Create fragment in `src/modules/bmm/testarch/knowledge/`
2. Follow existing format (Principle, Rationale, Pattern Examples)
3. Add to `tea-index.csv` with metadata
4. Update workflow instructions to load fragment
5. Test with TEA workflow
### Updating Existing Fragments
1. Edit fragment markdown file
2. Update `tea-index.csv` if metadata changes (line count, examples)
3. Test with affected workflows
4. Ensure no breaking changes to patterns
### Fragment Quality Standards
**Good fragment:**
- Principle stated clearly
- Rationale explains why
- Multiple pattern examples with code
- Good vs bad comparisons
- Self-contained (links to other fragments minimal)
**Example structure:**
```markdown
# Fragment Name
## Principle
[One sentence - what is this pattern?]
## Rationale
[Why use this instead of alternatives?]
## Pattern Examples
### Example 1: Basic Usage
[Code example with explanation]
### Example 2: Advanced Pattern
[Code example with explanation]
## Anti-Patterns
### Don't Do This
[Bad code example]
[Why it's bad]
## Related Patterns
- [Other fragment](../other-fragment.md)
```
## Related
- [TEA Overview](/docs/explanation/features/tea-overview.md) - How knowledge base fits in TEA


@ -51,9 +51,7 @@ You've just explored the features we'll test!
### Install BMad Method
Install BMad (see installation guide for latest command).
When prompted:
- **Select modules:** Choose "BMM: BMad Method" (press Space, then Enter)
@ -272,7 +270,7 @@ test('should mark todo as complete', async ({ page, apiRequest }) => {
const { status, body: todo } = await apiRequest({
method: 'POST',
path: '/api/todos',
body: { title: 'Complete tutorial' }
});
expect(status).toBe(201);
@ -393,7 +391,7 @@ See [How to Run ATDD](/docs/how-to/workflows/run-atdd.md) for the TDD approach.
**Explanation** (understanding-oriented):
- [TEA Overview](/docs/explanation/features/tea-overview.md) - Complete TEA capabilities
- [Testing as Engineering](/docs/explanation/philosophy/testing-as-engineering.md) - **Why TEA exists** (problem + solution)
- [Risk-Based Testing](/docs/explanation/tea/risk-based-testing.md) - How risk scoring works
**Reference** (quick lookup):